[openstack-dev] [nova] Nova API meeting cancelled this week

2014-05-22 Thread Christopher Yeoh
Hi,

Given that a lot of the regular attendees are still travelling back
from the summit or recovering from jet lag, I'm cancelling the regular Nova
API meeting this week. We'll meet up again next week at the regular
time and place.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project

2014-05-22 Thread A, Keshava
Hi,

1. When a group policy is applied across all the VMs in a group, say deny a
specific TCP port (port 80), but for some special reason one of those VMs
needs to ALLOW that TCP port, how is this handled?
When deny is applied to the group, does this framework take care of breaking
that group rule out per VM automatically, applying the ALLOW exception where
it is needed and keeping Deny for the rest?

2. Can there be a hierarchy of group policies?



Thanks & regards,
Keshava.A

-Original Message-
From: Michael Grima [mailto:mike.r.gr...@gmail.com] 
Sent: Wednesday, May 21, 2014 5:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research 
Thesis Applicability to the OpenStack Project

Sumit,

Unfortunately, I missed the IRC meeting on FWaaS (got the timezones screwed 
up...).

However, in the meantime, please review this section of my thesis on the 
OpenStack project:
https://docs.google.com/document/d/1DGhgtTY4FxYxOqhKvMSV20cIw5WWR-gXbaBoMMMA-f0/edit?usp=sharing

Please let me know if it is missing anything, or contains any wrong 
information.  Also, if you have some time, please review the questions I have 
asked in the previous messages.

Thank you,

--
Mike Grima, RHCE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-22 Thread Naveed Ahmad
I have DevStack-based OpenStack clouds on two different machines. I am
interested in migrating an instance from DevStack cloud A to cloud B.

I tried to copy the instance files from cloud A (
../../data/nova/instace--ab-cd-ef )  to cloud B. I am also copying the related
instance metadata from its database to the destination cloud.

After the migration, when I restart the services on the destination cloud,
nova-compute gives an error.

Regards







On Wed, May 21, 2014 at 10:41 PM, Aditya Thatte aditya.that...@gmail.com wrote:

 Hi,

 What kind of errors are you getting? Can you give more details about what
 you have tried?


 On Wed, May 21, 2014 at 11:02 PM, Naveed Ahmad 12msccsnah...@seecs.edu.pk
  wrote:


 Hi community,

 I need some help from you people. OpenStack provides hot (live) and cold
 (offline) migration between clusters/compute nodes. However, I am interested
 in migrating a virtual machine from one OpenStack cloud to another. Is that
 possible? This is inter-cloud VM migration, not inter-cluster or inter-compute
 migration.

 I need help and suggestions regarding VM migration. I tried to manually
 migrate a VM from one OpenStack cloud to another, but with no success yet.

 Please guide me!

 Regards






 --
 Aditya Thatte
 BrainChamber Research



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-22 Thread Isaku Yamahata
On Wed, May 21, 2014 at 10:54:03AM +0300,
Dmitry mey...@gmail.com wrote:

 HI,

Hi.

 I would be happy to get an explanation of the difference between Adv
 Service Management
 (https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit) from
 the Service VM

The above document is stale.
The current ones are:
https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0NPAO4-60XFY/edit?pli=1
https://docs.google.com/document/d/1ZWDDTjwhIUedyipkDztM0_nBYgfCEP9Q77hhn1ZduCA/edit?pli=1#
https://wiki.openstack.org/wiki/ServiceVM

Anyway, how did you find that link? I'd like to remove stale links.


 and NFVO orchestration
 (http://www.ietf.org/proceedings/88/slides/slides-88-opsawg-6.pdf) from
 NFV MANO.
 The most interesting part is service provider management as part of the
 service catalog.

ServiceVM corresponds to (a part of) the NFV orchestrator and the VNF manager,
especially life-cycle management and configuration of VMs/services.
I think the above document and the NFV documents only give a high-level
statement of the components, right?

thanks,

 
 Thanks,
 Dmitry
 
 
 On Wed, May 21, 2014 at 9:01 AM, Isaku Yamahata 
 isaku.yamah...@gmail.com wrote:
 
  Hi, I will also attend the NFV IRC meeting.
 
  thanks,
  Isaku Yamahata
 
  On Tue, May 20, 2014 at 01:23:22PM -0700,
  Stephen Wong s3w...@midokura.com wrote:
 
   Hi,
  
   I am part of the ServiceVM team and I will attend the NFV IRC
  meetings.
  
   Thanks,
   - Stephen
  
  
   On Tue, May 20, 2014 at 8:59 AM, Chris Wright chr...@sous-sol.org
  wrote:
  
* balaj...@freescale.com (balaj...@freescale.com) wrote:
  -Original Message-
  From: Kyle Mestery [mailto:mest...@noironetworks.com]
  Sent: Tuesday, May 20, 2014 12:19 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design
  summit
 
  On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk
  
  wrote:
   I think the Service VM discussion resolved itself in a way that
   reduces the problem to a form of NFV - there are standing issues
using
   VMs for services, orchestration is probably not a responsibility
  that
   lies in Neutron, and as such the importance is in identifying the
   problems with the plumbing features of Neutron that cause
   implementation difficulties.  The end result will be that VMs
   implementing tenant services and implementing NFV should be much
  the
   same, with the addition of offering a multitenant interface to
  Openstack users on the tenant service VM case.
  
   Geoff Arnold is dealing with the collating of information from
  people
   that have made the attempt to implement service VMs.  The problem
   areas should fall out of his effort.  I also suspect that the key
   points of NFV that cause problems (for instance, dealing with
  VLANs
   and trunking) will actually appear quite high up the service VM
  list
as
  well.
   --
  There is a weekly meeting for the Service VM project [1], I hope
  some
   representatives from the NFV sub-project can make it to this
  meeting
and
  participate there.
 [P Balaji-B37839] I agree with Kyle, so that we will have enough
  synch
between Service VM and NFV goals.
   
Makes good sense.  Will make sure to get someone there.
   
thanks,
-chris
   
 
 
  --
  Isaku Yamahata isaku.yamah...@gmail.com
 


-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-22 Thread Naveed Ahmad
Hi Diego,

Thanks for sharing the steps for VM migration from the customer end to your
cloud. Well, I am not going to propose a new idea for VM migration; I am using
VM migration for a PoC of my research idea.

I have a few questions for you:


1. Can we use the suspend/pause feature instead of a snapshot for saving the
VM state?
2. How are you managing the VM metadata (such as the instance details from the
nova and cinder databases)?



Is it possible for you to share the script? I need this VM migration feature
in OpenStack for the PoC only.
Thanks again for your reply.


Regards
Naveed




On Wed, May 21, 2014 at 10:47 PM, Diego Parrilla Santamaría 
diego.parrilla.santama...@gmail.com wrote:

 Hi Naveed,

 we have customers running VMs in their own Private Cloud who are
 migrating to our new Public Cloud offering. To be honest I would love to
 have a better way to do it, but this is how we do it. We have developed a tiny
 script that basically performs the following actions:

 1) Take a snapshot of the VM from the source Private Cloud
 2) Halt the source VM (optional, but good for state consistency)
  3) Download the snapshot from source Private Cloud
 4) Upload the snapshot to target Public Cloud
 5) Start a new VM using the uploaded image in the target public cloud
 6) Allocate a floating IP and attach it to the VM
 7) Change DNS to point to the new floating IP
 8) Perform some cleanup processes (delete source VM, deallocate its
 floating IP, delete snapshot from source...)

 A bit rudimentary, but it works right away as long as your VM does not have
 attached volumes.

 Still, I would love to hear of a sexier and more direct way to do it.
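
For illustration only, here is a rough sketch of steps 1-5 above using
python-novaclient and python-glanceclient. Every credential, endpoint and name
is a placeholder, exact client signatures may vary by version, and polling
granularity, error handling, floating IPs and the cleanup steps 6-8 are left
out.

import time

from glanceclient import Client as GlanceClient
from novaclient.v1_1 import client as nova_client

# Placeholders only -- substitute real credentials and endpoints.
USER, PASSWORD, TENANT = 'demo', 'secret', 'demo'
SRC_AUTH_URL = 'http://source:5000/v2.0'
DST_AUTH_URL = 'http://target:5000/v2.0'
SRC_GLANCE, DST_GLANCE = 'http://source:9292', 'http://target:9292'
SRC_TOKEN, DST_TOKEN = 'src-keystone-token', 'dst-keystone-token'

src_nova = nova_client.Client(USER, PASSWORD, TENANT, SRC_AUTH_URL)
dst_nova = nova_client.Client(USER, PASSWORD, TENANT, DST_AUTH_URL)

# 1-2) Snapshot the source VM, stopping it first for state consistency.
server = src_nova.servers.find(name='my-vm')
src_nova.servers.stop(server)
image_id = src_nova.servers.create_image(server, 'my-vm-migration')
while src_nova.images.get(image_id).status != 'ACTIVE':
    time.sleep(5)

# 3) Download the snapshot from the source cloud's glance.
src_glance = GlanceClient('1', SRC_GLANCE, token=SRC_TOKEN)
with open('my-vm-migration.img', 'wb') as img:
    for chunk in src_glance.images.data(image_id):
        img.write(chunk)

# 4) Upload the snapshot to the target cloud's glance.
dst_glance = GlanceClient('1', DST_GLANCE, token=DST_TOKEN)
with open('my-vm-migration.img', 'rb') as img:
    new_image = dst_glance.images.create(name='my-vm-migration',
                                         disk_format='qcow2',
                                         container_format='bare',
                                         data=img)

# 5) Boot a new VM from the uploaded image in the target cloud.
flavor = dst_nova.flavors.find(name='m1.small')
dst_nova.servers.create('my-vm', new_image.id, flavor)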

 Regards
 Diego

  --
 Diego Parrilla
 *CEO*
 *www.stackops.com* | diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




 On Wed, May 21, 2014 at 7:32 PM, Naveed Ahmad 
 12msccsnah...@seecs.edu.pk wrote:


 Hi community,

 I need some help from you people. OpenStack provides hot (live) and cold
 (offline) migration between clusters/compute nodes. However, I am interested
 in migrating a virtual machine from one OpenStack cloud to another. Is that
 possible? This is inter-cloud VM migration, not inter-cluster or inter-compute
 migration.

 I need help and suggestions regarding VM migration. I tried to manually
 migrate a VM from one OpenStack cloud to another, but with no success yet.

 Please guide me!

 Regards







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] BP Review for J1

2014-05-22 Thread Serg Melikyan
We will hold a BP review for milestone 1 of the Juno release at 15:00 UTC on
May 23.

Please, join us and vote for features that you are interested in!
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-22 Thread Kevin Benton
3. OpenStack itself (its own compute node, L3/routing and controller) should
have five-nines reliability.

Can you elaborate on this a little more? Reliability is pretty deployment
specific (e.g. database chosen, number of cluster members, etc). I'm sure
nobody would disagree that OpenStack should be reliable, but without
specific issues to address it doesn't really give us a clear target.

Thanks,
Kevin Benton


On Wed, May 21, 2014 at 11:24 PM, A, Keshava keshav...@hp.com wrote:

 Hi

 In my opinion the first and foremost requirement for NFV (which comes from
 the carrier-class world) is 99.999% (five nines) reliability.
 If we want the OpenStack architecture to scale to carrier class, below are
 the basic things we need to address.

 1. There should be a framework from OpenStack to support five-nines
 reliability for service/tenant VMs (for example, carrier-class NAT, SIP,
 HLR/VLR and BRAS services).

 2. They should also be capable of in-service software upgrade (ISSU) without
 service disruption.

 3. OpenStack itself (its own compute node, L3/routing and controller) should
 have five-nines reliability.

 If we can provide such infrastructure for NFV, then we can think of adding
 the rest of the requirements.

 Let me know other/NFV people's opinions on this.



 Thanks & regards,
 Keshava.A

 -Original Message-
 From: Kyle Mestery [mailto:mest...@noironetworks.com]
 Sent: Monday, May 19, 2014 11:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

 On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:
  I think the Service VM discussion resolved itself in a way that
  reduces the problem to a form of NFV - there are standing issues using
  VMs for services, orchestration is probably not a responsibility that
  lies in Neutron, and as such the importance is in identifying the
  problems with the plumbing features of Neutron that cause
  implementation difficulties.  The end result will be that VMs
  implementing tenant services and implementing NFV should be much the
  same, with the addition of offering a multitenant interface to Openstack
 users on the tenant service VM case.
 
  Geoff Arnold is dealing with the collating of information from people
  that have made the attempt to implement service VMs.  The problem
  areas should fall out of his effort.  I also suspect that the key
  points of NFV that cause problems (for instance, dealing with VLANs
  and trunking) will actually appear quite high up the service VM list as
 well.
  --
 There is a weekly meeting for the Service VM project [1], I hope some
 representatives from the NFV sub-project can make it to this meeting and
 participate there.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings/ServiceVM

  Ian.
 
 
 
  On 18 May 2014 20:01, Steve Gordon sgor...@redhat.com wrote:
 
  - Original Message -
   From: Sumit Naiksatam sumitnaiksa...@gmail.com
  
   Thanks for initiating this conversation. Unfortunately I was not
   able to participate during the summit on account of overlapping
 sessions.
   As has been identified in the wiki and etherpad, there seem to be
   obvious/potential touch points with the advanced services'
   discussion we are having in Neutron [1]. Our sub team, and I, will
   track and participate in this NFV discussion. Needless to say, we
   are definitely very keen to understand and accommodate the NFV
 requirements.
  
   Thanks,
   ~Sumit.
   [1] https://wiki.openstack.org/wiki/Neutron/AdvancedServices
 
  Yes, there are definitely touch points across a number of different
  existing projects and sub teams. The consensus seemed to be that
  while a lot of people in the community have been working in
  independent groups on advancing the support for NFV use cases in
  OpenStack we haven't necessarily been coordinating our efforts
  effectively. Hopefully having a cross-project sub team will allow us to
 do this.
 
  In the BoF sessions we started adding relevant *existing* blueprints
  on the wiki page, we probably need to come up with a more robust way
  to track these from launchpad :). Further proposals will no doubt
  need to be built out from use cases as we discuss them further:
 
  https://wiki.openstack.org/wiki/Meetings/NFV
 
  Feel free to add any blueprints from the Advanced Services efforts
  that were missed!
 
  Thanks,
 
  Steve
 
 
 
 
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Clint Byrum
Excerpts from Tom Fifield's message of 2014-05-21 21:39:22 -0700:
 On 22/05/14 11:06, Kyle Mestery wrote:
  On Wed, May 21, 2014 at 5:06 PM, Tom Fifield t...@openstack.org wrote:
  On 22/05/14 05:48, James E. Blair wrote:
 
  Tom Fifield t...@openstack.org writes:
 
  May I ask, will the old names have some kind of redirect to the new
  names?
 
 
  Of course you may ask!  And it's a great question!  But sadly the answer
  is no.  Unfortunately, Gerrit's support for renaming projects is not
  very good (which is why we need to take downtime to do it).
 
  I'm personally quite fond of stable URLs.  However, these started as an
  experiment so we were bound to get some things wrong (and will
  probably continue to do so) and it's better to try to fix them early.
 
 
  This is a really poor outcome.
 
  Can we delay the migration until we have some time to think about the
  communication strategy?
 
  At the least, I'd suggest a delay for renaming neutron-specs until until
  after the peak of the Juno blueprint work is done. Say in ~3 weeks time.
 
  I tend to agree with James that we should do this early and take the
  bullet on renaming now. The process for adding new Neutron specs is
  outlined here [1], and this will be updated once the repository is
  renamed. In addition, I'm working on adding/updating some Neutron wiki
  pages around the Neutron development process, and the specs repo will
  be highlighted there once that's done. It would be good to have the
  renaming done before then.
 
 ... and how would you propose we communicate this to the users we've 
 been asking to do blueprint review specifically during this early 
 period? We can't exactly send them an email saying "sorry, the link we 
 mentioned earlier is now wrong" :)
 
 What would you gain from doing it this week instead of later in the month?
 
 We're really trying to engage users to help out with the spec review 
 process, but it seems they weren't taken into account at all when 
 planning this change. Seems like a bad precedent to set for our first 
 experiment.

You didn't also ask them to subscribe to the users and/or operators
mailing lists? I would think at least one of those two lists would be
quite important for users to stay in the loop about the effort.

Any large scale movement will be limited in scale by the scale of its
mass communication.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-22 Thread Dmitry
Hi Isaku,
Thank you for the updated link. I'm not sure where I got the previous one
from, probably from a direct Google search.
If we're talking about NFV MANO, it's very important to keep the NFVO and
the VNFM as separate services, where the VNFM might be (and probably will be)
supplied jointly with a vendor-specific VNF.
In addition, it's possible that VNFC components will not be allowed to be
placed on the same machine (anti-affinity rules).
In NFV terminology, we need a new OpenStack service which (from what I
understood from the document you sent) is called Adv Service and is
responsible for:
1) NFVO - using Nova to provision new Service VMs and Neutron to
establish connectivity and service chaining
2) Service catalog - to accommodate multiple VNF services. Question: the
same problem exists with Trove, which needs a catalog for multiple concrete
DB implementations. Do you know which solution they will take for Juno?
3) Infrastructure for VNFM plugins - which will be called by the NFVO to
decide where a Service VM should be placed and which LSIs should be
provisioned on these Service VMs.

This flow is more or less what was stated by the NFV committee.

Please let me know what you think about this and how far that is from what
you planned for Service VM.
In addition, I would be happy to know if Service VM will be incubated for the
Juno release.

Thank you very much,
Dmitry



On Thu, May 22, 2014 at 9:28 AM, Isaku Yamahata isaku.yamah...@gmail.com wrote:

 On Wed, May 21, 2014 at 10:54:03AM +0300,
 Dmitry mey...@gmail.com wrote:

  HI,

 Hi.

  I would be happy to get an explanation of the difference between Adv
  Service Management
  (https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit) from
  the Service VM

 The above document is stale.
 The current ones are:

 https://docs.google.com/document/d/1pwFVV8UavvQkBz92bT-BweBAiIZoMJP0NPAO4-60XFY/edit?pli=1

 https://docs.google.com/document/d/1ZWDDTjwhIUedyipkDztM0_nBYgfCEP9Q77hhn1ZduCA/edit?pli=1#
 https://wiki.openstack.org/wiki/ServiceVM

 Anyway, how did you find that link? I'd like to remove stale links.


  and NFVO orchestration
  (http://www.ietf.org/proceedings/88/slides/slides-88-opsawg-6.pdf) from
  NFV MANO.
  The most interesting part is service provider management as part of the
  service catalog.

 ServiceVM corresponds to (a part of) the NFV orchestrator and the VNF manager,
 especially life-cycle management and configuration of VMs/services.
 I think the above document and the NFV documents only give a high-level
 statement of the components, right?

 thanks,

 
  Thanks,
  Dmitry
 
 
  On Wed, May 21, 2014 at 9:01 AM, Isaku Yamahata 
  isaku.yamah...@gmail.com wrote:
 
   Hi, I will also attend the NFV IRC meeting.
  
   thanks,
   Isaku Yamahata
  
   On Tue, May 20, 2014 at 01:23:22PM -0700,
   Stephen Wong s3w...@midokura.com wrote:
  
Hi,
   
I am part of the ServiceVM team and I will attend the NFV IRC
   meetings.
   
Thanks,
- Stephen
   
   
On Tue, May 20, 2014 at 8:59 AM, Chris Wright chr...@sous-sol.org
   wrote:
   
 * balaj...@freescale.com (balaj...@freescale.com) wrote:
   -Original Message-
   From: Kyle Mestery [mailto:mest...@noironetworks.com]
   Sent: Tuesday, May 20, 2014 12:19 AM
   To: OpenStack Development Mailing List (not for usage
 questions)
   Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design
   summit
  
   On Mon, May 19, 2014 at 1:44 PM, Ian Wells 
 ijw.ubu...@cack.org.uk
   
   wrote:
I think the Service VM discussion resolved itself in a way
 that
reduces the problem to a form of NFV - there are standing
 issues
 using
VMs for services, orchestration is probably not a
 responsibility
   that
lies in Neutron, and as such the importance is in
 identifying the
problems with the plumbing features of Neutron that cause
implementation difficulties.  The end result will be that VMs
implementing tenant services and implementing NFV should be
 much
   the
same, with the addition of offering a multitenant interface
 to
   Openstack users on the tenant service VM case.
   
Geoff Arnold is dealing with the collating of information
 from
   people
that have made the attempt to implement service VMs.  The
 problem
areas should fall out of his effort.  I also suspect that
 the key
points of NFV that cause problems (for instance, dealing with
   VLANs
and trunking) will actually appear quite high up the service
 VM
   list
 as
   well.
--
   There is a weekly meeting for the Service VM project [1], I
 hope
   some
    representatives from the NFV sub-project can make it to this
   meeting
 and
   participate there.
  [P Balaji-B37839] I agree with Kyle, so that we will have enough
   synch
 between Service VM and NFV goals.

 Makes good sense.  Will make sure to get someone there.

Re: [openstack-dev] [Murano] BP Review for J1

2014-05-22 Thread Dmitry
Any possibility to move the session to Monday (May 26)?


On Thu, May 22, 2014 at 9:42 AM, Serg Melikyan smelik...@mirantis.com wrote:

 We will hold a BP review for milestone 1 of the Juno release at 15:00 UTC
 on May 23.

 Please, join us and vote for features that you are interested in!
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Lucas Alvares Gomes
On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
devananda@gmail.com wrote:
 I'd like to bring up the topic of drivers which, for one reason or another,
 are probably never going to have third party CI testing.

 Take for example the iBoot driver proposed here:
   https://review.openstack.org/50977

 I would like to encourage this type of driver as it enables individual
 contributors, who may be using off-the-shelf or home-built systems, to
 benefit from Ironic's ability to provision hardware, even if that hardware
 does not have IPMI or another enterprise-grade out-of-band management
 interface. However, I also don't expect the author to provide a full
 third-party CI environment, and as such, we should not claim the same level
 of test coverage and consistency as we would like to have with drivers in
 the gate.

+1


 As it is, Ironic already supports out-of-tree drivers. A python module that
 registers itself with the appropriate entrypoint will be made available if
 the ironic-conductor service is configured to load that driver. For what
 it's worth, I recall Nova going through a very similar discussion over the
 last few cycles...

 So, why not just put the driver in a separate library on github or
 stackforge?

I would like to have these drivers within the Ironic tree under a
separate directory (e.g. /drivers/staging/, not exactly the same but kinda
like what Linux has in its tree [1]). The advantages of having them in
the main Ironic tree are that it makes it easier for other people to
access the drivers, makes it easy to detect and fix changes in the Ironic
code that would affect a driver, allows sharing code with the other drivers
and adding unit tests, and provides a common place for development.
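
As a point of reference, here is a minimal sketch of how an out-of-tree driver
plugs into the entrypoint mechanism Devananda describes above. The package and
class names are made up, and it assumes the 'ironic.drivers' entrypoint
namespace and the enabled_drivers conductor option as they exist today.

# setup.py of a hypothetical out-of-tree driver package (illustrative only).
import setuptools

setuptools.setup(
    name='ironic-example-iboot-driver',
    version='0.1.0',
    packages=['ironic_example_driver'],
    entry_points={
        # ironic-conductor loads drivers from this namespace when they are
        # listed in its enabled_drivers configuration option.
        'ironic.drivers': [
            'example_iboot = ironic_example_driver.driver:ExampleIBootDriver',
        ],
    },
)

With that installed and example_iboot added to enabled_drivers, the driver is
loadable without living in the Ironic tree; the staging directory proposal is
about where such code lives and who maintains it, not about this mechanism.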

We can create some rules for people who are thinking about submitting
their driver under the staging directory: it should _not_ be a place
where you just throw the code and forget it; we would need to agree
that the person submitting the code will also babysit it. We could also
use the same acceptance process as for all the other drivers that want to
be in the Ironic tree, which is going through ironic-specs.

Thoughts?

[1] http://lwn.net/Articles/285599/

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Need help to start contribute to Neutron

2014-05-22 Thread Li, Chen
Hi list,

I have using Openstack/Neutron for a while.
And now I hope I can do some contributions too.
But neutron is too complicated, I don't know where to start.

In IRC, iwamoto suggested me to work with developer doc team.
That sounds like a good idea.
But I still doesn't know what/where I should start with.
Can someone help me ?

Or just told me anything you think I can work on ?

Thanks.
-chen

I used to working on CentOS, installing neutron directly using command yum 
install neutron-xxx.
Now, I have already download neutron code, and run successfully run unittest.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-22 Thread Salvatore Orlando
On 22 May 2014 03:47, Mandeep Dhami dh...@noironetworks.com wrote:

 Hi Salvatore:

 Comments inline as well

 This is a bit obscure to me. I read it as you're hinting the core team or
 part of it has double standards.
 In that case I would invite you to clarify.

 ​Last week, I had requested reference to a design document for neutron
 refactoring work. As this is a critical change, I wanted to understand what
 was being proposed (and hopefully contribute to it's implementation). The
 only feedback I received was about the etherpad of the meeting, and I was
 hoping for more. I re-requested it again, yesterday, just in case my
 request had got buried under all summit related email that everyone was so
 busy with last week.


 The update from Sean seems to suggest to me that we need blueprints only
 if the public API changes, and not for design changes that are internal to
 Neutron.


I don't think so. For instance we have a lot of blueprints for internal
agent changes. A blueprint is required for any new feature or any change
which affects in a non trivial way internal or external interfaces or the
architecture of a component.


 My comments were meant to extol the virtue of creating a design
 document, and reviewing all significant design changes, even for updates
 that do not change the public API. In that context, I was trying to make a
 point that reviews should be prioritized by importance/impact to the
 project and not based on any other criteria (like, say a delta difference
 from previous spec - something which would be triggered by a simple name
 change - and something that thought that I had seen in a review just that
 very day).

 When I sent that email I did not know who was working it, there was no
 aspersion cast or implied. The only mention of core was in something
 as core and central to neutron as refactoring ... and I never mentioned
 the core team. If you are working on it, I apologize if it came across that
 way to you. At the same time, I am not comfortable with the conclusion that
 you drew about my intentions. I am happy to address this face-to-face if
 that helps (or hangout to hangout) - I am not that adroit with emails and I
 worry that my response may again be misunderstood.


There is no need to make a big deal of this. I just wrote that this could
have been read in that way, not that it was that way.
I asked for a clarification, and you did it. That's it!



 ​ ​
 I am not entirely sure what kind of v3 APIs you're referring to.

 ​My understanding was that there was a proposal for a V3 API. But based on
 Mark Mclain's response to this thread that is probably now slated for
 K-release.


Sorry - my bad. I mixed up L3 and v3; unfortunately despite visiting
several doctors I still can't resist the urge of getting drunk while
reading the neutron mailing list!
It has been discussed, but given the delicate nature of the topic, and the
number of high priority items already on the plate, it's really unlikely to
end up in the Juno roadmap.
I think, however, that discussion on shaping the API should be encouraged.



 ​ ​
 I don't see a mandatory relationship between pecan and taskflow.

 ​I don't see a relationship either. I, simplistically, put all the
 following issues in the refactoring bucket:
 1. Paste + stuff = Pecan
 2. V2 = V3
 3. Taskflow
 4. Cleaning up of distributes locks


Well, they might all be refactoring in a way - but they're definitely
different buckets. Also, the second item actually consists of two
big tickets, and neither of them should be, in my opinion, in the Juno
roadmap: one side there is the v3 tenant API, and on the other side the v3
plugin interface. I think Mark presented some code snippets for this as
well.
If I understand the fourth topic correctly, the issue is probably that
Neutron's database management is:
- not deadlock-safe
- prone to lock wait timeout errors due to eventlet switches during
transactions
- not friendly to active/active replication
For this particular topic I think both a short-term fix and a longer-term
refactoring are being discussed. A few patches concerning lock wait timeout
errors have already been merged.

 ​

 No relationship between them is necessarily implied (either in design or
 in timing). I figured that any refactoring effort will have to design
 solution for each of them and then weigh priority based on
 effort/risk/impact/value of each of those changes. I suspect that there are
 more - that was just what I understood to be urgent or important.

 As stated before, any change which impacts the current architecture of the
software, or changes any interface in a non-trivial way, be it internal or
external, should be accompanied by a specification that will be available
for review on neutron-specs repo.




 ​ ​
 There was a session discussing the possibility of having a task based
 interaction between the front end and the backend - taskflow would be a
 candidate task manager solution there. But from

Re: [openstack-dev] [Storyboard] [UX] Atlanta Storyboard UX Summary

2014-05-22 Thread Thierry Carrez
Angus Salkeld wrote:
 On 21/05/14 13:32 -0700, Michael Krotscheck wrote:
 I've compiled a list of takeaways from the one-on-one ux sessions that we
 did at the Atlanta Summit. These comments aren't exhaustive - they're the
 big items that came up over and over again. If you'd like to review the
 videos yourself or peruse my notes, please drop me a private line:
 They're
 rather large and I don't want to share my google drive with the world.

 *Users were extremely confused about Tasks vs. Stories*
 
 +1 me too.
 
 Do they have to be different things (stories and tasks)? Would it be
 better to just have
 issues that could be made children of other issues?

That is how most other trackers do it. This is a very project-centric
approach, and it doesn't solve the OpenStack-specific challenges. For
example, Launchpad blueprints have blueprints that can be made
children of other blueprints. The inability to have a single
overarching feature that affect multiple code repositories is why we
can't use it anymore. If tracking tasks across project boundaries was
not a critical coordination challenge (created by OpenStack unique
scope) we would not have to create our own tool, we would just adopt
Jira or Bugzilla.

The specific problem we are trying to solve with Storyboard is task
tracking across multiple projects: you have an OpenStack-wide problem (the
story) and describe the project-specific tasks that will solve it.
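
Purely as an illustration of the concept (this is not Storyboard's actual
schema or API), the shape being described is roughly:

# One cross-project "story", several per-project "tasks"; names invented.
story = {
    'title': 'Support IPv6 end to end',
    'tasks': [
        {'project': 'openstack/nova', 'title': 'Boot on a v6-only network'},
        {'project': 'openstack/neutron', 'title': 'Router advertisements on tenant networks'},
        {'project': 'openstack/tempest', 'title': 'Add an IPv6 scenario test'},
    ],
}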

Now I understand it can be confusing, especially for people without a
Launchpad Bugs background. Maybe we can find a better term for tasks
(work items ? steps ? commits ?), maybe we need to
educate/document more, maybe the UI should make it easier to grasp as a
concept. But given that our primary audience is OpenStack developers, I
suspect that once they understand the concept it's not that much of an
issue. And since most of them have to use Launchpad Bugs now (which has
a similar concept), the learning curve is not too steep...

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need help to start contribute to Neutron

2014-05-22 Thread Rossella Sblendido
Hello chen,

You are already doing great; using IRC and the mailing list is a good start.

Let me give you some links that can help you ramping up:

Neutron development:
https://wiki.openstack.org/wiki/NeutronDevelopment

If you want to contribute by fixing some bugs, I suggest you have a look at
the low-hanging fruit:
https://wiki.openstack.org/wiki/NeutronStarterBugs

We are using Gerrit, read here how to set it up:
https://wiki.openstack.org/wiki/Gerrit_Workflow

If you are more interested into documentation please follow this guide:
https://wiki.openstack.org/wiki/Documentation/HowTo
I am not an expert here, maybe you should contact Edgar Magana, irc: emagana

Feel free to contact me on irc, my nick is rossella-s

thanks for your interest in Neutron!

Rossella

On 05/22/2014 10:56 AM, Li, Chen wrote:

 Hi list,

  

 I have using Openstack/Neutron for a while.

 And now I hope I can do some contributions too.

 But neutron is too complicated, I don't know where to start.

  

 In IRC, iwamoto suggested me to work with developer doc team.

 That sounds like a good idea.

 But I still doesn't know what/where I should start with.

 Can someone help me ?

  

 Or just told me anything you think I can work on ?

  

 Thanks.

 -chen

  

 I used to working on CentOS, installing neutron directly using command
 yum install neutron-xxx.

 Now, I have already download neutron code, and run successfully run
 unittest.

  

  

  

  




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Nikita Konovalov for storyboard-core

2014-05-22 Thread Thierry Carrez
James E. Blair wrote:
 Nikita Konovalov has been reviewing changes to both storyboard and
 storyboard-webclient for some time.  He is the second most active
 storyboard reviewer and is very familiar with the codebase (having
 written a significant amount of the server code).  He regularly provides
 good feedback, understands where the project is heading, and in general
 is in accord with the current core team, which has been treating his +1s
 as +2s for a while now.
 
 Please respond with +1s or concerns, and if the consensus is in favor, I
 will add him to the group.
 
 Nikita, thank you very much for your work!

+1

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Beijing Meetup on May 24th

2014-05-22 Thread Hua ZZ Zhang

Hi ALL,

The 2nd OpenStack Beijing meetup is open for registration. You are welcome
to join us!

Title: OpenStack Beijing Meetup on May 24th

http://www.meetup.com/China-OpenStack-User-Group/events/182091162/

Theme: OpenStack Atlanta Summit

Guests:
Jeffrey Yang, Jian Hua Geng, Edward Zhang, Vincent Hou from IBM
Jouston Huang, Xin Xu from UnitedStack

Overview:
The OpenStack Atlanta Summit has just concluded. During this meetup, we
invite guests who attended the Atlanta Summit to share their impressions and
thoughts about the summit with people who were not able to attend it.

Our guests will share impressions gained from the summit, specifically around
Docker, Juniper OpenContrail, Ceph, Ubuntu Juju networking weaknesses, etc.
They will also share the experiences and lessons learned from their talks and
design sessions at the summit.

Agenda:
2:00-2:15 Sign in
2:15-3:15 Round table sharing: Summit briefing (by all guests)
3:15-3:30 Tea break
3:30-4:00 Small talk: How to make a speech at an OpenStack Summit
4:00-4:30 Small talk: Lessons learned while leading an OpenStack design
session (by Edward Zhang, Vincent Hou)
4:30-5:00 Social time
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Thierry Carrez
James E. Blair wrote:
 openstack/oslo-specs - openstack/common-libraries-specs

I understand (and agree with) the idea that -specs repositories should
be per-program.

That said, you could argue that oslo is a shorthand for common
libraries and is the code name for the *program* (rather than bound to
any specific project). Same way infra is shorthand for
infrastructure. So I'm not 100% convinced this one is necessary...

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-22 Thread Irena Berezovsky
+1 to attend,

Regards,
Irena

-Original Message-
From: Collins, Sean [mailto:sean_colli...@cable.comcast.com] 
Sent: Wednesday, May 21, 2014 5:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

Hi,

The session that we had on the Quality of Service API extension was well 
attended - I would like to keep the momentum going by proposing a weekly IRC 
meeting.

How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

--
Sean M. Collins

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-22 Thread Matthew Gilliard
We (HP Helion) are completely fine with a change that reduces server load
and makes things more predictable.  It would have been very hard to rely
heavily on the old behaviour.  Thanks for adding me to the patch, too, Matt.

  Matthew


On Wed, May 21, 2014 at 4:40 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 On Tue, May 20, 2014 at 10:15 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
  Between patch set 1 and patch set 3 here [1] we have different solutions
 to
  the same issue, which is if you don't specify a spacing value for
 periodic
  tasks then they run whenever the periodic task processor runs, which is
  non-deterministic and can be staggered if some tasks don't complete in a
  reasonable amount of time.
 
  I'm bringing this to the mailing list to see if there are more opinions
 out
  there, especially from operators, since patch set 1 changes the default

 You may get more feedback from operators on the main openstack list.
 I'm still catching up on backlog after the summit, so apologies if
 you've already posted there.

 Doug

  behavior to have the spacing value be the DEFAULT_INTERVAL (hard-coded 60
  seconds) versus patch set 3 which makes that behavior configurable so the
  admin can set global default spacing for tasks, but defaults to the
 current
  behavior of running every time if not specified.
 
  I don't like a new config option, but I'm also not crazy about changing
  existing behavior without consensus.
 
  [1] https://review.openstack.org/#/c/93767/
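
For readers who have not seen the mechanism being discussed, here is a minimal
sketch of how periodic task spacing looks in code. The module path follows the
oslo-incubator copy that nova carried around this time, so treat the exact
import as approximate.

from nova.openstack.common import periodic_task


class ExampleManager(periodic_task.PeriodicTasks):

    @periodic_task.periodic_task
    def _unspaced_task(self, context):
        # No spacing given: runs on every pass of the periodic task
        # processor, the non-deterministic behaviour the patch sets
        # above either change or make configurable.
        pass

    @periodic_task.periodic_task(spacing=60)
    def _spaced_task(self, context):
        # Runs at most once every 60 seconds.
        pass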
 
  --
 
  Thanks,
 
  Matt Riedemann
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-php-sdk] Testing proposal

2014-05-22 Thread Jamie Hannaford
Hey everyone,

Based on our conversation in irc yesterday, I’m detailing a few proposals for 
the way we handle testing. Before that, I want to establish the terminology so 
we’re all on the same page. In a software project like this, there are 
generally three types of tests: unit tests, integration tests, end-to-end tests.


End-to-end testing is the full run-through of a service operation as an 
end-user would interact with it. You pass in your identity parameters, 
instantiate a service like Swift, and execute an operation. A real HTTP request 
is sent over the wire, and a real HTTP response from the server is received. In 
other words, it’s a live network test of all components from end to end. Right 
now, any time we’re communicating with the server API in our test suite, it’s 
an end-to-end test. There doesn’t need to be many of these - just enough to 
test that our API works for end-users. E2E tests will typically be slow (due to 
network calls) - but this does not matter.


Integration testing is like end-to-end testing except no network connections 
happen. All it does is test the integration between modules of the 
application. So if we want to test Swift operation - we’d instantiate a context 
object with identity parameters, then instantiate a Swift service object, and 
then after we’ve done the setup, finally test the operation object. In an 
integration test, the flow of execution happens like it would an end-to-end 
test, but all we’re testing is that different components work together. This is 
useful for ensuring that contracts between interfaces are being satisfied, etc. 
I don’t think we need to worry about writing these.


Unit testing is very different from both of the above. Instead, you test 
extremely small “units” of behavior in a particular class. Each test needs to 
be fully isolated and have only 1 responsibility (i.e. test one unit). The 
class you’re testing should not collaborate with real objects; instead, you 
need to pass in mocks. So, if we’re unit testing a Swift operation, instead of 
using a real service or transport client - we mock them and use the mock 
objects in our tests. If our tested class invokes methods on this mock, we also 
need to explicitly define how it does so. For example, if we’re testing a 
method that calls `$this-client-foo()` internally and expects a response, we 
need to explicitly tell the mocked client object to return a value when its 
“foo” method is called. We can also be more granular and strict: we can say 
that the method should only be called with certain arguments, that the method 
should be called x number of times. With unit tests, you are defining and 
testing communication promises. The point of doing this is that you’re testing 
HOW your object communicates in a precise way. Because they’re isolated, there 
should be hundreds of unit tests, and they should be very quick.


Here’s an example of how you’d mock for unit tests in phpunit: 
https://gist.github.com/jamiehannaford/ad7f389466ac5dcafe7a
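
For comparison only, here is the same idea expressed with Python's mock library
(unittest.mock in Python 3); the service and method names are purely
illustrative and not part of any real SDK.

import unittest
from unittest import mock


class ObjectService(object):
    """Toy service that delegates HTTP calls to an injected client."""

    def __init__(self, client):
        self.client = client

    def get_container(self, name):
        return self.client.get('/containers/' + name)


class ObjectServiceTest(unittest.TestCase):

    def test_get_container_calls_client_once(self):
        client = mock.Mock()
        client.get.return_value = {'name': 'backups'}  # stub the response

        service = ObjectService(client)
        result = service.get_container('backups')

        self.assertEqual({'name': 'backups'}, result)
        # The communication promise: called exactly once, with exactly
        # these arguments, and nothing went over the network.
        client.get.assert_called_once_with('/containers/backups')


if __name__ == '__main__':
    unittest.main()

The same shape carries over to the phpunit mocks in the gist: stub the return
value, run the unit, then assert how the collaborator was called.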

There was a proposal made that we should just inject a HTTP client that doesn’t 
make real transactions - but that is not feasible because it goes against what 
a unit test is. If you were to do this, your tested class would be executing 
UNMOCKED methods against a REAL object - making as many calls as it wants. It 
would not be isolated, and for that reason it would not be a unit test. It 
would be an integration test because you’re forcing your tested class to 
interact like it would in the wild. Instead, you should mock every collaborator 
and explicitly define which calls are made against them by your tested class.

There are amazing libraries out there like Prophecy and phpspec which make
mocking a whole lot easier and more natural - but I assume nobody wants to move
away from phpunit…


Proposal going forward

So, here are my proposals for our current library:

1. Refactor our unit tests to use mocking instead of real HTTP/network calls. 
If a class relies on dependencies or collaborators, they will need to be mocked.

2. As services are added (Swift, Nova, Keystone), end-to-end tests are added 
for each. We’d therefore ensure that our SDK is interacting with the real API 
as expected.

3. Never use the @depends annotation on unit tests, because it makes them 
tightly coupled with each other and brittle. A unit test is supposed to be 
completely autonomous and independent - it should never depend on the output of 
another test.

4. Use the “setUp” and “tearDown” helper methods to easily set up test fixtures

5. In our source code, we need to make use of Dependency Injection AS MUCH AS 
POSSIBLE because it’s easier to test. If we don’t (choosing to directly 
instantiate objects in our code), it introduces tight coupling and is extremely 
hard to mock.



Does anybody have any major disagreements with my above proposal?


Jamie



Jamie Hannaford
Software Developer III - CH [experience Fanatical Support]

Tel:+41434303908

Re: [openstack-dev] [Storyboard] [UX] Atlanta Storyboard UX Summary

2014-05-22 Thread Sean Dague
On 05/22/2014 05:28 AM, Thierry Carrez wrote:
 Angus Salkeld wrote:
 On 21/05/14 13:32 -0700, Michael Krotscheck wrote:
 I've compiled a list of takeaways from the one-on-one ux sessions that we
 did at the Atlanta Summit. These comments aren't exhaustive - they're the
 big items that came up over and over again. If you'd like to review the
 videos yourself or peruse my notes, please drop me a private line:
 They're
 rather large and I don't want to share my google drive with the world.

 *Users were extremely confused about Tasks vs. Stories*

 +1 me too.

 Do they have to be different things (stories and tasks)? Would it be
 better to just have
 issues that could be made children of other issues?
 
 That is how most other trackers do it. This is a very project-centric
 approach, and it doesn't solve the OpenStack-specific challenges. For
 example, Launchpad blueprints have blueprints that can be made
 children of other blueprints. The inability to have a single
 overarching feature that affect multiple code repositories is why we
 can't use it anymore. If tracking tasks across project boundaries was
 not a critical coordination challenge (created by OpenStack unique
 scope) we would not have to create our own tool, we would just adopt
 Jira or Bugzilla.
 
 The specific problem we are trying to solve with Storyboard is task
 tracking across multiple projects. Have an openstack-wide problem and
 describe the project-specific tasks that will solve that problem.
 
 Now I understand it can be confusing, especially for people without a
 Launchpad Bugs background. Maybe we can find a better term for tasks
 (work items ? steps ? commits ?), maybe we need to
 educate/document more, maybe the UI should make it easier to grasp as a
 concept. But given that our primary audience is OpenStack developers, I
 suspect that once they understand the concept it's not that much of an
 issue. And since most of them have to use Launchpad Bugs now (which has
 a similar concept), the learning curve is not too steep...

It's worth noting, most (90%) of OpenStack developers aren't trying to
land or track features across projects. And realistically, in my
experience working code into different repositories, the blueprint / bug
culture between projects varies widely (what requires an artifact, how
big that artifact is, etc.).

So given that to a first order approximation, all Stories will only
impact a single project, it's important that in the majority case we
don't confuse people. I honestly like the idea of nested issues. It also
feels more like it could be a more gentle onboarding for folks in our
community.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-22 Thread Derek Higgins
On 21/05/14 22:56, James Slagle wrote:
 On Wed, May 21, 2014 at 4:37 PM, Jay Dobies jason.dob...@redhat.com wrote:
 Currently, there is the following in the template:



 Proposed change
 ===

 [snip]

 Alternatives
 

 [snip]

 Security impact
 ---



 The unit tests assert the top and second level sections are standard, so if
 I add a section at the same level as Alternatives under Proposed Change, the
 tests will fail. If I add a third level section using ^, they pass.

 The problem is that you can't add a ^ section under Proposed Change. Sphinx
 complains about a title level inconsistency since I'm skipping the second
 level and jumping to the third. But I can't add a second-level section
 directly under Proposed Change because it will break the unit tests that
 validate the structure.

 The proposed change is going to be one of the beefier sections of a spec, so
 not being able to subdivide it is going to make the documentation messy and
 removes the ability to link directly to a portion of a proposed change.

 I propose we add a section at the top of Proposed Change called Overview
 that will hold the change itself. That will allow us to use third level
 sections in the change itself while still having the first and second level
 section structure validated by the tests.

 I have no problem making the change to the templates, unit tests, and any
 existing specs (I don't think we have any yet), but before I go through
 that, I wanted to make sure there wasn't a major disagreement.

 
 I'm a bit ambivalent to be honest, but adding a section for Overview
 doesn't really do much IMO.  Just give an overview in the first couple
 of sentences under Proposed Change. If I go back and add an Overview
 section to my spec in review, I'm just going to slap everything in
 Proposed Change into one Overview section :).  To me, Work Items is
 where more of the details go (which does support arbitrary
 subsections with ^^^).
 
 In general though I think that the unit tests are too rigid and
 pedantic. Plus, having to go back and update old specs when we make
 changes to unit tests seems strange. No biggie right now, but we do
 have a couple of specs in review. Unless we write the unit tests to be
 backwards compatible. This just feels a bit like engineering just for
 the sake of it.  Maybe we need a spec on it :).
 
 I was a bit surprised to see that we don't have the Data Model section
 in our specs, and when I had one, unit tests failed. We actually do
 have data model stuff in Tuskar and our json structures in tripleo.

You can blame me for that: when I created the repository I took the nova
template and removed the sections I thought were not relevant; perhaps I
was a little too aggressive. I have no problem if we want to add any of
them back in.

Looks like these are the sections I removed:
Data model impact
REST API impact
Notifications impact

I'd obviously forgotten about Tuskar, sorry.


 
 Anyway, just my $0.02.
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Static file handling -- followup

2014-05-22 Thread Matthias Runge
On Tue, May 20, 2014 at 05:18:18PM +0200, Radomir Dopieralski wrote:
 Hello,
 
 this is a followup on the design session we had at the meeting about
 the handling of static files. You can see the etherpad from that session
 here: https://etherpad.openstack.org/p/juno-summit-horizon-static-files

 The JavaScript libraries unbundling:
 
 I'm packaging all the missing libraries, except for Angular.js, as
 XStatic packages:
 
 https://pypi.python.org/pypi/XStatic-D3
 https://pypi.python.org/pypi/XStatic-Hogan
 https://pypi.python.org/pypi/XStatic-JSEncrypt
 https://pypi.python.org/pypi/XStatic-QUnit
 https://pypi.python.org/pypi/XStatic-Rickshaw
 https://pypi.python.org/pypi/XStatic-Spin
 
 There is also a patch for unbundling JQuery:
 https://review.openstack.org/#/c/82516/
 And the corresponding global requirements for it:
 https://review.openstack.org/#/c/94337/

Awesome, thank you!

Looking at the change, that includes a not so ancient
version of jquery, but IMHO it's better to upgrade that to a more
up-to-date version as step -1?
That might be solved by adding jquery-migrate as well; the sanest
solution would be to fix our JavaScript code.
 
 The style files compilation:
 
 We are going to go with PySCSS compiler, plus django-pyscss. The
 proof-of-concept patch has been rebased and updated, and is waiting
 for your reviews: https://review.openstack.org/#/c/90371/
 It is also waiting for adding the needed libraries to the global
 requirements: https://review.openstack.org/#/c/94376/
 

Karma added as well.
Thank you for driving this effort! It's really worth it.

-- 
Matthias Runge mru...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dmitry Tantsur
On Thu, 2014-05-22 at 09:48 +0100, Lucas Alvares Gomes wrote:
 On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
 devananda@gmail.com wrote:
  I'd like to bring up the topic of drivers which, for one reason or another,
  are probably never going to have third party CI testing.
 
  Take for example the iBoot driver proposed here:
https://review.openstack.org/50977
 
  I would like to encourage this type of driver as it enables individual
  contributors, who may be using off-the-shelf or home-built systems, to
  benefit from Ironic's ability to provision hardware, even if that hardware
  does not have IPMI or another enterprise-grade out-of-band management
  interface. However, I also don't expect the author to provide a full
  third-party CI environment, and as such, we should not claim the same level
  of test coverage and consistency as we would like to have with drivers in
  the gate.
 
 +1
But we'll still expect unit tests that work via mocking their 3rd party
library (for example), right?
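
Something along these lines is what I have in mind; purely a sketch, and the
module, class and vendor-library hooks below are made up rather than taken
from the actual iBoot patch:

    import mock

    from ironic.tests import base


    class IBootPowerTestCase(base.TestCase):

        @mock.patch('ironic.drivers.modules.iboot._get_connection')
        def test_power_on_drives_vendor_library(self, mock_get_conn):
            connection = mock_get_conn.return_value
            connection.switch.return_value = True
            # ...call the driver's set_power_state() here, then assert that
            # the fake connection was driven the way the real hardware
            # would be, e.g.:
            # connection.switch.assert_called_once_with(relay_id, True)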

 
 
  As it is, Ironic already supports out-of-tree drivers. A python module that
  registers itself with the appropriate entrypoint will be made available if
  the ironic-conductor service is configured to load that driver. For what
  it's worth, I recall Nova going through a very similar discussion over the
  last few cycles...
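
For reference, registering such an out-of-tree driver boils down to an
entrypoint in the driver package's setup.py plus listing the driver name in
the conductor's enabled_drivers option. A minimal sketch, where the package
and class names are invented and only the 'ironic.drivers' namespace is the
real hook:

    import setuptools

    setuptools.setup(
        name='ironic-iboot-driver',
        packages=['ironic_iboot'],
        entry_points={
            'ironic.drivers': [
                'pxe_iboot = ironic_iboot.driver:PXEAndIBootDriver',
            ],
        },
    )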
 
  So, why not just put the driver in a separate library on github or
  stackforge?
 
 I would like to have these drivers within the Ironic tree under a
 separate directory (e.g. /drivers/staging/, not exactly the same but kinda
 like what Linux has in their tree[1]). The advantages of having it in
 the main Ironic tree are that it makes it easier for other people to
 access the drivers, easier to detect and fix changes in the Ironic code
 that would affect the driver, share code with the other drivers, add
 unit tests and provide a common place for development.
I do agree, that having these drivers in-tree would make major changes
much easier for us (see also above about unit tests).

 
 We can create some rules for people who are thinking about submitting
 their driver under the staging directory: it should _not_ be a place
 where you just throw the code and forget it, so we would need to agree
 that the person submitting the code will also babysit it. We could also
 use the same acceptance process for these drivers as for all the other
 drivers which want to be in the Ironic tree, which is going through
 ironic-specs.
+1

 
 Thoughts?
 
 [1] http://lwn.net/Articles/285599/
 
 Cheers,
 Lucas
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Deleting cluster in sahara horizon, 500 Error occured in nova.

2014-05-22 Thread cosmos cosmos
Hello, my name is Inhye Park from Samsung SDS.

I am a developer working on Nova and Sahara.

Recently our system was updated to Icehouse.

After deleting a cluster in Sahara, the VM termination fails with a 500 error.

But this problem does not occur with the nova CLI command.

It only occurs through Sahara.

This is the error log.

-

2014-05-22 20:06:42.631 25640 ERROR oslo.messaging.rpc.dispatcher [-]
Exception during message handling: Failed to terminate process 8756 with
SIGKILL: Device or resource busy

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher Traceback
(most recent call last):

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
133, in _dispatch_and_reply

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
incoming.message))

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
176, in _dispatch

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
return self._do_dispatch(endpoint, method, ctxt, args)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line
122, in _do_dispatch

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
result = getattr(endpoint, method)(ctxt, **new_args)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
payload)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line
68, in __exit__

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
six.reraise(self.type_, self.value, self.tb)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in wrapped

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
return f(self, context, *args, **kw)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 333, in
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
function(self, context, *args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 309, in
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher e,
sys.exc_info())

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line
68, in __exit__

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
six.reraise(self.type_, self.value, self.tb)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 296, in
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
return function(self, context, *args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2262, in
terminate_instance

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
do_terminate_instance(instance, bdms)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line
249, in inner

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
return f(*args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2260, in
do_terminate_instance

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
self._set_instance_error_state(context, instance['uuid'])

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line
68, in __exit__

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
six.reraise(self.type_, self.value, self.tb)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2250, in
do_terminate_instance

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher
reservations=reservations)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File
/usr/lib/python2.7/dist-packages/nova/hooks.py, line 103, in inner

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher rv =
f(*args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File

Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Sergey Lukjanov
IMO a separate dir in the project repo is a good approach (like the
contrib dir in Heat).

On Thu, May 22, 2014 at 3:13 PM, Dmitry Tantsur dtant...@redhat.com wrote:
 On Thu, 2014-05-22 at 09:48 +0100, Lucas Alvares Gomes wrote:
 On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
 devananda@gmail.com wrote:
  I'd like to bring up the topic of drivers which, for one reason or another,
  are probably never going to have third party CI testing.
 
  Take for example the iBoot driver proposed here:
https://review.openstack.org/50977
 
  I would like to encourage this type of driver as it enables individual
  contributors, who may be using off-the-shelf or home-built systems, to
  benefit from Ironic's ability to provision hardware, even if that hardware
  does not have IPMI or another enterprise-grade out-of-band management
  interface. However, I also don't expect the author to provide a full
  third-party CI environment, and as such, we should not claim the same level
  of test coverage and consistency as we would like to have with drivers in
  the gate.

 +1
 But we'll still expect unit tests that work via mocking their 3rd party
 library (for example), right?


 
  As it is, Ironic already supports out-of-tree drivers. A python module that
  registers itself with the appropriate entrypoint will be made available if
  the ironic-conductor service is configured to load that driver. For what
  it's worth, I recall Nova going through a very similar discussion over the
  last few cycles...
 
  So, why not just put the driver in a separate library on github or
  stackforge?

 I would like to have these drivers within the Ironic tree under a
 separate directory (e.g. /drivers/staging/, not exactly the same but kinda
 like what Linux has in their tree[1]). The advantages of having it in
 the main Ironic tree are that it makes it easier for other people to
 access the drivers, easier to detect and fix changes in the Ironic code
 that would affect the driver, share code with the other drivers, add
 unit tests and provide a common place for development.
 I do agree, that having these drivers in-tree would make major changes
 much easier for us (see also above about unit tests).


 We can create some rules for people who are thinking about submitting
 their driver under the staging directory: it should _not_ be a place
 where you just throw the code and forget it, so we would need to agree
 that the person submitting the code will also babysit it. We could also
 use the same acceptance process for these drivers as for all the other
 drivers which want to be in the Ironic tree, which is going through
 ironic-specs.
 +1


 Thoughts?

 [1] http://lwn.net/Articles/285599/

 Cheers,
 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Need help to start contribute to Neutron

2014-05-22 Thread Gary Kotton
Hi,
I would suggest the following:

 1.  Look at the bugs and see if there is any low hanging fruit - 
https://bugs.launchpad.net/neutron/+bugs?field.tag=low-hanging-fruit
 2.  Try and add some additional unit tests – pick a section of code that 
interests you and try and see that it has some good code coverage with the unit 
tests
 3.  Go over the blueprints and see if there is something that interests you – 
if so, ask the guys driving it if you can help

Good luck.
Thanks
Gary

From: Li, Chen chen...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, May 22, 2014 11:56 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Need help to start contribute to Neutron

Hi list,

I have been using OpenStack/Neutron for a while.
And now I hope I can make some contributions too.
But Neutron is too complicated, and I don’t know where to start.

In IRC, iwamoto suggested that I work with the developer doc team.
That sounds like a good idea.
But I still don’t know what/where I should start.
Can someone help me?

Or just tell me anything you think I can work on?

Thanks.
-chen

I used to work on CentOS, installing neutron directly using the command “yum 
install neutron-xxx”.
Now I have already downloaded the neutron code and successfully run the unit tests.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft Patchsets

2014-05-22 Thread Sergey Lukjanov
Great, I think it'll make CRs more consistent, especially from the
reviewers PoV.

On Thu, May 22, 2014 at 3:24 AM, Clark Boylan clark.boy...@gmail.com wrote:
 Hello everyone,

 Gerrit has long supported Draft patchsets, and the infra team has long
 recommended against using them as they are a source of bugs and
 confusion (see below for specific details if you are curious). The newer
 version of Gerrit that we recently upgraded to allows us to prevent
 people from pushing new Draft patchsets. We will take advantage of this
 and disable pushes of new Drafts on Friday May 30, 2014.

 The impact of this change should be small. You can use the Work in
 Progress state instead of Drafts for new patchsets. Any existing
 Draft patchsets will remain in a Draft state until they are published.

 Now for the fun details on why drafts are broken.

 * Drafts appear to be secure but they offer no security. This is bad
   for user expectations and may expose data that shouldn't be exposed.
 * Draft patchsets pushed after published patchsets confuse reviewers as
   they cannot vote with a value because the latest patchset is hidden.
 * Draft patchsets confuse the Gerrit event stream output making it
   difficult for automated tooling to do the correct thing with Drafts.
 * Child changes of Drafts will fail to merge without explanation.

 Let us know if you have any questions,

 Clark (on behalf of the infra team)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting May 22 1800 UTC

2014-05-22 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_May.2C_22

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20140522T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Deleting cluster in sahara horizon, 500 Error occured in nova.

2014-05-22 Thread SMIGIELSKI, Radoslaw (Radoslaw)
Looks like one or more of your VMs got stuck and libvirt is having problems
stopping it now:

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher if ret == 
-1: raise libvirtError ('virDomainDestroy() failed', dom=self)
2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher libvirtError: 
Failed to terminate process 8756 with SIGKILL: Device or resource busy


I would recommend checking what exactly PID 8756 in the error is. I assume
it is a qemu process; what is going on with that process?
Does it have some issue accessing storage or something else? You can also
check the libvirtd logs; there should be something more there.
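
A few commands that may help pin it down (paths and commands assume a fairly
standard libvirt/qemu setup, adjust for your distribution):

    ps -fp 8756                     # which process is being SIGKILLed?
    sudo virsh list --all           # is the domain stuck in a transient state?
    sudo lsof -p 8756 | head        # is it holding a busy device or mount?
    sudo tail -n 100 /var/log/libvirt/libvirtd.log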


Radosław Śmigielski
ALCATEL-LUCENT - Cloudband/Node Team
Office: 238 646 13 or +353 (0)188 646 13
Mobile: +353 86 820 74 23





From: cosmos cosmos [cosmos0...@gmail.com]
Sent: 22 May 2014 12:30
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [sahara] Deleting cluster in sahara horizon, 500 
Error occured in nova.

Hello, my name is Inhye Park from Samsung SDS.

I am a developer working on Nova and Sahara.

Recently our system was updated to Icehouse.

After deleting a cluster in Sahara, the VM termination fails with a 500 error.

But this problem does not occur with the nova CLI command.

It only occurs through Sahara.

This is the error log.

-

2014-05-22 20:06:42.631 25640 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: Failed to terminate process 8756 with SIGKILL: Device 
or resource busy

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 122, 
in _do_dispatch

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher payload)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in wrapped

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 333, in 
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 309, in 
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 296, in 
decorated_function

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2262, in 
terminate_instance

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 
do_terminate_instance(instance, bdms)

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line 
249, in inner

2014-05-22 20:06:42.631 25640 TRACE oslo.messaging.rpc.dispatcher 

Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Joe Gordon
On Thu, May 22, 2014 at 1:48 AM, Lucas Alvares Gomes
lucasago...@gmail.comwrote:

 On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
 devananda@gmail.com wrote:
  I'd like to bring up the topic of drivers which, for one reason or
 another,
  are probably never going to have third party CI testing.
 
  Take for example the iBoot driver proposed here:
https://review.openstack.org/50977
 
  I would like to encourage this type of driver as it enables individual
  contributors, who may be using off-the-shelf or home-built systems, to
  benefit from Ironic's ability to provision hardware, even if that
 hardware
  does not have IPMI or another enterprise-grade out-of-band management
  interface. However, I also don't expect the author to provide a full
  third-party CI environment, and as such, we should not claim the same
 level
  of test coverage and consistency as we would like to have with drivers in
  the gate.

 +1

 
  As it is, Ironic already supports out-of-tree drivers. A python module
 that
  registers itself with the appropriate entrypoint will be made available
 if
  the ironic-conductor service is configured to load that driver. For what
  it's worth, I recall Nova going through a very similar discussion over
 the
  last few cycles...
 
  So, why not just put the driver in a separate library on github or
  stackforge?

 I would like to have this drivers within the Ironic tree under a
 separated directory (e.g /drivers/staging/, not exactly same but kinda
 like what linux has in their tree[1]). The advatanges of having it in
 the main ironic tree is because it makes it easier to other people
 access the drivers, easy to detect and fix changes in the Ironic code
 that would affect the driver, share code with the other drivers, add
 unittests and provide a common place for development.

 We can create some rules for people who are thinking about submitting
 their driver under the staging directory, it should _not_ be a place
 where you just throw the code and forget it, we would need to agree
 that the person submitting the code will also babysit it, we also
 could use the same process for all the other drivers wich wants to be
 in the Ironic tree to be accepted which is going through ironic-specs.

 Thoughts?

 [1] http://lwn.net/Articles/285599/


Linux has a very different model than OpenStack does; the article you
mention is talking about a whole separate git repo, along with a separate
(re: just OR, not exclusive or) set of maintainers. If you leave these
drivers in a staging directory, would you still require two cores for the
code to land? Would this be a bandwidth burden (it can be in nova)?

On a related note, Ironic has one similarity to Linux that most other
OpenStack projects don't have: testing drivers requires specific hardware.
Because of this, Linux doesn't claim to test every driver in the kernel; if
they did, they would have a pretty impressive collection of hardware lying
around.




 Cheers,
 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Lucas Alvares Gomes
 Linux has a very different model then OpenStack does, the article you
 mention is talking about a whole separate git repo, along with a separate
 (re: just OR, not exclusive or) set of maintainers. If you leave these
 drivers in a staging directory would you still require two cores for the
 code to land? Would this be a bandwidth burden (It can be in nova)?

Yes, Linux has a different model and I'm not proposing that we do the
same thing they do; I just pointed out that what I'm proposing here
is slightly similar.

Sorry about the article, it was a bad reference. The staging dir lives
within the Linux kernel tree[1], perhaps this reference would be
better: http://www.kroah.com/log/linux/linux-staging-update.html

About the review, I would say that yes, the two-+2s approach should work
the same for those drivers, just like any other patch in the queue, and
if it becomes a burden we can always expand the core team.

[1] 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/staging

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] strutils: enhance safe_decode() and safe_encode()

2014-05-22 Thread Flavio Percoco

On 21/05/14 11:32 -0400, Doug Hellmann wrote:

On Thu, May 15, 2014 at 11:41 AM, Victor Stinner
victor.stin...@enovance.com wrote:

Hi,

The functions safe_decode() and safe_encode() have been ported to Python 3,
and changed more than once. IMO we can still improve these functions to make
them more reliable and easier to use.


(1) My first concern is that these functions try to guess user expectation
about encodings. They use sys.stdin.encoding or sys.getdefaultencoding() as
the default encoding to decode, but this encoding depends on the locale
encoding (stdin encoding), on stdin (is stdin a TTY? is stdin mocked?), and on
the Python major version.

IMO the default encoding should be UTF-8 because most OpenStack components
expect this encoding.

Or maybe users want to display data to the terminal, and so the locale
encoding should be used? In this case, locale.getpreferredencoding() would be
more reliable than sys.stdin.encoding.


From what I can see, most uses of the module are in the client
programs. If using locale to find a default encoding is the best
approach, perhaps we should go ahead and make the change you propose.

One place I see safe_decode() used in a questionable way is in heat in
heat/engine/parser.py where validation errors are being re-raised as
StackValidationFailed (line 376 in my version). It's not clear why the
message is processed the way it is, so I would want to understand the
history before proposing a change there.


The original intent for these 2 functions was to provide a reliable
way to encode/decode the input. As already mentioned in this thread,
it's not good to assume what the best encoding for every case is, and I
would also prefer to keep these functions generic - as in, not designed
just for client libraries. We use this module in Glance as well,
unfortunately not as much as I'd like.

I would prefer the improved encoding guess to happen outside these
functions if it's meant for client libraries. For example, glanceclient
could use `getpreferredencoding` and pass that to
safe_(encode|decode).
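
Something like this minimal sketch, assuming glanceclient's oslo-incubator
copy of strutils (the module path and helper name here are mine; only
safe_encode's signature comes from the incubator code):

    import locale

    from glanceclient.openstack.common import strutils  # assumed location


    def encode_for_terminal(text):
        # The client, not strutils, decides that terminal output should use
        # the user's locale; strutils just does the mechanical conversion.
        return strutils.safe_encode(text,
                                    encoding=locale.getpreferredencoding())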

Flavio






(2) My second concern is that safe_encode(bytes, incoming, encoding)
transcodes the bytes string from incoming to encoding if these two encodings
are different.

When I port code to Python 3, I'm looking for a function to replace this
common pattern:

if isinstance(data, six.text_type):
    data = data.encode(encoding)

I don't want to modify data encoding if it is already a bytes string. So I
would prefer to have:

def safe_encode(data, encoding='utf-8'):
    if isinstance(data, six.text_type):
        data = data.encode(encoding)
    return data

Changing safe_encode() like this will break applications relying on the
transcode feature (incoming = encoding). If such usage exists, I suggest to
add a new function (ex: transcode ?) with an API fitting this use case. For
example, the incoming encoding would be mandatory.

Is there really applications using the incoming parameter?


The only place I see that parameter used in integrated projects is in
the tests for the module. I didn't check the non-integrated projects.
Given its symmetry with safe_decode(), I don't really see a problem
with the current name. Something like the shortcut you propose is
present in safe_encode(), so I'm not sure what benefit a new function
brings?


+1

Flavio

P.S: I'm working on graduating strutils from the incubator. I'm glad
you brought this up. I'm almost done with the graduation thing.

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [relmgt] Proposed Juno release schedule

2014-05-22 Thread Thierry Carrez
Thierry Carrez wrote:
 At the Design Summit last week we discussed the Juno release schedule
 and came up with the following proposal:
 
 https://wiki.openstack.org/wiki/Juno_Release_Schedule
 
 The main reported issue with it is the presence of the US labor day
 weekend just before juno-3 (feature freeze) week. That said, there
 aren't a lot of options there if we want to preserve 6 weeks between FF
 and release. I expect that with feature freeze happening on the Thursday
 rather than the Tuesday (due to the new process around milestone
 tagging), it will have limited impact.
 
 The schedule will be discussed and approved at the release meeting today
 (21:00 UTC in #openstack-meeting).

With no objection raised at the meeting nor on the list, it's now time
to make it final and official:

https://wiki.openstack.org/wiki/Juno_Release_Schedule

I'll proceed to creating the resulting milestones in Launchpad for
integrated projects.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] failing postgres jobs

2014-05-22 Thread Sergey Lukjanov
Thanks for the confirmation.

On Wed, May 21, 2014 at 12:18 AM, Nikhil Manchanda nik...@manchanda.me wrote:
 Yes, this issue is fixed now that 94315 is merged.


 On Tue, May 20, 2014 at 3:38 PM, Sergey Lukjanov slukja...@mirantis.com
 wrote:

 As I see, the 94315 merged atm, is the issue fixed?


 On Tuesday, May 20, 2014, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 If you hit an unknown error in a postgres job since Tue May 20 00:30:48
 2014 + you probably hit https://bugs.launchpad.net/trove/+bug/1321093
 (*-tempest-dsvm-postgres-full failing on trove-manage db_sync)

 A fix is in the works: https://review.openstack.org/#/c/94315/

 so once the fix lands, just run 'recheck bug 1321093'

 Additional patches are up to prevent this from happening again as well
 [0][1].

 best,
 Joe

 [0] https://review.openstack.org/#/c/94307/
 [1] https://review.openstack.org/#/c/94314/



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] OpenStack UX - IRC meeting

2014-05-22 Thread Sergey Lukjanov
Thanks for the reminder, it's very interesting for Sahara.

P.S. /me adding Chad directly (he's working on sahara@horizon).

On Mon, May 19, 2014 at 4:12 PM, Jaromir Coufal jcou...@redhat.com wrote:
 Hello everybody interested in UX,

 for one more time, I am reminding you that there is an ongoing survey for times
 which will work for you regarding regular OpenStack UX IRC meetings:

 http://doodle.com/3m29dkn3ef2em5in

 Cheers
 -- Jarda

 On 2014/06/05 17:27, Jaromir Coufal wrote:

 Hello UX folks,

 I am following the initial discussion about tools proposal. Everybody
 blessed UX IRC meetings so I am starting an initiative to organize them.
 At this moment, I would like to ask everybody interested in the meeting
 to participate in a survey and mark times which will work for you so
 that we can find the best match for the meeting:

 http://doodle.com/3m29dkn3ef2em5in

 Thank you all for participation and if you have any questions, I am
 happy to help.
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Sergey Lukjanov for infra-root

2014-05-22 Thread Anita Kuno
On 05/21/2014 05:42 PM, James E. Blair wrote:
 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:
 
   http://ci.openstack.org/project.html#team
 
 Sergey has been an extremely valuable member of infra-core for some time
 now, providing reviews on a wide range of infrastructure projects which
 indicate a growing familiarity with the large number of complex systems
 that make up the project infrastructure.  In particular, Sergey has
 expertise in systems related to the configuration of Jenkins jobs, Zuul,
 and Nodepool which is invaluable in diagnosing and fixing operational
 problems as part of infra-root.
 
 Please respond with any comments or concerns.
 
 Thanks again Sergey for all your work!
 
 -Jim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Yes please, +1.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Nominating Joshua Hesketh for infra-core

2014-05-22 Thread Anita Kuno
On 05/21/2014 05:57 PM, James E. Blair wrote:
 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:
 
   http://ci.openstack.org/project.html#team
 
 Joshua Hesketh has been reviewing a truly impressive number of infra
 patches for quite some time now.  He has an excellent grasp of how the
 CI system functions, no doubt in part because he runs a copy of it and
 has been doing significant work on evolving it to continue to scale.
 His reviews of python projects are excellent and particularly useful,
 but he also has a grasp of how the whole system fits together, which is
 a key thing for a member of infra-core.
 
 Please respond with any comments or concerns.
 
 Thanks, Joshua, for all your work!
 
 -Jim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
By all means, +1.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Keystone] Help extending Keystone JSON documents with custom attributes, safe?

2014-05-22 Thread Phillip Guerin

To be a bit more succinct, if I PATCH existing Keystone JSON documents 
(projects, roles, users, etc) with my own custom JSON attributes, can I expect 
this to be a safe practice? 

Meaning, I'd like to add my own custom attributes and be able to query them 
back at a later time when I look up the user or verify the authentication token.
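
Concretely, something like this (the token, IDs and attribute name are just
placeholders; whether the extra attribute is preserved and returned later is
exactly what I'm asking about):

    curl -s -X PATCH http://identity:35357/v3/users/0ca8f6 \
         -H "X-Auth-Token: $OS_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"user": {"sandvine": {"accounting": {}}}}'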

Is this a behavior that we can count on working in the future?

If it is an appropriate way to add the metadata we want, are there naming 
conventions we must preserve in our custom attributes to avoid name collisions 
in the future?

Thank you very much for your time, help, and consideration,

-PG

-Original Message-
From: Phillip Guerin 
Sent: Thursday, May 15, 2014 4:16 PM
To: 'openst...@lists.openstack.org'
Subject: [Openstack] [Keystone] Extending Keystone JSON documents with custom 
attributes, safe?

Hello, 

We're working on a project that uses a REST interface that exposes a set of 
APIs to our internal systems. We'd like to leverage the Keystone data models 
for our own fine grained authorization by adding our own custom attributes to 
Keystone 'projects', 'users', etc.

For example:

"user": {
    "domain": {
        "id": "1789d1",
        "links": {
            "self": "http://identity:35357/v3/domains/1789d1"
        },
        "name": "example.com"
    },
    "id": "0ca8f6",
    "links": {
        "self": "http://identity:35357/v3/users/0ca8f6"
    },
    "name": "Joe",

    "sandvine": {
        "authorization": {
            "requests": ["url", "url?fields=a,b,c"],
            "attributes": {
                "obfuscation": ["attr1"]
            }
        },
        "accounting": {
        }
    }
}
While I've done some simple tests and it essentially works, is this procedure 
of PATCHing Keystone user/role/etc... documents acceptable practice from your 
point of view?

Is this a behaviour that we can count on working in the future?

If it is an appropriate way to add the metadata we want, are there naming 
conventions we must preserve in our custom attributes to avoid name collisions 
in the future?

While this works on the direct objects I update, I don't see my custom fields 
when I verify the associated user token. Meaning, I can add my own attributes 
to 'user', but when I verify the token, I only see a subset of the 'user' 
attributes in the response payload. I don't see my own custom attributes and I 
don't see the 'links' attribute either. Whereas when I do a GET on the 'user' I 
just PATCHed, I see 'links' and my own custom attributes as well.

Is this by design, or am I potentially missing something in my token 
verification request or configuration that would return the full data model 
associated with the token? 
 - My work flow is to create a user, PATCH the user, GET the user to 
   confirm, then GET the token to ensure the PATCHed data has been 
   associated to it. I see changes to pre-existing Keystone attributes
   when I GET the token, I just don't see my custom additions.

If this isn't appropriate, is there an alternative method to add custom 
metadata to elements in the data model (users/roles/etc..)?

For example, we've also considered building a single nested JSON document and 
serializing that into the 'blob' section of the 'policy'
attribute.

Our service is not an Openstack service, so we cannot take advantage of writing 
policy to handle fine grained authorization of the APIs we're exploring through 
our own REST interface. The above is how we're trying to bridge that gap.


Thanks a lot for your time and feedback!

Phillip Guerin
Software Engineer
+1-519-572-4668
skype: phillip.guerin



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-22 Thread Jay Dobies

Merging a few of the replies into a single response:

 I like all of this plan, except for the name Overview. To me, 
Overview suggests a high-level summary rather than being one of the 
beefier sections of a spec. Something like Detail or Detailed 
overview (because the low-level detail will come in the changes that 
implement the spec, not in the spec) seems like a better description of 
what we intend to have there.


I didn't put much thought into the name, so Overview, Summary, Detail, 
etc. doesn't matter to me. If we agree to go down the route of a holder 
section here (as compared to loosening the validation), I'll poll for a 
better name.



I'm a bit ambivalent to be honest, but adding a section for Overview
doesn't really do much IMO.  Just give an overview in the first couple
of sentences under Proposed Change. If I go back and add an Overview
section to my spec in review, I'm just going to slap everything in
Proposed Change into one Overview section :).  To me, Work Items is
where more of the details go (which does support arbitrary
subsections with ^^^).


That's actually my expectation, that everything currently in place gets 
slapped under Overview. The change is pretty much only to support being 
able to further break down that section while still leaving the existing 
level of validation in place. It's not so much organizational as it is 
to make sphinx happy.



In general though I think that the unit tests are too rigid and
pedantic. Plus, having to go back and update old specs when we make
changes to unit tests seems strange. No biggie right now, but we do
have a couple of specs in review. Unless we write the unit tests to be
backwards compatible. This just feels a bit like engineering just for
the sake of it.  Maybe we need a spec on it :).


I agree that it's possible I'll be back here in the next few days 
complaining that my problem description is too large and would benefit 
from subsections, which I couldn't currently add because they'd be 
second-level sections which are strictly enforced.



I was a bit surprised to see that we don't have the Data Model section
in our specs, and when I had one, unit tests failed. We actually do
have data model stuff in Tuskar and our json structures in tripleo.


You can blame me for that, when I created the repository I took the nova
template and removed the sections I thought were not relevant; perhaps I
was a little too aggressive. I've got no problem if we want to add any of
them back in.

Looks like these are the sections I removed:
Data model impact
REST API impact
Notifications impact

I'd obviously forgotten about Tuskar, sorry.



 We just landed a change to permit the third-level subsections, but the
intent AIUI of requiring exact titles is to constrain the expression
space in the interests of clarity. We can (and should) add more
standard sections as needed.

I do like the idea of having these look consistent. I can work within 
the structure fine given that third-level subsections are permitted, but 
my issue is still that I have been treating the first section under 
Proposed Change as the meaty part of the change, which due to the lack 
of a second-level subsection doesn't let me add my own subsections.



Given the feedback, there are a few approaches we can take:

1. Add a second-level subsection at the start of Proposed Change. This 
subsection will be the description of the actual change, and adding it in 
will allow custom subsections to be permitted by the existing unit 
tests (see the sketch after this list).


2. Reduce the validation to only enforce required sections but not barf 
on the addition of new ones.
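
For option #1, the resulting layout would look roughly like this (the section
names other than Proposed Change are placeholders, not a final proposal):

    Proposed Change
    ===============

    Overview
    --------

    The meat of the change goes here, with room for custom subsections.

    Some Custom Subsection
    ^^^^^^^^^^^^^^^^^^^^^^

    Alternatives
    ------------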



Somewhat tangential (but to address Slagle's concern) is the question of 
whether or not we need some sort of template version number to prevent 
having to update X many existing specs when changing the structure in 
the future. I feel like this is overkill and it's probably much simpler 
to settle on a Juno template in the very near future (selfishly, I say 
near to allow my own issue here to be addressed) and then only change 
the templates at new versions. Again, I'm probably overthinking things 
at this point, but just throwing it out there.



Personally, my vote is for #1. Existing specs are simple to update, just 
slap the existing change under the new subsection and move on. For the 
naming of it, I'm fine with James P's suggestion of Detail.


Then for K, we make any changes to the template based on our usage of it 
in Juno. It's like a scrum post mortem task for a giant 6 month sprint :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-22 Thread Robert Kukura


On 5/21/14, 4:59 PM, Kyle Mestery wrote:

Neutron cores, please vote +1/-1 for the proposed addition of Carl
Baldwin to Neutron core.



+1

-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] VM performance of Openstack launched over Esxi

2014-05-22 Thread Rakesh Sinha
Hi,

I am using an OpenStack Havana setup with ESXi as the compute node.

I launch two VMs with identical configurations over ESXi - one via
OpenStack and the other manually via the vSphere Client.

The VM launched over ESXi manually via the vSphere Client performs better than
the one launched via OpenStack over ESXi. I am talking in terms of computation.

I am not taking networking into consideration, as in my case it plays no
role and it is disabled.

Am I wrong, or is that true?

Thanks
Rock
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dan Prince


- Original Message -
 From: Devananda van der Veen devananda@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, May 21, 2014 8:03:15 PM
 Subject: [openstack-dev] [Ironic] handling drivers that will not be   
 third-party tested
 
 I'd like to bring up the topic of drivers which, for one reason or another,
 are probably never going to have third party CI testing.
 
 Take for example the iBoot driver proposed here:
   https://review.openstack.org/50977
 
 I would like to encourage this type of driver as it enables individual
 contributors, who may be using off-the-shelf or home-built systems, to
 benefit from Ironic's ability to provision hardware, even if that hardware
 does not have IPMI or another enterprise-grade out-of-band management
 interface. However, I also don't expect the author to provide a full
 third-party CI environment, and as such, we should not claim the same level
 of test coverage and consistency as we would like to have with drivers in
 the gate.

+1

Not claiming the same level of support seems reasonable if we don't have 3rd 
party CI running on it.

This specific driver is integral to my personal TripleO dev/testing environment 
and as such I will provide a timely feedback loop on keeping it working. I've 
been using the same driver in Nova for around a year now and it has proven 
useful for testing on real baremetal as I'm using machines without IPMI.

 
 As it is, Ironic already supports out-of-tree drivers. A python module that
 registers itself with the appropriate entrypoint will be made available if
 the ironic-conductor service is configured to load that driver. For what
 it's worth, I recall Nova going through a very similar discussion over the
 last few cycles...

I believe Nova may have gone too far... see below.

 
 So, why not just put the driver in a separate library on github or
 stackforge?

Short answer:

 Because the driver can be easily broken due to internal Ironic driver API 
changes. :(

Long answer:

 Anyone who has tried to keep a driver in sync with an OpenStack project out of 
tree knows how frustrating it can be when internal APIs change. These APIs 
don't change often (obviously we try to minimize it) but they do change, often 
in subtle ways that you might not detect in code review. By remaining in-tree 
we allow our drivers to be unit tested, which can help avoid some of these 
breakages. These internal interfaces have already changed in Ironic during the 
review cycle for this branch (see the differences between patches 2 and 3 for 
example) so having it live in tree would already be useful here.

 With regards to Nova: I think OpenStack as a whole may be going too far with 
its 3rd party CI rules which force drivers out of tree. As a community I think 
we might benefit from having both the Ironic and Docker drivers in the Nova 
tree at this point... regardless of 3rd party CI rules.  We have people in the 
community who are willing to maintain these drivers, they are very popular, and 
we are arguably causing ourselves extra work by keeping them out of tree. Now, 
where they live in tree and what we say about our support for them... that is a 
very good question. Perhaps they should also live in tree but in a separate 
directory ('contrib' or 'experimental' works for me). Talking about this here 
is probably a bit out of scope though because a compute driver is a good bit 
more complicated than an Ironic power driver is... and would likely require a 
lot more code, maintenance, etc. I only mention it because I believe we may 
have got it wrong with Nova... so perhaps we shouldn't repeat 
 the same rules in Ironic?

 In summary: Having a diverse set of power drivers live in Ironic is useful for 
both users and developers. The changes requiring specific domain knowledge are 
probably low enough that they won't be an issue. 3rd party CI requirements are 
desirable, but in many cases the requirements are sufficiently high enough to 
make them impractical. As an added plus I think the Ironic community reviews 
help to raise the bar on these drivers in a way that wouldn't happen if they 
lived out of tree (you guys help make my code better!)

Dan

 
 
 -Devananda
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread James E. Blair
Thierry Carrez thie...@openstack.org writes:

 James E. Blair wrote:
 openstack/oslo-specs - openstack/common-libraries-specs

 I understand (and agree with) the idea that -specs repositories should
 be per-program.

 That said, you could argue that oslo is a shorthand for common
 libraries and is the code name for the *program* (rather than bound to
 any specific project). Same way infra is shorthand for
 infrastructure. So I'm not 100% convinced this one is necessary...

data-processing-specs has been pointed out as a similarly awkward
name.  According to the programs.yaml file, each program does have a
codename, and the compute program's codename is 'nova'.  I suppose we
could have said the repos are per-program though using the program's
codename.  But that doesn't actually help someone who wants to write a
swift-bench spec know that it should go in the swift-specs repo.

I'm happy to drop oslo from the rename list if Doug wants to mull this
over a bit more.  The only thing I hate more than renaming repos is
renaming repos twice.  I'm hoping we can have some kind of consistency,
though.  People are in quite a hurry to have these created (we made 5
more for official openstack programs yesterday, plus a handful for
stackforge).

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Storyboard] [UX] Atlanta Storyboard UX Summary

2014-05-22 Thread James E. Blair
Sean Dague s...@dague.net writes:

 It's worth noting, most (90%) of OpenStack developers aren't trying to
 land or track features across projects. And realistically, in my
 experience working code into different repositories, the blueprint / bug
 culture between projects varies widely (what requires an artifact, how
 big that artifact is, etc).

Fortunately, storyboard doesn't make that simple case any harder.  A
story with a single task looks just like a simple bug in any other
system (including a bug in LP that only affects one project).  There
are probably some things we can change in the UI that make that easier
for new users.

It's worth noting that at this point, all blueprint-style stories are
likely to affect at least two projects (nova, nova-specs), possibly many
more (novaclient, tempest, *-manual, ...).  So being able to support
this kind of work is key.  But yeah, we should make the simple case of
report a bug in nova easy.  And I think we will.  But storyboard is
barely self-hosting at this point and there's still quite a bit to flesh
out.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-22 Thread Collins, Sean
On Wed, May 21, 2014 at 10:47:16PM EDT, Mandeep Dhami wrote:
 The update from Sean seems to suggest to me that we needed blueprints only
 if the public API changes, and not for design changes that are internal to
 neutron. 

There was no statement in my e-mail that made that
suggestion. My e-mail was only an attempt to try and help provide
context for what was discussed at the summit. 

I dislike having people put words in my mouth, and you also seem to
continue to insinuate that things are not done in the open, with
everyone having a chance to participate.

I believe this is a serious charge, and I do not appreciate being
publicly accused of this.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [neutron] [trove] [swift] Uniform name for logger in projects

2014-05-22 Thread Jason Dunsmore
(Adding relevant projects to subject.  Hope I didn't miss any.)

Heat, Neutron, Trove, and Swift devs,

Do we want to change all instances of logger variable names to LOG
(like most OpenStack projects use) and enforce that via the hacking
rules?
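
For reference, a naive sketch of what such a check could look like (the
regex, function name and error code are made up for illustration, not taken
from any project's existing hacking checks):

    import re

    _LOG_ASSIGN_RE = re.compile(r"^\s*(\w+)\s*=\s*logging\.getLogger\b")


    def check_uniform_logger_name(logical_line):
        """H9999 - the module-level logger should be named LOG."""
        match = _LOG_ASSIGN_RE.match(logical_line)
        if match and match.group(1) != "LOG":
            yield (0, "H9999: use 'LOG' for the logger, not '%s'"
                   % match.group(1))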

Regards,
Jason


On Wed, May 21 2014, Sergey Kraynev wrote:

 Hello, community.

 I hope that most of you know that a bug named Log debugs should not
 have translations (e.g. [1], [2], [3]) was recently raised in several
 projects. The reason for this work is related to the following concerns
 [4].
 There is a special check that is used (or will be used in some projects
 where the related patches have not merged yet) for the verification process
 (e.g. [5] or [6]). As you can see, this ([5]) verification uses the LOG
 logger name in its regexps and if checks.
 However, there are a lot of projects where both names LOG and logger
 are used [7].
 So I have a question about the current situation:
 - Should we use one uniform name for logger or add support for both names
 in checks?

 In my opinion, declaring one uniform name in the hacking rules is
 preferable, because it keeps the code free of useless duplicate names for one
 variable and allows us to create one uniform check for this rule.

 [1] https://bugs.launchpad.net/neutron/+bug/1320867
 [2] https://bugs.launchpad.net/swift/+bug/1318333
 [3] https://bugs.launchpad.net/oslo/+bug/1317950
 [4] https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation
 [5]
 https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L201
 [6] https://review.openstack.org/#/c/94255/11/heat/hacking/checks.py
 [7] https://github.com/openstack/heat/search?q=getLoggertype=Code

 Regards,
 Sergey.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican][OSSG][Keystone] Mid-Cycle Meetup

2014-05-22 Thread David Stanek
On Thu, May 22, 2014 at 10:48 AM, Jarret Raim jarret.r...@rackspace.comwrote:


 This should make travel a bit easier for everyone as people won't need


Hey Jarret,

I'm going to be at the Keystone meetup for sure, but I'm also thinking
about going to the Barbican meetup too.

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican][OSSG][Keystone] Mid-Cycle Meetup

2014-05-22 Thread Jarret Raim
All,

There was some interest at the Summit in semi-combining the mid-cycle meet
ups for Barbican, Keystone and the OSSG as there is some overlap in team
members and interest areas. The current dates being considered are:

Mon, July 7 - Barbican
Tue, July 8 - Barbican
Wed, July 9 - Barbican / Keystone
Thu, July 10 - Keystone
Fri, July 11 - Keystone

Assuming these dates work for everyone, we'll fit some OSSG work in
during whatever days make the most sense. The current plan is to have the
meet up in San Antonio at the new Geekdom location, which is downtown.
This should make travel a bit easier for everyone as people won't need
cars as there are plenty of hotels and restaurants within walking / short
cab distance.

I wanted to try to get a quick head count from the Barbican and OSSG folks
(I think Dolph already has one for Keystone). I'd also like to know if you
are a Barbican person interested in going to the Keystone sessions or vice
versa.

Once we get a rough head count estimate, Dolph and I can work on getting
everything booked.





Thanks,

--
Jarret Raim 
@jarretraim




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Extending nova models

2014-05-22 Thread Solly Ross
Actually, that line you linked to about IMPL is a bit misleading.  In this case,
under the hood IMPL is really just the sqlalchemy implementation of the Nova db 
api at
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L2404.

In Nova we don't insert a row into a table per se.  Instead (in general), we
have sqlalchemy models which are then saved (which handles the inserting,
etc.).  So the path is: api wrapper (the file you linked to) -> sqlalchemy api
implementation -> sqlalchemy model.
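
To make that concrete, the pattern for a new table looks roughly like this (a
simplified sketch with made-up names, not actual Nova code):

    # nova/db/api.py -- the thin wrapper layer; IMPL resolves to the
    # sqlalchemy implementation module:
    def my_record_create(context, values):
        return IMPL.my_record_create(context, values)

    # nova/db/sqlalchemy/api.py -- builds the model and saves it, which is
    # what actually issues the INSERT:
    def my_record_create(context, values):
        record_ref = models.MyRecord()  # model defined in sqlalchemy/models.py
        record_ref.update(values)
        record_ref.save()
        return record_ref

So for your own table you would add a model, a pair of functions like the
above, and then call db.my_record_create() from your object's create() method,
much like KeyPair.create() ends up calling db.key_pair_create().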

Hope this helps.

Best Regards,
Solly Ross

- Original Message -
 From: Rad Gruchalski ra...@gruchalski.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, May 21, 2014 3:38:04 PM
 Subject: [openstack-dev] Extending nova models
 
 Hi everyone,
 
 This is my first question here. I hope I could get an answer for the problem
 I'm currently facing in the development of a nova API extension. I am trying
 to add a couple of API endpoints that would serve as an interface to the
 table storing some data. I was able to create an API endpoint by placing my
 extension in api/openstack/compute/contrib and modifying the policy.json
 file. This is now working.
 
 I then added the migration to create a table to
 nova/db/sqlalchemy/migrate_repo_versions/245_add_custom_table.
 
 After unstack.sh and stack.sh (I'm using devstack) I can see my table being
 created. Great.
 
 Next, I proceeded with creating an object definition and created a file in
 nova/objects. I am basing myself on keypairs.py example
 (https://github.com/openstack/nova/blob/2efd3faa3e07fdf16c2d91c16462e7e1e3f33e17/nova/api/openstack/compute/contrib/keypairs.py#L97)
 
 self.api.create_key_pair
 
 calls this
 https://github.com/openstack/nova/blob/839fe777e256d36e69e9fd7c571aed2c860b122c/nova/compute/api.py#L3512
 the important part is
 
 keypair = keypair_obj.KeyPair()
 keypair.user_id = user_id
 keypair.name = key_name
 keypair.fingerprint = fingerprint
 keypair.public_key = public_key
 keypair.create(context)
 
 `KeyPair()` is
 https://github.com/openstack/nova/blob/master/nova/objects/keypair.py
 
 this has a method
 https://github.com/openstack/nova/blob/master/nova/objects/keypair.py#L52
 and it's calling `db_keypair = db.key_pair_create(context, updates)`
 `db` points to `from nova import db`
 
 which I believe points to this
 https://github.com/openstack/nova/blob/master/nova/db/__init__.py
 which loads https://github.com/openstack/nova/blob/master/nova/db/api.py
 there's a function called
 https://github.com/openstack/nova/blob/master/nova/db/api.py#L922
 `key_pair_create`
 https://github.com/openstack/nova/blob/master/nova/db/api.py#L924
 
 `IMPL` is
 https://github.com/openstack/nova/blob/master/nova/db/api.py#L69-L95
 but where is `IMPL.key_pair_create`?
 
 Is there an easy way to insert a record into the table?
 Thank you for any pointers.
 
 I’ve posted the same question on ask.openstack.org
 (https://ask.openstack.org/en/questions/30231).
 
 
 
 
 
 
 
 
 Kind regards,
 Radek Gruchalski
 ra...@gruchalski.com
 de.linkedin.com/in/radgruchalski/
 +4917685656526
 
 Confidentiality:
 This communication is intended for the above-named person and may be
 confidential and/or legally privileged.
 If it has come to you in error you must take no action based on it, nor must
 you copy or show it to anyone; please delete/destroy and inform the sender
 immediately.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] BP Review for J1

2014-05-22 Thread Serg Melikyan
Dmitry, we actually need to move the meeting to another day, since not all
of our core team has returned from the OpenStack Summit yet. The BP review
for milestone 1 of the Juno release will be held on Tuesday, June 3 at 15:00
UTC. I hope it works for you too.


On Thu, May 22, 2014 at 12:44 PM, Dmitry mey...@gmail.com wrote:

 Any possibility to move the session to Monday (May 26)?


 On Thu, May 22, 2014 at 9:42 AM, Serg Melikyan smelik...@mirantis.com wrote:

 We will hold the BP review for milestone 1 of the Juno release at 15:00 UTC on
 May 23.

 Please, join us and vote for features that you are interested in!
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Cleaning up configuration settings

2014-05-22 Thread Renat Akhmerov
I tend to disagree with the whole idea, though I'm not 100% sure yet. Could you
please explain the point of scattering configuration all over the code? In my
opinion, we're mixing different application concerns. With the current approach
I always know where to look in order to find all my configuration option
definitions (types, descriptions etc.), but with this refactoring that seems to
become a tricky task.

Thoughts? What are the benefits of doing that?
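
For concreteness, my understanding is that the per-module style would look
roughly like this in each module (a sketch only; the option names are made up):

    from oslo.config import cfg

    engine_opts = [
        cfg.StrOpt('engine', default='default',
                   help='Mistral engine plugin'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(engine_opts, group='engine')

With that, finding every option means walking the whole tree instead of opening
one file, which is exactly the part I'm not convinced about.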

Renat Akhmerov
@ Mirantis Inc.



On 16 May 2014, at 21:32, Dmitri Zimine d...@stackstorm.com wrote:

 +1 for breaking down the configuration by modules. 
 
 Not sure about names for configuration group. Do you mean just using the same 
 group name? or more? 
 
 IMO groups are project specific; it doesn’t always make sense to use the same 
 group name in the context of different projects. Our requirement is 1) 
 auto-generate mistral.conf.example from the config.py and 2) sections make 
 sense in the product context.
 
 For example: how do we deal with rpc_backend and transport_url for oslo 
 messaging? Should we do something like CONF.import(_transport_opts, 
 “oslo.messaging.transport”, “transport”)? And use it by passing the group, 
 not the entire config, like:
 
   transport = messaging.get_transport(cfg.messaging.CONF)
 instead of
   transport = messaging.get_transport(cfg.CONF)
 
 DZ 
 
 
 On May 16, 2014, at 12:46 PM, W Chan m4d.co...@gmail.com wrote:
 
 Regarding config opts for keystone, the keystoneclient middleware already 
 registers the opts at 
 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/middleware/auth_token.py#L325
  under a keystone_authtoken group in the config file.  Currently, Mistral 
 registers the opts again at 
 https://github.com/stackforge/mistral/blob/master/mistral/config.py#L108 
 under a different configuration group.  Should we remove the duplicate from 
 Mistral and refactor the reference to keystone configurations to the 
 keystone_authtoken group?  This seems more consistent.
 
 
 On Thu, May 15, 2014 at 1:13 PM, W Chan m4d.co...@gmail.com wrote:
 Currently, the various configurations are registered in ./mistral/config.py. 
  The configurations are registered when mistral.config is referenced.  Given 
 the way the code is written, PEP8 throws referenced but not used error if 
 mistral.config is referenced but not called in the module.  In various use 
 cases, this is avoided by using importutils to import mistral.config (i.e. 
 https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/engine/test_transport.py#L34).
   I want to break down registration code in ./mistral/config.py into 
 separate functions for api, engine, db, etc and move the registration closer 
 to the module where the configuration is needed.  Any objections?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-22 Thread Diego Parrilla Santamaría
Yep, wrong order. You are right.

Thanks!

--
Diego Parrilla
CEO | www.stackops.com | diego.parri...@stackops.com
+34 91 005-2164 | skype:diegoparrilla




On Wed, May 21, 2014 at 11:53 PM, CARVER, PAUL pc2...@att.com wrote:

  Are you sure steps 1 and 2 aren’t in the wrong order? Seems like if
 you’re going to halt the source VM you should take your snapshot after
 halting. (Of course if you don’t intend to halt the VM you can just do your
 best to quiesce your most active writers before taking the snapshot and
 hope the disk is sufficiently consistent.)





 1) Take a snapshot of the VM from the source Private Cloud

 2) Halts the source VM (optional, but good for state consistency)



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Dmitriy Shulyak
Created spec https://review.openstack.org/#/c/94907/

I think it is still WIP, but it would be nice to hear some comments/opinions.


On Thu, May 22, 2014 at 1:59 AM, Robert Collins
robe...@robertcollins.net wrote:

 On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - RD - Corvallis)
 mark.m.mil...@hp.com wrote:
  We are considering the following connection chain:
 
 -> HAProxy       ->  stunnel        ->  OS services bound to 127.0.0.1
    Virtual IP        server IP          localhost 127.0.0.1
    secure            SSL terminate      unsecure

 Interestingly, and separately, HAProxy can do SSL termination now, so
 we might want to consider just using HAProxy for that.

  In this chain none of the ports need to be changed. One of the major issues
 I have come across is the hard coding of the Keystone ports in the
 OpenStack service's configuration files. With the above connection scheme
 none of the ports need to change.

 But we do need to have HAProxy not wildcard bind, as Greg points out,
 and to make OS services bind to 127.0.0.1 as Jan pointed out.

 I suspect we need to put this through the specs process (which ops
 teams are starting to watch) to ensure we get enough input.

 I'd love to see:
  - SSL by default
  - A setup we can document in the ops guide / HA openstack install
 guide - e.g we don't need to be doing it a third different way (or we
 can update the existing docs if what we converge on is better).
  - Only SSL enabled endpoints accessible from outside the machine (so
 python processes bound to localhost as a security feature).

 Eventually we may need to scale traffic beyond one HAProxy, at which
 point we'll need to bring something altogether more sophisticated in -
 lets design that when we need it.
 Sooner than that we're likely going to need to scale load beyond one
 control plane server at which point the HAProxy VIP either needs to be
 distributed (so active-active load receiving) or we need to go
 user - haproxy (VIP) - SSL endpoint (on any control plane node) -
 localhost bound service.

 HTH,
 Rob

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-22 Thread Diego Parrilla Santamaría
Hi Naveed,

I don't think it's a good idea to suspend/pause. If you want to keep the
state of the VM then have a look at the live migration capabilities of KVM.
Our script is very simple and works for any VM without attached block
storage.

Here goes the little script. Keep in mind it's something very simple.

https://gist.github.com/diegoparrilla/6288e1521bffe741f71a

Regards
Diego



--
Diego Parrilla
CEO | www.stackops.com | diego.parri...@stackops.com
+34 91 005-2164 | skype:diegoparrilla




On Thu, May 22, 2014 at 8:36 AM, Naveed Ahmad 12msccsnah...@seecs.edu.pk wrote:


 Hi Diego ,

 Thanks for sharing the steps for VM migration from the customer end to your cloud.
 Well, I am not going to propose a new idea for VM migration. I am using VM
 migration for a POC of my research idea.

 I have a few questions for you:


 1. Can we use the suspend/pause feature instead of a snapshot for saving VM
 state?
 2. How are you managing VM metadata (such as instance details from the
 nova/cinder databases)?



 Is it possible for you to share the script? I need this VM migration feature
 in OpenStack for a POC only.
 Thanks again for your reply.


 Regards
 Naveed




 On Wed, May 21, 2014 at 10:47 PM, Diego Parrilla Santamaría 
 diego.parrilla.santama...@gmail.com wrote:

 Hi Naveed,

 we have customers running VMs in their own Private Cloud that are
 migrating to our new Public Cloud offering. To be honest I would love to
 have a better way to do it, but this is how we do. We have developed a tiny
 script that basically performs the following actions:

 1) Take a snapshot of the VM from the source Private Cloud
 2) Halts the source VM (optional, but good for state consistency)
  3) Download the snapshot from source Private Cloud
 4) Upload the snapshot to target Public Cloud
 5) Start a new VM using the uploaded image in the target public cloud
 6) Allocate a floating IP and attach it to the VM
 7) Change DNS to point to the new floating IP
 8) Perform some cleanup processes (delete source VM, deallocate its
 floating IP, delete snapshot from source...)

 A bit rudimentary, but it works if your VM does not have attached volumes
 right away.

 Still, I would love to hear some sexy and direct way to do it.

 Regards
 Diego

 --
 Diego Parrilla
 CEO | www.stackops.com
 diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




 On Wed, May 21, 2014 at 7:32 PM, Naveed Ahmad 12msccsnah...@seecs.edu.pk
  wrote:


 Hi community,

 I need some help from you people. Openstack provides Hot (Live) and Cold
 (Offline) migration between clusters/compute. However i am interested to
 migrate Virtual Machine from one OpenStack Cloud to another.  is it
 possible ?  It is inter cloud VM migration not inter cluster or compute.

 I need help and suggestion regarding VM migration. I tried to manually
 migrate VM from one OpenStack Cloud to another but no success yet.

 Please guide me!

 Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)

HAProxy SSL termination is not a viable option when HAProxy is used to proxy 
traffic between servers. If HAProxy terminates the SSL it will then proxy the 
traffic unencrypted to any other server across a network. However, since SSL 
termination and SSL re-encryption are now features of the current HAProxy 
development releases, I would vote to add these features in addition to stunnel.
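
For illustration, the HAProxy-based termination plus re-encryption I have in
mind would look roughly like this (a sketch only; addresses, ports and
certificate paths are made up):

    frontend keystone_public
        bind 192.0.2.10:443 ssl crt /etc/haproxy/public.pem
        default_backend keystone_api

    backend keystone_api
        # re-encrypt towards the backends so the traffic never crosses the
        # internal network in clear text
        server ctl0 10.0.0.5:5000 ssl verify required ca-file /etc/haproxy/internal-ca.pem
        server ctl1 10.0.0.6:5000 ssl verify required ca-file /etc/haproxy/internal-ca.pem

That said, keeping the stunnel path available still makes sense for anyone
running an HAProxy build without the SSL features.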

Mark 

From: Dmitriy Shulyak [dshul...@mirantis.com]
Sent: Thursday, May 22, 2014 8:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options

Created spec https://review.openstack.org/#/c/94907/

I think it is WIP still, but would be nice to hear some comments/opinions


On Thu, May 22, 2014 at 1:59 AM, Robert Collins 
robe...@robertcollins.net wrote:
On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - RD - Corvallis)
mark.m.mil...@hp.com wrote:
 We are considering the following connection chain:

 -> HAProxy       ->  stunnel        ->  OS services bound to 127.0.0.1
    Virtual IP        server IP          localhost 127.0.0.1
    secure            SSL terminate      unsecure

Interestingly, and separately, HAProxy can do SSL termination now, so
we might want to consider just using HAProxy for that.

 In this chain none of the ports need to be changed. One of the major issues I 
 have come across is the hard coding of the Keystone ports in the OpenStack 
 service's configuration files. With the above connection scheme none of the 
 ports need to change.

But we do need to have HAProxy not wildcard bind, as Greg points out,
and to make OS services bind to 127.0.0.1 as Jan pointed out.

I suspect we need to put this through the specs process (which ops
teams are starting to watch) to ensure we get enough input.

I'd love to see:
 - SSL by default
 - A setup we can document in the ops guide / HA openstack install
guide - e.g we don't need to be doing it a third different way (or we
can update the existing docs if what we converge on is better).
 - Only SSL enabled endpoints accessible from outside the machine (so
python processes bound to localhost as a security feature).

Eventually we may need to scale traffic beyond one HAProxy, at which
point we'll need to bring something altogether more sophisticated in -
lets design that when we need it.
Sooner than that we're likely going to need to scale load beyond one
control plane server at which point the HAProxy VIP either needs to be
distributed (so active-active load receiving) or we need to go
user - haproxy (VIP) - SSL endpoint (on any control plane node) -
localhost bound service.

HTH,
Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-security] [Barbican][OSSG][Keystone] Mid-Cycle Meetup

2014-05-22 Thread Nathan Kinder


On 05/22/2014 07:48 AM, Jarret Raim wrote:
 All,
 
 There was some interest at the Summit in semi-combining the mid-cycle meet
 ups for Barbican, Keystone and the OSSG as there is some overlap in team
 members and interest areas. The current dates being considered are:
 
 Mon, July 7 - Barbican
 Tue, July 8 - Barbican
 Wed, July 9 - Barbican / Keystone
 Thu, July 10 - Keystone
 Fri, July 11 - Keystone

I'm interested in attending, but unfortunately I can't make these dates.

 
 Assuming these dates work for everyone, we'll fit some OSSG work in
 during whatever days make the most sense. The current plan is to have the
 meet up in San Antonio at the new Geekdom location, which is downtown.
 This should make travel a bit easier for everyone as people won't need
 cars, as there are plenty of hotels and restaurants within walking / short
 cab distance.
 
 I wanted to try to get a quick head count from the Barbican and OSSG folks
 (I think Dolph already has one for Keystone). I'd also like to know if you
 are a Barbican person interested in going to the Keystone sessions or vice
 versa.

All of the above. :)

-NGK

 
 Once we get a rough head count estimate, Dolph and I can work on getting
 everything booked.
 
 
 
 
 
 Thanks,
 
 --
 Jarret Raim 
 @jarretraim
 
 
 
 
 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Kyle Mestery
On Thu, May 22, 2014 at 1:52 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Tom Fifield's message of 2014-05-21 21:39:22 -0700:
 On 22/05/14 11:06, Kyle Mestery wrote:
  On Wed, May 21, 2014 at 5:06 PM, Tom Fifield t...@openstack.org wrote:
  On 22/05/14 05:48, James E. Blair wrote:
 
  Tom Fifield t...@openstack.org writes:
 
  May I ask, will the old names have some kind of redirect to the new
  names?
 
 
  Of course you may ask!  And it's a great question!  But sadly the answer
  is no.  Unfortunately, Gerrit's support for renaming projects is not
  very good (which is why we need to take downtime to do it).
 
  I'm personally quite fond of stable URLs.  However, these started as an
  experiment so we were bound to get some things wrong (and will
  probably continue to do so) and it's better to try to fix them early.
 
 
  This is a really poor outcome.
 
  Can we delay the migration until we have some time to think about the
  communication strategy?
 
  At the least, I'd suggest a delay for renaming neutron-specs until until
  after the peak of the Juno blueprint work is done. Say in ~3 weeks time.
 
  I tend to agree with James that we should do this early and take the
  bullet on renaming now. The process for adding new Neutron specs is
  outlined here [1], and this will be updated once the repository is
  renamed. In addition, I'm working on adding/updating some Neutron wiki
  pages around the Neutron development process, and the specs repo will
  be highlighted there once that's done. It would be good to have the
  renaming done before then.

 ... and how would you propose we communicate this to the users we've
 been asking to do blueprint review specifically during this early
 period? We can't exactly send them an email saying sorry, the link we
 mentioned earlier is now wrong :)

 What would you gain from doing it this week instead of later in the month?

 We're really trying to engage users to help out with the spec review
 process, but it seems they weren't taken into account at all when
 planning this change. Seems like a bad precedent to set for our first
 experiment.

 You didn't also ask them to subscribe to the users and/or operators
 mailing lists? I would think at least one of those two lists would be
 quite important for users to stay in the loop about the effort.

 Any large scale movement will be limited in scale by the scale of its
 mass communication.

I agree with Clint here. Participation involves more than just handing
a link out. While it's slightly inconvenient to have this change happen so
quickly, the reality is that we need people to be engaged across all of
our forms of communication. We will do our best to communicate these
changes through all mediums necessary, but really, subscribing to the
mailing lists mentioned above from a users/ops perspective should
almost be a prerequisite.

Thanks,
Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Sergey Lukjanov
BTW I'm working on preparing all changes for this renaming session.

On Thu, May 22, 2014 at 8:05 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 On Thu, May 22, 2014 at 10:16 AM, James E. Blair jebl...@openstack.org 
 wrote:
 Thierry Carrez thie...@openstack.org writes:

 James E. Blair wrote:
 openstack/oslo-specs - openstack/common-libraries-specs

 I understand (and agree with) the idea that -specs repositories should
 be per-program.

 That said, you could argue that oslo is a shorthand for common
 libraries and is the code name for the *program* (rather than bound to
 any specific project). Same way infra is shorthand for
 infrastructure. So I'm not 100% convinced this one is necessary...

 data-processing-specs has been pointed out as a similarly awkward
 name.  According to the programs.yaml file, each program does have a
 codename, and the compute program's codename is 'nova'.  I suppose we
 could have said the repos are per-program though using the program's
 codename.  But that doesn't actually help someone who wants to write a
 swift-bench spec know that it should go in the swift-specs repo.

 I'm happy to drop oslo from the rename list if Doug wants to mull this
 over a bit more.  The only thing I hate more than renaming repos is
 renaming repos twice.  I'm hoping we can have some kind of consistency,
 though.  People are in quite a hurry to have these created (we made 5
 more for official openstack programs yesterday, plus a handful for
 stackforge).

 I don't feel strongly, and am prepared to go along with the consensus
 on using the longer names.

 Doug


 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Gregory Haynes
On Thu, May 22, 2014, at 08:51 AM, Miller, Mark M (EB SW Cloud - RD -
Corvallis) wrote:
 
 HAProxy SSL termination is not a viable option when HAProxy is used to
 proxy traffic between servers. If HAProxy terminates the SSL it will then
 proxy the traffic unencrypted to any other server across a network.
 However, since SSL termination and SSL re-encryption are now features of
 the current HAProxy development releases, I would vote to add these
 features in addition to stunnel.

Relevant ML thread from a few months ago:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031229.html

 
 Mark 
 
 From: Dmitriy Shulyak [dshul...@mirantis.com]
 Sent: Thursday, May 22, 2014 8:35 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options
 
 Created spec https://review.openstack.org/#/c/94907/
 
 I think it is WIP still, but would be nice to hear some comments/opinions
 
 
 On Thu, May 22, 2014 at 1:59 AM, Robert Collins
 robe...@robertcollins.net wrote:
 On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - RD - Corvallis)
 mark.m.mil...@hp.com wrote:
  We are considering the following connection chain:
 
 -> HAProxy       ->  stunnel        ->  OS services bound to 127.0.0.1
    Virtual IP        server IP          localhost 127.0.0.1
    secure            SSL terminate      unsecure
 
 Interestingly, and separately, HAProxy can do SSL termination now, so
 we might want to consider just using HAProxy for that.

This would be a nice next step, but in the long term I can see users
wanting SSL termination and load balancing separated due to:
A) Different scaling requirements
B) Access control to machines with SSL certs

 
  In this chain none of the ports need to be changed. One of the major issues I 
  have come across is the hard coding of the Keystone ports in the OpenStack 
  service's configuration files. With the above connection scheme none of the 
  ports need to change.
 
 But we do need to have HAProxy not wildcard bind, as Greg points out,
 and to make OS services bind to 127.0.0.1 as Jan pointed out.
 
 I suspect we need to put this through the specs process (which ops
 teams are starting to watch) to ensure we get enough input.
 
 I'd love to see:
  - SSL by default
  - A setup we can document in the ops guide / HA openstack install
 guide - e.g we don't need to be doing it a third different way (or we
 can update the existing docs if what we converge on is better).
  - Only SSL enabled endpoints accessible from outside the machine (so
 python processes bound to localhost as a security feature).

+1

 
 Eventually we may need to scale traffic beyond one HAProxy, at which
 point we'll need to bring something altogether more sophisticated in -
 lets design that when we need it.
 Sooner than that we're likely going to need to scale load beyond one
 control plane server at which point the HAProxy VIP either needs to be
 distributed (so active-active load receiving) or we need to go
 user - haproxy (VIP) - SSL endpoint (on any control plane node) -
 localhost bound service.

Putting SSL termination behind HAProxy seems odd. Typically your load
balancer wants to be able to grok the traffic sent through it, which is
not possible in this setup. For an environment where sending unencrypted
traffic across the internal network is not allowed, I agree with Mark's
suggestion of re-encrypting for internal traffic, but IMO it should
still pass through the load balancer unencrypted. Basically:
User -> External SSL terminate -> LB -> SSL encrypt -> control plane

This is a bit overkill given our current state, but I think for now it's
important we terminate external SSL earlier on: see the ML thread linked
above for the reasoning.

 
 HTH,
 Rob

Thanks,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-22 Thread Fuente, Pablo A
Hi,
I'm part of a project that aims to manage Reservations on OpenStack.
Maybe this could be implemented in it. The name of the project is Blazar
(ex Climate). Currently we have two reservation plugins: one for
physical host reservations and another for virtual instance reservations.
We are planning to implement some new plugins: volume reservation and
neutron resource reservation. For the latter we are still analyzing which
resources to reserve, but from this email it seems that fixed IPs should be
one of them. Of course we have already implemented all the things needed to
handle reservations in our core code: reservation life cycle, lease
states (under review), notifications (using oslo), a plugin mechanism for
new resource reservations, a DB schema with alembic migrations, a REST API
(with Pecan), etc. And yes, it should be easy to try Blazar using our
Devstack integration.
If you need more information about the project please visit our wiki
[1] or send me an email.

Pablo.

[1] https://wiki.openstack.org/wiki/Climate

On Wed, 2014-05-21 at 23:51 +0100, Salvatore Orlando wrote:
 In principle there is nothing that should prevent us from implementing
 an IP reservation mechanism.
 
 
 As with anything, the first thing to check is literature or related
 work! If any other IaaS system is implementing such a mechanism, is
 it exposed through the API somehow?
 Also this feature is likely to be provided by IPAM systems. If yes,
 what constructs do they use?
 I do not have the answers to this questions, but I'll try to document
 myself; if you have them - please post them here.
 
 
 This new feature would probably be baked into neutron's IPAM logic.
 When allocating an IP, first check from within the IP reservation
 pool, and then if it's not found check from standard allocation pools
 (this has non negligible impact on availability ranges management, but
 these are implementation details).
 Aspects to consider, requirement-wise, are:
 1) Should reservations also be classified by qualification of the
 port? For instance, is it important to specify that an IP should be
 used for the gateway port rather than for a floating IP port?
 2) Are reservations something that an admin could specify on a
 tenant-basis (hence an admin API extension), or an implicit mechanism
 that can be tuned using configuration variables (for instance create
 an IP reservation a for gateway port for a given tenant when a router
 gateway is set).
 
 
 I apologise if these questions are dumb. I'm just trying to frame this
 discussion into something which could then possibly lead to submitting
 a specification.
 
 
 Salvatore
 
 
 On 21 May 2014 21:37, Collins, Sean sean_colli...@cable.comcast.com
 wrote:
 (Edited the subject since a lot of people filter based on the
 subject
 line)
 
 I would also be interested in reserved IPs - since we do not
 deploy the
 layer 3 agent and use the provider networking extension and a
 hardware
 router.
 
 On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński
 wrote:
  Hello,
 
  Ok, I found that now there is probably no such feature to
 reserve fixed
  ip for tenant. So I was thinking about add such feature to
 neutron. I
  mean that it should have new table with reserved ips in
 neutron
  database and neutron will check this table every time when
 new port
  will be created (or updated) and IP should be associated
 with this
  port. If user has got reserved IP it should be then used for
 new port,
  if IP is reserver by other tenant - it shouldn't be used.
  What You are thinking about such possibility? Is it possible
 to add it
  in some future release of neutron?
 
  --
  Best regards
  Sławek Kapłoński
  sla...@kaplonski.pl
 
 
  Dnia Mon, 19 May 2014 20:07:43 +0200
  Sławek Kapłoński sla...@kaplonski.pl napisał:
 
   Hello,
  
   I'm using openstack with neutron and ML2 plugin. Is there
 any way to
   reserve fixed IP from shared external network for one
 tenant? I know
   that there is possibility to create port with IP and later
 connect VM
   to this port. This solution is almost ok for me but
 problem is when
   user delete this instance - then port is also deleted and
 it is not
   reserved still for the same user and tenant. So maybe
 there is any
   solution to reserve it permanent?
   I know also about floating IPs but I don't use L3 agents
 so this is
   probably not for me :)
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Fuel] Hard code freeze 5.0 announcement

2014-05-22 Thread Mike Scherbakov
Stackers,
according to https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze,
we call for a HCF.

It will take some time to switch all the required CI infrastructure to handle
both the newly created stable/5.0 branch and master.

From now on, master is open for merges of everything deferred to 5.1. If we
find any Critical bugs for 5.0, then the patch *must be merged into master
first, and only after that can it be accepted into stable/5.0*.

Folks, we fixed an enormous number of bugs in this release. With the
groundwork laid for future Fuel upgrades based on LXC and Docker and the
endless stream of fixes for OpenStack HA, it is a huge leap forward for all
of us. Thank you guys for the hard work!

-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Dean Troyer
On Thu, May 22, 2014 at 9:05 AM, Dan Prince dpri...@redhat.com wrote:

 - Original Message -
  From: Devananda van der Veen devananda@gmail.com
 [...]

 interface. However, I also don't expect the author to provide a full
  third-party CI environment, and as such, we should not claim the same
 level
  of test coverage and consistency as we would like to have with drivers in
  the gate.

 Not claiming the same level of support seems reasonable if we don't have
 3rd party CI running on it.


As was repeated many times last week:  if it isn't tested it is broken.
 We get to argue over the definition and degree of 'tested' now...


  So, why not just put the driver in a separate library on github or
  stackforge?

  Because the driver can be easily broken due to internal Ironic driver API
 changes. :(


I have similar issues with DevStack and including in-repo support for
drivers/projects that are not integrated or incubated, or even an
OpenStack-affiliated project at all.  I have come to the conclusion that
at some point some things just make sense to include anyway.  But the
different status needs to be communicated somehow.

As a user of an open source project it is frustrating for me to want/need
to use something in a 'contrib' directory and not be able to know if that
thing still works or if it is even maintained.  Having a certain amount of
testing to at least demonstrate non-brokenness should be a requirement for
inclusion.

It would be great if we would adopt common nomenclature here so user
expectations are consistent:
* 'experimental' - stuff not tested at all?
* 'contrib' - stuff with some amount of testing short of gating
* 'thirdparty' - stuff that requires hardware or licensed software to fully
test

Also, contact information should be required for anything 'special' to at
least know who to notify if the thing is so broken that removal is
contemplated.

Projects are going to do what they want regarding inclusion/exclusion
policies, I hope we can use common practices to implement those choices.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-22 Thread Sławek Kapłoński
Hello


Dnia Wed, 21 May 2014 23:51:48 +0100
Salvatore Orlando sorla...@nicira.com napisał:

 In principle there is nothing that should prevent us from
 implementing an IP reservation mechanism.
 
 As with anything, the first thing to check is literature or related
 work! If any other IaaS system is implementing such a mechanism, is
 it exposed through the API somehow?
 Also this feature is likely to be provided by IPAM systems. If yes,
 what constructs do they use?
 I do not have the answers to this questions, but I'll try to document
 myself; if you have them - please post them here.
 
 This new feature would probably be baked into neutron's IPAM logic.
 When allocating an IP, first check from within the IP reservation
 pool, and then if it's not found check from standard allocation pools
 (this has non negligible impact on availability ranges management, but
 these are implementation details).
 Aspects to consider, requirement-wise, are:
 1) Should reservations also be classified by qualification of the
 port? For instance, is it important to specify that an IP should be
 used for the gateway port rather than for a floating IP port?

IMHO it is not required when an IP is reserved. The user should have the
possibility to reserve such an IP for his tenant and later use it however
he wants (floating IP, instance or whatever).
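
To make the allocation order from Salvatore's mail concrete, I imagine
something roughly like this (an illustration only, not real Neutron code):

    def allocate_fixed_ip(tenant_id, reservations, free_pool):
        """Sketch: honour reservations before the normal allocation pools.

        reservations -- dict mapping tenant_id to its reserved IP
        free_pool    -- iterable of free IPs from the allocation pools
        """
        if tenant_id in reservations:
            return reservations[tenant_id]
        reserved_ips = set(reservations.values())
        for ip in free_pool:
            # never hand out an IP that is reserved for another tenant
            if ip not in reserved_ips:
                return ip
        raise Exception("no free IP addresses left in the allocation pools")

The real thing would of course live in Neutron's IPAM code and be backed by
the new reservations table.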

 2) Are reservations something that an admin could specify on a
 tenant-basis (hence an admin API extension), or an implicit mechanism
 that can be tuned using configuration variables (for instance create
 an IP reservation a for gateway port for a given tenant when a router
 gateway is set).
 
 I apologise if these questions are dumb. I'm just trying to frame this
 discussion into something which could then possibly lead to
 submitting a specification.
 
 Salvatore
 
 
 On 21 May 2014 21:37, Collins, Sean sean_colli...@cable.comcast.com
 wrote:
 
  (Edited the subject since a lot of people filter based on the
  subject line)
 
  I would also be interested in reserved IPs - since we do not deploy
  the layer 3 agent and use the provider networking extension and a
  hardware router.
 
  On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński wrote:
   Hello,
  
   Ok, I found that now there is probably no such feature to reserve
   fixed ip for tenant. So I was thinking about add such feature to
   neutron. I mean that it should have new table with reserved ips
   in neutron database and neutron will check this table every time
   when new port will be created (or updated) and IP should be
   associated with this port. If user has got reserved IP it should
   be then used for new port, if IP is reserver by other tenant - it
   shouldn't be used. What You are thinking about such possibility?
   Is it possible to add it in some future release of neutron?
  
   --
   Best regards
   Sławek Kapłoński
   sla...@kaplonski.pl
  
  
   Dnia Mon, 19 May 2014 20:07:43 +0200
   Sławek Kapłoński sla...@kaplonski.pl napisał:
  
Hello,
   
I'm using openstack with neutron and ML2 plugin. Is there any
way to reserve fixed IP from shared external network for one
tenant? I know that there is possibility to create port with IP
and later connect VM to this port. This solution is almost ok
for me but problem is when user delete this instance - then
port is also deleted and it is not reserved still for the same
user and tenant. So maybe there is any solution to reserve it
permanent? I know also about floating IPs but I don't use L3
agents so this is probably not for me :)
   
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Best regards
Sławek Kapłoński
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV][Neutron] Link to patch/review for allowing instances to receive vlan tagged traffic

2014-05-22 Thread Steve Gordon
Hi Alan/Balazs,

In one of the NFV BoF sessions in Atlanta one of you (I assume one of you 
anyway!) noted in the etherpad [1] (line 89) that Ericsson had submitted a 
patch to Neutron which would allow instances to use tagged vlan traffic. Do you 
happen to have a link handy to the review for this?

Thanks in advance,

Steve

[1] https://etherpad.openstack.org/p/juno-nfv-bof

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Maru Newby
At the summit session last week for group-based policy, there were many 
concerns voiced about the approach being undertaken.  I think those concerns 
deserve a wider audience, and I'm going to highlight some of them here.

The primary concern seemed to be related to the complexity of the approach 
implemented for the POC.  A number of session participants voiced concern that 
the simpler approach documented in the original proposal [1] (described in the 
section titled 'Policies applied between groups') had not been implemented in 
addition to or instead of what appeared in the POC (described in the section 
titled 'Policies applied as a group API').  The simpler approach was considered 
by those participants as having the advantage of clarity and immediate 
usefulness, whereas the complex approach was deemed hard to understand and 
without immediate utility.

A secondary but no less important concern is related to the impact on Neutron 
of the approach implemented in the POC.  The POC was developed monolithically, 
without oversight through gerrit, and the resulting patches were excessive in 
size (~4700 [2] and ~1500 [3] lines).  Such large patches are effectively 
impossible to review.  Even broken down into reviewable chunks, though, it does 
not seem realistic to target juno-1 for merging this kind of complexity.  The 
impact on stability could be considerable, and it is questionable whether the 
necessary review effort should be devoted to fast-tracking group-based policy 
at all, let alone an approach that is considered by many to be unnecessarily 
complicated.  

The blueprint for group policy [4] is currently listed as a 'High' priority.  
With the above concerns in mind, does it make sense to continue prioritizing an 
effort that at present would seem to require considerably more resources than 
the benefit it appears to promise?


Maru

1: https://etherpad.openstack.org/p/group-based-policy
2: https://review.openstack.org/93853
3: https://review.openstack.org/93935
4: https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-22 Thread Sławek Kapłoński
Hello,

Thanks for the answers and the info about this project - I will take a look
at it for sure :)
For me, reservation of fixed IPs is enough for now and I don't know what
else could be reserved in neutron. A floating IP can already be assigned to
a tenant, and that is some kind of reservation. Maybe someone else will
have other ideas about which neutron resources could be reserved.

-- 
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl


Dnia Thu, 22 May 2014 10:15:11 -0700
Nikolay Starodubtsev nstarodubt...@mirantis.com napisał:

 Hi,
 I agree with Pablo. We've looked at Neutron resource reservations
 some time ago,
 but we couldn't decide which resources should be reserved and this
 question has been postponed.
 
 
 
 Nikolay Starodubtsev
 
 Software Engineer
 
 Mirantis Inc.
 
 
 Skype: dark_harlequine1
 
 
 2014-05-22 10:00 GMT-07:00 Fuente, Pablo A pablo.a.fue...@intel.com:
 
  Hi,
  I'm part of a project that aims to manage Reservations on
  OpenStack.
  Maybe this could be implemented on it. The name of the project is
  Blazar (ex Climate). Currently we have two reservation plugins: one
  for physical host reservations and other for virtual instances
  reservation. We are planning to implement some new plugins: volume
  reservation and neutron resource reservation. For the later we are
  still analyzing which resources reserve, but from this email seems
  that fixed IP should be one of them. Of course we already have
  implemented all the things needed to handle reservations in our
  core code: reservation life cycle, leases states (under review),
  notifications (using oslo), pluging mechanism for new resource
  reservations, DB schema with alembic migrations, REST API (with
  Pecan), etc. And yes, should be easy to try Blazar using our
  Devstack integration. If you need more information about the
  project please visit our wiki
  [1] or send me an email.
 
  Pablo.
 
  [1] https://wiki.openstack.org/wiki/Climate
 
  On Wed, 2014-05-21 at 23:51 +0100, Salvatore Orlando wrote:
   In principle there is nothing that should prevent us from
   implementing an IP reservation mechanism.
  
  
   As with anything, the first thing to check is literature or
   related work! If any other IaaS system is implementing such a
   mechanism, is it exposed through the API somehow?
   Also this feature is likely to be provided by IPAM systems. If
   yes, what constructs do they use?
   I do not have the answers to this questions, but I'll try to
   document myself; if you have them - please post them here.
  
  
   This new feature would probably be baked into neutron's IPAM
   logic. When allocating an IP, first check from within the IP
   reservation pool, and then if it's not found check from standard
   allocation pools (this has non negligible impact on availability
   ranges management, but these are implementation details).
   Aspects to consider, requirement-wise, are:
   1) Should reservations also be classified by qualification of
   the port? For instance, is it important to specify that an IP
   should be used for the gateway port rather than for a floating IP
   port? 2) Are reservations something that an admin could specify
   on a tenant-basis (hence an admin API extension), or an implicit
   mechanism that can be tuned using configuration variables (for
   instance create an IP reservation a for gateway port for a given
   tenant when a router gateway is set).
  
  
   I apologise if these questions are dumb. I'm just trying to frame
   this discussion into something which could then possibly lead to
   submitting a specification.
  
  
   Salvatore
  
  
   On 21 May 2014 21:37, Collins, Sean
   sean_colli...@cable.comcast.com wrote:
   (Edited the subject since a lot of people filter based on
   the subject
   line)
  
   I would also be interested in reserved IPs - since we do
   not deploy the
   layer 3 agent and use the provider networking extension
   and a hardware
   router.
  
   On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński
   wrote:
Hello,
   
Ok, I found that now there is probably no such feature
to
   reserve fixed
ip for tenant. So I was thinking about add such feature
to
   neutron. I
mean that it should have new table with reserved ips in
   neutron
database and neutron will check this table every time
when
   new port
will be created (or updated) and IP should be associated
   with this
port. If user has got reserved IP it should be then
used for
   new port,
if IP is reserver by other tenant - it shouldn't be
used. What You are thinking about such possibility? Is
it possible
   to add it
in some future release of neutron?
   
--
Best regards
Sławek 

Re: [openstack-dev] [TripleO] Haproxy configuration options

2014-05-22 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
That depends on your security requirements.  If HAProxy is proxying requests to 
multiple servers and you terminate the SSL at HAProxy, then you will be sending 
the request unencrypted from one server to another. I am not at all opposed to 
adding the capabilities to configure HAProxy to terminate and even re-encrypt 
requests for those who have a different set of security requirements. Looks 
like I will need both the stunnel server and client.

Mark

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Wednesday, May 21, 2014 3:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Haproxy configuration options

On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:
 We are considering the following connection chain:

 -> HAProxy       ->  stunnel        ->  OS services bound to 127.0.0.1
    Virtual IP        server IP          localhost 127.0.0.1
    secure            SSL terminate      unsecure

Interestingly, and separately, HAProxy can do SSL termination now, so we might 
want to consider just using HAProxy for that.

 In this chain none of the ports need to be changed. One of the major issues I 
 have come across is the hard coding of the Keystone ports in the OpenStack 
 service's configuration files. With the above connection scheme none of the 
 ports need to change.

But we do need to have HAProxy not wildcard bind, as Greg points out, and to 
make OS services bind to 127.0.0.1 as Jan pointed out.

I suspect we need to put this through the specs process (which ops teams are 
starting to watch) to ensure we get enough input.

I'd love to see:
 - SSL by default
 - A setup we can document in the ops guide / HA openstack install guide - e.g 
we don't need to be doing it a third different way (or we can update the 
existing docs if what we converge on is better).
 - Only SSL enabled endpoints accessible from outside the machine (so python 
processes bound to localhost as a security feature).

Eventually we may need to scale traffic beyond one HAProxy, at which point 
we'll need to bring something altogether more sophisticated in - lets design 
that when we need it.
Sooner than that we're likely going to need to scale load beyond one control 
plane server at which point the HAProxy VIP either needs to be distributed (so 
active-active load receiving) or we need to go user - haproxy (VIP) - SSL 
endpoint (on any control plane node) - localhost bound service.

HTH,
Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Maru Newby

On May 22, 2014, at 11:03 AM, Maru Newby ma...@redhat.com wrote:

 At the summit session last week for group-based policy, there were many 
 concerns voiced about the approach being undertaken.  I think those concerns 
 deserve a wider audience, and I'm going to highlight some of them here.
 
 The primary concern seemed to be related to the complexity of the approach 
 implemented for the POC.  A number of session participants voiced concern 
 that the simpler approach documented in the original proposal [1] (described 
 in the section titled 'Policies applied between groups') had not been 
 implemented in addition to or instead of what appeared in the POC (described 
 in the section titled 'Policies applied as a group API').  The simpler 
 approach was considered by those participants as having the advantage of 
 clarity and immediate usefulness, whereas the complex approach was deemed 
 hard to understand and without immediate utility.
 
 A secondary but no less important concern is related to the impact on Neutron 
 of the approach implemented in the POC.  The POC was developed 
 monolithically, without oversight through gerrit, and the resulting patches 
 were excessive in size (~4700 [2] and ~1500 [3] lines).  Such large patches 
 are effectively impossible to review.  Even broken down into reviewable 
 chunks, though, it does not seem realistic to target juno-1 for merging this 
 kind of complexity.  The impact on stability could be considerable, and it is 
 questionable whether the necessary review effort should be devoted to 
 fast-tracking group-based policy at all, let alone an approach that is 
 considered by many to be unnecessarily complicated.  
 
 The blueprint for group policy [4] is currently listed as a 'High' priority.  
 With the above concerns in mind, does it make sense to continue prioritizing 
 an effort that at present would seem to require considerably more resources 
 than the benefit it appears to promise?
 
 
 Maru
 
 1: https://etherpad.openstack.org/p/group-based-policy

Apologies, this link is to the summit session etherpad.  The link to the 
original proposal is:

https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 2: https://review.openstack.org/93853
 3: https://review.openstack.org/93935
 4: 
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Armando M.
I would second Maru's concerns, and I would also like to add the following:

We need to acknowledge the fact that there are certain architectural
aspects of Neutron as a project that need to be addressed; at the
summit we talked about the core refactoring, a task-oriented API, etc.
To me these items have been neglected far too much in the past and
need a higher priority and a lot more attention during the Juno
cycle. Being stretched as we are, I wonder if dev/review cycles
wouldn't be better spent devoting more time to these efforts rather
than to GP.

That said, I appreciate that GP is important and needs to move
forward, but at the same time I think that there must be a
better way of addressing it that also relieves some of the pressure
that GP's complexity imposes on the Neutron team. One aspect that was
discussed at the summit was that the type of approach shown in [2] and [3]
below was chosen because of a lack of proper integration hooks... so I
am advocating: let's talk about those first before ruling them out in
favor of a monolithic approach that seems to violate some engineering
principles, like modularity and loose coupling of system components.

I think we didn't have enough time during the summit to iron out some
of the concerns voiced here, and it seems like the IRC meeting for
Group Policy would not be the right venue to try and establish a
common ground among the people driving this effort and the rest of the
core team.

Shall we try and have an ad-hoc meeting and an ad-hoc agenda to find a
consensus?

Many thanks,
Armando

On 22 May 2014 11:38, Maru Newby ma...@redhat.com wrote:

 On May 22, 2014, at 11:03 AM, Maru Newby ma...@redhat.com wrote:

 At the summit session last week for group-based policy, there were many 
 concerns voiced about the approach being undertaken.  I think those concerns 
 deserve a wider audience, and I'm going to highlight some of them here.

 The primary concern seemed to be related to the complexity of the approach 
 implemented for the POC.  A number of session participants voiced concern 
 that the simpler approach documented in the original proposal [1] (described 
 in the section titled 'Policies applied between groups') had not been 
 implemented in addition to or instead of what appeared in the POC (described 
 in the section titled 'Policies applied as a group API').  The simpler 
 approach was considered by those participants as having the advantage of 
 clarity and immediate usefulness, whereas the complex approach was deemed 
 hard to understand and without immediate utility.

 A secondary but no less important concern is related to the impact on 
 Neutron of the approach implemented in the POC.  The POC was developed 
 monolithically, without oversight through gerrit, and the resulting patches 
 were excessive in size (~4700 [2] and ~1500 [3] lines).  Such large patches 
 are effectively impossible to review.  Even broken down into reviewable 
 chunks, though, it does not seem realistic to target juno-1 for merging this 
 kind of complexity.  The impact on stability could be considerable, and it 
 is questionable whether the necessary review effort should be devoted to 
 fast-tracking group-based policy at all, let alone an approach that is 
 considered by many to be unnecessarily complicated.

 The blueprint for group policy [4] is currently listed as a 'High' priority. 
  With the above concerns in mind, does it make sense to continue 
 prioritizing an effort that at present would seem to require considerably 
 more resources than the benefit it appears to promise?


 Maru

 1: https://etherpad.openstack.org/p/group-based-policy

 Apologies, this link is to the summit session etherpad.  The link to the 
 original proposal is:

 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit

 2: https://review.openstack.org/93853
 3: https://review.openstack.org/93935
 4: 
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Neutron] Link to patch/review for allowing instances to receive vlan tagged traffic

2014-05-22 Thread Yi Sun
Is this the one?
https://review.openstack.org/#/c/92541/


On Thu, May 22, 2014 at 11:04 AM, Steve Gordon sgor...@redhat.com wrote:

 Hi Alan/Balazs,

 In one of the NFV BoF sessions in Atlanta one of you (I assume one of you
 anyway!) noted in the etherpad [1] (line 89) that Ericsson had submitted a
 patch to Neutron which would allow instances to use tagged vlan traffic. Do
 you happen to have a link handy to the review for this?

 Thanks in advance,

 Steve

 [1] https://etherpad.openstack.org/p/juno-nfv-bof

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Android-x86
http://www.android-x86.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] handling drivers that will not be third-party tested

2014-05-22 Thread Doug Hellmann
On Thu, May 22, 2014 at 4:48 AM, Lucas Alvares Gomes
lucasago...@gmail.com wrote:
 On Thu, May 22, 2014 at 1:03 AM, Devananda van der Veen
 devananda@gmail.com wrote:
 I'd like to bring up the topic of drivers which, for one reason or another,
 are probably never going to have third party CI testing.

 Take for example the iBoot driver proposed here:
   https://review.openstack.org/50977

 I would like to encourage this type of driver as it enables individual
 contributors, who may be using off-the-shelf or home-built systems, to
 benefit from Ironic's ability to provision hardware, even if that hardware
 does not have IPMI or another enterprise-grade out-of-band management
 interface. However, I also don't expect the author to provide a full
 third-party CI environment, and as such, we should not claim the same level
 of test coverage and consistency as we would like to have with drivers in
 the gate.

 +1


 As it is, Ironic already supports out-of-tree drivers. A python module that
 registers itself with the appropriate entrypoint will be made available if
 the ironic-conductor service is configured to load that driver. For what
 it's worth, I recall Nova going through a very similar discussion over the
 last few cycles...

 So, why not just put the driver in a separate library on github or
 stackforge?

 I would like to have these drivers within the Ironic tree under a
 separate directory (e.g. /drivers/staging/, not exactly the same but kind
 of like what Linux has in its tree[1]). The advantages of having them in
 the main Ironic tree are that it makes it easier for other people to
 access the drivers, easier to detect and fix changes in the Ironic code
 that would affect the drivers, and it lets them share code with the other
 drivers, add unit tests, and have a common place for development.

 We can create some rules for people who are thinking about submitting
 their driver under the staging directory: it should _not_ be a place
 where you just throw the code and forget it, so we would need to agree
 that the person submitting the code will also babysit it. We could also
 use the same acceptance process for all the other drivers which want to
 be in the Ironic tree, which is going through ironic-specs.

 Thoughts?

One aspect of the entry points-based plugin system is that the
deployer configuring the driver no longer needs to know where the
source lives in the tree to activate it, since the plugin has a name
that is separate from its module or class name. That has 2
implications: It doesn't really matter where in the tree you put the
code, and you need to do something else to document its status.

If you keep the drivers in tree, you may want to consider prefixing
the names of less-well-tested drivers with contrib- or
experimental- or something similar so the driver's status is clear
at the point when someone goes to activate the driver.
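
As a rough sketch of how that registration looks (the package, module and
entry point names below are made up for illustration, not the actual iBoot
patch, and I'm assuming the 'ironic.drivers' entry point namespace that the
in-tree drivers use), an out-of-tree driver would advertise itself from its
own setup.py:

    # setup.py of a hypothetical out-of-tree driver package (illustrative only).
    # The deployer only ever sees the entry point name on the left-hand side
    # ("experimental_iboot"); where the module lives is irrelevant to them.
    import setuptools

    setuptools.setup(
        name='ironic-iboot-driver',
        version='0.1.0',
        packages=['ironic_iboot'],
        entry_points={
            'ironic.drivers': [
                'experimental_iboot = ironic_iboot.driver:IBootDriver',
            ],
        },
    )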

Doug


 [1] http://lwn.net/Articles/285599/

 Cheers,
 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] specs blueprints for juno

2014-05-22 Thread Doug Hellmann
On Tue, May 20, 2014 at 9:28 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:
 We agreed just before the summit that we wanted to participate in the
 specs repository experiments for this cycle. The repository is set up
 [1] and I've just posted a review for an updated template [2] that
 includes some sections added to nova's template after we copied it and
 some sections we need that other projects don't.

 To keep tracking simpler, ttx and I intend to use launchpad only for
 reporting and not for actually approving blueprints, so I would like
 all blueprints to have a corresponding spec ASAP with 2 exceptions:
 Ben has already finished graduate-config-fixture and the oslo-db-lib
 work is far enough along that the *graduation* part of that doesn't
 need to be written up (any other pending db changes not tied to a bug
 should have a spec and blueprint created).

 Please look over the template review, and start thinking about the
 specs for your blueprints. After the updated template lands, we'll be
 ready to start reviewing the specs for all of the blueprints we plan
 to work on during Juno.

 Thanks!
 Doug

 1. http://git.openstack.org/cgit/openstack/oslo-specs
 2. https://review.openstack.org/94359

If you were waiting to rebase and update your spec documents, the
template is now ready. We do have another template for graduation
blueprints under review still [1], but I think we can begin reviewing
blueprints not related to graduations.

Doug

1. https://review.openstack.org/94906

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

2014-05-22 Thread Eichberger, German
Hi Sam,

I totally agree - this will definitely reduce our scope and increase the chance 
of getting this in.

I am still (being influenced by Unix methodology) thinking that we should 
explore service chaining more for that. As I said earlier, re-encryption feels 
more like a VPN type thing than a load balancer. Hence, I can imagine a very 
degenerated VPN service which re-encrypts things with SSL. But, admittedly, I 
am looking at that as a software engineer and not a network engineer :)

German

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Thursday, May 22, 2014 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS]TLS API support for authentication

Hi Everyone,

I would like to defer addressing client authentication and back-end-server 
authentication to a 2nd phase - after Juno.
This means that, looking at 
https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7 under the SSL/TLS 
Termination capabilities, we would not address 2.2 and 3.
I think that this would reduce the effort of storing certificate information 
to only the certificates actually used for the termination.
We will leave the discussion on storing the required trusted certificates and 
CA chains for later.

Any objections?

Regards,
-Sam.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Sergey Lukjanov
I've published all the renaming changes; here is an etherpad [1] with the
list of them. All of the changes are in one chain to avoid merge
conflicts; the oslo-specs renaming is the last one in the chain in case we
decide to keep it as is.

[1] https://etherpad.openstack.org/p/repo-renaming-2014-05-23

Thanks.

On Thu, May 22, 2014 at 8:43 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 BTW I'm working on preparing all changes for this renaming session.

 On Thu, May 22, 2014 at 8:05 PM, Doug Hellmann
 doug.hellm...@dreamhost.com wrote:
 On Thu, May 22, 2014 at 10:16 AM, James E. Blair jebl...@openstack.org 
 wrote:
 Thierry Carrez thie...@openstack.org writes:

 James E. Blair wrote:
 openstack/oslo-specs - openstack/common-libraries-specs

 I understand (and agree with) the idea that -specs repositories should
 be per-program.

 That said, you could argue that oslo is a shorthand for common
 libraries and is the code name for the *program* (rather than bound to
 any specific project). Same way infra is shorthand for
 infrastructure. So I'm not 100% convinced this one is necessary...

 data-processing-specs has been pointed out as a similarly awkward
 name.  According to the programs.yaml file, each program does have a
 codename, and the compute program's codename is 'nova'.  I suppose we
 could have said the repos are per-program though using the program's
 codename.  But that doesn't actually help someone who wants to write a
 swift-bench spec know that it should go in the swift-specs repo.

 I'm happy to drop oslo from the rename list if Doug wants to mull this
 over a bit more.  The only thing I hate more than renaming repos is
 renaming repos twice.  I'm hoping we can have some kind of consistency,
 though.  People are in quite a hurry to have these created (we made 5
 more for official openstack programs yesterday, plus a handful for
 stackforge).

 I don't feel strongly, and am prepared to go along with the consensus
 on using the longer names.

 Doug


 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Neutron] Link to patch/review for allowing instances to receive vlan tagged traffic

2014-05-22 Thread ZZelle
The associated neutron specs: https://review.openstack.org/94612



On Thu, May 22, 2014 at 8:55 PM, Yi Sun beyo...@gmail.com wrote:

 Is this the one?
 https://review.openstack.org/#/c/92541/


 On Thu, May 22, 2014 at 11:04 AM, Steve Gordon sgor...@redhat.com wrote:

 Hi Alan/Balazs,

 In one of the NFV BoF sessions in Atlanta one of you (I assume one of you
 anyway!) noted in the etherpad [1] (line 89) that Ericsson had submitted a
 patch to Neutron which would allow instances to use tagged vlan traffic. Do
 you happen to have a link handy to the review for this?

 Thanks in advance,

 Steve

 [1] https://etherpad.openstack.org/p/juno-nfv-bof

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Android-x86
 http://www.android-x86.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] NFV BoF at design summit

2014-05-22 Thread Alan Kavanagh
+1

I believe the main point is not to confuse what is needed on the hypervisor 
and networking side, but to focus on what we need OpenStack to support to be a 
robust and reliable system. For example, most systems aim for 5 9's due to 
various requirements, but what they really want to aim for is predictable and 
deterministic behaviour.

I think, to be fair to Kevin's email, the focus should be that when we issue a 
port connection we guarantee it is connected and we have a way to validate 
that. 

Carrier grade is not just about system availability/uptime, but also about 
ensuring that when a call for an object/resource is made it is going to be 
handled and not dropped 99.999% of the time.

Alan

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: May-22-14 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit



- Original Message -
 From: Kevin Benton blak...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, May 22, 2014 2:48:37 AM
 Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
 
 3. OpenStack itself should ( its own Compute Node/L3/Routing,  
 Controller
 )  have (5 nine capable) reliability.
 
 Can you elaborate on this a little more? Reliability is pretty 
 deployment specific (e.g. database chosen, number of cluster members, 
 etc). I'm sure nobody would disagree that OpenStack should be 
 reliable, but without specific issues to address it doesn't really give us a 
 clear target.
 
 Thanks,
 Kevin Benton

I think this comment applies equally to the other items listed. There seemed to 
be agreement at the BoF that one of our key tasks/challenges is to boil down 
such high level NFV requirements to create actionable feature 
requests/proposals in the context of OpenStack.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Link to patch/review for allowing instances to receive vlan tagged traffic

2014-05-22 Thread Alan Kavanagh
Hi Steven

More than happy to help out here: the BP in question is below; patches were 
submitted 3 weeks ago by Erik Moe: 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms 
https://review.openstack.org/#/c/92541/ 
This is a generic feature a lot of Telco traffic nodes have, and even non-Telco 
nodes too ;-) so it’s a great feature to have supported.
Alan

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: May-22-14 2:05 PM
To: Alan Kavanagh; Balázs Gibizer
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [NFV][Neutron] Link to patch/review for allowing instances to receive 
vlan tagged traffic

Hi Alan/Balazs,

In one of the NFV BoF sessions in Atlanta one of you (I assume one of you 
anyway!) noted in the etherpad [1] (line 89) that Ericsson had submitted a 
patch to Neutron which would allow instances to use tagged vlan traffic. Do you 
happen to have a link handy to the review for this?

Thanks in advance,

Steve

[1] https://etherpad.openstack.org/p/juno-nfv-bof
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-22 Thread Anne Gentle
On Tue, May 20, 2014 at 6:42 PM, Steve Baker sba...@redhat.com wrote:

  On 21/05/14 02:31, Doug Hellmann wrote:

 On Fri, May 16, 2014 at 2:10 PM, Gauvain Pocentek gauvain.pocen...@objectif-libre.com wrote:

  Le 2014-05-16 17:13, Anne Gentle a écrit :


  On Thu, May 15, 2014 at 10:34 AM, Gauvain Pocentek gauvain.pocen...@objectif-libre.com wrote:


  Hello,

 This mail probably mainly concerns the doc team, but I guess that the
 heat team wants to know what's going on.

 We've shortly discussed the state of heat documentation with Anne Gentle
 and Andreas Jaeger yesterday, and I'd like to share what we think would be
 nice to do.

 Currently we only have a small section in the user guide that describes
 how to start a stack, but nothing documenting how to write templates. The
 heat developer doc provides a good reference, but I think it's not easy to
 use to get started.

 So the idea is to add an OpenStack Orchestration chapter in the user
 guide that would document how to use a cloud with heat, and how to write
 templates.

 I've drafted a spec to keep track of this at [0].

  I'd like to experiment a bit with converting the End User Guide to an
 easier markup to enable more contributors to it. Perhaps bringing in
 Orchestration is a good point to do this, plus it may help address the
 auto-generation Steve mentions.

 The loss would be the single sourcing of the End User Guide and Admin
 User Guide as well as loss of PDF output and loss of translation. If
 these losses are worthwhile for easier maintenance and to encourage
 contributions from more cloud consumers, then I'd like to try an
 experiment with it.

  Using RST would probably make it easier to import/include the developers'
 documentation. But I'm not sure we can afford to lose the features you
 mention. Translations for the user guides are very important I think.

  Sphinx does appear to have translation support: http://sphinx-doc.org/intl.html?highlight=translation

 I've never used the feature myself, so I don't know how good the workflow is.

 Sphinx will generate PDFs, though the LaTeX output is not as nice
 looking as what we get now. There's also a direct-to-pdf builder that
 uses rst2pdf that appears to support templates, so that might be an
 easier path to producing something attractive: http://ralsina.me/static/manual.pdf
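
For what it's worth, both of those pieces are driven from the Sphinx conf.py;
a minimal sketch (untested here, and the document names, titles and language
are placeholders rather than the real Heat docs configuration) would be
roughly:

    # conf.py fragment - illustrative sketch only.

    # i18n: where the translated .po/.mo catalogs live and which language to build.
    locale_dirs = ['locale/']
    gettext_compact = False
    language = 'ja'

    # rst2pdf: register the direct-to-PDF builder and describe one PDF target
    # (master doc, output basename, title, author).
    extensions = ['rst2pdf.pdfbuilder']
    pdf_documents = [
        ('index', 'heat-templates', 'Heat Template Guide', 'OpenStack Docs Team'),
    ]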

  I attempted to make latexpdf on the heat sphinx docs and fell down a
 latex tool-chain hole.

 I tried adding rst2pdf support to the sphinx docs build:
 https://review.openstack.org/#/c/94491/

 and the results are a reasonable start:

 https://drive.google.com/file/d/0B_b9ckHiNkjVS3ZNZmNXMkJkWE0/edit?usp=sharing


   How would we review changes made in external repositories? The user guides
 are continuously published, this means that a change done in the heat/docs/
 dir would quite quickly land on the webserver without a doc team review. I
 completely trust the developers, but I'm not sure that this is the way to
 go.



  The experiment would be to have a new repo set up,
 openstack/user-guide and use the docs-core team as reviewers on it.
 Convert the End User Guide from DocBook to RST and build with Sphinx.
 Use the oslosphinx tempate for output. But what I don't know is if
 it's possible to build the automated output outside of the
 openstack/heat repo, does anyone have interest in doing a proof of
 concept on this?

  I'm not sure that this is possible, but I'm no RST expert.

  I'm not sure this quite answers the question, but the RST directives
 for auto-generating docs from code usually depend on being able to
 import the code. That means heat and its dependencies would need to be
 installed on the system where the build is performed. We accomplish
 this in the dev doc builds by using tox, which automatically handles
 the installation as part of setting up the virtualenv where the build
 command runs.

  I'm sure we could do a git checkout of heat during the docs build, and
 even integrate that with gating. I thought this was already happening for
 some docbook builds, but I can't find any examples now.

I'd also like input on the loss of features I'm describing above. Is
 this worth experimenting with?

  Starting this new book sounds like a lot of work. Right now I'm not
 convinced it's worth it.



  How about this for a suggestion. The Heat template authoring guide is
 potentially so large and different that it deserves to be in its own
 document. It is aimed at users, but there is so much potential content
 hidden in the template format that it wouldn't necessarily belong in the
 current user guide.


Sorry, none of the doc team members I've talked to want to take on another
guide.

Also, the loss of nice PDF output and having to test and maintain a second
translation tool chain aren't enthusiastically embraced, from what I'm
hearing.



 We could start a new doc repo which is a sphinx-based template authoring
 guide. 

Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Mandeep Dhami
Maru's concerns are that:
1. It is large
2. It is complex

And Armando's related concerns are:
3. Could dev/review cycles be better spent on refactoring
4. If refactored neutron was available, would a simpler option become more
viable

Let me address them in that order.

1. Re: It is large
Group policy has an ambitious goal - provide devops teams with policy-based
controls that are usable at scale and with automation (say a higher
governance layer like Congress). The fact that meeting a large challenge
requires more code is natural. We understand that challenge, and that is
why we did a prototype (as a PoC that was demonstrated at the summit). And
based on that learning we are incrementally creating patches for building
the group based policy. Just because a task is large, we as Neutron cannot
shy away from building it. That will only drive people who need it outside
Neutron (as we are seeing with the frustration that the LBaaS team had
because they have a requirement that is large as well).

2. Re: It is complex
Complexity depends on the context. Our goal was to make the end-user's life
simpler (and more automated). To achieve some of that simplicity, we
required a little more complexity in the implementation. We decided to make
that trade-off - a little higher complexity in the implementation to allow for
simpler usage. But we were careful and did not want to impose that
complexity on every use case - hence a lot of it is optional (and
exercised only if the use case needs it). Unfortunately the model has to
cover all of it, so as not to add complexity later through upgrade and backward
compatibility issues. We chose to do the architecture upfront, and then
implement it incrementally.

The team came up with the current model based on that review and an
evaluation of all the proposals in the document that you refer to. It is easy to
make general comments, but unless you participate in the process and sign
up to writing the code, those comments are not going to help with solving
the original problem. And this _is_ open-source. If you disagree, please
write code and the community can decide for itself as to what model is
actually simple to use for them. Curtailing efforts from other developers
just because their engineering trade-offs are different from what you
believe your use-case needs is not why we like open source. We enjoy the
mode where different developers try different things, we experiment, and
the software evolves to what the user demands. Or maybe, multiple models
live in harmony. Let the users decide that.

3. Re: Could dev/review cycles be better spent on refactoring
I think that most people agree that policy control is an important feature
that fundamentally improves neutron (by solving the automation and scale
issues). In a large project, multiple sub-projects can, and for a healthy
project should, work in parallel. I understand that the neutron core team
is stretched. But we still need to be able to balance the needs of today
(paying off the technical debt/existing-issues by doing refactoring) with
needs of tomorrow (new features like GP and LBaaS). GP effort was started
in Havana, and now we are trying to get this in Juno. I think that is
reasonable and a long enough cycle for a high priority project to be able
to get some core attention. Again I refer to LBaaS experience, as they
struggled with very similar issues.

4. Re: If refactored neutron was available, would a simpler option become
more viable
We would love to be able to answer that question. We have been trying to
understand the refactoring work to understand this (see another ML thread)
and we are open to understanding your position on that. We will call the
ad-hoc meeting that you suggested and we would like to understand the
refactoring work that might be reused for simpler policy implementation. At
the same time, we would like to build on what is available today, and when
the required refactored neutron becomes available (say Juno or K-release),
we are more than happy to adapt to it at that time. Serializing all
development around an effort that is still in the inception phase is not a good
solution. We are looking forward to participating in the core refactoring
work, and based on the final spec that comes out of it, we would love to be
able to eventually make the policy implementation simpler.

Regards,
Mandeep




On Thu, May 22, 2014 at 11:44 AM, Armando M. arma...@gmail.com wrote:

 I would second Maru's concerns, and I would also like to add the following:

 We need to acknowledge the fact that there are certain architectural
 aspects of Neutron as a project that need to be addressed; at the
 summit we talked about the core refactoring, a task oriented API, etc.
 To me these items have been neglected far too much over the past and
 would need a higher priority and a lot more attention during the Juno
 cycle. Being stretched as we are I wonder if dev/review cycles
 wouldn't be better spent devoting more time to these efforts rather
 

Re: [openstack-dev] Manual VM migration

2014-05-22 Thread Naveed Ahmad
Hi,
Thanks for the response and for sharing the script.

Can I use this script with a devstack-based cloud platform?

Actually, live migration is available inter-cluster but not inter-cloud; that is why I
was asking about suspending/pausing the VM before migration.




Regards



On Thu, May 22, 2014 at 8:45 PM, Diego Parrilla Santamaría 
diego.parrilla.santama...@gmail.com wrote:

 Hi Naveed,

 I don't think it's a good idea to suspend/pause. If you want to keep the
 state of the VM then have a look at the live migration capabilities of KVM.
 Our script is very simple and works for any VM without attached block
 storage.

 Here goes the little script. Keep in mind it's something very simple.

 https://gist.github.com/diegoparrilla/6288e1521bffe741f71a

 Regards
 Diego



  --
 Diego Parrilla
 CEO
 www.stackops.com |
 diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




 On Thu, May 22, 2014 at 8:36 AM, Naveed Ahmad 
 12msccsnah...@seecs.edu.pkwrote:


 Hi Diego ,

 Thanks for sharing the steps for VM migration from the customer end to your
 cloud. Well, I am not going to propose a new idea for VM migration. I am
 using VM migration for a POC of my research idea.

 I have a few questions for you!


 1. Can we use the suspend/pause feature instead of a snapshot for saving VM
 state?
 2. How are you managing VM metadata (such as instance details from the
 nova/cinder databases)?



 Is it possible for you to share the script? I need this VM migration feature
 in OpenStack for a POC only.
 Thanks again for your reply.


 Regards
 Naveed




 On Wed, May 21, 2014 at 10:47 PM, Diego Parrilla Santamaría 
 diego.parrilla.santama...@gmail.com wrote:

 Hi Naveed,

 we have customers running VMs in their own Private Cloud that are
 migrating to our new Public Cloud offering. To be honest I would love to
 have a better way to do it, but this is how we do. We have developed a tiny
 script that basically performs the following actions:

 1) Take a snapshot of the VM from the source Private Cloud
 2) Halts the source VM (optional, but good for state consistency)
  3) Download the snapshot from source Private Cloud
 4) Upload the snapshot to target Public Cloud
 5) Start a new VM using the uploaded image in the target public cloud
 6) Allocate a floating IP and attach it to the VM
 7) Change DNS to point to the new floating IP
 8) Perform some cleanup processes (delete source VM, deallocate its
 floating IP, delete snapshot from source...)

 A bit rudimentary, but it works if your VM does not have attached
 volumes right away.
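
For reference, a rough Python sketch of steps 1-5 of that flow (client setup,
error handling and proper status polling are omitted; the names, image format
and temporary path are placeholders) could look something like this with
novaclient and glanceclient:

    # Rough sketch of the snapshot/download/upload/boot flow described above.
    # The nova/glance client objects for both clouds are assumed to be already
    # authenticated (glance v2 clients); real code needs error handling plus
    # volume and network care.
    import time


    def migrate(server_id, src_nova, src_glance, dst_nova, dst_glance, flavor,
                image_path='/tmp/migration-snapshot.qcow2'):
        # 1) snapshot the source VM, 2) stop it for consistency
        image_id = src_nova.servers.create_image(server_id, 'migration-snapshot')
        src_nova.servers.stop(server_id)

        # crude wait for the snapshot to become active
        while src_glance.images.get(image_id).status != 'active':
            time.sleep(10)

        # 3) download the snapshot from the source cloud
        with open(image_path, 'wb') as f:
            for chunk in src_glance.images.data(image_id):
                f.write(chunk)

        # 4) upload it to the target cloud
        new_image = dst_glance.images.create(name='migrated-snapshot',
                                             disk_format='qcow2',
                                             container_format='bare')
        dst_glance.images.upload(new_image.id, open(image_path, 'rb'))

        # 5) boot a new VM from the uploaded image in the target cloud
        return dst_nova.servers.create('migrated-vm', image=new_image.id,
                                       flavor=flavor)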

 Still, I would love to hear some sexy and direct way to do it.

 Regards
 Diego

  --
 Diego Parrilla
 CEO
 www.stackops.com |
  diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




 On Wed, May 21, 2014 at 7:32 PM, Naveed Ahmad 
 12msccsnah...@seecs.edu.pk wrote:


 Hi community,

 I need some help from you people. Openstack provides Hot (Live) and
 Cold (Offline) migration between clusters/compute. However i am interested
 to migrate Virtual Machine from one OpenStack Cloud to another.  is it
 possible ?  It is inter cloud VM migration not inter cluster or compute.

 I need help and suggestion regarding VM migration. I tried to manually
 migrate VM from one OpenStack Cloud to another but no success yet.

 Please guide me!

 Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] [Heat][Documentation] Heat template documentation

2014-05-22 Thread Steve Baker
On 23/05/14 08:56, Anne Gentle wrote:



 On Tue, May 20, 2014 at 6:42 PM, Steve Baker sba...@redhat.com wrote:

 On 21/05/14 02:31, Doug Hellmann wrote:
 On Fri, May 16, 2014 at 2:10 PM, Gauvain Pocentek gauvain.pocen...@objectif-libre.com wrote:
 Le 2014-05-16 17:13, Anne Gentle a écrit :

  On Thu, May 15, 2014 at 10:34 AM, Gauvain Pocentek gauvain.pocen...@objectif-libre.com wrote:

 Hello,

 This mail probably mainly concerns the doc team, but I guess that the
 heat team wants to know what's going on.

 We've shortly discussed the state of heat documentation with Anne 
 Gentle
 and Andreas Jaeger yesterday, and I'd like to share what we think 
 would be
 nice to do.

 Currently we only have a small section in the user guide that 
 describes
 how to start a stack, but nothing documenting how to write templates. 
 The
 heat developer doc provides a good reference, but I think it's not 
 easy to
 use to get started.

 So the idea is to add an OpenStack Orchestration chapter in the user
 guide that would document how to use a cloud with heat, and how to 
 write
 templates.

 I've drafted a spec to keep track of this at [0].
 I'd like to experiment a bit with converting the End User Guide to an
 easier markup to enable more contributors to it. Perhaps bringing in
 Orchestration is a good point to do this, plus it may help address the
 auto-generation Steve mentions.

 The loss would be the single sourcing of the End User Guide and Admin
 User Guide as well as loss of PDF output and loss of translation. If
 these losses are worthwhile for easier maintenance and to encourage
 contributions from more cloud consumers, then I'd like to try an
 experiment with it.
 Using RST would probably make it easier to import/include the 
 developers'
 documentation. But I'm not sure we can afford to lose the features you
 mention. Translations for the user guides are very important I think.
 Sphinx does appear to have translation support:
 http://sphinx-doc.org/intl.html?highlight=translation

 I've never used the feature myself, so I don't know how good the 
 workflow is.

 Sphinx will generate PDFs, though the LaTeX output is not as nice
 looking as what we get now. There's also a direct-to-pdf builder that
 uses rst2pdf that appears to support templates, so that might be an
 easier path to producing something attractive:
 http://ralsina.me/static/manual.pdf
 I attempted to make latexpdf on the heat sphinx docs and fell down
 a latex tool-chain hole.

 I tried adding rst2pdf support to the sphinx docs build:
 https://review.openstack.org/#/c/94491/

 and the results are a reasonable start:
 
 https://drive.google.com/file/d/0B_b9ckHiNkjVS3ZNZmNXMkJkWE0/edit?usp=sharing



 How would we review changes made in external repositories? The user 
 guides
 are continuously published, this means that a change done in the 
 heat/docs/
 dir would quite quickly land on the webserver without a doc team 
 review. I
 completely trust the developers, but I'm not sure that this is the way 
 to
 go.


 The experiment would be to have a new repo set up,
 openstack/user-guide and use the docs-core team as reviewers on it.
 Convert the End User Guide from DocBook to RST and build with Sphinx.
 Use the oslosphinx tempate for output. But what I don't know is if
 it's possible to build the automated output outside of the
 openstack/heat repo, does anyone have interest in doing a proof of
 concept on this?
 I'm not sure that this is possible, but I'm no RST expert.
 I'm not sure this quite answers the question, but the RST directives
 for auto-generating docs from code usually depend on being able to
 import the code. That means heat and its dependencies would need to be
 installed on the system where the build is performed. We accomplish
 this in the dev doc builds by using tox, which automatically handles
 the installation as part of setting up the virtualenv where the build
 command runs.
 I'm sure we could do a git checkout of heat during the docs build,
 and even integrate that with gating. I thought this was already
 happening for some docbook builds, but I can't find any examples now.

 I'd also like input on the loss of features I'm describing above. Is
 this worth experimenting with?
 Starting this new book sounds like a lot of work. Right now I'm not
 convinced it's worth it.


 How about this for a suggestion. The Heat template authoring guide
 is potentially so large and different that it deserves to be in
 its own document. It is aimed at users, but there is so much
 potential content hidden in the template 

Re: [openstack-dev] [TripleO] Use of environment variables in tripleo-incubator

2014-05-22 Thread Ben Nemec
For everyone's awareness, Alexis proposed a spec related to this:
https://review.openstack.org/#/c/94910

-Ben

On 05/20/2014 05:05 PM, James Polley wrote:
 I spoke to JP offline and confirmed that the link to 85418 should have been
 a link to https://review.openstack.org/#/c/88252
 
 I think that
 https://etherpad.openstack.org/p/tripleo-incubator-rationalise-ui and
 https://etherpad.openstack.org/p/tripleo-devtest.sh-refactoring-blueprint are
 the closest things to documentation we've got about this. Now that we
 have the specs repo, perhaps we should be creating a spec and moving the
 discussion there.
 
 
 
 
 On Tue, May 20, 2014 at 12:06 PM, Sullivan, Jon Paul 
 jonpaul.sulli...@hp.com wrote:
 
  Hi,



 There are a number of reviews[1][2] where new environment variables are
 being disliked, leading to -1 or -2 code reviews because new environment
 variables are added.  It is looking like this is becoming a policy.



 If this is a policy, then could that be stated, and an alternate mechanism
 made available so that any reviews adding environment variables can use the
 replacement mechanism, please?



 Otherwise, some guidelines for developers where environment variables are
 acceptable or not would equally be useful.



 [1] https://review.openstack.org/85009

 [2] https://review.openstack.org/85418



 Thanks,

 jonpaul.sulli...@hp.com | Cloud Services - @hpcloud
 +353 (91) 75 4169



 Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park,
 Galway.

 Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John
 Rogerson's Quay, Dublin 2.

 Registered Number: 361933



 The contents of this message and any attachments to it are confidential
 and may be legally privileged. If you have received this message in error
 you should delete it from your system immediately and advise the sender.



 To any recipient of this message within HP, unless otherwise stated, you
 should consider this message and attachments as HP CONFIDENTIAL.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-22 Thread Carl Baldwin
If an IP is reserved for a tenant, should the tenant need to
explicitly ask for that specific IP to be allocated when creating a
floating ip or port?  And it would pull from the regular pool if a
specific IP is not requested.  Or, does the allocator just pull from
the tenant's reserved pool whenever it needs an IP on a subnet?  If
the latter, then I think Salvatore's concern is still a valid one.

I think if a tenant wants an IP address reserved then he probably has
a specific purpose for that IP address in mind.  That leads me to
think that he should be required to pass the specific address when
creating the associated object in order to make use of it.  We can't
do that yet with all types of allocations but there are reviews in
progress [1][2].

Carl

[1] https://review.openstack.org/#/c/70286/
[2] https://review.openstack.org/#/c/83664/
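
As a concrete illustration of passing the specific address (the IDs,
credentials and addresses below are made up; the commented-out floating IP
form is the kind of thing the in-progress reviews above are adding), with
python-neutronclient it looks roughly like:

    # Sketch only: network/subnet IDs, credentials and addresses are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Ask for a specific fixed IP when creating the port.
    port = neutron.create_port({'port': {
        'network_id': 'NET_UUID',
        'fixed_ips': [{'subnet_id': 'SUBNET_UUID',
                       'ip_address': '203.0.113.10'}],
    }})

    # Once the reviews above land, a specific floating IP address could be
    # requested in the same explicit way:
    # fip = neutron.create_floatingip({'floatingip': {
    #     'floating_network_id': 'EXT_NET_UUID',
    #     'floating_ip_address': '203.0.113.20'}})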

On Thu, May 22, 2014 at 12:04 PM, Sławek Kapłoński sla...@kaplonski.pl wrote:
 Hello


 Dnia Wed, 21 May 2014 23:51:48 +0100
 Salvatore Orlando sorla...@nicira.com napisał:

 In principle there is nothing that should prevent us from
 implementing an IP reservation mechanism.

 As with anything, the first thing to check is literature or related
 work! If any other IaaS system is implementing such a mechanism, is
 it exposed through the API somehow?
 Also this feature is likely to be provided by IPAM systems. If yes,
 what constructs do they use?
 I do not have the answers to these questions, but I'll try to document
 myself; if you have them - please post them here.

 This new feature would probably be baked into neutron's IPAM logic.
 When allocating an IP, first check from within the IP reservation
 pool, and then if it's not found check from standard allocation pools
 (this has non negligible impact on availability ranges management, but
 these are implementation details).
 Aspects to consider, requirement-wise, are:
 1) Should reservations also be classified by qualification of the
 port? For instance, is it important to specify that an IP should be
 used for the gateway port rather than for a floating IP port?

 IMHO it is not required when IP is reserved. User should have
 possibility to reserve such IP for his tenant and later use it as he
 want (floating ip, instance or whatever)

 2) Are reservations something that an admin could specify on a
 tenant-basis (hence an admin API extension), or an implicit mechanism
 that can be tuned using configuration variables (for instance create
 an IP reservation for a gateway port for a given tenant when a router
 gateway is set).

 I apologise if these questions are dumb. I'm just trying to frame this
 discussion into something which could then possibly lead to
 submitting a specification.

 Salvatore


 On 21 May 2014 21:37, Collins, Sean sean_colli...@cable.comcast.com
 wrote:

  (Edited the subject since a lot of people filter based on the
  subject line)
 
  I would also be interested in reserved IPs - since we do not deploy
  the layer 3 agent and use the provider networking extension and a
  hardware router.
 
  On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński wrote:
   Hello,
  
   Ok, I found that now there is probably no such feature to reserve
   fixed ip for tenant. So I was thinking about add such feature to
   neutron. I mean that it should have new table with reserved ips
   in neutron database and neutron will check this table every time
   when new port will be created (or updated) and IP should be
   associated with this port. If user has got reserved IP it should
   be then used for new port, if IP is reserver by other tenant - it
   shouldn't be used. What You are thinking about such possibility?
   Is it possible to add it in some future release of neutron?
  
   --
   Best regards
   Sławek Kapłoński
   sla...@kaplonski.pl
  
  
   Dnia Mon, 19 May 2014 20:07:43 +0200
   Sławek Kapłoński sla...@kaplonski.pl napisał:
  
Hello,
   
I'm using openstack with neutron and ML2 plugin. Is there any
way to reserve fixed IP from shared external network for one
tenant? I know that there is possibility to create port with IP
and later connect VM to this port. This solution is almost ok
for me but problem is when user delete this instance - then
port is also deleted and it is not reserved still for the same
user and tenant. So maybe there is any solution to reserve it
permanent? I know also about floating IPs but I don't use L3
agents so this is probably not for me :)
   
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 --
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl

 ___
 

Re: [openstack-dev] [Neutron] Default routes to SNAT gateway in DVR

2014-05-22 Thread Carl Baldwin
Hi,

I found this message in my backlog from when I was at the summit.
Sorry for the delay in responding.

The default SNAT or dynamic SNAT use case is one of the last
details being worked on by the DVR subteam.  That may be why you do not
see any code around this in the patches that have been submitted.
Outbound traffic that will use this SNAT address will first enter the
IR on the compute host.  In the IR, it will not match against any of
the static SNAT addresses for floating IPs.  At that point the packet
will be redirected to another port belonging to the central component
of the DVR.  This port has an IP address  different from the default
gateway address (e.g. 192.168.1.2 instead of 192.168.1.1).  At this
point, the packet will go back out to br-int and be tunneled over to
the network node just like any other intra-network traffic.

Once the packet hits the central component of the DVR on the network
node it will be processed very much like default SNAT traffic is
processed in the current Neutron implementation.  Another
interconnect subnet should not be needed here and would be overkill.

I hope this helps.  Let me know if you have any questions.

Carl

On Fri, May 16, 2014 at 1:57 AM, Wuhongning wuhongn...@huawei.com wrote:
 Hi DVRers,

 I didn't see any detailed documents or source code on how to deal with routing
 packets from the DVR node to the SNAT gw node. If the routing table sees an outside IP,
 it should be matched with a default route, so for the next hop, which
 interface will it select?

 Maybe another standalone interconnect subnet per DVR is needed, which
 connect each DVR node and optionally, the SNAT gw node. For packets from dvr
 node-snat node, the interconnect subnet act as the default route for this
 host, and the next hop will be the snat node.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] CI needs YOU

2014-05-22 Thread Clint Byrum
Ahoy there, TripleO interested parties. In the last few months, we've
gotten a relatively robust, though not nearly complete, CI system for
TripleO. It is a bit unorthodox, as we have a strong desire to ensure
PXE booting works, and that requires us running in our own cloud.

We have this working, in two regions of TripleO deployed clouds which
we manage ourselves. We've had quite a few issues, mostly hardware
related, and some related to the fact that TripleO doesn't have HA yet,
so our CI clouds go down whenever our controllers go down.

Anyway, Derek Higgins, Dan Prince, Robert Collins, and myself, have been
doing most of the heavy lifting on this. As a result, CI is not up and
working all that often. It needs more operational support.

So, I would encourage anyone interested in TripleO development to start
working with us to maintain these two cloud regions (hopefully more
regions will come up soon) so that we can keep CI flowing and expand
coverage to include even more of TripleO.

Thank you!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Tempest Release Naming

2014-05-22 Thread Matthew Treinish

Hi Everyone,

So I was preparing to push the first tag as part of the move to a world with a
branchless tempest and was trying to figure out the naming convention we should
be using. The only complexity here is that we are targeting to do 4 releases a
year that coincide with the OpenStack releases and the stable branch EOL. So the
open question is how do we incorporate that information in the tag naming
scheme? Or should we bother trying to incorporate it in the name at all?

I was just going to go with the standard SemVer so this first tag would be 1.0
and then we would increment it per the normal conventions. I would also just put
the supported releases in the tag message. The concern here is that the tag
names themselves don't really indicate which releases are supported.

Clark had an interesting suggestion that we instead tag each new release
with the version and add a separate eol tag for when we drop support for a
release. So for example, the first release will be 2014.1 because it adds
support for the newly released Icehouse and when Havana goes EOL this summer
we add a 2013.2.eol tag. My only concern with doing this is that it kind of
ignores the other changes and improvements we make along the way. I also think
it would be a bit unclear with this scheme which is the most recent tag at any
given point in time. For example, is 2013.2.eol older or newer than 2014.1? I
think this would only get more confusing if we expand the length of the stable
maint. window.

I'd like to stick with one scheme and not decide to change it later on. I
figured I should bring this out to a wider audience to see if there were other
suggestions or opinions before I pushed out the tag, especially because the tags
are primarily for the consumers of Tempest.


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Introducing task oriented workflows

2014-05-22 Thread Salvatore Orlando
As most of you probably know already, this is one of the topics discussed
during the Juno summit [1].
I would like to kick off the discussion in order to move towards a concrete
design.

Preamble: Considering the meat that's already on the plate for Juno, I'm
not advocating that whatever comes out of this discussion should be put on
the Juno roadmap. However, preparation (or yak shaving) activities that
should be identified as pre-requisite might happen during the Juno time
frame assuming that they won't interfere with other critical or high
priority activities.
This is also a very long post; the TL;DR summary is that I would like to
explore task-oriented communication with the backend and how it should be
reflected in the API - gauging how the community feels about this, and
collecting feedback regarding design, constructs, and related
tools/techniques/technologies.

At the summit a broad range of items were discussed during the session, and
most of them have been reported in the etherpad [1].

First, I think it would be good to clarify whether we're advocating a
task-based API, a workflow-oriented operation processing, or both.

-- About a task-based API

In a task-based API, most PUT/POST API operations would return tasks rather
than neutron resources, and users of the API will interact directly with
tasks.
I put an example in [2] to avoid cluttering this post with too much text.
As the API operation simply launches a task - the database state won't be
updated until the task is completed.

Needless to say, this would be a radical change to Neutron's API; it should
be carefully evaluated and not considered for the v2 API.
Even if it is easily recognisable that this approach has a few benefits, I
don't think this will improve the usability of the API at all. Indeed this will
limit the ability to operate on a resource while a task is in execution on
it, and will also require Neutron API users to change the paradigm they use
to interact with the API; not to mention the fact that it would look
weird if Neutron were the only API endpoint in OpenStack operating in this
way.
For the Neutron API, I think that its operations should still be
manipulating the database state, and possibly return immediately after that
(*) - a task, or better a workflow, will then be started, executed
asynchronously, and update the resource status on completion.

-- On workflow-oriented operations

The benefits of it when it comes to easily controlling operations and
ensuring consistency in case of failures are obvious. For what it's worth, I
have been experimenting with introducing this kind of capability in the NSX
plugin in the past few months. I've been using celery as a task queue, and
writing the task management code from scratch - only to realize that the
same features I was implementing are already supported by taskflow.
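
To make that concrete, a minimal taskflow sketch of a create-port style flow
(the task names, inputs and the fake return value are invented purely for
illustration) would be along these lines:

    # Minimal taskflow sketch; tasks and inputs are illustrative only.
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class CreateDbRecord(task.Task):
        def execute(self, port_data):
            # persist the port and hand its id to the next task
            return 'fake-port-id'

        def revert(self, port_data, *args, **kwargs):
            # undo the DB change if a later task fails
            pass


    class ConfigureBackend(task.Task):
        def execute(self, port_id):
            # push the configuration for port_id down to the backend
            pass


    flow = linear_flow.Flow('create-port').add(
        CreateDbRecord(provides='port_id'),
        ConfigureBackend(),
    )

    engines.run(flow, store={'port_data': {'name': 'port1'}})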

I think that all parts of Neutron API can greatly benefit from introducing
a flow-based approach.
Some examples:
- pre/post commit operations in the ML2 plugin can be orchestrated a lot
better as a workflow, articulating operations on the various drivers in a
graph
- operations spanning multiple plugins (eg: add router interface) could be
simplified using clearly defined tasks for the L2 and L3 parts
- it would be finally possible to properly manage resources' operational
status, as well as knowing whether the actual configuration of the backend
matches the database configuration
- synchronous plugins might be converted into asynchronous thus improving
their API throughput

Now, the caveats:
- during the sessions it was correctly pointed out that special care is
required with multiple producers (ie: api servers) as workflows should be
always executed in the correct order
- it is probably advisable to serialize workflows operating on the same
resource; this might lead to unexpected situations (potentially to
deadlocks) with workflows operating on multiple resources
- if the API is asynchronous, and multiple workflows might be queued or in
execution at a given time, rolling back the DB operation on failures is
probably not advisable (it would not be advisable anyway in any
asynchronous framework). If the API instead stays synchronous the revert
action for a failed task might also restore the db state for a resource;
but I think that keeping the API synchronous would miss the point of this
whole work - feel free to show your disagreement here!
- some neutron workflows are actually initiated by agents; this is the
case, for instance, of the workflow for doing initial L2 and security group
configuration for a port.
- it's going to be a lot of work, and we need to devise a strategy to
either roll these changes into the existing plugins or just decide that future
v3 plugins will use it.

From the implementation side, I've done a bit of research, and task queues
like celery only implement half of what is needed; conversely I have not
been able to find a workflow manager, at least in the python world, as
complete and 

Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-22 Thread Itsuro ODA
Hi,

I am interested in the meeting but cannot participate (it's 3:00am).
I will check the meeting log later.

BTW, where will you begin the discussion? I thought you would continue
the work from Icehouse. Or will you start again from the beginning of the API
definition?

Thanks.
Itsuro Oda (oda-g)

On Wed, 21 May 2014 14:56:09 +
Collins, Sean sean_colli...@cable.comcast.com wrote:

 Hi,
 
 The session that we had on the Quality of Service API extension was well
 attended - I would like to keep the momentum going by proposing a weekly
 IRC meeting.
 
 How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Itsuro ODA o...@valinux.co.jp


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research Thesis Applicability to the OpenStack Project

2014-05-22 Thread Mike Grima
Hello,

Just to make sure I understand:

1.) I’m assuming that you can designate which policies apply to specific VMs 
within a group (Is this correct?).  With regards to DENY permissions, they are 
handled specially.  In such a case, all other VMs are provided with ALLOW 
permissions for that rule, while the VM targeted by the DENY policy is 
provided with a DENY.
— Would you necessarily want to automatically provide all other VMs with an 
ALLOW privilege?  Not all VMs in that group may need access to that port...

2.) Group Policy does support a Hierarchy. (Is this correct?)

3.) On a separate note: Is the Group Policy feature exposed via a RESTful API 
akin to FWaaS?
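
To make the exception case in (1) concrete, here is roughly what I have in
mind - note this is a purely hypothetical data model, not the actual group
policy API:

# hypothetical sketch only -- not the real group-based policy resources
group_policy = {
    'group': 'web-tier',
    'rules': [
        # group-wide rule: deny inbound tcp/80 for every VM in the group
        {'action': 'deny', 'protocol': 'tcp', 'port': 80},
    ],
    'exceptions': [
        # a single VM in the group that still needs tcp/80 open
        {'vm': 'vm-42', 'action': 'allow', 'protocol': 'tcp', 'port': 80},
    ],
}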

Thank you,

Mike Grima, RHCE


On May 22, 2014, at 2:08 AM, A, Keshava keshav...@hp.com wrote:

 Hi,
 
 1. When the group policy is applied ( across to all the VMs ) say deny for 
 specific TCP port = 80, however because some special reason one of that VM 
 needs to 'ALLOW TCP port' how to handle this ?  
 When deny is applied to any one of VM in that group , this framework  
 takes care of 
   individually breaking that and apply ALLOW for other VM  
 automatically ?
   and apply Deny for that specific VM ? 
 
 2. Can there be 'Hierarchy of Group Policy  ? 
 
 
 
 Thanks  regards,
 Keshava.A
 
 -Original Message-
 From: Michael Grima [mailto:mike.r.gr...@gmail.com] 
 Sent: Wednesday, May 21, 2014 5:00 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][FWaaS]Firewall Web Services Research 
 Thesis Applicability to the OpenStack Project
 
 Sumit,
 
 Unfortunately, I missed the IRC meeting on FWaaS (got the timezones screwed 
 up...).
 
 However, in the meantime, please review this section of my thesis on the 
 OpenStack project:
 https://docs.google.com/document/d/1DGhgtTY4FxYxOqhKvMSV20cIw5WWR-gXbaBoMMMA-f0/edit?usp=sharing
 
 Please let me know if it is missing anything, or contains any wrong 
 information.  Also, if you have some time, please review the questions I have 
 asked in the previous messages.
 
 Thank you,
 
 --
 Mike Grima, RHCE
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-22 Thread Mandeep Dhami
OK


On Thu, May 22, 2014 at 7:36 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 On Wed, May 21, 2014 at 10:47:16PM EDT, Mandeep Dhami wrote:
  The update from Sean seem to suggest to me that we needed blueprints only
  if the public API changes, and not for design changes that are internal
 to
  neutron.

 There was no statement in my e-mail that made that
 suggestion. My e-mail was only an attempt to try and help provide
 context for what was discussed at the summit.

 I dislike having people put words in my mouth, and you also seem to
 continue to insinuate that things are not done in the open, with
 everyone having a chance to participate.

 I believe this is a serious charge, and I do not appreciate being
 publicly accused of this.

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Maru Newby
On May 22, 2014, at 1:59 PM, Mandeep Dhami dh...@noironetworks.com wrote:

 
 Maru's concerns are that:
 1. It is large
 2. It is complex

As per the discussion in the irc meeting today, I hope it is clear now that 
eventual size and complexity are not the real issue.  Rather, I am concerned 
about how we get there.  

I keep talking about 'iterating in the open', and want to make it clear what I 
mean by this.  It involves proposing a reviewable patch to openstack gerrit, 
working with reviewers to get the patch merged, and then incorporating their 
feedback into the overall design to drive the implementation of future patches.

'Iterating in the open' does not imply working outside of gerrit to create a 
monolithic codebase that needs to be manually decomposed into reviewable chunks 
at the end.  I understand that this may be an effective way to create a POC, 
but it is not an effective way to produce code that can be merged into Neutron. 
 Core reviewers have a mandate to ensure the quality of every patch, and their 
feedback is likely to have an impact on subsequent implementation.


 
 And Armando's related concerns are:
 3. Could dev/review cycles be better spent on refactoring
 4. If refactored neutron was available, would a simpler option become more 
 viable
 
 Let me address them in that order.
 
 1. Re: It is large
 Group policy has an ambitious goal  - provide devop teams with policy based 
 controls that are usable at scale and with automation (say a higher 
 governance layer like Congress). The fact that meeting a large challenge 
 requires more code is natural. We understand that challenge, and that is why 
 we did a prototype (as PoC that was demonstrated on the summit). And based on 
 that learning we are incrementally creating patches for building the group 
 based policy. Just because a task is large, we as neutron can not shy away 
 from building it. That will only drive people who need it out side neutron 
 (as we are seeing with the frustration that the LBaaS team had because they 
 have a requirement that is large as well).

Again, the amount of code is not the problem.  How code is introduced into the 
tree, and how the design is socialized (both with developers and users), _is_ 
of critical importance.  Neutron is not alone in requiring an 'iterate in the 
open' approach - it is a characteristic common to many open source projects.


 
 2. Re: It is complex
 Complexity depends on the context. Our goal was to make the end-user's life 
 simpler (and more automated). To achieve some of that simplicity, we required 
 a little more complexity in the implementation. We decide to make that 
 arbitrage - a little higher complexity in implementation to allow for simpler 
 usage. But we were careful and did not want to impose that complexity on 
 every use case - hence a lot of that is optional (and exercised only if the 
 use case needs it). Unfortunately the model, has to model all of it so as it 
 not add complexity later in upgrade and backward compatibility issues. We 
 choose to do architecture upfront, and then implement it incrementally.

Doing upfront architecture is fine, so long as the architecture also evolves in 
response to feedback from the review process in gerrit.  Similarly, incremental 
implementation is not enough - it needs to happen in gerrit.  And to be clear, 
the tool is not the critical factor.  When I say gerrit, I mean that each patch 
needs to receive core reviewer attention and that subsequent patches 
incorporate their feedback.


 
 The team came up with the model currently in model based on that review and 
 evaluation all the proposals in the document that you refer. It is easy to 
 make general comments, but unless you participate in the process and sign up 
 to writing the code, those comments are not going to help with solving the 
 original problem. And this _is_ open-source. If you disagree, please write 
 code and the community can decide for itself as to what model is actually 
 simple to use for them. Curtailing efforts from other developers just because 
 their engineering trade-offs are different from what you believe your 
 use-case needs is not why we like open source. We enjoy the mode where 
 different developers try different things, we experiment, and the software 
 evolves to what the user demands. Or maybe, multiple models live in harmony. 
 Let the users decide that.

You are correct in saying that it is not my job to decide what you or other 
developers do.  It is, however, my role as a Neutron core reviewer to ensure 
that we make good use of the resources available to us to meet the project's 
commitments.  If I believe that the approach chosen to implement a given 
Neutron feature has the potential to starve other priorities of resources, then 
I have a responsibility to voice that concern and push back.  You're free to 
implement whatever you want outside of the tree, but participating in the 
Neutron community means accepting the norms 

Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Adam Young

On 05/22/2014 07:03 PM, Maru Newby wrote:

On May 22, 2014, at 1:59 PM, Mandeep Dhami dh...@noironetworks.com wrote:


Maru's concerns are that:
1. It is large
2. It is complex

As per the discussion in the irc meeting today, I hope it is clear now that 
eventual size and complexity are not real issue.  Rather, I am concerned at how 
we get there.

I keep talking about 'iterating in the open', and want to make it clear what I 
mean by this.  It involves proposing a reviewable patch to openstack gerrit, 
working with reviewers to get the patch merged, and then incorporating their 
feedback into the overall design to drive the implementation of future patches.

'Iterating in the open' does not imply working outside of gerrit to create a 
monolithic codebase that needs to be manually decomposed into reviewable chunks 
at the end.  I understand that this may be an effective way to create a POC, 
but it is not an effective way to produce code that can be merged into Neutron. 
 Core reviewers have a mandate to ensure the quality of every patch, and their 
feedback is likely to have an impact on subsequent implementation.
We talked about Stacked policy for RBAC at the Summit.  I wonder if we 
are looking at two sides of the same problem.


For Keystone, the pieces identified so far are:  associating policy with 
a server or endpoint, the auth_token middleware fetching it, and then 
the ability of a domain or project to define its own sub-policy.


At the API level, I was wondering about delegation, which is really what 
policy is about:  instead of delegating a role, can I delegate the 
ability to perform just a single operation?  But even that is not 
fine-grained enough.


Really, we need policy on objects.  For example, role assignments: I 
want to be able to delegate to a group admin the ability to assign users 
to his group.  That means he can assign the role Member to users in 
his group, but no other role.
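
To make that role-assignment example concrete, the kind of rule I mean could 
be sketched in oslo policy terms roughly as below - the target attribute 
paths and the 'group_admin' role are assumptions on my part, not existing 
Keystone policy:

# hypothetical policy fragment (expressed as a Python dict for readability);
# the attribute names and the group_admin role are illustrative only
delegated_grant_policy = {
    # an admin can always grant; a group admin may grant only the
    # Member role, and only within the scope he administers
    'identity:create_grant':
        'rule:admin_required or '
        "(role:group_admin and 'Member':%(target.role.name)s)",
}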


The mechanisms to implement this are going to be common, I think, across 
all of the services.  I think we are getting into SELinux land.






And Armando's related concerns are:
3. Could dev/review cycles be better spent on refactoring
4. If refactored neutron was available, would a simpler option become more 
viable

Let me address them in that order.

1. Re: It is large
Group policy has an ambitious goal  - provide devop teams with policy based controls that 
are usable at scale and with automation (say a higher governance layer like Congress). 
The fact that meeting a large challenge requires more code is natural. We understand that 
challenge, and that is why we did a prototype (as PoC that was demonstrated on the 
summit). And based on that learning we are incrementally creating patches for building 
the group based policy. Just because a task is large, we as neutron can not shy away from 
building it. That will only drive people who need it out side neutron (as we are seeing 
with the frustration that the LBaaS team had because they have a requirement that is 
large as well).

Again, the amount of code is not the problem.  How code is introduced into the 
tree, and how the design is socialized (both with developers and users), _is_ 
of critical importance.  Neutron is not alone in requiring an 'iterate in the 
open' approach - it is a characteristic common to many open source projects.



2. Re: It is complex
Complexity depends on the context. Our goal was to make the end-user's life 
simpler (and more automated). To achieve some of that simplicity, we required a 
little more complexity in the implementation. We decide to make that arbitrage 
- a little higher complexity in implementation to allow for simpler usage. But 
we were careful and did not want to impose that complexity on every use case - 
hence a lot of that is optional (and exercised only if the use case needs it). 
Unfortunately the model, has to model all of it so as it not add complexity 
later in upgrade and backward compatibility issues. We choose to do 
architecture upfront, and then implement it incrementally.

Doing upfront architecture is fine, so long as the architecture also evolves in 
response to feedback from the review process in gerrit.  Similarly, incremental 
implementation is not enough - it needs to happen in gerrit.  And to be clear, 
the tool is not the critical factor.  When I say gerrit, I mean that each patch 
needs to receive core reviewer attention and that subsequent patches 
incorporate their feedback.



The team came up with the model currently in model based on that review and 
evaluation all the proposals in the document that you refer. It is easy to make 
general comments, but unless you participate in the process and sign up to 
writing the code, those comments are not going to help with solving the 
original problem. And this _is_ open-source. If you disagree, please write code 
and the community can decide for itself as to what model is actually simple to 
use for them. Curtailing efforts from 

[openstack-dev] Gate and Skipped Tests

2014-05-22 Thread Johannes Erdfelt
I noticed recently that some tests are being skipped in the Nova gate.

Some will always be skipped, but others are conditional.

In particular the ZooKeeper driver tests are being skipped because an
underlying python module is missing.

It seems to me that we should want no tests to be conditionally skipped
in the gate. This could lead to fragile behavior where an underlying
environmental problem could cause tests to be erroneously skipped and
broken code could get merged.
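
For reference, the skips in question follow the usual conditional pattern,
roughly like the sketch below (the guarded module name is an assumption on
my part):

# illustrative only: skip the driver tests when the binding is missing
import unittest

try:
    import evzookeeper  # noqa -- ZooKeeper bindings used by the driver
    _HAS_ZK = True
except ImportError:
    _HAS_ZK = False


class ZKServiceGroupTestCase(unittest.TestCase):

    @unittest.skipIf(not _HAS_ZK, 'ZooKeeper bindings are not installed')
    def test_join(self):
        # real assertions against the driver would go here
        self.assertTrue(_HAS_ZK)

If the module goes missing on the gate nodes, every test like this silently
turns into a skip, which is exactly the fragility described above.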

Any opinions on this?

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Mandeep Dhami
 each patch needs to receive core reviewer attention and that subsequent
patches incorporate their feedback.

At least two core neutron members were involved in creating the PoC, and at
least two more cores were involved in reviews at various times. In addition
to them, senior developers from at least seven networking companies were
involved in developing this code. I concede that this code was on github
for a few weeks, as that made the prototyping faster and allowed us to
fail faster, but it was open and reviewed with the team above (and with
the cores in that team). Based on our learning from that prototype
activity, and feedback of those cores, we are upstreaming the improved
production code to gerrit. All that involvement from the neutron core
reviewers was critical in keeping the larger PoC team above focused on
neutron norms and expectations from design and code.



On Thu, May 22, 2014 at 4:03 PM, Maru Newby ma...@redhat.com wrote:

 On May 22, 2014, at 1:59 PM, Mandeep Dhami dh...@noironetworks.com
 wrote:

 
  Maru's concerns are that:
  1. It is large
  2. It is complex

 As per the discussion in the irc meeting today, I hope it is clear now
 that eventual size and complexity are not real issue.  Rather, I am
 concerned at how we get there.

 I keep talking about 'iterating in the open', and want to make it clear
 what I mean by this.  It involves proposing a reviewable patch to openstack
 gerrit, working with reviewers to get the patch merged, and then
 incorporating their feedback into the overall design to drive the
 implementation of future patches.

 'Iterating in the open' does not imply working outside of gerrit to create
 a monolithic codebase that needs to be manually decomposed into reviewable
 chunks at the end.  I understand that this may be an effective way to
 create a POC, but it is not an effective way to produce code that can be
 merged into Neutron.  Core reviewers have a mandate to ensure the quality
 of every patch, and their feedback is likely to have an impact on
 subsequent implementation.


 
  And Armando's related concerns are:
  3. Could dev/review cycles be better spent on refactoring
  4. If refactored neutron was available, would a simpler option become
 more viable
 
  Let me address them in that order.
 
  1. Re: It is large
  Group policy has an ambitious goal  - provide devop teams with policy
 based controls that are usable at scale and with automation (say a higher
 governance layer like Congress). The fact that meeting a large challenge
 requires more code is natural. We understand that challenge, and that is
 why we did a prototype (as PoC that was demonstrated on the summit). And
 based on that learning we are incrementally creating patches for building
 the group based policy. Just because a task is large, we as neutron can not
 shy away from building it. That will only drive people who need it out side
 neutron (as we are seeing with the frustration that the LBaaS team had
 because they have a requirement that is large as well).

 Again, the amount of code is not the problem.  How code is introduced into
 the tree, and how the design is socialized (both with developers and
 users), _is_ of critical importance.  Neutron is not alone in requiring an
 'iterate in the open' approach - it is a characteristic common to many open
 source projects.


 
  2. Re: It is complex
  Complexity depends on the context. Our goal was to make the end-user's
 life simpler (and more automated). To achieve some of that simplicity, we
 required a little more complexity in the implementation. We decide to make
 that arbitrage - a little higher complexity in implementation to allow for
 simpler usage. But we were careful and did not want to impose that
 complexity on every use case - hence a lot of that is optional (and
 exercised only if the use case needs it). Unfortunately the model, has to
 model all of it so as it not add complexity later in upgrade and backward
 compatibility issues. We choose to do architecture upfront, and then
 implement it incrementally.

 Doing upfront architecture is fine, so long as the architecture also
 evolves in response to feedback from the review process in gerrit.
  Similarly, incremental implementation is not enough - it needs to happen
 in gerrit.  And to be clear, the tool is not the critical factor.  When I
 say gerrit, I mean that each patch needs to receive core reviewer attention
 and that subsequent patches incorporate their feedback.


 
  The team came up with the model currently in model based on that review
 and evaluation all the proposals in the document that you refer. It is easy
 to make general comments, but unless you participate in the process and
 sign up to writing the code, those comments are not going to help with
 solving the original problem. And this _is_ open-source. If you disagree,
 please write code and the community can decide for itself as to what model
 is actually simple to use for them. Curtailing efforts 

Re: [openstack-dev] [neutron][group-based-policy] Should we revisit the priority of group-based policy?

2014-05-22 Thread Maru Newby

On May 22, 2014, at 4:35 PM, Mandeep Dhami dh...@noironetworks.com wrote:

  each patch needs to receive core reviewer attention and that subsequent 
  patches incorporate their feedback.
 
 At least two core neutron members were involved in creating the PoC, and at 
 least two more cores were involved in reviews at various times. In addition 
 to them, senior developers from at least seven networking companies were 
 involved in developing this code. I concede that this code was on github for 
 a few weeks, as that made the prototyping faster and allowed us to fail 
 faster, but it was open and reviewed with the team above (and with the cores 
 in that team). Based on our learning from that prototype activity, and 
 feedback of those cores, we are upstreaming the improved production code to 
 gerrit. All that involvement from the neutron core reviewers was critical in 
 keeping the larger PoC team above focused on neutron norms and expectations 
 from design and code.

The feedback from reviewers needs to be provided on openstack infrastructure 
rather than outside it so that it is both visible to all reviewers (not just 
those directly involved) and so that an enduring history of the process is 
retained.  These requirements were not met by working in github on the POC, 
regardless of your protestations of how 'open' that work was and of who was 
involved.  This isn't to suggest that out-of-tree prototyping isn't useful - of 
course it is.  But I think it important to recognize that out-of-tree 
development is unlikely to be an effective way to develop code that can be 
easily merged to Neutron, and that the project can ill-afford the additional 
review cost it is likely to impose.

As such, and as was agreed to in the irc meeting this morning, the way forward 
is to recognize that the POC is best considered a prototype useful in informing 
efforts to iterate in the open.


m.


 
 
 
 On Thu, May 22, 2014 at 4:03 PM, Maru Newby ma...@redhat.com wrote:
 On May 22, 2014, at 1:59 PM, Mandeep Dhami dh...@noironetworks.com wrote:
 
 
  Maru's concerns are that:
  1. It is large
  2. It is complex
 
 As per the discussion in the irc meeting today, I hope it is clear now that 
 eventual size and complexity are not real issue.  Rather, I am concerned at 
 how we get there.
 
 I keep talking about 'iterating in the open', and want to make it clear what 
 I mean by this.  It involves proposing a reviewable patch to openstack 
 gerrit, working with reviewers to get the patch merged, and then 
 incorporating their feedback into the overall design to drive the 
 implementation of future patches.
 
 'Iterating in the open' does not imply working outside of gerrit to create a 
 monolithic codebase that needs to be manually decomposed into reviewable 
 chunks at the end.  I understand that this may be an effective way to create 
 a POC, but it is not an effective way to produce code that can be merged into 
 Neutron.  Core reviewers have a mandate to ensure the quality of every patch, 
 and their feedback is likely to have an impact on subsequent implementation.
 
 
 
  And Armando's related concerns are:
  3. Could dev/review cycles be better spent on refactoring
  4. If refactored neutron was available, would a simpler option become more 
  viable
 
  Let me address them in that order.
 
  1. Re: It is large
  Group policy has an ambitious goal  - provide devop teams with policy based 
  controls that are usable at scale and with automation (say a higher 
  governance layer like Congress). The fact that meeting a large challenge 
  requires more code is natural. We understand that challenge, and that is 
  why we did a prototype (as PoC that was demonstrated on the summit). And 
  based on that learning we are incrementally creating patches for building 
  the group based policy. Just because a task is large, we as neutron can not 
  shy away from building it. That will only drive people who need it out side 
  neutron (as we are seeing with the frustration that the LBaaS team had 
  because they have a requirement that is large as well).
 
 Again, the amount of code is not the problem.  How code is introduced into 
 the tree, and how the design is socialized (both with developers and users), 
 _is_ of critical importance.  Neutron is not alone in requiring an 'iterate 
 in the open' approach - it is a characteristic common to many open source 
 projects.
 
 
 
  2. Re: It is complex
  Complexity depends on the context. Our goal was to make the end-user's life 
  simpler (and more automated). To achieve some of that simplicity, we 
  required a little more complexity in the implementation. We decide to make 
  that arbitrage - a little higher complexity in implementation to allow for 
  simpler usage. But we were careful and did not want to impose that 
  complexity on every use case - hence a lot of that is optional (and 
  exercised only if the use case needs it). Unfortunately the model, has to 
  model all of it 

Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-22 Thread Jeremy Stanley
On 2014-05-21 23:52:14 -0700 (-0700), Clint Byrum wrote:
 You didn't also ask them to subscribe to the users and/or operators
 mailing lists? I would think at least one of those two lists would be
 quite important for users to stay in the loop about the effort.
[...]

It's also worth noting that we adjust the database to accommodate
project subscriptions, so any Gerrit user who subscribed to the old
project name will still be subscribed under the new name once the
rename is completed, and will thus still receive Gerrit notification
E-mails about changes for it, still see it in their important
changes dashboard, and so on.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-05-22 Thread Nachi Ueno
Hi Salvatore

Thank you for your posting this.

IMO, this topic shouldn't be limited to Neutron only.
Users want a consistent API across OpenStack projects, right?

In Nova, a server has a task_state, so Neutron should do it the same way.
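
Just to illustrate the point (the field names below are hypothetical, not a
proposed API), a task-aware resource could be returned like this:

# hypothetical GET /v2.0/ports/<id> response body, sketched as a dict
port = {
    'id': 'a1b2c3',
    'status': 'BUILD',             # operational status of the resource
    'task_state': 'provisioning',  # what the backend is currently doing
}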



2014-05-22 15:34 GMT-07:00 Salvatore Orlando sorla...@nicira.com:
 As most of you probably know already, this is one of the topics discussed
 during the Juno summit [1].
 I would like to kick off the discussion in order to move towards a concrete
 design.

 Preamble: Considering the meat that's already on the plate for Juno, I'm not
 advocating that whatever comes out of this discussion should be put on the
 Juno roadmap. However, preparation (or yak shaving) activities that should
 be identified as pre-requisite might happen during the Juno time frame
 assuming that they won't interfere with other critical or high priority
 activities.
 This is also a very long post; the TL;DR summary is that I would like to
 explore task-oriented communication with the backend and how it should be
 reflected in the API - gauging how the community feels about this, and
 collecting feedback regarding design, constructs, and related
 tools/techniques/technologies.

 At the summit a broad range of items were discussed during the session, and
 most of them have been reported in the etherpad [1].

 First, I think it would be good to clarify whether we're advocating a
 task-based API, a workflow-oriented operation processing, or both.

 -- About a task-based API

 In a task-based API, most PUT/POST API operations would return tasks rather
 than neutron resources, and users of the API will interact directly with
 tasks.
 I put an example in [2] to avoid cluttering this post with too much text.
 As the API operation simply launches a task - the database state won't be
 updated until the task is completed.

 Needless to say, this would be a radical change to Neutron's API; it should
 be carefully evaluated and not considered for the v2 API.
 Even if it is easily recognisable that this approach has a few benefits, I
 don't think this will improve usability of the API at all. Indeed this will
 limit the ability of operating on a resource will a task is in execution on
 it, and will also require neutron API users to change the paradigm the use
 to interact with the API; for not mentioning the fact that it would look
 weird if neutron is the only API endpoint in Openstack operating in this
 way.
 For the Neutron API, I think that its operations should still be
 manipulating the database state, and possibly return immediately after that
 (*) - a task, or rather a workflow, will then be started, executed
 asynchronously, and will update the resource status on completion.

 -- On workflow-oriented operations

 The benefits of it when it comes to easily controlling operations and
 ensuring consistency in case of failures are obvious. For what is worth, I
 have been experimenting introducing this kind of capability in the NSX
 plugin in the past few months. I've been using celery as a task queue, and
 writing the task management code from scratch - only to realize that the
 same features I was implementing are already supported by taskflow.

 I think that all parts of Neutron API can greatly benefit from introducing a
 flow-based approach.
 Some examples:
 - pre/post commit operations in the ML2 plugin can be orchestrated a lot
 better as a workflow, articulating operations on the various drivers in a
 graph
 - operation spanning multiple plugins (eg: add router interface) could be
 simplified using clearly defined tasks for the L2 and L3 parts
 - it would be finally possible to properly manage resources' operational
 status, as well as knowing whether the actual configuration of the backend
 matches the database configuration
 - synchronous plugins might be converted into asynchronous thus improving
 their API throughput

 Now, the caveats:
 - during the sessions it was correctly pointed out that special care is
 required with multiple producers (ie: api servers) as workflows should
 always be executed in the correct order
 - it is probably advisable to serialize workflows operating on the same
 resource; this might lead to unexpected situations (potentially to
 deadlocks) with workflows operating on multiple resources
 - if the API is asynchronous, and multiple workflows might be queued or in
 execution at a given time, rolling back the DB operation on failures is
 probably not advisable (it would not be advisable anyway in any asynchronous
 framework). If the API instead stays synchronous, the revert action for a
 failed task might also restore the db state for a resource; but I think that
 keeping the API synchronous misses the point of this whole work a bit - feel
 free to show your disagreement here!
 - some neutron workflows are actually initiated by agents; this is the case,
 for instance, of the workflow for doing initial L2 and security group
 configuration for a port.
 - it's going to be a lot of 

Re: [openstack-dev] [infra] Nominating Nikita Konovalov for storyboard-core

2014-05-22 Thread Jeremy Stanley
On 2014-05-21 14:31:24 -0700 (-0700), James E. Blair wrote:
[...]
 Nikita, thank you very much for your work!

Absolutely! I am wholeheartedly in favor of this proposal.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >