Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-21 Thread Isaku Yamahata
Hi, I will also attend the NFV IRC meeting.

thanks,
Isaku Yamahata

On Tue, May 20, 2014 at 01:23:22PM -0700,
Stephen Wong s3w...@midokura.com wrote:

 Hi,
 
 I am part of the ServiceVM team and I will attend the NFV IRC meetings.
 
 Thanks,
 - Stephen
 
 
 On Tue, May 20, 2014 at 8:59 AM, Chris Wright chr...@sous-sol.org wrote:
 
  * balaj...@freescale.com (balaj...@freescale.com) wrote:
-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com]
Sent: Tuesday, May 20, 2014 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit
   
On Mon, May 19, 2014 at 1:44 PM, Ian Wells ijw.ubu...@cack.org.uk
wrote:
 I think the Service VM discussion resolved itself in a way that
 reduces the problem to a form of NFV - there are standing issues using
 VMs for services, orchestration is probably not a responsibility that
 lies in Neutron, and as such the importance is in identifying the
 problems with the plumbing features of Neutron that cause
 implementation difficulties.  The end result will be that VMs
 implementing tenant services and implementing NFV should be much the
 same, with the addition of offering a multitenant interface to
 Openstack users in the tenant service VM case.

 Geoff Arnold is dealing with the collating of information from people
 that have made the attempt to implement service VMs.  The problem
 areas should fall out of his effort.  I also suspect that the key
 points of NFV that cause problems (for instance, dealing with VLANs
 and trunking) will actually appear quite high up the service VM list
 as well.
 --
 There is a weekly meeting for the Service VM project [1], I hope some
 representatives from the NFV sub-project can make it to this meeting
 and participate there.
 [P Balaji-B37839] I agree with Kyle, so that we will have enough synch
 between Service VM and NFV goals.
 
  Makes good sense.  Will make sure to get someone there.
 
  thanks,
  -chris
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



-- 
Isaku Yamahata isaku.yamah...@gmail.com



[openstack-dev] [Cinder][OSSG] Security note OSSN-0014 needs Cinder sign off

2014-05-21 Thread Clark, Robert Graham
Hi Cinder folks,

Malini from the security group has drafted an OpenStack Security Note for an 
issue regarding cinder driver permissions that was previously reported to the 
VMT.

Our process for publishing OSSNs requires sign-off from two OSSN cores and one
core from the affected project(s) - we'd like someone from Cinder core to give
it a quick sanity check before we publish it more widely.

https://review.openstack.org/#/c/92434/

Cheers
-Rob



Re: [openstack-dev] [Neutron][NFV] NFV BoF at design summit

2014-05-21 Thread Dmitry
Hi,
I would be happy to get an explanation of the difference between Adv
Service Management
(https://docs.google.com/file/d/0Bz-bErEEHJxLTGY4NUVvTzRDaEk/edit)
from the Service VM project and NFVO orchestration
(http://www.ietf.org/proceedings/88/slides/slides-88-opsawg-6.pdf)
from NFV MANO.
The most interesting part is service provider management as part of the
service catalog.

Thanks,
Dmitry




Re: [openstack-dev] [gantt] summary of scheduler sessions at the Juno design summit

2014-05-21 Thread Sylvain Bauza
Hi Don,
My additions inline.


On 21/05/2014 00:38, Dugger, Donald D wrote:

 Here is a brief rundown on the majority of the scheduler sessions from
 the summit, links to the etherpads and some of my incoherent notes
 from the sessions.  Feel free to reply to this email to correct any
 mistakes I made and to add any other thoughts you might have:

  

 1)  Future of gantt interfaces & APIs (Sylvain Bauza)

 https://etherpad.openstack.org/p/juno-nova-gantt-apis

 As of the last summit everyone agrees that, yes, a separate scheduler
 project is desirable, but we need to clean up the interfaces between
 Nova and the scheduler first.
 cleaned up first (proxying for booting instances, a library to isolate
 the scheduler and isolate access to DB objects).  We have BPs created
 for all of these areas so we need to implement those BPs first, all of
 that work happening in the current Nova tree.  After those 3 steps are
 done we need to check for any other external dependencies (hopefully
 there aren't any) and then we can split the code out into the gantt
 repository.

  


As the devil is in the details, most of this work is quite
straightforward but has many details to address. One example is how we
deal with aggregates when filtering on them: how should the scheduler
access aggregate info that is purely Nova's?
As we can't address all the concerns directly in the blueprints, I think
the best way to find out all the problems is to do a weekly update on
the progress of these important Gantt blueprints, and to raise during
those meetings all the implementation problems we might face.

 2)  Common no DB scheduler (Boris)

 https://etherpad.openstack.org/p/juno-nova-no-db-scheduler

 Pretty much agreement that the new no-db scheduler needs to be
 switchable/configurable so that it can be selected at run time; we
 don't want to do a flash cut that requires everyone to suddenly switch
 to the new architecture.  It should be possible to design this such
 that the node state info, currently kept in the DB, can be handled by
 a back end that can use either the current DB methods or the new
 no-db methods.

  

 Much discussion over the fact that the current patches use memcached
 to hold a journal of all update messages about node state changes,
 which means that the scheduler will just be re-inventing journaling
 problems/solutions that are well handled by current DBs.
 Another idea would be to use the memcached area to hold complete state
 info for each node, using a counter mechanism to know when the data is
 out of date.  We need to evaluate the pros/cons of the different
 memcached designs.

  


I just want to mention here that the idea was widely accepted; only the
details of how it will be implemented were heavily debated. I can
propose to discuss these implementation details further at an upcoming
meeting, as Yuriy Taraday, who is the identified contributor for these
BPs, was unable to attend the Summit.
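For illustration, the "complete state plus counter" idea discussed above can be sketched in a few lines. This is only a sketch of the concept, not code from the actual patches: a plain dict stands in for the memcached client, and all names and the TTL value are invented assumptions.

```python
import time

# A plain dict stands in for the memcached client; a real implementation
# would issue memcache set/get calls with the same keys. All names and
# the TTL value are illustrative, not taken from the actual patches.
STATE_TTL = 60  # seconds after which node state is considered out of date

store = {}

def publish_node_state(node, free_ram_mb, free_disk_gb):
    """Compute node publishes its complete state plus a version counter."""
    version = store.get(node, {'version': 0})['version'] + 1
    store[node] = {
        'version': version,
        'updated_at': time.time(),
        'free_ram_mb': free_ram_mb,
        'free_disk_gb': free_disk_gb,
    }

def read_node_state(node, last_seen_version=0):
    """Scheduler reads state: the counter says whether anything changed
    since it last looked, the timestamp whether the data is stale."""
    entry = store.get(node)
    if entry is None:
        return None, 'missing'
    if time.time() - entry['updated_at'] > STATE_TTL:
        return entry, 'stale'
    if entry['version'] == last_seen_version:
        return entry, 'unchanged'
    return entry, 'fresh'

publish_node_state('compute1', free_ram_mb=2048, free_disk_gb=100)
entry, status = read_node_state('compute1')
print(status)  # fresh
```

The counter lets the scheduler skip unchanged entries cheaply, which is the main argument for complete-state records over a journal of update messages.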


 3)  Simultaneous scheduling for server groups (Mike Spreitzer)

 https://etherpad.openstack.org/p/juno-nova-scheduling-server-groups

 The basic idea is a desire to schedule a whole server group in one
 call (more consistent with server groups) rather than multiple
 scheduler calls, one node at a time.  Talking through this, the real
 issue seems to be a resource reservation problem: the user wants to
 reserve a set of nodes and then, if the reservation succeeds, do the
 actual scheduling task.  As such, this sounds like something that
 maybe should be integrated with Climate and/or Heat.  Need to do some
 more research to see if this problem can be addressed and/or helped by
 either of those projects.

  


The formal conclusion was to discuss with the Climate team (of which I'm
part as a Climate core reviewer) in order to see how Climate can provide
the reservation by itself. The Nova team agreed that Climate would
probably need to ask Nova to do the simultaneous scheduling, but in that
case it's Climate's responsibility to raise the topic and ask Nova to do
anything related to it.
Heat was discussed as a possible top-level system for requesting the
reservation from Climate, IIRC.



 4)  Scheduler hints for VM lifecycle (Jay Lau)

 https://etherpad.openstack.org/p/juno-nova-scheduler-hints-vm-lifecycle

 Basic problem is that server hints are only available at instance
 instantiation time; the info is then lost and not available for
 migration decisions, so we need to store the hints somehow.  We could
 create a new table to hold the hints, add a new (arbitrary blob) field
 to the instances table, or store the info in the system metadata,
 which means we might need to resizulate the thingotron (that was the
 actual comment; interpretation is left to the reader :-)).  No clear
 consensus on what to do, more research needed.
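Of the options above, the system-metadata one can be illustrated with a tiny sketch. The key prefix and helper names below are invented for illustration and are not Nova's actual schema; system metadata is treated as a flat string-to-string mapping attached to the instance.

```python
import json

# The "scheduler_hint:" key prefix and these helpers are illustrative
# assumptions, not Nova's actual schema; system metadata is treated as
# a flat string-to-string mapping attached to the instance.
HINT_PREFIX = 'scheduler_hint:'

def save_hints(system_metadata, hints):
    """Serialize each original scheduler hint into system metadata."""
    for name, value in hints.items():
        system_metadata[HINT_PREFIX + name] = json.dumps(value)

def load_hints(system_metadata):
    """Recover the hints later, e.g. when making a migration decision."""
    hints = {}
    for key, value in system_metadata.items():
        if key.startswith(HINT_PREFIX):
            hints[key[len(HINT_PREFIX):]] = json.loads(value)
    return hints

meta = {'image_os_type': 'linux'}  # unrelated metadata is left untouched
save_hints(meta, {'group': 'affinity-1', 'different_host': ['uuid-a']})
print(load_hints(meta))
```

JSON-encoding each hint keeps the store schemaless, which is the appeal of the system-metadata option over a new table or a blob column.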

  

  


No clear consensus on that session IIRC.


-Sylvain

  

 --

 Don Dugger

 Censeo Toto nos in Kansa 

Re: [openstack-dev] [Cinder] Support LVM on a shared LU

2014-05-21 Thread Avishay Traeger
So the way I see it, the value here is a generic driver that can work with
any storage.  The downsides:
1. The admin has to manually provision a very big volume and attach it to
the Nova and Cinder hosts.  Every time a host is rebooted, or introduced,
the admin must do manual work. This is one of the things OpenStack should
be trying to avoid. This can't be automated without a driver, which is what
you're trying to avoid.
2. You lose volume performance by adding another layer to the stack.
3. You lose performance with snapshots - appliances will almost certainly
have more efficient snapshots than LVM over network (consider that for
every COW operation, you are reading synchronously over the network).

(Basically, you turned your fully-capable storage appliance into a dumb
JBOD)

In short, I think the cons outweigh the pros.  Are there people deploying
OpenStack who would deploy their storage like this?

Thanks,
Avishay

On Tue, May 20, 2014 at 6:31 PM, Mitsuhiro Tanino
mitsuhiro.tan...@hds.comwrote:

  Hello All,



 I'm proposing an LVM driver feature to support LVM on a shared LU.

 The proposed LVM volume driver provides these benefits:
   - Reduce hardware-based storage workload by offloading the workload to
 software-based volume operations.
   - Provide quicker volume creation and snapshot creation without storage
 workloads.
   - Enable Cinder to use any kind of shared storage volume without a
 specific Cinder storage driver.
   - Better I/O performance using direct volume access via Fibre Channel.
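To illustrate why such a driver offloads work from the array: volume operations become LVM commands against a volume group the administrator pre-created on the shared LU. The sketch below only builds the command lines (it does not run them); the VG name and flags are assumptions for illustration, not code from the proposal.

```python
# The VG name and flags are illustrative assumptions, not code from the
# proposal; the functions only build the LVM command lines, they do not
# execute anything.
SHARED_VG = 'cinder-shared-vg'  # VG the admin pre-created on the shared LU

def create_volume_cmd(volume_name, size_gb):
    # Each Cinder volume becomes one logical volume on the shared VG, so
    # creation is an LVM metadata operation, not a storage-array call.
    return ['lvcreate', '-n', volume_name, '-L', '%dg' % size_gb, SHARED_VG]

def create_snapshot_cmd(snap_name, volume_name, size_gb):
    # COW snapshot of an existing logical volume in the same VG.
    return ['lvcreate', '--snapshot', '-n', snap_name,
            '-L', '%dg' % size_gb, '%s/%s' % (SHARED_VG, volume_name)]

print(' '.join(create_volume_cmd('volume-1234', 10)))
```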



 The attached PDF explains the following:

   1. Detail of Proposed LVM volume driver

   1-1. Big Picture

   1-2. Administrator preparation

   1-3. Work flow of volume creation and attachment

   2. Target of Proposed LVM volume driver

   3. Comparison of Proposed LVM volume driver



 Could you review the attachment?

 Any comments, questions, additional ideas would be appreciated.





 Also there are blueprints, wiki and patches related to the slide.

 https://blueprints.launchpad.net/cinder/+spec/lvm-driver-for-shared-storage

 https://blueprints.launchpad.net/nova/+spec/lvm-driver-for-shared-storage


 https://wiki.openstack.org/wiki/Cinder/NewLVMbasedDriverForSharedStorageInCinder

 https://review.openstack.org/#/c/92479/

 https://review.openstack.org/#/c/92443/



 Regards,

 Mitsuhiro Tanino mitsuhiro.tan...@hds.com

  *HITACHI DATA SYSTEMS*

  c/o Red Hat, 314 Littleton Road, Westford, MA 01886



Re: [openstack-dev] [gantt] summary of scheduler sessions at the Juno design summit

2014-05-21 Thread Jay Lau

[openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-21 Thread Susanne Balle
We have had some discussions around how to move forward with the LBaaS
service in OpenStack.  I am trying to summarize the key points below.


Feel free to chime in if I misrepresented anything or if you disagree :-)



For simplicity in the rest of the email, and so I can differentiate
between all the LBaaS's (e.g. Neutron LBaaS, etc.), I will call the new
OpenStack LBaaS project (that we discussed at the summit) Octavia. Note
that this doesn't mean we have agreed on this name.



*Goal:*

We all want to offer a best in class “operator scale” Octavia LBaaS
service to our customers.

The following requirements need to be considered (these are already
listed in some of the etherpads we have worked on):

· Provides scalability, failover, config management, and
provisioning.

· The architecture needs to be pluggable so we can offer support for
HAProxy, Nginx, LVS, etc.



*Some disagreements exist around the scope of the new project: *



Some of the participating companies including HP are interested in a best
in class standalone Octavia load-balancer service that is part of OpenStack
and with the “label” OpenStack. http://www.openstack.org/software/

· The Octavia LBaaS project needs to work well with OpenStack or
this effort is not worth doing. HP believes that this should be the primary
focus.

· In this case the end goal would be to have a clean interface
between Neutron and the standalone Octavia LBaaS project, and have the
Octavia LBaaS project become an incubated and eventually graduated
OpenStack project.

o   We would start out as a driver to Neutron.

o   This project would deprecate Neutron LBaaS long term since part of the
Neutron LBaaS would move over to the Octavia LBaaS project.

o   This project would continue to support both vendor drivers and new
software drivers e.g. ha-proxy, etc.

· Dougwig created the following diagram, which gives a good overview
of my thinking: http://imgur.com/cJ63ts3 where Octavia is represented by
“New Driver Interface” and down. The whole picture shows how we could
move from the old to the new driver interface.



Other participating companies want to create a best in class standalone
load-balancer service outside of OpenStack and only create a driver to
integrate with Openstack Neutron LBaaS.

· The Octavia LBaaS driver would be part of Neutron LBaaS tree
whereas the Octavia LBaaS implementation would reside outside OpenStack
e.g. Stackforge or github, etc.



The main issue/confusion is that some of us (the HP LBaaS team) do not
think of projects in StackForge as OpenStack branded. HP developed Libra
LBaaS, which is open sourced in StackForge, and when we tried to get it
into OpenStack we met resistance.



One person suggested designing the Octavia LBaaS service totally
independent of Neutron or any other service that calls it. This might
make sense for a general LBaaS service, but given that we are in the
context of OpenStack, to me this just makes testing and development a
nightmare to maintain, and it is not necessary. Again, IMHO we are
developing and delivering Octavia in the context of OpenStack, so the
Octavia LBaaS service should just be very good at dealing with the
OpenStack environment. The architecture can still be designed to be
pluggable, but my experience tells me that we will have to make
decisions and trade-offs, and at that point we need to remember that we
are doing this in the context of OpenStack and not in the general
context.



*How do we think we can do it?*



We have some agreement around the following approach:



· Start developing the driver/Octavia implementation in StackForge,
which should allow us to increase the velocity of our development, using
the OpenStack CI/CD tooling (incl. Jenkins) to ensure that we test every
change. This will allow us to ensure that changes to Neutron do not
break our driver/implementation, and vice versa.

o   We would use Gerrit for blueprints so we have documented reviews and
comments archived somewhere.

o   Contribute patches regularly into the Neutron LBaaS tree:

§  Kyle has volunteered himself and one more core team member to review
and help move a larger patch into the Neutron tree when needed. It was
also suggested that we could do milestones of smaller patches to be
merged into Neutron LBaaS. The latter approach was preferred by most
participants.



The main goal behind this approach is to make sure we increase velocity
while still maintaining good code/design quality. The OpenStack tooling
has been shown to work for large distributed virtual teams, so let's
take advantage of it.

We will carefully plan the various transitions.



Regards Susanne


[openstack-dev] Uniform name for logger in projects

2014-05-21 Thread Sergey Kraynev
Hello, community.

I hope most of you know that a bug named "Log debugs should not have
translations" (e.g. [1], [2], [3]) was recently raised in several
projects. The reasoning behind this work is described in [4].
There is a special check that is used (or will be used in the projects
where the related patches have not merged yet) for the verification
process (e.g. [5] or [6]). As you can see, this verification ([5]) uses
the logger name LOG in its regexps and if-clauses.
However, there are a lot of projects where both names, LOG and
logger, are used [7].
So I have a question about the current situation:
- Should we use one uniform name for the logger, or add support for both
names in the checks?

In my opinion, declaring one uniform name in the hacking rules is
preferable, because it removes useless duplicate names for one variable
and allows one uniform check for this rule.
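For reference, the kind of hacking check being discussed can be sketched as follows. The check number (N533) and the regexp are illustrative assumptions, not the actual rules from the checks linked below:

```python
import re

# The check number (N533) and the regexp are illustrative assumptions,
# not the actual rules from the linked hacking checks.
LOGGER_ASSIGN_RE = re.compile(r'^\s*(\w+)\s*=\s*logging\.getLogger\b')

def check_uniform_logger_name(logical_line):
    """N533: the module logger must be assigned to a name called LOG."""
    match = LOGGER_ASSIGN_RE.match(logical_line)
    if match and match.group(1) != 'LOG':
        yield 0, ("N533: use 'LOG = logging.getLogger(...)', found '%s'"
                  % match.group(1))

bad = list(check_uniform_logger_name('logger = logging.getLogger(__name__)'))
good = list(check_uniform_logger_name('LOG = logging.getLogger(__name__)'))
print(len(bad), len(good))  # 1 0
```

A single agreed name makes checks like this trivial; supporting both LOG and logger would force every such regexp to carry an alternation.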

[1] https://bugs.launchpad.net/neutron/+bug/1320867
[2] https://bugs.launchpad.net/swift/+bug/1318333
[3] https://bugs.launchpad.net/oslo/+bug/1317950
[4] https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation
[5]
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L201
[6] https://review.openstack.org/#/c/94255/11/heat/hacking/checks.py
[7] https://github.com/openstack/heat/search?q=getLoggertype=Code

Regards,
Sergey.


Re: [openstack-dev] Where can I find the public URL for the slides from my presentation

2014-05-21 Thread Thierry Carrez
Justin Hammond wrote:
 I have uploaded it using the given link but I can't seem to find where
 people can find it.

Slide links are supposed to be added to openstack.org/summit next week,
when all the videos are linked there.

-- 
Thierry Carrez (ttx)



[openstack-dev] A proposal for code reduction

2014-05-21 Thread Abhijeet Jain


Hi Openstack-developers,

I am Abhijeet Jain, one of the contributors to OpenStack.

I was working on optimizing the code in the Neutron, Keystone, and Cinder
modules, and I came across a very common scenario. In many places, tests
are written in this form:

self.assertEqual(user1['id'], user2['id'])
self.assertEqual(user1['name'], user2['name'])
self.assertEqual(user1['status'], user2['status'])
self.assertEqual(user1['xyz'], user2['xyz'])


To remove this redundancy, I created a helper function like the one below:

def _check(self, expected, actual, keys):
    for key in keys:
        self.assertEqual(expected[key], actual[key])


Then everywhere we just need to call this function like this:
_check(user1, user2, ['id', 'name', 'status', 'xyz'])

This way a lot of code can be reduced, but currently I need to put that
function in every test file where I want to use it; there is no global
place for it.

My proposal is: how about putting this function in some utils-like
place, which can be accessed from every test function? For that, I need
your approval.
Kindly provide your valuable feedback on this.
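One way the proposal above could look in practice is a shared base test class (the module placement and names here are illustrative, not an agreed location):

```python
import unittest

# Module placement and names here are illustrative, not an agreed
# location: the helper lives once in a shared base test class instead of
# being copied into every test file.
class BaseTestCase(unittest.TestCase):
    def assert_keys_equal(self, expected, actual, keys):
        """Compare only the listed keys of two dict-like objects."""
        for key in keys:
            self.assertEqual(expected[key], actual[key],
                             'mismatch on key %r' % key)

class UserTest(BaseTestCase):
    def test_users_match(self):
        user1 = {'id': 1, 'name': 'a', 'status': 'ok', 'xyz': 'x'}
        user2 = {'id': 1, 'name': 'a', 'status': 'ok', 'xyz': 'x'}
        self.assert_keys_equal(user1, user2, ['id', 'name', 'status', 'xyz'])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(UserTest))
print(result.wasSuccessful())
```

Inheriting from a common base class also gives the assertion a failure message naming the mismatched key, which plain repeated assertEqual calls do not.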



Thanks,
Abhijeet Jain







Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-21 Thread balaj...@freescale.com
Hi Susanne,

Was there any discussion about whether the current LBaaS Neutron APIs
will be modified while migrating to Octavia?

Just want to understand the impact on folks using the current LBaaS
implementation when migrating to Octavia.

Regards,
Balaji.P

From: Susanne Balle [mailto:sleipnir...@gmail.com]
Sent: Wednesday, May 21, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Cuddy, Tim; Balle, Susanne; vbhamidip...@paypal.com
Subject: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale 
service --- Next steps and goals.


We have had some discussions around how to move forward with the LBaaS service 
in OpenStack.  I am trying to summarize the key points below.

Feel free to chime in if I misrepresented anything or if you disagree :-)

For simplicity in the rest of the email and so I can differentiate between all 
the LBaaS’s e.g. Neutron LBaaS, etc… I will name the new OpenStack LBaaS 
project (that we discussed at the summit): Octavia in the rest of this email. 
Note that this doesn’t mean we have agree on this name.

Goal:
We all want to offer a best-in-class “operator scale” Octavia LBaaS service to 
our customers.
The following requirements need to be considered (these are already listed in some 
of the etherpads we have worked on):
• Provides scalability, failover, config management, and provisioning.
• The architecture needs to be pluggable so we can offer support for 
HAProxy, Nginx, LVS, etc.
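For what it's worth, the pluggability requirement above could be sketched as a small abstract driver interface along these lines (class and method names are purely illustrative, not an agreed Octavia design):

```python
import abc


class LoadBalancerDriver(abc.ABC):
    """Illustrative pluggable driver interface; names are hypothetical,
    not an agreed Octavia design."""

    @abc.abstractmethod
    def create_pool(self, pool_id):
        """Provision a backend pool."""

    @abc.abstractmethod
    def add_member(self, pool_id, address):
        """Attach a backend server to a pool."""


class InMemoryHAProxyDriver(LoadBalancerDriver):
    """Stand-in for a real HAProxy driver: records state in memory
    instead of writing haproxy.cfg and reloading the daemon."""

    def __init__(self):
        self.pools = {}

    def create_pool(self, pool_id):
        self.pools[pool_id] = []

    def add_member(self, pool_id, address):
        self.pools[pool_id].append(address)
```

A real backend driver would translate the same calls into HAProxy/Nginx/LVS configuration rather than an in-memory dict.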

Some disagreements exist around the scope of the new project:

Some of the participating companies including HP are interested in a best in 
class standalone Octavia load-balancer service that is part of OpenStack and 
with the “label” OpenStack. http://www.openstack.org/software/
• The Octavia LBaaS project needs to work well with OpenStack or this 
effort is not worth doing. HP believes that this should be the primary focus.
• In this case the end goal would be to have a clean interface between 
Neutron and the standalone Octavia LBaaS project, and to have the Octavia LBaaS 
project become an incubated and eventually graduated OpenStack project.
o   We would start out as a driver to Neutron.
o   This project would deprecate Neutron LBaaS long term since part of the 
Neutron LBaaS would move over to the Octavia LBaaS project.
o   This project would continue to support both vendor drivers and new software 
drivers e.g. ha-proxy, etc.
• Dougwig created the following diagram, which gives a good overview of 
my thinking: http://imgur.com/cJ63ts3 where Octavia is represented by “New 
Driver Interface” and down. The whole picture shows how we could move from the 
old to the new driver interface.

Other participating companies want to create a best in class standalone 
load-balancer service outside of OpenStack and only create a driver to 
integrate with Openstack Neutron LBaaS.
• The Octavia LBaaS driver would be part of Neutron LBaaS tree whereas 
the Octavia LBaaS implementation would reside outside OpenStack e.g. Stackforge 
or github, etc.

The main issue/confusion is that some of us (HP LBaaS team) do not think of 
projects in StackForge as OpenStack branded. HP developed  Libra LBaaS which is 
open sourced in StackForge and when we tried to get it into OpenStack we met 
resistance.

One person suggested the idea of designing the Octavia LBaaS service totally 
independent of Neutron or any other service that calls it. This might make 
sense for a general LBaaS service, but given that we are in the context of 
OpenStack, to me this just makes testing and development a nightmare to 
maintain, and it is not necessary. Again, IMHO we are developing and 
delivering Octavia in the context of OpenStack, so the Octavia LBaaS service 
should just be super good at dealing with the OpenStack environment. The 
architecture can still be designed to be pluggable, but my experience tells me 
that we will have to make decisions and trade-offs, and at that point we need 
to remember that we are doing this in the context of OpenStack and not in the 
general context.

How do we think we can do it?

We have some agreement around the following approach:

• To start developing the driver/Octavia implementation in StackForge, 
which should allow us to increase the velocity of our development, using the 
OpenStack CI/CD tooling (incl. Jenkins) to ensure that we test every change. 
This will allow us to ensure that changes to Neutron do not break our 
driver/implementation, and vice versa.
o   We would use Gerrit for blueprints so we have documented reviews and 
comments archived somewhere.
o   Contribute patches regularly into the Neutron LBaaS tree:
•  Kyle has volunteered himself and one more core team member to review and 
help move a larger patch into the Neutron tree when needed. It was also suggested 
that we could do milestones of smaller patches to be merged into Neutron LBaaS. 
The latter approach was preferred by most participants.


Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread Clark, Robert Graham
From: Abhijeet Jain [mailto:abhijeet.j...@nectechnologies.in] 
Sent: 21 May 2014 12:27
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] A proposal for code reduction

 

Hi Openstack-developers,

I am Abhijeet Jain, one of the contributors to OpenStack.

I was working on optimizing code in the Neutron, Keystone, and Cinder
modules, and I came across a very common pattern. In many places, tests
are written in this form:

assertEqual(user1['id'], user2['id'])
assertEqual(user1['name'], user2['name'])
assertEqual(user1['status'], user2['status'])
assertEqual(user1['xyz'], user2['xyz'])

To reduce such redundancy, I created a helper function like the one below:

def _check(self, expected, actual, keys):
    for key in keys:
        self.assertEqual(expected[key], actual[key])

So everywhere we would just need to call this function like this:

self._check(user1, user2, ['id', 'name', 'status', 'xyz'])

This way a lot of code can be reduced, but currently I would need to put
that function in every test file where I want to use it; there is no
shared location.

My proposal is: how about putting this function in some utils-like place
which can be accessed from every test function? For that, I need your
approval. Kindly provide your valuable feedback on this.

Thanks,

Abhijeet Jain
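As a sketch, the helper above could live in a shared mixin that test cases inherit alongside the regular base class (the mixin name and its eventual home - a common base test class or an oslo library - are assumptions on my part):

```python
import unittest


class DictCompareMixin(object):
    """Hypothetical shared home for the helper; in practice this could
    be a common base test class or an oslo test library."""

    def _check(self, expected, actual, keys):
        for key in keys:
            self.assertEqual(expected[key], actual[key],
                             "mismatch for key %r" % key)


class ExampleUserTest(DictCompareMixin, unittest.TestCase):
    def test_users_match(self):
        user1 = {'id': 1, 'name': 'a', 'status': 'on', 'xyz': 0}
        user2 = {'id': 1, 'name': 'a', 'status': 'on', 'xyz': 0}
        self._check(user1, user2, ['id', 'name', 'status', 'xyz'])
```

Including the key name in the failure message also makes the assertion more debuggable than four separate assertEqual lines.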

 
 
Hi Abhijeet,
 
Excuse me if I've missed the point but it sounds like the oslo project
would be the place you'd want to put bits of shared code like this.
 
If you weren't already aware of it there's some great content here:
https://wiki.openstack.org/wiki/Oslo
 
Hope this helps
-Rob
 




Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread FERNANDO LOPEZ AGUILAR
Hi Abhijeet,

Why not use the oslo library for it? I mean, maybe you can define or find a 
specific class related to the testing process there, where you can put this method.



Re: [openstack-dev] [Ceilometer] Question of necessary queries for Event implemented on HBase

2014-05-21 Thread Igor Degtiarov
Hi,

I have found that the filter model for Events has mandatory parameters
start_time and end_time for the event period. So it seems that a rowkey
structure of 'timestamp + event_id' will be more suitable.


I have started to work on the blueprint
https://blueprints.launchpad.net/ceilometer/+spec/hbase-events-feature
A partial implementation of events in HBase is available at
https://review.openstack.org/#/c/91408/

So far, a record method that writes events into HBase has been added, along
with a get method that filters by the events' generation time period.
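A minimal sketch of such a rowkey layout (the helper names and the millisecond granularity are my assumptions, not necessarily what the patch does):

```python
import struct


def make_rowkey(timestamp_ms, event_id):
    # A fixed-width big-endian timestamp prefix keeps rows sorted by
    # time, so a time-range query becomes one contiguous HBase scan.
    return struct.pack('>Q', timestamp_ms) + event_id.encode()


def scan_bounds(start_ms, end_ms):
    # Inclusive start row and exclusive stop row covering [start, end].
    return struct.pack('>Q', start_ms), struct.pack('>Q', end_ms + 1)
```

Because the fixed-width big-endian timestamp sorts lexicographically in time order, the mandatory start_time/end_time filter maps directly onto a scan between the two bounds.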

Sincerely,
Igor D.


Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread Christian Berendt
On 05/21/2014 01:34 PM, Clark, Robert Graham wrote:
 If you weren’t already aware of it there’s some great content here: 
 https://wiki.openstack.org/wiki/Oslo

https://github.com/openstack/oslo.test

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



Re: [openstack-dev] [Ceilometer] Question of necessary queries for Event implemented on HBase

2014-05-21 Thread Dmitriy Ukhlov

Hello Igor,

Sounds reasonable.

On 05/21/2014 02:38 PM, Igor Degtiarov wrote:


Hi,

I have found that filter model for Events has mandatory parameters 
start_time and end_time
of the events period. So, it seems that structure for rowkey as 
''timestamp + event_id will be more suitable.



--
Best regards,
Dmitriy Ukhlov
Mirantis Inc.




Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread Ihar Hrachyshka

On 21/05/14 13:20, Abhijeet Jain wrote:
 
 
 Hi Openstack-developers,
 
 I am Abhijeet Jain. One of the contributor in OpenStack.
 
 I was just working on optimizing the codes in Neutron , Keystone,
 Cinder modules. Then, I came across with a very common scenario
 that I can see at many places. at many places, I saw that the users
 have written the code in such form :
 
 assertEqual(user1['id'], user2['id']); assertEqual(user1['name'],
 user2['name']); assertEqual(user1['status'], user2['status']); 
 assertEqual(user1['xyz'], user2['xyz']);
 

Have you really seen lots of cases like that? I can't recollect any
such occurrences that would require a global check function in e.g.
oslo and neutron code.

 
 To optimize such redundancy, I created a help function like below
 :
 
 def _check(self, expected, actual, keys): for key in keys: 
 assertEqual( expected[key], actual[key])
 
 
 So, everywhere we just need to call this function like this : 
 _check(user1, user2, ['id', 'name', 'status', 'xyz'])
 

The semantics of the function is not clear from its name. I also doubt
its usefulness outside specific test cases that require such a check.

 So, this way lots of code can be reduced. but, currently i need to
 put that function in every test file , I want to use. There is no
 global space.
 

You can try to put it in BaseTestCase class. Also see [1] for a new
oslo library that should eventually replace all base test classes in
openstack projects.

[1]: https://github.com/openstack/oslo.test

 My proposal is : How about putting this function in some utils like
 place, which can be accessed in every test function. but for that,
 I need your approval. Kindly, provide your valuable feedback on
 this.
 
 
 
 Thanks, Abhijeet Jain
 
 
 
 DISCLAIMER: 
 ---

 
The contents of this e-mail and any attachment(s) are confidential and
 intended for the named recipient(s) only.

People generally don't send confidential contents to public lists. :)

 It shall not attach any liability on the originator or NEC or its 
 affiliates. Any views or opinions presented in this email are
 solely those of the author and may not necessarily reflect the 
 opinions of NEC or its affiliates. Any form of reproduction,
 dissemination, copying, disclosure, modification, distribution and
 / or publication of this message without the prior written consent
 of the author of this e-mail is strictly prohibited. If you have 
 received this email in error please delete it and notify the
 sender immediately. . 
 ---

 
 
 


[openstack-dev] [Nova] [Neutron] heal_instance_info_cache_interval - Can we kill it?

2014-05-21 Thread Assaf Muller
Dear Nova aficionados,

Please make sure I understand this correctly:
Each nova compute instance selects a single VM out of all of the VMs
that it hosts, and every heal_instance_info_cache_interval seconds
queries Neutron for all of its networking information, then updates
Nova's DB.

If the information above is correct, then I fail to see how that
is in any way useful. For example, for a compute node hosting 20 VMs,
it would take 20 minutes to update the last one. That seems unacceptable
to me.
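A back-of-the-envelope model of the behavior described above (assuming one instance is healed per tick, which is what the 20-minute figure implies):

```python
def worst_case_staleness(num_instances, interval_seconds=60):
    # If the periodic task refreshes a single instance per interval,
    # round-robin, the last instance in the cycle waits this long
    # between refreshes.
    return num_instances * interval_seconds

# 20 VMs at a 60-second interval -> 20 minutes of worst-case staleness.
staleness = worst_case_staleness(20)
```

The staleness grows linearly with the number of hosted instances, which is why the feature looks questionable once Neutron-to-Nova notifications exist.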

Considering Icehouse's Neutron to Nova notifications, my question
is if we can change the default to 0 (Disable the feature), deprecate
it, then delete it in the K cycle. Is there a good reason not to do this?


Assaf Muller, Cloud Networking Engineer 
Red Hat 



Re: [openstack-dev] [neutron] devstack w/ neutron in vagrant - floating IPs

2014-05-21 Thread Collins, Sean
On Tue, May 20, 2014 at 06:19:44PM EDT, Paul Czarkowski wrote:
 Has anyone had any success with running devstack and neutron in a vagrant 
 machine where the floating Ips are accessible from outside of the vagrant box 
 ( I.e. From the host ).
 
 I’ve spent a few hours trying to get it working without any real success.
 

I have had this issue as well, when developing vagrant_devstack
(https://github.com/bcwaldon/vagrant_devstack) - but I haven't had the
capacity to track down the cause.

-- 
Sean M. Collins


Re: [openstack-dev] [Neutron][LBaaS] Neutron LBaaS operator scale service --- Next steps and goals.

2014-05-21 Thread Susanne Balle
Balaji

The plan is to work on the next version of the LBaaS APIs in parallel with
maintaining the current version of the APIs, and at some point, when
everything is ready, to deprecate the old APIs.

Susanne



Re: [openstack-dev] [devstack] Devstack Multinode on CentOS

2014-05-21 Thread Henrique Truta
Hello, Dague!

Thanks for the support, but it's still not working. Have you tested it on
CentOS?

Do you have the controller config example? The one in the Multinode page
didn't work for me.




2014-05-20 11:35 GMT-03:00 Sean Dague s...@dague.net:

 API should only be on the controller. You only want compute services
 (n-cpu, n-net, c-vol) on the computes.

 You also need to set MULTI_HOST=True for nova network. Some examples
 of working config at -

 https://github.com/sdague/devstack-vagrant/blob/master/puppet/modules/devstack/templates/local.erb


 Somewhere on my large TODO is to get this info back into the devstack
 README (it used to be there).

 -Sean

 On 05/20/2014 10:15 AM, Henrique Truta wrote:
  Hello, Sean!
 
  I'm trying to use Nova Network instead of Neutron due to its simplicity,
  that's why I didn't specify any of this on the controller.
 
  On the compute node, I enabled n-cpu,n-net,n-api,c-sch,c-api,c-vol,
  because that's what I thought were needed to become a Host... I'll try
  to disable the Cinder API.
 
  The strangest part is that I run stack.sh on the compute node, and it
  runs OK, but it doesn't create anything. Apparently, it only uses the
  API on the Controller :/
 
 
  2014-05-19 18:10 GMT-03:00 Collins, Sean
  sean_colli...@cable.comcast.com mailto:sean_colli...@cable.comcast.com
 :
 
  On Mon, May 19, 2014 at 05:00:26PM EDT, Henrique Truta wrote:
   Controller localrc: http://paste.openstack.org/show/80953/
  
   Compute node localrc: http://paste.openstack.org/show/80955/
 
  These look backwards. The first pastebin link has no enabled
 services,
  while the pastebin you say is the compute node appears to have API
  services running in the enabled_services list.
 
  So - here's an example from my lab:
 
  Controller localrc:
 
  # Nova
  disable_service n-net
  enable_service n-cpu
 
  # Neutron
  ENABLED_SERVICES+=,neutron,q-svc,q-dhcp,q-meta,q-agt
 
  Compute localrc:
 
  ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
 
 
  --
  Sean M. Collins
 
 
 
 
  --
  --
  Ítalo Henrique Costa Truta
 
 
 
 
 


 --
 Sean Dague
 http://dague.net






-- 
--
Ítalo Henrique Costa Truta
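For reference, pulling Sean's points from this thread together, a nova-network multinode localrc pair might look roughly like this (IPs and the exact variable set are illustrative, not tested here):

```shell
# Controller localrc (nova-network, multi-host; IPs illustrative)
HOST_IP=192.168.1.10
MULTI_HOST=True

# Compute localrc: only compute-side services; everything else
# points back at the controller.
HOST_IP=192.168.1.11
SERVICE_HOST=192.168.1.10
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
MULTI_HOST=True
ENABLED_SERVICES=n-cpu,n-net,c-vol
```

The key detail is that the compute node's ENABLED_SERVICES contains no API services at all; with API services enabled on the compute, stack.sh appears to succeed but the node never registers as a hypervisor.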


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-21 Thread Collins, Sean
On Tue, May 20, 2014 at 05:18:57PM EDT, Mandeep Dhami wrote:
 Renewing the thread, is there a blueprint for this refactoring effort?
 
 In the email thread till now, we have just had an etherpad link. I would
 like to get more deeply involved in design/implementation and review of
 these changes and I get a feeling that not being able to attend the Atlanta
 summit is going to be a significant barrier to participation in this
 critical effort.


It is possible there is a misconception here: refactoring the API core does
not mean changing the APIs that are presented to the user. We are in the
process of replacing a homegrown WSGI framework with Pecan, to make the WSGI
layer of Neutron cleaner and to make it easier to create API extensions.

http://pecan.readthedocs.org/en/latest/index.html

-- 
Sean M. Collins


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-21 Thread Mandeep Dhami
Hi Sean:

While the APIs might not be changing*, I suspect that there are significant
design decisions being made**. These changes are probably more significant
than any new feature being discussed. As a community, are we expected to
document these design changes and review them as well? I am still trying to
figure out what Neutron's review standards are. On one hand, I see code
review comments that reject a patch for cosmetic changes (like a name change
from what was in the reviewed blueprint); on the other, an attitude that
something as core and central to Neutron as refactoring and a major API
update to v3 does not need a design document/review.

It is my opinion, and my recommendation, that the proposed changes be
documented and reviewed by the same standard that we have for other features.

* I believe that a v3 API is being introduced and changes are being made, but
I might have misunderstood.
** I was under the impression that in addition to the Pecan updates, there
was going to be refactoring to use TaskFlow as well. I expect that to have
significant control-flow impact, and that is what I really wanted to review.


Regards,
mandeep



On Wed, May 21, 2014 at 6:52 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 On Tue, May 20, 2014 at 05:18:57PM EDT, Mandeep Dhami wrote:
  Renewing the thread, is there a blueprint for this refactoring effort?
 
  In the email thread till now, we have just had an etherpad link. I would
  like to get more deeply involved in design/implementation and review of
  these changes and I get a feeling that not being able to attend the
 Atlanta
  summit is going to be a significant barrier to participation in this
  critical effort.


 It is possible there is a misconception here: refactoring the API core does
 not mean changing the APIs that are presented to the user. We are in the
 process of replacing a homegrown WSGI with Pecan to make the WSGI layer
 of Neutron cleaner and easier to create API extensions.

 http://pecan.readthedocs.org/en/latest/index.html

 --
 Sean M. Collins


Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread Monty Taylor

On 05/21/2014 06:56 AM, Christian Berendt wrote:

On 05/21/2014 01:34 PM, Clark, Robert Graham wrote:

If you weren’t already aware of it there’s some great content here: 
https://wiki.openstack.org/wiki/Oslo


https://github.com/openstack/oslo.test



You may also want to check and see if testtools (which is already a 
dependency) has such a matcher (I think it does) and, if not, add one. I 
agree that cleaning things up in that way is a great idea - but it's a 
common enough pattern that the libraries we use should support it.




Re: [openstack-dev] [sahara] Nominate Trevor McKay for sahara-core

2014-05-21 Thread Trevor McKay
Thank you all,

  I have been away from a computer for a few days post Summit :)

I appreciate your vote of confidence, and I look forward to continued
work on Sahara.  Here's to more Big Data processing on Openstack!

Best regards,

Trevor

On Mon, 2014-05-19 at 10:13 -0400, Sergey Lukjanov wrote:
 Trevor, congrats!
 
 welcome to the sahara-core.
 
 On Thu, May 15, 2014 at 11:41 AM, Matthew Farrellee m...@redhat.com wrote:
  On 05/12/2014 05:31 PM, Sergey Lukjanov wrote:
 
  Hey folks,
 
  I'd like to nominate Trevor McKay (tmckay) for sahara-core.
 
  He is among the top reviewers of Sahara subprojects. Trevor is working
  on Sahara full time since summer 2013 and is very familiar with
  current codebase. His code contributions and reviews have demonstrated
  a good knowledge of Sahara internals. Trevor has a valuable knowledge
  of EDP part and Hadoop itself. He's working on both bugs and new
  features implementation.
 
  Some links:
 
  http://stackalytics.com/report/contribution/sahara-group/30
  http://stackalytics.com/report/contribution/sahara-group/90
  http://stackalytics.com/report/contribution/sahara-group/180
 
  https://review.openstack.org/#/q/owner:tmckay+sahara+AND+-status:abandoned,n,z
  https://launchpad.net/~tmckay
 
  Sahara cores, please, reply with +1/0/-1 votes.
 
  Thanks.
 
 
  +1
 
 
 
 
 





[openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-21 Thread Collins, Sean
Hi,

The session that we had on the Quality of Service API extension was well
attended - I would like to keep the momentum going by proposing a weekly
IRC meeting.

How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

-- 
Sean M. Collins


[openstack-dev] [Neutron] Is vendor CI tests pass mandatory to merging fixes upstream...

2014-05-21 Thread Narasimhan, Vivekanandan
Hi Neutron'ers,

Could you please let us know whether all vendor CI tests must pass before we
can merge a fix to the upstream master?

For example, for this bug fix, https://review.openstack.org/#/c/93624/,
posted to upstream master for merge, we see failures from the Mellanox CI and
Hyper-V CI.

The UT failures don't seem to be related to our fix, but it would be helpful
for us to touch base with the Mellanox CI and Hyper-V CI owners to move
forward and get these failures resolved.

Could someone link us to them?

--
Thanks,

Vivek


[openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Chuck Thier
There is a review for swift [1] that is requesting to set the max header
size to 16k to be able to support v3 keystone tokens.  That might be fine
if you measure your request rate in requests per minute, but this continues
to add significant overhead to swift.  Even if you *only* have
10,000 requests/sec to your swift cluster, an 8k token adds almost
80MB/sec of bandwidth.  This seems equally bad (if not worse) for
services like marconi.
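The arithmetic behind that figure, for anyone who wants to plug in their own numbers (the exact result depends on whether "8k" means 8000 or 8192 bytes):

```python
def header_overhead_mb_per_sec(requests_per_sec, token_bytes):
    # Every request carries the token in its headers, so the extra
    # bandwidth is simply the request rate times the token size.
    return requests_per_sec * token_bytes / 1_000_000

# 10,000 req/s with an 8 KiB token is ~82 MB/s of token bytes alone.
overhead = header_overhead_mb_per_sec(10_000, 8 * 1024)
```

At 16k headers the same cluster would be pushing roughly twice that, before any payload is counted.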

When PKI tokens were first introduced, we raised concerns about the
unbounded size of the token in the header, and were told that uuid-style
tokens would still be usable; but all I heard at the summit was to not use
them, and that PKI was the future of all things.

At what point do we re-evaluate the decision to go with PKI tokens, given
that they may not be the best idea for APIs like swift and marconi?

Thanks,

--
Chuck

[1] https://review.openstack.org/#/c/93356/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] about policy.json in unit test

2014-05-21 Thread Adam Young

On 05/20/2014 09:39 PM, Bohai (ricky) wrote:


Thanks for your explanation.

I think the implementation in Nova may be a good reference.

I have filed it as a blueprint.

https://blueprints.launchpad.net/cinder/+spec/united-policy-in-cinder



We would like to load policy from the policy store in Keystone.  We need to 
load policy per endpoint in order to make it usable, and some way for 
an endpoint to know its own id.


Ricky.

*From:*Christopher Yeoh [mailto:cbky...@gmail.com]
*Sent:* Monday, May 19, 2014 3:44 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Cinder] about policy.json in unit test

On Mon, May 19, 2014 at 1:14 AM, Mike Perez thin...@gmail.com 
mailto:thin...@gmail.com wrote:


On 02:04 Tue 29 Apr , Bohai (ricky) wrote:
 Hi stackers,

 I found there are two policy.json files in cinder project.
 One is for source code(cinder/etc/policy.json), another is for the 
unit test(cinder/cinder/tests/policy.json).


 Maybe it's better to united them and make the unit test to use the 
policy.json file in the source code:
 1. policy.json in the source code is really what we want to test 
but not the one in unit test.
 2. It's more convenient for the developers, because of only need to 
modify one policy.json file.

   Current it's easy to miss one of them.

 Any advices?

Seems like the right direction. Don't know why they were separate to begin
with.

Nova has the same issue, so it's probably just historical. I'm not 
familiar with the cinder policy files, but for Nova the default policy 
settings are different for the real policy file versus the one used for 
the unittests, and the unittests rely on this. So there will likely need 
to be some cleanup to use just one policy file, and it may complicate the 
unittests a bit more. But overall it sounds like a good idea just to have 
one policy file.


Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][DB] Weekly Meeting for DB migration sub-team

2014-05-21 Thread Henry Gessau
For Juno one of the most critical items in Neutron is the issue of
broken DB migrations. Over the past few months some ad-hoc discussions
have taken place. At the Atlanta summit some core team members and
interested developers met at the Neutron pod and discussed the issue and
what should be done about it.

We have decided on a plan[1] and have formed a small sub-team. This team
will hold a weekly meeting on IRC at 1300 UTC on Tuesdays[2].

Please attend if you have any questions or issues related to DB
migrations. Team members (or anyone) please add any missing bugs to the
meeting wiki[3].

[1] https://etherpad.openstack.org/p/neutron-db-migrations
[2] https://wiki.openstack.org/wiki/Meetings/NeutronDB
[3] https://wiki.openstack.org/wiki/Meetings/NeutronDB#Bugs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 11:09 AM, Chuck Thier wrote:
There is a review for swift [1] that is requesting to set the max 
header size to 16k to be able to support v3 keystone tokens.  That 
might be fine if you measure you request rate in requests per minute, 
but this is continuing to add significant overhead to swift.  Even if 
you *only* have 10,000 requests/sec to your swift cluster, an 8k token 
is adding almost 80MB/sec of bandwidth.  This will seem to be equally 
bad (if not worse) for services like marconi.


When PKI tokens were first introduced, we raised concerns about the 
unbounded size of of the token in the header, and were told that uuid 
style tokens would still be usable, but all I heard at the summit, was 
to not use them and PKI was the future of all things.


At what point do we re-evaluate the decision to go with pki tokens, 
and that they may not be the best idea for apis like swift and marconi?


Keystone tokens were slightly shrunk at the end of the last release 
cycle by removing unnecessary data from each endpoint entry.


Compressed PKI tokens are en route and will be much smaller.



Thanks,

--
Chuck

[1] https://review.openstack.org/#/c/93356/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] strutils: enhance safe_decode() and safe_encode()

2014-05-21 Thread Doug Hellmann
On Thu, May 15, 2014 at 11:41 AM, Victor Stinner
victor.stin...@enovance.com wrote:
 Hi,

 The functions safe_decode() and safe_encode() have been ported to Python 3,
 and changed more than once. IMO we can still improve these functions to make
 them more reliable and easier to use.


 (1) My first concern is that these functions try to guess user expectation
 about encodings. They use sys.stdin.encoding or sys.getdefaultencoding() as
 the default encoding to decode, but this encoding depends on the locale
 encoding (stdin encoding), on stdin (is stdin a TTY? is stdin mocked?), and on
 the Python major version.

 IMO the default encoding should be UTF-8 because most OpenStack components
 expect this encoding.

 Or maybe users want to display data to the terminal, and so the locale
 encoding should be used? In this case, locale.getpreferredencoding() would be
 more reliable than sys.stdin.encoding.

From what I can see, most uses of the module are in the client
programs. If using locale to find a default encoding is the best
approach, perhaps we should go ahead and make the change you propose.

One place I see safe_decode() used in a questionable way is in heat in
heat/engine/parser.py where validation errors are being re-raised as
StackValidationFailed (line 376 in my version). It's not clear why the
message is processed the way it is, so I would want to understand the
history before proposing a change there.



 (2) My second concern is that safe_encode(bytes, incoming, encoding)
 transcodes the bytes string from incoming to encoding if these two encodings
 are different.

 When I port code to Python 3, I'm looking for a function to replace this
 common pattern:

 if isinstance(data, six.text_type):
 data = data.encode(encoding)

 I don't want to modify data encoding if it is already a bytes string. So I
 would prefer to have:

 def safe_encode(data, encoding='utf-8'):
 if isinstance(data, six.text_type):
 data = data.encode(encoding)
 return data

 Changing safe_encode() like this will break applications relying on the
 transcode feature (incoming != encoding). If such usage exists, I suggest
 adding a new function (ex: transcode?) with an API fitting this use case. For
 example, the incoming encoding would be mandatory.

 Are there really applications using the incoming parameter?

The only place I see that parameter used in integrated projects is in
the tests for the module. I didn't check the non-integrated projects.
Given its symmetry with safe_decode(), I don't really see a problem
with the current name. Something like the shortcut you propose is
present in safe_encode(), so I'm not sure what benefit a new function
brings?
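For reference, here is a minimal Python 3 sketch of the behavior Victor proposes, with `str` standing in for `six.text_type` (illustrative only, not the oslo implementation):

```python
def safe_encode(data, encoding='utf-8'):
    # Proposed semantics: encode text, pass bytes through untouched
    # (no transcoding of an already-encoded string).
    if isinstance(data, str):  # six.text_type on Python 2/3
        data = data.encode(encoding)
    return data

assert safe_encode('caf\xe9') == b'caf\xc3\xa9'  # text is encoded
assert safe_encode(b'raw') == b'raw'             # bytes pass through
```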

Doug



 So, what do you think about that?

 Victor

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-21 Thread Doug Hellmann
On Thu, May 15, 2014 at 11:29 AM, Victor Stinner
victor.stin...@enovance.com wrote:
 Hi,

 I'm trying to define some rules to port OpenStack code to Python 3. I just
 added a section in the Port Python 2 code to Python 3 about formatting
 exceptions and the logging module:
 https://wiki.openstack.org/wiki/Python3#logging_module_and_format_exceptions

 The problem is that I don't know what is the best syntax to log exceptions.
 Some projects convert the exception to Unicode, others use str(). I also saw
 six.u(str(exc)) which is wrong IMO (it can raise unicode error if the message
 contains a non-ASCII character).

 IMO the safest option is to use str(exc). For example, use
 LOG.debug(str(exc)).

 Is there a reason to log the exception as Unicode on Python 2?

Exception classes that define translatable strings may end up with
unicode characters that can't be converted to the default encoding
when str() is called. It's better to let the logging code handle the
conversion from an exception object to a string, since the logging
code knows how to deal with unicode properly.

So, write:

LOG.debug(u'Could not do whatever you asked: %s', exc)

or just:

LOG.debug(exc)

instead of converting explicitly.
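A small self-contained sketch of the recommended pattern (the logger name and message are made up):

```python
import io
import logging

stream = io.StringIO()
log = logging.getLogger('demo')
log.addHandler(logging.StreamHandler(stream))
log.setLevel(logging.DEBUG)

try:
    raise ValueError('caf\xe9 not found')
except ValueError as exc:
    # Pass the exception object and let logging do the (lazy) formatting;
    # it copes with non-ASCII text without an explicit str() call.
    log.debug('Could not do whatever you asked: %s', exc)

print(stream.getvalue().strip())
# prints: Could not do whatever you asked: café not found
```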

Doug


 Victor

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
The keystone team is also looking at ways to reduce the data contained in
the token. Coupled with the compression, this should get the tokens back
down to a reasonable size.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:

  On 05/21/2014 11:09 AM, Chuck Thier wrote:

 There is a review for swift [1] that is requesting to set the max header
 size to 16k to be able to support v3 keystone tokens.  That might be fine
 if you measure you request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.

  When PKI tokens were first introduced, we raised concerns about the
 unbounded size of of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.

  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?


 Keystone tokens were slightly shrunk at the end of the last release cycle
 by removing unnecessary data from each endpoint entry.

 Compressed PKI tokens are enroute and will be much smaller.


  Thanks,

  --
 Chuck

  [1] https://review.openstack.org/#/c/93356/


 ___
 OpenStack-dev mailing listopenstack-...@lists.openstack.org 
 javascript:_e(%7B%7D,'cvml','OpenStack-dev@lists.openstack.org');http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][DB] Weekly Meeting for DB migration sub-team

2014-05-21 Thread Kyle Mestery
On Wed, May 21, 2014 at 10:12 AM, Henry Gessau ges...@gmail.com wrote:
 For Juno one of the most critical items in Neutron is the issue of
 broken DB migrations. Over the past few months some ad-hoc discussions
 have taken place. At the Atlanta summit some core team members and
 interested developers met at the Neutron pod and discussed the issue and
 what should be done about it.

 We have decided on a plan[1] and have formed a small sub-team. This team
 will hold a weekly meeting on IRC at 1300 UTC on Tuesdays[2].

Thanks for setting this up Henry! The meeting page says Mondays, but
your announcement says Tuesday. Can you clarify which day the meeting
will be on?

Thanks!
Kyle

 Please attend if you have any questions or issues related to DB
 migrations. Team members (or anyone) please add any missing bugs to the
 meeting wiki[3].

 [1] https://etherpad.openstack.org/p/neutron-db-migrations
 [2] https://wiki.openstack.org/wiki/Meetings/NeutronDB
 [3] https://wiki.openstack.org/wiki/Meetings/NeutronDB#Bugs

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][QA] Mission statement proposal

2014-05-21 Thread Nicolas Barcet
Thanks a lot for the comments so far.  Please see below an updated version
that tries to integrate the remarks. I have created an Etherpad for this [1] so
that we can jointly update it.

---
Mission statement for the OpenStack NFV Sub-team:

The sub-team aims to define the use cases and identify and prioritise the
requirements which are needed to run Network Function Virtualization (NFV)
instances on top of OpenStack. This work includes identifying functional
gaps, creating blueprints, submitting and reviewing patches to the relevant
OpenStack projects and tracking their completion in support of NFV.

The requirements expressed by this group should be made so that each of
them has a test case which can be verified using an OpenSource
implementation. This is to ensure that tests can be done without any
special hardware or proprietary software, which is key for continuous
integration tests in the OpenStack gate. If special setups are required
which cannot be reproduced on the standard OpenStack gate, the use case
proponents will have to provide a 3rd party CI setup, accessible by
OpenStack infra, against which developments will be validated.

--

[1] https://etherpad.openstack.org/p/nvf-subteam-mission-statement

Further thoughts or updates are welcome here or there.

Nick


On Tue, May 20, 2014 at 7:43 AM, Jiang, Yunhong yunhong.ji...@intel.comwrote:

  Hi, Nick,

 For “have a test case which can be verified using an OpenSource
 implementation ….. ensure that tests can be done without any special
 hardware or proprietary software”, I totally agree with the requirement for
 no proprietary software; however, I’m not sure about your exact
 meaning of “special hardware”.



 I had a quick chat with Daniel at the summit on this as well. Several NFV
 tasks, like large pages, guest NUMA, and SR-IOV, require hardware support.
 Those features have been widely supported in volume servers for a long time,
 but can’t be achieved, or can’t be achieved well, in VMs yet, and thus can’t
 be verified in the current gate. IMHO, even if a VM can support/emulate such
 a feature, it’s not ideal to use a VM to verify it.



 How about having a standard 3rd party CI test for hardware-based feature
 testing and making it an extensible framework? I think there are requirements
 at least from both Ironic and NFV.



 Our team has 3rd party CI tests for PCI pass-through and OAT trusted
 computing, which can’t be achieved through the upstream CI now.  These tests
 are based on real hardware environments instead of VMs. We haven’t published
 results yet because of some IT logistics issues.



 Thanks

 --jyh



 *From:* Nicolas Barcet [mailto:nico...@barcet.com]
 *Sent:* Monday, May 19, 2014 10:19 AM
 *To:* openstack-dev

 *Subject:* [openstack-dev] [NFV] Mission statement proposal



 Hello,

 As promised during the second BoF session (thanks a lot to Chris Wright
 for leading this), here is a first try at defining the purpose of our
 special interest group.

 ---
 Mission statement for the OpenStack NFV Special Interest Group:

 The SIG aims to define and prioritize the use cases which are required to
 run Network Function Virtualization (NFV) instances on top of OpenStack.
 The requirements are to be passed on to various projects within OpenStack
 to promote their implementation.


 The requirements expressed by this group should be made so that each of
 them have a test case which can be verified using an OpenSource
 implementation. This is to ensure that tests can be done without any
 special hardware or proprietary software, which is key for continuous
 integration tests in the OpenStack gate.

  ---



 Comments, suggestions and fixes are obviously welcome!



 Best,

 Nick



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Nicolas Barcet nico...@barcet.com
a.k.a. nijaba, nick
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Can you explain how PKI info is compressible? I thought it was encrypted, which 
should mean you can't compress it, right?


--John





On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 The keystone team is also looking at ways to reduce the data contained in the 
 token. Coupled with the compression, this should get the tokens back down to 
 a reasonable size. 
 
 Cheers,
 Morgan
 
 Sent via mobile
 
 On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
 On 05/21/2014 11:09 AM, Chuck Thier wrote:
 There is a review for swift [1] that is requesting to set the max header 
 size to 16k to be able to support v3 keystone tokens.  That might be fine if 
 you measure you request rate in requests per minute, but this is continuing 
 to add significant overhead to swift.  Even if you *only* have 10,000 
 requests/sec to your swift cluster, an 8k token is adding almost 80MB/sec of 
 bandwidth.  This will seem to be equally bad (if not worse) for services 
 like marconi.
 
 When PKI tokens were first introduced, we raised concerns about the 
 unbounded size of of the token in the header, and were told that uuid style 
 tokens would still be usable, but all I heard at the summit, was to not use 
 them and PKI was the future of all things.
 
 At what point do we re-evaluate the decision to go with pki tokens, and that 
 they may not be the best idea for apis like swift and marconi?
 
 Keystone tokens were slightly shrunk at the end of the last release cycle by 
 removing unnecessary data from each endpoint entry.
 
 Compressed PKI tokens are enroute and will be much smaller.
 
 
 Thanks,
 
 --
 Chuck
 
 [1] https://review.openstack.org/#/c/93356/
 
 
 ___
 OpenStack-dev mailing list
 
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Is vendor CI tests pass mandatory to merging fixes upstream...

2014-05-21 Thread Kyle Mestery
On Wed, May 21, 2014 at 9:54 AM, Narasimhan, Vivekanandan
vivekanandan.narasim...@hp.com wrote:
 Hi  Neutron’ers,



 Could you please let us know if all vendor CI tests must pass , If we need
 to merge a fix to

 the upstream master?

Not necessarily required by policy, but debugging the failures and
working with the CI owners is a good thing.



 For example , for this bug fix https://review.openstack.org/#/c/93624/

 Posted to upstream master for merge,  we see failure for Mellanox CI and
 Hyper-V CI.



 The UT failures don’t seem to relate to our fix, but it would be helpful for
 us to touch base

 with Mellanox CI and Hyper-V CI wwners to go forward and get these failures
 resolved.

I think reaching out on the list is the best place to look for help
right now. We need to have contacts for these 3rd party CI systems so
it's easier to reach out to people when they do fail, because a fair
amount of the time these are proprietary systems and code submitters
will need help of the CI owners to debug things.

There's a weekly meeting OpenStack-wide [1] on third-party testing,
for those who missed the earlier announcement as well.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting



 Could someone you link us to them?



 --

 Thanks,



 Vivek

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] how to best deal with default periodic task spacing behavior

2014-05-21 Thread Doug Hellmann
On Tue, May 20, 2014 at 10:15 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 Between patch set 1 and patch set 3 here [1] we have different solutions to
 the same issue, which is if you don't specify a spacing value for periodic
 tasks then they run whenever the periodic task processor runs, which is
 non-deterministic and can be staggered if some tasks don't complete in a
 reasonable amount of time.

 I'm bringing this to the mailing list to see if there are more opinions out
 there, especially from operators, since patch set 1 changes the default

You may get more feedback from operators on the main openstack list.
I'm still catching up on backlog after the summit, so apologies if
you've already posted there.

Doug

 behavior to have the spacing value be the DEFAULT_INTERVAL (hard-coded 60
 seconds) versus patch set 3 which makes that behavior configurable so the
 admin can set global default spacing for tasks, but defaults to the current
 behavior of running every time if not specified.

 I don't like a new config option, but I'm also not crazy about changing
 existing behavior without consensus.

 [1] https://review.openstack.org/#/c/93767/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A proposal for code reduction

2014-05-21 Thread Doug Hellmann
On Wed, May 21, 2014 at 7:20 AM, Abhijeet Jain
abhijeet.j...@nectechnologies.in wrote:


 Hi Openstack-developers,

 I am Abhijeet Jain, one of the contributors to OpenStack.

 I was just working on optimizing the code in the Neutron, Keystone, and
 Cinder modules.
 Then I came across a very common scenario, where in many places users have
 written code in this form:

 assertEqual(user1['id'], user2['id'])
 assertEqual(user1['name'], user2['name'])
 assertEqual(user1['status'], user2['status'])
 assertEqual(user1['xyz'], user2['xyz'])


 To reduce such redundancy, I created a helper function like below:

 def _check(self, expected, actual, keys):
     for key in keys:
         self.assertEqual(expected[key], actual[key])

 So, everywhere we just need to call this function like this:
 self._check(user1, user2, ['id', 'name', 'status', 'xyz'])

 So, this way lots of code can be reduced.
 But currently I need to put that function in every test file where I want to
 use it. There is no global place for it.

 My proposal is:
 How about putting this function in some shared utils module, which can be
 accessed from every test function?
 But for that, I need your approval.
 Kindly provide your valuable feedback on this.

How close is the function you provide to testtools' MatchesDict [1] matcher?

Doug

1. 
https://github.com/testing-cabal/testtools/blob/master/testtools/matchers/_dict.py#L168
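For comparison, here is a plain-unittest sketch of the kind of shared helper being proposed (the names are illustrative, not an existing OpenStack utility):

```python
import unittest

class DictKeysMixin:
    # Illustrative shared helper, roughly equivalent to the _check()
    # proposed above; not an existing OpenStack utility.
    def assertKeysEqual(self, expected, actual, keys):
        for key in keys:
            self.assertEqual(expected[key], actual[key],
                             'mismatch for key %r' % key)

class UserTest(DictKeysMixin, unittest.TestCase):
    def test_user_fields(self):
        user1 = {'id': 1, 'name': 'alice', 'status': 'active', 'xyz': 0}
        user2 = dict(user1)
        self.assertKeysEqual(user1, user2, ['id', 'name', 'status', 'xyz'])

# Run the single test case and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserTest)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # True
```

Putting the mixin in a shared test-utils module would let every test class opt in with one extra base class.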




 Thanks,
 Abhijeet Jain



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Lance Bragstad
John,

Adam had a blog post on Compressed Tokens that might help shed a little
light on them in general[1]. We also have a blueprint for tracking the work
as it gets done[2].


[1] http://adam.younglogic.com/2014/02/compressed-tokens/
[2] https://blueprints.launchpad.net/keystone/+spec/compress-tokens


On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:

 Can you explain how PKI info is compressible? I thought it was encrypted,
 which should mean you can't compress it right?


 --John





 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

  The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
 fine if you measure you request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the
 unbounded size of of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are enroute and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 
  ___
  OpenStack-dev mailing list
 
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Issue while running a dot net application on Solum

2014-05-21 Thread Rohit Mathur
Dear Team,
While running a dot net application on Solum, I am facing an error when
the process tries to discover the process types:

[-e:1:in `main': undefined method `[]' for nil:NilClass (NoMethodError)]

I have also raised a bug for this in the Solum community:
https://bugs.launchpad.net/solum/+bug/1319406
Please refer to it for the logs.

If anyone knows the solution, kindly let me know.

Thanks and Regards,

Rohit Mathur| Associate Consultant
GlobalLogic India Limited, Noida SEZ -1
P +91.120.4342000.2371  M +91 8130988668
www.globallogic.com
http://www.globallogic.com/
http://www.globallogic.com/email_disclaimer.txt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:

 Can you explain how PKI info is compressible? I thought it was encrypted,
 which should mean you can't compress it right?


They're not encrypted - just signed and then base64 encoded. The JSON (and
especially service catalog) is compressible prior to encoding.
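A toy illustration of why this works (the catalog content below is invented; real PKI tokens are CMS-signed rather than plain JSON, but the signed payload is still uncompressed text):

```python
import base64
import json
import zlib

# Invented, repetitive service catalog standing in for a token payload.
catalog = {'serviceCatalog': [{'name': 'svc%d' % i,
                               'endpoints': ['http://example.com/v3']}
                              for i in range(20)]}
raw = json.dumps(catalog).encode('utf-8')

plain_token = base64.b64encode(raw)
compressed_token = base64.b64encode(zlib.compress(raw, 9))

# JSON text compresses well before base64 encoding; encrypted data would not.
print(len(plain_token), len(compressed_token))
assert len(compressed_token) < len(plain_token)
```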



 --John





 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

  The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
 fine if you measure you request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the
 unbounded size of of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are enroute and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 
  ___
  OpenStack-dev mailing list
 
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Meeting times for Juno

2014-05-21 Thread Zane Bitter
I know people are very interested in an update on this, so here's the 
plan such as it is.


I'd like to reassure everyone that we will definitely be keeping our 
original meeting time of Wednesdays at 2000 UTC, every second week 
(including this week).


I want to try out a new time for the alternate meetings, so let's bring 
it forward by 12 hours to Wednesdays at 1200 UTC. Unfortunately we'll 
lose the west coast of the US, but participation from there was not high 
anyway due to bad timing, and we'll gain folks in Europe. I'm also 
hoping that it will be at least as good or better for folks in Asia. The 
first meeting at this time will be next week.


I've reserved #openstack-meeting for this purpose at 
https://wiki.openstack.org/wiki/Meetings but from past experience we 
know that people sometimes don't book it yet still expect not to be 
kicked out, so we'll have to see how it goes ;) If you get lost, look in 
#heat.


We'll see how this works over the next few meetings at that time and 
re-evaluate.


If in doubt, check https://wiki.openstack.org/wiki/Meetings/HeatAgenda 
for times; I try to keep it up to date despite the provocation of a 
ridiculously short timeout on the auth cookies.


Today's meeting is at 2000 UTC - see y'all there :)

cheers,
Zane.



Re: [openstack-dev] [Neutron][DB] Weekly Meeting for DB migration sub-team

2014-05-21 Thread Henry Gessau
On 5/21/2014 11:40 AM, Kyle Mestery wrote:
 On Wed, May 21, 2014 at 10:12 AM, Henry Gessau ges...@gmail.com wrote:
 For Juno one of the most critical items in Neutron is the issue of
 broken DB migrations. Over the past few months some ad-hoc discussions
 have taken place. At the Atlanta summit some core team members and
 interested developers met at the Neutron pod and discussed the issue and
 what should be done about it.

 We have decided on a plan[1] and have formed a small sub-team. This team
 will hold a weekly meeting on IRC at 1300 UTC on Tuesdays[2].

 Thanks for setting this up Henry! The meeting page says Mondays, but
 your announcement says Tuesday. Can you clarify which day the meeting
 will be on?

Sorry, it's Tuesday. I have updated the meeting page.

 
 Thanks!
 Kyle
 
 Please attend if you have any questions or issues related to DB
 migrations. Team members (or anyone) please add any missing bugs to the
 meeting wiki[3].

 [1] https://etherpad.openstack.org/p/neutron-db-migrations
 [2] https://wiki.openstack.org/wiki/Meetings/NeutronDB
 [3] https://wiki.openstack.org/wiki/Meetings/NeutronDB#Bugs




Re: [openstack-dev] [oslo] strutils: enhance safe_decode() and safe_encode()

2014-05-21 Thread John Dennis
On 05/15/2014 11:41 AM, Victor Stinner wrote:
 Hi,
 
 The functions safe_decode() and safe_encode() have been ported to Python 3, 
 and changed more than once. IMO we can still improve these functions to make 
 them more reliable and easier to use.
 
 
 (1) My first concern is that these functions try to guess user expectation 
 about encodings. They use sys.stdin.encoding or sys.getdefaultencoding() as 
 the default encoding to decode, but this encoding depends on the locale 
 encoding (stdin encoding), on stdin (is stdin a TTY? is stdin mocked?), and 
 on 
 the Python major version.
 
 IMO the default encoding should be UTF-8 because most OpenStack components 
 expect this encoding.
 
 Or maybe users want to display data to the terminal, and so the locale 
 encoding should be used? In this case, locale.getpreferredencoding() would be 
 more reliable than sys.stdin.encoding.

The problem is you can't know the correct encoding to use until you know
the encoding of the IO stream, therefore I don't think you can correctly
write a generic encode/decode function. What if you're trying to send
the output to multiple IO streams potentially with different encodings?
Think that's far fetched? Nope, it's one of the nastiest and most common
problems in Python2. The default encoding differs depending on whether
the IO target is a tty or not. Therefore code that works fine when
written to the terminal blows up with encoding errors when redirected to
a file (because the TTY probably has UTF-8 and all other encodings
default to ASCII due to sys.defaultencoding).

Another problem is that Python2 default encoding is ASCII but in Python3
it's UTF-8 (IMHO the default encoding in Python2 should have been UTF-8;
the fact that it was set to ASCII is the cause of 99% of the encoding
exceptions in Python2).

Given that you don't know what the encoding of the IO stream is I don't
think you should base it on the locale nor sys.stdin. Rather I think we
should just agree everything is UTF-8. If that messes up someone's
terminal output I think it's fair to say if you're running OpenStack
you'll need to switch to UTF-8. Anything else requires way more
knowledge than we have available in a generic function. Solving this so
the encodings match for each and every IO stream is very complicated,
note Python3 still punts on this.
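A minimal sketch of the agree-everything-is-UTF-8 position, on Python 3 (this is not the actual oslo strutils code; the names and fallback defaults here are illustrative only):

```python
def safe_decode(text, incoming=None, errors='strict'):
    """Decode bytes to text, defaulting to UTF-8 rather than guessing
    from sys.stdin or the locale (the behavior argued against above)."""
    if isinstance(text, str):          # already text: nothing to do
        return text
    return text.decode(incoming or 'utf-8', errors)


def safe_encode(text, encoding='utf-8', errors='strict'):
    """Encode text to bytes with an explicit UTF-8 default."""
    if isinstance(text, bytes):        # already bytes: nothing to do
        return text
    return text.encode(encoding, errors)


print(safe_decode(b'caf\xc3\xa9'))     # café
print(safe_encode('café'))             # b'caf\xc3\xa9'
```

The point is simply that the default lives in one place and never depends on whether stdin is a TTY.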


-- 
John



Re: [openstack-dev] [Neutron] Is vendor CI tests pass mandatory to merging fixes upstream...

2014-05-21 Thread Fawad Khaliq
Hi Narasimhan,

You can find information about third party CIs and their owners in
DriverLog [1]. They should be the point of contact.

[1] http://stackalytics.com/report/driverlog

Thanks,
Fawad Khaliq

On Wed, May 21, 2014 at 8:38 AM, Kyle Mestery mest...@noironetworks.com wrote:

 On Wed, May 21, 2014 at 9:54 AM, Narasimhan, Vivekanandan
 vivekanandan.narasim...@hp.com wrote:
  Hi  Neutron’ers,
 
 
 
  Could you please let us know if all vendor CI tests must pass , If we
 need
  to merge a fix to
 
  the upstream master?
 
 Not necessarily required by policy, but debugging the failures and
 working with the CI owners is a good thing.

 
 
  For example , for this bug fix https://review.openstack.org/#/c/93624/
 
  Posted to upstream master for merge,  we see failure for Mellanox CI and
  Hyper-V CI.
 
 
 
  The UT failures don’t seem to relate to our fix, but it would be helpful
 for
  us to touch base
 
 with Mellanox CI and Hyper-V CI owners to go forward and get these
 failures
  resolved.
 
 I think reaching out on the list is the best place to look for help
 right now. We need to have contacts for these 3rd party CI systems so
 it's easier to reach out to people when they do fail, because a fair
 amount of the time these are proprietary systems and code submitters
 will need help of the CI owners to debug things.

 There's a weekly meeting OpenStack-wide [1] on third-party testing,
 for those who missed the earlier announcement as well.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting

 
 
  Could someone link us to them?
 
 
 
  --
 
  Thanks,
 
 
 
  Vivek
 
 
 
 
 
 
 
 
 
 
 
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Thanks Dolph and Lance for the info and links.


What concerns me, in general, about the current length of keystone tokens is 
that they are unbounded. And the proposed solutions don't change that pattern.

My understanding of why PKI tokens are used is so that the system doesn't have 
to call Keystone to authorize the request. This reduces the load on 
Keystone, but it adds significant overhead for every API request.

Keystone's first system was to use UUID bearer tokens. These are fixed length, 
small, cacheable, and require a call to Keystone once per cache period.

Moving to PKI tokens, we now have multi-kB headers that significantly increase 
the size of each request. Swift deployers commonly have small objects on the 
order of 50kB, so adding another ~10kB to each request, just to save a 
once-a-day call to Keystone (ie uuid tokens) seems to be a really high price to 
pay for not much benefit.
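The arithmetic behind those figures (the 8 kB token, 10,000 requests/sec, and 50 kB object sizes are the numbers quoted in this thread) can be checked in a few lines:

```python
# Back-of-the-envelope cost of shipping a PKI token with every request,
# using the figures quoted in this thread.
token_bytes = 8 * 1024        # ~8 kB token
requests_per_sec = 10_000     # "only" 10,000 requests/sec

token_bandwidth = token_bytes * requests_per_sec  # bytes/sec of pure token
print(token_bandwidth / 1e6)  # 81.92 -- i.e. "almost 80MB/sec"

object_bytes = 50 * 1024      # a typical small Swift object
overhead_pct = round(token_bytes / object_bytes * 100)
print(overhead_pct)           # 16 -- an extra ~16% on every small-object request
```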

The other benefit to PKI tokens is that services can make calls to other 
systems on behalf of the user (eg nova can call cinder for the user). This is 
great, but it's not the only usage pattern in OpenStack projects, and therefore 
I don't like optimizing for it at the expense of other patterns.

In addition to PKI tokens (ie signed+encoded service catalogs), I'd like to see 
Keystone support and remain committed to fixed-length bearer tokens or a 
signed-with-shared-secret auth mechanism (a la AWS).

--John




On May 21, 2014, at 9:09 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:
 Can you explain how PKI info is compressible? I thought it was encrypted, 
 which should mean you can't compress it right?
 
 They're not encrypted - just signed and then base64 encoded. The JSON (and 
 especially service catalog) is compressible prior to encoding.
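A minimal sketch of that point, using zlib and base64 from the standard library (the catalog below is made-up illustrative data, not the real token or Keystone catalog format):

```python
import base64
import json
import zlib

# Made-up, repetitive service catalog: the kind of JSON a token carries.
catalog = [
    {"type": "compute", "name": "nova",
     "endpoints": [{"region": "RegionOne",
                    "publicURL": "http://cloud.example.com:8774/v2/%d" % i}]}
    for i in range(20)
]
payload = json.dumps({"serviceCatalog": catalog}).encode("utf-8")

# Base64 alone grows the payload; compressing *before* encoding shrinks it.
plain_b64 = base64.urlsafe_b64encode(payload)
packed_b64 = base64.urlsafe_b64encode(zlib.compress(payload, 9))

print(len(plain_b64) > len(packed_b64))  # True -- repetitive JSON compresses well
```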
 
 
 
 --John
 
 
 
 
 
 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com 
 wrote:
 
  The keystone team is also looking at ways to reduce the data contained in 
  the token. Coupled with the compression, this should get the tokens back 
  down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max header 
  size to 16k to be able to support v3 keystone tokens.  That might be fine 
  if you measure you request rate in requests per minute, but this is 
  continuing to add significant overhead to swift.  Even if you *only* have 
  10,000 requests/sec to your swift cluster, an 8k token is adding almost 
  80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) 
  for services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the 
  unbounded size of the token in the header, and were told that uuid 
  style tokens would still be usable, but all I heard at the summit was to 
  not use them and that PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and 
  that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release cycle 
  by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are en route and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 





Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

2014-05-21 Thread Carlos Garza
   I'm crc32 on freenode.  My time zone is U.S. CST (UTC-5).
Let me know when we can clear this up. I need to know what the intent was for 
the Trusted certificates before we can decide what fields are needed for it.



On May 21, 2014, at 9:14 AM, Samuel Bercovici samu...@radware.com wrote:

Hi Carlos,

What is your IRC nick?
In what time zone are you located?

Regards,
-Sam.






From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
Sent: Wednesday, May 21, 2014 2:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS]LBaaS 2nd Session etherpad

I'm reading through the https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL 
docs as well as the https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7
document that you're referencing below, and I think whoever wrote the documents 
may have misunderstood the association between X509 certificates and private 
and public keys.
I think we should clean those up and unambiguously declare that.

A certificate shall be defined as a PEM encoded X509 certificate.
For example

Certificate:
-----BEGIN CERTIFICATE-----
   blah blah blah base64 stuff goes here
-----END CERTIFICATE-----

A private key shall be a PEM encoded private key that is not necessarily an 
RSA key. For example, it could be an elliptic curve key, but most likely it 
will be RSA.



A public key shall mean an actual PEM encoded public key and not the x509 
certificate that contains it. For example:
-----BEGIN PUBLIC KEY-----
blah blah blah base64 stuff goes here
-----END PUBLIC KEY-----

A Private key shall mean a PEM encoded private key.
Example
-----BEGIN RSA PRIVATE KEY-----
blah blah blah base64 goes here.
-----END RSA PRIVATE KEY-----

Also the same key could be encoded as pkcs8

-----BEGIN PRIVATE KEY-----
base64 stuff here
-----END PRIVATE KEY-----
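Since the rest of this mail hinges on keeping these terms straight, here is a tiny illustrative helper (not a proposed API) that classifies a PEM blob by its header:

```python
# Illustrative helper: tell certificates, public keys and private keys
# apart by their PEM header, so the terms aren't used interchangeably.
PEM_TYPES = {
    "CERTIFICATE": "x509 certificate",
    "PUBLIC KEY": "public key",
    "RSA PRIVATE KEY": "private key (PKCS#1)",
    "PRIVATE KEY": "private key (PKCS#8)",
}


def classify_pem(blob):
    """Return a human-readable kind for a PEM-encoded blob."""
    for label, kind in PEM_TYPES.items():
        if blob.strip().startswith("-----BEGIN %s-----" % label):
            return kind
    return "unknown"


print(classify_pem("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"))
# x509 certificate
```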

I would think that we should allow for PKCS8 so that users are not restricted 
to PKCS1 RSA keys via BEGIN PRIVATE KEY. I'm ok with forcing the user to not 
use PKCS8 to send both
the certificate and key.

There seems to be confusion in the neutron-lbaas-ssl-l7 etherpad doc as well 
as the doc at URL https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7: the 
terms public key and certificate are being used interchangeably.

For example, in the wiki page, under Resource change:
SSL certificate (new) declares

certificate_chain : list of PEM-formatted public keys, not mandatory
This should be changed to
certificate_chain: list of PEM-formatted x509 certificates, not mandatory

Also in the CLI portion of the doc there are entries like
neutron ssl-certificate-create --public-key CERTIFICATE-FILE --private-key 
PRIVATE-KEY-FILE --passphrase PASSPHRASE --cert-chain 
INTERMEDIATE-KEY-FILE-1, INTERMEDIATE-KEY-FILE-2 certificate name
The option --public-key should be changed to --cert since it specifies the 
X509 certificate. Also the names INTERMEDIATE-KEY-FILE-1 etc. should be 
changed to INTERMEDIATE-CERT-FILE-1 since these are x509 certificates and not 
keys.


The below line makes no sense to me.
neutron ssl-trusted-certificate-create --key PUBLIC-KEY-FILE key name

Are you trying to give the certificate a name? We also will never need to work 
with public keys in general, as the public key can be extracted from the x509 
or the private key file.
Or was the intent to use ssl-trusted-certificates to specify the private keys 
that the load balancer will use when communicating with back end servers that 
are doing client auth?

The rationale portion of the doc declares that trusted certificates are for 
back end encryption but doesn't mention whether this is for client auth 
either. Was the intent to use a specific key for the SSL session between the 
load balancer and the back end server, or was the intention to advertise the 
client cert to the backend server so that the back end server can 
authenticate with whatever CA it (the server) trusts?

In either case both the private key and the certificate or chain should be 
used in this configuration, since the load balancer needs the private key 
during the SSL session.
The command should look something along the lines of
neutron ssl-trusted-certificate-create --key PRIVATE_KEY_FILE --cert 
CERTIFICATE-FILE.


I would like to help out with this, but I need to know the intent of the 
person who initially interchanged the terms key and certificate, and it's much 
better to fix this sooner rather than later.


On May 15, 2014, at 10:58 PM, Samuel Bercovici samu...@radware.com wrote:

Hi Everyone,

https://etherpad.openstack.org/p/neutron-lbaas-ssl-l7

Feel free to modify and update, please make sure you use your name so we will 
know who have added the modification.

Regards,
-Sam.


Re: [openstack-dev] [Neutron] Core API refactoring

2014-05-21 Thread Mark McClain

On May 21, 2014, at 10:23 AM, Mandeep Dhami dh...@noironetworks.com wrote:

Hi Sean:

While the APIs might not be changing*, I suspect that there are significant 
design decisions being made**. These changes are probably more significant than 
any new feature being discussed. As a community, are we expected to document 
these design changes and review these changes as well?

There was a bit of high level discussion needed to ensure community consensus 
before we could dive into the details.  The actual changes will be documented 
according to Neutron’s spec process.


I am still trying to figure out what Neutron's review standards are. On one 
hand, I am seeing code review comments that reject a patch for cosmetic changes 
(like a name change from what was in the reviewed blueprint); on the other, an 
attitude that something as core and central to neutron as refactoring and a 
major API update to v3 does not need a design document/review.

It is my opinion, and my recommendation, that the proposed changes be 
documented and reviewed by same standard that we have for other features.

That has been the plan all along to follow the spec process as with all other 
changes to Neutron.


* I believe that v3 API is being introduced and changes are being made, but I 
might have misunderstood.

It is important to note that the changes are to the code level interfaces 
within the neutron-server executable and no user facing REST changes.  The 
intent is to keep compatibility with existing V2 plugins while moving towards a 
new V3 plugin architecture.  Our dev cycle is really short, so inserting this 
new layer will be in preparation for a formal V3 definition declared during the 
K cycle.  The discussion intentionally avoided any changes to logical and/or db 
models for this very reason.


** I was under the impression that in addition to the Pecan updates, there was 
going to be refactoring to use taskflow as well. And that I expect to have 
significant control flow impact, and that is what I really wanted to review.

Moving towards tasks is actually independent of the changes to the REST layer.  
Tasks will refactor the implementation of the actual plugins, but should not 
require any cooperation with the REST layer.

mark



Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-21 Thread Kanzhe Jiang
+1


On Wed, May 21, 2014 at 7:56 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 Hi,

 The session that we had on the Quality of Service API extension was well
 attended - I would like to keep the momentum going by proposing a weekly
 IRC meeting.

 How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

 --
 Sean M. Collins




-- 
Kanzhe Jiang
MTS at BigSwitch


Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using external attachment point

2014-05-21 Thread Stig Telfer
Our team here has been looking at something closely related that may be of 
interest.  There seems to be good scope for collaboration.

Here’s our proposal, which includes support for bare metal networking with the 
VLAN mechanism driver:

https://blueprints.launchpad.net/neutron/+spec/ml2-mechanism-snmp-vlan

Our project is a point solution at your step 6.  The rest of the workflow looks 
complementary and solves the unanswered questions in our bp proposal.  As 
indeed would the neutron-external-ports spec.

Best wishes,
Stig Telfer
Cray Inc.


From: Russell Haering [mailto:russellhaer...@gmail.com]
Sent: Tuesday, May 20, 2014 11:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Neutron] - Integration with neutron using 
external attachment point

We've been experimenting some with how to use Neutron with Ironic here at 
Rackspace.

Our very experimental code: https://github.com/rackerlabs/ironic-neutron-plugin

Our objective is the same as what you're describing, to allow Nova servers 
backed by Ironic to attach to arbitrary Neutron networks. We're initially 
targeting VLAN-based networks only, but eventually want to do VXLAN from the 
top-of-rack switches, controlled via an SDN controller.

Our approach is a little different than what you're describing though. Our 
objective is to modify the existing Nova - Neutron interaction as little as 
possible, which means approaching the problem by thinking "how would an L2 
agent do this?".

The workflow looks something like:

1. Nova calls Neutron to create a virtual port. Because this happens _before_ 
Nova touches the virt driver, the port is at this point identical to one 
created for a virtual server.
2. Nova executes the spawn method of the Ironic virt driver, which makes some 
calls to Ironic.
3. Inside Ironic, we know about the physical switch ports that the selected 
Node is connected to. This information is discovered early-on using LLDP and 
stored in the Ironic database.
4. We actually need the node to remain on an internal provisioning VLAN for 
most of the provisioning process, but once we're done with on-host work we turn 
the server off.
5. Ironic deletes a Neutron port that was created at bootstrap time to trunk 
the physical switch ports for provisioning.
6. Ironic updates each of the customer's Neutron ports with information about 
its physical switch port.
7. Our Neutron extension configures the switches accordingly.
8. Then Ironic brings the server back up.

The destroy process basically does the reverse. Ironic removes the physical 
switch mapping from the Neutron ports, re-creates an internal trunked port, 
does some work to tear down the server, then passes control back to Nova. At 
that point Nova can do what it wants with the Neutron ports. Hypothetically 
that could include allocating them to a different Ironic Node, etc, although in 
practice it just deletes them.

Again, this is all very experimental in nature, but it seems to work fairly 
well for the use-cases we've considered. We'd love to find a way to collaborate 
with others working on similar problems.

Thanks,
Russell

On Tue, May 20, 2014 at 7:17 AM, Akihiro Motoki amot...@gmail.com wrote:
# Added [Neutron] tag as well.

Hi Igor,

Thanks for the comment. We already know them as I commented
in the Summit session and ML2 weekly meeting.
Kevin's blueprint now covers Ironic integration and layer2 network gateway
and I believe campus-network blueprint will be covered.

We think the work can be split into generic API definition and implementations
(including ML2). In the external attachment point blueprint review, the API 
and generic topics have mainly been discussed so far, and the implementation 
details have not been discussed much yet. The ML2 implementation details can 
be discussed later (separately or as a part of the blueprint review).

I am not sure what changes are proposed in Blueprint [1].
AFAIK an SDN/OpenFlow controller based approach can support this,
but how can we achieve this in the existing open source implementation?
I am also interested in the ML2 implementation detail.

Anyway more input will be appreciated.

Thanks,
Akihiro

On Tue, May 20, 2014 at 7:13 PM, Igor Cardoso igordc...@gmail.com wrote:
 Hello Kevin.
 There is a similar Neutron blueprint [1], originally meant for Havana but
 now aiming for Juno.
 I would be happy to join efforts with you regarding our blueprints.
 See also: [2].

 [1] https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
 [2] https://blueprints.launchpad.net/neutron/+spec/campus-network


 On 19 May 2014 23:52, Kevin Benton blak...@gmail.com wrote:

 Hello,

 I am working on an extension for neutron to allow external attachment
 point information to be stored and used by backend plugins/drivers to place
 switch ports into neutron networks[1].

 One of the primary use cases is to integrate ironic with neutron. The
 basic 

Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-21 Thread Igor Kalnitsky
 So, write:

 LOG.debug(u'Could not do whatever you asked: %s', exc)

 or just:

 LOG.debug(exc)

Actually, it's a bad idea to pass an exception instance to
a log function like LOG.debug(exc). Let me show you why.

Here a snippet from logging.py:

    def getMessage(self):
        if not _unicode:
            msg = str(self.msg)
        else:
            msg = self.msg
            if not isinstance(msg, basestring):
                try:
                    msg = str(self.msg)
                except UnicodeError:
                    msg = self.msg  # we keep the exception object as it is
        if self.args:               # this condition is obviously False here
            msg = msg % self.args
        return msg                  # returns an exception object, not text

And here another snippet from the format() method:

    record.message = record.getMessage()
    # ... some time formatting ...
    s = self._fmt % record.__dict__  # FAIL

the old string formatting will call str(), not unicode() and we will FAIL
with UnicodeEncodeError.
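For reference, here is the lazy %-formatting pattern from Doug's suggestion, shown self-contained on Python 3 (where the pitfall described above no longer applies):

```python
import io
import logging

# Capture log output in a buffer so the example is self-contained.
buf = io.StringIO()
log = logging.getLogger("demo")
log.addHandler(logging.StreamHandler(buf))
log.setLevel(logging.DEBUG)

try:
    raise ValueError("boom")
except ValueError as exc:
    # Safe pattern: pass the exception as a %-argument and let the
    # logging machinery turn it into text at formatting time.
    log.debug("Could not do whatever you asked: %s", exc)

print(buf.getvalue().strip())  # Could not do whatever you asked: boom
```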



On Wed, May 21, 2014 at 6:38 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 On Thu, May 15, 2014 at 11:29 AM, Victor Stinner
 victor.stin...@enovance.com wrote:
  Hi,
 
  I'm trying to define some rules to port OpenStack code to Python 3. I
 just
  added a section in the Port Python 2 code to Python 3 about formatting
  exceptions and the logging module:
 
 https://wiki.openstack.org/wiki/Python3#logging_module_and_format_exceptions
 
  The problem is that I don't know what is the best syntax to log
 exceptions.
  Some projects convert the exception to Unicode, others use str(). I also
 saw
  six.u(str(exc)) which is wrong IMO (it can raise unicode error if the
 message
  contains a non-ASCII character).
 
  IMO the safest option is to use str(exc). For example, use
  LOG.debug(str(exc)).
 
  Is there a reason to log the exception as Unicode on Python 2?

 Exception classes that define translatable strings may end up with
 unicode characters that can't be converted to the default encoding
 when str() is called. It's better to let the logging code handle the
 conversion from an exception object to a string, since the logging
 code knows how to deal with unicode properly.

 So, write:

 LOG.debug(u'Could not do whatever you asked: %s', exc)

 or just:

 LOG.debug(exc)

 instead of converting explicitly.

 Doug

 
  Victor
 


Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-21 Thread Anil Rao
Is this discussion going to happen in today’s meeting?

-Anil

From: Stephen Wong [mailto:s3w...@midokura.com]
Sent: Saturday, May 17, 2014 11:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension 
in Neutron

Hi Sumit,

Srinivasa has agreed to join the advanced service meeting to kick off the 
discussion on this topic at our regular meeting[1]. Looking forward to working 
with him and others that are interested in getting this tap service effort 
started in Neutron.

Thanks,
- Stephen


[1] 
https://wiki.openstack.org/wiki/Meetings#Neutron_Advanced_Services.27_Common_requirements_team_meeting

On Sat, May 17, 2014 at 9:05 PM, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:
Hi, Unfortunately I could not participate in this discussion. As
requested in this thread earlier, it would be good to get a summary of
the discussion.

We, in the advanced services team in Neutron, have long discussed[1]
the possibility of accommodating a tap service. So I would like to
understand if/how this discussion is aligning with that goal.

Thanks,
~Sumit.

[1] 
https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering

On Thu, May 15, 2014 at 6:52 PM, Anil Rao anil@gigamon.com wrote:
 See you all there tomorrow.



 Regards,

 Anil



 From: Vinay Yadhav [mailto:vinayyad...@gmail.com]
 Sent: Thursday, May 15, 2014 12:51 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring
 Extension in Neutron



 Hi,



 Booked a slot tomorrow at 9:20 AM at the neutron pod.





 Cheers,

 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}



 On Thu, May 15, 2014 at 2:50 PM, Stephen Wong s3w...@midokura.com wrote:

 Hi Vinay,



 I am interested. Please sign up a slot on Neutron pod for tomorrow
 (Friday) and announce the timeslot to the ML.



 Thanks,

 - Stephen





 On Thu, May 15, 2014 at 7:13 AM, Vinay Yadhav vinayyad...@gmail.com wrote:

 Hi,



 I am Vinay, working with Ericsson.



 I am interested in the following blueprint regarding port mirroring
 extension in neutron:
 https://blueprints.launchpad.net/neutron/+spec/port-mirroring



 I am close to finishing an implementation for this extension in OVS plugin
 and would be submitting a neutron spec related to the blueprint soon.



 I would like to know other who are also interested in introducing Port
 Mirroring extension in neutron.



 It would be great if we can discuss and collaborate in development and
 testing this extension



 I am currently attending the OpenStack Summit in Atlanta, so if any of you
 are interested in the blueprint, we can meet here in the summit and discuss
 how to proceed with the blueprint.



 Cheers,

 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}





Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-21 Thread Vinay Yadhav
Hi,

I am attaching the first version of the neutron spec for Tap-as-a-Service
(Port Mirroring).

It will be formally committed to git soon.

Cheers,
main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}


On Tue, May 20, 2014 at 7:12 AM, Kanzhe Jiang kanzhe.ji...@bigswitch.com wrote:

 Vinay's proposal was based on OVS's mirroring feature.


 On Mon, May 19, 2014 at 9:11 PM, YAMAMOTO Takashi yamam...@valinux.co.jp wrote:

  Hi,
 
  I am Vinay, working with Ericsson.
 
  I am interested in the following blueprint regarding port mirroring
  extension in neutron:
  https://blueprints.launchpad.net/neutron/+spec/port-mirroring
 
  I am close to finishing an implementation for this extension in OVS
 plugin
  and would be submitting a neutron spec related to the blueprint soon.

 does your implementation use OVS' mirroring functionality?
 or is it flow-based?

 YAMAMOTO Takashi

 
  I would like to know other who are also interested in introducing Port
  Mirroring extension in neutron.
 
  It would be great if we can discuss and collaborate in development and
  testing this extension
 
  I am currently attending the OpenStack Summit in Atlanta, so if any of
 you
  are interested in the blueprint, we can meet here in the summit and
 discuss
  how to proceed with the blueprint.
 
  Cheers,
  main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kanzhe Jiang
 MTS at BigSwitch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Tap-as-a-Service.rst
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Manual VM migration

2014-05-21 Thread Naveed Ahmad
Hi community,

I need some help from you. OpenStack provides hot (live) and cold
(offline) migration between clusters/compute nodes. However, I am interested
in migrating a virtual machine from one OpenStack cloud to another. Is that
possible? This is inter-cloud VM migration, not inter-cluster or inter-compute.

I need help and suggestions regarding this kind of VM migration. I have tried
to manually migrate a VM from one OpenStack cloud to another, but with no
success yet.

Please guide me!

Regards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-21 Thread Vinay Yadhav
Hi,

Yes, it is happening now.

Follow: https://wiki.openstack.org/wiki/Meetings/AdvancedServices

Cheers,

main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}


On Wed, May 21, 2014 at 7:26 PM, Anil Rao anil@gigamon.com wrote:

 Is this discussion going to happen in today’s meeting?



 -Anil



 *From:* Stephen Wong [mailto:s3w...@midokura.com]
 *Sent:* Saturday, May 17, 2014 11:31 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring
 Extension in Neutron



 Hi Sumit,



 Srinivasa has agreed to join the advanced service meeting to kick off
 the discussion on this topic at our regular meeting[1]. Looking forward to
 working with him and others that are interested in getting this tap
 service effort started in Neutron.



 Thanks,

 - Stephen





 [1]
 https://wiki.openstack.org/wiki/Meetings#Neutron_Advanced_Services.27_Common_requirements_team_meeting



 On Sat, May 17, 2014 at 9:05 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
 wrote:

 Hi, Unfortunately I could not participate in this discussion. As
 requested in this thread earlier, it would be good to get a summary of
 the discussion.

 We, in the advanced services team in Neutron, have long discussed[1]
 the possibility of accommodating a tap service. So I would like to
 understand if/how this discussion is aligning with that goal.

 Thanks,
 ~Sumit.

 [1]
 https://blueprints.launchpad.net/neutron/+spec/neutron-services-insertion-chaining-steering


 On Thu, May 15, 2014 at 6:52 PM, Anil Rao anil@gigamon.com wrote:
  See you all there tomorrow.
 
 
 
  Regards,
 
  Anil
 
 
 
  From: Vinay Yadhav [mailto:vinayyad...@gmail.com]
  Sent: Thursday, May 15, 2014 12:51 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring
  Extension in Neutron
 
 
 
  Hi,
 
 
 
  Booked a slot tomorrow at 9:20 AM at the neutron pod.
 
 
 
 
 
  Cheers,
 
  main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}
 
 
 
  On Thu, May 15, 2014 at 2:50 PM, Stephen Wong s3w...@midokura.com
 wrote:
 
  Hi Vinay,
 
 
 
  I am interested. Please sign up a slot on Neutron pod for tomorrow
  (Friday) and announce the timeslot to the ML.
 
 
 
  Thanks,
 
  - Stephen
 
 
 
 
 
  On Thu, May 15, 2014 at 7:13 AM, Vinay Yadhav vinayyad...@gmail.com
 wrote:
 
  Hi,
 
 
 
  I am Vinay, working with Ericsson.
 
 
 
  I am interested in the following blueprint regarding port mirroring
  extension in neutron:
  https://blueprints.launchpad.net/neutron/+spec/port-mirroring
 
 
 
  I am close to finishing an implementation for this extension in OVS
 plugin
  and would be submitting a neutron spec related to the blueprint soon.
 
 
 
  I would like to know other who are also interested in introducing Port
  Mirroring extension in neutron.
 
 
 
  It would be great if we can discuss and collaborate in development and
  testing this extension
 
 
 
  I am currently attending the OpenStack Summit in Atlanta, so if any of
 you
  are interested in the blueprint, we can meet here in the summit and
 discuss
  how to proceed with the blueprint.
 
 
 
  Cheers,
 
  main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension in Neutron

2014-05-21 Thread Anil Rao
Thanks Vinay. I’ll review the spec and get back with my comments soon.

-Anil

From: Vinay Yadhav [mailto:vinayyad...@gmail.com]
Sent: Wednesday, May 21, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Openstack-dev][Neutron] Port Mirroring Extension 
in Neutron

Hi,

I am attaching the first version of the neutron spec for Tap-as-a-Service (Port 
Mirroring).

It will be formally committed to git soon.

Cheers,
main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}

On Tue, May 20, 2014 at 7:12 AM, Kanzhe Jiang 
kanzhe.ji...@bigswitch.com wrote:
Vinay's proposal was based on OVS's mirroring feature.

On Mon, May 19, 2014 at 9:11 PM, YAMAMOTO Takashi 
yamam...@valinux.co.jp wrote:
 Hi,

 I am Vinay, working with Ericsson.

 I am interested in the following blueprint regarding port mirroring
 extension in neutron:
 https://blueprints.launchpad.net/neutron/+spec/port-mirroring

 I am close to finishing an implementation for this extension in OVS plugin
 and would be submitting a neutron spec related to the blueprint soon.
does your implementation use OVS' mirroring functionality?
or is it flow-based?

YAMAMOTO Takashi


 I would like to know other who are also interested in introducing Port
 Mirroring extension in neutron.

 It would be great if we can discuss and collaborate in development and
 testing this extension

 I am currently attending the OpenStack Summit in Atlanta, so if any of you
 are interested in the blueprint, we can meet here in the summit and discuss
 how to proceed with the blueprint.

 Cheers,
 main(i){putchar((5852758((i-1)/2)*8)-!(1i)*'\r')^89main(++i);}
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kanzhe Jiang
MTS at BigSwitch

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-21 Thread Aditya Thatte
Hi,

What kind of errors are you getting? Can you give more details about what
you have tried?


On Wed, May 21, 2014 at 11:02 PM, Naveed Ahmad
12msccsnah...@seecs.edu.pk wrote:


 Hi community,

 I need some help from you people. Openstack provides Hot (Live) and Cold
 (Offline) migration between clusters/compute. However i am interested to
 migrate Virtual Machine from one OpenStack Cloud to another.  is it
 possible ?  It is inter cloud VM migration not inter cluster or compute.

 I need help and suggestion regarding VM migration. I tried to manually
 migrate VM from one OpenStack Cloud to another but no success yet.

 Please guide me!

 Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Aditya Thatte
BrainChamber Research
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-21 Thread Diego Parrilla Santamaría
Hi Naveed,

we have customers running VMs in their own private cloud who are migrating
to our new public cloud offering. To be honest I would love to have a
better way to do it, but this is how we do it. We have developed a tiny
script that basically performs the following actions:

1) Take a snapshot of the VM from the source Private Cloud
2) Halt the source VM (optional, but good for state consistency)
3) Download the snapshot from source Private Cloud
4) Upload the snapshot to target Public Cloud
5) Start a new VM using the uploaded image in the target public cloud
6) Allocate a floating IP and attach it to the VM
7) Change DNS to point to the new floating IP
8) Perform some cleanup processes (delete source VM, deallocate its
floating IP, delete snapshot from source...)

A bit rudimentary, but it works right away as long as your VM does not have
attached volumes.

Still, I would love to hear some sexy and direct way to do it.
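The eight steps above can be sketched as a small driver function. Note that the two client objects and all their method names below are hypothetical stand-ins for whatever nova/glance calls your script wraps, not a real OpenStack SDK:

```python
# A minimal sketch of the inter-cloud migration steps described above.
# src_cloud / dst_cloud are assumed to be thin wrappers around the source
# and target clouds' APIs; every method name here is illustrative only.
def migrate_vm(src_cloud, dst_cloud, server_id, name):
    image_id = src_cloud.snapshot(server_id)       # 1) snapshot the source VM
    src_cloud.stop(server_id)                      # 2) halt for state consistency
    blob = src_cloud.download(image_id)            # 3) download the snapshot
    new_image = dst_cloud.upload(name, blob)       # 4) upload it to the target
    new_server = dst_cloud.boot(name, new_image)   # 5) boot from the image
    ip = dst_cloud.assign_floating_ip(new_server)  # 6) allocate/attach floating IP
    # 7) DNS update and 8) source-side cleanup would follow here
    return new_server, ip
```

As in the thread, this only handles boot-from-image VMs; attached volumes would need their own copy step.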

Regards
Diego

 --
Diego Parrilla
*CEO*
*www.stackops.com* | diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




On Wed, May 21, 2014 at 7:32 PM, Naveed Ahmad 12msccsnah...@seecs.edu.pk wrote:


 Hi community,

 I need some help from you people. Openstack provides Hot (Live) and Cold
 (Offline) migration between clusters/compute. However i am interested to
 migrate Virtual Machine from one OpenStack Cloud to another.  is it
 possible ?  It is inter cloud VM migration not inter cluster or compute.

 I need help and suggestion regarding VM migration. I tried to manually
 migrate VM from one OpenStack Cloud to another but no success yet.

 Please guide me!

 Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-21 Thread Ben Nemec
On 05/14/2014 09:38 AM, Victor Stinner wrote:
 On Tuesday, May 13, 2014 at 07:31:34, Doug Hellmann wrote:
 Since we think we have been able to solve all of the issues we were
 having with namespace packages before, ...
 
 I just tried to start my DevStack and again, I had issues with a built-in 
 oslo module: import oslo.config doesn't work, whereas oslo.config was 
 installed (system-wide) by pip.
 
 pip list|grep oslo told me that oslo.config, oslo.messaging, oslo.rootwrap 
 and oslo.vmware are installed.
 
 My workaround is to uninstall all oslo modules:
 sudo pip uninstall oslo.config oslo.messaging oslo.rootwrap oslo.vmware
 
 ./stack.sh reinstalls them and now it works.

One of the parts of the fix was to have Devstack stop installing oslo.*
packages as editable, so if you first ran it before that changed then
this makes sense.

 
 --
 
 Current state:
 
 haypo@devstackdev$ pip list|grep oslo
 oslo.config (1.3.0a0.40.gb347519)
 oslo.messaging (1.3.0.8.gc0c8557)
 oslo.rootwrap (1.2.0)
 oslo.vmware (0.3.1.g49097c0)
 
 haypo@devstackdev$ python
 Python 2.7.5 (default, Feb 19 2014, 13:47:28) 
 [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import oslo
 >>> oslo
 <module 'oslo' (built-in)>
 >>> import oslo.config
 >>> import oslo.messaging
 >>> import oslo.rootwrap
 >>> import oslo.vmware
 
 
 I never understood how these .pth files work.
 
 haypo@devstackdev$ cd /usr/lib/python2.7/site-packages
 
 haypo@devstackdev$ ls oslo*.pth -1
 oslo.config-1.3.0a0.40.gb347519-py2.7-nspkg.pth
 oslo.messaging-1.3.0.8.gc0c8557-py2.7-nspkg.pth
 oslo.rootwrap-1.2.0-py2.7-nspkg.pth
 oslo.vmware-0.3.1.g49097c0-py2.7-nspkg.pth
 
 haypo@devstackdev$ md5sum oslo*.pth 
 002fd4bf040a30d396d4df8e1ed378a8  oslo.config-1.3.0a0.40.gb347519-py2.7-
 nspkg.pth
 002fd4bf040a30d396d4df8e1ed378a8  oslo.messaging-1.3.0.8.gc0c8557-py2.7-
 nspkg.pth
 002fd4bf040a30d396d4df8e1ed378a8  oslo.rootwrap-1.2.0-py2.7-nspkg.pth
 002fd4bf040a30d396d4df8e1ed378a8  oslo.vmware-0.3.1.g49097c0-py2.7-nspkg.pth
 
 haypo@devstackdev$ cat oslo.config-1.3.0a0.40.gb347519-py2.7-nspkg.pth
 import sys,types,os; p = os.path.join(sys._getframe(1).f_locals['sitedir'], 
 *('oslo',)); ie = os.path.exists(os.path.join(p,'__init__.py')); m = not ie 
 and sys.modules.setdefault('oslo',types.ModuleType('oslo')); mp = (m or []) 
 and m.__dict__.setdefault('__path__',[]); (p not in mp) and mp.append(p)
 
 Victor
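For anyone else puzzled by the same question: the nspkg.pth one-liner quoted above unrolls to roughly the following sketch. site.py executes the import line in .pth files at interpreter startup; the sitedir path below is just the one from this report and is otherwise arbitrary:

```python
import os
import sys
import types

# Unrolled version of an oslo *-nspkg.pth file. The real file is a single
# line; the sitedir here is a hypothetical example path.
sitedir = "/usr/lib/python2.7/site-packages"
p = os.path.join(sitedir, "oslo")

# If site-packages/oslo has no __init__.py, fabricate an "oslo" namespace
# module in sys.modules and append the directory to its __path__, so that
# "import oslo.config" and friends can locate the sub-packages.
if not os.path.exists(os.path.join(p, "__init__.py")):
    m = sys.modules.setdefault("oslo", types.ModuleType("oslo"))
    mp = m.__dict__.setdefault("__path__", [])
    if p not in mp:
        mp.append(p)
```

In other words, each oslo.* package ships the same shim so that whichever installs first creates the shared `oslo` namespace module.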
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Manual VM migration

2014-05-21 Thread Aditya Thatte
Hi Diego, we have a push-button technology that does all of that, plus a
few additional things. Please take a look at our talk here:
https://www.youtube.com/watch?v=NW_D9bEsHAk&feature=shared


On Wed, May 21, 2014 at 11:17 PM, Diego Parrilla Santamaría 
diego.parrilla.santama...@gmail.com wrote:

 Hi Naveed,

 we have customers running VMs in their own Private Cloud that are
 migrating to our new Public Cloud offering. To be honest I would love to
 have a better way to do it, but this is how we do. We have developed a tiny
 script that basically performs the following actions:

 1) Take a snapshot of the VM from the source Private Cloud
 2) Halts the source VM (optional, but good for state consistency)
  3) Download the snapshot from source Private Cloud
 4) Upload the snapshot to target Public Cloud
 5) Start a new VM using the uploaded image in the target public cloud
 6) Allocate a floating IP and attach it to the VM
 7) Change DNS to point to the new floating IP
 8) Perform some cleanup processes (delete source VM, deallocate its
 floating IP, delete snapshot from source...)

 A bit rudimentary, but it works if your VM does not have attached volumes
 right away.

 Still, I would love to hear some sexy and direct way to do it.

 Regards
 Diego

  --
 Diego Parrilla
 *CEO*
 *www.stackops.com* | diego.parri...@stackops.com | +34 91 005-2164 | skype:diegoparrilla




 On Wed, May 21, 2014 at 7:32 PM, Naveed Ahmad 
  12msccsnah...@seecs.edu.pk wrote:


 Hi community,

 I need some help from you people. Openstack provides Hot (Live) and Cold
 (Offline) migration between clusters/compute. However i am interested to
 migrate Virtual Machine from one OpenStack Cloud to another.  is it
 possible ?  It is inter cloud VM migration not inter cluster or compute.

 I need help and suggestion regarding VM migration. I tried to manually
 migrate VM from one OpenStack Cloud to another but no success yet.

 Please guide me!

 Regards


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Aditya Thatte
BrainChamber Research
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
 adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3][IPAM] Team Meeting Thursday at 1500 UTC

2014-05-21 Thread Carl Baldwin
Great work at the summit.  Let's meet tomorrow at the regular time in
#openstack-meeting-3 to discuss following up on action items that came
out of our discussions.  The agenda is mostly up but I will add a few
more updates later today.

* new topic: IPAM *  I'm adding a new topic to the agenda out of the
high level of interest that was shown at the summit in the area of
IPAM.  We will discuss the long list of blueprints that have been
filed, an initial straw man interface definition for pluggable IPAM,
coordination with the refactoring efforts that are already underway,
and potential improvements to the current IPAM implementation in
Neutron.  Please review the etherpad [2] from the pod discussion and
come join us at the meeting.

Carl Baldwin
Neutron L3 Subteam

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Agenda
[2] https://etherpad.openstack.org/p/ipam_pod

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Chalenges with highly available service VMs

2014-05-21 Thread Praveen Yalagandula
Hi Aaron,

I reported it as a bug with a bit more detail:
https://bugs.launchpad.net/neutron/+bug/1321864. The report has examples
showing the incompleteness of the overlap check due to the CIDR notation
allowed in the allowed-address-pairs API.

Cheers,
Praveen


On Tue, May 20, 2014 at 7:54 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 arosen@arosen-MacBookPro:~/devstack$ neutron port-show f5117013-ac04-45af-a5d6-e9110213ad6f
 +-----------------------+----------------------------------------------------------------------------------+
 | Field                 | Value                                                                            |
 +-----------------------+----------------------------------------------------------------------------------+
 | admin_state_up        | True                                                                             |
 | allowed_address_pairs |                                                                                  |
 | binding:vnic_type     | normal                                                                           |
 | device_id             | 99012a6c-a5ed-41c0-92c9-14d40af0c2df                                             |
 | device_owner          | compute:None                                                                     |
 | extra_dhcp_opts       |                                                                                  |
 | fixed_ips             | {"subnet_id": "505eb39c-32dc-4fe7-a497-f801a0677c54", "ip_address": "10.0.0.22"} |
 | id                    | f5117013-ac04-45af-a5d6-e9110213ad6f                                             |
 | mac_address           | fa:16:3e:77:4d:2d                                                                |
 | name                  |                                                                                  |
 | network_id            | 1b069199-bfa4-4efc-aebd-4a663d447964                                             |
 | security_groups       | 0d5477cf-f63a-417e-be32-a12557fa4098                                             |
 | status                | ACTIVE                                                                           |
 | tenant_id             | c71ebe8d1f6e47bab7d44046ec2f6b39                                                 |
 +-----------------------+----------------------------------------------------------------------------------+
 arosen@arosen-MacBookPro:~/devstack$ neutron port-update f5117013-ac04-45af-a5d6-e9110213ad6f \
     --allowed-address-pairs list=true type=dict ip_address=10.0.0.0/24
 Updated port: f5117013-ac04-45af-a5d6-e9110213ad6f
 arosen@arosen-MacBookPro:~/devstack$ neutron port-show f5117013-ac04-45af-a5d6-e9110213ad6f
 +-----------------------+----------------------------------------------------------------------------------+
 | Field                 | Value                                                                            |
 +-----------------------+----------------------------------------------------------------------------------+
 | admin_state_up        | True                                                                             |
 | allowed_address_pairs | {"ip_address": "10.0.0.0/24", "mac_address": "fa:16:3e:77:4d:2d"}                |
 | binding:vnic_type     | normal                                                                           |
 | device_id             | 99012a6c-a5ed-41c0-92c9-14d40af0c2df                                             |
 | device_owner          | compute:None                                                                     |
 | extra_dhcp_opts       |                                                                                  |
 | fixed_ips             | {"subnet_id": "505eb39c-32dc-4fe7-a497-f801a0677c54", "ip_address": "10.0.0.22"} |
 | id                    | f5117013-ac04-45af-a5d6-e9110213ad6f                                             |
 | mac_address           | fa:16:3e:77:4d:2d                                                                |
 | name                  |                                                                                  |
 | network_id            | 1b069199-bfa4-4efc-aebd-4a663d447964                                             |
 | security_groups       | 0d5477cf-f63a-417e-be32-a12557fa4098                                             |
 | status                | ACTIVE                                                                           |
 | tenant_id             | c71ebe8d1f6e47bab7d44046ec2f6b39                                                 |
 +-----------------------+----------------------------------------------------------------------------------+



 On Tue, May 20, 2014 at 7:52 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Praveen,

 I think there is some confusion here. This function doesn't check if
 there is any overlap that occurs within the cidr block. It only checks that
 the fixed_ips+mac don't overlap with an allowed address pair. In your
 example if the host has an ip_address of 10.10.1.1 and you want to allow
 any ip in 10.10.1.0/24 to pass through the port you can just add a rule
 for 10.10.1.0/24 directly without having to break it up.

 Aaron


 On Tue, May 20, 2014 at 11:20 AM, Praveen Yalagandula 
 yprav...@avinetworks.com wrote:

 Hi Aaron,

 The main motivation is simplicity. Consider the case where we want to
 allow the IP CIDR 10.10.1.0/24 on a port which has a fixed IP of 10.10.1.1.
 Now, if we do not allow overlapping, one needs to add 8 CIDRs to get around
 this (10.10.1.128/25, 10.10.1.64/26, 10.10.1.32/27, ..., 10.10.1.0/32),
 which makes it cumbersome.
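The "8 CIDRs" count can be checked mechanically; a quick illustration with Python's stdlib `ipaddress` module (not part of the original discussion):

```python
import ipaddress

# Allowing all of 10.10.1.0/24 except the port's own fixed IP 10.10.1.1
# means splitting the /24 into every subnet that excludes that single /32.
net = ipaddress.ip_network("10.10.1.0/24")
fixed_ip = ipaddress.ip_network("10.10.1.1/32")
remaining = sorted(net.address_exclude(fixed_ip))
for cidr in remaining:
    print(cidr)
print(len(remaining))  # → 8
```

The result is the /25 through /31 siblings plus 10.10.1.0/32, which is exactly why allowing the overlapping /24 directly is so much simpler.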

 In any case, allowed-address-pairs is ADDING on to what is allowed
 because of the fixed IPs. So, there is no possibility of conflict. The
 check 

Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
This is part of what I was referencing with regard to lightening the data
stored in the token. Ideally, we would like to see an ID-only token that
contains only the basic information needed to act. Some initial tests show
these tokens should be able to clock in under 1 KB in size. However, all the
details are not fully defined yet. Coupled with this data reduction there will
be explicit definitions of the data that is meant to go into the tokens. Some
of the data we have now is there purely for convenience of access.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes
smoothly. But this is absolutely on the list of things we would like to
address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com
wrote:

  adding another ~10kB to each request, just to save a once-a-day call to
 Keystone (ie uuid tokens) seems to be a really high price to pay for not
 much benefit.

  I have the same concern with respect to Marconi. I feel like PKI tokens
 are fine for control plane APIs, but don’t work so well for high-volume
 data APIs where every KB counts.

 Just my $0.02...

 --Kurt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-05-21 Thread Ilya Sviridov
Team, I believe this is quite a complex task and we have to spend more time on
the concept. So, I've postponed it to the next 3.0 series; that is a month from
now, and we can keep our focus on stabilization of the current version.

Let us return to this discussion later.

Thanks,
Ilya Sviridov
isviridov @ FreeNode


On Mon, May 5, 2014 at 4:06 AM, Illia Khudoshyn ikhudos...@mirantis.com wrote:

 Can't say for others, but I'm personally not really happy with Charles &
 Dima's approach. As Charles pointed out (or hinted), QUORUM during a write may
 be equal to both EVENTUAL and STRONG, depending on the consistency level
 chosen for the later read. The same is true of QUORUM for a read. I'm afraid
 that this way MDB will become way too complex, and it would take more effort
 to predict its behaviour from the user's point of view.
 I'd rather prefer it to be as straightforward as possible -- take full
 control and responsibility or follow reasonable defaults.

 And, please note, we're aiming at multi-DC support, sooner or later. And for
 that we'll need more flexible consistency control, so a binary option would
 not be enough.

 Thanks


 On Thu, May 1, 2014 at 12:10 AM, Charles Wang 
  charles_w...@symantec.com wrote:

 Discussed further with Dima. Our consensus is to have the WRITE consistency
 level defined in the table schema, and READ consistency control at the
 data-item level. This should satisfy our use cases for now.

 For example, a user-defined table has Eventual Consistency (Quorum). After
 the user writes data using the consistency level defined in the table schema,
 when the user tries to read the data back asking for Strong consistency,
 MagnetoDB can do a READ at Eventual Consistency (Quorum) to satisfy the
 user's Strong consistency requirement.
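The claim that a quorum read can satisfy a strong-consistency request after a quorum write rests on the usual replica-overlap rule R + W > N; a minimal illustration (the function name is ours, not a MagnetoDB API):

```python
# Quorum arithmetic behind "quorum write + quorum read => strong": with N
# replicas, a read acknowledged by R replicas is guaranteed to intersect a
# write acknowledged by W replicas (and thus see the latest value) whenever
# R + W > N.
def read_sees_latest_write(n_replicas, write_acks, read_acks):
    return read_acks + write_acks > n_replicas

# N=3: quorum write (W=2) followed by a quorum read (R=2) gives 2+2 > 3.
```

This is why the thread treats Quorum/Quorum as effectively strong while, say, ONE/ONE is merely eventual.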

 Thanks,

 Charles

 From: Charles Wang charles_w...@symantec.com
 Date: Wednesday, April 30, 2014 at 10:19 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, Illia Khudoshyn 
 ikhudos...@mirantis.com
 Cc: Keith Newstadt keith_newst...@symantec.com

 Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft
 of concept

 Sorry for being late to the party. Since we mostly follow DynamoDB, it
 makes sense not to deviate too much from DynamoDB’s consistency model.

 From what I read about DynamoDB, READ consistency is defined to be either
 strong consistency or eventual consistency.

    "ConsistentRead": boolean

 *ConsistentRead*

 If set to true, then the operation uses strongly consistent reads; 
 otherwise, eventually consistent reads are used.

 Strongly consistent reads are not supported on global secondary indexes. If 
 you query a global secondary index with *ConsistentRead* set to true, you 
 will receive an error message.

 Type: Boolean

 Required: No


 http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html

 WRITE consistency is not clearly defined anywhere. From Werner Vogels'
 description, it seems to indicate that writes are replicated across
 availability zones/data centers synchronously. I guess that inside a data
 center, writes are replicated asynchronously. And the API doesn't allow the
 user to specify a WRITE consistency level.

 http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html

 Considering the above factors and Cassandra's capabilities, I propose we use
 the following model.

 READ:

- Strong consistency (read from all replicas, maps to Cassandra
READ ALL consistency level)
- Eventual consistency (quorum read, maps to Cassandra READ QUORUM)
- Weak consistency (not in DynamoDB, maps to Cassandra READ ONE)
 WRITE:

- Strong consistency (synchronously replicate to all, maps to
Cassandra WRITE All consistency level)
- Eventual consistency (quorum write, maps to Cassandra WRITE Quorum)
- Weak consistency (not in DynamoDB, maps to Cassandra WRITE ANY)

 For conditional writes (conditional putItem/deletItem), only strong and
 eventual consistency should be supported.
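The proposed mapping can be summarized as a lookup table; a sketch, where the level names and function are illustrative rather than an actual MagnetoDB API:

```python
# Proposed MagnetoDB-to-Cassandra consistency mapping from the lists above.
READ_LEVELS = {"STRONG": "ALL", "EVENTUAL": "QUORUM", "WEAK": "ONE"}
WRITE_LEVELS = {"STRONG": "ALL", "EVENTUAL": "QUORUM", "WEAK": "ANY"}
CONDITIONAL_WRITE_LEVELS = {"STRONG", "EVENTUAL"}  # weak not supported

def to_cassandra(op, level):
    """Translate a MagnetoDB-style level into a Cassandra level name."""
    if op == "conditional_write" and level not in CONDITIONAL_WRITE_LEVELS:
        raise ValueError("conditional writes support only strong/eventual")
    table = READ_LEVELS if op == "read" else WRITE_LEVELS
    return table[level]
```

Writing it down this way makes the asymmetry visible: WEAK maps to ONE for reads but ANY for writes.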

 Thoughts?

 Thanks,

 Charles

 From: Dmitriy Ukhlov dukh...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Tuesday, April 29, 2014 at 10:43 AM
 To: Illia Khudoshyn ikhudos...@mirantis.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [MagnetoDB] Configuring consistency draft
 of concept

 Hi Illia,
 WEAK/QUORUM instead of true/false is OK for me.

 But we also have STRONG.

 What does STRONG mean? In the current concept we are using QUORUM and saying
 that it is strong. I guess it is confusing (at least for me) and can have
 different behavior for different backends.

 I believe that from the user's point of view only 4 use cases exist: write and
 read with 

[openstack-dev] [openstack-sdk-dotnet][openstack-cli-powershell] readme and contributing guide

2014-05-21 Thread Matthew Farina
It would be useful to have top level readme and contributing guides for
these two projects. I was going to round out both of these and wanted to
know what format would be good to use. Here's what I found.

First, a lot of .NET projects don't have top-level files. I understand how
Visual Studio treats these and how useful they are in practice for working
with and on a project; the convention doesn't fit well there.

That being said, when someone encounters one of these projects on the GitHub
mirror or elsewhere, a readme would be useful for getting started. .NET
projects on GitHub tend to have readmes.

The most popular format I found, by far, was Markdown.

If a README.md and a CONTRIBUTING.md (in Markdown) are OK to go with, I'm
happy to craft the initial (or updated) versions of these.

Markdown seems to fit best here. RST is something I've not seen used by any
.NET projects.

Sound good? Other thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
Good to know, thanks for clarifying. One thing I’m still fuzzy on, however, is 
why we want to deprecate use of UUID tokens in the first place? I’m just trying 
to understand the history here...

From: Morgan Fainberg morgan.fainb...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

This is part of what I was referencing in regards to lightening the data stored 
in the token. Ideally, we would like to see an ID only token that only 
contains the basic information to act. Some initial tests show these tokens 
should be able to clock in under 1k in size. However all the details are not 
fully defined yet. Coupled with this data reduction there will be explicit 
definitions of the data that is meant to go into the tokens. Some of the data 
we have now is a result of convenience of accessing the data.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes smoothly. 
But this is absolutely on the list of things we would like to address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:
 adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Extending nova models

2014-05-21 Thread Rad Gruchalski
Hi everyone,

This is my first question here. I hope I can get an answer to the problem 
I'm currently facing in the development of a nova API extension. I am trying to 
add a couple of API endpoints that would serve as an interface to a table 
storing some data. I was able to create an API endpoint by placing my extension 
in api/openstack/compute/contrib and modifying the policy.json file. This is 
now working.

I then added the migration to create a table to 
nova/db/sqlalchemy/migrate_repo/versions/245_add_custom_table.

After unstack.sh and stack.sh (I'm using devstack) I can see my table being 
created. Great.

Next, I proceeded to create an object definition in a file under 
nova/objects. I am basing my work on the keypairs.py example 
(https://github.com/openstack/nova/blob/2efd3faa3e07fdf16c2d91c16462e7e1e3f33e17/nova/api/openstack/compute/contrib/keypairs.py#L97)

self.api.create_key_pair

calls this 
https://github.com/openstack/nova/blob/839fe777e256d36e69e9fd7c571aed2c860b122c/nova/compute/api.py#L3512
the important part is

keypair = keypair_obj.KeyPair()
keypair.user_id = user_id
keypair.name = key_name
keypair.fingerprint = fingerprint
keypair.public_key = public_key
keypair.create(context)

`KeyPair()` is 
https://github.com/openstack/nova/blob/master/nova/objects/keypair.py

this has a method 
https://github.com/openstack/nova/blob/master/nova/objects/keypair.py#L52
and it's calling `db_keypair = db.key_pair_create(context, updates)`
`db` points to `from nova import db`

which I believe points to this 
https://github.com/openstack/nova/blob/master/nova/db/__init__.py
which loads https://github.com/openstack/nova/blob/master/nova/db/api.py
there's a function called 
https://github.com/openstack/nova/blob/master/nova/db/api.py#L922
`key_pair_create` 
https://github.com/openstack/nova/blob/master/nova/db/api.py#L924

`IMPL` is https://github.com/openstack/nova/blob/master/nova/db/api.py#L69-L95
but where is `IMPL.key_pair_create`?

Is there an easy way to insert a record into the table?
Thank you for any pointers.

I’ve posted the same question on ask.openstack.org 
(https://ask.openstack.org/en/questions/30231).
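From reading the code further, IMPL appears to be a small proxy that forwards 
attribute access to the configured backend module (nova.db.sqlalchemy.api for 
the SQLAlchemy backend), so IMPL.key_pair_create should resolve to the 
key_pair_create function defined there. A simplified, self-contained sketch of 
that indirection (not nova's exact code):

```python
import importlib

class DBAPI(object):
    """Simplified stand-in for the proxy that nova.db.api builds: attribute
    access on the proxy is forwarded to the configured backend module."""

    def __init__(self, backend_name):
        self._backend = importlib.import_module(backend_name)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails, i.e. for the
        # forwarded DB-API functions such as key_pair_create.
        return getattr(self._backend, name)

# Demo with a stdlib module standing in for nova.db.sqlalchemy.api:
IMPL = DBAPI("math")
print(IMPL.sqrt(4.0))  # forwarded to math.sqrt -> prints 2.0
```

So a custom table would get its own functions in nova/db/sqlalchemy/api.py, 
reached through the same proxy.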

Kind regards,

Radek Gruchalski

ra...@gruchalski.com
de.linkedin.com/in/radgruchalski/
+4917685656526

Confidentiality:
This communication is intended for the above-named person and may be 
confidential and/or legally privileged.
If it has come to you in error you must take no action based on it, nor must 
you copy or show it to anyone; please delete/destroy and inform the sender 
immediately.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Neutron - reservation of fixed ip

2014-05-21 Thread Sławek Kapłoński
Hello,

Ok, I found that there is currently no feature to reserve a fixed
IP for a tenant. So I was thinking about adding such a feature to Neutron. I
mean that there would be a new table with reserved IPs in the Neutron
database, and Neutron would check this table every time a new port
is created (or updated) and an IP is to be associated with this
port. If the user has a reserved IP, it should be used for the new port;
if the IP is reserved by another tenant, it shouldn't be used.
What do you think about such a possibility? Is it possible to add it
in some future release of Neutron?
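To make the idea concrete, the check could look roughly like this (a sketch; 
the table name and logic are only an illustration of the proposal, not 
Neutron code):

```python
# Hypothetical reservations table: ip -> tenant that reserved it.
reservations = {"203.0.113.10": "tenant-a"}

def pick_ip(candidate_ips, tenant_id):
    """Return the first candidate IP that is free or reserved by this tenant."""
    for ip in candidate_ips:
        owner = reservations.get(ip)
        if owner is None or owner == tenant_id:
            return ip
    raise ValueError("all candidate IPs are reserved by other tenants")

# tenant-b skips tenant-a's reserved address and gets the free one:
print(pick_ip(["203.0.113.10", "203.0.113.11"], "tenant-b"))  # -> 203.0.113.11
```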

-- 
Best regards
Sławek Kapłoński
sla...@kaplonski.pl


Dnia Mon, 19 May 2014 20:07:43 +0200
Sławek Kapłoński sla...@kaplonski.pl napisał:

 Hello,
 
 I'm using OpenStack with Neutron and the ML2 plugin. Is there any way to
 reserve a fixed IP from a shared external network for one tenant? I know
 that it is possible to create a port with an IP and later connect a VM
 to this port. This solution is almost OK for me, but the problem is that
 when a user deletes the instance the port is also deleted, and the IP is
 no longer reserved for the same user and tenant. So maybe there is some
 solution to reserve it permanently?
 I also know about floating IPs, but I don't use L3 agents, so this is
 probably not for me :)
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Confusion about the respective use cases for volume's admin_metadata, metadata and glance_image_metadata

2014-05-21 Thread Day, Phil
 -Original Message-
 From: Tripp, Travis S
 Sent: 07 May 2014 18:06
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] Confusion about the respective use
 cases for volume's admin_metadata, metadata and glance_image_metadata
 
  We're suffering from a total overload of the term 'metadata' here, and
  there are
  3 totally separate things that are somehow becoming mangled
 
 Thanks for the summary. The term metadata definitely gets overloaded.
 I've been experimenting with the metadata to see what happens with all
 of it.
 
OK, I won't even try to bring Nova's three types of metadata into the 
discussion then.

 Glance image properties == ALL properties are copied to 
 volume_image_metadata in Cinder 

Let's just limit this thread to this one, since that's the one that is partly 
mutable in Glance and becomes immutable in Cinder

 
 Regarding the property protections in Glance, it looks to use RBAC.  It seems
 to me that if a volume is being uploaded to glance with protected properties
 and the user doing the copying doesn't have the right roles to create those
 properties that Glance should reject the upload request.
 
 Based on the etherpads, the primary motivation for property protections
 was for an image marketplace, which doesn't seem like there would be the
 same need for volumes. 
No, it is still needed.   Consider the case where there is a licensed image in 
Glance.   That license key, which will be passed through to the billing system, 
has to be immutable and has to be available to Nova for any instance that is 
running a copy of that image.  Create a snapshot in Glance, and the key needs to 
be there.  Create a bootable volume in Cinder, and the key needs to be there, 
etc.   So both Nova and Cinder have to copy the Glance image properties 
whenever they create a copy of an image.

The full set of paths where the image properties need to be copied are:

- When Cinder creates a bootable volume from an Image on Glance
- When Cinder creates a snapshot or copy of a bootable volume
- When Nova creates a snapshot in Glance from a running instance (So Nova has 
to have a copy of the properties of the image the instance was booted from - 
the image in Glance can be deleted while the instance is running)

The issue is that the set of Glance image properties that are copied is a 
combination of mutable and immutable values - but that distinction is lost 
when they are copied into Cinder.  I'm not even sure if you can query Glance to 
find out whether a property is mutable or not.

So to make Cinder and Glance consistent I think we would need:

1) A way to find out from Glance whether a property is mutable or not
2) A way in Cinder to mark a property as mutable or immutable

I don't think Nova needs to know the difference, since it only ever creates 
snapshots in Glance - and Glance already knows what can and can't be changed.
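As a rough sketch of what (2) could enable - all names here are invented for 
illustration, not Cinder or Glance code - the copy would preserve a 
per-property mutability flag instead of losing it:

```python
# Hypothetical image properties, each carrying a mutability flag.
image_properties = {
    "license_key": {"value": "XYZ-123", "mutable": False},  # must survive intact
    "description": {"value": "base image", "mutable": True},
}

def copy_image_properties(props):
    # Copy every property, preserving the per-property mutability flag.
    return {name: dict(attrs) for name, attrs in props.items()}

def update_property(props, name, new_value):
    # Reject updates to properties marked immutable at copy time.
    if not props[name]["mutable"]:
        raise PermissionError("property %r is immutable" % name)
    props[name]["value"] = new_value

volume_meta = copy_image_properties(image_properties)
update_property(volume_meta, "description", "cloned volume")  # allowed
print(volume_meta["license_key"]["value"])  # still XYZ-123
```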

Phil

 
  -Original Message-
  From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
  Sent: Wednesday, May 07, 2014 7:57 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [Cinder] Confusion about the respective
  use cases for volume's admin_metadata, metadata and
  glance_image_metadata
 
  On 7 May 2014 09:36, Trump.Zhang zhangleiqi...@gmail.com wrote:
   @Tripp, Thanks for your reply and info.
  
   I am also thinking if it is proper to add support for updating the
   volume's glance_image_metadta to reflect the newest status of
 volume.
  
   However, there may be alternative ways to achieve it:
   1. Using the volume's metatadata
   2. Using the volume's admin_metadata
  
   So I am wondering which is the most proper method.
 
 
  We're suffering from a total overload of the term 'metadata' here, and
  there are
  3 totally separate things that are somehow becoming mangled:
 
  1. Volume metadata - this is for the tenant's own use. Cinder and nova
  don't assign meaning to it, other than treating it as stuff the tenant
  can set. It is entirely unrelated to glance_metadata.
 
  2. admin_metadata - this is an internal implementation detail for cinder
  to avoid every extension having to alter the core volume db model. It is
  not the same thing as glance metadata or volume_metadata.
 
  An interface to modify volume_glance_metadata sounds reasonable,
  however it is *unrelated* to the other two types of metadata. They are
  different things, not replacements or anything like that.
 
  Glance protected properties need to be tied into the modification API
  somehow, or else it becomes a trivial way of bypassing protected
  properties. Hopefully a glance expert can pop up and suggest a way of
 achieving this integration.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev 

Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-21 Thread Stephen Wong
Hi Sean,

Sounds good! +1

- Stephen


On Wed, May 21, 2014 at 7:56 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 Hi,

 The session that we had on the Quality of Service API extension was well
 attended - I would like to keep the momentum going by proposing a weekly
 IRC meeting.

 How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-21 Thread Johannes Erdfelt
On Wed, May 21, 2014, John Dennis jden...@redhat.com wrote:
 But that's a bug in the logging implementation. Are we supposed to write
 perverse code just to avoid coding mistakes in other modules? Why not
 get the fundamental problem fixed?

It has been fixed, by making Python 3 :)

This is a problem in the Python 2 standard library.

I agree it kind of sucks. We've traditionally just worked around it, but
monkey patching might be a solution if the workarounds are onerous.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 2:36 PM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

  Good to know, thanks for clarifying. One thing I’m still fuzzy on,
 however, is why we want to deprecate use of UUID tokens in the first place?
 I’m just trying to understand the history here...


I don't think anyone has seriously discussed deprecating UUID tokens, only
that the number of benefits UUID has over PKI is rapidly diminishing as our
PKI implementation improves.



   From: Morgan Fainberg morgan.fainb...@gmail.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Wednesday, May 21, 2014 at 1:23 PM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Concerns about the ballooning size of
 keystone tokens

  This is part of what I was referencing in regards to lightening the data
 stored in the token. Ideally, we would like to see an ID only token that
 only contains the basic information to act. Some initial tests show these
 tokens should be able to clock in under 1k in size. However all the details
 are not fully defined yet. Coupled with this data reduction there will be
 explicit definitions of the data that is meant to go into the tokens. Some
 of the data we have now is a result of convenience of accessing the data.

  I hope to have this token change available during Juno development
 cycle.

  There is a lot of work to be done to ensure this type of change goes
 smoothly. But this is absolutely on the list of things we would like to
 address.

  Cheers,
 Morgan

  Sent via mobile

 On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com
 wrote:

  adding another ~10kB to each request, just to save a once-a-day call to
 Keystone (ie uuid tokens) seems to be a really high price to pay for not
 much benefit.

 I have the same concern with respect to Marconi. I feel like PKI tokens
 are fine for control plane APIs, but don’t work so well for high-volume
 data APIs where every KB counts.

 Just my $0.02...

 --Kurt

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 11:32 AM, John Dickinson m...@not.mn wrote:

 Thanks Dolph and Lance for the info and links.


 What concerns me, in general, about the current length of keystone tokens
 is that they are unbounded. And the proposed solutions don't change that
 pattern.

 My understanding of why PKI tokens are used is so that the system doesn't
 have to call to Keystone to authorize the request. This reduces the load on
 Keystone, but it adds significant overhead for every API request.

 Keystone's first system was to use UUID bearer tokens. These are fixed
 length, small, cacheable, and require a call to Keystone once per cache
 period.

 Moving to PKI tokens, we now have multi-kB headers that significantly
 increase the size of each request. Swift deployers commonly have small
 objects on the order of 50kB, so adding another ~10kB to each request,
 just to save a once-a-day call to Keystone (ie uuid tokens) seems to be a
 really high price to pay for not much benefit.

 The other benefit to PKI tokens is that services can make calls to other
 systems on behalf of the user (eg nova can call cinder for the user). This
 is great, but it's not the only usage pattern in OpenStack projects, and
 therefore I don't like optimizing for it at the expense of other patterns.

 In addition to PKI tokens (ie signed+encoded service catalogs), I'd like
 to see Keystone support and remain committed to fixed-length bearer tokens
 or a signed-with-shared-secret auth mechanism (a la AWS).


This is a fantastic argument in favor of UUID today. PKI will likely never
be fixed-length, but hopefully we can continue making them smaller such
that this argument might carry substantially less weight someday.
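To illustrate why the un-encrypted body compresses well - a rough, 
self-contained sketch using made-up catalog data, not Keystone's actual token 
format:

```python
import base64
import json
import zlib

# A made-up service catalog with the kind of repetition real catalogs have
# (illustrative data only - not Keystone's actual token layout).
catalog = [
    {"type": svc, "endpoints": [
        {"interface": iface,
         "url": "http://example.com:%d/v2/%s" % (5000 + n, svc)}
        for n, iface in enumerate(("public", "internal", "admin"))
    ]}
    for svc in ("compute", "volume", "image", "identity", "network")
]
body = json.dumps({"token": {"catalog": catalog}}).encode("utf-8")

plain_b64 = base64.b64encode(body)                  # uncompressed token body
packed_b64 = base64.b64encode(zlib.compress(body))  # compress before encoding

print(len(plain_b64), len(packed_b64))  # compressed form is much smaller
```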



 --John




 On May 21, 2014, at 9:09 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 
  On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:
  Can you explain how PKI info is compressible? I thought it was
 encrypted, which should mean you can't compress it right?
 
  They're not encrypted - just signed and then base64 encoded. The JSON
 (and especially service catalog) is compressible prior to encoding.
 
 
 
  --John
 
 
 
 
 
  On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:
 
   The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
  
   Cheers,
   Morgan
  
   Sent via mobile
  
   On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
   On 05/21/2014 11:09 AM, Chuck Thier wrote:
   There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
 fine if you measure you request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
  
   When PKI tokens were first introduced, we raised concerns about the
 unbounded size of of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
  
   At what point do we re-evaluate the decision to go with pki tokens,
 and that they may not be the best idea for apis like swift and marconi?
  
   Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
  
   Compressed PKI tokens are enroute and will be much smaller.
  
  
   Thanks,
  
   --
   Chuck
  
   [1] https://review.openstack.org/#/c/93356/
  
  
   ___
   OpenStack-dev mailing list
  
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Installation instructions

2014-05-21 Thread Sanchez, Cristian A
Hi,
In our team we’re planning to make some contributions to TripleO, after 
meeting with Robert Collins in an Intel-hosted meeting during the Summit.
As our first step we want to use TripleO to deploy the under-cloud and 
over-cloud. Are there any instructions on how to get started with this?

Thanks

Cristian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][federation] Coordination for Juno

2014-05-21 Thread Steve Martinelli
Hey Everyone,

I've received a couple of emails from folks interested in federation for
Keystone, ranging from testing federation to fixing bugs and implementing
blueprints.

I've been keeping a list of federation-specific work items for Keystone,
and I've tried to attach names to some of the items that have already been
started. If you're interested in helping out, find an item below and shoot
me an email (cc dolphm) with details about how you can help.


Keystone to Keystone Federation

- blueprint: https://blueprints.launchpad.net/keystone/+spec/keystone-to-keystone-federation
- Lots of interested parties here, but no one has posted a patch yet.
  If anyone can help implement this part, or help in crafting the API,
  please let me know.
- Documenting this portion would be extremely important, too.

Supporting different protocols

- OpenID Connect support
  - blueprint: https://blueprints.launchpad.net/keystone/+spec/auth-plugin-openid-connect
  - Implementation from stevemar here: https://review.openstack.org/#/c/61662/15
  - Reviews wanted!
- ABFAB protocol support
  - Waiting on an Apache module
- SAML client work in keystoneclient
  - Active patch from marekd here: https://review.openstack.org/#/c/92166/
  - Reviews wanted!

How can we gate on federation configurations and perform real Tempest tests?

- If anyone has *any* ideas on this, please share - the keystone team is
  stumped on this one.

Auditing support for federated users

- Need a blueprint for this topic
- Any takers for implementation?

Mapping engine enhancements

- Trusted Attributes
  - blueprint: https://blueprints.launchpad.net/keystone/+spec/trusted-attribute-issuing-policy
  - ksiu has the API spec here: https://review.openstack.org/#/c/60489/
- Bugs (brought up by others as possible optimizations)
  - Prioritize users / groups rules
  - Add domain support
  - Make groups a wildcard

Federated Keystone and Horizon

- Completely open-ended; there isn't much expectation that we deliver this
  in Juno, but it's something we should start thinking about.

Docs for everything!

Regards,

Steve Martinelli
Software Developer - Openstack
Keystone Core Member

Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com

8200 Warden Ave
Markham, ON L6G 1C7
Canada



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installation instructions

2014-05-21 Thread Jason Rist
On 05/21/2014 02:22 PM, Sanchez, Cristian A wrote:
 Hi,
 In our team we’re planning to make some contributions to TripleO, after
 meeting with Robert Collins in an Intel-hosted meeting during the Summit.
 As our first step we want to use TripleO to deploy the under-cloud and
 over-cloud. Are there any instructions on how to get started with this?
 
 Thanks
 
 Cristian
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Hi Cristian - There are quite a few options.  Some of us do devtest:
https://wiki.openstack.org/wiki/Tuskar/Devtest

Some of us do instack:
https://github.com/slagle/instack

And some of us do devstack and then fire up another horizon with
tuskar-ui running:
https://github.com/openstack/tuskar-ui/blob/master/doc/source/install.rst

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Storyboard] [UX] Atlanta Storyboard UX Summary

2014-05-21 Thread Michael Krotscheck
I've compiled a list of takeaways from the one-on-one ux sessions that we
did at the Atlanta Summit. These comments aren't exhaustive - they're the
big items that came up over and over again. If you'd like to review the
videos yourself or peruse my notes, please drop me a private line: They're
rather large and I don't want to share my google drive with the world.

*Users were extremely confused about Tasks vs. Stories*

This is perhaps the biggest takeaway, because it came up in every session
and on every UI. Questions included: "Can I comment on this task?", "What
if a task requires a design review independent of a story?", "What is the
lifecycle of a task/story?", "Can I make a task depend on a story (and
vice versa)?", "How granular is a task?", and "Why can't I see which
project this story is assigned to?"

There was also an extended section of the design meeting spent trying to
explain what a task is, how it relates to gerrit commits, how that impacts the
overall workflow, and more. At one point, one-task-per-commit was mentioned;
at another, 'tasks can be anything'.

*We really need search*

The title is self explanatory, really. Several users asked for a
'universal' search much like how Gerrit now works, with improvements on the
autocompletion and tagging, so that there would be only one 'uber' Search
place. Subsearching inside of projects was also requested, though the same
was not asked for in stories. One user asked for the ability to save
searches.

*Users want sorting*
Sort by priority, sort by status, sort by date, sort by project were
mentioned. The latter may be difficult for stories.

*We need priorities*

Patch up for CR.

*Typeahead needs clearer UI*

Everyone had problems with the typeahead fields in the task edit form, for
which we've since put up a patch that may make it clearer that it's a
dynamic field. Work is ongoing on that.

*Story Status should be decoupled from the status of its tasks*

Many users were not certain about the Story's status changing when the
task's status did. Some did notice, however there was some negative
feedback regarding the lifecycle of a story. One user wanted the story's
lifecycle to begin during design, before any tasks were created.

*Everyone wants it*

A lot of interest. Yay!

There's more, however I've picked out the big things that I feel need to be
addressed as we move forward with the infra dogfood, as well as a few
little things that I feel are going to be quick fixes.

Michael
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] reservation of fixed ip

2014-05-21 Thread Collins, Sean
(Edited the subject since a lot of people filter based on the subject
line)

I would also be interested in reserved IPs - since we do not deploy the
layer 3 agent and use the provider networking extension and a hardware
router.

On Wed, May 21, 2014 at 03:46:53PM EDT, Sławek Kapłoński wrote:
 Hello,
 
 Ok, I found that there is currently no feature to reserve a fixed
 IP for a tenant. So I was thinking about adding such a feature to Neutron.
 I mean that there would be a new table with reserved IPs in the Neutron
 database, and Neutron would check this table every time a new port
 is created (or updated) and an IP is to be associated with this
 port. If the user has a reserved IP, it should be used for the new port;
 if the IP is reserved by another tenant, it shouldn't be used.
 What do you think about such a possibility? Is it possible to add it
 in some future release of Neutron?
 
 -- 
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl
 
 
 Dnia Mon, 19 May 2014 20:07:43 +0200
 Sławek Kapłoński sla...@kaplonski.pl napisał:
 
  Hello,
  
  I'm using OpenStack with Neutron and the ML2 plugin. Is there any way to
  reserve a fixed IP from a shared external network for one tenant? I know
  that it is possible to create a port with an IP and later connect a VM
  to this port. This solution is almost OK for me, but the problem is that
  when a user deletes the instance the port is also deleted, and the IP is
  no longer reserved for the same user and tenant. So maybe there is some
  solution to reserve it permanently?
  I also know about floating IPs, but I don't use L3 agents, so this is
  probably not for me :)
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-21 Thread Jay Dobies

Currently, there is the following in the template:



Proposed change
===

[snip]

Alternatives
------------


[snip]

Security impact
---------------



The unit tests assert the top and second level sections are standard, so 
if I add a section at the same level as Alternatives under Proposed 
Change, the tests will fail. If I add a third level section using ^, 
they pass.


The problem is that you can't add a ^ section under Proposed Change. 
Sphinx complains about a title level inconsistency since I'm skipping 
the second level and jumping to the third. But I can't add a 
second-level section directly under Proposed Change because it will 
break the unit tests that validate the structure.


The proposed change is going to be one of the beefier sections of a 
spec, so not being able to subdivide it is going to make the 
documentation messy and removes the ability to link directly to a 
portion of a proposed change.


I propose we add a section at the top of Proposed Change called Overview 
that will hold the change itself. That will allow us to use third-level 
sections in the change itself while still having the first- and second-level 
section structure validated by the tests.
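Concretely, the proposed layout would look something like this (a sketch; 
the subsection names are illustrative):

```rst
Proposed change
===============

Overview
--------

The change itself goes here, and can now be subdivided:

Some detail
^^^^^^^^^^^

Alternatives
------------
```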


I have no problem making the change to the templates, unit tests, and 
any existing specs (I don't think we have any yet), but before I go 
through that, I wanted to make sure there wasn't a major disagreement.


Thoughts?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Installation instructions

2014-05-21 Thread Jordan OMara

On 21/05/14 14:31 -0600, Jason Rist wrote:

Some of us do instack:
https://github.com/slagle/instack


There are some really detailed instructions on running instack with RDO,
if that is your weapon of choice:

http://openstack.redhat.com/Deploying_RDO_using_Instack
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Weekly IRC Meeting?

2014-05-21 Thread Kevin Benton
+1


On Wed, May 21, 2014 at 7:56 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 Hi,

 The session that we had on the Quality of Service API extension was well
 attended - I would like to keep the momentum going by proposing a weekly
 IRC meeting.

 How does Tuesdays at 1800 UTC in #openstack-meeting-alt sound?

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 02:00 PM, Kurt Griffiths wrote:

adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

For those you should use Symmetric MACs IAW Kite.

For low volume authentication you should use PKI

You don't save the data; it just gets transferred at a different point.  
It is the service catalog that makes a token variable in size, and we 
have an option to turn off the service catalog in a token.

Just my $0.02...

--Kurt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-21 Thread James E. Blair
Hi,

On Friday, May 23 at 21:00 UTC Gerrit will be unavailable for about 20
minutes while we rename some projects.  Existing reviews, project
watches, etc, should all be carried over.  The current list of projects
that we will rename is:

stackforge/barbican -> openstack/barbican
openstack/oslo.test -> openstack/oslotest
openstack-dev/openstack-qa -> openstack-attic/openstack-qa
openstack/melange -> openstack-attic/melange
openstack/python-melangeclient -> openstack-attic/python-melangeclient
openstack/openstack-chef -> openstack-attic/openstack-chef
stackforge/climate -> stackforge/blazar
stackforge/climate-nova -> stackforge/blazar-nova
stackforge/python-climateclient -> stackforge/python-blazarclient
openstack/database-api -> openstack-attic/database-api
openstack/glance-specs -> openstack/image-specs
openstack/neutron-specs -> openstack/networking-specs
openstack/oslo-specs -> openstack/common-libraries-specs

Though that list is subject to change.

-Jim



[openstack-dev] [neutron] Proposed changes to core team

2014-05-21 Thread Kyle Mestery
I would like to propose a few changes to the Neutron core team.
Looking at how current cores are contributing, both in terms of review
[1] as well as project participation and attendance at the summit
sessions last week, I am proposing a few changes. As cores, I believe
reviews are critical, but I also believe interacting with the Neutron
and OpenStack communities in general is important.

The first change I'd like to propose is removing Yong Sheng Gong from
neutron-core. Yong has been a core for a long time. I'd like to thank
him for all of his work on Neutron over the years. Going forward, I'd
also like to propose that if Yong's participation and review stats improve
he could be fast-tracked back to core status. But for now, his review
stats for the past 90 days do not line up with current cores, and his
participation in general has dropped off. So I am proposing his
removal from neutron-core.

Since we're losing a core team member, I'd like to propose Carl
Baldwin (carl_baldwin) for Neutron core. Carl has been a very active
reviewer for Neutron, his stats are well in-line with other core
reviewers. Additionally, Carl has been leading the L3 sub-team [2] for
a while now. He's a very active member of the Neutron community, and
he is actively leading development of some important features for the
Juno release.

Neutron cores, please vote +1/-1 for the proposed addition of Carl
Baldwin to Neutron core.

I also wanted to mention the process for adding, removing, and
maintaining neutron-core membership is now documented on the wiki here
[3].

Thank you!
Kyle

[1] http://stackalytics.com/report/contribution/neutron/90
[2] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
[3] https://wiki.openstack.org/wiki/NeutronCore



Re: [openstack-dev] [neutron] Proposed changes to core team

2014-05-21 Thread Mark McClain

On May 21, 2014, at 4:59 PM, Kyle Mestery mest...@noironetworks.com wrote:

 
 Neutron cores, please vote +1/-1 for the proposed addition of Carl
 Baldwin to Neutron core.
 

Carl has been a great contributor. +1 to adding him to the core team.

mark




Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-21 Thread Tom Fifield

On 22/05/14 04:55, James E. Blair wrote:

Hi,

On Friday, May 23 at 21:00 UTC Gerrit will be unavailable for about 20
minutes while we rename some projects.  Existing reviews, project
watches, etc, should all be carried over.  The current list of projects
that we will rename is:

stackforge/barbican -> openstack/barbican
openstack/oslo.test -> openstack/oslotest
openstack-dev/openstack-qa -> openstack-attic/openstack-qa
openstack/melange -> openstack-attic/melange
openstack/python-melangeclient -> openstack-attic/python-melangeclient
openstack/openstack-chef -> openstack-attic/openstack-chef
stackforge/climate -> stackforge/blazar
stackforge/climate-nova -> stackforge/blazar-nova
stackforge/python-climateclient -> stackforge/python-blazarclient
openstack/database-api -> openstack-attic/database-api
openstack/glance-specs -> openstack/image-specs
openstack/neutron-specs -> openstack/networking-specs
openstack/oslo-specs -> openstack/common-libraries-specs



May I ask, will the old names have some kind of redirect to the new names?

For example, after the change, what would a user visiting
https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs,n,z
see?

We've gone to quite a bit of effort of late to communicate the new 
*-specs repositories to people (particularly for nova and neutron), and 
it would be a really bad experience - especially for those from the 
non-developer side of things - to be presented with some kind of error 
or empty list after following the link we gave them.



Regards,


Tom





Re: [openstack-dev] [TripleO] Installation instructions

2014-05-21 Thread Sanchez, Cristian A
Excellent, thank you

On 21/05/14 17:31, Jason Rist jr...@redhat.com wrote:

On 05/21/2014 02:22 PM, Sanchez, Cristian A wrote:
 Hi,
  In our team we're planning to make some contributions to TripleO, after
 we met with Robert Collins in an Intel-hosted meeting during the Summit.
  As our first step we want to use TripleO to deploy the under-cloud and
 over-cloud. Are there instructions on how to start with this?
 
 Thanks
 
 Cristian
 
 

Hi Cristian - There are quite a few options.  Some of us do devtest:
https://wiki.openstack.org/wiki/Tuskar/Devtest

Some of us do instack:
https://github.com/slagle/instack

And some of us do devstack and then fire up another horizon with
tuskar-ui running:
https://github.com/openstack/tuskar-ui/blob/master/doc/source/install.rst

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen





Re: [openstack-dev] [Neutron][L3] To learn VM prefixes for reachability in the cloud using a routing protocol

2014-05-21 Thread Assaf Muller
- Original Message -
 
 
 Hi,
 
  My name is Keshava; I spoke at the last Atlanta summit about using an IGP to
  learn VM prefixes and advertise them for reachability within the cloud.
 

Is there a document that could explain the use cases for something like this?

  Instead of running BGP, below are some thoughts on using OSPF, a routing
  protocol that also supports hierarchical network design.
 
 Here are my thoughts about the same.
 
 
 
  1. OSPF will run on each of the nodes under that ToR switch for a
  particular area.
 
  2. Each tenant VM prefix needs to be added as a static route.
 
  3. The nodes will form adjacencies with each other in that area.
 
  a. If it is a broadcast network, one of the OSPF routers will become the
  Designated Router (DR) on behalf of all the others in that area.
 
  4. The L1 switch/nodes, acting as Area Border Routers (ABRs), will connect
  to the L2 switch, which is in Area 0.
 
  This will help with:
 
  route aggregation,
 
  avoiding route flooding if any node goes down.
 
  If any node/router is of higher or lower capacity, it is possible to set a
  'forward to other router' option for egress traffic.
 
 
 
 Let us know peoples opinion about the same.
 
 Thanks & regards,
 
 Keshava.A
 
 
 
 



Re: [openstack-dev] [TripleO] Installation instructions

2014-05-21 Thread James Polley
On Wed, May 21, 2014 at 4:31 PM, Jason Rist jr...@redhat.com wrote:

 On 05/21/2014 02:22 PM, Sanchez, Cristian A wrote:
  Hi,
  In our team we’re planning to make some contributions to TripleO, after
 we met with Robert Collins in an Intel-hosted meeting during the Summit.
   As our first step we want to use TripleO to deploy the under-cloud and
  over-cloud. Are there instructions on how to start with this?
 
  Thanks
 
  Cristian
 
 

 Hi Cristian - There are quite a few options.  Some of us do devtest:
 https://wiki.openstack.org/wiki/Tuskar/Devtest


There are more docs for devtest at
http://docs.openstack.org/developer/tripleo-incubator/devtest.html - less
RedHat specific, but they won't get you to the point of having Tuskar
running.

That doc links to docs for the devtest_undercloud.sh and
devtest_overcloud.sh scripts, which will show you one way to deploy the
clouds.




 Some of us do instack:
 https://github.com/slagle/instack

 And some of us do devstack and then fire up another horizon with
 tuskar-ui running:
 https://github.com/openstack/tuskar-ui/blob/master/doc/source/install.rst

 -J

 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 openuc: +1.972.707.6408
 mobile: +1.720.256.3933
 Freenode: jrist
 github/identi.ca: knowncitizen




Re: [openstack-dev] [TripleO] Spec Template Change Proposal

2014-05-21 Thread James Polley
On Wed, May 21, 2014 at 4:37 PM, Jay Dobies jason.dob...@redhat.com wrote:

 Currently, there is the following in the template:



 Proposed change
 ===

 [snip]

  Alternatives
  ------------

 [snip]

 Security impact
 ---



 The unit tests assert the top and second level sections are standard, so
 if I add a section at the same level as Alternatives under Proposed Change,
 the tests will fail. If I add a third level section using ^, they pass.

 The problem is that you can't add a ^ section under Proposed Change.
 Sphinx complains about a title level inconsistency since I'm skipping the
 second level and jumping to the third. But I can't add a second-level
 section directly under Proposed Change because it will break the unit tests
 that validate the structure.

 The proposed change is going to be one of the beefier sections of a spec,
 so not being able to subdivide it is going to make the documentation messy
 and removes the ability to link directly to a portion of a proposed change.

 I propose we add a section at the top of Proposed Change called Overview
 that will hold the change itself. That will allow us to use third level
  sections in the change itself while still having the first- and second-level
  section structure validated by the tests.
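Under that proposal, a spec skeleton would look roughly like this (the third-level heading name below is only illustrative):

```rst
Proposed change
===============

Overview
--------

The bulk of the proposed change goes here...

Data model impact
^^^^^^^^^^^^^^^^^

...now a valid third-level section nested under Overview.

Alternatives
------------
```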


I like all of this plan, except for the name Overview. To me, Overview
suggests a high-level summary rather than being one of the beefier
sections of a spec. Something like Detail or Detailed overview
(because the low-level detail will come in the changes that implement the
spec, not in the spec) seem like better descriptions of what we intend to
have there.


 I have no problem making the change to the templates, unit tests, and any
 existing specs (I don't think we have any yet), but before I go through
 that, I wanted to make sure there wasn't a major disagreement.

 Thoughts?




[openstack-dev] [infra] Nominating Nikita Konovalov for storyboard-core

2014-05-21 Thread James E. Blair
Nikita Konovalov has been reviewing changes to both storyboard and
storyboard-webclient for some time.  He is the second most active
storyboard reviewer and is very familiar with the codebase (having
written a significant amount of the server code).  He regularly provides
good feedback, understands where the project is heading, and in general
is in accord with the current core team, which has been treating his +1s
as +2s for a while now.

Please respond with +1s or concerns, and if the consensus is in favor, I
will add him to the group.

Nikita, thank you very much for your work!

-Jim



Re: [openstack-dev] [oslo] Logging exceptions and Python 3

2014-05-21 Thread Ben Nemec
On 05/21/2014 12:11 PM, Igor Kalnitsky wrote:
 So, write:

 LOG.debug(u'Could not do whatever you asked: %s', exc)

 or just:

 LOG.debug(exc)
 
 Actually, that's a bad idea to pass an exception instance to
 some log function: LOG.debug(exc). Let me show you why.
 
 Here a snippet from logging.py:
 
  def getMessage(self):
      if not _unicode:
          msg = str(self.msg)
      else:
          msg = self.msg
          if not isinstance(msg, basestring):
              try:
                  msg = str(self.msg)
              except UnicodeError:
                  msg = self.msg  # we keep the exception object as it is
      if self.args:  # this condition is obviously False here
          msg = msg % self.args
      return msg  # returns the exception object, not text
 
 And here another snippet from the format() method:
 
 record.message = record.getMessage()
 # ... some time formatting ...
 s = self._fmt % record.__dict__ # FAIL
 
  the old %-style string formatting will call str(), not unicode(), and we
  will fail with UnicodeEncodeError.

This is exactly why we had to do
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/log.py#L344

As long as it gets passed in as unicode in the first place the logging
code handles it fine, but if it gets passed in as a generic object it
will blow up.
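A minimal Python 3 sketch of the pattern being recommended here — pass the exception as a lazy argument and let the logging machinery do the string conversion at emit time (the logger name and messages are made up):

```python
import io
import logging

# Capture log output in a buffer so we can inspect what logging produced.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
log = logging.getLogger("encoding-demo")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

try:
    raise ValueError("café not found")   # non-ASCII exception message
except ValueError as exc:
    # No explicit str()/unicode() conversion here: logging defers
    # formatting until the record is emitted.
    log.debug("Could not do whatever you asked: %s", exc)

print(buf.getvalue().strip())
```

This prints "DEBUG Could not do whatever you asked: café not found"; the non-ASCII message survives because the conversion happens inside the logging code.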

@Doug: It occurs to me that this might be a problem with our plan to
stop using the ContextAdapter. :-/

 
 
 
 On Wed, May 21, 2014 at 6:38 PM, Doug Hellmann
 doug.hellm...@dreamhost.comwrote:
 
 On Thu, May 15, 2014 at 11:29 AM, Victor Stinner
 victor.stin...@enovance.com wrote:
 Hi,

  I'm trying to define some rules for porting OpenStack code to Python 3. I
  just added a section about formatting exceptions and the logging module to
  the Port Python 2 code to Python 3 wiki page:

 https://wiki.openstack.org/wiki/Python3#logging_module_and_format_exceptions

 The problem is that I don't know what is the best syntax to log
 exceptions.
 Some projects convert the exception to Unicode, others use str(). I also
 saw
 six.u(str(exc)) which is wrong IMO (it can raise unicode error if the
 message
 contains a non-ASCII character).

 IMO the safest option is to use str(exc). For example, use
 LOG.debug(str(exc)).

 Is there a reason to log the exception as Unicode on Python 2?

 Exception classes that define translatable strings may end up with
 unicode characters that can't be converted to the default encoding
 when str() is called. It's better to let the logging code handle the
 conversion from an exception object to a string, since the logging
 code knows how to deal with unicode properly.

 So, write:

 LOG.debug(u'Could not do whatever you asked: %s', exc)

 or just:

 LOG.debug(exc)

 instead of converting explicitly.

 Doug


 Victor



 
 
 
 




[openstack-dev] [infra] Nominating Sergey Lukjanov for infra-root

2014-05-21 Thread James E. Blair
The Infrastructure program has a unique three-tier team structure:
contributors (that's all of us!), core members (people with +2 ability
on infra projects in Gerrit) and root members (people with
administrative access).  Read all about it here:

  http://ci.openstack.org/project.html#team

Sergey has been an extremely valuable member of infra-core for some time
now, providing reviews on a wide range of infrastructure projects which
indicate a growing familiarity with the large number of complex systems
that make up the project infrastructure.  In particular, Sergey has
expertise in systems related to the configuration of Jenkins jobs, Zuul,
and Nodepool which is invaluable in diagnosing and fixing operational
problems as part of infra-root.

Please respond with any comments or concerns.

Thanks again Sergey for all your work!

-Jim



Re: [openstack-dev] [infra] Gerrit downtime on May 23 for project renames

2014-05-21 Thread James E. Blair
Tom Fifield t...@openstack.org writes:

 May I ask, will the old names have some kind of redirect to the new names?

Of course you may ask!  And it's a great question!  But sadly the answer
is no.  Unfortunately, Gerrit's support for renaming projects is not
very good (which is why we need to take downtime to do it).

I'm personally quite fond of stable URLs.  However, these started as an
experiment so we were bound to get some things wrong (and will
probably continue to do so) and it's better to try to fix them early.

-Jim



Re: [openstack-dev] [oslo] strutils: enhance safe_decode() and safe_encode()

2014-05-21 Thread Doug Hellmann
On Wed, May 21, 2014 at 12:30 PM, John Dennis jden...@redhat.com wrote:
 On 05/15/2014 11:41 AM, Victor Stinner wrote:
 Hi,

 The functions safe_decode() and safe_encode() have been ported to Python 3,
 and changed more than once. IMO we can still improve these functions to make
 them more reliable and easier to use.


 (1) My first concern is that these functions try to guess user expectation
 about encodings. They use sys.stdin.encoding or sys.getdefaultencoding() as
 the default encoding to decode, but this encoding depends on the locale
 encoding (stdin encoding), on stdin (is stdin a TTY? is stdin mocked?), and 
 on
 the Python major version.

 IMO the default encoding should be UTF-8 because most OpenStack components
 expect this encoding.

 Or maybe users want to display data to the terminal, and so the locale
 encoding should be used? In this case, locale.getpreferredencoding() would be
 more reliable than sys.stdin.encoding.

 The problem is you can't know the correct encoding to use until you know
 the encoding of the IO stream, therefore I don't think you can correctly
 write a generic encode/decode functions. What if you're trying to send
 the output to multiple IO streams potentially with different encodings?
  Think that's far-fetched? Nope, it's one of the nastiest and most common
  problems in Python 2. The default encoding differs depending on whether
 the IO target is a tty or not. Therefore code that works fine when
 written to the terminal blows up with encoding errors when redirected to
 a file (because the TTY probably has UTF-8 and all other encodings
 default to ASCII due to sys.defaultencoding).

 Another problem is that Python2 default encoding is ASCII but in Python3
 it's UTF-8 (IMHO the default encoding in Python2 should have been UTF-8,
 that fact it was set to ASCII is the cause of 99% of the encoding
 exceptions in Python2).

 Given that you don't know what the encoding of the IO stream is I don't
 think you should base it on the locale nor sys.stdin. Rather I think we
 should just agree everything is UTF-8. If that messes up someones
 terminal output I think it's fair to say if you're running OpenStack
 you'll need to switch to UTF-8. Anything else requires way more
 knowledge than we have available in a generic function. Solving this so
 the encodings match for each and every IO stream is very complicated,
 note Python3 still punts on this.
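A sketch of what John's "just agree everything is UTF-8" position might look like in code — this is an illustrative rewrite, not the actual oslo strutils implementation:

```python
def safe_decode(text, incoming='utf-8', errors='strict'):
    """Decode bytes to text, defaulting to UTF-8 instead of guessing
    from sys.stdin or the locale (John's proposal, sketched)."""
    if isinstance(text, str):
        return text                      # already text: nothing to do
    try:
        return text.decode(incoming, errors)
    except UnicodeDecodeError:
        # Last-resort fallback so callers (e.g. logging) never blow up.
        return text.decode('utf-8', 'replace')

print(safe_decode(b'caf\xc3\xa9'))       # valid UTF-8 decodes cleanly
print(safe_decode(b'\xff', 'ascii'))     # bad input degrades to U+FFFD
```

Doug's reply below points at the weakness of this approach: terminals using other encodings would see replacement characters instead of their own text.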

Unfortunately we can't just agree to a single encoding in all cases.
Lots of people use encodings other than UTF-8 for terminals, and
that's where these functions are most frequently used.

Doug



 --
 John




Re: [openstack-dev] Manual VM migration

2014-05-21 Thread CARVER, PAUL
Are you sure steps 1 and 2 aren’t in the wrong order? Seems like if you’re 
going to halt the source VM you should take your snapshot after halting. (Of 
course if you don’t intend to halt the VM you can just do your best to quiesce 
your most active writers before taking the snapshot and hope the disk is 
sufficiently consistent.)


1) Take a snapshot of the VM from the source Private Cloud
2) Halts the source VM (optional, but good for state consistency)
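A dry-run sketch of the corrected ordering — halt first, then snapshot. The nova commands are echoed rather than executed, and the server name is made up:

```shell
# Corrected ordering: stop the VM, then snapshot the quiesced disk.
SERVER=my-vm
echo "nova stop $SERVER"
echo "nova image-create --poll $SERVER ${SERVER}-snap"
```

Dropping the echoes would run the real commands against the source cloud; `--poll` blocks until the snapshot image is active.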


