Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Kevin Benton
Can you link to the etherpad you mentioned?

In the meantime, apologies in advance for another analogy. :-)

If I give you an API to sort a list, I'm free to implement it however I
want as long as I return a sorted list. However, there is no way for me to
know, based on a call to this API, that you might only be looking for the
second largest element, so it won't be the most efficient approach because I
will always have to sort the entire list.
If I give you a higher-level API to declare that you want the elements of a
list that match a criterion, in a certain order, then the API can make the
optimization of not actually sorting the whole list if you only need, say,
the largest two elements.

The former is analogous to the security groups API, and the latter to the
GBP API.
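
For illustration only, a rough Python sketch of the analogy (hypothetical functions, not Neutron code):

import heapq

def sort_api(items):
    # Imperative-style API: the caller asked for a fully sorted list,
    # so the implementation has no choice but to sort everything.
    return sorted(items)

def select_api(items, limit=None, descending=False):
    # Declarative-style API: the caller states what they want (e.g. the
    # two largest elements), so the implementation is free to pick a
    # cheaper algorithm than a full sort.
    if descending and limit is not None:
        return heapq.nlargest(limit, items)
    return sorted(items, reverse=descending)[:limit]

data = [7, 3, 9, 1, 5]
second_largest_imperative = sort_api(data)[-2]    # always sorts all of data
second_largest_declarative = select_api(data, limit=2, descending=True)[1]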
On Aug 7, 2014 4:00 PM, Aaron Rosen aaronoro...@gmail.com wrote:




 On Thu, Aug 7, 2014 at 12:08 PM, Kevin Benton blak...@gmail.com wrote:

 I meant 'side stepping' why GBP allows for the comment you made
 previously: "With the latter, a mapping driver could determine that
 communication between these two hosts can be prevented by using an ACL on a
 router or a switch, which doesn't violate the user's intent and buys a
 performance improvement and works with ports that don't support security
 groups."

 Neutron's current API is a logical abstraction and enforcement can be
 done however one chooses to implement it. I'm really trying to understand
 at the network level why GBP allows for these optimizations and performance
 improvements you talked about.

 You absolutely cannot enforce security groups on a firewall/router that
 sits at the boundary between networks. If you try, you are lying to the
 end-user because it's not enforced at the port level. The current neutron
 APIs force you to decide where things like that are implemented.


 The current neutron APIs are just logical abstractions. Where and how
 things are actually enforced is 100% an implementation detail of a vendor's
 system.  Anyways, moving the discussion to the etherpad...



 The higher level abstractions give you the freedom to move the
 enforcement by allowing the expression of broad connectivity requirements.

 Why are you bringing up logging connections?

 This was brought up as a feature proposal to FWaaS because this is a
 basic firewall feature missing from OpenStack. However, this does not
 preclude a FWaaS vendor from logging.

 Personally, I think one could easily write up a very short document,
 probably less than one page, with examples showing/explaining how the current
 neutron API works, even without much networking background.

 The difficulty of the API for establishing basic connectivity isn't
 really the problem. It's when you have to compose a bunch of requirements
 and make sure nothing is violating auditing and connectivity constraints
 that it becomes a problem. We are arguing about the levels of abstraction.
 You could also write up a short document explaining to novice programmers
 how to use C to read and write database entries to an sqlite database, but
 that doesn't mean it's the best level of abstraction for what the users are
 trying to accomplish.

 I'll let someone else explain the current GBP API because I'm not working
 on that. I'm just trying to convince you of the value of declarative
 network configuration.


 On Thu, Aug 7, 2014 at 12:02 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:




 On Thu, Aug 7, 2014 at 9:54 AM, Kevin Benton blak...@gmail.com wrote:

 You said you had no idea what group based policy was buying us so I
 tried to illustrate what the difference between declarative and imperative
 network configuration looks like. That's the major selling point of GBP so
 I'm not sure how that's 'side stepping' any points. It removes the need for
 the user to pick between implementation details like security
 groups/FWaaS/ACLs.


 I meant 'side stepping' why GBP allows for the comment you made
 previously: "With the latter, a mapping driver could determine that
 communication between these two hosts can be prevented by using an ACL on a
 router or a switch, which doesn't violate the user's intent and buys a
 performance improvement and works with ports that don't support security
 groups."

 Neutron's current API is a logical abstraction and enforcement can be
 done however one chooses to implement it. I'm really trying to understand
 at the network level why GBP allows for these optimizations and performance
 improvements you talked about.



 So are you saying that GBP allows someone to be able to configure an
 application that at the end of the day is equivalent  to
 networks/router/FWaaS rules without understanding networking concepts?

 It's one thing to understand the ports an application leverages and
 another to understand the differences between configuring VM firewalls,
 security groups, FWaaS, and router ACLs.


 Sure, but how does group based policy solve this? Security Groups and
 FWaaS are just different places of 

[openstack-dev] [nova] About response of Get server details API v2

2014-08-08 Thread Kanno, Masaki
Hi all,

jclouds printed a stack trace when I tried "Get server details" of API v2 via
jclouds.
I looked into the response body of the API and found that the value of image
was an empty string, as follows.
I think the value of image should be an empty dictionary, as in "Get server
details" of API v3.
What do you think?

{"server": {
  "status": "ACTIVE",
  "updated": "2014-08-07T09:54:26Z",
  <snip>
  "key_name": null,
  "image": "",   <-- here
  "OS-EXT-STS:task_state": null,
  "OS-EXT-STS:vm_state": "active",
  <snip>
  "config_drive": "",
  "metadata": {}
  }
}
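
For reference, a minimal Python sketch of the defensive handling a client currently needs, assuming the response has been parsed into a dict named body:

# Sketch of client-side handling; 'body' is assumed to be the parsed v2 response.
image = body["server"]["image"]
if isinstance(image, dict) and image:
    image_id = image.get("id")    # normal case: the server references an image
else:
    image_id = None               # "" in v2 (or {} in v3): no image reference, e.g. booted from a volume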


Best regards,
 Kanno


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gerrit] Preparing a patch that has dependency to more than one code under review

2014-08-08 Thread Nader Lahouti
Hi,

Is it possible to send a patch for review (i.e. A) on gerrit based on
multiple commits under review (i.e. B and C)?
Based on the wiki page, these commands should be used to add a dependency:
A depends on B, and A depends on C (no dependency between B and C)

#fetch change under review and check out branch based on that change.
git review -d $PARENT_CHANGE_NUMBER
git checkout -b $DEV_TOPIC_BRANCH
# Edit files, add files to git
git commit
git review

This seems to work only for one dependency. Is it possible to repeat the
first command,
(i.e. git review -d $PARENT_CHANGE_NUMBER) multiple times for each
dependency?


Thanks,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HEAT] Is there any line length limitation in paste deploy configuration file?

2014-08-08 Thread Baohua Yang
Hi,
Recently I have noticed the api-paste.ini file in heat has some very
long lines (over the popular 80 characters).
Wondering if there's a recommended length limit for it?
Sometimes, users have to read the file and change configuration
values, so I think it should be kept readable.
 Thanks!

-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Nikola Đipanov
On 08/08/2014 12:12 AM, Stefano Maffulli wrote:
 On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
 My point was simply that we don't have direct control over the
 contributors' activities
 
 This is not correct and I've seen it repeated too often to let it go
 uncorrected: we (the OpenStack project as a whole) have a lot of control
 over contributors to OpenStack. There is a Technical Committee and a
 Board of Directors, corporate members and sponsors... all of these can
 do a lot to make things happen. For example, the Platinum members of the
 Foundation are required at the moment to have at least 'two full time
 equivalents' and I don't see why the board couldn't change that
 requirement, make it more specific.
 

Even if this were true (I don't know if it is or not), I have a hard
time imagining that any such attempt would be effective enough to solve
the current problems.

I think that OSS software wins where it does largely because it *does
not* get managed like a corporate software project. Trying to fit any
classical PM methodology on top of a (very active, mind you) OSS project
will likely fail IMHO, due not only to the lack of control over contributors'
time, but also to the widely different incentives of the participating parties.

N.

 OpenStack is not an amateurish project done by volunteers in their free
 time.  We have lots of leverage we can apply to get things done.
 
 /stef
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] introducing cyclops

2014-08-08 Thread Piyush Harsh
Dear Eoghan,

Thanks for your comments. Although you are correct that rating, charging,
and billing policies are commercially sensitive to operators, if an operator
has an OpenStack installation I still do not see why the stack could not offer
a service that lets the operator input the desired policies, rules, etc. to do
charging and billing out of the box. These policies could still be accessible
only to the operator.

Furthermore, one could envision that, using Heat together with some Django
magic, this could even be offered as a service to the operator's tenants, who
could be distributors or resellers in their client ecosystem, allowing them
to set their own custom policies.

I believe such a stack-based solution would be very welcome to SMEs, new
entrants, etc.

I am planning to attend the Kilo summit in Paris, and I would be very glad
to talk with you and others on this idea and on Cyclops :)

Forking the codebase to stackforge is something which is definitely
possible and thanks a lot for suggesting it.

Looking forward to more constructive discussions on this with you and
others.

Kind regards,
Piyush.


___
Dr. Piyush Harsh, Ph.D.
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
[Site] http://piyush-harsh.info
[Research Lab] http://www.cloudcomp.ch/
Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838


On Fri, Aug 8, 2014 at 12:01 AM, Eoghan Glynn egl...@redhat.com wrote:




  Dear All,
 
  Let me use my first post to this list to introduce Cyclops and initiate a
  discussion towards possibility of this platform as a future incubated
  project in OpenStack.
 
  We at Zurich university of Applied Sciences have a python project in open
  source (Apache 2 Licensing) that aims to provide a platform to do
  rating-charging-billing over ceilometer. We call it Cyclops (A Charging
  platform for OPenStack CLouds).
 
  The initial proof of concept code can be accessed here:
  https://github.com/icclab/cyclops-web 
  https://github.com/icclab/cyclops-tmanager
 
  Disclaimer: This is not the best code out there, but will be refined and
  documented properly very soon!
 
  A demo video from really early days of the project is here:
  https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was
 made,
  several bug fixes and features were added.
 
  The idea presentation was done at Swiss Open Cloud Day at Bern and the
 talk
  slides can be accessed here:
  http://piyush-harsh.info/content/ocd-bern2014.pdf , and more recently
 the
  research paper on the idea was published in 2014 World Congress in
 Computer
  Science (Las Vegas), which can be accessed here:
  http://piyush-harsh.info/content/GCA2014-rcb.pdf
 
  I was wondering, if our effort is something that OpenStack
  Ceilometer/Telemetry release team would be interested in?
 
  I do understand that initially rating-charging-billing service may have
 been
  left out by choice as they would need to be tightly coupled with existing
  CRM/Billing systems, but Cyclops design (intended) is distributed,
 service
  oriented architecture with each component allowing for possible
 integration
  with external software via REST APIs. And therefore Cyclops by design is
  CRM/Billing platform agnostic. Although Cyclops PoC implementation does
  include a basic bill generation module.
 
  We in our team are committed to this development effort and we will have
  resources (interns, students, researchers) work on features and improve
 the
  code-base for a foreseeable number of years to come.
 
  Do you see a chance that our effort could make it in as an incubated project
 in
  OpenStack within Ceilometer?

 Hi Piyush,

 Thanks for bringing this up!

 I should preface my remarks by setting out a little OpenStack
 history, in terms of the original decision not to include the
 rating and billing stages of the pipeline under the ambit of
 the ceilometer project.

 IIUC, the logic was that such rating/billing policies were very
 likely to be:

   (a) commercially sensitive for competing cloud operators

 and:

   (b) already built-out via existing custom/proprietary systems

 The folks who were directly involved at the outset of ceilometer
 can correct me if I've misrepresented the thinking that pertained
 at the time.

 While that logic seems to still apply, I would be happy to learn
 more about the work you've done already on this, and would be
 open to hearing arguments for and against. Are you planning to
 attend the Kilo summit in Paris (Nov 3-7)? If so, it would be a
 good opportunity to discuss further in person.

 In the meantime, stackforge provides a low-bar-to-entry for
 projects in the OpenStack ecosystem that may, or may not, end up
 as incubated projects or as dependencies taken by graduated
 projects. So you might consider moving your code there?

 Cheers,
 Eoghan



  I really would like to hear back from you, comments, suggestions, etc.
 
  Kind regards,
  Piyush.
  

Re: [openstack-dev] [devstack] Core team proposals

2014-08-08 Thread Chmouel Boudjnah
On Thu, Aug 7, 2014 at 8:09 PM, Dean Troyer dtro...@gmail.com wrote:

 Please respond in the usual manner, +1 or concerns.


+1, I would be happy to see Ian joining the team.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Ryu status

2014-08-08 Thread YAMAMOTO Takashi
just an update: the Neutron Ryu CI is getting stable now.
please let me know if you noticed any problems.  thank you.

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Preparing a patch that has dependency to more than one code under review

2014-08-08 Thread Sylvain Bauza

Hi Nader,
On 08/08/2014 09:23, Nader Lahouti wrote:

Hi,

Is it possible to send a patch for review (i.e. A) on gerrit based on 
multiple commit under the review (i.e. B and C)?

Based on the wiki page to add dependency these command should be used:
A-B, A-C (no dependency between B and C)
#fetch change under review and check out branch based on that change.
git review -d $PARENT_CHANGE_NUMBER
git checkout -b $DEV_TOPIC_BRANCH
# Edit files, add files to git
git commit
git review
This seems to work only for one dependency. Is it possible to repeat 
the first command,
(i.e. git review -d $PARENT_CHANGE_NUMBER) multiple times for each 
dependency?





The thing is really simple: just create a local branch for your patch 
series, and do one commit per Gerrit change.


When you run git-review and say yes to what it asks you, a commit hook 
will append a Change-Id to every commit you have in your branch, and it 
will publish the whole series (so A will depend on B, which itself depends 
on C).


If you need to produce a new patchset (i.e. a new iteration of a change), 
just interactively rebase your local branch onto your local master, mark 
the commit you want to amend as 'edit' (and leave 'pick' for the others); 
when Git stops on that patch during the rebase, --amend it and run 
git rebase --continue.


Hope it makes things clearer now,
-Sylvain

Thanks,
Nader.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] how to deprecate a plugin

2014-08-08 Thread YAMAMOTO Takashi
 On Thu, Jul 31, 2014 at 1:43 AM, YAMAMOTO Takashi
 yamam...@valinux.co.jp wrote:
 On Wed, Jul 30, 2014 at 12:17 PM, YAMAMOTO Takashi
 yamam...@valinux.co.jp wrote:
 hi,

 what's the right procedure to deprecate a plugin?  we (ryu team) are
 considering deprecating ryu plugin, in favor of ofagent.  probably in
 K-timeframe, if it's acceptable.

 The typical way is to announce the deprecation at least one cycle
 before removing the deprecated plugin from the tree. So, you could
 announce the ryu plugin is deprecated in Juno, and then remove it from
 the tree in Kilo.

 where is an appropriate place to announce?  this ML?

 Yes, and also, put an item on the weekly Neutron meeting agenda [1] in
 the announcements section.
 
 [1] https://wiki.openstack.org/wiki/Network/Meetings

i added an item.

i'll try to attend the next meeting but i'm not sure if i can.
i have an overlapping schedule this month.  sorry.

YAMAMOTO Takashi

 
 YAMAMOTO Takashi


 Thanks,
 Kyle

 YAMAMOTO Takashi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Further discussions on Cyclops

2014-08-08 Thread Piyush Harsh
Dear Andre,

I have not been an active user of IRC, but I have just now started using
it. I use the handle PH7_0 on irc://rajaniemi.freenode.net ... Tell me the
time and date and we can discuss Cyclops further.

Cheers,
Piyush.

___
Dr. Piyush Harsh, Ph.D.
Researcher, InIT Cloud Computing Lab
Zurich University of Applied Sciences (ZHAW)
[Site] http://piyush-harsh.info
[Research Lab] http://www.cloudcomp.ch/
Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838

 Date: Thu, 7 Aug 2014 17:03:45 +0200
 From: Endre Karlson endre.karl...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] introducing cyclops
 Message-ID:
 
cafxo1ntd8fx8xtnj75a7ofyshf9fitdot4g3xhqr_vcswdf...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

 Hi, are you on IRC? :)

 Endre


 2014-08-07 12:01 GMT+02:00 Piyush Harsh h...@zhaw.ch:

 Dear All,

 Let me use my first post to this list to introduce Cyclops and initiate a
 discussion towards possibility of this platform as a future incubated
 project in OpenStack.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] use of compute node as a storage

2014-08-08 Thread shailendra acharya
I made 4 VMs: 1 controller, 1 network node, and 2 compute nodes. I want 1
compute node to also run as storage, so please help: how can I do this?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-08 Thread Liyi Meng
Hi John, 

I have some comments as well, see below :) 


On 6 August 2014 18:54, Jay Pipes jaypipes at gmail.com wrote:
 So, Liyi Meng has an interesting patch up for Nova:

 https://review.openstack.org/#/c/104876

 1) We should just deprecate both the options, with a note in the option help
 text that these options are not used when volume size is not 0, and that the
 interval is calculated based on volume size

This feels bad.

Liyi: Not worse than having these two options in action, actually. Assume you
know your volume sizes vary from 1G to 64G in your OpenStack deployment; with
your storage backend, creating volumes from these images takes from 30s up to
30 x 64 seconds. How will you choose the right values for
block_device_allocate_retries_interval and block_device_allocate_retries? What
we know for sure is that you have to make
  block_device_allocate_retries_interval * block_device_allocate_retries = 30 x 64 seconds

Let's say you have decided to choose:
block_device_allocate_retries_interval = 30
block_device_allocate_retries = 64

This is obviously optimized for a 64G image; what happens if a 1G image is
used? The worst-case scenario is that the image is ready at the 30th second
but you did your status polling at the 29th second, so your next query will
come at the 59th second. Your VM boot-up process is therefore 29 seconds
slower than it needs to be!

If you apply my algorithm, the calculated interval is int(30/60 + 1) = 1
second, i.e. if you boot your 1G VM, you waste less than 1 second waiting for
it to come up! 1 second vs. 29 seconds is a big difference!

When booting a 64G VM the figure would be similar to the hard-coded value
above, but as an end user, if you know you have to wait more than 32 minutes
to get your VM up, will you care about having wasted half a minute somewhere?
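
For readers following along, a rough Python sketch of the kind of calculation being argued for; the constants are illustrative, not necessarily the exact values in the patch:

SECONDS_PER_GB = 30    # assumed allocation rate of the storage backend
MAX_RETRIES = 60       # e.g. CONF.block_device_allocate_retries

def retry_interval(image_size_gb):
    # Scale the polling interval with the expected total creation time,
    # so small volumes are polled often and large ones rarely.
    estimated_total_seconds = image_size_gb * SECONDS_PER_GB
    return int(estimated_total_seconds / MAX_RETRIES + 1)

print(retry_interval(1))    # 1  -> a 1G volume is polled every second
print(retry_interval(64))   # 33 -> comparable to the hand-tuned 30s interval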

 2) We should deprecate the CONF.block_device_allocate_retries_interval
 option only, and keep the CONF.block_device_allocate_retries configuration
 option as-is, changing the help text to read something like Max number of
 retries. We calculate the interval of the retry based on the size of the
 volume.

What about a slight modification to (2)...

Liyi: If we do want to go with this option, block_device_allocate_retries is
the one to keep. I don't have a strong opinion here; my two cents is to prefer
option 1 above. This option would keep block_device_allocate_retries, which
IMHO is not a wise idea. When we have too many configuration options,
OpenStack becomes difficult for operators to use and gets less widely used.
If an advanced user wants a high level of customization, they can dig into the
source code. OpenStack is in Python, not C; handling the source is not hugely
different from handling the config. In some ways it is even more
straightforward.

3) CONF.block_device_allocate_retries_interval=-1 means calculate
using volume size, and we make it the default, so people can still
override it if they want to. But we also deprecate the option with a
view to removing it during Kilo? Keep
CONF.block_device_allocate_retries as the max number of retries.

 I bring this up on the mailing list because I think Liyi's patch offers an
 interesting future direction to the way that we think about our retry
 approach in Nova. Instead of having hard-coded or configurable interval
 times, I think Liyi's approach of calculating the interval length based on
 some input values is a good direction to take.

Seems like the right direction.

But I do worry that it's quite dependent on the storage backend.
Sometimes the volume create is almost free regardless of the volume
size (with certain types of CoW). So maybe we end up needing some kind
of scaling factor on the weights. I kinda hope I am over thinking
that, and in reality it all works fine. I suspect that is the case.

Liyi: agreed, and in my implementation I have covered this consideration, as
you can see; e.g. when mapping a 1G image into a 64G volume, I call this
function passing in a size of 1G, not 64G.


Thanks,
John
___
From: Liyi Meng
Sent: Thursday, 07 August 2014 10:09 AM
To: Michael Still; OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [nova] Deprecating 
CONF.block_device_allocate_retries_interval

Hi Michael,

Not sure if I am getting you right. I think your proposal wouldn't perform
well in reality.

Firstly, it is difficult to guess a good time that fixes all problems, unless
you wait forever. Just take the volume creation in my bugfix as an example
(https://review.openstack.org/#/c/104876/). If a couple of large volumes are
requested at the same time, even toward a fast storage backend it would take a
long time for each to be created. It is quite normal to see it take more than
an hour to create a volume from a 60G image. That is why I propose that we
need to guess a total timeout based on image size in the bugfix.

Secondly, are you suggesting Eventlet 

Re: [openstack-dev] introducing cyclops

2014-08-08 Thread Stephane Albert
On Thu, Aug 07, 2014 at 12:01:04PM +0200, Piyush Harsh wrote:
 Dear All,
 
 Let me use my first post to this list to introduce Cyclops and initiate a
 discussion towards possibility of this platform as a future incubated project
 in OpenStack.
 
 We at Zurich university of Applied Sciences have a python project in open
 source (Apache 2 Licensing) that aims to provide a platform to do
 rating-charging-billing over ceilometer. We call it Cyclops (A Charging
 platform for OPenStack CLouds).
 
 The initial proof of concept code can be accessed here: https://github.com/
 icclab/cyclops-web  https://github.com/icclab/cyclops-tmanager
 
 Disclaimer: This is not the best code out there, but will be refined and
 documented properly very soon!
 
 A demo video from really early days of the project is here: https://
 www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was made, several bug
 fixes and features were added.
 
 The idea presentation was done at Swiss Open Cloud Day at Bern and the talk
 slides can be accessed here: 
 http://piyush-harsh.info/content/ocd-bern2014.pdf,
 and more recently the research paper on the idea was published in 2014 World
 Congress in Computer Science (Las Vegas), which can be accessed here: http://
 piyush-harsh.info/content/GCA2014-rcb.pdf
 
 I was wondering, if our effort is something that OpenStack 
 Ceilometer/Telemetry
 release team would be interested in?
 
 I do understand that initially rating-charging-billing service may have been
 left out by choice as they would need to be tightly coupled with existing CRM/
 Billing systems, but Cyclops design (intended) is distributed, service 
 oriented
 architecture with each component allowing for possible integration with
 external software via REST APIs. And therefore Cyclops by design is 
 CRM/Billing
 platform agnostic. Although Cyclops PoC implementation does include a basic
 bill generation module.
 
 We in our team are committed to this development effort and we will have
 resources (interns, students, researchers) work on features and improve the
 code-base for a foreseeable number of years to come.
 
 Do you see a chance that our effort could make it in as an incubated project in
 OpenStack within Ceilometer?
 
 I really would like to hear back from you, comments, suggestions, etc.
 
 Kind regards,
 Piyush.
 ___
 Dr. Piyush Harsh, Ph.D.
 Researcher, InIT Cloud Computing Lab
 Zurich University of Applied Sciences (ZHAW)
 [Site] http://piyush-harsh.info
 [Research Lab] http://www.cloudcomp.ch/
 Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

It's billing day on the openstack-dev list, as I can see ;) Glad to see there
are other people wanting to implement Open Source billing.
We've started some work around pricing and billing integration inside
OpenStack, and already showed our first POC at the last summit in Atlanta
to some Ceilometer devs. We are planning on releasing some major
improvements today; the changes are massive, so the code has been split
into multiple patches/reviews, and we started sending them to the gate
yesterday.
We've got some reviews in openstack-infra waiting for the Horizon and Python
client repositories.
As soon as the reviews are validated we will push the code that customizes
the Horizon dashboard to integrate with the billing modules and adds real-time
quoting on instance creation.
The project source code is available at:
http://github.com/stackforge/cloudkitty
If you want more information you can find a wiki page at:
https://wiki.openstack.org/wiki/CloudKitty or reach us via irc on
freenode #cloudkitty.
Maybe we can talk together about the future of our projects when we've
finished pushing changes to stackforge.

Cheers,
Stéphane

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Thierry Carrez
Michael Still wrote:
 [...] I think an implied side effect of
 the runway system is that nova-drivers would -2 blueprint reviews
 which were not occupying a slot.
 
 (If we start doing more -2's I think we will need to explore how to
 not block on someone with -2's taking a vacation. Some sort of role
 account perhaps).

Ideally CodeReview-2s should be kept for blocking code reviews on
technical grounds, not procedural grounds. For example it always feels
weird to CodeReview-2 all feature patch reviews on Feature Freeze day --
that CodeReview-2 really doesn't have the same meaning as a traditional
CodeReview-2.

For those procedural blocks (feature freeze, waiting for runway
room...), it might be interesting to introduce a specific score
(Workflow-2 perhaps) that drivers could set. That would not prevent code
review from happening, that would just clearly express that this is not
ready to land for release cycle / organizational reasons.

Thoughts?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Thierry Carrez
Eoghan Glynn wrote:
 On 08/07/2014 01:41 PM, Eoghan Glynn wrote:
 My point was simply that we don't have direct control over the
 contributors' activities

 This is not correct and I've seen it repeated too often to let it go
 uncorrected: we (the OpenStack project as a whole) have a lot of control
 over contributors to OpenStack. There is a Technical Committee and a
 Board of Directors, corporate members and sponsors... all of these can
 do a lot to make things happen. For example, the Platinum members of the
 Foundation are required at the moment to have at least 'two full time
 equivalents' and I don't see why the board couldn't change that
 requirement, make it more specific.

 OpenStack is not an amateurish project done by volunteers in their free
 time.  We have lots of leverage we can apply to get things done.
 
 There was no suggestion of amatuerish-ness, or even volunteerism,
 in my post.
 
 Simply a recognition of the reality that we are not operating in
 a traditional command  control environment.

I agree with Eoghan here. The main goal of an agile/lean system is to
maximize a development team's productivity. The main goal of open source
project management is not to maximize productivity. It’s to maximize
contributions. I wrote about that a few years ago here (shameless plug):

http://fnords.wordpress.com/2011/01/21/agile-vs-open/

The problem today is that our backlog/inventory/waste has reached levels
where it starts hurting our goal of maximizing contributions, by
creating frustration on the developers' side. So we need to explore ways
to reduce it back to acceptable (or predictable) levels, taking into
account our limited control over our workforce.

Personally I think we just need to get better at communicating the
downstream expectations, so that if we create waste, it's clearly
upstream's fault rather than downstream's. Currently it's the lack of
communication that makes developers produce more / something else than
what core reviewers want to see. Any tool that lets us communicate
expectations better is welcome, and I think the runway approach is one
such tool, simple enough to understand.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Nikola Đipanov
On 08/08/2014 11:37 AM, Thierry Carrez wrote:
 Personally I think we just need to get better at communicating the
 downstream expectations, so that if we create waste, it's clearly
 upstream fault rather than downstream. Currently it's the lack of
 communication that makes developers produce more / something else than
 what core reviewers want to see. Any tool that lets us communicate
 expectations better is welcome, and I think the runway approach is one
 such tool, simple enough to understand.
 

I strongly agree with everything here except the last part of the last
sentence.

To me the runway approach seems like yet another set of arbitrary hoops
that we will put in place so that we don't have to tell people that we
don't have the bandwidth/willingness to review their contribution and help
it land.

It is process over communication at its finest and will in no way
help to foster open and honest communication in the community IMHO. I
don't see it making matters any worse, since I think what we have now is
more or less that with one layer of process less, but I don't see it
making things better either.

The biggest issue I see is that there is no justifiable metric with
which we can back up assigning a slot to a feature other than we say
so. We can do that just as easily without runways.

I'd love for someone to tell me what am I missing here...

Nikola

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-08 Thread Matthew Booth
On 07/08/14 18:54, Kevin L. Mitchell wrote:
 On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
 In any case, the operative point is that CONF.attribute must
 always be
 evaluated inside run-time code, never at module load time.

 ...unless you call register_opts() safely, which is what I'm
 proposing.
 
 No, calling register_opts() at a different point only fixes the import
 issue you originally complained about; it does not fix the problem that
 the configuration option is evaluated at the wrong time.  The example
 code you included in your original email evaluates the configuration
 option at module load time, BEFORE the configuration has been loaded,
 which means that the argument default will be the default of the
 configuration option, rather than the configured value of the
 configuration option.  Configuration options must be evaluated at
 RUN-TIME, after configuration is loaded; they must not be evaluated at
 LOAD-TIME, which is what your original code does.

Ah, thanks, Kevin. The pertinent information is that the config has not
been loaded at module import time, and you'll therefore always get a
default.
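
A minimal sketch of the distinction, with an illustrative option name rather than a real Nova one:

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('my_opt', default='default-value')])

# Broken: the default is evaluated when the module is imported, before the
# config files have been parsed, so 'arg' is frozen at the option's default.
def do_something_broken(arg=CONF.my_opt):
    return arg

# Correct: defer evaluation of the option until the function actually runs.
def do_something(arg=None):
    if arg is None:
        arg = CONF.my_opt
    return arg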

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-08 Thread Matthew Booth
On 07/08/14 19:02, Kevin L. Mitchell wrote:
 On Thu, 2014-08-07 at 17:41 +0100, Matthew Booth wrote:
 ... or arg is an object which defines __nonzero__(), or defines
 __getattr__() and then explodes because of the unexpected lookup of a
 __nonzero__ attribute. Or it's False (no quotes when printed by the
 debugger), but has a unicode type and therefore evaluates to True[1].
 
 If you're passing such exotic objects as parameters that could
 potentially be drawn from configuration instead, maybe that code needs
 to be refactored a bit :)
 
 However, if you want to compare a value with None and write 'foo is
 None' it will always do exactly what you expect, regardless what you
 pass to it. I think it's also nicer to the reviewer and the
 maintainer,
 who then don't need to go looking for context to check if anything
 invalid might be passed in.
 
 In the vast majority of cases, however, we use a value that evaluates to
 False to indicate "use the default", where the default may be drawn from
 configuration.  Yes, there are cases where we must treat, say, 0 as
 distinct from None, but when we don't need to, we should keep the code
 as simple as possible.  After all, I doubt anyone would seriously
 suggest that we must always use something like the _unset sentinel,
 even when None has no special meaning…

I've found a few bugs in OpenStack by checking implicit boolean tests
while reviewing code. Here's a recent one:

https://review.openstack.org/#/c/109006/1/nova/db/sqlalchemy/api.py

Note that the caller has accidentally passed read_deleted=False, a
pretty easy mistake to make, and the bare object test has hidden that
error and silently replaced it with a default.
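
A minimal sketch of that failure mode (simplified, not the actual db API code):

def query_broken(read_deleted=None):
    # A bare truthiness test: an explicit False (or 0, or '') silently
    # falls through to the default.
    read_deleted = read_deleted or 'no'
    return read_deleted

def query(read_deleted=None):
    # An explicit None check preserves the caller's False.
    if read_deleted is None:
        read_deleted = 'no'
    return read_deleted

assert query_broken(False) == 'no'   # the caller's explicit False is lost
assert query(False) is False         # the explicit False is preserved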

Also note that while PEP8 stops short of decreeing against bare object
tests, it does recommend against it. See the section 'Programming
Recommendations':

http://legacy.python.org/dev/peps/pep-0008/

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] backport fixes to old branches

2014-08-08 Thread Osanai, Hisashi

Hi,

On Tuesday, August 05, 2014 8:57 PM, Ihar Hrachyshka wrote:
  Thanks. To facilitate quicker backport, you may also propose the patch
  for review yourself. It may take time before stable maintainers or
  other interested parties get to the bug and do cherry-pick.

I did a cherry-pick for https://bugs.launchpad.net/ceilometer/+bug/1326250 and 
executed git review (https://review.openstack.org/#/c/112806/).

In the review phase I got an error message from Jenkins.
The reason for the error is that happybase-0.8 (the latest one) uses the
execfile function, which has been removed from Python 3.

happybase is not an OpenStack component, so I would like advice on
how to deal with this. 

- console.html
2014-08-08 09:17:45.901 | Downloading/unpacking happybase>=0.5,!=0.7 (from -r 
/home/jenkins/workspace/gate-ceilometer-python33/requirements.txt (line 7))
2014-08-08 09:17:45.901 |   http://pypi.openstack.org/simple/happybase/ uses an 
insecure transport scheme (http). Consider using https if pypi.openstack.org 
has it available
2014-08-08 09:17:45.901 |   Storing download in cache at 
./.tox/_download/http%3A%2F%2Fpypi.openstack.org%2Fpackages%2Fsource%2Fh%2Fhappybase%2Fhappybase-0.8.tar.gz
2014-08-08 09:17:45.901 |   Running setup.py 
(path:/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py)
 egg_info for package happybase
2014-08-08 09:17:45.902 | Traceback (most recent call last):
2014-08-08 09:17:45.902 |   File "<string>", line 17, in <module>
2014-08-08 09:17:45.902 |   File "/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py", line 5, in <module>
2014-08-08 09:17:45.902 | execfile('happybase/_version.py')
2014-08-08 09:17:45.902 | NameError: name 'execfile' is not defined
2014-08-08 09:17:45.902 | Complete output from command python setup.py egg_info:
2014-08-08 09:17:45.902 | Traceback (most recent call last):
2014-08-08 09:17:45.902 |   File "<string>", line 17, in <module>
2014-08-08 09:17:45.902 |   File "/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py", line 5, in <module>
2014-08-08 09:17:45.903 | execfile('happybase/_version.py')
2014-08-08 09:17:45.903 | NameError: name 'execfile' is not defined

- happybase-0.8/setup.py
1 from os.path import join, dirname
2 from setuptools import find_packages, setup
3
4 __version__ = None
5 execfile('happybase/_version.py')

- python's doc
https://docs.python.org/3.3/library/2to3.html?highlight=execfile#2to3fixer-execfile
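
For reference, the usual Python-3-compatible replacement for that call would be something like the following (a sketch of the sort of change needed upstream, not happybase's actual fix):

# Python-3-compatible equivalent of execfile('happybase/_version.py')
with open('happybase/_version.py') as f:
    exec(f.read())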

Best Regards,
Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Core team proposals

2014-08-08 Thread chandankumar


On 08/08/2014 01:05 PM, Chmouel Boudjnah wrote:


On Thu, Aug 7, 2014 at 8:09 PM, Dean Troyer dtro...@gmail.com wrote:


Please respond in the usual manner, +1 or concerns.


+1, I would be happy to see Ian joining the team.


+1

Chmouel


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Thanks,

Chandan Kumar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-08-08 Thread Osanai, Hisashi

Hi,

Is there any way to move ahead with the following topic?

Best Regards,
Hisashi Osanai

On Friday, August 01, 2014 7:32 PM, Hisashi Osanai wrote:
 I would like to follow this discussion so I picked up points.
 
 - There are two way to collect info from swift, one is pollster and
   the other is notification. And we discussed about how to solve the
   performance degradation of swift_middleware here.
   pollster:
- storage.objects
- storage.objects.size
- storage.objects.containers
- storage.containers.objects
- storage.containers.objects.size
   notification:
- storage.objects.incoming.bytes
- storage.objects.outgoing.bytes
- storage.api.request
 
 - storage.objects.incoming.bytes, storage.objects.outgoing.bytes and
   storage.api.request are handled with swift_middleware because ceilometer
   needs to have the info on a per-user and per-tenant basis.
 - swift has statsd but there is no per-user and per-tenant related info
   because to realize this swift has to have keystone-isms into core swift
 code.
 - improves swift_middleware with stopping the 1:1 mapping b/w API calls
 and
   notifications
 - swift may consume 10s of thousands of event per second and this case
 is fairly
   unique so far.
 
 I would like to think about this performance problem from the following
 points of view.
 - the need to handle 10s of thousands of events per second
 - the possibility of losing events (i.e. a swift proxy goes down while events
   are queued in a swift process)
 
 With the notification style there are restrictions on the above points.
 Therefore I would change the style of getting storage.objects.incoming.bytes,
 storage.objects.outgoing.bytes and storage.api.request from notification to
 pollster.
 Here I met the problem pointed out by Mr. Merritt: swift would then have a
 dependency on keystone.
 But I prefer to solve this problem rather than the problems of the
 notification style. What do you think?
 
 My rough idea to solve the dependency problem is
 - enable statsd (or similar function) in swift
 - put a middleware in swift proxy
 - this middleware does not have any communication with ceilometer but
   put a mark to followed middleware or swift proxy
 - store metrics with a tenant and a user by statsd if there is the mark
   store metrics by statsd if there is no mark
 - Ceilometer (central agent) call APIs to get the metrics
 
 Is there any way to solve the dependency problem?
 
 Best Regards,
 Hisashi Osanai


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-08 Thread Neependra Kumar Khare

- Original Message -
From: Thierry Carrez thie...@openstack.org
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, August 6, 2014 4:00:35 PM
Subject: [openstack-dev] Which program for Rally


1. Rally as an essential QA tool
Performance testing (and especially performance regression testing) is
an essential QA function, and a feature that Rally provides. If the QA
team is happy to use Rally to fill that function, then Rally can
obviously be adopted by the (already-existing) QA program. That said,
that would put Rally under the authority of the QA PTL, and that raises
a few questions due to the current architecture of Rally, which is more
product-oriented. There needs to be further discussion between the QA
core team and the Rally team to see how that could work and if that
option would be acceptable for both sides.


I want to share a use case of Rally for performance benchmarking. 
I use Rally to benchmark Keystone performance. I can easily get results
for comparison between different configurations, OpenStack distributions,
etc. Here is a sample result:

https://github.com/stackforge/rally/blob/master/doc/user_stories/keystone/authenticate.rst

IMO Rally can play an essential role in performance regression testing. 


Regards,
Neependra Khare
Performance Engineering @ Red Hat


2. Rally as an essential operator tool
Regular benchmarking of OpenStack deployments is a best practice for
cloud operators, and a feature that Rally provides. With a bit of a
stretch, we could consider that benchmarking is essential to the
completion of the OpenStack project mission. That program could one day
evolve to include more such operations best practices tools. In
addition to the slight stretch already mentioned, one concern here is
that we still want to have performance testing in QA (which is clearly
essential to the production of OpenStack). Letting Rally primarily be
an operational tool might make that outcome more difficult.

3. Let Rally be a product on top of OpenStack
The last option is to not have Rally in any program, and not consider it
*essential* to the production of the OpenStack integrated release or
the completion of the OpenStack project mission. Rally can happily exist
as an operator tool on top of OpenStack. It is built as a monolithic
product: that approach works very well for external complementary
solutions... Also, being more integrated in OpenStack or part of the
OpenStack programs might come at a cost (slicing some functionality out
of rally to make it more a framework and less a product) that might not
be what its authors want.

Let's explore each option to see which ones are viable, and the pros and
cons of each.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-08 Thread Matthew Booth
On 08/08/14 11:04, Matthew Booth wrote:
 On 07/08/14 18:54, Kevin L. Mitchell wrote:
 On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
 In any case, the operative point is that CONF.attribute must
 always be
 evaluated inside run-time code, never at module load time.

 ...unless you call register_opts() safely, which is what I'm
 proposing.

 No, calling register_opts() at a different point only fixes the import
 issue you originally complained about; it does not fix the problem that
 the configuration option is evaluated at the wrong time.  The example
 code you included in your original email evaluates the configuration
 option at module load time, BEFORE the configuration has been loaded,
 which means that the argument default will be the default of the
 configuration option, rather than the configured value of the
 configuration option.  Configuration options must be evaluated at
 RUN-TIME, after configuration is loaded; they must not be evaluated at
 LOAD-TIME, which is what your original code does.
 
 Ah, thanks, Kevin. The pertinent information is that the config has not
 been loaded at module import time, and you'll therefore always get a
 default.

Ironically, the specific instance which prompted this investigation[1]
is in a driver. As drivers are imported dynamically after the config has
been loaded, using a config variable in import context will actually
work as intended. Relying on that behaviour seems pretty nasty to me,
though. We could probably do with a guideline on this.

I did a quick scan of Nova and found 12 instances which aren't in a
driver which appear to be broken due to this at first glance, resulting
in 11 config variables whose specified values appear to be ignored. Bug
here:

https://bugs.launchpad.net/nova/+bug/1354403

Matt

[1]
https://review.openstack.org/#/c/104145/14/nova/virt/vmwareapi/vmware_images.py
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Debug data for the NFS v4 hang issue

2014-08-08 Thread Deepak Shetty
Per yesterday's IRC meeting, I have updated the debug data I had collected
in the github issue @

https://github.com/csabahenk/cirros/issues/9

It has data for both :
32bit nfs client accessing 64bit cirros nfs server
64bit nfs client accessing 64bit cirros nfs server

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ceilometer] [ft] Improving ceil.objectstore.swift_middleware

2014-08-08 Thread Chris Dent

On Fri, 8 Aug 2014, Osanai, Hisashi wrote:


Is there any way to proceed ahead the following topic?


There are three active reviews that are somewhat related to this topic:

Use a FakeRequest object to test middleware:
https://review.openstack.org/#/c/110302/

Publish samples on other threads:
https://review.openstack.org/#/c/110257/

Permit usage of notifications for metering
https://review.openstack.org/#/c/80225/

The third one provides a way to potentially overcome the existing
performance problems that the second one is trying to fix.

These may not be directly what you want, but are something worth
tracking as you explore and think.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-08 Thread Nikola Đipanov
On 08/06/2014 07:54 PM, Jay Pipes wrote:
 I bring this up on the mailing list because I think Liyi's patch offers
 an interesting future direction to the way that we think about our retry
 approach in Nova. Instead of having hard-coded or configurable interval
 times, I think Liyi's approach of calculating the interval length based
 on some input values is a good direction to take.
 

This indeed is a problem that we've seen bite us a number of times, and
I tried to solve it by proposing [1] but didn't get to work on it
further yet.

Having said that - after thinking about it more, I was not sure I like
my own approach in [1] on the grounds of it being too generic (and
overly elaborate) for the particular problem it is solving.

I was then thinking of something similar to what is proposed here, where
we would have a waiting time that is a function of a value that we could
query Cinder for. The allocation rate proposed here seems to fit this nicely,
but in my mind we would query Cinder about it instead of hardcoding it;
however, this can be done later and in cooperation with the Cinder team.


 2) We should deprecate the CONF.block_device_allocate_retries_interval
 option only, and keep the CONF.block_device_allocate_retries
 configuration option as-is, changing the help text to read something
 like Max number of retries. We calculate the interval of the retry
 based on the size of the volume.


I'd go with this as the number of retries can still be useful as a tool
for easy workaround and troubleshooting, but I'd put a big disclaimer
that it is mostly meant for debugging/workaround purposes.

N.

[1] https://review.openstack.org/#/c/87546/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread Pendergrass, Eric
Hi,

We have been struggling to get a decorator working for proposed new RBAC 
functionality in ceilometer-api.  We're hitting a problem where GET request 
query parameters are mucked up by our decorator.  Here's an example call:

curl -H "X-Auth-Token: $TOKEN" 
'http://localhost:8777/v2/meters?q.field=project_id&q.value=8c678720fb5b4e3bb18dee222d7d7933'

And here's the decorator method (we've tried changing the kwargs, args, etc. 
with no luck):

_ENFORCER = None

def protected(controller_class):

    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f):
        @functools.wraps(f)
        def inner(self, **kwargs):
            pdb.set_trace()
            self._rbac_context = {}
            if not _ENFORCER.enforce('context_is_admin',
                                     {},
                                     {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
                self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
                self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            return f(self, **kwargs)
        return inner
    return wrapper

tried this too:

_ENFORCER = None

def protected(*args):

    controller_class = 'meter'
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f, *args):
        def inner(self, *args):
            pdb.set_trace()
            #self._rbac_context = {}
            #if not _ENFORCER.enforce('context_is_admin',
            #                         {},
            #                         {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
            #    self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
            #    self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            #return f(*args)
            f(self, *args)
        return inner
    return wrapper

and here's how it's used:

class MetersController(rest.RestController):
    """Works on meters."""

    _rbac_context = {}

    @pecan.expose()
    def _lookup(self, meter_name, *remainder):
        return MeterController(meter_name), remainder

    @wsme_pecan.wsexpose([Meter], [Query])
    @rbac_validate.protected('meters')
    def get_all(self, q=None):
        """Return all known meters, based on the data recorded so far.

        :param q: Filter rules for the meters to be returned.
        """
        q = q or [] ...


but we get errors similar to the one below, where the arg parser cannot find
the query parameter because the decorator doesn't take a q argument the way
MetersController.get_all does.

Is there any way to get a decorator to work within the v2 API code and wsme 
framework or should we consider another approach?  Decorators would really 
simplify the RBAC idea we're working on, which is mostly code-implemented save 
for this fairly major problem.
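
For what it's worth, here is a standalone illustration (outside pecan/wsme, run under Python 2.7) of why the generic wrapper hides the argument that the framework introspects:

import functools
import inspect

def protected(f):
    @functools.wraps(f)
    def inner(self, **kwargs):
        return f(self, **kwargs)
    return inner

class Controller(object):
    @protected
    def get_all(self, q=None):
        return q

# What a signature-inspecting framework sees: 'q' has disappeared.
print(inspect.getargspec(Controller.get_all))
# ArgSpec(args=['self'], varargs=None, keywords='kwargs', defaults=None)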

I have a WIP registered BP on this at 
https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3.

If I can provide more details I'll be happy to.

Thanks
Eric

  /usr/local/bin/ceilometer-api(10)<module>()
-> sys.exit(api())
  /opt/stack/ceilometer/ceilometer/cli.py(96)api()
-> srv.serve_forever()
  /usr/lib/python2.7/SocketServer.py(227)serve_forever()
-> self._handle_request_noblock()
  /usr/lib/python2.7/SocketServer.py(284)_handle_request_noblock()
-> self.process_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(310)process_request()
-> self.finish_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(323)finish_request()
-> self.RequestHandlerClass(request, client_address, self)
  /usr/lib/python2.7/SocketServer.py(638)__init__()
-> self.handle()
  /usr/lib/python2.7/wsgiref/simple_server.py(124)handle()
-> handler.run(self.server.get_app())
  /usr/lib/python2.7/wsgiref/handlers.py(85)run()
-> self.result = application(self.environ, self.start_response)
  /opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py(663)__call__()
-> return self.app(env, start_response)
  /opt/stack/ceilometer/ceilometer/api/app.py(97)__call__()
-> return self.v2(environ, start_response)
  /usr/local/lib/python2.7/dist-packages/pecan/middleware/static.py(151)__call__()
-> return self.app(environ, start_response)
  /usr/local/lib/python2.7/dist-packages/pecan/middleware/debug.py(289)__call__()
-> return self.app(environ, start_response)
  /usr/local/lib/python2.7/dist-packages/pecan/middleware/recursive.py(56)__call__()
-> return self.application(environ, start_response)
  /opt/stack/ceilometer/ceilometer/api/middleware.py(83)__call__()
-> app_iter = self.app(environ, replacement_start_response)
  /usr/local/lib/python2.7/dist-packages/pecan/core.py(750)__call__()
-> return super(Pecan, self).__call__(environ, start_response)
  /usr/local/lib/python2.7/dist-packages/pecan/core.py(616)__call__()
-> self.invoke_controller(controller, args, kwargs, state)
  

Re: [openstack-dev] [Neutron][Nova] API design and usability

2014-08-08 Thread Andrew Laski


On 08/07/2014 07:57 AM, Mathieu Gagné wrote:

On 2014-08-06 7:58 PM, Robert Collins wrote:


I'm astounded by this proposal - it doesn't remove the garbage
collection complexity at all - it transfers it from our code - Nova -
onto end users. So rather than one tested and consolidated
implementation, we'll have one implementation in saltstack, one
implementation in heat, one implementation in Juju, one implementation
in foreman etc.

In what possible way is that an improvement ?



I agree with Robert. It is not an improvement.

For various reasons, in some parts of our systems, we have to manually 
create ports beforehand and it has always been a mess.


Instance creation often fails for all sorts of reasons, and it's really 
annoying to have to garbage collect orphan ports once in a while. The 
typical user does not use the API and does not care about the 
underlying details.


In other parts of our systems, we do rely on port auto-creation. It 
might have its flaws, but when we use it, it works like a charm and we 
like it. We really appreciate the orchestration and automation done by 
Nova.


IMO, moving the burden of such orchestration (and garbage collection) 
to the end users would be a mistake. It's not a good UX at all.


I could say that removing auto-creation is like having to create your 
volume (from an image) before booting on it. Before BDMv2, that's what 
we had to do and it wasn't cool at all. We had to implement logic that 
waited for the volume to become 'available' before booting on it, 
otherwise Nova would complain about the volume not being available. 
Now that we have BDMv2, it's a much better UX.


I want to be able to run this command and not worry about pre-steps:

  nova boot --num-instances=50 [...] app.example.org



I think the suggestion being made by the 'do not autocreate' camp is to 
allow that, but have the logic for it wrapped into the client. That does 
mean that multiple SDKs might need to implement that logic, but in 
return you are provided with control.  A deployer is going to set a 
specific timeout that they've decided on, but as a user you can 
determine how long you're willing to wait for ports/volumes to be 
created.  And if there is a failure you can make on-the-fly decisions 
about how to handle that.


Also, when Nova is creating a resource on a user's behalf it does not 
provide any feedback on the progress of that operation.  But if those 
resources are created outside of Nova, then the user is exposed to the 
feedback and progress reporting provided by Neutron/Cinder.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-08 Thread Russell Bryant
On 08/07/2014 08:06 PM, Michael Still wrote:
 It seems to me that the tension here is that there are groups who
 would really like to use features in newer libvirts that we don't CI
 on in the gate. Is it naive to think that a possible solution here is
 to do the following:
 
  - revert the libvirt version_cap flag

I don't feel strongly either way on this.  It seemed useful at the time
for being able to decouple upgrading libvirt and enabling features that
come with that.  I'd like to let Dan get back from vacation and weigh in
on it, though.

  - instead implement a third party CI with the latest available
 libvirt release [1]

As for the general idea of doing CI, absolutely.  That was discussed
earlier in the thread, though nobody has picked up the ball yet.  I can
work on it, though.  We just need to figure out a sensible approach.

We've seen several times that building and maintaining 3rd party CI is a
*lot* of work.  Like you said in [1], doing this in infra's CI would be
ideal.  I think 3rd party should be reserved for when running it in the
project's infrastructure is not an option for some reason (requires
proprietary hw or sw, for example).

I wonder if the job could be as simple as one with an added step in the
config to install latest libvirt from source.  Dan, do you think someone
could add a libvirt-current.tar.gz to http://libvirt.org/sources/ ?
Using the latest release seems better than master from git.

I'll mess around and see if I can spin up an experimental job.

  - document clearly in the release notes the versions of dependencies
 that we tested against in a given release: hypervisor versions (gate
 and third party), etc etc

Sure, that sounds like a good thing to document in release notes.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [third-party] Freescale CI log site is being blocked

2014-08-08 Thread Kyle Mestery
Trinath:

In looking at your FWaaS review [1], I noticed the site you are using
for log storage is being blacklisted again, at least by Cisco WSA
appliances. Thus, I cannot see the logs for it. Did you change the
location of your log storage again? Is anyone else seeing this issue?

Thanks,
Kyle


[1] https://review.openstack.org/#/c/109659/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread Pendergrass, Eric
Sorry, wrong BP review link below.  Here is the correct one:  
https://review.openstack.org/#/c/112127/3.  Please disregard the wiki link.

From: Pendergrass, Eric
Sent: Friday, August 08, 2014 6:50 AM
To: openstack-dev@lists.openstack.org
Cc: Giannetti, Fabio
Subject: [Ceilometer] Question on decorators in Ceilometer pecan framework

Hi,

We have been struggling to get a decorator working for proposed new RBAC 
functionality in ceilometer-api.  We're hitting a problem where GET request 
query parameters are mucked up by our decorator.  Here's an example call:

curl -H X-Auth-Token:$TOKEN 'http://localhost:8777/v2/meters?q.field=project_id&q.value=8c678720fb5b4e3bb18dee222d7d7933'

And here's the decorator method (we've tried changing the kwargs, args, etc. 
with no luck):

_ENFORCER = None

def protected(controller_class):

    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f):
        @functools.wraps(f)
        def inner(self, **kwargs):
            pdb.set_trace()
            self._rbac_context = {}
            if not _ENFORCER.enforce('context_is_admin',
                                     {},
                                     {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
                self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
                self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            return f(self, **kwargs)
        return inner
    return wrapper

tried this too:

_ENFORCER = None

def protected(*args):

    controller_class = 'meter'
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f, *args):
        def inner(self, *args):
            pdb.set_trace()
            #self._rbac_context = {}
            #if not _ENFORCER.enforce('context_is_admin',
            #                         {},
            #                         {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
            #    self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
            #    self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            #return f(*args)
            f(self, *args)
        return inner
    return wrapper

and here's how it's used:

class MetersController(rest.RestController):
    """Works on meters."""

    _rbac_context = {}

    @pecan.expose()
    def _lookup(self, meter_name, *remainder):
        return MeterController(meter_name), remainder

    @wsme_pecan.wsexpose([Meter], [Query])
    @rbac_validate.protected('meters')
    def get_all(self, q=None):
        """Return all known meters, based on the data recorded so far.

        :param q: Filter rules for the meters to be returned.
        """
        q = q or []
        ...


but we get errors similar to below where the arg parser cannot find the query 
parameter because the decorator doesn't take a q argument as 
MetersController.get_all does.

Is there any way to get a decorator to work within the v2 API code and wsme 
framework or should we consider another approach?  Decorators would really 
simplify the RBAC idea we're working on, which is mostly code-implemented save 
for this fairly major problem.

I have a WIP registered BP on this at 
https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3.

If I can provide more details I'll be happy to.

Thanks
Eric

  /usr/local/bin/ceilometer-api(10)module()
- sys.exit(api())
  /opt/stack/ceilometer/ceilometer/cli.py(96)api()
- srv.serve_forever()
  /usr/lib/python2.7/SocketServer.py(227)serve_forever()
- self._handle_request_noblock()
  /usr/lib/python2.7/SocketServer.py(284)_handle_request_noblock()
- self.process_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(310)process_request()
- self.finish_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(323)finish_request()
- self.RequestHandlerClass(request, client_address, self)
  /usr/lib/python2.7/SocketServer.py(638)__init__()
- self.handle()
  /usr/lib/python2.7/wsgiref/simple_server.py(124)handle()
- handler.run(self.server.get_app())
  /usr/lib/python2.7/wsgiref/handlers.py(85)run()
- self.result = application(self.environ, self.start_response)
  
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py(663)__call__()
- return self.app(env, start_response)
  /opt/stack/ceilometer/ceilometer/api/app.py(97)__call__()
- return self.v2(environ, start_response)
  
/usr/local/lib/python2.7/dist-packages/pecan/middleware/static.py(151)__call__()
- return self.app(environ, start_response)
  
/usr/local/lib/python2.7/dist-packages/pecan/middleware/debug.py(289)__call__()
- return self.app(environ, start_response)
  
/usr/local/lib/python2.7/dist-packages/pecan/middleware/recursive.py(56)__call__()
- return self.application(environ, start_response)
  /opt/stack/ceilometer/ceilometer/api/middleware.py(83)__call__()
- app_iter = 

Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-08 Thread Roman Podoliaka
Hi Li,

How are you going to make this separation transparent? I mean,
generally, in a function's code, you can't know in advance whether the
transaction will be read-only or whether it will contain an
INSERT/UPDATE/DELETE statement. On the other hand, as a developer, you
could analyze the DB queries that could possibly be issued by this
function and mark the function somehow, so that oslo.db would know
which database connection the transaction should be created for, but this
is essentially what the slave_connection option is for and how it works
now.

Secondly, as you said, the key thing here is to separate reads and
writes. In order to make reads fast/reduce the load on your 'writable'
database, you'd move reads to asynchronous replicas. But you can't do
this transparently either, as there are a lot of places in our code in
which we assume we are using the latest state of data, while
asynchronous replicas might actually be a little bit out of date. So,
in the case of slave_connection, we use it only when it's OK for the code
to work with outdated rows, i.e. we *explicitly* modify the existing
functions to work with slave_connection.
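
To make that explicit opt-in concrete, here is a rough sketch (engine and
function names are purely illustrative, and this is not the actual oslo.db
API): only callers that declare they can tolerate replication lag get the
replica engine, everything else keeps using the master.

import sqlalchemy

# Engines standing in for the writable master and an asynchronous replica.
_MASTER = sqlalchemy.create_engine('sqlite:///master.db')
_SLAVE = sqlalchemy.create_engine('sqlite:///replica.db')


def get_engine(use_slave=False):
    # The replica is used only when the caller has explicitly opted in.
    return _SLAVE if use_slave else _MASTER


def instance_get_all(use_slave=False):
    # A read-only listing that tolerates slightly stale data may pass
    # use_slave=True; code that needs the latest state simply does not.
    with get_engine(use_slave=use_slave).connect() as conn:
        return list(conn.execute(sqlalchemy.text('SELECT 1')))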

Thanks,
Roman

On Fri, Aug 8, 2014 at 7:03 AM, Li Ma skywalker.n...@gmail.com wrote:
 Getting a massive amount of information from data storage to be displayed is
 where most of the activity happens in OpenStack. The two activities of reading
 data and writing (creating, updating and deleting) data are fundamentally
 different.

 The optimization for these two opposite database activities can be done by
 physically separating the databases that service these two different
 activities. All the writes go to the database servers, which then replicate the
 written data to the database server(s) dedicated to servicing the reads.

 Currently, AFAIK, many OpenStack deployments in production try to take
 advantage of a MySQL (including Percona or MariaDB) multi-master Galera cluster.
 It is possible to design and implement a read/write separation schema
 for such a DB cluster.

 Actually, OpenStack has a method for read scalability via defining
 master_connection and slave_connection in configuration, but this method
 lacks flexibility because the choice between master and slave is made in the
 logical context (code). It's not transparent for the application developer.
 As a result, it is not widely used across the OpenStack projects.

 So, I'd like to propose a transparent read/write separation method
 for oslo.db that every project may happily take advantage of
 without any code modification.

 Moreover, I'd like to put it in the mailing list in advance to
 make sure it is acceptable for oslo.db.

 I'd appreciate any comments.

 br.
 Li Ma


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread Pendergrass, Eric
Wrong link again, this is embarrassing :( 
https://review.openstack.org/#/c/112137/3

From: Pendergrass, Eric
Sent: Friday, August 08, 2014 7:15 AM
To: openstack-dev@lists.openstack.org
Subject: RE: [Ceilometer] Question on decorators in Ceilometer pecan framework

Sorry, wrong BP review link below.  Here is the correct one:  
https://review.openstack.org/#/c/112127/3.  Please disregard the wiki link.

From: Pendergrass, Eric
Sent: Friday, August 08, 2014 6:50 AM
To: openstack-dev@lists.openstack.org
Cc: Giannetti, Fabio
Subject: [Ceilometer] Question on decorators in Ceilometer pecan framework

Hi,

We have been struggling to get a decorator working for proposed new RBAC 
functionality in ceilometer-api.  We're hitting a problem where GET request 
query parameters are mucked up by our decorator.  Here's an example call:

curl -H X-Auth-Token:$TOKEN 'http://localhost:8777/v2/meters?q.field=project_id&q.value=8c678720fb5b4e3bb18dee222d7d7933'

And here's the decorator method (we've tried changing the kwargs, args, etc. 
with no luck):

_ENFORCER = None

def protected(controller_class):

    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f):
        @functools.wraps(f)
        def inner(self, **kwargs):
            pdb.set_trace()
            self._rbac_context = {}
            if not _ENFORCER.enforce('context_is_admin',
                                     {},
                                     {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
                self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
                self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            return f(self, **kwargs)
        return inner
    return wrapper

tried this too:

_ENFORCER = None

def protected(*args):

    controller_class = 'meter'
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer()

    def wrapper(f, *args):
        def inner(self, *args):
            pdb.set_trace()
            #self._rbac_context = {}
            #if not _ENFORCER.enforce('context_is_admin',
            #                         {},
            #                         {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
            #    self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
            #    self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
            #return f(*args)
            f(self, *args)
        return inner
    return wrapper

and here's how it's used:

class MetersController(rest.RestController):
    """Works on meters."""

    _rbac_context = {}

    @pecan.expose()
    def _lookup(self, meter_name, *remainder):
        return MeterController(meter_name), remainder

    @wsme_pecan.wsexpose([Meter], [Query])
    @rbac_validate.protected('meters')
    def get_all(self, q=None):
        """Return all known meters, based on the data recorded so far.

        :param q: Filter rules for the meters to be returned.
        """
        q = q or []
        ...


but we get errors similar to below where the arg parser cannot find the query 
parameter because the decorator doesn't take a q argument as 
MetersController.get_all does.

Is there any way to get a decorator to work within the v2 API code and wsme 
framework or should we consider another approach?  Decorators would really 
simplify the RBAC idea we're working on, which is mostly code-implemented save 
for this fairly major problem.

I have a WIP registered BP on this at 
https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3.

If I can provide more details I'll be happy to.

Thanks
Eric

  /usr/local/bin/ceilometer-api(10)module()
- sys.exit(api())
  /opt/stack/ceilometer/ceilometer/cli.py(96)api()
- srv.serve_forever()
  /usr/lib/python2.7/SocketServer.py(227)serve_forever()
- self._handle_request_noblock()
  /usr/lib/python2.7/SocketServer.py(284)_handle_request_noblock()
- self.process_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(310)process_request()
- self.finish_request(request, client_address)
  /usr/lib/python2.7/SocketServer.py(323)finish_request()
- self.RequestHandlerClass(request, client_address, self)
  /usr/lib/python2.7/SocketServer.py(638)__init__()
- self.handle()
  /usr/lib/python2.7/wsgiref/simple_server.py(124)handle()
- handler.run(self.server.get_app())
  /usr/lib/python2.7/wsgiref/handlers.py(85)run()
- self.result = application(self.environ, self.start_response)
  
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py(663)__call__()
- return self.app(env, start_response)
  /opt/stack/ceilometer/ceilometer/api/app.py(97)__call__()
- return self.v2(environ, start_response)
  
/usr/local/lib/python2.7/dist-packages/pecan/middleware/static.py(151)__call__()
- return self.app(environ, start_response)
  

Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Philip Cheong
This thread couldn't help but make me wonder what kind of problems people
hit developing on the linux kernel.

I discovered this pretty incredible article which seemed to have enough
relevant information in it to be worth posting, but which also gives me hope
that OpenStack and its contributors are different enough that we can avoid
similar issues.

Gives some perspective at least...

http://arstechnica.com/information-technology/2013/07/linus-torvalds-defends-his-right-to-shame-linux-kernel-developers/



2014-08-08 11:58 GMT+02:00 Nikola Đipanov ndipa...@redhat.com:

 On 08/08/2014 11:37 AM, Thierry Carrez wrote:
  Personally I think we just need to get better at communicating the
  downstream expectations, so that if we create waste, it's clearly
  upstream fault rather than downstream. Currently it's the lack of
  communication that makes developers produce more / something else than
  what core reviewers want to see. Any tool that lets us communicate
  expectations better is welcome, and I think the runway approach is one
  such tool, simple enough to understand.
 

 I strongly agree with everything here except the last part of the last
 sentence.

 To me the runway approach seems like yet another set of arbitrary hoops
 that we will put in place so that we don't have to tell people that we
 don't have bandwidth/willingness to review and help their contribution in.

 It is process over communication at its finest and will in no way
 help to foster open and honest communication in the community IMHO. I
 don't see it making matters any worse, since I think what we have now is
 more or less that with one fewer layer of process, but I don't see it
 making things better either.

 The biggest issue I see is that there is no justifiable metric with
 which we can back up assigning a slot to a feature other than we say
 so. We can do that just as easily without runways.

 I'd love for someone to tell me what am I missing here...

 Nikola

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Philip Cheong*
*Elastx *| Public and Private PaaS
email: philip.che...@elastx.se
office: +46 8 557 728 10
mobile: +46 702 8170 814
twitter: @Elastx https://twitter.com/Elastx
http://elastx.se
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-08 Thread Mike Bayer

On Aug 8, 2014, at 12:03 AM, Li Ma skywalker.n...@gmail.com wrote:

 
 So, I'd like to propose a transparent read/write separation method 
 for oslo.db that every project may happily take advantage of 
 without any code modification.


A single transaction begins, which is to emit a series of INSERT, UPDATE, and 
SELECT statements.   Are you proposing that this system in fact produce two 
separate transactions on two separate backends, and deliver the SELECT 
statements to the slave?   That approach isn’t feasible - SELECTs are part of a 
“write” transaction just as much as the other statements are (as they can be 
SELECTing locally uncommitted data), as they deliver data which is part of the 
transactional context as well as intended for those DML statements.   
Otherwise, by what system could this read/write split be “transparent”?
The reader/writer decision has to be made at the transaction level, not the statement level, and 
without an up-front declaration as to whether a transaction is to be a reader or 
a writer, it’s not possible.
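
A small sketch of that point, with illustrative names only: the backend is
chosen once, when the transaction is opened, and every statement inside it
(SELECTs included) runs against that same backend.

import contextlib
import sqlalchemy

_MASTER = sqlalchemy.create_engine('sqlite:///master.db')
_REPLICA = sqlalchemy.create_engine('sqlite:///replica.db')


@contextlib.contextmanager
def transaction(readonly=False):
    # The reader/writer decision is made up front for the whole
    # transaction, not per statement.
    engine = _REPLICA if readonly else _MASTER
    with engine.begin() as conn:
        yield conn


with transaction() as conn:               # writer: everything on the master
    conn.execute(sqlalchemy.text('SELECT 1'))

with transaction(readonly=True) as conn:  # declared reader: may use the replica
    conn.execute(sqlalchemy.text('SELECT 1'))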



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread David Stanek
It looks like maybe WSME or Pecan is inspecting the method signature. Have you 
tried to change the order of the decorators?
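
To illustrate why the signature matters, here is a minimal, self-contained
sketch in plain Python (no WSME or Pecan involved): a wrapper that collapses
everything into **kwargs hides the q parameter from argument introspection,
while a wrapper that mirrors the original signature keeps it visible. Moving
the RBAC decorator outside wsexpose, so that wsexpose still inspects the
undecorated method, is another variant of the same idea.

import functools
import inspect


def hides_signature(f):
    @functools.wraps(f)
    def inner(self, **kwargs):      # what the posted decorator does
        return f(self, **kwargs)
    return inner


def keeps_signature(f):
    @functools.wraps(f)
    def inner(self, q=None):        # mirrors get_all(self, q=None)
        return f(self, q=q)
    return inner


class Demo(object):
    @hides_signature
    def get_all_hidden(self, q=None):
        return q or []

    @keeps_signature
    def get_all_visible(self, q=None):
        return q or []


# Signature-based argument mapping only finds 'q' in the second case.
print(inspect.getargspec(Demo.get_all_hidden))
print(inspect.getargspec(Demo.get_all_visible))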


On Aug 8, 2014, at 9:16, Pendergrass, Eric eric.pendergr...@hp.com wrote:

 Wrong link again, this is embarrassing :( 
 https://review.openstack.org/#/c/112137/3
  
 From: Pendergrass, Eric 
 Sent: Friday, August 08, 2014 7:15 AM
 To: openstack-dev@lists.openstack.org
 Subject: RE: [Ceilometer] Question on decorators in Ceilometer pecan framework
  
 Sorry, wrong BP review link below.  Here is the correct one: 
 https://review.openstack.org/#/c/112127/3.  Please disregard the wiki link.
  
 From: Pendergrass, Eric 
 Sent: Friday, August 08, 2014 6:50 AM
 To: openstack-dev@lists.openstack.org
 Cc: Giannetti, Fabio
 Subject: [Ceilometer] Question on decorators in Ceilometer pecan framework
  
 Hi,
  
 We have been struggling to get a decorator working for proposed new RBAC 
 functionality in ceilometer-api.  We’re hitting a problem where GET request 
 query parameters are mucked up by our decorator.  Here’s an example call:
  
 curl -H X-Auth-Token:$TOKEN 'http://localhost:8777/v2/meters?q.field=project_id&q.value=8c678720fb5b4e3bb18dee222d7d7933'
 
 And here’s the decorator method (we’ve tried changing the kwargs, args, etc. 
 with no luck):
 
 _ENFORCER = None
 
 def protected(controller_class):
 
     global _ENFORCER
     if not _ENFORCER:
         _ENFORCER = policy.Enforcer()
 
     def wrapper(f):
         @functools.wraps(f)
         def inner(self, **kwargs):
             pdb.set_trace()
             self._rbac_context = {}
             if not _ENFORCER.enforce('context_is_admin',
                                      {},
                                      {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
                 self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
                 self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
             return f(self, **kwargs)
         return inner
     return wrapper
 
 tried this too:
 
 _ENFORCER = None
 
 def protected(*args):
 
     controller_class = 'meter'
     global _ENFORCER
     if not _ENFORCER:
         _ENFORCER = policy.Enforcer()
 
     def wrapper(f, *args):
         def inner(self, *args):
             pdb.set_trace()
             #self._rbac_context = {}
             #if not _ENFORCER.enforce('context_is_admin',
             #                         {},
             #                         {'roles': pecan.request.headers.get('X-Roles', '').split(',')}):
             #    self._rbac_context['project_id'] = pecan.request.headers.get('X-Project-Id')
             #    self._rbac_context['user_id'] = pecan.request.headers.get('X-User-Id')
             #return f(*args)
             f(self, *args)
         return inner
     return wrapper
 
 and here’s how it's used:
 
 class MetersController(rest.RestController):
     """Works on meters."""
 
     _rbac_context = {}
 
     @pecan.expose()
     def _lookup(self, meter_name, *remainder):
         return MeterController(meter_name), remainder
 
     @wsme_pecan.wsexpose([Meter], [Query])
     @rbac_validate.protected('meters')
     def get_all(self, q=None):
         """Return all known meters, based on the data recorded so far.
 
         :param q: Filter rules for the meters to be returned.
         """
         q = q or []
         …
  
  
 but we get errors similar to below where the arg parser cannot find the query 
 parameter because the decorator doesn’t take a q argument as 
 MetersController.get_all does. 
  
 Is there any way to get a decorator to work within the v2 API code and wsme 
 framework or should we consider another approach?  Decorators would really 
 simplify the RBAC idea we’re working on, which is mostly code-implemented 
 save for this fairly major problem.
  
 I have a WIP registered BP on this at 
 https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3.
  
 If I can provide more details I’ll be happy to.
  
 Thanks
 Eric
  
   /usr/local/bin/ceilometer-api(10)module()
 - sys.exit(api())
   /opt/stack/ceilometer/ceilometer/cli.py(96)api()
 - srv.serve_forever()
   /usr/lib/python2.7/SocketServer.py(227)serve_forever()
 - self._handle_request_noblock()
   /usr/lib/python2.7/SocketServer.py(284)_handle_request_noblock()
 - self.process_request(request, client_address)
   /usr/lib/python2.7/SocketServer.py(310)process_request()
 - self.finish_request(request, client_address)
   /usr/lib/python2.7/SocketServer.py(323)finish_request()
 - self.RequestHandlerClass(request, client_address, self)
   /usr/lib/python2.7/SocketServer.py(638)__init__()
 - self.handle()
   /usr/lib/python2.7/wsgiref/simple_server.py(124)handle()
 - handler.run(self.server.get_app())
   /usr/lib/python2.7/wsgiref/handlers.py(85)run()
 - self.result = application(self.environ, self.start_response)
   
 

Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-08 Thread Russell Bryant
On 08/08/2014 01:46 AM, Luke Gorrie wrote:
 On 8 August 2014 02:06, Michael Still mi...@stillhq.com wrote:
 
 1: I think that ultimately should live in infra as part of check, but
 I'd be ok with it starting as a third party if that delivers us
 something faster. I'd be happy enough to donate resources to get that
 going if we decide to go with this plan.
 
 
 Can we cooperate somehow?
 
 We are already working on bringing up a third party CI covering QEMU 2.1
 and Libvirt 1.2.7. The intention of this CI is to test the software
 configuration that we are recommending for NFV deployments (including
 vhost-user feature which appeared in those releases), and to provide CI
 cover for the code we are offering for Neutron.
 
 Michele Paolino is working on this and the relevant nova/devstack changes.

It sounds like what you're working on is a separate thing.  You're
targeting coverage for a specific set of use cases, while this is a
flavor of the general CI coverage we're already doing, but with the
latest (not pegged) libvirt (and maybe qemu).

By all means, more testing is useful though.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Chris Dent

On Fri, 8 Aug 2014, Nikola Đipanov wrote:


To me the runway approach seems like yet another set of arbitrary hoops
that we will put in place so that we don't have to tell people that we
don't have bandwidth/willingness to review and help their contribution in.


I pretty much agree with this. As things stand there are a lot of
hoops for casual contributors. For the people who make more regular
contributions these hoops are either taken as the norm and good safety
precautions or are an annoying tax that you just kind of have to
deal with.

Few of those hoops say clearly and explicitly that the project is
resource constrained. There are certainly lots of clues and cues
that this is going on. It would be best to be as open and upfront as
possible.

Meanwhile, there are fairly perverse incentives in place that work
against strategic contribution, despite many people acknowledging
the need to be more strategic.

It's a tricky problem. If there really is a resource starvation
problem, it is best to be honest that this is a project that is
primarily funded by and staffed from organizational members. From
there is where strategic resources will have to come, in part
because of the incentives, in part because those organizational
members want a healthy framework on which to lay their tactical
changes and a context in which to say lookee, we're a part of this
big deal thing.

But after all that it's important to keep in mind that shit's not
broken: Every few days I'll update all my various repos and think wow
that's an awful lot of changed code.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-08 Thread Luke Gorrie
On 8 August 2014 15:27, Russell Bryant rbry...@redhat.com wrote:

 It sounds like what you're working on is a separate thing.


Roger. Just wanted to check if our work could have some broader utility,
but as you say we do have a specific use case in mind.

Cheers!
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] usage patterns for oslo.config

2014-08-08 Thread Coles, Alistair
I've been looking at the implications of applying oslo.config in Swift, and I 
have a question about the best pattern for registering options.

Looking at how keystone uses oslo.config, the pattern seems to be to have all 
options declared and registered 'up-front' in a single place 
(keystone/common/config.py) before loading wsgi pipeline/starting the service. 
Is there another usage pattern where each middleware registers its options 
independently 'on-demand' rather than maintaining them all in a single place?

I read about a pattern [1] whereby modules register opts during import, but 
does that require there to be some point in the lifecycle where all required 
modules are imported *before* parsing config files? Seems like that would mean 
parsing the wsgi pipeline to 'discover' the middleware modules being used, 
importing all those modules, then parsing config files, then loading the wsgi 
pipeline?

OR - is it acceptable for each middleware module to register its own options 
if/when it is imported during wsgi pipeline loading (CONF.register_options()) 
and then call CONF.reload_config_files() ?
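
For what it's worth, a minimal sketch of the register-at-import pattern from
[1]; the option and group names here are made up for illustration. Each
middleware module owns its options and registers them when it is imported,
which is exactly why the import-before-parse ordering question above matters.

from oslo.config import cfg

_OPTS = [
    cfg.IntOpt('max_items',
               default=100,
               help='Illustrative option owned by this middleware.'),
]

CONF = cfg.CONF
CONF.register_opts(_OPTS, group='example_middleware')


class ExampleMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Reading the value is safe once the config files have been parsed
        # (or re-parsed) after this module was imported.
        _ = CONF.example_middleware.max_items
        return self.app(environ, start_response)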

Thanks,
Alistair

[1] http://docs.openstack.org/developer/oslo.config/cfg.html#global-configopts

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Kyle Mestery
On Thu, Aug 7, 2014 at 1:26 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Tue, Aug 5, 2014 at 9:03 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Hi everyone,

 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.

 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

 It all boils down to an imbalance between strategic and tactical
 contributions. At the beginning of this project, we had a strong inner
 group of people dedicated to fixing all loose ends. Then a lot of
 companies got interested in OpenStack and there was a surge in tactical,
 short-term contributions. We put on a call for more resources to be
 dedicated to strategic contributions like critical bugfixing,
 vulnerability management, QA, infrastructure... and that call was
 answered by a lot of companies that are now key members of the OpenStack
 Foundation, and all was fine again. But OpenStack contributors kept on
 growing, and we grew the narrowly-focused population way faster than the
 cross-project population.


 At the same time, we kept on adding new projects to incubation and to
 the integrated release, which is great... but the new developers you get
 on board with this are much more likely to be tactical than strategic
 contributors. This also contributed to the imbalance. The penalty for
 that imbalance is twofold: we don't have enough resources available to
 solve old, known OpenStack-wide issues; but we also don't have enough
 resources to identify and fix new issues.

 We have several efforts under way, like calling for new strategic
 contributors, driving towards in-project functional testing, making
 solving rare issues a more attractive endeavor, or hiring resources
 directly at the Foundation level to help address those. But there is a
 topic we haven't raised yet: should we concentrate on fixing what is
 currently in the integrated release rather than adding new projects ?


 TL;DR: Our development model is having growing pains. until we sort out the
 growing pains adding more projects spreads us too thin.

+100

 In addition to the issues mentioned above, with the scale of OpenStack today
 we have many major cross project issues to address and no good place to
 discuss them.

We do have the ML, as well as the cross-project meeting every Tuesday
[1], but we as a project need to do a better job of actually bringing
up relevant issues here.

[1] https://wiki.openstack.org/wiki/Meetings/ProjectMeeting



 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?



 I really like this idea. As Michael and others alluded to above, we are
 attempting to set cycle goals for Kilo in Nova, but I think it is worth
 doing for all of OpenStack. We would like to make a list of key goals before
 the summit so that we can plan our summit sessions around the goals. On a
 really high level one way to look at this is, in Kilo we need to pay down
 our technical debt.

 The slots/runway idea is somewhat separate from defining key cycle goals; we
 can approve blueprints based on key cycle goals without doing slots.  But
 with so many concurrent blueprints up for review at any given time, the
 review teams are doing a lot of multitasking and humans are not very good at
 multitasking. Hopefully slots can help address this issue, and hopefully
 allow us to actually merge more blueprints in a given cycle.

I'm not 100% sold on what the slots idea buys us. What I've seen this
cycle in Neutron is that we have a LOT of BPs proposed. We approve
them after review. And then we hit one of two issues: Slow review
cycles, and slow code turnaround issues. I don't think slots would
help this, and in fact may cause more issues. If we approve a BP and
give it a slot for which the eventual result is slow review and/or
code review turnaround, we're right back where we started. Even worse,
we may have not picked a BP for which the code submitter would have
turned around reviews faster. So we've now doubly hurt ourselves. I
have no idea how to solve this issue, but by over subscribing the
slots (e.g. over approving), we allow for the submissions with faster
turnaround a chance to merge quicker. With slots, we've removed this
capability by limiting what is even allowed to be considered for
review.

Thanks,
Kyle



 On the integrated release side, 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Kyle Mestery
On Fri, Aug 8, 2014 at 4:06 AM, Thierry Carrez thie...@openstack.org wrote:
 Michael Still wrote:
 [...] I think an implied side effect of
 the runway system is that nova-drivers would -2 blueprint reviews
 which were not occupying a slot.

 (If we start doing more -2's I think we will need to explore how to
 not block on someone with -2's taking a vacation. Some sort of role
 account perhaps).

 Ideally CodeReview-2s should be kept for blocking code reviews on
 technical grounds, not procedural grounds. For example it always feels
 weird to CodeReview-2 all feature patch reviews on Feature Freeze day --
 that CodeReview-2 really doesn't have the same meaning as a traditional
 CodeReview-2.

 For those procedural blocks (feature freeze, waiting for runway
 room...), it might be interesting to introduce a specific score
 (Workflow-2 perhaps) that drivers could set. That would not prevent code
 review from happening, that would just clearly express that this is not
 ready to land for release cycle / organizational reasons.

 Thoughts?

I like this idea. As a user of -2 for procedural reasons, this would
let me more clearly articulate why I've put a -2 down.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-08 Thread Dean Troyer
On Fri, Aug 8, 2014 at 12:36 AM, Ganapathy, Sandhya 
sandhya.ganapa...@hp.com wrote:

  This is to discuss Bug #1231298 –
 https://bugs.launchpad.net/cinder/+bug/1231298

...

 Conclusion reached with this bug is that, we need to modify cinder client
 in order to accept optional size parameter (as the cinder’s API allows)
  and calculate the size automatically during volume creation from image.

 There is also an opinion that size should not be an optional parameter
 during volume creation – does this mean, Cinder’s API should be changed in
 order to make size a mandatory parameter.


In cinderclient I think you're stuck with size as a mandatory argument to
the 'cinder create' command, as you must be backward-compatible for at
least a deprecation period.[0]

Your option here[1] is to use a sentinel value for size that indicates the
actual volume size should be calculated and let the client do the right
thing under the hood to feed the server API.  Other project CLIs have used
both 'auto' and '0' in situations like this.  I'd suggest '0' as it is
still an integer and doesn't require potentially user-error-prone string
matching to work.
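
A tiny sketch, purely illustrative and not actual cinderclient code, of how a
'0' sentinel could be resolved client-side before the request is sent to the
server API:

def resolve_volume_size(requested_size, image_min_disk_gb):
    # requested_size == 0 is the sentinel meaning "derive the size from the
    # source image"; any other value is passed through unchanged.
    if requested_size == 0:
        return max(image_min_disk_gb, 1)
    return requested_size


assert resolve_volume_size(10, 2) == 10   # explicit size wins
assert resolve_volume_size(0, 2) == 2     # sentinel: size taken from the image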

FWIW, this is why OSC changed 'volume create' to make --size an option and
make the volume name be the positional argument.

[0] The deprecation period for clients is ambiguous as the release cycle
isn't timed but we think of deprecations that way.  Using integrated
release cycles is handy but less than perfect to correlate to the client's
semver releases.
[1] Bad pun alert...or is there such a thing as a bad pun???

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Russell Bryant
On 08/08/2014 05:06 AM, Thierry Carrez wrote:
 Michael Still wrote:
 [...] I think an implied side effect of
 the runway system is that nova-drivers would -2 blueprint reviews
 which were not occupying a slot.

 (If we start doing more -2's I think we will need to explore how to
 not block on someone with -2's taking a vacation. Some sort of role
 account perhaps).
 
 Ideally CodeReview-2s should be kept for blocking code reviews on
 technical grounds, not procedural grounds. For example it always feels
 weird to CodeReview-2 all feature patch reviews on Feature Freeze day --
 that CodeReview-2 really doesn't have the same meaning as a traditional
 CodeReview-2.
 
 For those procedural blocks (feature freeze, waiting for runway
 room...), it might be interesting to introduce a specific score
 (Workflow-2 perhaps) that drivers could set. That would not prevent code
 review from happening, that would just clearly express that this is not
 ready to land for release cycle / organizational reasons.
 
 Thoughts?
 

That sounds much nicer than using code review -2.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Core team proposals

2014-08-08 Thread Gary Kotton
+1

From: chandankumar chandankumar.093...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Friday, August 8, 2014 at 2:14 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [devstack] Core team proposals


On 08/08/2014 01:05 PM, Chmouel Boudjnah wrote:

On Thu, Aug 7, 2014 at 8:09 PM, Dean Troyer dtro...@gmail.com wrote:
Please respond in the usual manner, +1 or concerns.

+1, I would be happy to see Ian joining the team.

+1
Chmouel



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks,

Chandan Kumar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Wuhongning
Does it make sense to move all advanced extension out of ML2, like security 
group, qos...? Then we can just talk about advanced service itself, without 
bothering basic neutron object (network/subnet/port)

Traditionally, SG is applied in CN, and FWaaS is applied in NN (bound to the L3 
agent); however, in DVR they are blurring, since each host has an L3 agent. This 
brings an opportunity to move all service-related features out of the L2 agent and 
coordinate the two security modules (SG & FW for W-E traffic). However, the 
neutron agents (l2/l3/advanced service) need some rework.

When all these service features are detached from ML2, we can easily keep 
mainstream OVS as the L2 backend, and use vendor-specific hardware as the 
service policy enforcement backend (such as ACL/QoS) for high performance.

So maybe a trade-off is better: an optional new policy service plugin 
alongside SG and FWaaS, where the cloud operator can choose what to present to 
the end user, but without the concepts of EP/EPG/BD/RD (now renamed to L2P and L3P). 
Policy would only focus on the service layer, and policy templates would be applied 
to existing neutron port objects.

EP/EPG/L2P/L3P is really not so friendly for end users. I asked some SMB 
IT staff and personal users of Amazon AWS (they all have very limited networking 
knowledge); they can easily understand port/network/subnet, but can't 
understand those GBP objects. So maybe applying advanced service policy to 
existing basic neutron objects is smoother for end users to accept.


From: Kevin Benton [blak...@gmail.com]
Sent: Friday, August 08, 2014 2:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way 
forward


Can you link to the etherpad you mentioned?

In the mean time, apologies for another analogy in
advance. :-)

If I give you an API to sort a list, I'm free to implement it however I want as 
long as I return a sorted list. However, there is no way me to know based on a 
call to this API that you might only be looking for the second largest element, 
so it won't be the most efficient approach because I will always have to sort 
the entire list.
If I give you a higher level API to declare that you want elements of a list 
that match a criteria in a certain order, then the API can make the 
optimization to not actually sort the whole list if you just need the first of 
the largest two elements.

The former is analogous to the security groups API, and the latter to the GBP 
API.
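
As a toy illustration of the analogy in plain Python (assuming a simple list of
numbers): the low-level API has no choice but to sort the whole list, while the
intent-based API is free to pick a cheaper plan.

import heapq

data = [7, 3, 9, 1, 4]

# "Sort the list" API: the whole list must be sorted to answer the question.
second_largest_via_sort = sorted(data)[-2]

# "Elements matching a criteria in a certain order" API: the implementation
# can satisfy the request without a full sort.
second_largest_via_intent = heapq.nlargest(2, data)[-1]

assert second_largest_via_sort == second_largest_via_intent == 7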

On Aug 7, 2014 4:00 PM, Aaron Rosen aaronoro...@gmail.com wrote:



On Thu, Aug 7, 2014 at 12:08 PM, Kevin Benton blak...@gmail.com wrote:
I mean't 'side stepping' why GBP allows for the comment you made previous, 
With the latter, a mapping driver could determine that communication between 
these two hosts can be prevented by using an ACL on a router or a switch, 
which doesn't violate the user's intent and buys a performance improvement and 
works with ports that don't support security groups..

Neutron's current API is a logical abstraction and enforcement can be done 
however one chooses to implement it. I'm really trying to understand at the 
network level why GBP allows for these optimizations and performance 
improvements you talked about.

You absolutely cannot enforce security groups on a firewall/router that sits at 
the boundary between networks. If you try, you are lying to the end-user 
because it's not enforced at the port level. The current neutron APIs force you 
to decide where things like that are implemented.

The current neutron API's are just logical abstractions. Where and how things 
are actually enforced are 100% an implementation detail of a vendors system.  
Anyways, moving the discussion to the etherpad...

The higher level abstractions give you the freedom to move the enforcement by 
allowing the expression of broad connectivity requirements.
Why are you bringing up logging connections?

This was brought up as a feature proposal to FWaaS because this is a basic 
firewall feature missing from OpenStack. However, this does not preclude a 
FWaaS vendor from logging.

Personally, I think one could easily write up a very short document probably 
less than one page with examples showing/exampling how the current neutron API 
works even without a much networking background.

The difficulty of the API for establishing basic connectivity isn't really the 
problem. It's when you have to compose a bunch of requirements and make sure 
nothing is violating auditing and connectivity constraints that it becomes a 
problem. We are arguing about the levels of abstraction. You could also write 
up a short document explaining to novice programmers how to use C to read and 
write database entries to an sqlite database, but that doesn't mean it's the 
best level of abstraction for what the users are trying to accomplish.

I'll let 

Re: [openstack-dev] Which program for Rally

2014-08-08 Thread Anne Gentle
On Wed, Aug 6, 2014 at 5:30 AM, Thierry Carrez thie...@openstack.org
wrote:

 Hi everyone,

 At the TC meeting yesterday we discussed Rally program request and
 incubation request. We quickly dismissed the incubation request, as
 Rally appears to be able to live happily on top of OpenStack and would
 benefit from having a release cycle decoupled from the OpenStack
 integrated release.

 That leaves the question of the program. OpenStack programs are created
 by the Technical Committee, to bless existing efforts and teams that are
 considered *essential* to the production of the OpenStack integrated
 release and the completion of the OpenStack project mission. There are 3
 ways to look at Rally and official programs at this point:

 1. Rally as an essential QA tool
 Performance testing (and especially performance regression testing) is
 an essential QA function, and a feature that Rally provides. If the QA
 team is happy to use Rally to fill that function, then Rally can
 obviously be adopted by the (already-existing) QA program. That said,
 that would put Rally under the authority of the QA PTL, and that raises
 a few questions due to the current architecture of Rally, which is more
 product-oriented. There needs to be further discussion between the QA
 core team and the Rally team to see how that could work and if that
 option would be acceptable for both sides.


Pros: Performance testing is great and we don't have it now that I know of.
Considerations:
- QA then takes on more scope in their mission. Do they want it?
- Is Rally actually splittable this way?
- How important is the PTL role - PTL cage match may ensue next election?



 2. Rally as an essential operator tool
 Regular benchmarking of OpenStack deployments is a best practice for
 cloud operators, and a feature that Rally provides. With a bit of a
 stretch, we could consider that benchmarking is essential to the
 completion of the OpenStack project mission. That program could one day
 evolve to include more such operations best practices tools. In
 addition to the slight stretch already mentioned, one concern here is
 that we still want to have performance testing in QA (which is clearly
 essential to the production of OpenStack). Letting Rally primarily be
 an operational tool might make that outcome more difficult.


Pros: Great start to an operator program for tooling.

Considerations:
- Would have to ensure Rally is what we want first, since "first to propose
becomes PTL" seems to be the model.
- Is benchmark testing and SLA-meeting a best first tool? Or monitoring? Or
deployment? Or some other tools?
- Is this program what operators want?



 3. Let Rally be a product on top of OpenStack
 The last option is to not have Rally in any program, and not consider it
 *essential* to the production of the OpenStack integrated release or
 the completion of the OpenStack project mission. Rally can happily exist
 as an operator tool on top of OpenStack. It is built as a monolithic
 product: that approach works very well for external complementary
 solutions... Also be more integrated in OpenStack or part of the
 OpenStack programs might come at a cost (slicing some functionality out
 of rally to make it more a framework and less a product) that might not
 be what its authors want.


Pros: Rally can set the standards for this path and lead on this pioneer
trail.

Considerations:
- Would this tool be applied against continuously-deployed clouds?
- Is there any preference or advantage to be outside of integrated releases?
- Will people believe it's official?

Hopefully that summarizes how I'm looking at this application --
Anne



 Let's explore each option to see which ones are viable, and the pros and
 cons of each.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Do I need a spec for testing the compute os-networks API?

2014-08-08 Thread Matt Riedemann
This came up while reviewing the fix for bug 1327406 [1].  Basically the 
os-networks API behaves differently depending on your backing network 
manager in nova-network.


We run Tempest in the gate with the FlatDHCPManager, which has the bug; 
if you try to list networks as a non-admin user it won't return anything, 
because you can't assign those networks to a tenant.  With VlanManager you do 
assign a tenant, so list-networks works.


I don't see any os-networks API testing in Tempest today and I'm looking 
to add something, at least for listing networks to show that this bug 
exists (plus get coverage).  The question is do I need a qa-spec to do 
this?  When I wrote the tests for os-quota-classes it was for a bug fix 
since we regressed when we thought the API was broken and unused and it 
was erroneously removed in Icehouse.  I figured I'd treat this the same 
way, but it's going to require changes to the servers client to call the 
os-networks API, plus a new test module.


As far as the test design, we'd skip if using neutron since this is a 
nova-network only test. As far as how to figure out the proper 
assertions given we don't know what the backing network manager is and 
the API is inconsistent in that regard, I might have some other hurdles 
there but would at least like to get a POC going.
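
Purely as a generic sketch, not real Tempest plumbing, and with a stand-in
client because adding the real one is exactly what the POC below does, the
shape of the test could look something like this, with a deliberately loose
assertion since the result depends on the backing network manager:

import unittest


class FakeNetworksClient(object):
    # Stand-in for the servers client change; a FlatDHCPManager cloud may
    # legitimately return an empty list to a non-admin user.
    def list_networks(self):
        return []


class ComputeNetworksTest(unittest.TestCase):

    neutron_available = False   # would come from Tempest configuration

    def setUp(self):
        super(ComputeNetworksTest, self).setUp()
        if self.neutron_available:
            self.skipTest('os-networks is a nova-network only API')
        self.networks_client = FakeNetworksClient()

    def test_list_networks_as_non_admin(self):
        networks = self.networks_client.list_networks()
        # Only the response shape is asserted here; the contents differ
        # between FlatDHCPManager and VlanManager.
        self.assertIsInstance(networks, list)


if __name__ == '__main__':
    unittest.main()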


I guess I can do the POC before the question of blueprints/specs needs 
to be answered...


[1] https://launchpad.net/bugs/1327406

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Do I need a spec for testing the compute os-networks API?

2014-08-08 Thread Andrea Frittoli
Thanks Matt for bringing this up.

There is a tiny start in flight here [0] - if you plan to work on providing
full testing coverage for the n-net api you may want to create a spec with
a link to an etherpad to help track / split the work.

andrea

[0] https://review.openstack.org/#/c/107552/21



On 8 August 2014 15:42, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 This came up while reviewing the fix for bug 1327406 [1].  Basically the
 os-networks API behaves differently depending on your backing network
 manager in nova-network.

 We run Tempest in the gate with the FlatDHCPManager, which has the bug; if
 you try to list networks as a non-admin user it won't return anything you
 can't assign those networks to a tenant.  With VlanManager you do assign a
 tenant so list-networks works.

 I don't see any os-networks API testing in Tempest today and I'm looking
 to add something, at least for listing networks to show that this bug
 exists (plus get coverage).  The question is do I need a qa-spec to do
 this?  When I wrote the tests for os-quota-classes it was for a bug fix
 since we regressed when we thought the API was broken and unused and it was
 erroneously removed in Icehouse.  I figured I'd treat this the same way,
 but it's going to require changes to the servers client to call the
 os-networks API, plus a new test module.

 As far as the test design, we'd skip if using neutron since this is a
 nova-network only test. As far as how to figure out the proper assertions
 given we don't know what the backing network manager is and the API is
 inconsistent in that regard, I might have some other hurdles there but
 would at least like to get a POC going.

 I guess I can do the POC before the question of blueprints/specs needs to
 be answered...

 [1] https://launchpad.net/bugs/1327406

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Do I need a spec for testing the compute os-networks API?

2014-08-08 Thread Matt Riedemann



On 8/8/2014 9:50 AM, Andrea Frittoli wrote:

Thanks Matt for bringing this up/

There is a tiny start in flight here [0] - if you plan to work on
providing full testing coverage for the n-net api you may want to create
a spec with a link to an etherpad to help track / split the work.

andrea

[0] https://review.openstack.org/#/c/107552/21



On 8 August 2014 15:42, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

This came up while reviewing the fix for bug 1327406 [1].  Basically
the os-networks API behaves differently depending on your backing
network manager in nova-network.

We run Tempest in the gate with the FlatDHCPManager, which has the
bug; if you try to list networks as a non-admin user it won't return
anything because you can't assign those networks to a tenant.  With
VlanManager you do assign a tenant so list-networks works.

I don't see any os-networks API testing in Tempest today and I'm
looking to add something, at least for listing networks to show that
this bug exists (plus get coverage).  The question is do I need a
qa-spec to do this?  When I wrote the tests for os-quota-classes it
was for a bug fix since we regressed when we thought the API was
broken and unused and it was erroneously removed in Icehouse.  I
figured I'd treat this the same way, but it's going to require
changes to the servers client to call the os-networks API, plus a
new test module.

As far as the test design, we'd skip if using neutron since this is
a nova-network only test. As far as how to figure out the proper
assertions given we don't know what the backing network manager is
and the API is inconsistent in that regard, I might have some other
hurdles there but would at least like to get a POC going.

I guess I can do the POC before the question of blueprints/specs
needs to be answered...

[1] https://launchpad.net/bugs/1327406

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Andrea,

Thanks, the client stuff was what I needed right now since that was the 
bulk of the code for this simple POC to show the bug:


https://review.openstack.org/#/c/112944/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Jay Pipes

On 08/07/2014 01:17 PM, Ronak Shah wrote:

Hi,
Following a very interesting and vocal thread on GBP for last couple of
days and the GBP meeting today, GBP sub-team proposes following name
changes to the resource.


policy-point for endpoint
policy-group for endpointgroup (epg)

Please reply with your reason and a suggestion if you feel that it is not OK.


Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the 
meeting (was in other meetings yesterday and today).


I'm very happy with the change from endpoint-group -> policy-group.

policy-point is better than endpoint, for sure. The only other 
suggestion I might have would be to use policy-target instead of 
policy-point, since the former clearly delineates what the object is 
used for (a target for a policy).


But... I won't raise a stink about this. Sorry for sparking long and 
tangential discussions on GBP topics earlier this week. And thanks to 
the folks who persevered and didn't take too much offense to my questioning.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread CARVER, PAUL
Wuhongning [mailto:wuhongn...@huawei.com] wrote:

Does it make sense to move all advanced extension out of ML2, like security
group, qos...? Then we can just talk about advanced service itself, without
bothering basic neutron object (network/subnet/port)

A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I still
think it's too late in the game to be shooting down all the work that the
GBP team has put in unless there's a really clean and effective way of
running AND iterating on GBP in conjunction with Neutron without being
part of the Juno release. As far as I can tell they've worked really
hard to follow the process and accommodate input. They shouldn't have
to wait multiple more releases on a hypothetical refactoring of how L3+ vs
L2 is structured.

But, just so I'm not making a horrible mistake, can someone reassure me
that GBP isn't removing the constructs of network/subnet/port from Neutron?

I'm under the impression that GBP is adding a higher level abstraction
but that it's not ripping basic constructs like network/subnet/port out
of the existing API. If I'm wrong about that I'll have to change my
opinion. We need those fundamental networking constructs to be present
and accessible to users that want/need to deal with them. I'm viewing
GBP as just a higher level abstraction over the top.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Image upload/download bandwidth cap

2014-08-08 Thread Tomoki Sekiyama
Hi all,

I'm considering how I can apply image download/upload bandwidth limit for
glance for network QoS.

There was a review for the bandwidth limit, however it is abandoned.

* Download rate limiting
  https://review.openstack.org/#/c/21380/

Was there any discussion at a past summit about why this was not merged?
Or is there an alternative way to cap the bandwidth consumed by Glance?

I appreciate any information about this.

Thanks,
Tomoki Sekiyama



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Kevin Benton
The existing constructs will not change.
On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com] wrote:

 Does it make sense to move all advanced extension out of ML2, like
 security
 group, qos...? Then we can just talk about advanced service itself,
 without
 bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I still
 think it's too late in the game to be shooting down all the work that the
 GBP team has put in unless there's a really clean and effective way of
 running AND iterating on GBP in conjunction with Neutron without being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure me
 that GBP isn't removing the constructs of network/subnet/port from Neutron?

 I'm under the impression that GBP is adding a higher level abstraction
 but that it's not ripping basic constructs like network/subnet/port out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova] API design and usability

2014-08-08 Thread Mathieu Gagné

On 2014-08-08 8:54 AM, Andrew Laski wrote:


On 08/07/2014 07:57 AM, Mathieu Gagné wrote:


IMO, moving the burden of such orchestration (and garbage collection)
to the end users would be a mistake. It's not a good UX at all.

I could say that removing auto-creation is like having to create your
volume (from an image) before booting on it. Before BDMv2, that's what
we had to do and it wasn't cool at all. We had to implement a logic
waiting for the volume to be 'available' before booting on it
otherwise Nova would complain about the volume not being available.
Now that we have BDMv2, it's a much better UX.

I want to be able to run this command and not worry about pre-steps:

  nova boot --num-instances=50 [...] app.example.org



I think the suggestion being made by the 'do not autocreate' camp is to
allow that, but have the logic for it wrapped into the client. That does
mean that multiple SDKs might need to implement that logic, but in
return you are provided with control.



With control comes responsibility.

We went down that path and it didn't go well.

One part of our systems isn't written in Python and we had to rewrite 
part of python-novaclient. For various reasons, we also had to create 
ports/volumes first and orchestrate steps ourselves.


Writing good, bullet-proof orchestration logic (and rollback) takes 
time and experience. I stopped counting the number of bugs we internally 
opened due to bad logic, orchestration, fallback, retries, etc.
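
To give an idea of the kind of logic involved, here is a simplified sketch 
of the volume wait loop that keeps getting reimplemented (it assumes a 
python-cinderclient style client object; the timeout handling is purely 
illustrative, not our actual code):

    import time

    def wait_for_volume_available(cinder, volume_id, timeout=300, interval=5):
        # Poll the volume until it is bootable, or give up.
        deadline = time.time() + timeout
        while time.time() < deadline:
            volume = cinder.volumes.get(volume_id)
            if volume.status == 'available':
                return volume
            if volume.status == 'error':
                raise RuntimeError("volume %s went into error" % volume_id)
            time.sleep(interval)
        raise RuntimeError("timed out waiting for volume %s" % volume_id)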


If OpenStack can provide the logic server-side, the better off we will be. We 
wish to fire-and-forget a nova boot and not have to orchestrate stuff 
on the client-side.


When things are done on the client side, anything can go wrong: your 
script can get killed/stopped for a lot of reasons. And when that happens, 
you have to have logic to either fall back, garbage collect or restart 
from where you left off. This is suboptimal and error prone.


If people wish to orchestrate those steps themselves, they already can 
by providing a port-id instead of a net-id. Same with volumes and BDM.
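
For example (a sketch against the python-novaclient and python-neutronclient 
bindings; the client objects are assumed to be already authenticated):

    def boot_with_net(nova, name, image, flavor, network_id):
        # Let Nova create and manage the port on the network.
        return nova.servers.create(name, image, flavor,
                                   nics=[{"net-id": network_id}])

    def boot_with_port(nova, neutron, name, image, flavor, network_id):
        # Orchestrate the port yourself and hand it to Nova.
        port = neutron.create_port(
            {"port": {"network_id": network_id}})["port"]
        return nova.servers.create(name, image, flavor,
                                   nics=[{"port-id": port["id"]}])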


OpenStack has to be easy to use to attract people, not the other way 
around, where the argument of control is used to avoid implementing 
good logic on the server side (and pushes that burden onto the end users).



 A deployer is going to set a

specific timeout that they've decided on, but as a user you can
determine how long you're willing to wait for ports/volumes to be
created.


Our type of users do not care about such details. They wish to boot an 
instance and have all those details handled for them. You might feel 
greater control is better but not in our case.



 And if there is a failure you can make on-the-fly decisions
 about how to handle that.

Why can't OpenStack make those decisions?



Also, when Nova is creating a resource on a users behalf it does not
provide any feedback on the progress of that operation.


This is something Nova could easily handle with task states. It's 
already the case with block device mapping.



--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Ivar Lazzaro
Hi Paul,

Don't need to worry, you are perfectly right, GBP API is not replacing
anything :).

Also thanks for sharing your opinion on this matter.

Thanks,
Ivar.


On Fri, Aug 8, 2014 at 5:46 PM, CARVER, PAUL pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com] wrote:

 Does it make sense to move all advanced extension out of ML2, like
 security
 group, qos...? Then we can just talk about advanced service itself,
 without
 bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I still
 think it's too late in the game to be shooting down all the work that the
 GBP team has put in unless there's a really clean and effective way of
 running AND iterating on GBP in conjunction with Neutron without being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure me
 that GBP isn't removing the constructs of network/subnet/port from Neutron?

 I'm under the impression that GBP is adding a higher level abstraction
 but that it's not ripping basic constructs like network/subnet/port out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Akash Gangil
Quick Question:
From what I understand, GBP is a high level declarative way of configuring
the network which ultimately gets mapped to basic Neutron APIs via some
business logic. Why can't it be in a module of its own? That way, users
who want to use it can just install that and use it as an interface to
interact with Neutron while the rest continue with their lives as usual.




On Fri, Aug 8, 2014 at 9:25 PM, Kevin Benton blak...@gmail.com wrote:

 The existing constructs will not change.
 On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com] wrote:

 Does it make sense to move all advanced extension out of ML2, like
 security
 group, qos...? Then we can just talk about advanced service itself,
 without
 bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I still
 think it's too late in the game to be shooting down all the work that the
 GBP team has put in unless there's a really clean and effective way of
 running AND iterating on GBP in conjunction with Neutron without being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?

 I'm under the impression that GBP is adding a higher level abstraction
 but that it's not ripping basic constructs like network/subnet/port out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Akash
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] testing performance/latency of various components?

2014-08-08 Thread Chris Friesen
Is there a straightforward way to determine where the time is going when 
I run a command from novaclient?


For instance, if I run nova list, that's going to run novaclient, 
which will send a message to nova-api, which wakes up and does some 
processing and sends a message to nova-conductor, which wakes up and 
does some processing and then calls out to the database, which wakes up 
and does some processing and sends the response back to nova-conductor, 
etc...  And the messaging goes via rabbit, so there are additional 
messaging and wake-ups involved there.


Suppose nova-list takes A amount of time to run...is there a standard 
way to determine how much time was spent in nova-api, in nova-conductor, 
in the database, in rabbit, how much was due to scheduler delays, etc.? 
 Or would I be looking at needing to instrument everything to get that 
level of detail?
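
For reference, the manual-instrumentation route would mean sprinkling 
something like the following over the code paths of interest (purely an 
illustrative sketch, not anything that exists today):

    import functools
    import logging
    import time

    LOG = logging.getLogger(__name__)

    def timed(func):
        # Log the wall-clock time spent in the decorated function.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                LOG.info("%s took %.3fs", func.__name__,
                         time.time() - start)
        return wrapper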


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] testing performance/latency of various components?

2014-08-08 Thread Boris Pavlovic
Chris,

We are working on a cross-service/project profiler, OSprofiler [1], and on
integrating it into all projects (including the gates).
Please join the discussion here: https://review.openstack.org/#/c/103825/

If everything goes well we will get this feature in Juno.
We will then be able to trace a request across services/projects just by
adding --profile to any python client call,
and get results like these: http://boris-42.github.io/profiler/


[1] https://github.com/stackforge/osprofiler

Best regards,
Boris Pavlovic


On Fri, Aug 8, 2014 at 8:03 PM, Chris Friesen chris.frie...@windriver.com
wrote:

 Is there a straightforward way to determine where the time is going when I
 run a command from novaclient?

 For instance, if I run nova list, that's going to run novaclient, which
 will send a message to nova-api, which wakes up and does some processing
 and sends a message to nova-conductor, which wakes up and does some
 processing and then calls out to the database, which wakes up and does some
 processing and sends the response back to nova-conductor, etc...  And the
 messaging goes via rabbit, so there are additional messaging and wake-ups
 involved there.

 Suppose nova-list takes A amount of time to run...is there a standard way
 to determine how much time was spent in nova-api, in nova-conductor, in the
 database, in rabbit, how much was due to scheduler delays, etc.?  Or would
 I be looking at needing to instrument everything to get that level of
 detail?

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Jay Pipes

On 08/08/2014 08:55 AM, Kevin Benton wrote:

The existing constructs will not change.


A followup question on the above...

If the GBP API is merged into Neutron, the next logical steps (from what I 
can tell) will be to add drivers that handle policy-based payloads/requests.


Some of these drivers, AFAICT, will *not* be deconstructing these policy 
requests into the low-level port, network, and subnet 
creation/attachment/detachment commands, but instead will be calling out 
as-is to hardware that speaks the higher-level abstraction API [1], not 
the lower-level port/subnet/network APIs. The low-level APIs would 
essentially be consumed entirely within the policy-based driver, which 
would effectively mean that the only way a system would be able to 
orchestrate networking in systems using these drivers would be via the 
high-level policy API.


Is that correct? Very sorry if I haven't explained clearly my 
question... this is a tough question to frame eloquently :(


Thanks,
-jay

[1] 
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html



On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
mailto:pc2...@att.com wrote:

Wuhongning [mailto:wuhongn...@huawei.com
mailto:wuhongn...@huawei.com] wrote:

 Does it make sense to move all advanced extension out of ML2, like
security
 group, qos...? Then we can just talk about advanced service
itself, without
 bothering basic neutron object (network/subnet/port)

A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
still
think it's too late in the game to be shooting down all the work
that the
GBP team has put in unless there's a really clean and effective way of
running AND iterating on GBP in conjunction with Neutron without being
part of the Juno release. As far as I can tell they've worked really
hard to follow the process and accommodate input. They shouldn't have
to wait multiple more releases on a hypothetical refactoring of how
L3+ vs
L2 is structured.

But, just so I'm not making a horrible mistake, can someone reassure me
that GBP isn't removing the constructs of network/subnet/port from
Neutron?

I'm under the impression that GBP is adding a higher level abstraction
but that it's not ripping basic constructs like network/subnet/port out
of the existing API. If I'm wrong about that I'll have to change my
opinion. We need those fundamental networking constructs to be present
and accessible to users that want/need to deal with them. I'm viewing
GBP as just a higher level abstraction over the top.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Devananda van der Veen
On Fri, Aug 8, 2014 at 2:06 AM, Thierry Carrez thie...@openstack.org wrote:

 Michael Still wrote:
  [...] I think an implied side effect of
  the runway system is that nova-drivers would -2 blueprint reviews
  which were not occupying a slot.
 
  (If we start doing more -2's I think we will need to explore how to
  not block on someone with -2's taking a vacation. Some sort of role
  account perhaps).

 Ideally CodeReview-2s should be kept for blocking code reviews on
 technical grounds, not procedural grounds. For example it always feels
 weird to CodeReview-2 all feature patch reviews on Feature Freeze day --
 that CodeReview-2 really doesn't have the same meaning as a traditional
 CodeReview-2.

 For those procedural blocks (feature freeze, waiting for runway
 room...), it might be interesting to introduce a specific score
 (Workflow-2 perhaps) that drivers could set. That would not prevent code
 review from happening, that would just clearly express that this is not
 ready to land for release cycle / organizational reasons.

 Thoughts?


+1

In addition to distinguishing between procedural and technical blocks, this
sounds like it will also solve the current problem when a core
reviewer has gone on
vacation after blocking something for procedural reasons.

-Deva

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Ivar Lazzaro
Hi Jay,

You can choose. The whole purpose of this is flexibility: if you want
to use the GBP API 'only' with a specific driver, you can.
Additionally, given the 'ML2 like' architecture, the reference mapping
driver can ideally run alongside by filling the core Neutron constructs
without ever 'disturbing' your own driver (I'm not entirely sure about this
but it seems feasible).

I hope this answers your question,
Ivar.


On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/08/2014 08:55 AM, Kevin Benton wrote:

 The existing constructs will not change.


 A followup question on the above...

 If GPB API is merged into Neutron, the next logical steps (from what I can
 tell) will be to add drivers that handle policy-based payloads/requests.

 Some of these drivers, AFAICT, will *not* be deconstructing these policy
 requests into the low-level port, network, and subnet
 creation/attachment/detachment commands, but instead will be calling out
 as-is to hardware that speaks the higher-level abstraction API [1], not the
 lower-level port/subnet/network APIs. The low-level APIs would essentially
 be consumed entirely within the policy-based driver, which would
 effectively mean that the only way a system would be able to orchestrate
 networking in systems using these drivers would be via the high-level
 policy API.

 Is that correct? Very sorry if I haven't explained clearly my question...
 this is a tough question to frame eloquently :(

 Thanks,
 -jay

 [1] http://www.cisco.com/c/en/us/solutions/data-center-
 virtualization/application-centric-infrastructure/index.html

  On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
 mailto:pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com
 mailto:wuhongn...@huawei.com] wrote:

  Does it make sense to move all advanced extension out of ML2, like
 security
  group, qos...? Then we can just talk about advanced service
 itself, without
  bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
 still
 think it's too late in the game to be shooting down all the work
 that the
 GBP team has put in unless there's a really clean and effective way of
 running AND iterating on GBP in conjunction with Neutron without being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how
 L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure
 me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?

 I'm under the impression that GBP is adding a higher level abstraction
 but that it's not ripping basic constructs like network/subnet/port
 out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Salvatore Orlando
It might be because of the wording used, but it seems to me that you're
making it sound like the group policy effort could have been completely
orthogonal to neutron as we know it now.

What I understood is that the declarative abstraction offered by group
policy could do without any existing neutron entity leveraging native
drivers, but can actually be used also with existing neutron plugins
through the mapping driver - which will provide a sort of backward
compatibility. And still in that case I'm not sure one would be able to use
traditional neutron API (or legacy as it has been called), since I
don't know if the mapping driver is bidirectional.

I know this probably stems from my ignorance on the subject - I had
unfortunately very little time to catch-up with this effort in the past
months.

Salvatore

On 8 August 2014 18:49, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Hi Jay,

 You can choose. The whole purpose of this is about flexibility, if you
 want to use GBP API 'only' with a specific driver you just can.
 Additionally, given the 'ML2 like' architecture, the reference mapping
 driver can ideally run alongside by filling the core Neutron constructs
 without ever 'disturbing' your own driver (I'm not entirely sure about this
 but it seems feasible).

 I hope this answers your question,
 Ivar.


 On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/08/2014 08:55 AM, Kevin Benton wrote:

 The existing constructs will not change.


 A followup question on the above...

 If GPB API is merged into Neutron, the next logical steps (from what I
 can tell) will be to add drivers that handle policy-based payloads/requests.

 Some of these drivers, AFAICT, will *not* be deconstructing these policy
 requests into the low-level port, network, and subnet
 creation/attachment/detachment commands, but instead will be calling out
 as-is to hardware that speaks the higher-level abstraction API [1], not the
 lower-level port/subnet/network APIs. The low-level APIs would essentially
 be consumed entirely within the policy-based driver, which would
 effectively mean that the only way a system would be able to orchestrate
 networking in systems using these drivers would be via the high-level
 policy API.

 Is that correct? Very sorry if I haven't explained clearly my question...
 this is a tough question to frame eloquently :(

 Thanks,
 -jay

 [1] http://www.cisco.com/c/en/us/solutions/data-center-
 virtualization/application-centric-infrastructure/index.html

  On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
 mailto:pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com
 mailto:wuhongn...@huawei.com] wrote:

  Does it make sense to move all advanced extension out of ML2, like
 security
  group, qos...? Then we can just talk about advanced service
 itself, without
  bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
 still
 think it's too late in the game to be shooting down all the work
 that the
 GBP team has put in unless there's a really clean and effective way
 of
 running AND iterating on GBP in conjunction with Neutron without
 being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how
 L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure
 me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?

 I'm under the impression that GBP is adding a higher level
 abstraction
 but that it's not ripping basic constructs like network/subnet/port
 out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be
 present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

[openstack-dev] What's happening with stable release management?

2014-08-08 Thread Thomas Goirand
Hi,

For updating keystone from 2014.1.1 to 2014.1.2, I had to:

- Upgrade oslo-config from 1.2.1 to 1.4.0.0~a3
- Upgrade oslo.messaging from 1.3.0~a9 to 1.4.0.0~a3
- Upgrade python-six from 1.6 to 1.7
- Upgrade python-pycadf from 0.4 to 0.5.1
- Add python-ldappool
- Add python-oslo.db
- Add python-oslo.i18n
- Add python-keystonemiddleware, which needs python-crypto >= 2.6
(previously, 2.5 was enough)

So, we have 5 major Python module upgrades, and 4 completely new
libraries which were not there in 2014.1.1. Some of the changes aren't
small at all.

I'm sure that there's very valid reasons for each of the upgrades or
library addition, but I don't think that it is overall reasonable. If
this was to happen during the freeze of Debian, or worse, after a
release, upgrading all of this would be a nightmare, and I'm sure that
the Debian release team would simply refuse.

Should I assign myself to program a robot which will vote -1 on every
change to the stable/Icehouse global-requirements.txt file? Or is sanity
still possible in OpenStack? :)

It is my opinion that we need to review our release process for the
stable releases and our policy for requirement changes, and adopt a
much more conservative attitude.

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-08 Thread Vishvananda Ishaya
Hi Alistair,

Modules can register their own options and there is no need to call 
reload_config_files. The config files are parsed and the values are stored in case 
the option is declared later. The only time you need to reload files is if you add 
new config files in the new module. See the example code:

from oslo.config import cfg

with open("foo", "w") as f:
    f.write("[DEFAULT]\nfoo=bar")

cfg.CONF(["--config-file", "foo"])
try:
    print cfg.CONF.foo
except cfg.NoSuchOptError:
    print "NO OPT"
# OUT: 'NO OPT'

cfg.CONF.register_opt(cfg.StrOpt("foo"))
print cfg.CONF.foo
# OUT: 'bar'

One thing to keep in mind is you don’t want to use config values at import 
time, since this tends to be before the config files have been loaded.
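
For instance, a minimal sketch of the deferred-access pattern (the option and 
function names here are made up for illustration):

    from oslo.config import cfg

    CONF = cfg.CONF
    # Registering at import time is fine...
    CONF.register_opt(cfg.IntOpt("retry_count", default=3))

    def do_work():
        # ...but defer *reading* the value until call time, after the
        # config files have been parsed.
        for _ in range(CONF.retry_count):
            pass  # real work goes here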

Vish

On Aug 8, 2014, at 6:40 AM, Coles, Alistair alistair.co...@hp.com wrote:

 I’ve been looking at the implications of applying oslo.config in Swift, and I 
 have a question about the best pattern for registering options.
  
 Looking at how keystone uses oslo.config, the pattern seems to be to have all 
 options declared and registered 'up-front' in a single place 
 (keystone/common/config.py) before loading wsgi pipeline/starting the 
 service. Is there another usage pattern where each middleware registers its 
 options independently ‘on-demand’ rather than maintaining them all in a 
 single place?
  
 I read about a pattern [1] whereby modules register opts during import, but 
 does that require there to be some point in the lifecycle where all required 
 modules are imported *before* parsing config files? Seems like that would 
 mean parsing the wsgi pipeline to ‘discover’ the middleware modules being 
 used, importing all those modules, then parsing config files, then loading 
 the wsgi pipeline?
  
 OR - is it acceptable for each middleware module to register its own options 
 if/when it is imported during wsgi pipeline loading (CONF.register_options()) 
 and then call CONF.reload_config_files() ?
  
 Thanks,
 Alistair
  
 [1] http://docs.openstack.org/developer/oslo.config/cfg.html#global-configopts
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Bug#1231298 - size parameter for volume creation

2014-08-08 Thread Vishvananda Ishaya

On Aug 8, 2014, at 6:55 AM, Dean Troyer dtro...@gmail.com wrote:

 On Fri, Aug 8, 2014 at 12:36 AM, Ganapathy, Sandhya 
 sandhya.ganapa...@hp.com wrote:
 This is to discuss Bug #1231298 – 
 https://bugs.launchpad.net/cinder/+bug/1231298
 
 ... 
 Conclusion reached with this bug is that, we need to modify cinder client in 
 order to accept optional size parameter (as the cinder’s API allows)  and 
 calculate the size automatically during volume creation from image.
 
 There is also an opinion that size should not be an optional parameter during 
 volume creation – does this mean Cinder’s API should be changed in order to 
 make size a mandatory parameter?
 
 
 In cinderclient I think you're stuck with size as a mandatory argument to the 
 'cinder create' command, as you must be backward-compatible for at least a 
 deprecation period.[0]
 
 Your option here[1] is to use a sentinel value for size that indicates the 
 actual volume size should be calculated and let the client do the right thing 
 under the hood to feed the server API.  Other project CLIs have used both 
 'auto' and '0' in situations like this.  I'd suggest '0' as it is still an 
 integer and doesn't require potentially user-error-prone string matching to 
 work.

We did this for novaclient volume attach and allowed device to be ‘auto' or the 
argument to be omitted. I don’t see a huge problem turning size into an 
optional parameter as long as it doesn’t break older scripts. Turning it from 
an arg into a kwarg would definitely require deprecation.

Vish

 
 FWIW, this is why OSC changed 'volume create' to make --size an option and 
 make the volume name be the positional argument.
 
 [0] The deprecation period for clients is ambiguous as the release cycle 
 isn't timed but we think of deprecations that way.  Using integrated release 
 cycles is handy but less than perfect to correlate to the client's semver 
 releases.
 [1] Bad pun alert...or is there such a thing as a bad pun???
 
 dt
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Is network ordering of vNICs guaranteed?

2014-08-08 Thread CARVER, PAUL
I'm hearing second-hand (friend of a friend) that people have looked at the code and 
determined that the order of networks on a VM is not guaranteed. Can anyone 
confirm whether this is true? If it is true, is there any reason why this is 
not considered a bug? I've never seen it happen myself.

To elaborate, I'm being told that if you create some VMs with several vNICs on 
each and you want them to be, for example:


1)  Management Network

2)  Production Network

3)  Storage Network

You can't count on all the VMs having eth0 connected to the management network, 
eth1 on the production network, eth2 on the storage network.

I'm being told that they will come up like that most of the time, but sometimes 
you will see, for example, a VM might wind up with eth0 connected to the 
production network, eth1 connected to the storage network, and eth2 connected 
to the management network (or some other permutation).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-08 Thread Stefano Maffulli
On 08/08/2014 02:37 AM, Thierry Carrez wrote:
 I agree with Eoghan here. The main goal of an agile/lean system is to
 maximize a development team productivity. The main goal of Open source
 project management is not to maximize productivity. It’s to maximize
 contributions. I wrote about that a few years ago here (shameless plug):

I'm not sold on this difference in goals. Ultimately it's all about
doing things better, but let's not digress into a philosophical argument
to dissect the meaning of the keywords agile/productivity/maximization,
etc.; let's focus instead on clearly identifying the problem.

 The problem today is that our backlog/inventory/waste has reached levels

Agreed, the problem is in the backlog of blueprints/specs to be
processed, in the code reviews that go with them, in the amount of
technical debt accumulated.

 where it starts hurting 

Agreed too, it hurts, there is pain. Can we quantify this pain? I know
the time to merge patches has been constantly increasing across all
projects. That's one pain point, but one data point is not enough IMO to
describe the system. What else can we measure and quantify to understand it?

 our goal of maximizing contributions,

Here I lost you and I think we should spell out our goals into
measurable objectives.

 by creating frustration on the developers side. So we need to explore ways
 to reduce it back to acceptable (or predictable) levels, taking into
 account our limited control over our workforce.

Limited, sure, but not nonexistent: I think we have ways to put
sticks/carrots in place so that our 'workforce' can improve our (to be
defined) goals.

 Personally I think we just need to get better at communicating the
 downstream expectations, so that if we create waste, it's clearly
 upstream fault rather than downstream. Currently it's the lack of
 communication that makes developers produce more / something else than
 what core reviewers want to see. Any tool that lets us communicate
 expectations better is welcome, and I think the runway approach is one
 such tool, simple enough to understand.

I've asked Dan and John for help in formalizing a proposal. I'll keep
you posted.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Kevin Benton
There is an enforcement component to the group policy that allows you to
use the current APIs and it's the reason that group policy is integrated
into the neutron project. If someone uses the current APIs, the group
policy plugin will make sure they don't violate any policy constraints
before passing the request into the regular core/service plugins.
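
Conceptually (this is only a sketch, not the actual GBP plugin code) the
enforcement is a validate-then-delegate wrapper around the existing plugin
calls:

    class PolicyEnforcingPlugin(object):
        # Sketch: check a request against group policy constraints
        # before handing it to the regular core/service plugin.
        def __init__(self, core_plugin, policy_engine):
            self._core = core_plugin
            self._policy = policy_engine

        def create_port(self, context, port):
            # May raise an error (e.g. a 403) if the request would
            # violate a policy constraint.
            self._policy.validate("create_port", context, port)
            return self._core.create_port(context, port)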


On Fri, Aug 8, 2014 at 11:02 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 It might be because of the wording used, but it seems to me that you're
 making it sound like the group policy effort could have been completely
 orthogonal to neutron as we know it now.

 What I understood is that the declarative abstraction offered by group
 policy could do without any existing neutron entity leveraging native
 drivers, but can actually be used also with existing neutron plugins
 through the mapping driver - which will provide a sort of backward
 compatibility. And still in that case I'm not sure one would be able to use
 traditional neutron API (or legacy as it has been called), since I
 don't know if the mapping driver is bidirectional.

 I know this probably stems from my ignorance on the subject - I had
 unfortunately very little time to catch-up with this effort in the past
 months.

 Salvatore


 On 8 August 2014 18:49, Ivar Lazzaro ivarlazz...@gmail.com wrote:

 Hi Jay,

 You can choose. The whole purpose of this is about flexibility, if you
 want to use GBP API 'only' with a specific driver you just can.
 Additionally, given the 'ML2 like' architecture, the reference mapping
 driver can ideally run alongside by filling the core Neutron constructs
 without ever 'disturbing' your own driver (I'm not entirely sure about this
 but it seems feasible).

 I hope this answers your question,
 Ivar.


 On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/08/2014 08:55 AM, Kevin Benton wrote:

 The existing constructs will not change.


 A followup question on the above...

 If GPB API is merged into Neutron, the next logical steps (from what I
 can tell) will be to add drivers that handle policy-based payloads/requests.

 Some of these drivers, AFAICT, will *not* be deconstructing these policy
 requests into the low-level port, network, and subnet
 creation/attachment/detachment commands, but instead will be calling out
 as-is to hardware that speaks the higher-level abstraction API [1], not the
 lower-level port/subnet/network APIs. The low-level APIs would essentially
 be consumed entirely within the policy-based driver, which would
 effectively mean that the only way a system would be able to orchestrate
 networking in systems using these drivers would be via the high-level
 policy API.

 Is that correct? Very sorry if I haven't explained clearly my
 question... this is a tough question to frame eloquently :(

 Thanks,
 -jay

 [1] http://www.cisco.com/c/en/us/solutions/data-center-
 virtualization/application-centric-infrastructure/index.html

  On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
 mailto:pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com
 mailto:wuhongn...@huawei.com] wrote:

  Does it make sense to move all advanced extension out of ML2, like
 security
  group, qos...? Then we can just talk about advanced service
 itself, without
  bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
 still
 think it's too late in the game to be shooting down all the work
 that the
 GBP team has put in unless there's a really clean and effective way
 of
 running AND iterating on GBP in conjunction with Neutron without
 being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't
 have
 to wait multiple more releases on a hypothetical refactoring of how
 L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone
 reassure me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?

 I'm under the impression that GBP is adding a higher level
 abstraction
 but that it's not ripping basic constructs like network/subnet/port
 out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be
 present
 and accessible to users that want/need to deal with them. I'm
 viewing
 GBP as just a higher level abstraction over the top.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [neutron] [third-party] Freescale CI log site is being blocked

2014-08-08 Thread Kevin Benton
Does your log server allow anonymous uploads that caused it to host malware
or something else that led to it being blocked?


On Fri, Aug 8, 2014 at 7:12 AM, Kyle Mestery mest...@mestery.com wrote:

 Trinath:

 In looking at your FWaaS review [1], I noticed the site you are using
 for log storage is being blacklisted again, at least by Cisco WSA
 appliances. Thus, I cannot see the logs for it. Did you change the
 location of your log storage again? Is anyone else seeing this issue?

 Thanks,
 Kyle


 [1] https://review.openstack.org/#/c/109659/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] introducing cyclops

2014-08-08 Thread Doug Hellmann

On Aug 8, 2014, at 3:34 AM, Piyush Harsh h...@zhaw.ch wrote:

 Dear Eoghan,
 
 Thanks for your comments. Although you are correct that rating, charging, and 
 billing policies are commercially sensitive to the operators, still if an 
 operator has an openstack installation, I do not see why the stack could not 
 offer a service that supports ways for the operator to input desired 
 policies, rules, etc to do charging and billing out of the box. These 
 policies could still only be accessible to the operator.

I think the point was more that most deployers we talked to at the beginning of 
the project already had tools that managed the rates and charging, but needed 
the usage data to feed into those tools. That was a while back, though, and as 
you say, it’s quite possible we have new users without similar tools in place, 
so I’m glad to see a couple of groups working on taking billing integration one 
step further. At the very least working with teams building open source 
consumers of ceilometer’s API will help us understand if there are any ways to 
make it easier to use.

Doug

 
 Furthermore, one could envision that using heat together with some django 
 magic, this could even be offered as a service for tenants of the operators 
 who could be distributors or resellers in his client ecosystem, allowing them 
 to set their own custom policies.
 
 I believe such stack based solution would be very much welcome by SMEs, new 
 entrants, etc.
 
 I am planning to attend the Kilo summit in Paris, and I would be very glad to 
 talk with you and others on this idea and on Cyclops :)
 
 Forking the codebase to stackforge is something which is definitely possible 
 and thanks a lot for suggesting it.
 
 Looking forward to more constructive discussions on this with you and others.
 
 Kind regards,
 Piyush.
 
 
 ___
 Dr. Piyush Harsh, Ph.D.
 Researcher, InIT Cloud Computing Lab
 Zurich University of Applied Sciences (ZHAW)
 [Site] http://piyush-harsh.info
 [Research Lab] http://www.cloudcomp.ch/
 Fax: +41(0)58.935.7403 GPG Keyid: 9C5A8838
 
 
 On Fri, Aug 8, 2014 at 12:01 AM, Eoghan Glynn egl...@redhat.com wrote:
 
 
 
  Dear All,
 
  Let me use my first post to this list to introduce Cyclops and initiate a
  discussion towards possibility of this platform as a future incubated
  project in OpenStack.
 
  We at Zurich university of Applied Sciences have a python project in open
  source (Apache 2 Licensing) that aims to provide a platform to do
  rating-charging-billing over ceilometer. We call it Cyclops (A Charging
  platform for OPenStack CLouds).
 
  The initial proof of concept code can be accessed here:
  https://github.com/icclab/cyclops-web 
  https://github.com/icclab/cyclops-tmanager
 
  Disclaimer: This is not the best code out there, but will be refined and
  documented properly very soon!
 
  A demo video from really early days of the project is here:
  https://www.youtube.com/watch?v=ZIwwVxqCio0 and since this video was made,
  several bug fixes and features were added.
 
  The idea presentation was done at Swiss Open Cloud Day at Bern and the talk
  slides can be accessed here:
  http://piyush-harsh.info/content/ocd-bern2014.pdf , and more recently the
  research paper on the idea was published in 2014 World Congress in Computer
  Science (Las Vegas), which can be accessed here:
  http://piyush-harsh.info/content/GCA2014-rcb.pdf
 
  I was wondering, if our effort is something that OpenStack
  Ceilometer/Telemetry release team would be interested in?
 
  I do understand that initially rating-charging-billing service may have been
  left out by choice as they would need to be tightly coupled with existing
  CRM/Billing systems, but Cyclops design (intended) is distributed, service
  oriented architecture with each component allowing for possible integration
  with external software via REST APIs. And therefore Cyclops by design is
  CRM/Billing platform agnostic. Although Cyclops PoC implementation does
  include a basic bill generation module.
 
  We in our team are committed to this development effort and we will have
  resources (interns, students, researchers) work on features and improve the
  code-base for a foreseeable number of years to come.
 
  Do you see a chance if our efforts could make in as an incubated project in
  OpenStack within Ceilometer?
 
 Hi Piyush,
 
 Thanks for bringing this up!
 
 I should preface my remarks by setting out a little OpenStack
 history, in terms of the original decision not to include the
 rating and billing stages of the pipeline under the ambit of
 the ceilometer project.
 
 IIUC, the logic was that such rating/billing policies were very
 likely to be:
 
   (a) commercially sensitive for competing cloud operators
 
 and:
 
   (b) already built-out via existing custom/proprietary systems
 
 The folks who were directly involved at the outset of ceilometer
 can correct me if I've misrepresented the thinking that pertained
 

Re: [openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread Pendergrass, Eric
 From: David Stanek [mailto:dsta...@dstanek.com]
 Sent: Friday, August 08, 2014 7:25 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ceilometer] Question on decorators in
 Ceilometer pecan framework

 It looks like maybe WSME or Pecan is inspecting the method signature. Have you
 tried to change the order of the decorators?

Good suggestion and we did try it.  Unfortunately the wsme decorator must come
first, else the endpoint isn't found and the client gets a 404.
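
i.e. roughly this shape (an illustrative sketch, not our actual controller;
the custom decorator here is just a stand-in):

    import functools

    from wsme import types as wtypes
    import wsmeext.pecan as wsme_pecan

    def our_custom_decorator(func):
        # Stand-in for whatever decorator we combine with wsexpose.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

    class SamplesController(object):

        # wsexpose has to be the outermost decorator; with the order
        # reversed the endpoint is not found and the client gets a 404.
        @wsme_pecan.wsexpose([wtypes.text], wtypes.text)
        @our_custom_decorator
        def get_all(self, q=None):
            return []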

Eric


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Maru Newby
On Aug 8, 2014, at 10:56 AM, Kevin Benton blak...@gmail.com wrote:

 There is an enforcement component to the group policy that allows you to use 
 the current APIs and it's the reason that group policy is integrated into the 
 neutron project. If someone uses the current APIs, the group policy plugin 
 will make sure they don't violate any policy constraints before passing the 
 request into the regular core/service plugins.


The enforcement requirement might be easier to implement through code-based 
integration, but a separate service could provide the same guarantee against 
constraint violation by proxying v2 API calls for an endpoint to which access 
was restricted.
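
Roughly along these lines, as a sketch only (the policy_engine interface 
below is invented for illustration):

    import json

    import webob.dec
    import webob.exc

    class PolicyProxyMiddleware(object):
        # Sketch of a standalone proxy: validate v2 API requests against
        # group policy constraints before forwarding them to Neutron proper.
        def __init__(self, app, policy_engine):
            self.app = app
            self.policy = policy_engine

        @webob.dec.wsgify
        def __call__(self, req):
            body = json.loads(req.body) if req.body else {}
            if not self.policy.allows(req.method, req.path, body):
                raise webob.exc.HTTPForbidden(
                    "request violates group policy constraints")
            return req.get_response(self.app)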

Apologies if I've missed discussion of this (it's a big thread), but won't 
enforcement by group policy on the v2 API have the potential to violate 
stability requirements?  If new errors related to enforcement can result from 
API calls, that would seem to be a change in behavior.  Do we have a precedent 
for allowing extensions or new services to modify core behavior in this way?


m. 

 
 
 On Fri, Aug 8, 2014 at 11:02 AM, Salvatore Orlando sorla...@nicira.com 
 wrote:
 It might be because of the wording used, but it seems to me that you're 
 making it sound like the group policy effort could have been completely 
 orthogonal to neutron as we know it now.
 
 What I understood is that the declarative abstraction offered by group policy 
 could do without any existing neutron entity leveraging native drivers, but 
 can actually be used also with existing neutron plugins through the mapping 
 driver - which will provide a sort of backward compatibility. And still in 
 that case I'm not sure one would be able to use traditional neutron API (or 
 legacy as it has been called), since I don't know if the mapping driver is 
 bidirectional.
 
 I know this probably stems from my ignorance on the subject - I had 
 unfortunately very little time to catch-up with this effort in the past 
 months.
 
 Salvatore
 
 
 On 8 August 2014 18:49, Ivar Lazzaro ivarlazz...@gmail.com wrote:
 Hi Jay,
 
 You can choose. The whole purpose of this is about flexibility, if you want 
 to use GBP API 'only' with a specific driver you just can.
 Additionally, given the 'ML2 like' architecture, the reference mapping driver 
 can ideally run alongside by filling the core Neutron constructs without ever 
 'disturbing' your own driver (I'm not entirely sure about this but it seems 
 feasible).
 
 I hope this answers your question,
 Ivar.
 
 
 On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/08/2014 08:55 AM, Kevin Benton wrote:
 The existing constructs will not change.
 
 A followup question on the above...
 
 If GPB API is merged into Neutron, the next logical steps (from what I can 
 tell) will be to add drivers that handle policy-based payloads/requests.
 
 Some of these drivers, AFAICT, will *not* be deconstructing these policy 
 requests into the low-level port, network, and subnet 
 creation/attachment/detachment commands, but instead will be calling out 
 as-is to hardware that speaks the higher-level abstraction API [1], not the 
 lower-level port/subnet/network APIs. The low-level APIs would essentially be 
 consumed entirely within the policy-based driver, which would effectively 
 mean that the only way a system would be able to orchestrate networking in 
 systems using these drivers would be via the high-level policy API.
 
 Is that correct? Very sorry if I haven't explained clearly my question... 
 this is a tough question to frame eloquently :(
 
 Thanks,
 -jay
 
 [1] 
 http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
 
 On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
 mailto:pc2...@att.com wrote:
 
 Wuhongning [mailto:wuhongn...@huawei.com
 mailto:wuhongn...@huawei.com] wrote:
 
  Does it make sense to move all advanced extension out of ML2, like
 security
  group, qos...? Then we can just talk about advanced service
 itself, without
  bothering basic neutron object (network/subnet/port)
 
 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
 still
 think it's too late in the game to be shooting down all the work
 that the
 GBP team has put in unless there's a really clean and effective way of
 running AND iterating on GBP in conjunction with Neutron without being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how
 L3+ vs
 L2 is structured.
 
 But, just so I'm not making a horrible mistake, can someone reassure me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?
 
 I'm under the impression that GBP is adding a higher level abstraction
 but that it's not ripping basic 

Re: [openstack-dev] [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Kevin Benton
The only issue with the separate service proxying API calls is that it
can't receive requests between the service and core plugins.

What kind of stability requirements were you concerned about? A response
change would be similar to having a custom policy.json file where things
that violate constraints would result in a 403.
On Aug 8, 2014 1:04 PM, Maru Newby ma...@redhat.com wrote:

 On Aug 8, 2014, at 10:56 AM, Kevin Benton blak...@gmail.com wrote:

  There is an enforcement component to the group policy that allows you to
 use the current APIs and it's the reason that group policy is integrated
 into the neutron project. If someone uses the current APIs, the group
 policy plugin will make sure they don't violate any policy constraints
 before passing the request into the regular core/service plugins.


 The enforcement requirement might be easier to implement through
 code-based integration, but a separate service could provide the same
 guarantee against constraint violation by proxying v2 API calls for an
 endpoint to which access was restricted.

 Apologies if I've missed discussion of this (it's a big thread), but won't
 enforcement by group policy on the v2 API have the potential to violate
 stability requirements?  If new errors related to enforcement can result
 from API calls, that would seem to be a change in behavior.  Do we have a
 precedent for allowing extensions or new services to modify core behavior
 in this way?


 m.

 
 
  On Fri, Aug 8, 2014 at 11:02 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  It might be because of the wording used, but it seems to me that you're
 making it sound like the group policy effort could have been completely
 orthogonal to neutron as we know it now.
 
  What I understood is that the declarative abstraction offered by group
 policy could do without any existing neutron entity leveraging native
 drivers, but can actually be used also with existing neutron plugins
 through the mapping driver - which will provide a sort of backward
 compatibility. And still in that case I'm not sure one would be able to use
 traditional neutron API (or legacy as it has been called), since I
 don't know if the mapping driver is bidirectional.
 
  I know this probably stems from my ignorance on the subject - I had
 unfortunately very little time to catch-up with this effort in the past
 months.
 
  Salvatore
 
 
  On 8 August 2014 18:49, Ivar Lazzaro ivarlazz...@gmail.com wrote:
  Hi Jay,
 
  You can choose. The whole purpose of this is about flexibility, if you
 want to use GBP API 'only' with a specific driver you just can.
  Additionally, given the 'ML2 like' architecture, the reference mapping
 driver can ideally run alongside by filling the core Neutron constructs
 without ever 'disturbing' your own driver (I'm not entirely sure about this
 but it seems feasible).
 
  I hope this answers your question,
  Ivar.
 
 
  On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:
  On 08/08/2014 08:55 AM, Kevin Benton wrote:
  The existing constructs will not change.
 
  A followup question on the above...
 
  If the GBP API is merged into Neutron, the next logical steps (from what I
 can tell) will be to add drivers that handle policy-based payloads/requests.
 
  Some of these drivers, AFAICT, will *not* be deconstructing these policy
 requests into the low-level port, network, and subnet
 creation/attachment/detachment commands, but instead will be calling out
 as-is to hardware that speaks the higher-level abstraction API [1], not the
 lower-level port/subnet/network APIs. The low-level APIs would essentially
 be consumed entirely within the policy-based driver, which would
 effectively mean that the only way a system would be able to orchestrate
 networking in systems using these drivers would be via the high-level
 policy API.
 
  Is that correct? Very sorry if I haven't explained clearly my
 question... this is a tough question to frame eloquently :(
 
  Thanks,
  -jay
 
  [1]
 http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html
 
  On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
  mailto:pc2...@att.com wrote:
 
  Wuhongning [mailto:wuhongn...@huawei.com
  mailto:wuhongn...@huawei.com] wrote:
 
   Does it make sense to move all advanced extension out of ML2, like
  security
   group, qos...? Then we can just talk about advanced service
  itself, without
   bothering basic neutron object (network/subnet/port)
 
  A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
  still
  think it's too late in the game to be shooting down all the work
  that the
  GBP team has put in unless there's a really clean and effective way
 of
  running AND iterating on GBP in conjunction with Neutron without
 being
  part of the Juno release. As far as I can tell they've worked really
  hard to follow the process and accommodate input. They 

Re: [openstack-dev] What's happening with stable release management?

2014-08-08 Thread Thierry Carrez
Thomas Goirand wrote:
 Hi,
 
 For updating keystone from 2014.1.1 to 2014.1.2, I had to:
 
 - Upgrade oslo-config from 1.2.1 to 1.4.0.0~a3
 - Upgrade oslo.messaging from 1.3.0~a9 to 1.4.0.0~a3
 - Upgrade python-six from 1.6 to 1.7
 - Upgrade python-pycadf from 0.4 to 0.5.1
 - Add python-ldappool
 - Add python-oslo.db
 - Add python-oslo.i18n
 - Add python-keystonemiddleware, which needs python-crypto >= 2.6
 (previously, 2.5 was enough)
 
 So, we have 5 major Python module upgrades, and 4 completely new
 libraries which were not there in 2014.1.1. Some of the changes aren't
 small at all.
 
 I'm sure that there's very valid reasons for each of the upgrades or
 library addition, but I don't think that it is overall reasonable. If
 this was to happen during the freeze of Debian, or worse, after a
 release, upgrading all of this would be a nightmare, and I'm sure that
 the Debian release team would simply refuse.
 
 Should I assign myself to program a robot which will vote -1 on every
 change to the stable/Icehouse global-requirements.txt file? Or is sanity
 still possible in OpenStack? :)
 
 It is my opinion that we need to review our release process for stable
 releases and our policy for requirement changes, and adopt a far more
 conservative attitude.

No, actually this is because the 2014.1.2 tarball is still completely
wrong. The tag is now OK, but due to some stale workspaces in our CI the
tarball was still generated from the wrong (Juno) tag.

I'll upload a new tarball ASAP. I took down the wrong one. Sorry for the
inconvenience... the issues here are not a policy problem, they are just
human error in the original tag, complicated by CI staleness that made
us think we fixed it while we didn't.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Sumit Naiksatam
Hi Jay, To extend Ivar's response here, the core resources and core
plugin configuration does not change with the addition of these
extensions. The mechanism to implement the GBP extensions is via a
service plugin. So even in a deployment where a GBP service plugin is
deployed with a driver which interfaces with a backend that perhaps
directly understands some of the GBP constructs, that system would
still need to have a core plugin configured that honors Neutron's core
resources. Hence my earlier comment that GBP extensions are
complementary to the existing core resources (in much the same way as
the existing extensions in Neutron).

Thanks,
~Sumit.

On Fri, Aug 8, 2014 at 9:49 AM, Ivar Lazzaro ivarlazz...@gmail.com wrote:
 Hi Jay,

 You can choose. The whole purpose of this is about flexibility, if you want
 to use GBP API 'only' with a specific driver you just can.
 Additionally, given the 'ML2 like' architecture, the reference mapping
 driver can ideally run alongside by filling the core Neutron constructs
 without ever 'disturbing' your own driver (I'm not entirely sure about this
 but it seems feasible).

 I hope this answers your question,
 Ivar.


 On Fri, Aug 8, 2014 at 6:28 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/08/2014 08:55 AM, Kevin Benton wrote:

 The existing constructs will not change.


 A followup question on the above...

 If the GBP API is merged into Neutron, the next logical steps (from what I can
 tell) will be to add drivers that handle policy-based payloads/requests.

 Some of these drivers, AFAICT, will *not* be deconstructing these policy
 requests into the low-level port, network, and subnet
 creation/attachment/detachment commands, but instead will be calling out
 as-is to hardware that speaks the higher-level abstraction API [1], not the
 lower-level port/subnet/network APIs. The low-level APIs would essentially
 be consumed entirely within the policy-based driver, which would effectively
 mean that the only way a system would be able to orchestrate networking in
 systems using these drivers would be via the high-level policy API.

 Is that correct? Very sorry if I haven't explained clearly my question...
 this is a tough question to frame eloquently :(

 Thanks,
 -jay

 [1]
 http://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html

 On Aug 8, 2014 9:49 AM, CARVER, PAUL pc2...@att.com
 mailto:pc2...@att.com wrote:

 Wuhongning [mailto:wuhongn...@huawei.com
 mailto:wuhongn...@huawei.com] wrote:

  Does it make sense to move all advanced extension out of ML2, like
 security
  group, qos...? Then we can just talk about advanced service
 itself, without
  bothering basic neutron object (network/subnet/port)

 A modular layer 3 (ML3) analogous to ML2 sounds like a good idea. I
 still
 think it's too late in the game to be shooting down all the work
 that the
 GBP team has put in unless there's a really clean and effective way
 of
 running AND iterating on GBP in conjunction with Neutron without
 being
 part of the Juno release. As far as I can tell they've worked really
 hard to follow the process and accommodate input. They shouldn't have
 to wait multiple more releases on a hypothetical refactoring of how
 L3+ vs
 L2 is structured.

 But, just so I'm not making a horrible mistake, can someone reassure
 me
 that GBP isn't removing the constructs of network/subnet/port from
 Neutron?

 I'm under the impression that GBP is adding a higher level
 abstraction
 but that it's not ripping basic constructs like network/subnet/port
 out
 of the existing API. If I'm wrong about that I'll have to change my
 opinion. We need those fundamental networking constructs to be
 present
 and accessible to users that want/need to deal with them. I'm viewing
 GBP as just a higher level abstraction over the top.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Jay Pipes

On 08/08/2014 12:29 PM, Sumit Naiksatam wrote:

Hi Jay, To extend Ivar's response here, the core resources and core
plugin configuration does not change with the addition of these
extensions. The mechanism to implement the GBP extensions is via a
service plugin. So even in a deployment where a GBP service plugin is
deployed with a driver which interfaces with a backend that perhaps
directly understands some of the GBP constructs, that system would
still need to have a core plugin configured that honors Neutron's core
resources. Hence my earlier comment that GBP extensions are
complementary to the existing core resources (in much the same way as
the existing extensions in Neutron).


OK, thanks Sumit. That clearly explains things for me.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Freescale CI log site is being blocked

2014-08-08 Thread Sumit Naiksatam
Actually I am able to access the logs in this CI over the internet and
through my service provider. I have copy-pasted the log from the
latest freescale run here (to validate if this is indeed the latest
run):
http://paste.openstack.org/show/92229/

But good point Kevin, when I was trying to post this on paste, it did
complain about the log text appearing like spam.

On Fri, Aug 8, 2014 at 10:58 AM, Kevin Benton blak...@gmail.com wrote:
 Does your log server allow anonymous uploads that caused it to host malware
 or something that led to it being blocked?


 On Fri, Aug 8, 2014 at 7:12 AM, Kyle Mestery mest...@mestery.com wrote:

 Trinath:

 In looking at your FWaaS review [1], I noticed the site you are using
 for log storage is being blacklisted again, at least by Cisco WSA
 appliances. Thus, I cannot see the logs for it. Did you change the
 location of your log storage again? Is anyone else seeing this issue?

 Thanks,
 Kyle


 [1] https://review.openstack.org/#/c/109659/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-08 Thread Doug Hellmann

On Aug 8, 2014, at 1:30 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 Hi Alistair,
 
 Modules can register their own options and there is no need to call 
 reload_config_files. The config files are parsed and values stored in case 
 the option is later declared. The only time you need to reload files is if 
 you add new config files in the new module. See the example code:
 
 from oslo.config import cfg
 with open("foo", "w") as f:
     f.write("[DEFAULT]\nfoo=bar")
 
 cfg.CONF(["--config-file", "foo"])
 try:
     print cfg.CONF.foo
 except cfg.NoSuchOptError:
     print "NO OPT"
 # OUT: 'NO OPT'
 
 cfg.CONF.register_opt(cfg.StrOpt("foo"))
 print cfg.CONF.foo
 # OUT: 'bar'
 
 One thing to keep in mind is you don’t want to use config values at import 
 time, since this tends to be before the config files have been loaded.
 
 Vish


That’s right. The preferred approach is to put the register_opt() in *runtime* 
code somewhere before the option will be used. That might be in the constructor 
for a class that uses an option, for example, as described in 
http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options
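
For instance, a minimal sketch of that pattern (the option, group, and class
names here are made up purely for illustration, not taken from any project):

from oslo.config import cfg

OPTS = [
    cfg.StrOpt('upload_prefix',
               default='uploads',
               help='Illustrative option registered at runtime.'),
]


class Uploader(object):
    def __init__(self, conf=cfg.CONF):
        # Registering here, rather than at import time, means the config
        # files have already been parsed by the time the value is read.
        # Re-registering the same option later is a no-op.
        conf.register_opts(OPTS, group='uploader')
        self.prefix = conf.uploader.upload_prefix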

Doug

 
 On Aug 8, 2014, at 6:40 AM, Coles, Alistair alistair.co...@hp.com wrote:
 
 I’ve been looking at the implications of applying oslo.config in Swift, and 
 I have a question about the best pattern for registering options.
  
 Looking at how keystone uses oslo.config, the pattern seems to be to have 
 all options declared and registered 'up-front' in a single place 
 (keystone/common/config.py) before loading wsgi pipeline/starting the 
 service. Is there another usage pattern where each middleware registers its 
 options independently ‘on-demand’ rather than maintaining them all in a 
 single place?
  
 I read about a pattern [1] whereby modules register opts during import, but 
 does that require there to be some point in the lifecycle where all required 
 modules are imported *before* parsing config files? Seems like that would 
 mean parsing the wsgi pipeline to ‘discover’ the middleware modules being 
 used, importing all those modules, then parsing config files, then loading 
 the wsgi pipeline?
  
 OR - is it acceptable for each middleware module to register its own options 
 if/when it is imported during wsgi pipeline loading 
 (CONF.register_options()) and then call CONF.reload_config_files() ?
  
 Thanks,
 Alistair
  
 [1] 
 http://docs.openstack.org/developer/oslo.config/cfg.html#global-configopts
  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Sumit Naiksatam
Thanks Jay for your constructive feedback on this. I personally think
that 'policy-target' is a good option. I am not sure what the rest of
the team thinks, perhaps they can chime in.

On Fri, Aug 8, 2014 at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/07/2014 01:17 PM, Ronak Shah wrote:

 Hi,
 Following a very interesting and vocal thread on GBP for last couple of
 days and the GBP meeting today, GBP sub-team proposes following name
 changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.


 Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the
 meeting (was in other meetings yesterday and today).

  I'm very happy with the change from endpoint-group -> policy-group.

 policy-point is better than endpoint, for sure. The only other suggestion I
 might have would be to use policy-target instead of policy-point, since
 the former clearly delineates what the object is used for (a target for a
 policy).

 But... I won't raise a stink about this. Sorry for sparking long and
 tangential discussions on GBP topics earlier this week. And thanks to the
 folks who persevered and didn't take too much offense to my questioning.

 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.
On 8 August 2014 10:56, Kevin Benton blak...@gmail.com wrote:

 There is an enforcement component to the group policy that allows you to
 use the current APIs and it's the reason that group policy is integrated
 into the neutron project. If someone uses the current APIs, the group
 policy plugin will make sure they don't violate any policy constraints
 before passing the request into the regular core/service plugins.


This is the statement that makes me trip over, and I don't understand why
GBP and Neutron Core need to be 'integrated' together as they have. Policy
decision points can be decentralized from the system under scrutiny, we
don't need to have one giant monolithic system that does everything; it's
an architectural decision that would make it difficult to achieve
composability and all the other good -ilities of software systems.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Ryan Moats
+1

Sumit Naiksatam sumitnaiksa...@gmail.com wrote on 08/08/2014 02:44:55 PM:

 From: Sumit Naiksatam sumitnaiksa...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 08/08/2014 02:45 PM
 Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy -
Renaming

 Thanks Jay for your constructive feedback on this. I personally think
 that 'policy-target' is a good option. I am not sure what the rest of
 the team thinks, perhaps they can chime in.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Stephen Wong
policy target sounds good. +1



On Fri, Aug 8, 2014 at 12:44 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
wrote:

 Thanks Jay for your constructive feedback on this. I personally think
 that 'policy-target' is a good option. I am not sure what the rest of
 the team thinks, perhaps they can chime in.

 On Fri, Aug 8, 2014 at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
  On 08/07/2014 01:17 PM, Ronak Shah wrote:
 
  Hi,
  Following a very interesting and vocal thread on GBP for last couple of
  days and the GBP meeting today, GBP sub-team proposes following name
  changes to the resource.
 
 
  policy-point for endpoint
  policy-group for endpointgroup (epg)
 
  Please reply if you feel that it is not ok with reason and suggestion.
 
 
  Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the
  meeting (was in other meetings yesterday and today).
 
   I'm very happy with the change from endpoint-group -> policy-group.
 
  policy-point is better than endpoint, for sure. The only other
 suggestion I
  might have would be to use policy-target instead of policy-point,
 since
  the former clearly delineates what the object is used for (a target for a
  policy).
 
  But... I won't raise a stink about this. Sorry for sparking long and
  tangential discussions on GBP topics earlier this week. And thanks to the
  folks who persevered and didn't take too much offense to my questioning.
 
  Best,
  -jay
 
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Question on decorators in Ceilometer pecan framework

2014-08-08 Thread Doug Hellmann

On Aug 8, 2014, at 8:49 AM, Pendergrass, Eric eric.pendergr...@hp.com wrote:

 Hi,
  
 We have been struggling to get a decorator working for proposed new RBAC 
 functionality in ceilometer-api.  We’re hitting a problem where GET request 
 query parameters are mucked up by our decorator.  Here’s an example call:
  
 curl -H X-Auth-Token:$TOKEN 
 'http://localhost:8777/v2/meters?q.field=project_id&q.value=8c678720fb5b4e3bb18dee222d7d7933'
  
 And here’s the decorator method (we’ve tried changing the kwargs, args, etc. 
 with no luck):
  
 _ENFORCER = None
  
 def protected(controller_class):
  
 global _ENFORCER
 if not _ENFORCER:
 _ENFORCER = policy.Enforcer()
  
 def wrapper(f):
 @functools.wraps(f)
 def inner(self, **kwargs):
 pdb.set_trace()
 self._rbac_context = {}

You need to be careful saving request state on the controller. The controller 
may be shared by multiple requests (I see below that you’re creating a 
MeterController for each incoming request, but that won’t be the case for all 
controller types). It’s better to store the value in the pecan.request, which 
is a thread-safe request-specific object.

If you just need to store some values, and not test ACLs in the decorator, 
you could use a Pecan hook [1] like we do with the configuration settings [2].
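
As a rough sketch of the hook approach (the hook class and the rbac_context
attribute are made up for illustration; only pecan's PecanHook interface and
the per-request object are real):

from pecan import hooks


class RBACContextHook(hooks.PecanHook):
    # Illustrative hook: stash per-request RBAC data on the request object
    # rather than on the controller, since the request object is not shared
    # between concurrent requests.
    def before(self, state):
        headers = state.request.headers
        state.request.rbac_context = {
            'project_id': headers.get('X-Project-Id'),
            'user_id': headers.get('X-User-Id'),
        }

# Registered when building the app, e.g. pecan.make_app(root, hooks=[RBACContextHook()]).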

Doug

1 - http://pecan.readthedocs.org/en/latest/hooks.html
2 - 
http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/api/hooks.py#n27

 if not _ENFORCER.enforce('context_is_admin',
  {},
  {'roles': 
 pecan.request.headers.get('X-Roles', "").split(",")}):
 self._rbac_context['project_id'] = 
 pecan.request.headers.get('X-Project-Id')
 self._rbac_context['user_id'] = 
 pecan.request.headers.get('X-User-Id')
 return f(self, **kwargs)
 return inner
 return wrapper
  
 tried this too:
  
 _ENFORCER = None
  
 def protected(*args):
  
 controller_class = 'meter'
 global _ENFORCER
 if not _ENFORCER:
 _ENFORCER = policy.Enforcer()
  
 def wrapper(f, *args):
 def inner(self, *args):
 pdb.set_trace()
 #self._rbac_context = {}
 #if not _ENFORCER.enforce('context_is_admin',
 # {},
 # {'roles': 
 pecan.request.headers.get('X-Roles', "").split(",")}):
 #self._rbac_context['project_id'] = 
 pecan.request.headers.get('X-Project-Id')
 #self._rbac_context['user_id'] = 
 pecan.request.headers.get('X-User-Id')
 #return f(*args)
 f(self, *args)
 return inner
 return wrapper
  
 and here’s how it’s used:
  
 class MetersController(rest.RestController):
  """Works on meters."""
  
 _rbac_context = {}
 @pecan.expose()
 def _lookup(self, meter_name, *remainder):
 return MeterController(meter_name), remainder
  
 @wsme_pecan.wsexpose([Meter], [Query])
 @rbac_validate.protected('meters')
 def get_all(self, q=None):
  """Return all known meters, based on the data recorded so far.
 
  :param q: Filter rules for the meters to be returned.
  """
  q = q or [] …
  
  
 but we get errors similar to below where the arg parser cannot find the query 
 parameter because the decorator doesn’t take a q argument as 
 MetersController.get_all does. 
  
 Is there any way to get a decorator to work within the v2 API code and wsme 
 framework or should we consider another approach?  Decorators would really 
 simplify the RBAC idea we’re working on, which is mostly code-implemented 
 save for this fairly major problem.
  
 I have a WIP registered BP on this at 
 https://blueprints.launchpad.net/ceilometer/+spec/ready-ceilometer-rbac-keystone-v3.
  
 If I can provide more details I’ll be happy to.
  
 Thanks
 Eric
  
   /usr/local/bin/ceilometer-api(10)<module>()
 -> sys.exit(api())
   /opt/stack/ceilometer/ceilometer/cli.py(96)api()
 -> srv.serve_forever()
   /usr/lib/python2.7/SocketServer.py(227)serve_forever()
 -> self._handle_request_noblock()
   /usr/lib/python2.7/SocketServer.py(284)_handle_request_noblock()
 -> self.process_request(request, client_address)
   /usr/lib/python2.7/SocketServer.py(310)process_request()
 -> self.finish_request(request, client_address)
   /usr/lib/python2.7/SocketServer.py(323)finish_request()
 -> self.RequestHandlerClass(request, client_address, self)
   /usr/lib/python2.7/SocketServer.py(638)__init__()
 -> self.handle()
   /usr/lib/python2.7/wsgiref/simple_server.py(124)handle()
 -> handler.run(self.server.get_app())
   /usr/lib/python2.7/wsgiref/handlers.py(85)run()
 -> self.result = application(self.environ, self.start_response)
   /opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py(663)__call__()
 -> return self.app(env, start_response)
   

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Sumit Naiksatam
On Fri, Aug 8, 2014 at 12:45 PM, Armando M. arma...@gmail.com wrote:
 On 8 August 2014 10:56, Kevin Benton blak...@gmail.com wrote:

 There is an enforcement component to the group policy that allows you to
 use the current APIs and it's the reason that group policy is integrated
 into the neutron project. If someone uses the current APIs, the group policy
 plugin will make sure they don't violate any policy constraints before
 passing the request into the regular core/service plugins.


 This is the statement that makes me trip over, and I don't understand why
 GBP and Neutron Core need to be 'integrated' together as they have. Policy
 decision points can be decentralized from the system under scrutiny, we
 don't need to have one giant monolithic system that does everything; it's an
 architectural decision that would make it difficult to achieve composability
 and all the other good -ilities of software systems.


Adding the GBP extension to Neutron does not change the nature of
Neutron's software architecture, making it neither more nor less
monolithic. It fills a gap that is currently present in the Neutron API,
namely complementing the current imperative abstractions with an
app-developer/deployer-friendly declarative abstraction [1]. To
reiterate, it has been proposed as an “extension”, and not a
replacement of the core abstractions or the way those are consumed. If
this is understood and interpreted correctly, I doubt that there
should be reason for concern.

[1] https://review.openstack.org/#/c/89469

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Image upload/download bandwidth cap

2014-08-08 Thread Jay Pipes

On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:

Hi all,

I'm considering how I can apply image download/upload bandwidth limit for
glance for network QoS.

There was a review for the bandwidth limit, however it is abandoned.

* Download rate limiting
   https://review.openstack.org/#/c/21380/

Was there any discussion in the past summit about this not to merge this?
Or, is there alternative way to cap the bandwidth consumed by Glance?

I appreciate any information about this.


Hi Tomoki :)

Would it be possible to integrate traffic control into the network 
configuration between the Glance endpoints and the nova-compute nodes 
over the control plane network?


http://www.lartc.org/lartc.html#LARTC.RATELIMIT.SINGLE
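
A minimal sketch of what that could look like on the node serving glance-api
(the interface name and rates are assumptions; the tc invocation is wrapped in
Python's subprocess only so it can be dropped into a deploy script):

# Illustrative only: shape outbound image traffic on the control-plane NIC
# with a token bucket filter.
import subprocess

subprocess.check_call([
    'tc', 'qdisc', 'add', 'dev', 'eth1', 'root',
    'tbf', 'rate', '100mbit', 'burst', '300kb', 'latency', '50ms',
])

Something along those lines keeps the rate limiting in the network layer
rather than inside Glance itself.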

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-08 Thread Robert Kukura
[Note - I understand there are ongoing discussion that may lead to a 
proposal for an out-of-tree incubation process for new Neutron features. 
This is a complementary proposal that describes how our existing 
development process can be used to stabilize new features in-tree over 
the time frame of a release cycle or two. We should fully consider both 
proposals, and where each might apply. I hope something like the 
approach I propose here will allow the implementations of Neutron BPs 
with non-trivial APIs that have been targeted for the Juno release to be 
included in that release, used by early adopters, and stabilized as 
quickly as possible for general consumption.]


According to our existing development process, once a blueprint and 
associated specification for a new Neutron feature have been reviewed, 
approved, and targeted to a release, development proceeds, resulting in 
a series of patches to be reviewed and merged to the Neutron source 
tree. This source tree is then the basis for milestone releases and the 
final release for the cycle.


Ideally, this development process would conclude successfully, well in 
advance of the cycle's final release, and the resulting feature and its 
API would be considered fully stable in that release. Stable features 
are ready for widespread general deployment. Going forward, any further 
modifications to a stable API must be backwards-compatible with 
previously released versions. Upgrades must not lose any persistent 
state associated with stable features. Upgrade processes and their 
impact on deployments (downtime, etc.) should be consistent for all 
stable features.


In reality, we developers are not perfect, and minor (or more 
significant) changes may be identified as necessary or highly desirable 
once early adopters of the new feature have had a chance to use it. 
These changes may be difficult or impossible to do in a way that honors 
the guarantees associated with stable features.


For new features that affect the core Neutron API and therefore impact 
all Neutron deployments, the stability requirement is strict. But for 
features that do not affect the core API, such as services whose 
deployment is optional, the stability requirement can be relaxed 
initially, allowing time for feedback from early adopters to be 
incorporated before declaring these APIs stable. The key in doing this 
is to manage the expectations of developers, packagers, operators, and 
end users regarding these new optional features while they stabilize.


I therefore propose that we manage these expectations, while new 
optional features in the source tree stabilize, by clearly labeling 
these features with the term preview until they are declared stable, 
and sufficiently isolating them so that they are not confused with 
stable features. The proposed guidelines would apply during development 
as the patches implementing the feature are first merged, in the initial 
release containing the feature, and in any subsequent releases that are 
necessary to fully stabilize the feature.


Here are my initial not-fully-baked ideas for how our current process 
can be adapted with a preview feature concept supporting in-tree 
stabilization of optional features:


* Preview features are implementations of blueprints that have been 
reviewed, approved, and targeted for a Neutron release. The process is 
intended for features for which there is a commitment to add the feature 
to Neutron, not for experimentation where failing fast is an 
acceptable outcome.


* Preview features must be optional to deploy, such as by configuring a 
service plugin or some set of drivers. Blueprint implementations whose 
deployment is not optional are not eligible to be treated as preview 
features.


* Patches implementing a preview feature are merged to the the master 
branch of the Neutron source tree. This makes them immediately available 
to all direct consumers of the source tree, such as developers, 
trunk-chasing operators, packagers, and evaluators or end-users that use 
DevStack, maximizing the opportunity to get the feedback that is 
essential to quickly stabilize the feature.


* The process for reviewing, approving and merging patches implementing 
preview features is exactly the same as for all other Neutron patches. 
The patches must meet HACKING standards, be production-quality code, 
have adequate test coverage, have DB migration scripts, etc., and 
require two +2s and a +A from Neutron core developers to merge.


* DB migrations for preview features are treated similarly to other DB 
migrations, forming a single ordered list that results in the current 
overall DB schema, including the schema for the preview feature. But DB 
migrations for a preview feature are not yet required to preserve 
existing persistent state in a deployment, as would be required for a 
stable feature.


* All code that is part of a preview feature is located under 
neutron/preview/feature/. Associated unit 

Re: [openstack-dev] [Glance] Image upload/download bandwidth cap

2014-08-08 Thread Russell Bryant
On 08/08/2014 04:17 PM, Jay Pipes wrote:
 On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:
 Hi all,

 I'm considering how I can apply image download/upload bandwidth limit for
 glance for network QoS.

 There was a review for the bandwidth limit, however it is abandoned.

 * Download rate limiting
https://review.openstack.org/#/c/21380/

 Was there any discussion in the past summit about this not to merge this?
 Or, is there alternative way to cap the bandwidth consumed by Glance?

 I appreciate any information about this.
 
 Hi Tomoki :)
 
 Would it be possible to integrate traffic control into the network
 configuration between the Glance endpoints and the nova-compute nodes
 over the control plane network?
 
 http://www.lartc.org/lartc.html#LARTC.RATELIMIT.SINGLE

Yep, that was my first thought as well.  It seems like something that
would ideally be handled outside of OpenStack itself.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Prasad Vellanki
GBP is about networking policy and hence limited to networking constructs.
It enhances the networking constructs. Since it follows more or less the
plugin model, it is not in one monolithic module but fans out to the policy
module and is done via an extension.


On Fri, Aug 8, 2014 at 12:45 PM, Armando M. arma...@gmail.com wrote:

 On 8 August 2014 10:56, Kevin Benton blak...@gmail.com wrote:

 There is an enforcement component to the group policy that allows you to
 use the current APIs and it's the reason that group policy is integrated
 into the neutron project. If someone uses the current APIs, the group
 policy plugin will make sure they don't violate any policy constraints
 before passing the request into the regular core/service plugins.


 This is the statement that makes me trip over, and I don't understand why
 GBP and Neutron Core need to be 'integrated' together as they have. Policy
 decision points can be decentralized from the system under scrutiny, we
 don't need to have one giant monolithic system that does everything; it's
 an architectural decision that would make it difficult to achieve
 composability and all the other good -ilities of software systems.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Image upload/download bandwidth cap

2014-08-08 Thread Arnaud Legendre
+1, That’s what was suggested in the blueprint a year ago: 
https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting

“It looks like consensus during summit discussion that rate limiting should be 
a separate facility running as a proxy in front of glance.”

Thanks,
Arnaud

On Aug 8, 2014, at 1:23 PM, Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com wrote:

On 08/08/2014 04:17 PM, Jay Pipes wrote:
On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:
Hi all,

I'm considering how I can apply image download/upload bandwidth limit for
glance for network QoS.

There was a review for the bandwidth limit, however it is abandoned.

* Download rate limiting
  https://review.openstack.org/#/c/21380/

Was there any discussion in the past summit about this not to merge this?
Or, is there alternative way to cap the bandwidth consumed by Glance?

I appreciate any information about this.

Hi Tomoki :)

Would it be possible to integrate traffic control into the network
configuration between the Glance endpoints and the nova-compute nodes
over the control plane network?

http://www.lartc.org/lartc.html#LARTC.RATELIMIT.SINGLE

Yep, that was my first thought as well.  It seems like something that
would ideally be handled outside of OpenStack itself.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Ivar Lazzaro
That's ok for me as well!
+1
On Aug 8, 2014 10:04 PM, Prasad Vellanki 
prasad.vella...@oneconvergence.com wrote:

 It sounds good
 +1


 On Fri, Aug 8, 2014 at 12:44 PM, Sumit Naiksatam sumitnaiksa...@gmail.com
  wrote:

 Thanks Jay for your constructive feedback on this. I personally think
 that 'policy-target' is a good option. I am not sure what the rest of
 the team thinks, perhaps they can chime in.

 On Fri, Aug 8, 2014 at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
  On 08/07/2014 01:17 PM, Ronak Shah wrote:
 
  Hi,
  Following a very interesting and vocal thread on GBP for last couple of
  days and the GBP meeting today, GBP sub-team proposes following name
  changes to the resource.
 
 
  policy-point for endpoint
  policy-group for endpointgroup (epg)
 
  Please reply if you feel that it is not ok with reason and suggestion.
 
 
  Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the
  meeting (was in other meetings yesterday and today).
 
   I'm very happy with the change from endpoint-group -> policy-group.
 
  policy-point is better than endpoint, for sure. The only other
 suggestion I
  might have would be to use policy-target instead of policy-point,
 since
  the former clearly delineates what the object is used for (a target for
 a
  policy).
 
  But... I won't raise a stink about this. Sorry for sparking long and
  tangential discussions on GBP topics earlier this week. And thanks to
 the
  folks who persevered and didn't take too much offense to my questioning.
 
  Best,
  -jay
 
 
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Image upload/download bandwidth cap

2014-08-08 Thread Tomoki Sekiyama
On 8/8/14 16:28 , Arnaud Legendre 
alegen...@vmware.commailto:alegen...@vmware.com wrote:
+1, That’s what was suggested in the blueprint a year ago: 
https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting

“It looks like consensus during summit discussion that rate limiting should be 
a separate facility running as a proxy in front of glance.”

Thanks,
Arnaud

On Aug 8, 2014, at 1:23 PM, Russell Bryant 
rbry...@redhat.commailto:rbry...@redhat.com wrote:

On 08/08/2014 04:17 PM, Jay Pipes wrote:
On 08/08/2014 08:49 AM, Tomoki Sekiyama wrote:
Hi all,

I'm considering how I can apply image download/upload bandwidth limit for
glance for network QoS.

There was a review for the bandwidth limit, however it is abandoned.

* Download rate limiting
  https://review.openstack.org/#/c/21380/

Was there any discussion in the past summit about this not to merge this?
Or, is there alternative way to cap the bandwidth consumed by Glance?

I appreciate any information about this.

Hi Tomoki :)

Would it be possible to integrate traffic control into the network
configuration between the Glance endpoints and the nova-compute nodes
over the control plane network?

http://www.lartc.org/lartc.html#LARTC.RATELIMIT.SINGLE

Yep, that was my first thought as well.  It seems like something that
would ideally be handled outside of OpenStack itself.
Ah OK, I got the point.
Thank you for the informations.

--
Tomoki Sekiyama
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-08 Thread Brent Eagles
On Wed, Aug 06, 2014 at 01:40:28PM +0800, Tom Fifield wrote:
snip/
 While DB migrations are running things like the nova metadata service
 can/will misbehave - and user code within instances will be affected.
 Thats arguably VM downtime.
 
 OTOH you could define it more narrowly as 'VMs are not powered off' or
 'VMs are not stalled for more than 2s without a time slice' etc etc -
 my sense is that most users are going to be particularly concerned
 about things for which they have to *do something* - e.g. VMs being
 powered off or rebooted - but having no network for a short period
 while vifs are replugged and the overlay network re-establishes itself
 would be much less concerning.
 
 I think you've got it there, Rob - nicely put :)
 
 In many cases the users I've spoken to who are looking for a live path out
 of nova-network on to neutron are actually completely OK with some API
 service downtime (metadata service is an API service by their definition).
 A little 'glitch' in the network is also OK for many of them.
 
 Contrast that with the original proposal in this thread (snapshot VMs in
 old nova-network deployment, store in Swift or something, then launch VM
 from a snapshot in new Neutron deployment) - it is completely unacceptable
 and is not considered a migration path for these users.
 
 
 Regards,
 
 
 Tom

I've thought about this off and on since it was brought up at summit. I
have some concerns about expectations. While I could probably rattle on,
I'll stick to the two for now.

- We need to be clear with expectations with connection resets and other
  odd connection behavior. There are some nice little gotchas for some
  applications when an IP address is moved, depending on how the connection
  is being used. Floating IPs could be interesting as well, since
  nova-network and neutron differ quite a bit in how they are
  implemented. The ultimate effect on running applications will of
  course depend on whether or not they can handle things of that nature.
  Apps designed for failover, stale connections, etc, will probably fare
  better than those that are not. Apps designed for cattle vms probably
  will do okay too. I imagine pets will be higher risk and interestingly
  enough, they seem to be a more likely target use case. I suppose this
  falls under the category of glitch, but the pessimist (realist?) in
  me is having a hard time believing that some deployments won't run into
  problems... which is a nice segue into the next concern.

- I wonder about uncommunicated expectations with migration rollback in
  case of the "all gone to hell, we need to put it back" situation. We
  have been talking about migrating a live VM from nova-network to
  neutron, but what about the way back? Are new VM boots going to be
  prevented until an all-clear is given to prevent orphans if
  nova-network needs to be put back in place? Or are we saying it is a
  never look back type of deal? Has this  been discussed and all
  worked out and I just missed it? This concerns me a great deal because
  cannot imagine any of the admins I've ever worked with doing something
  without a failsafe backup to known good state whether the end up
  needing it or not.

I'm not convinced that these have been thoroughly considered, nor are
they addressable in the very near future. I also am *deeply* concerned
that placing significant focus on this PRIOR to achieving parity with
nova-network both in function and stability jeopardizes all. That is not
to diminish the efforts of those that have already contributed heavily
in this area. However, this work is all for nothing if we haven't
covered the necessary gaps so that the users have something to migrate
*to*. 

Cheers,

Brent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-08 Thread Russell Bryant
On 08/08/2014 09:06 AM, Russell Bryant wrote:
  - instead implement a third party CI with the latest available
 libvirt release [1]
 
 As for the general idea of doing CI, absolutely.  That was discussed
 earlier in the thread, though nobody has picked up the ball yet.  I can
 work on it, though.  We just need to figure out a sensible approach.
 
 We've seen several times that building and maintaining 3rd party CI is a
 *lot* of work.  Like you said in [1], doing this in infra's CI would be
 ideal.  I think 3rd party should be reserved for when running it in the
 project's infrastructure is not an option for some reason (requires
 proprietary hw or sw, for example).
 
 I wonder if the job could be as simple as one with an added step in the
 config to install latest libvirt from source.  Dan, do you think someone
 could add a libvirt-current.tar.gz to http://libvirt.org/sources/ ?
 Using the latest release seems better than master from git.
 
 I'll mess around and see if I can spin up an experimental job.

Here's a first stab at it:

https://review.openstack.org/113020

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-08 Thread Henry Fourie
+1 for policy-target

-Original Message-
From: Sumit Naiksatam [mailto:sumitnaiksa...@gmail.com] 
Sent: Friday, August 08, 2014 12:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

Thanks Jay for your constructive feedback on this. I personally think that 
'policy-target' is a good option. I am not sure what the rest of the team 
thinks, perhaps they can chime in.

On Fri, Aug 8, 2014 at 8:43 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 08/07/2014 01:17 PM, Ronak Shah wrote:

 Hi,
 Following a very interesting and vocal thread on GBP for last couple 
 of days and the GBP meeting today, GBP sub-team proposes following 
 name changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.


 Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the 
 meeting (was in other meetings yesterday and today).

  I'm very happy with the change from endpoint-group -> policy-group.

 policy-point is better than endpoint, for sure. The only other 
 suggestion I might have would be to use policy-target instead of 
 policy-point, since the former clearly delineates what the object is 
 used for (a target for a policy).

 But... I won't raise a stink about this. Sorry for sparking long and 
 tangential discussions on GBP topics earlier this week. And thanks to 
 the folks who persevered and didn't take too much offense to my questioning.

 Best,
 -jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron][Technical Committee] nova-network - Neutron. Throwing a wrench in the Neutron gap analysis

2014-08-08 Thread Russell Bryant
On 08/06/2014 01:41 PM, Jay Pipes wrote:
 On 08/06/2014 01:40 AM, Tom Fifield wrote:
 On 06/08/14 13:30, Robert Collins wrote:
 On 6 August 2014 17:27, Tom Fifield t...@openstack.org wrote:
 On 06/08/14 13:24, Robert Collins wrote:

 What happened to your DB migrations then? :)


 Sorry if I misunderstood, I thought we were talking about running VM
 downtime here?

 While DB migrations are running things like the nova metadata service
 can/will misbehave - and user code within instances will be affected.
 Thats arguably VM downtime.

 OTOH you could define it more narrowly as 'VMs are not powered off' or
 'VMs are not stalled for more than 2s without a time slice' etc etc -
 my sense is that most users are going to be particularly concerned
 about things for which they have to *do something* - e.g. VMs being
 powered off or rebooted - but having no network for a short period
 while vifs are replugged and the overlay network re-establishes itself
 would be much less concerning.

 I think you've got it there, Rob - nicely put :)

 In many cases the users I've spoken to who are looking for a live path
 out of nova-network on to neutron are actually completely OK with some
 API service downtime (metadata service is an API service by their
 definition). A little 'glitch' in the network is also OK for many of
 them.

 Contrast that with the original proposal in this thread (snapshot VMs
 in old nova-network deployment, store in Swift or something, then launch
 VM from a snapshot in new Neutron deployment) - it is completely
 unacceptable and is not considered a migration path for these users.
 
 Who are these users? Can we speak with them? Would they be interested in
 participating in the documentation and migration feature process?

Yes, I'd really like to see some participation in the development of a
solution if it's an important requirement.  Until then, it feels like a
case of an open question of what do you want.  Of course the answer is
a pony.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-08 Thread Ivar Lazzaro
Hi Robert,

I think this is a great proposal.
Easy to understand and (at a first glance) easy to implement.
Also, it seems the perfect compromise for those who wanted GBP (as a
representative for this kind of extension) in tree, and those who wanted it
to be more mature first.

Fwiw, You have my support on this.
Ivar.
On Aug 8, 2014 10:27 PM, Robert Kukura kuk...@noironetworks.com wrote:

[Note - I understand there are ongoing discussion that may lead to a
proposal for an out-of-tree incubation process for new Neutron features.
This is a complementary proposal that describes how our existing
development process can be used to stabilize new features in-tree over the
time frame of a release cycle or two. We should fully consider both
proposals, and where each might apply. I hope something like the approach I
propose here will allow the implementations of Neutron BPs with non-trivial
APIs that have been targeted for the Juno release to be included in that
release, used by early adopters, and stabilized as quickly as possible for
general consumption.]

According to our existing development process, once a blueprint and
associated specification for a new Neutron feature have been reviewed,
approved, and targeted to a release, development proceeds, resulting in a
series of patches to be reviewed and merged to the Neutron source tree.
This source tree is then the basis for milestone releases and the final
release for the cycle.

Ideally, this development process would conclude successfully, well in
advance of the cycle's final release, and the resulting feature and its API
would be considered fully stable in that release. Stable features are
ready for widespread general deployment. Going forward, any further
modifications to a stable API must be backwards-compatible with previously
released versions. Upgrades must not lose any persistent state associated
with stable features. Upgrade processes and their impact on deployments
(downtime, etc.) should be consistent for all stable features.

In reality, we developers are not perfect, and minor (or more significant)
changes may be identified as necessary or highly desirable once early
adopters of the new feature have had a chance to use it. These changes may
be difficult or impossible to do in a way that honors the guarantees
associated with stable features.

For new features that affect the core Neutron API and therefore impact
all Neutron deployments, the stability requirement is strict. But for
features that do not affect the core API, such as services whose deployment
is optional, the stability requirement can be relaxed initially, allowing
time for feedback from early adopters to be incorporated before declaring
these APIs stable. The key in doing this is to manage the expectations of
developers, packagers, operators, and end users regarding these new
optional features while they stabilize.

I therefore propose that we manage these expectations, while new optional
features in the source tree stabilize, by clearly labeling these features
with the term preview until they are declared stable, and sufficiently
isolating them so that they are not confused with stable features. The
proposed guidelines would apply during development as the patches
implementing the feature are first merged, in the initial release
containing the feature, and in any subsequent releases that are necessary
to fully stabilize the feature.

Here are my initial not-fully-baked ideas for how our current process can
be adapted with a preview feature concept supporting in-tree
stabilization of optional features:

* Preview features are implementations of blueprints that have been
reviewed, approved, and targeted for a Neutron release. The process is
intended for features for which there is a commitment to add the feature to
Neutron, not for experimentation where failing fast is an acceptable
outcome.

* Preview features must be optional to deploy, such as by configuring a
service plugin or some set of drivers. Blueprint implementations whose
deployment is not optional are not eligible to be treated as preview
features.
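
As a rough illustration (service_plugins is the existing neutron.conf option
for opting in to service plugins; the "preview_feature_x" alias below is just
a placeholder, not an actual plugin), deploying a preview feature could be an
explicit opt-in, so deployments that never add the alias never load the
preview code:

  [DEFAULT]
  # Stable service plugins stay enabled as before; the preview plugin is
  # only loaded because the operator explicitly lists its alias here.
  service_plugins = router,lbaas,preview_feature_x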

* Patches implementing a preview feature are merged to the master
branch of the Neutron source tree. This makes them immediately available to
all direct consumers of the source tree, such as developers, trunk-chasing
operators, packagers, and evaluators or end-users that use DevStack,
maximizing the opportunity to get the feedback that is essential to quickly
stabilize the feature.

* The process for reviewing, approving and merging patches implementing
preview features is exactly the same as for all other Neutron patches. The
patches must meet HACKING standards, be production-quality code, have
adequate test coverage, have DB migration scripts, etc., and require two
+2s and a +A from Neutron core developers to merge.

* DB migrations for preview features are treated similarly to other DB
migrations, forming a single ordered list that results in the current
overall DB 

Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree

2014-08-08 Thread Sumit Naiksatam
On Fri, Aug 8, 2014 at 1:21 PM, Robert Kukura kuk...@noironetworks.com wrote:
 [Note - I understand there are ongoing discussions that may lead to a
 proposal for an out-of-tree incubation process for new Neutron features.
 This is a complementary proposal that describes how our existing development
 process can be used to stabilize new features in-tree over the time frame of
 a release cycle or two. We should fully consider both proposals, and where
 each might apply. I hope something like the approach I propose here will
 allow the implementations of Neutron BPs with non-trivial APIs that have
 been targeted for the Juno release to be included in that release, used by
 early adopters, and stabilized as quickly as possible for general
 consumption.]


+1. I think this proposal is simple to understand and has limited process
and operational overhead, while achieving the desired benefit of getting
preview features into the hands of early adopters with the right set of
expectations.

 According to our existing development process, once a blueprint and
 associated specification for a new Neutron feature have been reviewed,
 approved, and targeted to a release, development proceeds, resulting in a
 series of patches to be reviewed and merged to the Neutron source tree. This
 source tree is then the basis for milestone releases and the final release
 for the cycle.

 Ideally, this development process would conclude successfully, well in
 advance of the cycle's final release, and the resulting feature and its API
 would be considered fully stable in that release. Stable features are
 ready for widespread general deployment. Going forward, any further
 modifications to a stable API must be backwards-compatible with previously
 released versions. Upgrades must not lose any persistent state associated
 with stable features. Upgrade processes and their impact on a deployment
 (downtime, etc.) should be consistent for all stable features.

 In reality, we developers are not perfect, and minor (or more significant)
 changes may be identified as necessary or highly desirable once early
 adopters of the new feature have had a chance to use it. These changes may
 be difficult or impossible to do in a way that honors the guarantees
 associated with stable features.

 For new features that affect the core Neutron API and therefore impact all
 Neutron deployments, the stability requirement is strict. But for features
 that do not affect the core API, such as services whose deployment is
 optional, the stability requirement can be relaxed initially, allowing time
 for feedback from early adopters to be incorporated before declaring these
 APIs stable. The key in doing this is to manage the expectations of
 developers, packagers, operators, and end users regarding these new optional
 features while they stabilize.

 I therefore propose that we manage these expectations, while new optional
 features in the source tree stabilize, by clearly labeling these features
 with the term preview until they are declared stable, and sufficiently
 isolating them so that they are not confused with stable features. The
 proposed guidelines would apply during development as the patches
 implementing the feature are first merged, in the initial release containing
 the feature, and in any subsequent releases that are necessary to fully
 stabilize the feature.

 Here are my initial not-fully-baked ideas for how our current process can be
 adapted with a preview feature concept supporting in-tree stabilization of
 optional features:

 * Preview features are implementations of blueprints that have been
 reviewed, approved, and targeted for a Neutron release. The process is
 intended for features for which there is a commitment to add the feature to
 Neutron, not for experimentation where failing fast is an acceptable
 outcome.

 * Preview features must be optional to deploy, such as by configuring a
 service plugin or some set of drivers. Blueprint implementations whose
 deployment is not optional are not eligible to be treated as preview
 features.

 * Patches implementing a preview feature are merged to the master branch
 of the Neutron source tree. This makes them immediately available to all
 direct consumers of the source tree, such as developers, trunk-chasing
 operators, packagers, and evaluators or end-users that use DevStack,
 maximizing the opportunity to get the feedback that is essential to quickly
 stabilize the feature.

 * The process for reviewing, approving and merging patches implementing
 preview features is exactly the same as for all other Neutron patches. The
 patches must meet HACKING standards, be production-quality code, have
 adequate test coverage, have DB migration scripts, etc., and require two +2s
 and a +A from Neutron core developers to merge.

 * DB migrations for preview features are treated similarly to other DB
 migrations, forming a single ordered list that results in the current
 overall DB schema, including the schema for 

Re: [openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

2014-08-08 Thread Armando M.

 Adding the GBP extension to Neutron does not change the nature of the
 software architecture of Neutron by making it more or less monolithic.


I agree with this statement...partially: the way GBP was developed is in
accordance with the same principles and architectural choices made for the
service plugins and frameworks we have right now, and yes, it does not make
Neutron more monolithic, but certainly not less so. These very same principles
have unveiled limitations that we have realized need to be addressed, as
Neutron's busy agenda allows. That said, if I were given the opportunity
to revise some architectural decisions during new groundbreaking work
(regardless of its nature), I would.

For instance, I hate that the service plugins live in the same address
space as the Neutron server, and I hate that I have one Neutron server that
does L2, L3, IPAM, and so on; we could break it down and make sure every
entity has its own lifecycle: we could compose and integrate more easily if
we did. Isn't that what years of middleware and distributed systems have
taught us?

I suggested in the past that GBP would best integrate with Neutron via a
stable and RESTful interface, just like any other OpenStack project does. I
have yet to be convinced otherwise, and I would love to be able to change
my opinion.


 It
 fills a gap that is currently present in the Neutron API, namely, to
 complement the current imperative abstractions with an
 app-developer/deployer-friendly declarative abstraction [1]. To
 reiterate, it has been proposed as an “extension”, and not a
 replacement of the core abstractions or the way those are consumed.

 If this is understood and interpreted correctly, I doubt that there
 should be reason for concern.


I never said that GBP meant to replace the core abstractions: I am
talking purely about architecture and system integration. I am not sure
whether this statement is directed at my comment.


  1   2   >