Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Renat Akhmerov

On 13 Nov 2013, at 6:39, Angus Salkeld asalk...@redhat.com wrote:

 Your work mates ;) https://github.com/rackerlabs/qonos
 
 How about merging qonos into Mistral, or at least putting it into stackforge?

Just got around to looking at qonos. It actually looks similar in some ways to Mistral 
but with some differences: no actual workflows, just individual job 
scheduling, no configurable transports, no webhooks, dedicated workers. And in 
some ways it's related to the EventScheduler API, but it's not generic enough (not 
based on anything like webhooks). I think we could definitely reuse some ideas 
from Qonos in Mistral, but I'm not sure at this point that it could just be 
merged in as is; the philosophies of the projects are a little bit different. 
Worth considering though. It would be cool to get Adrian involved in this 
discussion since he has participated in both efforts.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Command Line Interface for Solum

2013-11-14 Thread Noorul Islam Kamal Malmiyoda
On Nov 14, 2013 1:10 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 Noorul,

 On Nov 13, 2013, at 7:43 PM, Noorul Islam K M noo...@noorul.com
  wrote:

  Doug Hellmann doug.hellm...@dreamhost.com writes:
 
  On Sun, Nov 10, 2013 at 10:15 AM, Noorul Islam K M noo...@noorul.com
wrote:
 
 
  Hello all,
 
  I registered a new blueprint [1] for command line client interface for
  Solum. We need to decide whether we should have a separate repository
  for this or go with new unified CLI framework [2]. Since Solum is not
  part of OpenStack I think it is not the right time to go with the
  unified CLI.
 
 
  One of the key features of the cliff framework used for the unified command
  line app is that the subcommands can be installed independently of the main
  program. So you can write plugins that work with the openstack client, but
  put them in the solum client library package (and source repository). That
  would let you, for example:
 
   $ pip install python-solumclient
   $ pip install python-openstackclient
   $ openstack solum make me a paas
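
  As a purely illustrative sketch (not actual python-solumclient code; the
  module and command names are made up), a cliff subcommand is essentially a
  Command subclass that the unified client discovers through a setuptools
  entry point:

      # solumclient/osc/app.py -- hypothetical module
      import logging

      from cliff.command import Command


      class AppCreate(Command):
          """Register a new application with Solum (illustrative only)."""

          log = logging.getLogger(__name__)

          def get_parser(self, prog_name):
              parser = super(AppCreate, self).get_parser(prog_name)
              parser.add_argument('name', help='Name of the application')
              return parser

          def take_action(self, parsed_args):
              # A real command would call the Solum API here.
              self.log.info('creating application %s', parsed_args.name)

  The exact entry-point group to register it under should follow whatever
  python-openstackclient ends up using for plugins.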
 
  Dean has done a lot of work to design a consistent noun-followed-by-verb
  command structure, so please look at that work when picking subcommand
  names (for example, you shouldn't use solum as a prefix as I did in my
  example above, since we are removing the project names from the commands).
 
 
  I think we should follow this. If others have no objection, I will
  submit a review to openstack-infra/config to create a new repository
  named python-solumclient with initial code from the cookiecutter template.
 
  Adrian,
 
  Does this blueprint need to be in the Approved state before performing the
  above task?

 Thanks for the enthusiasm! I'd like further input from additional team
members before advancing on this.


I think whichever path we take, a separate repository is required.

Regards,
Noorul
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

2013-11-14 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com] 
Sent: Tuesday, November 12, 2013 9:24 PM
To: openstack-dev
Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

Excerpts from Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)'s 
message of 2013-11-12 09:25:19 -0800:
 Hi,
 
 In Telecom Cloud applications, the requirements for every application are 
 different. One application might need 10 CPUs, 10GB RAM and no disk. Another 
 application might need 1 CPU, 512MB RAM and 100GB disk. These varied 
 requirements directly affect the flavors which need to be created for 
 different applications (virtual instances). Each customer has their own custom 
 requirements for CPU, RAM and other hardware. So, based on the requests from 
 the customers, we believe that flavor creation should be done along with 
 instance creation, just before the instance is created. Most of the flavors 
 will be specific to that application and therefore will not be suitable for 
 other instances.
 
 The obvious way is to allow users to create flavors and boot customized 
 instances through Heat. As of now, users can launch instances through Heat 
 only with predefined nova flavors. We have made some changes in our setup 
 and tested them. This change allows creation of customized nova flavors 
 using heat templates. We are also using extra-specs in the flavors for use 
 in our private cloud deployment.
 This gives the user an option to specify custom requirements for the flavor 
 directly in the heat template, along with the instance details. There is one 
 problem with nova flavor creation using heat templates: admin privileges are 
 required to create nova flavors. There should be a way to allow a normal 
 user to create flavors.
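
 Purely as an illustration of the idea (the fragment below is not the exact 
 change we made; the resource type and its property names are hypothetical, 
 since no such resource exists in Heat today), such a template fragment might 
 look like:

     heat_template_version: 2013-05-23

     resources:
       app_flavor:
         type: OS::Nova::Flavor
         properties:
           ram: 10240
           vcpus: 10
           disk: 0
           extra_specs: {"telco:low_latency": "true"}

       app_server:
         type: OS::Nova::Server
         properties:
           image: my-telco-image
           flavor: { get_resource: app_flavor }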
 
 Your comments and suggestions are most welcome on how to handle this problem!

Seems like you just need to set up your Nova policy to allow a role to do
flavor creation:

    "compute_extension:flavormanage": "rule:admin_api",
    "compute_extension:v3:flavor-manage": "rule:admin_api",
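
For example (a sketch only; the role name is made up and would be whatever
your deployment defines), those policy.json entries could be relaxed to
something like:

    "compute_extension:flavormanage": "rule:admin_api or role:flavor_manager",
    "compute_extension:v3:flavor-manage": "rule:admin_api or role:flavor_manager",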

And then enhance Heat to make those API calls.

There must be some valid reason for adding those checks in the Nova Policy. I 
would like to understand the implications before making any changes.

Regards,
VijayKumar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

2013-11-14 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
From: Steve Baker [mailto:sba...@redhat.com] 
Sent: Tuesday, November 12, 2013 9:25 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through 
Heat

On 11/13/2013 07:50 AM, Steven Dake wrote:
On 11/12/2013 10:25 AM, Kodam, Vijayakumar (EXT-Tata Consultancy Ser - 
FI/Espoo) wrote:
Hi, 

   In Telecom Cloud applications, the requirements for every application are
different. One application might need 10 CPUs, 10GB RAM and no disk. Another
application might need 1 CPU, 512MB RAM and 100GB disk. These varied
requirements directly affect the flavors which need to be created for
different applications (virtual instances). Each customer has their own custom
requirements for CPU, RAM and other hardware. So, based on the requests from
the customers, we believe that flavor creation should be done along with
instance creation, just before the instance is created. Most of the flavors
will be specific to that application and therefore will not be suitable for
other instances.

   The obvious way is to allow users to create flavors and boot customized
instances through Heat. As of now, users can launch instances through Heat
only with predefined nova flavors. We have made some changes in our setup and
tested them. This change allows creation of customized nova flavors using heat
templates. We are also using extra-specs in the flavors for use in our private
cloud deployment.
   This gives the user an option to specify custom requirements for the flavor
directly in the heat template, along with the instance details. There is one
problem with nova flavor creation using heat templates: admin privileges are
required to create nova flavors. There should be a way to allow a normal user
to create flavors.

Your comments and suggestions are most welcome on how to handle this problem!

Regards,
Vijaykumar Kodam

Vijaykumar,

I have long believed that an OS::Nova::Flavor resource would make a good
addition to Heat, but as you pointed out, this type of resource requires
administrative privileges.  I generally also believe it is bad policy to
implement resources that *require* admin privs to operate, because that
results in yet more resources that require admin.  We are currently solving
the IAM user case (keystone doesn't allow the creation of users without admin
privs).

It makes sense that cloud deployers would want to control who can create
flavors, to avoid DOS attacks against their infrastructure or to prevent
trusted users from creating a wacky flavor that the physical infrastructure
can't support.  I'm unclear whether nova offers a way to reduce the
permissions required for flavor creation.  One option that may be possible is
via the keystone trusts mechanism.

Steve Hardy did most of the work integrating Heat with the new keystone 
trusts system - perhaps he has some input.

I would be happy for you to submit your OS::Nova::Flavor resource to heat. 
There are a couple of nova-specific issues that will need to be addressed:
* Is there optimization in nova required to handle the proliferation of 
flavors? Nova may currently make the assumption that the flavor list is short 
and static.
* How to provide an authorization policy that allows non-admins to create 
flavors. Maybe something role-based?

Thanks Steve Baker for the information. I am also waiting to hear from Steve
Hardy whether the keystone trust system will fix the nova flavors admin
privileges issue.
One option to control the proliferation of nova flavors is to make them
private to the tenant that created them (using flavor-access?). This provides
the needed privacy so that other tenants cannot view them.
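
For illustration (the flavor name is made up; the tenant id is a placeholder),
the existing nova client already exposes commands along these lines:

    $ nova flavor-create TelcoAppFlavor auto 512 100 1 --is-public false
    $ nova flavor-access-add TelcoAppFlavor <tenant_id>

Whether this gives true per-tenant privacy is part of the question raised
elsewhere in this thread.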

Regards,
VijayKumar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Tasks

2013-11-14 Thread Mark Washenberger
Responses to both Jay and George inline.


On Wed, Nov 13, 2013 at 5:08 PM, Jay Pipes jaypi...@gmail.com wrote:

 Sorry for top-posting, but in summary, I entirely agree with George here.
 His logic is virtually identical to the concerns I raised with the initial
 proposal for Glance Tasks here:

 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009400.html
 and
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009527.html


In my understanding, your viewpoints are subtly different.

George seems to agree with representing ongoing asynchronous tasks through
a separate 'tasks' resource. I believe where he differs with the current
design is how those tasks are created. He seems to prefer creating tasks
with POST requests to the affected resources. To distinguish between
uploading an image and importing an image, he suggests we require a
different content type in the request.

However, your main point in the links above seemed to be to reuse POST
/v2/images, but to capture the asynchronous nature of image verification
and conversion by adding more nodes to the image state machine.



 Best,
 -jay


 On 11/13/2013 05:36 PM, George Reese wrote:

 Let’s preface this with Glance being the part of OpenStack I am least
 familiar with. Keep in mind my commentary is related to the idea that
 the asynchronous tasks as designed are being considered beyond Glance.
 The problems of image upload/import/cloning/export are unlike other
 OpenStack operations for the most part in that they involve binary data
 as the core piece of the payload.

 Having said that, I’d prefer a polymorphic POST to the tasks API as
 designed.


Thanks. I think we'll move forward with this design for now in Glance. But
your alternative below is compelling and we'll definitely consider it as we
add future tasks.
adopt your proposal in the future as long as we also support backwards
compatibility with the current design, but I can't predict at this point
the practical concerns that will emerge.


 But I’m much more concerned with the application of the tasks
 API as designed to wider problems.


I think this concern is very reasonable. Other projects should evaluate
your proposal carefully.



 Basically, I’d stick with POST /images.

 The content type should indicate what the server should expect.
 Basically, the content can be:

 * An actual image to upload
 * Content describing a target for an import
 * Content describing a target for a clone operation

 Implementation needs dictate whether any given operation is synchronous
 or asynchronous. Practically speaking, upload would be synchronous with
 the other two being asynchronous. This would NOT impact an existing
 /images POST as it will not change (unless we suddenly made it
 asynchronous).

 The response would be CREATED (synchronous) or ACCEPTED (asynchronous).
 If ACCEPTED, the body would contain JSON/XML describing the asynchronous
 task.
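
 To illustrate (a hypothetical exchange; the content type and field names are
 made up for the example), an import under this scheme might look like:

     POST /v2/images HTTP/1.1
     Content-Type: application/openstack-images-import+json

     {"import_from": "swift://container/object", "disk_format": "qcow2"}

     HTTP/1.1 202 Accepted

     {"task": {"type": "import", "status": "pending"}}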

 I’m not sure if export is supposed to export to a target object store or
 export to another OpenStack environment. But it would be an async
 operation either way and should work as described above. Whether the
 endpoint for the image to be exported is the target or just /images is
 something worthy of discussion based on what the actual function of the
 export is.

 -George

 On Nov 12, 2013, at 5:45 PM, John Bresnahan j...@bresnahan.me wrote:

  George,

 Thanks for the comments, they make a lot of sense.  There is a Glance
 team meeting on Thursday where we would like to push a bit further on
 this.  Would you mind sending in a few more details? Perhaps a sample
 of what your ideal layout would be?  For example, how would you prefer
 to handle actions that do not affect a currently existing resource but
 ultimately create a new resource (such as the import action)?

 Thanks!

 John


 On 11/11/13, 8:05 PM, George Reese wrote:

 I was asked at the OpenStack Summit to look at the Glance Tasks,
 particularly as a general pattern for other asynchronous operations.

 If I understand Glance Tasks appropriately, different asynchronous
 operations get replaced by a single general purpose API call?

 In general, a unified API for task tracking across all kinds of
 asynchronous operations is a good thing. However, assuming this
 understanding is correct, I have two comments:

 #1 A consumer of an API should not need to know a priori whether a
 given operation is “asynchronous”. The asynchronous nature of the
 operation should be determined through a response. Specifically, if
 the client gets a 202 response, then it should recognize that the
 action is asynchronous and expect a task in the response. If it gets
 something else, then the action is synchronous. This approach has the
 virtue of being proper HTTP and allowing the needs of the
 implementation to dictate the synchronous/asynchronous nature of the
 API call rather than a fixed contract.

 #2 I 

Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-14 Thread Jesse Pretorius
On 13 November 2013 23:39, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:

 I finally found a set of web pages by Andy Mc that has a working set of
 configuration files for the major OpenStack services:
 http://andymc-stack.co.uk/2013/07/apache2-mod_wsgi-openstack-pt-2-nova-api-os-compute-nova-api-ec2/
 I skipped ceilometer and have the rest of the services
 working except quantum, with self-signed certificates, on a Grizzly-3
 OpenStack instance. Now I am stuck trying to figure out how to get quantum
 to accept self-signed certificates.

 My goal is to harden my Grizzly-3 OpenStack instance using SSL and
 self-signed certificates. Later I will do the same for Havana bits and use
 real/valid certificates.


I struggled with getting this all to work correctly for a few weeks, then
eventually gave up and opted instead to use an Apache reverse proxy to
front-end the native services. I just found that using an Apache/wsgi
configuration doesn't completely work. It would certainly help if this
configuration were incorporated into the OpenStack testing regime, so that all
the services become first-class citizens as wsgi processes behind Apache.
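
For illustration, a minimal reverse-proxy vhost along those lines might look
like this (a sketch only; the certificate paths and back-end port are
assumptions to adapt per service):

    # requires mod_ssl, mod_proxy and mod_proxy_http
    <VirtualHost *:8774>
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/nova.crt
        SSLCertificateKeyFile /etc/ssl/private/nova.key

        ProxyPass        / http://127.0.0.1:18774/
        ProxyPassReverse / http://127.0.0.1:18774/
    </VirtualHost>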

I would suggest that you review the wsgi files and vhost templates in the
rcbops chef cookbooks for each service. They include my updates to Andy's
original blog items to make things work properly.

I found that while Andy's configuration appears to work, it effectively only
works in a read-only fashion. I managed to confirm that keystone/nova worked
properly, but glance just would not work - I could never upload any images,
and if caching/management was turned off in the glance service then
downloading images didn't work either.

Good luck - if you do get a fully working config it'd be great to get
feedback on the adjustments you had to make to get it working.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Custom Flavor creation through Heat

2013-11-14 Thread Kodam, Vijayakumar (EXT-Tata Consultancy Ser - FI/Espoo)
not the video I was looking for, but he kind of makes the point about 
planning...
http://youtu.be/2E0C9zLSINE?t=42m55s

# Shawn Hartsock


Thanks Shawn for your input. As I mentioned earlier, the use case is 
mostly for telecom cloud applications running in our private cloud. 
Their hardware requirements are quite different from the usual IT 
hardware requirements. 
We also have to use extra-specs to help nova schedule the virtual instances and 
to make other decisions.
And the flavors we create will mostly be visible/accessible only to the tenant 
which creates them. 
This restriction could be imposed using the flavor-access feature of nova.

Regards,
VijayKumar 


- Original Message -
 From: Shawn Hartsock hartso...@vmware.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, November 12, 2013 6:38:21 PM
 Subject: Re: [openstack-dev] Custom Flavor creation through Heat
 
 
 
 - Original Message -
  From: Yunhong Jiang yunhong.ji...@intel.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Tuesday, November 12, 2013 5:39:58 PM
  Subject: Re: [openstack-dev] Custom Flavor creation through Heat
   -Original Message-
   From: Shawn Hartsock [mailto:hartso...@vmware.com]
   Sent: Tuesday, November 12, 2013 12:56 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [nova] [heat] Custom Flavor creation through
   Heat
   
   My concern with proliferating custom flavors is that it might play havoc
   with the underlying rationale for flavors.
   
   My understanding of flavors is that they are used to solve the resource
   packing problem in elastic cloud scenarios. That way you know that 256
   tiny VMs fit cleanly into your hardware layout, and so do 128 medium
   VMs and 64 large VMs. If you allow a flavor of the week, then the packing
   problem re-asserts itself and scheduling becomes harder.
  
  I'm a bit surprised that the flavor is used to resolve the packing problem.
  I thought it should be handled by the scheduler, although it's an NP-hard
  problem.
 
 I should have said flavors help to make the packing problem simpler for the
 scheduler ... flavors do not solve the packing problem.
 
  
  As for custom flavors, I think at least it's against the current nova
  assumption. Currently nova assumes flavors should only be created by an
  admin, who knows the cloud quite well.
  One example is that a flavor may contain extra-specs, so if an extra-spec
  value is specified in the flavor while the corresponding scheduler filter is
  not enabled, then the extra-spec has no effect and may cause issues.
  
 
 Beyond merely extra-specs my understanding was that because you *have*
 flavors you can make assumptions about packing that make the problem space
 smaller... someone made a nice presentation showing how having restricted
 flavors made scheduling easier. I can't find it right now. It was presented
 at an OpenStack summit IIRC.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-14 Thread Thierry Carrez
Kurt Griffiths wrote:
 Also, does it place a requirement that all projects wanting to request
 incubation be placed in stackforge? That sounds like a harsh
 requirement if we were to reject them.
 
 I think that anything that encourages projects to get used to the
 OpenStack development process sooner, rather than later, is a good thing.
 [...]

Fully agree with that... I was just taking the point of view of a
project currently developed outside of stackforge using a weird VCS --
we'd basically force them to switch to stackforge and git before we'd
even consider their application. If we ended up rejecting them, they
might feel this was a lot of wasted effort. I don't object to raising the
pre-incubation requirements, just want to make sure everyone will be
aware of the consequences :)

That said, most (all?) openstack-wannabe projects now go the stackforge
route, so I'm not sure that would create extra pain for anyone.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-14 Thread Flavio Percoco

On 13/11/13 17:20 -0800, Stefano Maffulli wrote:

On 11/13/2013 04:34 PM, Colin McNamara wrote:

Not to be contrarian, but 92% of the commits in Havana came from
non-individual contributions. The majority of those came from big name
companies (IBM, RedHat, etc).


ow, that's harsh. Despite what US Supreme Court Judges may think,
companies are not people: in the context of this discussion (and for the
purpose of reporting on development activity) companies don't *do*
anything besides pay the salaries of people. Red Hat, IBM, Rackspace, HP,
etc. happen to pay the salaries of hundreds of skilled developers. That's
it. I happen to have started reporting publicly on company activity
because I (as community manager) need to understand the full extent of
the dynamics inside the ecosystem. Those numbers are public, and some
pundits abuse them to fuel PR flame wars.


Couldn't agree more!




In the operator case, there are examples where an operator uses another
company's devs to write a patch for their install that gets committed
upstream. In this case, the patch was sponsored by the operator company but
written and submitted by a developer employed by another.

Allowing the fact that an operator/end user sponsored a patch to be tracked
further incents more operators/end users to put funds towards getting
features written.


I am not convinced at all that such a thing would be of any incentive for
operators to contribute upstream. The practical advantage of having a
feature upstream, maintained by somebody else, should be more than enough
to justify it. I see the PR/marketing value in it, not a practical one.
On the other hand, I see potential for it to incentivize damaging behaviour.

As others have mentioned already, we have a lot of small contributions
coming into the code base, but we're generally lacking people involved in
the hard parts of OpenStack. We need people contributing to 'thankless'
jobs that need to be done: from code reviewers to QA people to the
Security team, we need people involved there. I fear that giving
incentives to such small vanity contributions would do harm to our
community.


Agreed here as well.

There's nothing wrong with small contributions but I can see them
being abused.




This is a positive for the project, its devs and the community. It
also opens up an expanded market for contract developers working on
specific features.


I also don't see any obstacle for any company to proudly issue a press
release, blog post or similar, saying that they have sponsored a
feature/bug fix in OpenStack giving credit to developers/company writing
it. Why wouldn't that be enough? Why do we need to put in place a
reporting machine for what seems to be purely a marketing/pr need?


+1 here as well!

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-14 Thread Flavio Percoco

On 13/11/13 17:22 -0700, John Griffith wrote:

On Wed, Nov 13, 2013 at 5:14 PM, Jay Pipes jaypi...@gmail.com wrote:

On 11/11/2013 12:44 PM, Daniel P. Berrange wrote:


On Mon, Nov 11, 2013 at 03:20:20PM +0100, Nicolas Barcet wrote:


Dear TC members,

Our companies are actively encouraging our respective customers to have the
patches they commission us to make contributed back upstream.  In order to
encourage this behavior from them and others, it would be nice if we could
gain some visibility as sponsors of the patches in the same way we get
visibility as authors of the patches today.

The goal here is not to provide yet another way to count affiliations of
direct contributors, nor is it a way to introduce sales pitches in contrib.
The only acceptable and appropriate use of the proposal we are making is to
signal when a patch is made by a contributor for another company than the one
he is currently employed by.

For example, if I work for a company A and write a patch as part of an
engagement with company B, I would signal that Company B is the sponsor of my
patch this way, not Company A.  Company B would, under current circumstances,
not get any credit for their indirect contribution to our code base, while I
think it is our intent to encourage them to contribute, even indirectly.

To enable this, we are proposing that the commit text of a patch may include a

    Sponsored-by: sponsorname

line which could be used by various tools to report on these commits.
Sponsored-by should not be used to report on the name of the company the
contributor is already affiliated with.
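
As a purely hypothetical illustration (names made up), a commit message using
the proposed tag might look like:

    Add SSL termination support to the foo driver

    Sponsored-by: Company B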

We would appreciate your comments on the subject and, eventually, your
approval for its use.



IMHO, let's call this what it is: marketing.

I'm fine with the idea of a company wanting to have recognition for work
that they fund. They can achieve this by putting out a press release or
writing a blog post saying that they funded awesome feature XYZ to bring
benefits ABC to the project on their own websites, or any number of other
marketing approaches. Most / many companies and individuals contributing
to OpenStack in fact already do this very frequently which is fine /
great.

I don't think we need to, nor should we, add anything to our code commits,
review / development workflow / toolchain to support such marketing
pitches.
The identities recorded in git commits / gerrit reviews / blueprints etc.
should exclusively focus on technical authorship, not sponsorship. Leave
the marketing pitches for elsewhere.



I agree with Daniel here. There's nothing wrong with marketing, and there's
nothing wrong with a company promoting the funding that it contributed to
get some feature written or high profile bug fixed. But, I don't believe
this marketing belongs in the commit log. In the open source community,
*individuals* develop and contribute code, not companies. And I'm not
talking about joint contribution agreements, like the corporate CLA. I'm
talking about the actual work that is performed by developers, technical
documentation folks, QA folks, etc. Source control should be the domain of
the individual, not the company.



Well said


Yet again, couldn't agree more!




Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal to recognize indirect contributions to our code base

2013-11-14 Thread Thierry Carrez
Stefano Maffulli wrote:
 On 11/13/2013 04:34 PM, Colin McNamara wrote:
 Not to be contrarian, but 92% of the commits in Havana came from
 non-individual contributions. The majority of those came from big name
 companies (IBM, RedHat, etc). 
 
 ow, that's harsh. Despite what US Supreme Court Judges may think,
 companies are not people: in the context of this discussion (and for the
 purpose of reporting on development activity) companies don't *do*
 anything besides pay the salaries of people. Red Hat, IBM, Rackspace, HP,
 etc. happen to pay the salaries of hundreds of skilled developers. That's
 it.

Furthermore, an ever-growing number of those developers actually work for
the OpenStack project itself, with companies sponsoring them to do that
much-needed work. Those companies have a vested interest in seeing
OpenStack succeed so they pay a number of individuals to do their magic
and make it happen. A lot of those individuals also end up switching
sponsors while keeping their position within the OpenStack project.
That's a very sane setup where everyone wins.

So the fact that, according to your stats, 8% of people working on
OpenStack are apparently unemployed (and I suspect the real number is
much lower) doesn't mean only 8% of contributions come from individuals.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Straw man to start the incubation / graduation requirements discussion

2013-11-14 Thread Flavio Percoco

On 14/11/13 10:13 +0100, Thierry Carrez wrote:

Kurt Griffiths wrote:

Also, does it place a requirement that all projects wanting to request
incubation be placed in stackforge? That sounds like a harsh
requirement if we were to reject them.


I think that anything that encourages projects to get used to the
OpenStack development process sooner, rather than later, is a good thing.
[...]


Fully agree with that... I was just taking the point of view of a
project currently developed outside of stackforge using a weird VCS --
we'd basically force them to switch to stackforge and git before we'd
even consider their application. If we ended up rejecting them, they
might feel this was a lot of wasted effort. I don't object to raising the
pre-incubation requirements, just want to make sure everyone will be
aware of the consequences :)

That said, most (all?) openstack-wannabe projects now go the stackforge
route, so I'm not sure that would create extra pain for anyone.


And I don't think there's anything wrong with this. TBH, I thought
being in stackforge was already a requirement. If it is not, I think
it should be.

Moving projects under stackforge before going under openstack will
help us test them as part of the infrastructure; it'll give the
team enough information about OpenStack's infrastructure, Gerrit's
workflow, gates and more. Also, this will give the TC more information
about the team's / project's integration with OpenStack.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Congress: an open policy framework

2013-11-14 Thread Flavio Percoco

On 13/11/13 10:40 -0800, Tim Hinrichs wrote:

We're just getting started with Congress and understanding how it will 
integrate with the OS ecosystem, but here's our current thinking about how 
Congress relates to Oslo's policy engine and to Keystone.  Comments and 
suggestions are welcome.


Congress and Oslo

Three dimensions for comparison: policy language, data sources, and policy 
engine.

We've always planned to make Congress compatible with existing policy languages 
like the one in oslo.  The plan is to build a front-end for a number of policy 
languages/formats, e.g. oslo-policy language, XACML, JSON, YAML, SQL, etc.  The 
idea being that the syntax/language you use is irrelevant as long as it can be 
mapped into Congress's native policy language.  As of now, Congress is using 
Datalog, which is a variant of SQL and is at least as expressive as all of the 
policy languages we've run across in the cloud domain, including the 
oslo-policy language.

In terms of the data sources you can reference in the policy, Congress is 
designed to enable policies that reference arbitrary data sources in the cloud. 
 For example, we could write a Nova authorization policy that permits a new VM 
to be created if that VM is connected to a network owned by a tenant (info 
stored in Neutron) where the VM owner (info in the request) is in the same 
group as the network owner (info stored in Keystone/LDAP).  Oslo handles some
of these data sources with its terminal rules, but it's not involved in data
integration to the same extent Congress is.

In terms of policy engines, Congress is intended to enforce policies in 2 
different ways: proactively (stopping policy violations before they occur) and 
reactively (acting to eliminate a violation after it occurs).  Ideally we 
wouldn't need reactive enforcement, but there will always be cases where 
proactive enforcement is not possible (e.g. a DOS attack brings app latencies 
out of compliance).  The oslo-engine does proactive enforcement only--stopping 
API calls before they violate the policy.

One concrete integration idea would be to treat Congress as a plugin for the
oslo-policy engine.  This wouldn't enable, say, Nova to write policies that take
advantage of the expressiveness of Datalog, but it would give us backwards
compatibility.
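
As a rough sketch of that idea (assuming the register()/Check interface of the
oslo-incubator policy module as copied into each project; the Congress lookup
itself is just a placeholder, since no client API exists yet):

    from nova.openstack.common import policy


    @policy.register('congress')
    class CongressCheck(policy.Check):
        """Delegate a policy check to a Congress engine (illustrative)."""

        def __call__(self, target, creds):
            # self.match would name the Congress policy to evaluate.
            return self._ask_congress(self.match, target, creds)

        def _ask_congress(self, policy_name, target, creds):
            # Placeholder: a real implementation would call out to
            # Congress here; this sketch simply denies.
            return False

A rule could then read something like "compute:create": "congress:vm_policy"
in policy.json.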


In terms of ease of integration with other projects, this sounds good.
However, it sounds like it'll add more complexity to policy.py than we
want.

That being said, I see the benefits of doing so, as long as policy.py
remains a library.



Congress and Keystone
--
I see Keystone as providing two pieces of functionality: authentication and 
group membership.  Congress has nothing to do with authentication and never 
will.  Groups, on the other hand, are things we end up defining when writing 
policies in Congress, so conceptually there's some overlap with Keystone.  I 
guess Congress could serve as a plugin/data source for Keystone and provide it 
with the groups defined within the policy.  This would allow a group to be 
defined using data sources not available to Keystone, e.g. we could define a 
group as all users who own a VM (info from Nova) connected to a network owned 
by someone (info from Neutron) in the same group (info from LDAP).  I don't 
know how useful or efficient this would be, and it's certainly not something 
we've designed Congress for.


I still have some doubts here, though. I know there's some work going
on around policy management - correct me, if I'm wrong - within
keystone. Have you looked into that?



Thoughts?


I know this may not be the right time to raise this, but I'll probably
forget about it later :D

Please consider some kind of local cache within the congress client
library, as opposed to querying the API every single time. Policies
will be accessed a lot, and it may become a performance penalty for
projects relying on congress.

Hope the above makes sense.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Zane Bitter

On 14/11/13 11:29, Renat Akhmerov wrote:

As for the EventScheduler proposal, I think it actually fits the Mistral model
very well. What is described in EventScheduler is basically the ability to
configure webhooks to be called periodically or at a certain time. First
of all, from the very beginning the concept of scheduling has been
considered a very important capability of Mistral. And from Mistral's
perspective, calling a webhook is just a workflow consisting of one task.
In order to simplify consumption of the service we can implement API
methods to work specifically with webhooks in a convenient way (without
providing any workflow definitions using DSL etc.). I have already
suggested before that we could provide API shortcuts for scheduling
individual tasks rather than complex workflows, so this has a closely
related meaning.

In other words, I now tend to think it doesn't make sense to have
EventScheduler as a standalone service.


I tend to agree. What OpenStack doesn't yet have is a language for 
defining actions (Heat explicitly avoids this). This is required to 
define what to do in both a workflow task and a scheduled task and we 
only want to define it once, so workflow-as-a-service and 
cron-as-a-service (yeah, I know, people hate it when I call it that) are 
closely related at that level. Given this, it seems easiest to make the 
latter a feature of the former. I can easily imagine that you'll want to 
include scheduled tasks in your workflows too.


What might be a downside is that sharing a back-end may not be 
technically convenient - one thing we have been reminded of in Heat is 
that a service with timed tasks has to be scaled out in a completely 
different way to a service that avoids them. This may or may not be an 
issue for Mistral, but it could be resolved by having different back-end 
services that communicate over RPC. The front-end API can remain shared 
though.


cheers,
Zane.



What do you think?

Renat

On 13 Nov 2013, at 06:39, Angus Salkeld asalk...@redhat.com wrote:


On 12/11/13 15:13 -0800, Christopher Armstrong wrote:

Given the recent discussion of scheduled autoscaling at the summit session
on autoscaling, I looked into the state of scheduling-as-a-service in and
around OpenStack. I found two relevant wiki pages:

https://wiki.openstack.org/wiki/EventScheduler

https://wiki.openstack.org/wiki/Mistral/Cloud_Cron_details

The first one proposes and describes in some detail a new service and API
strictly for scheduling the invocation of webhooks.

The second one describes a part of Mistral (in less detail) to basically do
the same, except executing taskflows directly.

Here's the first question: should scalable cloud scheduling exist strictly
as a feature of Mistral, or should it be a separate API that only does
event scheduling? Mistral could potentially make use of the event
scheduling API (or just rely on users using that API directly to get it to
execute their task flows).

Second question: if the proposed EventScheduler becomes a real project,
which OpenStack Program should it live under?

Third question: Is anyone actively working on this stuff? :)


Your work mates ;) https://github.com/rackerlabs/qonos

How about merging qonos into Mistral, or at least putting it into stackforge?

-Angus



--
IRC: radix
Christopher Armstrong
Rackspace



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Subteam meeting

2013-11-14 Thread Salvatore Orlando
Also, just to add some pedantry (which you know I am very fond of), it
would be worth sharing the minutes and logs from eavesdrop on openstack-dev
after each subteam meeting.
I know they're available anyway at http://eavesdrop.openstack.org/meetings/ -
but sending an email will be useful to notify the rest of the community about
the progress/direction of each subteam.

Salvatore


On 12 November 2013 19:20, Eugene Nikanorov enikano...@mirantis.com wrote:

 Ok, I'll repost date and time to openstack-dev again.

 Thanks,
 Eugene.


 On Tue, Nov 12, 2013 at 11:15 PM, Stephen Wong s3w...@midokura.comwrote:

 Hi Eugene,

 The LBaaS meeting on #openstack-meeting was previously scheduled on
 Thursdays at 1400 UTC. And indeed it is still listed on
 https://wiki.openstack.org/wiki/Meetings as such, so I believe keeping
 it in that timeslot should be fine.

 - Stephen


 On Tue, Nov 12, 2013 at 7:40 AM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  I agree that it would be better to hold it on a channel with a bot which
  keeps logs.
 
  I just found that the most convenient slots are already taken on both
  openstack-meeting and openstack-meeting-alt.
  1400 UTC is convenient for me, so I'd like to hear other opinions.
 
  Thanks,
  Eugene.
 
 
 
 
  On Tue, Nov 12, 2013 at 7:27 PM, Akihiro Motoki amot...@gmail.com
 wrote:
 
  Hi Eugene,
 
  In my opinion, it is better that the LBaaS meeting be held on
  #openstack-meeting or #openstack-meeting-alt,
  as most OpenStack projects do.
 
  In addition, the information on
  https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting is not
  up-to-date.
  The time is 1400 UTC and the channel is #openstack-meeting.
  I saw someone ask "is there an LBaaS meeting today?" on the
  #openstack-meeting channel several times.
 
  Thanks,
  Akihiro
 
 
  On Wed, Nov 13, 2013 at 12:08 AM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
   Hi neutron and lbaas folks!
  
   We have plenty of work to do for Icehouse, so I suggest we start
   having regular weekly meetings to track our progress.
   Let's meet on #neutron-lbaas on Thursday the 14th at 15:00 UTC.
  
   The agenda for the meeting is the following:
   1. Blueprint list to be proposed for icehouse-1
   2. QA & third-party testing
   3. Dev resources evaluation
   4. Additional features requested by users.
  
   Thanks,
   Eugene.
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] LBaaS Subteam meeting

2013-11-14 Thread Eugene Nikanorov
Sure, no problem.


On Thu, Nov 14, 2013 at 3:07 PM, Salvatore Orlando sorla...@nicira.comwrote:

 Also, just to add some pedantry (which you know I am very fond of), it
 would be the case to share on openstack-dev minutes and logs from eavesdrop
 after each subteam meeting.
 I know they're anyway at http://eavesdrop.openstack.org/meetings/ - but
 sending an email will be useful to notify the rest of community about
 progress/direction for each subteam.

 Salvatore


 On 12 November 2013 19:20, Eugene Nikanorov enikano...@mirantis.comwrote:

 Ok, I'll repost date and time to openstack-dev again.

 Thanks,
 Eugene.


 On Tue, Nov 12, 2013 at 11:15 PM, Stephen Wong s3w...@midokura.comwrote:

 Hi Eugene,

 LBaaS meeting on #openstack-meeting was previously schedule on
 Thursdays 1400UTC. And indeed it is still listed on
 https://wiki.openstack.org/wiki/Meetings as such, so I believe keeping
 it in that timeslot should be fine.

 - Stephen


 On Tue, Nov 12, 2013 at 7:40 AM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  I agree that it would be better to hold it on a channel with a bot
 which
  keeps logs.
 
  I just found that most convenient slots are already taken on both
  openstack-meeting and openstack-meeting-alt.
  14-00 UTC is convenient for me so I'd like to hear other opinions.
 
  Thanks,
  Eugene.
 
 
 
 
  On Tue, Nov 12, 2013 at 7:27 PM, Akihiro Motoki amot...@gmail.com
 wrote:
 
  Hi Eugene,
 
  In my opinion, it is better the LBaaS meeting is held on
  #openstack-meeting or #openstack-meeting-alt
  as most OpenStack projects do.
 
  In addition, information on
  https://wiki.openstack.org/wiki/Meetings#LBaaS_meeting is not
  up-to-date.
  The time is 1400UTC and the channel is #openstack-meeting.
  I saw someone asked is there LBaaS meeting today? on
  #openstack-meeting channel several times.
 
  Thanks,
  Akihiro
 
 
  On Wed, Nov 13, 2013 at 12:08 AM, Eugene Nikanorov
  enikano...@mirantis.com wrote:
   Hi neutron and lbaas folks!
  
   We have a plenty of work to do for the Icehouse, so I suggest we
 start
   having regular weekly meetings to track our progress.
   Let's meet at #neutron-lbaas on Thursday, 14 at 15-00 UTC
  
   The agenda for the meeting is the following:
   1. Blueprint list to be proposed for the icehouse-1
   2. QA  third-party testing
   3. dev resources evaluation
   4. Additional features requested by users.
  
   Thanks,
   Eugene.
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - Vendor specific erros

2013-11-14 Thread Salvatore Orlando
In general, an error state makes sense.
I think you might want to send more details about how this state plugs into
the load balancer state machine, but I reckon it is a generally
non-recoverable state which could be reached from any other state; in that
case it would be a generic enough case which might be supported by all
drivers.

It is good to point out, however, that driver-specific state transitions are,
in my opinion, to be avoided; applications using the Neutron API would become
non-portable, or at least users of the Neutron API would need to be aware
that an entity might have a different state machine from driver to driver,
which I reckon would be bad enough for a developer to decide to switch over
to CloudStack or AWS APIs!

Salvatore

PS: On the last point I am obviously joking, but not so much.



On 12 November 2013 08:00, Avishay Balderman avish...@radware.com wrote:



 Hi

 Some of the DB entities in the LBaaS domain inherit from HasStatusDescription
 (https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.py#L40).

 With this we can set the entity status (ACTIVE, PENDING_CREATE, etc.) and a
 description for the status.
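
 For context, that mixin essentially adds two columns along these lines
 (paraphrased from memory; see the linked source for the authoritative
 definition):

     import sqlalchemy as sa

     class HasStatusDescription(object):
         """Status with description mixin."""
         status = sa.Column(sa.String(16), nullable=False)
         status_description = sa.Column(sa.String(255))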

 There are flows in the Radware LBaaS driver where the driver needs to set
 the entity status to ERROR, and it is able to set the description of the
 error - the description is Radware-specific.

 My question is: does it make sense to do that?

 After all, the tenant is aware of the fact that he works against a Radware
 load balancer - the tenant selected Radware as the lbaas provider in the
 UI.

 Any reason not to do that?



 This is a generic issue/question and does not relate to a specific plugin
 or driver.



 Thanks



 Avishay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-14 Thread Salvatore Orlando
Although IRC requires typing, I find it more inclusive than Webex.
It's easy to have 100s of people in an IRC room, not so easy on a conf call.
Also, people like me whose first language is not English might find it easier
to write & read rather than listen & speak.

Finally, the capability of generating minutes and logs, which are way
easier to browse compared with listening to a recording, is hardly
available with Webex.

The fact that Webex is not a free and open source service is another aspect
to take into account.
I'll now duck before stones start being thrown, as I'm not really the guy
who can play FOSS advocate.

Salvatore


On 13 November 2013 17:46, Collins, Sean (Contractor) 
sean_colli...@cable.comcast.com wrote:

 On Wed, Nov 13, 2013 at 10:20:55AM -0500, Shixiong Shang wrote:
  Thanks a bunch for finalizing the time! Sorry for my ignorance... how do
  we usually run the meeting? On Webex or an IRC channel?

 IRC.

 I'm not opposed to Webex (other teams have used it before) - but it
 would involve more set-up. We'd need to publish recordings,
 so that there is a way for those that couldn't attend to review,
 similar to how the IRC meetings are logged.

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Renat Akhmerov

On 14 Nov 2013, at 18:03, Zane Bitter zbit...@redhat.com wrote:

 What might be a downside is that sharing a back-end may not be technically 
 convenient - one thing we have been reminded of in Heat is that a service 
 with timed tasks has to be scaled out in a completely different way to a 
 service that avoids them. This may or may not be an issue for Mistral, but it 
 could be resolved by having different back-end services that communicate over 
 RPC. The front-end API can remain shared though.

Not sure I’m 100% following here. Could you please provide more details on 
this? Seems to be an important topic to me. Particularly, what did you mean 
when you said “sharing a back-end”? Sharing by which components?

Thanks.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Glance] OSLO update

2013-11-14 Thread Elena Ezhova
Hello all,

I have made several patches that update modules in cinder/openstack/common
from oslo which have not been reviewed for more than a month already. My
colleague has the same problem with her patches in Glance.

Probably it's not a top priority issue, but if oslo is not updated
periodically in small bits it may become a problem in the future. What's
more, it is much easier for a developer if oslo code is consistent in all
projects.

So, I would be grateful if someone reviewed these patches:
https://review.openstack.org/#/c/48272/
https://review.openstack.org/#/c/48273/
https://review.openstack.org/#/c/52099/
https://review.openstack.org/#/c/52101/
https://review.openstack.org/#/c/53114/
https://review.openstack.org/#/c/47581/

Thanks,

Elena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [heat] Custom Flavor creation through Heat

2013-11-14 Thread Steven Hardy
On Thu, Nov 14, 2013 at 08:22:57AM +, Kodam, Vijayakumar (EXT-Tata 
Consultancy Ser - FI/Espoo) wrote:
snip
 Thanks Steve Baker for the information. I am also waiting to hear from Steve 
 Hardy, if keystone trust system will fix the nova flavors admin privileges 
 issue.

So, basically, no.  Trusts only allow you to delegate roles you already
have, so if nova requires admin to create a flavor, and the user creating
the heat stack doesn't have admin, then they can't create a flavor.  Trusts
won't solve this problem, they won't allow users to gain roles they don't
already have.

As Clint has pointed out, if you control the OpenStack deployment, you are
free to modify the policy for any API to suit your requirements - the
policy provided by projects is hopefully a sane set of defaults, but the
whole point of policy.json is that it's configurable.

 One option to control the proliferation of nova flavors is to make them 
 private to the tenant (using flavor-access?) who created them. This provides 
 the needed privacy so that others tenants cannot view them.

This is the first step IMO - the nova flavors aren't scoped per tenant atm,
which will be a big problem if you start creating loads of non-public
flavors via stack templates.

At the moment, you can specify --is-public false when creating a flavor,
but this doesn't really mean that the flavor is private to the user, or
tenant, it just means non-admin users can't see it AFAICT.

So right now, if User1 in Tenant1 does:

nova flavor-create User1Flavor auto 128 10 1 --is-public false

Every user in every tenant will see it via flavor-list --all, if they have
the admin role.

This lack of proper role-based request scoping is an issue throughout
OpenStack AFAICS, Heat included (I'm working on fixing it).

Probably what we need is something like:
- Normal user : Can create a private flavor in a tenant where they
  have the Member role (invisible to any other users)
- Tenant Admin user : Can create public flavors in the tenants where they
  have the admin role (visible to all users in the tenant)
- Domain admin user : Can create public flavors in the domains where they
  have the admin role (visible to all users in all tenants in that domain)

Note the current admin user scope is like the last case, only for the
default domain.

So for now, I'm -1 on adding a heat resource to create flavors, we should
fix the flavor scoping in Nova first IMO.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][object] One question to the resource tracker session

2013-11-14 Thread John Garbutt
On 13 November 2013 23:22, Andrew Laski andrew.la...@rackspace.com wrote:
 On 11/13/13 at 11:12pm, Jiang, Yunhong wrote:

 Hi, Dan Smith and all,
 I noticed the following statement under 'Icehouse tasks' in
 https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetrics

 convert resource tracker to objects
 make resoruce tracker extensible
 no db migrations ever again!!
 extra specs to cover resources - use a name space

 How is it planned to achieve 'no db migrations ever again'?
 Even with objects, we still need to keep resource information in the database.
 When a new resource type is added, either we add a new column to the table,
 or we merge all resource information into a single column as a JSON string
 and parse it in the resource tracker object?


 You're right, it's not really achievable without moving to a schemaless
 persistence model.  I'm fairly certain it was added to be humorous and
 should not be considered an outcome of that session.

But we can avoid most data migrations by adding any required
conversion code into the objects' DB layer once we start using it. But
that might not be what we want.
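
A rough sketch of that pattern (generic Python, not the actual NovaObject API;
the column names are made up for illustration):

    class ResourceInfo(object):
        """Hide a schema change inside the object layer (illustrative)."""

        def __init__(self, memory_mb):
            self.memory_mb = memory_mb

        @classmethod
        def from_db_row(cls, db_row):
            # Older rows store 'memory' in KB; newer rows store 'memory_mb'.
            # The translation lives here, so callers see one consistent
            # representation without a DB data migration.
            if 'memory_mb' in db_row:
                return cls(int(db_row['memory_mb']))
            return cls(int(db_row['memory']) // 1024)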

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Congress: an open policy framework

2013-11-14 Thread Morgan Fainberg
On Wed, Nov 13, 2013 at 10:40 AM, Tim Hinrichs thinri...@vmware.com wrote:
 We're just getting started with Congress and understanding how it will 
 integrate with the OS ecosystem, but here's our current thinking about how 
 Congress relates to Oslo's policy engine and to Keystone.  Comments and 
 suggestions are welcome.


 Congress and Oslo
 
 Three dimensions for comparison: policy language, data sources, and policy 
 engine.

 We've always planned to make Congress compatible with existing policy 
 languages like the one in oslo.  The plan is to build a front-end for a 
 number of policy languages/formats, e.g. oslo-policy language, XACML, JSON, 
 YAML, SQL, etc.  The idea being that the syntax/language you use is 
 irrelevant as long as it can be mapped into Congress's native policy 
 language.  As of now, Congress is using Datalog, which is a variant of SQL 
 and is at least as expressive as all of the policy languages we've run across 
 in the cloud domain, including the oslo-policy language.

 In terms of the data sources you can reference in the policy, Congress is 
 designed to enable policies that reference arbitrary data sources in the 
 cloud.  For example, we could write a Nova authorization policy that permits 
 a new VM to be created if that VM is connected to a network owned by a tenant 
 (info stored in Neutron) where the VM owner (info in the request) is in the 
 same group as the network owner (info stored in Keystone/LDAP).  Oslo
 handles some of these data sources with its terminal rules, but it's not 
 involved in data integration to the same extent Congress is.

 In terms of policy engines, Congress is intended to enforce policies in 2 
 different ways: proactively (stopping policy violations before they occur) 
 and reactively (acting to eliminate a violation after it occurs).  Ideally we 
 wouldn't need reactive enforcement, but there will always be cases where 
 proactive enforcement is not possible (e.g. a DOS attack brings app latencies 
 out of compliance).  The oslo-engine does proactive enforcement 
 only--stopping API calls before they violate the policy.


Does this mean all policy decisions need to ask this new service?
There are many policy checks that occur across even a given action (in
some cases).  Could this have a significant performance implication on
larger scale cloud deployments?  I like the idea of having reactive
(DOS prevention) policy enforcement as well as external (arbitrary)
data to help make policy decisions, but I don't want to see Congress
limited in deployment because large scale clouds get bottlenecked
trying to communicate with it.

 One concrete integration idea would be to treat Congress as a plugin for the 
 oslo-policy engine.  This wouldn't enable say Nova to write policies that 
 take advantage of the expressiveness of Datalog, but it would give us 
 backwards compatibility.

I'm sure that once Congress is available (and ready for prime-time)
this type of mechanism will be mostly used for the transitional
period.


 Congress and Keystone
 --
 I see Keystone as providing two pieces of functionality: authentication and 
 group membership.  Congress has nothing to do with authentication and never 
 will.  Groups, on the other hand, are things we end up defining when writing 
 policies in Congress, so conceptually there's some overlap with Keystone.  I 
 guess Congress could serve as a plugin/data source for Keystone and provide 
 it with the groups defined within the policy.  This would allow a group to be 
 defined using data sources not available to Keystone, e.g. we could define a 
 group as all users who own a VM (info from Nova) connected to a network owned 
 by someone (info from Neutron) in the same group (info from LDAP).  I don't 
 know how useful or efficient this would be, and it's certainly not something 
 we've designed Congress for.

I have a concern about using the generic group terminology.  As it
stands, group is a fairly overloaded and fairly generic term.  If you
have multiple group concepts when it comes to users, it will cause
confusion in discussion / understanding.  This is especially true when
an IdP (Identity Provider) defines a group that has certain rights
associated with it (e.g. can add projects in Keystone) and that name is
somehow overloaded by the policy engine when dealing with other
services (a name conflict), or, even more simply, when readers have to
distinguish Congress groups from the groups that Keystone sees.  While
I am not opposed to a grouping mechanism within the policy engine, I
want to make sure everyone has a clear understanding of the concepts
being described (there was a recent issue with two concepts in Keystone
being called the same thing, and it has been a challenge to unwind
that).

There might be some value in seeing some work being done to provide
more information to Keystone, but I think this will become more
apparent as Congress develops.

 Thoughts?
 Tim



[openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Thierry Carrez
Hi everyone,

I think that we have recently reached critical mass for the
openstack-dev mailing-list, with 2267 messages posted in October, and
November well on its way to pass 2000 again. Some of those are just
off-topic (and I've been regularly fighting against them) but most of
them are just about us covering an ever-increasing scope, stretching the
definition of what we include in openstack development.

Therefore I'd like to propose a split between two lists:

*openstack-dev*: Discussions on future development for OpenStack
official projects

*stackforge-dev*: Discussions on development for stackforge-hosted projects

Non-official OpenStack-related projects would get discussed in
stackforge-dev (or any other list of their preference), while
openstack-dev would be focused on openstack official programs (including
incubated & integrated projects).

That means discussion about Solum, Mistral, Congress or Murano
(stackforge/* repos in gerrit) would now live on stackforge-dev.
Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
on gerrit) would happen on openstack-dev. This will allow easier
filtering and prioritization; OpenStack developers interested in
tracking promising stackforge projects would subscribe to both lists.

That will not solve all issues. We should also collectively make sure
that *usage questions are re-routed* to the openstack general
mailing-list, where they belong. Too many people still answer off-topic
questions here on openstack-dev, which encourages people to be off-topic
in the future (traffic on the openstack general ML has been mostly
stable, with only 868 posts in October). With those actions, I hope that
traffic on openstack-dev would drop back to the 1000-1500 range, which
would be more manageable for everyone.

Thoughts ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Thierry Carrez
Thierry Carrez wrote:
 [...]
 That will not solve all issues. We should also collectively make sure
 that *usage questions are re-routed* to the openstack general
 mailing-list, where they belong. Too many people still answer off-topic
 questions here on openstack-dev, which encourages people to be off-topic
 in the future (traffic on the openstack general ML has been mostly
 stable, with only 868 posts in October). With those actions, I hope that
 traffic on openstack-dev would drop back to the 1000-1500 range, which
 would be more manageable for everyone.

Other suggestion: we could stop posting meeting reminders to -dev (I
know, I'm guilty of it) and only post something if the meeting time
changes, or if the weekly meeting is canceled for whatever reason.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AWS compatibility discussions at today's Nova Meeting

2013-11-14 Thread Rohit.Karajgi
Hi Folks,

As a first follow-up after last week's design summit session,
let's discuss at today's Nova meeting the steps to be taken for putting in
continuous effort towards AWS compatibility in OpenStack.

Nova Meeting: 21:00 UTC
Summit Etherpad: https://etherpad.openstack.org/p/icehouse-aws-compatibility 
https://etherpad.openstack.org/p/icehouse-aws-compatibility

Cheers,
Rohit
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Julien Danjou
On Thu, Nov 14 2013, Thierry Carrez wrote:

 Other suggestion: we could stop posting meeting reminders to -dev (I
 know, I'm guilty of it) and only post something if the meeting time
 changes, or if the weekly meeting is canceled for whatever reason.

Good suggestion.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Julien Danjou
On Thu, Nov 14 2013, Thierry Carrez wrote:

 Thoughts ?

I agree on the need to split, the traffic is getting huge.

As I'd have to subscribe to both openstack-dev and stackforge-dev, that
would not help me personally, but I think it can be an easy and first
step.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Dina Belova
Yeah, that's a big problem... especially when you are trying to keep track of
lots of topics...
I suppose this solution will at least make prioritising messages easier for
developers and everybody who is subscribed to openstack-dev.

Nice idea.


On Thu, Nov 14, 2013 at 9:12 PM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,

 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to pass 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.

 Therefore I'd like to propose a split between two lists:

 *openstack-dev*: Discussions on future development for OpenStack
 official projects

 *stackforge-dev*: Discussions on development for stackforge-hosted projects

 Non-official OpenStack-related projects would get discussed in
 stackforge-dev (or any other list of their preference), while
 openstack-dev would be focused on openstack official programs (including
 incubated & integrated projects).

 That means discussion about Solum, Mistral, Congress or Murano
 (stackforge/* repos in gerrit) would now live on stackforge-dev.
 Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
 on gerrit) would happen on openstack-dev. This will allow easier
 filtering and prioritization; OpenStack developers interested in
 tracking promising stackforge projects would subscribe to both lists.

 That will not solve all issues. We should also collectively make sure
 that *usage questions are re-routed* to the openstack general
 mailing-list, where they belong. Too many people still answer off-topic
 questions here on openstack-dev, which encourages people to be off-topic
 in the future (traffic on the openstack general ML has been mostly
 stable, with only 868 posts in October). With those actions, I hope that
 traffic on openstack-dev would drop back to the 1000-1500 range, which
 would be more manageable for everyone.

 Thoughts ?

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Daniel P. Berrange
On Thu, Nov 14, 2013 at 02:19:24PM +0100, Thierry Carrez wrote:
 Thierry Carrez wrote:
  [...]
  That will not solve all issues. We should also collectively make sure
  that *usage questions are re-routed* to the openstack general
  mailing-list, where they belong. Too many people still answer off-topic
  questions here on openstack-dev, which encourages people to be off-topic
  in the future (traffic on the openstack general ML has been mostly
  stable, with only 868 posts in October). With those actions, I hope that
  traffic on openstack-dev would drop back to the 1000-1500 range, which
  would be more manageable for everyone.
 
 Other suggestion: we could stop posting meeting reminders to -dev (I
 know, I'm guilty of it) and only post something if the meeting time
 changes, or if the weekly meeting is canceled for whatever reason.

Is there somewhere on the website which keeps a record of all regularly
scheduled meetings that people can discover / refer to easily?

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Julien Danjou
On Thu, Nov 14 2013, Daniel P. Berrange wrote:

 Is there somewhere on the website which keeps a record of all regular
 scheduled meetings people can discover / refer to easily ?

It's all on the wiki:

  https://wiki.openstack.org/wiki/Meetings

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Tasks

2013-11-14 Thread George Reese
One critical reason why tasks rather than resource status may be required is
that:

a) The system state may not be sufficient at time of POST/PUT to generate a
“minimum viable resource” and we don’t want to risk timeouts waiting for the
“minimum viable resource”
b) There may be more stuff about the workflow worth tracking beyond simple 
status, including the ability to act on the workflow

Did the new Glance Tasks stuff make it into Havana? If so, I agree with keeping
the new Glance Tasks stuff, because I am violently opposed to breaking
backwards compatibility under any circumstances. If not, I worry the new approach
establishes a precedent that others may mistakenly follow.

Also, as a side note, I think there should be a separate, shared tasks tracking 
component (ooh, something new for “OpenStack core” :)) that is the central 
authority on all task management. Glance (and Nova and everyone else) would 
interact with this component to create and update task status, but clients 
would query against it. It would also have its own API.

That way, a client subsystem could be handed a random task from an arbitrary 
OpenStack component and easily know the semantics for getting information about 
it.
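
As a purely hypothetical illustration (the endpoint and field names are made
up), a client subsystem could then poll any task the same way, no matter which
service created it:

import time
import requests   # assumes python-requests is available; sketch only

def wait_for_task(task_url, token, poll_interval=5):
    # The same loop works for a Glance import, a Nova migration, or
    # anything else, because the task representation is shared.
    while True:
        task = requests.get(task_url,
                            headers={'X-Auth-Token': token}).json()
        if task['status'] in ('success', 'failure'):
            return task
        time.sleep(poll_interval)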

-George

On Nov 14, 2013, at 2:30 AM, Mark Washenberger mark.washenber...@markwash.net 
wrote:

 Responses to both Jay and George inline.
 
 
 On Wed, Nov 13, 2013 at 5:08 PM, Jay Pipes jaypi...@gmail.com wrote:
 Sorry for top-posting, but in summary, I entirely agree with George here. His 
 logic is virtually identical to the concerns I raised with the initial 
 proposal for Glance Tasks here:
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009400.html
 and
 http://lists.openstack.org/pipermail/openstack-dev/2013-May/009527.html
 
 In my understanding, your viewpoints are subtly different.
 
 George seems to agree with representing ongoing asynchronous tasks through a 
 separate 'tasks' resource. I believe where he differs with the current design 
 is how those tasks are created. He seems to prefer creating tasks with POST 
 requests to the affected resources. To distinguish between uploading an image 
 and importing an image, he suggests we require a different content type in 
 the request.
 
 However, your main point in the links above seemed to be to reuse POST 
 /v2/images, but to capture the asynchronous nature of image verification and 
 conversion by adding more nodes to the image state machine.
 
 
 
 Best,
 -jay
 
 
 On 11/13/2013 05:36 PM, George Reese wrote:
 Let’s preface this with Glance being the part of OpenStack I am least
 familiar with. Keep in mind my commentary is related to the idea that
 the asynchronous tasks as designed are being considered beyond Glance.
 The problems of image upload/import/cloning/export are unlike other
 OpenStack operations for the most part in that they involve binary data
 as the core piece of the payload.
 
 Having said that, I’d prefer a polymorphic POST to the tasks API as
 designed.
 
 Thanks. I think we'll move forward with this design for now in Glance. But 
 your alternative below is compelling and we'll definitely consider as we add 
 future tasks. I also want to say that we could probably completely adopt your 
 proposal in the future as long as we also support backwards compatibility 
 with the current design, but I can't predict at this point the practical 
 concerns that will emerge.
  
 But I’m much more concerned with the application of the tasks
 API as designed to wider problems.
 
 I think this concern is very reasonable. Other projects should evaluate your 
 proposal carefully.
  
 
 Basically, I’d stick with POST /images.
 
 The content type should indicate what the server should expect.
 Basically, the content can be:
 
 * An actual image to upload
 * Content describing a target for an import
 * Content describing a target for a clone operation
 
 Implementation needs dictate whether any given operation is synchronous
 or asynchronous. Practically speaking, upload would be synchronous with
 the other two being asynchronous. This would NOT impact an existing
 /images POST as it will not change (unless we suddenly made it
 asynchronous).
 
 The response would be CREATED (synchronous) or ACCEPTED (asynchronous).
 If ACCEPTED, the body would contain JSON/XML describing the asynchronous
 task.
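
Concretely, a client interaction under that proposal might look something like
this (the media type and field names are hypothetical, just to make the flow
visible):

import requests   # sketch only; endpoint and media type are made up

token = '...'
resp = requests.post(
    'http://glance.example.com/v2/images',
    headers={'Content-Type': 'application/openstack-images-import',
             'X-Auth-Token': token},
    data='{"import_from": "swift://container/image.qcow2"}')

if resp.status_code == 202:       # ACCEPTED: asynchronous import started
    task = resp.json()            # body describes the task to poll
elif resp.status_code == 201:     # CREATED: the synchronous upload case
    image = resp.json()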
 
 I’m not sure if export is supposed to export to a target object store or
 export to another OpenStack environment. But it would be an async
 operation either way and should work as described above. Whether the
 endpoint for the image to be exported is the target or just /images is
 something worthy of discussion based on what the actual function of the
 export is.
 
 -George
 
 On Nov 12, 2013, at 5:45 PM, John Bresnahan j...@bresnahan.me wrote:
 
 George,
 
 Thanks for the comments, they make a lot of sense.  There is a Glance
 team meeting on Thursday where we would like to push a bit further on
 this.  

Re: [openstack-dev] [reviews] putting numbers on -core team load

2013-11-14 Thread Christopher Yeoh
On Thu, Nov 14, 2013 at 5:59 PM, Robert Collins
robe...@robertcollins.netwrote:


 Total reviews: 10705 (118.9/day)
 Total reviewers: 406
 Total reviews by core team: 5289 (58.8/day)
 Core team size: 17
 New patch sets in the last 90 days: 7515 (83.5/day)

 This is the really interesting bit. Remembering that every patch needs
 - at minimum - 2 +2's, the *minimum* viable core team review rate to
 keep up is patch sets per day * 2:
 30 days: 132 core reviews/day
 90 days: 167 core reviews/day

 But we're getting:
 30 days: 42/day or 90/day short
 90 days: 59/day or 108/day short

 One confounding factor here is that this counts (AIUI) pushed changes,
 not change ids - so we don't need two +2's for every push, we need two
 +2's for every changeid - we should add that as a separate metric I
 think, as the needed +2 count will be a bit lower.


So I thought that could make quite a difference to the calculations you made
below, so just for fun I added a few more stats (
https://review.openstack.org/#/c/56380/) and got:

Total reviews: 10751 (119.5/day)
Total reviewers: 405
Total reviews by core team: 5312 (59.0/day)
Core team size: 17
New patch sets in the last 90 days: 7501 (83.3/day)
Changes involved in the last 90 days: 1840 (20.4/day)
  New changes in the last 90 days: 1549 (17.2/day)
  Changes merged in the last 90 days: 1120 (12.4/day)
  Changes abandoned in the last 90 days: 395 (4.4/day)
  Changes left in state WIP in the last 90 days: 18 (0.2/day)
  Queue growth in the last 90 days: 16 (0.2/day)
  Average number of patches per changeset: 4.1

So if everyone uploaded perfect changesets we'd only need 40 core reviews
per day :-)
Though in practice it takes on average about 4 tries for Nova. Obviously
some of those updated patchsets are due to automatic feedback from Jenkins,
and a -1 could come from anyone, but in practice cores of course review a
lot of patches in progress rather than just when they're ready. The more
non-cores pick up issues, though, the less that needs to occur.

Queue growth is a derived number which I think is correct, as it's based on
new changes versus ones which merge or are abandoned (but I might be
wrong). It's not necessarily a problem as long as the delay through the
queue does not increase: the queue length is going to grow as a project
becomes more active even if the time taken to get through the queue remains
the same. It also probably changes a bit depending on the exact time slice
of the development period you look at.
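
For anyone who wants to sanity-check the arithmetic, the derived numbers fall
out of the raw stats above roughly like this:

new, merged, abandoned, wip = 1549, 1120, 395, 18     # changes, last 90 days
changes_per_day, patch_sets_per_day = 20.4, 83.3

queue_growth = new - merged - abandoned - wip          # = 16
best_case_core_reviews = 2 * changes_per_day           # ~40/day: two +2s per change
worst_case_core_reviews = 2 * patch_sets_per_day       # ~167/day: two +2s per patch set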

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread mark
This would also have the added benefit of reducing the times people conflate 
related open source projects from stackforge with OpenStack itself. Having 
related oss discussions on a list called OpenStack-Dev may certainly have 
given the wrong impression to the casual observer. 


On Nov 14, 2013 7:12 AM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone, 

 I think that we have recently reached critical mass for the 
 openstack-dev mailing-list, with 2267 messages posted in October, and 
 November well on its way to pass 2000 again. Some of those are just 
 off-topic (and I've been regularly fighting against them) but most of 
 them are just about us covering an ever-increasing scope, stretching the 
 definition of what we include in openstack development. 

 Therefore I'd like to propose a split between two lists: 

 *openstack-dev*: Discussions on future development for OpenStack 
 official projects 

 *stackforge-dev*: Discussions on development for stackforge-hosted projects 

 Non-official OpenStack-related projects would get discussed in 
 stackforge-dev (or any other list of their preference), while 
 openstack-dev would be focused on openstack official programs (including 
 incubated & integrated projects). 

 That means discussion about Solum, Mistral, Congress or Murano 
 (stackforge/* repos in gerrit) would now live on stackforge-dev. 
 Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos 
 on gerrit) would happen on openstack-dev. This will allow easier 
 filtering and prioritization; OpenStack developers interested in 
 tracking promising stackforge projects would subscribe to both lists. 

 That will not solve all issues. We should also collectively make sure 
 that *usage questions are re-routed* to the openstack general 
 mailing-list, where they belong. Too many people still answer off-topic 
 questions here on openstack-dev, which encourages people to be off-topic 
 in the future (traffic on the openstack general ML has been mostly 
 stable, with only 868 posts in October). With those actions, I hope that 
 traffic on openstack-dev would drop back to the 1000-1500 range, which 
 would be more manageable for everyone. 

 Thoughts ? 

 -- 
 Thierry Carrez (ttx) 

 ___ 
 OpenStack-dev mailing list 
 OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [reviews] putting numbers on -core team load

2013-11-14 Thread Christopher Yeoh
On Fri, Nov 15, 2013 at 12:04 AM, Christopher Yeoh cbky...@gmail.com wrote:

 On Thu, Nov 14, 2013 at 5:59 PM, Robert Collins robe...@robertcollins.net
  wrote:
 So if everyone uploaded perfect changesets we'd only need 40 core reviews
 per day :-)


And I should add in terms of changesets that are eventually going to merge
(not be abandoned)
we'd need about 25-30 core reviews/day

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Thierry Carrez
Julien Danjou wrote:
 On Thu, Nov 14 2013, Thierry Carrez wrote:
 
 Thoughts ?
 
 I agree on the need to split, the traffic is getting huge.
 
 As I'd have to subscribe to both openstack-dev and stackforge-dev, that
 would not help me personally, but I think it can be an easy and first
 step.

Personally I would also subscribe to both, but I would not parse them
with the exact same level of attention -- having them land in two
separate folders would certainly help me.

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Sean Dague
On 11/14/2013 08:12 AM, Thierry Carrez wrote:
 Hi everyone,
 
 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to pass 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.
 
 Therefore I'd like to propose a split between two lists:
 
 *openstack-dev*: Discussions on future development for OpenStack
 official projects
 
 *stackforge-dev*: Discussions on development for stackforge-hosted projects
 
 Non-official OpenStack-related projects would get discussed in
 stackforge-dev (or any other list of their preference), while
 openstack-dev would be focused on openstack official programs (including
 incubated & integrated projects).

+1

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Flavio Percoco

On 14/11/13 14:12 +0100, Thierry Carrez wrote:

Hi everyone,

I think that we have recently reached critical mass for the
openstack-dev mailing-list, with 2267 messages posted in October, and
November well on its way to pass 2000 again. Some of those are just
off-topic (and I've been regularly fighting against them) but most of
them are just about us covering an ever-increasing scope, stretching the
definition of what we include in openstack development.

Therefore I'd like to propose a split between two lists:

*openstack-dev*: Discussions on future development for OpenStack
official projects

*stackforge-dev*: Discussions on development for stackforge-hosted projects

Non-official OpenStack-related projects would get discussed in
stackforge-dev (or any other list of their preference), while
openstack-dev would be focused on openstack official programs (including
incubated & integrated projects).

That means discussion about Solum, Mistral, Congress or Murano
(stackforge/* repos in gerrit) would now live on stackforge-dev.
Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
on gerrit) would happen on openstack-dev. This will allow easier
filtering and prioritization; OpenStack developers interested in
tracking promising stackforge projects would subscribe to both lists.

That will not solve all issues. We should also collectively make sure
that *usage questions are re-routed* to the openstack general
mailing-list, where they belong. Too many people still answer off-topic
questions here on openstack-dev, which encourages people to be off-topic
in the future (traffic on the openstack general ML has been mostly
stable, with only 868 posts in October). With those actions, I hope that
traffic on openstack-dev would drop back to the 1000-1500 range, which
would be more manageable for everyone.

Thoughts ?


+1

I'll most likely subscribe to both but I still think splitting them is
the way to go.

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Core pinning

2013-11-14 Thread Tuomas Paappanen

On 13.11.2013 20:20, Jiang, Yunhong wrote:



-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: Wednesday, November 13, 2013 9:57 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Core pinning

On 11/13/2013 11:40 AM, Jiang, Yunhong wrote:


But, from a performance point of view it is better to exclusively
dedicate pCPUs for vCPUs and the emulator. In some cases you may want
to guarantee that only one instance (and its vCPUs) is using certain
pCPUs.  By using core pinning you can optimize instance performance
based on e.g. cache sharing, NUMA topology, interrupt handling, PCI
pass through (SR-IOV) in multi-socket hosts, etc.

My 2 cents. When you talk about the performance point of view, are
you talking about guest performance, or overall performance? Pinning
pCPUs is sure to benefit guest performance, but possibly not overall
performance, especially if the vCPU does not consume 100% of the CPU
resources.

It can actually be both.  If a guest has several virtual cores that both
access the same memory, it can be highly beneficial all around if all
the memory/cpus for that guest come from a single NUMA node on the
host.
   That way you reduce the cross-NUMA-node memory traffic, increasing
overall efficiency.  Alternately, if a guest has several cores that use
lots of memory bandwidth but don't access the same data, you might want
to ensure that the cores are on different NUMA nodes to equalize
utilization of the different NUMA nodes.

I think Tuomas is talking about exclusively dedicating pCPUs to vCPUs; in
that situation, the pCPU can't be shared by other vCPUs anymore. If that vCPU only
costs, say, 50% of the pCPU usage, it's sure to be a waste of overall performance.

As to cross-NUMA-node access, I'd let the hypervisor, instead of the cloud OS,
reduce cross-NUMA access as much as possible.

I'm not against such usage; it's sure to be used in data center virtualization.
I just question whether it's for cloud.



Similarly, once you start talking about doing SR-IOV networking I/O
passthrough into a guest (for SDN/NFV stuff) for optimum efficiency it
is beneficial to be able to steer interrupts on the physical host to the
specific cpus on which the guest will be running.  This implies some
form of pinning.

Still, I think the hypervisor should achieve this, instead of OpenStack.



I think CPU pinning is common in data center virtualization, but I'm not sure
if it's in scope for cloud, which provides computing power, not
hardware resources.

And I think part of your purpose can be achieved through
https://wiki.openstack.org/wiki/CPUEntitlement and
https://wiki.openstack.org/wiki/InstanceResourceQuota . In particular, I
hope a well-implemented hypervisor will avoid needless vCPU migration
if the vCPU is very busy and requires most of the pCPU's computing
capability (I know Xen used to have an issue in its scheduler that
caused frequent vCPU migration long ago).

I'm not sure the above stuff can be done with those.  It's not just
about quantity of resources, but also about which specific resources
will be used so that other things can be done based on that knowledge.

With the above stuff, it ensures the QoS and the compute capability for the 
guest, I think.

--jyh
  

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hi,

thank you for your comments. I am talking about guest performance. We
are using OpenStack for managing Telco cloud applications where guest
performance optimization is needed.
That example where pCPUs are dedicated exclusively to vCPUs is not a
problem. It can be implemented by using scheduling filters, and if you
need that feature you can enable the filter. Without it, pCPUs are
shared in the normal way.


As Chris said, core pinning, e.g. depending on NUMA topology, is
beneficial, and I think it's beneficial with or without exclusive
dedication of pCPUs.
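
For reference, the mechanism this ultimately maps to in libvirt is the
cputune element in the domain XML, roughly:

<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
</cputune>

so on the Nova side it is mostly a question of choosing which host cpuset
to hand to libvirt for each vCPU.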


Regards,
Tuomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Congress: an open policy framework

2013-11-14 Thread Flavio Percoco

On 14/11/13 04:40 -0800, Morgan Fainberg wrote:

On Wed, Nov 13, 2013 at 10:40 AM, Tim Hinrichs thinri...@vmware.com wrote:

We're just getting started with Congress and understanding how it will 
integrate with the OS ecosystem, but here's our current thinking about how 
Congress relates to Oslo's policy engine and to Keystone.  Comments and 
suggestions are welcome.


Congress and Oslo

Three dimensions for comparison: policy language, data sources, and policy 
engine.

We've always planned to make Congress compatible with existing policy languages 
like the one in oslo.  The plan is to build a front-end for a number of policy 
languages/formats, e.g. oslo-policy language, XACML, JSON, YAML, SQL, etc.  The 
idea being that the syntax/language you use is irrelevant as long as it can be 
mapped into Congress's native policy language.  As of now, Congress is using 
Datalog, which is a variant of SQL and is at least as expressive as all of the 
policy languages we've run across in the cloud domain, including the 
oslo-policy language.

In terms of the data sources you can reference in the policy, Congress is 
designed to enable policies that reference arbitrary data sources in the cloud. 
 For example, we could write a Nova authorization policy that permits a new VM 
to be created if that VM is connected to a network owned by a tenant (info 
stored in Neutron) where the VM owner (info in the request) is in the same 
 group as the network owner (info stored in Keystone/LDAP).  Oslo handles some
of these data sources with its terminal rules, but it's not involved in data 
integration to the same extent Congress is.

In terms of policy engines, Congress is intended to enforce policies in 2 
different ways: proactively (stopping policy violations before they occur) and 
reactively (acting to eliminate a violation after it occurs).  Ideally we 
wouldn't need reactive enforcement, but there will always be cases where 
proactive enforcement is not possible (e.g. a DOS attack brings app latencies 
out of compliance).  The oslo-engine does proactive enforcement only--stopping 
API calls before they violate the policy.



Does this mean all policy decisions need to ask this new service?
There are many policy checks that occur across even a given action (in
some cases).  Could this have a significant performance implication on
larger scale cloud deployments?  I like the idea of having reactive
(DOS prevention) policy enforcement as well as external (arbitrary)
data to help make policy decisions, but I don't want to see Congress
limited in deployment because large scale clouds get bottlenecked
trying to communicate with it.


This is exactly what worries me about Congress. I mentioned in my last
email that some kind of 'local' cache managed by the Congress library
is a must to avoid the performance penalty.
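
Even something as simple as a small TTL cache in front of the remote check
would go a long way. A rough sketch (all names made up):

import time

class CachedEnforcer(object):
    def __init__(self, remote_check, ttl=5.0):
        self._remote_check = remote_check   # callable that asks Congress
        self._ttl = ttl
        self._cache = {}

    def enforce(self, rule, target, creds):
        key = (rule,
               tuple(sorted(target.items())),
               tuple(sorted(creds.items())))
        hit = self._cache.get(key)
        if hit is not None and time.time() - hit[1] < self._ttl:
            return hit[0]
        result = self._remote_check(rule, target, creds)
        self._cache[key] = (result, time.time())
        return result

The obvious trade-off is that a cached decision can be stale for up to ttl
seconds, so it only helps for policies where that window is acceptable.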


There might be some value in seeing some work being done to provide
more information to Keystone, but I think this will become more
apparent as Congress develops.


+1

Cheers,
FF

--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [reviews] putting numbers on -core team load

2013-11-14 Thread Daniel P. Berrange
On Thu, Nov 14, 2013 at 08:29:36PM +1300, Robert Collins wrote:
 At the summit there were lots of discussions about reviews, and I made
 the mistake of sending a mail to Russell proposing a few new stats we
 could gather.
 
 I say mistake, because he did so and then some... we now have extra
 info - consider:
 
 http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
 
 There are two new things:
 In each row a new column 'received' - this counts the number of
 incoming reviews to each reviewer. It's use should be obvious - but
 remember that folk who contribute the occasional patch probably don't
 have the context to be doing reviews... those who are contributing
 many patches and getting many incoming reviews however...
 This gives us the philanthropists (or perhaps team supporters...)
 
 |klmitch **   | 1370  19   0 118   986.1% |
 14 ( 11.9%)  |  0 (∞)|
 
 (the bit at the end is a unicode infinity... we'll need to work on that).
 
 And so on :)
 
 Down the bottom of the page:
 
 Total reviews: 2980 (99.3/day)
 Total reviewers: 241
 Total reviews by core team: 1261 (42.0/day)
 Core team size: 17
 New patch sets in the last 30 days: 1989 (66.3/day)
 
 and for 90 days:
 
 Total reviews: 10705 (118.9/day)
 Total reviewers: 406
 Total reviews by core team: 5289 (58.8/day)
 Core team size: 17
 New patch sets in the last 90 days: 7515 (83.5/day)
 
 This is the really interesting bit. Remembering that every patch needs
 - at minimum - 2 +2's, the *minimum* viable core team review rate to
 keep up is patch sets per day * 2:
 30 days: 132 core reviews/day
 90 days: 167 core reviews/day
 
 But we're getting:
 30 days: 42/day or 90/day short
 90 days: 59/day or 108/day short
 
 One confounding factor here is that this counts (AIUI) pushed changes,
 not change ids - so we don't need two +2's for every push, we need two
 +2's for every changeid - we should add that as a separate metric I
 think, as the needed +2 count will be a bit lower.

NB, this analysis seems to assume that we actually /want/ to eventually
approve every submitted patch. There's going to be some portion that are
abandoned or rejected. It is probably a reasonably small portion of the
total so won't change the order of magnitude, but I'd be interested in
stats on how many patches we reject in one way or another.

 Anyhow, to me - this is circling in nicely on having excellent
 information (vs data) on the review status, and from there we can
 start to say 'right, to keep up, Nova needs N core reviewers
 consistently doing Y reviews per day. If Y is something sensible like
 3 or 4, we can work backwards. Using the current figures (which since
 we don't have changeId as a separate count are a little confounded)
 that would give us:
 
 time period    reviews/core/day    core-team-size
 30 days        3                   44
 30 days        4                   33
 30 days        8                   17
 90 days        3                   56
 90 days        4                   42
 90 days        10                  17
 
 Also note that these are calendar days, so no weekends or leave for -core!
 
 What else
 
 in the last 30 days core have done 42% of reviews, in the last 90 days
 49%. So thats getting better.
 
 I know Russell has had concerns about core cohesion in the past, but I
 don't think doing 8 detailed reviews every day including weekends is
 individually sustainable. IMO we badly need more core reviewers
 and that means:
 
  - 20 or so volunteers
  - who step up and do - pick a number - say 3 - reviews a day, every
 work day, like clockwork.
  - and follow up on their reviewed patches to learn what other
 reviewers say, and why
  - until the nova-core team & Russell are happy that they can
 contribute effectively as -core.
 
 Why such a big number of volunteers? Because we need a big number of
 people to spread load, because Nova has a high incoming patch rate.

One concern I have is that this exercise could turn out to be a bit like
building new roads to solve traffic jams. The more roads you build, the
more car usage you trigger. In some ways the limited rate of approvals
can be seen as acting as a natural brake on the rate of patch submissions
getting out of control. I'm not saying we've reached that point currently,
but at some point I think it is wise to ask what is the acceptable rate
of code churn we should try to support as a project.


One of the problems of a project as large as Nova is that it is hard for
one person to be expert at reviewing all areas of the code. As you increase
the size of core review team we have to be careful we don't get a situation
where we have too many patches being reviewed & approved by people who are
not the natural experts in an area. At the same time you don't want people
to be strictly siloed to just one area of the codebase, because you want
people seeing the big picture. If we were 

[openstack-dev] [Mistral] Etherpad for Mistral high level design and 3rd party technologies

2013-11-14 Thread Renat Akhmerov
Hi,

We’ve created an etherpad to start thinking about Mistral main components and 
listing out possible 3rd party technologies to use for their implementation.

https://etherpad.openstack.org/p/MistralDesignAndDependencies

Feel free to share your ideas.

Thanks.


Renat Akhmerov
@ Mirantis Inc.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Version scheme

2013-11-14 Thread Noorul Islam K M

Hello all,

We need to decide on the version scheme that we are going to use.

Monty Taylor said the following in one of the comments for review [1]:

Setting a version here enrolls solum in managing its version in a
pre-release versioning manner, such that non-tagged versions will
indicate that they are leading up to 0.0.1. If that's the model solum
wants to do (similar to the server projects) then I recommend replacing
0.0.1 with 2014.1.0.

Regards,
Noorul

[1] https://review.openstack.org/#/c/56130/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [reviews] putting numbers on -core team load

2013-11-14 Thread Sean Dague
On 11/14/2013 09:03 AM, Daniel P. Berrange wrote:
 On Thu, Nov 14, 2013 at 08:29:36PM +1300, Robert Collins wrote:
 At the summit there were lots of discussions about reviews, and I made
 the mistake of sending a mail to Russell proposing a few new stats we
 could gather.

 I say mistake, because he did so and then some... we now have extra
 info - consider:

 http://russellbryant.net/openstack-stats/nova-reviewers-30.txt

 There are two new things:
 In each row a new column 'received' - this counts the number of
 incoming reviews to each reviewer. It's use should be obvious - but
 remember that folk who contribute the occasional patch probably don't
 have the context to be doing reviews... those who are contributing
 many patches and getting many incoming reviews however...
 This gives us the philanthropists (or perhaps team supporters...)

 |klmitch **   | 1370  19   0 118   986.1% |
 14 ( 11.9%)  |  0 (∞)|

 (the bit at the end is a unicode infinity... we'll need to work on that).

 And so on :)

 Down the bottom of the page:

 Total reviews: 2980 (99.3/day)
 Total reviewers: 241
 Total reviews by core team: 1261 (42.0/day)
 Core team size: 17
 New patch sets in the last 30 days: 1989 (66.3/day)

 and for 90 days:

 Total reviews: 10705 (118.9/day)
 Total reviewers: 406
 Total reviews by core team: 5289 (58.8/day)
 Core team size: 17
 New patch sets in the last 90 days: 7515 (83.5/day)

 This is the really interesting bit. Remembering that every patch needs
 - at minimum - 2 +2's, the *minimum* viable core team review rate to
 keep up is patch sets per day * 2:
 30 days: 132 core reviews/day
 90 days: 167 core reviews/day

 But we're getting:
 30 days: 42/day or 90/day short
 90 days: 59/day or 108/day short

 One confounding factor here is that this counts (AIUI) pushed changes,
 not change ids - so we don't need two +2's for every push, we need two
 +2's for every changeid - we should add that as a separate metric I
 think, as the needed +2 count will be a bit lower.
 
 NB, this analysis seems to assume that we actually /want/ to eventually
 approve every submitted patch. There's going to be some portion that are
 abandoned or rejected. It is probably a reasonably small portion of the
 total so won't change the order of magnitude, but I'd be interested in
 stats on how many patches we reject in one way or another.

IIRC about 2/3 of patches eventually make their way in. Realize that
eventually for cells meant 12 months and 80 iterations. Jeblair had real
numbers on this for a release somewhere.

 Anyhow, to me - this is circling in nicely on having excellent
 information (vs data) on the review status, and from there we can
 start to say 'right, to keep up, Nova needs N core reviewers
 consistently doing Y reviews per day. If Y is something sensible like
 3 or 4, we can work backwards. Using the current figures (which since
 we don't have changeId as a separate count are a little confounded)
 that would give us:

 time period    reviews/core/day    core-team-size
 30 days        3                   44
 30 days        4                   33
 30 days        8                   17
 90 days        3                   56
 90 days        4                   42
 90 days        10                  17

 Also note that these are calendar days, so no weekends or leave for -core!

 What else

 in the last 30 days core have done 42% of reviews, in the last 90 days
 49%. So thats getting better.

 I know Russell has had concerns about core cohesion in the past, but I
 don't think doing 8 detailed reviews every day including weekends is
 individually sustainable. IMO we badly need more core reviewers
 and that means:

  - 20 or so volunteers
  - who step up and do - pick a number - say 3 - reviews a day, every
 work day, like clockwork.
  - and follow up on their reviewed patches to learn what other
 reviewers say, and why
  - until the nova-core team & Russell are happy that they can
 contribute effectively as -core.

 Why such a big number of volunteers? Because we need a big number of
 people to spread load, because Nova has a high incoming patch rate.
 
 One concern I have is that this exercise could turn out to be a bit like
 building new roads to solve traffic jams. The more roads you build, the
 more car usage you trigger. In some ways the limited rate of approvals
 can be seen as acting as a natural brake on the rate of patch submissions
 getting out of control. I'm not saying we've reached that point currently,
 but at some point I think it is wise to ask what is the acceptable rate
 of code churn we should try to support as a project.
 
 
 One of the problems of a project as large as Nova is that it is hard for
 one person to be expert at reviewing all areas of the code. As you increase
 the size of core review team we have to be careful we don't get a situation
 where we have too many patches being 

Re: [openstack-dev] RFC: reverse the default Gerrit sort order

2013-11-14 Thread Steven Hardy
On Thu, Nov 07, 2013 at 12:36:49PM +1300, Robert Collins wrote:
 I've been thinking about review queues recently (since here at the
 summit everyone is talking about reviews! :)).
 
 One thing that struck me today was that Gerrit makes it easier to
 review the newest changes first, rather than the changes that have
 been in the queue longest, or the changes that started going through
 the review process first.
 
 So... is it possible to change the default sort order for Gerrit? How
 hard is it to do - could we do an experiment on that and see if it
 nudges the dial for reviewstats (just make the change, not ask anyone
 to change their behaviour)?

+1 to this idea - currently I think it's far too easy for reviews to drop
off the reviewer radar unreviewed, which is ultimately a fantastic way to
discourage contributors (particularly new ones who don't know who to hound
on IRC..)

I'd like something which sorts in this order:
- New revisions of a patch I've previously reviewed, or patches I've
  reviewed and -1'd where the reviewer has left a new comment
- Oldest remaining patches which don't already have negative feedback
- Everything else, sorted by most recently modified

I agree that having my recently closed reviews on the top-level dashboard
is not really very valuable - I'd rather that space was occupied by reviews
which need my attention.
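
In the meantime individuals can approximate some of that with saved searches,
something along these lines (operator spelling from memory, please check the
Gerrit search docs):

status:open reviewer:self -owner:self
status:open -label:Code-Review<=-1 age:1week

but it would be much better if the default view did the right thing for
everyone out of the box.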

Posting reviews and having them ignored for weeks is a really bad outcome
for both the submitter and project IMO, so I'd love to figure out a way to
avoid it happening on a regular basis.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-14 Thread Anne Gentle
On Wed, Nov 13, 2013 at 9:07 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-11-13 14:15:31 +0100 (+0100), Thierry Carrez wrote:
 [...]
  * Week of April 21
 [...]

 I've already got travel plans this week, so works for me (and
 release week too, though it was scheduled well over a year ago).

  * Week of April 28
 [...]

 I'll be back this week and catching up on things, so I'm happy to be
 the lonely soul keeping the lights on in Infra if needed.


Docs are probably most needy at this time, so I'm glad you can stick
around. :)

I had to remind myself that the release week starts on Thursday.

Just a question, did you consider a week off post-summit? I want to ensure
the tons of questions that come in about the schedule can be answered. I
felt the weeks up to the summit were still quite busy, not just from a docs
perspective, but from a PTL/scheduler/arranger perspective. Thoughts?
Anne




 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Zane Bitter

On 14/11/13 12:26, Renat Akhmerov wrote:


On 14 Nov 2013, at 18:03, Zane Bitter zbit...@redhat.com wrote:


What might be a downside is that sharing a back-end may not be
technically convenient - one thing we have been reminded of in Heat is
that a service with timed tasks has to be scaled out in a completely
different way to a service that avoids them. This may or may not be an
issue for Mistral, but it could be resolved by having different
back-end services that communicate over RPC. The front-end API can
remain shared though.


Not sure I’m 100% following here. Could you please provide more details
on this? Seems to be an important topic to me. Particularly, what did
you mean when you said “sharing a back-end”? Sharing by which components?


If you have a service that is stateless and only responds to user 
requests, then scaling it out is easy (just stick it behind a load 
balancer). If it has state (i.e. a database), things become a whole lot 
more complicated to maintain consistency. And if the application has 
timed tasks as well as incoming requests, that also adds another layer 
of complexity.


Basically you need to ensure that a task is triggered exactly once, in a 
highly-available distributed system (and, per a previous thread, you're 
not allowed to use Zookeeper ;). Your scaling strategy will be more or 
less dictated by this, possibly to the detriment of the rest of your 
service - though in Mistral it may well be the case that you have this 
constraint already. If not then one possible solution to this is to run 
two binaries and have different scaling strategies for each.
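
The usual database-centric trick (sketched below with made-up table and column
names) is to have each scheduler process atomically claim a due task row, so
that only one of N identical workers ends up firing it:

import sqlite3
import time

conn = sqlite3.connect('scheduler.db')
conn.execute("CREATE TABLE IF NOT EXISTS timed_tasks "
             "(id INTEGER PRIMARY KEY, run_at REAL, claimed_by TEXT)")

def claim_one_due_task(worker_id):
    # The single UPDATE is atomic: only one worker can flip claimed_by
    # away from NULL for a given row, so each task fires exactly once.
    cur = conn.execute(
        "UPDATE timed_tasks SET claimed_by = ? "
        "WHERE claimed_by IS NULL AND id = "
        "  (SELECT id FROM timed_tasks "
        "   WHERE claimed_by IS NULL AND run_at <= ? LIMIT 1)",
        (worker_id, time.time()))
    conn.commit()
    return cur.rowcount == 1

That works, but it couples every scheduler process to contention on that one
table, which is exactly how the scaling strategy ends up being dictated.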


None of this should take away from the fact that the two features should 
be part of the same API (this is what I meant by sharing a front-end).


Hopefully that clarifies things :)

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Port mirroring

2013-11-14 Thread Rossella Sblendido
Hello devs,

I'd like to start working on this blueprint:

https://blueprints.launchpad.net/neutron/+spec/port-mirroring

I didn't receive much feedback so please have a look at it.

It has not been approved yet. What's the usual workflow? Shall I wait for
approval before I start implementing it? Sorry for the trivial question, but
I'm new here.

thanks,

Rossella
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance Tasks

2013-11-14 Thread Jay Pipes

On 11/14/2013 08:32 AM, George Reese wrote:

One critical reason why tasks rather than resource status may be
required is that:

a) The system state may not be sufficient at time of POST/PUT to
generate a “minimum viable resource” and we don’t want to risk timeouts
waiting for the “minimum viable resource”
b) There may be more stuff about the workflow worth tracking beyond
simple status, including the ability to act on the workflow


Agreed.

snip


Also, as a side note, I think there should be a separate, shared tasks
tracking component (ooh, something new for “OpenStack core” :)) that is
the central authority on all task management. Glance (and Nova and
everyone else) would interact with this component to create and update
task status, but clients would query against it. It would also have its
own API.

That way, a client subsystem could be handed a random task from an
arbitrary OpenStack component and easily know the semantics for getting
information about it.


There are two related projects in this space so far. Taskflow [1] is a 
library that aims to add structure to the in-process management of 
related tasks. Mistral [2] is a project that aims to provide a 
distributed task scheduling service that may act as the external 
subsystem/proxy you describe above.


Both are under heavy development and my hope is that both projects 
continue to evolve in their distinct ways and offer other OpenStack/open 
source projects different functionalities.


Best,
-jay

[1] https://wiki.openstack.org/wiki/TaskFlow
[2] https://wiki.openstack.org/wiki/Mistral


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Jay Pipes

++

On 11/14/2013 08:37 AM, m...@openstack.org wrote:

This would also have the added benefit of reducing the times people conflate related open 
source projects from stackforge with OpenStack itself. Having related oss discussions on 
a list called OpenStack-Dev may certainly have given the wrong impression to 
the casual observer.


On Nov 14, 2013 7:12 AM, Thierry Carrez thie...@openstack.org wrote:


Hi everyone,

I think that we have recently reached critical mass for the
openstack-dev mailing-list, with 2267 messages posted in October, and
November well on its way to pass 2000 again. Some of those are just
off-topic (and I've been regularly fighting against them) but most of
them are just about us covering an ever-increasing scope, stretching the
definition of what we include in openstack development.

Therefore I'd like to propose a split between two lists:

*openstack-dev*: Discussions on future development for OpenStack
official projects

*stackforge-dev*: Discussions on development for stackforge-hosted projects

Non-official OpenStack-related projects would get discussed in
stackforge-dev (or any other list of their preference), while
openstack-dev would be focused on openstack official programs (including
incubated  integrated projects).

That means discussion about Solum, Mistral, Congress or Murano
(stackforge/* repos in gerrit) would now live on stackforge-dev.
Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
on gerrit) would happen on openstack-dev. This will allow easier
filtering and prioritization; OpenStack developers interested in
tracking promising stackforge projects would subscribe to both lists.

That will not solve all issues. We should also collectively make sure
that *usage questions are re-routed* to the openstack general
mailing-list, where they belong. Too many people still answer off-topic
questions here on openstack-dev, which encourages people to be off-topic
in the future (traffic on the openstack general ML has been mostly
stable, with only 868 posts in October). With those actions, I hope that
traffic on openstack-dev would drop back to the 1000-1500 range, which
would be more manageable for everyone.

Thoughts ?

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [style] () vs \ continuations

2013-11-14 Thread Dolph Mathews
On Wed, Nov 13, 2013 at 6:46 PM, Robert Collins
robe...@robertcollins.netwrote:

 Hi so - in http://docs.openstack.org/developer/hacking/

 it has as bullet point 4:
 Long lines should be wrapped in parentheses in preference to using a
 backslash for line continuation.

 I'm seeing in some reviews a request for () over \ even when \ is
 significantly clearer.

 I'd like us to avoid meaningless reviewer churn here: can we either:
  - go with PEP8 which also prefers () but allows \ when it is better
- and reviewers need to exercise judgement when asking for one or other
  - make it a hard requirement that flake8 detects


+1 for the non-human approach.



 My strong recommendation is to go with PEP8 and exercising of judgement.

 The case that made me raise this is this:
 folder_exists, file_exists, file_size_in_kb, disk_extents = \
 self._path_file_exists(ds_browser, folder_path, file_name)

 Wrapping that in brackets gets this;
 folder_exists, file_exists, file_size_in_kb, disk_extents = (
 self._path_file_exists(ds_browser, folder_path, file_name))


The root of the problem is that it's a terribly named method with a
terrible return value... fix the underlying problem.
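For illustration, a namedtuple return value (all names below are made up)
makes the assignment short enough that the ()-vs-\ question doesn't even
come up at the call site:

    import collections

    PathFileInfo = collections.namedtuple(
        'PathFileInfo',
        ['folder_exists', 'file_exists', 'file_size_in_kb', 'disk_extents'])

    def lookup_path_file(ds_browser, folder_path, file_name):
        # existing lookup logic would go here, returning one named object
        return PathFileInfo(folder_exists=True, file_exists=True,
                            file_size_in_kb=42, disk_extents=[])

    info = lookup_path_file(None, '[ds1] foo', 'bar.vmdk')
    if info.file_exists and info.file_size_in_kb:
        pass  # use info.disk_extents etc.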



 Which is IMO harder to read - double brackets, but no function call,
 and no tuple: it's more ambiguous than \.

 from
 https://review.openstack.org/#/c/48544/15/nova/virt/vmwareapi/vmops.py

 Cheers,
 Rob
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Continue discussing multi-region orchestration

2013-11-14 Thread Bartosz Górski

Hi all,

At the summit in Hong Kong we had a design session where we discussed adding 
multi-region orchestration support to Heat. During the session we had a 
really heated discussion and spent most of the time explaining the 
problem. I think it was a really good starting point and right now more 
people have a better understanding of this problem. I appreciate all the 
suggestions and concerns I got from you. I would like to continue this 
discussion here on the mailing list.


I updated the etherpad after the session. If I forgot about something or 
wrote something that is not right, please feel free to tell me about it.


References:
[1] Blueprint: 
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
[2] Etherpad: 
https://etherpad.openstack.org/p/icehouse-summit-heat-multi-region-cloud

[3] Patch with POC version: https://review.openstack.org/#/c/53313/


Best,
Bartosz Górski
NTTi3


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Anne Gentle
On Thu, Nov 14, 2013 at 7:12 AM, Thierry Carrez thie...@openstack.orgwrote:

 Hi everyone,

 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to pass 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.

 Therefore I'd like to propose a split between two lists:

 *openstack-dev*: Discussions on future development for OpenStack
 official projects

 *stackforge-dev*: Discussions on development for stackforge-hosted projects

 Non-official OpenStack-related projects would get discussed in
 stackforge-dev (or any other list of their preference), while
 openstack-dev would be focused on openstack official programs (including
 incubated  integrated projects).

 That means discussion about Solum, Mistral, Congress or Murano
 (stackforge/* repos in gerrit) would now live on stackforge-dev.
 Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
 on gerrit) would happen on openstack-dev. This will allow easier
 filtering and prioritization; OpenStack developers interested in
 tracking promising stackforge projects would subscribe to both lists.

 That will not solve all issues. We should also collectively make sure
 that *usage questions are re-routed* to the openstack general
 mailing-list, where they belong. Too many people still answer off-topic
 questions here on openstack-dev, which encourages people to be off-topic
 in the future (traffic on the openstack general ML has been mostly
 stable, with only 868 posts in October). With those actions, I hope that
 traffic on openstack-dev would drop back to the 1000-1500 range, which
 would be more manageable for everyone.

 Thoughts ?


Sounds good.



 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Sergey Lukjanov
+1, agreed.

Personally, I’ll subscribe to both lists but I think it really could help to 
prioritize emails.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Nov 14, 2013, at 5:12 PM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,
 
 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to pass 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.
 
 Therefore I'd like to propose a split between two lists:
 
 *openstack-dev*: Discussions on future development for OpenStack
 official projects
 
 *stackforge-dev*: Discussions on development for stackforge-hosted projects
 
 Non-official OpenStack-related projects would get discussed in
 stackforge-dev (or any other list of their preference), while
 openstack-dev would be focused on openstack official programs (including
 incubated  integrated projects).
 
 That means discussion about Solum, Mistral, Congress or Murano
 (stackforge/* repos in gerrit) would now live on stackforge-dev.
 Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
 on gerrit) would happen on openstack-dev. This will allow easier
 filtering and prioritization; OpenStack developers interested in
 tracking promising stackforge projects would subscribe to both lists.
 
 That will not solve all issues. We should also collectively make sure
 that *usage questions are re-routed* to the openstack general
 mailing-list, where they belong. Too many people still answer off-topic
 questions here on openstack-dev, which encourages people to be off-topic
 in the future (traffic on the openstack general ML has been mostly
 stable, with only 868 posts in October). With those actions, I hope that
 traffic on openstack-dev would drop back to the 1000-1500 range, which
 would be more manageable for everyone.
 
 Thoughts ?
 
 -- 
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit outcomes

2013-11-14 Thread Steven Hardy
On Wed, Nov 13, 2013 at 10:04:04AM -0600, Dolph Mathews wrote:
 I guarantee there's a few things I'm forgetting, but this is my collection
 of things we discussed at the summit and determined to be good things to
 pursue during the icehouse timeframe. The contents represent a high level
 mix of etherpad conclusions and hallway meetings.
 
 https://gist.github.com/dolph/7366031

Looks good, but I have some feedback on items which were discussed (either
in the delegation session or in the hallway with ayoung/jlennox), and are
high priority for Heat, I don't see these captured in the page above:

Delegation:
- Need a way to create a secret derived from a trust (natively, not via
  ec2tokens extension), and it needs to be configurable such that it
  won't expire, or has a very long expiry time. ayoung mentioned a
  credential mechanism, but I'm not sure which BP he was referring to, so
  clarification appreciated.

Client:
- We need a way to get the role-tenant pairs (not just the tenant-less
  role list) into the request context, so we can correctly scope API
  requests.  I raised this bug:

  https://bugs.launchpad.net/python-keystoneclient/+bug/1250982

  Related to this thread (reminder - which you said you'd respond to ;):

  http://lists.openstack.org/pipermail/openstack-dev/2013-November/018201.html

  This topic came up again today related to tenant-scoped nova flavors:

  http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html

  Closely related to this bug I think:

  https://bugs.launchpad.net/keystone/+bug/968696

  I'd welcome discussion on how we solve the request-scoping issue
  openstack-wide, currently I'm thinking we need the role-tenant pairs (and
  probably role-domain pairs) in the request context, so we can correctly
  filter in the model_query when querying the DB while servicing the
  API request.
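For illustration only (a strawman, not an agreed format), the shape of the
data I have in mind is role assignments carried as (role, scope) pairs
rather than a bare list of role names:

    # strawman only -- the keys and values here are hypothetical
    request_context = {
        'user_id': 'u-123',
        'roles': [
            {'role': 'admin', 'project_id': 'tenant-a'},
            {'role': 'member', 'domain_id': 'domain-x'},
        ],
    }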

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Using AD for keystone authentication only

2013-11-14 Thread Dolph Mathews
You can assign roles to users in keystoneclient ($ keystone help
user-role-add) -- the assignment would be persisted in SQL. openstackclient
supports assignments to groups as well if you switch to
--identity-api-version=3
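For example, roughly the following (exact flag spellings differ between
client releases, so treat this as illustrative and check the help output):

  $ keystone user-role-add --user-id <user-id-from-AD> \
                           --role-id <role-id> --tenant-id <tenant-id>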

On Wed, Nov 13, 2013 at 3:08 PM, Avi L aviost...@gmail.com wrote:

 Oh ok, so in this case how does the Active Directory user get an id, and
 how do you map the user to a role? Is there any example you can point me
 to?


 On Wed, Nov 13, 2013 at 11:24 AM, Dolph Mathews 
 dolph.math...@gmail.comwrote:

 Yes, that's the preferred approach in Havana: Users and Groups via LDAP,
 and everything else via SQL.


 On Wednesday, November 13, 2013, Avi L wrote:

 Hi,

 I understand that the LDAP provider in keystone can be used for
  authenticating a user (i.e. validating username and password), and it also
  authorizes the user against roles and tenants. However this requires AD schema
 modification. Is it possible to use AD only for authentication and then use
 keystone's native database for roles and tenant lookup? The advantage is
 that then we don't need to touch the enterprise AD installation.

 Thanks
 Al



 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
In fact, there is a blueprint which would enable supporting this scenario 
without partitioning -- 
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement 
The idea is to annotate flavors with CPU allocation guarantees, and enable 
differentiation between instances, potentially running on the same host.
The implementation is augmenting the CoreFilter code to factor in the 
differentiation. Hopefully this will be out for review soon.
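To give a rough idea of the direction (a sketch only, not the actual patch;
the 'quota:cpu_guarantee' extra-spec key below is hypothetical):

    from nova.scheduler import filters

    class GuaranteeAwareCoreFilter(filters.BaseHostFilter):
        """Sketch: honour a per-flavor CPU guarantee from extra_specs."""

        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type', {})
            extra_specs = instance_type.get('extra_specs', {})
            # 1.0 would mean fully dedicated vCPUs, 0.0 pure best-effort
            guarantee = float(extra_specs.get('quota:cpu_guarantee', 0.0))

            requested = instance_type.get('vcpus', 1) * guarantee
            free = host_state.vcpus_total - host_state.vcpus_used
            return free >= requested

Instances whose flavors carry no guarantee would keep today's over-commit
behaviour, so both kinds of workload can share the same hosts.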

Regards,
Alex





From:   John Garbutt j...@johngarbutt.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   14/11/2013 04:57 PM
Subject:Re: [openstack-dev] [nova] Configure overcommit policy



On 13 November 2013 14:51, Khanh-Toan Tran
khanh-toan.t...@cloudwatt.com wrote:
 Well, I don't know what John means by modify the over-commit 
calculation in
 the scheduler, so I cannot comment.

I was talking about this code:
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L64


But I am not sure that's what you want.

 The idea of choosing free host for Hadoop on the fly is rather 
complicated
 and contains several operations, namely: (1) assuring the host never gets
 past 100% CPU load; (2) identifying a host that already has a Hadoop VM
 running on it, or already 100% CPU commitment; (3) releasing the host 
from
 100% CPU commitment once the Hadoop VM stops; (4) possibly avoiding 
other
 applications to use the host (to economy the host resource).

 - You'll need (1) because otherwise your Hadoop VM would come short of
 resources after the host gets overloaded.
 - You'll need (2) because you don't want to restrict a new host while 
one of
 your 100% CPU committed hosts still has free resources.
 - You'll need (3) because otherwise your host would be forever
restricted,
 and that is no longer on the fly.
 - You'll may need (4) because otherwise it'd be a waste of resources.

 The problem of changing CPU overcommit on the fly is that when your 
Hadoop
 VM is still running, someone else can add another VM in the same host 
with a
 higher CPU overcommit (e.g. 200%), (violating (1) ) thus effecting your
 Hadoop VM also.
 The idea of putting the host in the aggregate can give you (1) and (2). 
(4)
 is done by AggregateInstanceExtraSpecsFilter. However, it does not give 
you
 (3); which can be done with pCloud.

Step 1: use flavors so nova can tell between the two workloads, and
configure them differently

Step 2: find capacity for your workload given your current cloud usage

At the moment, most of our solutions involve reserving bits of your
cloud capacity for different workloads, generally using host
aggregates.

The issue with claiming back capacity from other workloads is a bit
trickier. The issue is I don't think you have defined where you get
that capacity back from? Maybe you want to look at giving some
workloads a higher priority over the constrained CPU resources? But
you will probably starve the little people out at random, which seems
bad. Maybe you want to have a concept of spot instances where they
can use your spare capacity until you need it, and you can just kill
them?

But maybe I am misunderstanding your use case, it's not totally clear to
me.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Alembic or SA Migrate (again)

2013-11-14 Thread Jay Pipes

On 11/13/2013 05:31 AM, Roman Podoliaka wrote:

2) run tests on MySQL/PostgreSQL instead (we have a proof-of-concept
implementation here [2])

The latter approach has a few problems though:
1) even if tests are run in parallel and each test case is run within
a single transaction with ROLLBACK in the end (read fast cleanups),
running of the whole test suite on MySQL will take much longer (for
PostgreSQL we can use fsync = off to speed up things a lot, though)


When I was developing on the Drizzle database system, we used the 
excellent libeatmydata library from Stewart Smith to speed up testing. 
Basically, libeatmydata redirects fsync() and friends to be a no-op, 
speeding up SQL-based tests (on any platform) by 30-50%.


https://www.flamingspork.com/projects/libeatmydata/

I think we could fairly easily have eatmydata installed on the builders 
that run the migration tests, and that would significantly improve test 
runtimes and would have the benefit of working on many different 
platforms without having to ensure the database system is configured in 
any particular way...
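Usage would be along the lines of (the exact package and library path vary
by distro):

  $ eatmydata mysqld_safe
  # or, preloading explicitly:
  $ LD_PRELOAD=libeatmydata.so mysqld_safe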


Best,
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tracking development of scenario tests

2013-11-14 Thread Zhi Kun Liu
+1, This is a great idea.  We could consider it as a general process for
all tests.


2013/11/14 Koderer, Marc m.kode...@telekom.de

 Hi all,

 I think we have quite the same issue with the neutron testing. I already
 put it on the agenda for the QA meeting for today.
 Let's make it to a general topic.

 Regards
 Marc
 
 From: Giulio Fidente [gfide...@redhat.com]
 Sent: Thursday, November 14, 2013 6:23 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [qa] Tracking development of scenario tests

 On 11/14/2013 12:24 AM, David Kranz wrote:
  1. Developer checks in the zeroth version of a scenario test as work in
  progress. It contains a description of the test, and possibly work
  items.  This will claim the area of the proposed scenario to avoid
  duplication and allow others to comment through gerrit.
  2. The developer pushes new versions, removing work in progress if the
  code is in working state and a review is desired and/or others will be
  contributing to the scenario.
  3. When finished, any process-oriented content such as progress tracking
  is removed and the test is ready for final review.

 +1 , the description will eventually contribute to documenting the
 scenarios

 yet the submitter (step 1) remains in charge of adding the reviewers to
 the draft

 how about we map at least one volunteer to each service (via the HACKING
 file) and ask submitters to add such a person as a reviewer of their drafts
 when the tests touch the service? This should help avoid test duplication.

 I very much like the idea of using gerrit for this
 --
 Giulio Fidente
 GPG KEY: 08D733BA | IRC: giulivo

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-14 Thread Mohammad Banikazemi

Kyle,

Thank you for organizing this.

I think the original email you sent out did not solicit any comments
(except for possibly proposing a different time for the weekly meetings).
So that is probably why you have not heard from anybody (including me). So
we are ready to have the meeting but if the consensus is that people need
more time to prepare that is fine too.
I think we need to set an agenda for our meeting (similar to what you do
for the ML2 calls) so we have a better idea of what we need to do during
the meeting. In the proposal, we have identified new object resources.
Should we start making those definitions and their relationships with other
objects more precise? Just a suggestion.

Thanks,

Mohammad




From:   Kyle Mestery (kmestery) kmest...@cisco.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:   11/13/2013 01:09 PM
Subject:Re: [openstack-dev] [neutron] Group-based Policy Sub-team
Meetings



On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
 wrote:

 Hi Kyle,

So no meeting this Thursday?

I am inclined to skip this week's meeting due to the fact I haven't heard
many
replies yet. I think a good starting point for people would be to review
the
BP [1] and Design Document [2] and provide feedback where appropriate.
We should start to formalize what the APIs will look like at next week's
meeting,
and the Design Document has a first pass at this.

Thanks,
Kyle

[1]
https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction

[2]
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing


 Thanks,
 - Stephen

 On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
 kmest...@cisco.com wrote:
 On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel)
manuel.st...@alcatel-lucent.com wrote:

 Kyle,

 I'm afraid your meeting vanished from the Meetings page [2] when user
amotiki reworked neutron meetings ^.^
 Is the meeting for Thu 1600 UTC still on?

 Ack, thanks for the heads up here! I have re-added the meeting. I only
heard
 back from one other person other than yourself, so at this point I'm
inclined
 to wait until next week to hold our first meeting unless I hear back
from others.

 A few heads-up questions (couldn't attend the HK design summit Friday
meeting):

 1) In the summit session Etherpad [3], ML2 implementation mentions
insertion of arbitrary metadata to hint to underlying implementation. Is
that (a) the plug-in reporting its policy-bound realization? (b) the user
further specifying what should be used? (c) both? Or (d) none of that but
just some arbitrary message of the day?

 I believe that would be (a).

 2) Would policies _always_ map to the old Neutron entities?
 E.g. when I have policies in place, can I query related network/port,
subnet/address, router elements on the API or are there no equivalents
created? Would the logical topology created under the policies be exposed
otherwise? for e.g. monitoring/wysiwyg/troubleshoot purposes.

 No, this is up to the plugin/MechanismDriver implementation.

 3) Do the chain identifier in your policy rule actions match to
Service Chain UUID in Service Insertion, Chaining and API [4]

 That's one way to look at this, yes.

 4) Are you going to describe L2 services the way group policies work? I
mean, why would I need a LoadBalancer or Firewall instance before I can
insert it between two groups when all that load balancing/firewalling
requires is nothing but a policy for group communication itself? -
regardless the service instance used to carry out the service.

 These are things I'd like to discuss at the IRC meeting each week. The
goal
 would be to try and come up with some actionable items we can drive
towards
 in both Icehouse-1 and Icehouse-2. Given how close the closing of
Icehouse-1
 is, we need to focus on this very fast if we want to have a measurable
impact in
 Icehouse-1.

 Thanks,
 Kyle


 Best, Manuel

 [2]
https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting

 [3]
https://etherpad.openstack.org/p/Group_Based_Policy_Abstraction_for_Neutron
 [4]
https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#


 -Original Message-
 From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
 Sent: Montag, 11. November 2013 19:41
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [neutron] Group-based Policy
 Sub-team Meetings

 Hi folks! Hope everyone had a safe trip back from Hong Kong.
 Friday afternoon in the Neutron sessions we discussed the
 Group-based Policy Abstraction BP [1]. It was decided we
 would try to have a weekly IRC meeting to drive out further
 requirements with the hope of coming up with a list of
 actionable tasks to begin working on by December.
 I've tentatively set the meeting [2] for Thursdays at 1600
 UTC on the #openstack-meeting-alt IRC channel. 

Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Chmouel Boudjnah
On Thu, Nov 14, 2013 at 2:22 PM, Julien Danjou jul...@danjou.info wrote:

  Other suggestion: we could stop posting meeting reminders to -dev (I
  know, I'm guilty of it) and only post something if the meeting time
  changes, or if the weekly meeting is canceled for whatever reason.

 Good suggestion.


Or this can be moved to the announcement list?

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Romain Hardouin
Good idea.

-romain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Version scheme

2013-11-14 Thread Murali Allada
I'm not a big fan of using date information in the version number. Is there an 
advantage to doing that? Using a model like 0.0.1 makes it easier to 
communicate.

A better approach might be to use  Major.Minor.Revision.Build. If we want to 
use dates, Year.Month.Day.Build  or Year.Minor.Revision.Build might be a better 
approach. Do any openstack projects use the build number in the version? or is 
there a way for the build process to insert the build number in there?

Thanks,
Murali




On Nov 14, 2013, at 8:23 AM, Noorul Islam K M 
noo...@noorul.com
 wrote:


Hello all,

We need to decide on version scheme that we are going to use.

Monty Taylor said the following in one of the comments for review [1]:

Setting a version here enrolls solum in managing its version in a
pre-release versioning manner, such that non-tagged versions will
indicated that they are leading up to 0.0.1. If that's the model solum
wants to do (similar to the server projects) then I recommend replacing
0.0.1 with 2014.1.0.

Regards,
Noorul

[1] https://review.openstack.org/#/c/56130/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral] EventScheduler vs Mistral scheduling

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 4:29 AM, Renat Akhmerov rakhme...@mirantis.comwrote:

 As for the EventScheduler proposal, I think it actually fits the Mistral model
 very well. What is described in EventScheduler is basically the ability to
 configure webhooks to be called periodically or at a certain time. First of
 all, from the very beginning the concept of scheduling has been considered
 a very important capability of Mistral. And from Mistral perspective
 calling a webhook is just a workflow consisting of one task. In order to
 simplify consumption of the service we can implement API methods to work
 specifically with webhooks in a convenient way (without providing any
 workflow definitions using DSL etc.). I have already suggested before that
 we can provide API shortcuts for scheduling individual tasks rather than
 complex workflows so it has an adjacent meaning.

 In other words, I now tend to think it doesn’t make sense to have
 EventScheduler as a standalone service.

 What do you think?


I agree that I don't think it makes sense to have a whole new project just
for EventScheduler. Mistral seems like a pretty good fit. Convenience APIs
similar to the EventScheduler API for just saying run this webhook on this
schedule would be nice, too, but I wouldn't raise a fuss if they didn't
exist and I had to actually define a trivial workflow.
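For the sake of discussion, such a shortcut could look something like this
(purely hypothetical -- not an existing Mistral or EventScheduler API, just
to make the shape of the request concrete):

  POST /v1/schedules
  {
      "webhook": "https://example.com/hooks/nightly-backup",
      "pattern": "0 2 * * *"
  }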

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] External authentication

2013-11-14 Thread Álvaro López García
Hi all,

During the review of [1] I had a look at the tests that are related
with external authentication (i.e. the usage of REMOTE_USER) in
Keystone and I realised that there is a bunch of them that are setting 
external as one of the authentication methods. However, in
keystone.auth.controllers there is an explicit call to the external
methods whenever REMOTE_USER is set [2].

Should we call the external authentication only when external is set
(i.e. in [3]) regardless of the REMOTE_USER presence in the context?
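To make the question concrete, the behaviour I am proposing would be roughly
the following (a sketch only -- the method and variable names are
approximations, see [2] and [3] for the real code):

    # proposed: only consult REMOTE_USER when 'external' was requested
    method_names = auth_info.get_method_names()
    if 'external' in method_names:
        remote_user = context['environment'].get('REMOTE_USER')
        if remote_user:
            # hand off to the external auth plugin, as happens today
            # whenever REMOTE_USER is present
            pass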

[1] https://review.openstack.org/#/c/50362/
[2] 
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L335
[3] 
https://github.com/openstack/keystone/blob/master/keystone/auth/controllers.py#L342
-- 
Álvaro López García  al...@ifca.unican.es
Instituto de Física de Cantabria http://alvarolopez.github.io
Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
Avda. de los Castros s/n
39005 Santander (SPAIN)
_
Unix never says `please.' -- Rob Pike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova image list | Error 401

2013-11-14 Thread Ben Nemec

On 2013-11-14 04:41, Santosh Kumar wrote:

Hi Experts,

I am following the Havana guide for the installation of a three-node setup.

After installing all the nova services (nova-api, nova-cert,
nova-scheduler, nova-consoleauth, nova-novncproxy), when I tried to verify
everything with # nova image-list, it gives 401 (unauthorized).


From the nova-api log, I can see that keystone.middleware.authtoken is
rejecting the request because of an invalid auth token.

Any pointer for the same.

Regards
Santosh


This sounds like a question for the openstack list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Openstack-dev is not for usage questions.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Version scheme

2013-11-14 Thread Monty Taylor


On 11/14/2013 10:36 AM, Murali Allada wrote:
 I'm not a big fan of using date information in the version number. Is
 there an advantage to doing that? Using a model like 0.0.1 makes it
 easier to communicate.
 
 A better approach might be to use  *Major.Minor.Revision.Build*. If we
 want to use dates, *Year.Month.Day.Build  or
 Year.**Minor.Revision.Build *might be a better approach.. Do any
 openstack projects use the build number in the version? or is there a
 way for the build process to insert the build number in there?

To be clear, this isn't really a call to design a versioning scheme from
scratch - there are two schemes currently in use, and solum should use
one of them.

The main reason to do 2014.1.0 is to align with OpenStack, so it depends
on intent a little bit. The advantage to the Year.Minor.Revision is
that, since OpenStack is on a date-based release cycle, it communicates
that fact.

The main reason to do a semver style Major.Minor.Patch scheme is to
communicate api changes across releases. This is the reason we release
our libraries using that scheme.

In terms of mechanics, the way it works for both schemes is that the
version produced is based on git tags. If a revision is tagged, that is
the version that is produced in the tarball.

If a version is NOT tagged, there are two approaches.

Since the date-based versions have a predictable next version, we have
intermediary versions marked as leading up to that version.
Specifically, the form is:

%(version_in_setup_cfg)s.dev%(num_revisions_since_last_tag)s.g%(git_short_sha)s

the dev prefix is a PEP440 compliant indication that this is a
development version that is leading towards the version indicated.

For semver-based versions, intermediary versions are marked as following
the previous release. So we get:

%(most_recent_tag)s.%(num_revisions_since_last_tag)s.g%(git_short_sha)s

I would honestly recommend aligning with OpenStack and putting 2014.1.0
into the setup.cfg version line for solum itself and doing date-based
releases. For python-solumclient, since it's a library, I recommend not
listing a version in setup.cfg and doing semver-based versions. This way
you'll be operating in the same way as the rest of the project.
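To make that concrete with made-up numbers: with version = 2014.1.0 in
setup.cfg, 27 commits since the last tag and short sha 1a2b3c4, you would
get something like:

  server (pre-release style):  2014.1.0.dev27.g1a2b3c4
  library last tagged 0.1.2:   0.1.2.27.g1a2b3c4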

 
 On Nov 14, 2013, at 8:23 AM, Noorul Islam K M noo...@noorul.com
  wrote:
 

 Hello all,

 We need to decide on version scheme that we are going to use.

 Monty Taylor said the following in one of the comments for review [1]:

 Setting a version here enrolls solum in managing its version in a
 pre-release versioning manner, such that non-tagged versions will
 indicated that they are leading up to 0.0.1. If that's the model solum
 wants to do (similar to the server projects) then I recommend replacing
 0.0.1 with 2014.1.0.

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/56130/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of Markdown)
and provides schemas for inputs and outputs using JSON-Schema. The source
document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)

Please read and comment :)


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][keystone] APIs, roles, request scope and admin-ness

2013-11-14 Thread Dolph Mathews
On Sat, Nov 2, 2013 at 11:06 AM, Steven Hardy sha...@redhat.com wrote:

 Hi all,

 Looking to start a wider discussion, prompted by:
 https://review.openstack.org/#/c/54651/
 https://blueprints.launchpad.net/heat/+spec/management-api
 https://etherpad.openstack.org/p/heat-management-api

 Summary - it has been proposed to add a management API to Heat, similar in
 concept to the admin/public API topology used in keystone.


 I'm concerned that this may not be a pattern we want to propagate
 throughout
 OpenStack, and that for most services, we should have one API to access
 data,
 with the scope of the data returned/accessible defined by the roles held by
 the user (ie making proper use of the RBAC facilities afforded to us via
 keystone).


Agree with the concern; Identity API v3 abandons this topology in favor of
more granular access controls (policy.json) on a single API.

From an HTTP perspective, API responses should vary according to the token
used to access the API. Literally,

  Vary: X-Auth-Token

in HTTP headers.



 In the current PoC patch, a users admin-ness is derived from the fact that
 they are accessing a specific endpoint, and that policy did not deny them
 access to that endpoint.  I think this is wrong, and we should use keystone
 roles to decide the scope of the request.


++ (although use of the word scope here is dangerous, as I think you mean
something different from the usual usage?)



 The proposal seems to consider tenants as the top-level of abstraction,
 with
 the next level up being a global service provider admin, but this does not
 consider the keystone v3 concept of domains [1]


v3 also allows domain-level roles to be inherited to all projects owned by
that domain, so in effect-- it does (keystone just takes care of it).


 , or that you may wish to
 provide some of these admin-ish features to domain-admin users (who will
 adminster data accross multiple tenants, just like has been proposed), via
 the
 public-facing API.

 It seems like we need a way of scoping the request (via data in the
 context),
 based on a heirarchy of admin-ness, like:

 1. Normal user


I assume normal user has some non-admin role on a project/tenant.


 2. Tenant Admin (has admin role in a tenant)
 3. Domain Admin (has admin role in all tenants in the domain)


As mentioned above, keystone provides a solution to this already that other
projects don't need to be aware of.


 4. Service Admin (has admin role everywhere, like admin_token for keystone)


admin_token is a role-free, identity-free hack. With v3, it's only
necessary for bootstrapping keystone if you're not backing to an existing
identity store, and can be removed after that.



 The current is_admin flag which is being used in the PoC patch won't
 allow
 this granularity of administrative boundaries to be represented, and
 splitting
 admin actions into a separate API will prevent us providing tenant and
 domain
 level admin functionality to customers in a public cloud environment.


admin should not be a binary thing -- in the real world it's much more
blurry. Users have a finite set of roles/attributes, some of which can be
delegated, and those roles/attributes grant the user different sets of
capabilities.



 It has been mentioned that in keystone, if you have admin in one tenant,
 you
 are admin everywhere, which is a pattern I think we should not follow


Good! We're working towards eliminating that, but it's been a long, slow
road. Deprecating v2 is one next step in that direction. Building a more
powerful policy engine is another. Considering identity management as out
of scope is yet another.


 keystone folks, what are your thoughts in terms of roadmap to make role
 assignment (at the request level) scoped to tenants rather than globally
 applied?


That's how all role assignments behave today, except for the magical
admin role in keystone where the scope is completely ignored. Because
keystone doesn't manage resources that are owned by tenants/projects
like the bulk of OpenStack does (identity management especially).


 E.g what data can we add to move from X-Roles in auth_token, to
 expressing roles in multiple tenants and domains?


Tokens can only be scoped to a single project or domain, so that's your
mapping. All X-Roles apply to the X-Project or X-Domain in context. I don't
think we have a good roadmap to support a single authenticated request with
multi-project authorization. The best solution I have is to pass an
unscoped token that can be rescoped to two or more projects as needed.
Trust-based tokens are explicitly scoped already.



 Basically, I'm very concerned that we discuss this, get a clear roadmap
 which
 will work with future keystone admin/role models, and is not a short-term
 hack
 which we won't want to maintain long-term.

 What are peoples thoughts on this?

 [1]: https://wiki.openstack.org/wiki/Domains

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Solum] Version scheme

2013-11-14 Thread Clayton Coleman


- Original Message -
 
 
 On 11/14/2013 10:36 AM, Murali Allada wrote:
  I'm not a big fan of using date information in the version number. Is
  there an advantage to doing that? Using a model like 0.0.1 makes it
  easier to communicate.
  
  A better approach might be to use  *Major.Minor.Revision.Build*. If we
  want to use dates, *Year.Month.Day.Build  or
  Year.**Minor.Revision.Build *might be a better approach.. Do any
  openstack projects use the build number in the version? or is there a
  way for the build process to insert the build number in there?
 
 To be clear, this isn't really a call to design a versioning scheme from
 scratch - there are two schemes currently in use, and solum should use
 one of them.
 
 The main reason to do 2014.1.0 is to align with OpenStack, so it depends
 on intent a little bit. The advantage to the Year.Minor.Revision is
 that, since OpenStack is on a date-based release cycle, it communicates
 that fact.
 
 The main reason to do a semver style Major.Minor.Patch scheme is to
 communicate api changes across releases. This is the reason we release
 our libraries using that scheme.
 
 In terms of mechanics, the way it works for both schemes is that the
 version produced is based on git tags. If a revision is tagged, that is
 the version that is produced in the tarball.
 
 If a version is NOT tagged, there are two approaches.
 
 Since the date-based versions have a predictable next version, we have
 intermediary versions marked as leading up to that version.
 Specifically, the form is:
 
 %(version_in_setup_cfg)s.dev%(num_revisions_since_last_tag)s.g%(git_short_sha)s
 
 the dev prefix is a PEP440 compliant indication that this is a
 development version that is leading towards the version indicated.
 
 For semver-based versions, intermediary versions are marked as following
 the previous release. So we get:
 
 %(most_recent_tag)s.%(num_revisions_since_last_tag)s.g%(git_short_sha)s
 
 I would honestly recommend aligning with OpenStack and putting 2014.1.0
 into the setup.cfg version line for solum itself and doing date-based
 releases. For python-solumclient, since it's a library, I recommend not
 listing a version in setup.cfg and doing semver-based versions. This way
 you'll be operating in the same way as the rest of the project.
 

+1, semver on unreleased versions conveys less useful information than date 
related info.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Version scheme

2013-11-14 Thread Noorul Islam K M
Monty Taylor mord...@inaugust.com writes:

 On 11/14/2013 10:36 AM, Murali Allada wrote:
 I'm not a big fan of using date information in the version number. Is
 there an advantage to doing that? Using a model like 0.0.1 makes it
 easier to communicate.
 
 A better approach might be to use  *Major.Minor.Revision.Build*. If we
 want to use dates, *Year.Month.Day.Build  or
 Year.**Minor.Revision.Build *might be a better approach.. Do any
 openstack projects use the build number in the version? or is there a
 way for the build process to insert the build number in there?

 To be clear, this isn't really a call to design a versioning scheme from
 scratch - there are two schemes currently in use, and solum should use
 one of them.

 The main reason to do 2014.1.0 is to align with OpenStack, so it depends
 on intent a little bit. The advantage to the Year.Minor.Revision is
 that, since OpenStack is on a date-based release cycle, it communicates
 that fact.

 The main reason to do a semver style Major.Minor.Patch scheme is to
 communicate api changes across releases. This is the reason we release
 our libraries using that scheme.

 In terms of mechanics, the way it works for both schemes is that the
 version produced is based on git tags. If a revision is tagged, that is
 the version that is produced in the tarball.

 If a version is NOT tagged, there are two approaches.

 Since the date-based versions have a predictable next version, we have
 intermediary versions marked as leading up to that version.
 Specifically, the form is:

 %(version_in_setup_cfg)s.dev%(num_revisions_since_last_tag)s.g%(git_short_sha)s

 the dev prefix is a PEP440 compliant indication that this is a
 development version that is leading towards the version indicated.

 For semver-based versions, intermediary versions are marked as following
 the previous release. So we get:

 %(most_recent_tag)s.%(num_revisions_since_last_tag)s.g%(git_short_sha)s

 I would honestly recommend aligning with OpenStack and putting 2014.1.0
 into the setup.cfg version line for solum itself and doing date-based
 releases. For python-solumclient, since it's a library, I recommend not
 listing a version in setup.cfg and doing semver-based versions. This way
 you'll be operating in the same way as the rest of the project.


Thank you for explaining in detail. This is insightful!

Regards,
Noorul

 
 On Nov 14, 2013, at 8:23 AM, Noorul Islam K M noo...@noorul.com
  wrote:
 

 Hello all,

 We need to decide on version scheme that we are going to use.

 Monty Taylor said the following in one of the comments for review [1]:

 Setting a version here enrolls solum in managing its version in a
 pre-release versioning manner, such that non-tagged versions will
 indicated that they are leading up to 0.0.1. If that's the model solum
 wants to do (similar to the server projects) then I recommend replacing
 0.0.1 with 2014.1.0.

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/56130/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Khanh-Toan Tran
Step 1: use flavors so nova can tell between the two workloads, and
configure them differently

Step 2: find capacity for your workload given your current cloud usage

At the moment, most of our solutions involve reserving bits of your
cloud capacity for different workloads, generally using host
aggregates.

The issue with claiming back capacity from other workloads is a bit
trickier. The issue is I don't think you have defined where you get
that capacity back from? Maybe you want to look at giving some
workloads a higher priority over the constrained CPU resources? But
you will probably starve the little people out at random, which seems
bad. Maybe you want to have a concept of spot instances where they
can use your spare capacity until you need it, and you can just kill
them?

But maybe I am misunderstanding your use case, it's not totally clear to
me.



Yes, currently we can only reserve some hosts for particular workloads. But
«reservation» is done by the admin's operation, not «on-demand» as I
understand it. Anyway, it's just some speculation based on what I think
Alexander's use case is. Or maybe I misunderstand Alexander?
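For reference, the admin-driven reservation mentioned above looks roughly
like this today (aggregate/flavor names are made up; depending on the
release the extra-spec key may need the aggregate_instance_extra_specs:
scope):

  $ nova aggregate-create hadoop-hosts
  $ nova aggregate-add-host hadoop-hosts compute-07
  $ nova aggregate-set-metadata hadoop-hosts dedicated=hadoop
  $ nova flavor-key hadoop.xlarge set dedicated=hadoop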



It is interesting to see the development of the CPU entitlement blueprint
that Alex mentioned. It was registered in Jan 2013.

Any idea whether it is still going on?



De : Alex Glikson [mailto:glik...@il.ibm.com]
Envoyé : jeudi 14 novembre 2013 16:13
À : OpenStack Development Mailing List (not for usage questions)
Objet : Re: [openstack-dev] [nova] Configure overcommit policy



In fact, there is a blueprint which would enable supporting this scenario
without partitioning --
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement
The idea is to annotate flavors with CPU allocation guarantees, and enable
differentiation between instances, potentially running on the same host.
The implementation is augmenting the CoreFilter code to factor in the
differentiation. Hopefully this will be out for review soon.

Regards,
Alex





From:John Garbutt j...@johngarbutt.com
To:OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org,
Date:14/11/2013 04:57 PM
Subject:Re: [openstack-dev] [nova] Configure overcommit policy

  _




On 13 November 2013 14:51, Khanh-Toan Tran
khanh-toan.t...@cloudwatt.com wrote:
 Well, I don't know what John means by modify the over-commit
calculation in
 the scheduler, so I cannot comment.

I was talking about this code:

https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L64

But I am not sure that's what you want.

 The idea of choosing free host for Hadoop on the fly is rather
complicated
 and contains several operations, namely: (1) assuring the host never gets
 past 100% CPU load; (2) identifying a host that already has a Hadoop VM
 running on it, or already 100% CPU commitment; (3) releasing the host
from
 100% CPU commitment once the Hadoop VM stops; (4) possibly avoiding
other
 applications from using the host (to economize the host's resources).

 - You'll need (1) because otherwise your Hadoop VM would come short of
 resources after the host gets overloaded.
 - You'll need (2) because you don't want to restrict a new host while
one of
 your 100% CPU committed hosts still has free resources.
 - You'll need (3) because otherwise your host would be forever
restricted,
 and that is no longer on the fly.
 - You'll may need (4) because otherwise it'd be a waste of resources.

 The problem of changing CPU overcommit on the fly is that when your
Hadoop
 VM is still running, someone else can add another VM in the same host
with a
 higher CPU overcommit (e.g. 200%), (violating (1) ) thus effecting your
 Hadoop VM also.
 The idea of putting the host in the aggregate can give you (1) and (2).
(4)
 is done by AggregateInstanceExtraSpecsFilter. However, it does not give
you
 (3); which can be done with pCloud.

Step 1: use flavors so nova can tell between the two workloads, and
configure them differently

Step 2: find capacity for your workload given your current cloud usage

At the moment, most of our solutions involve reserving bits of your
cloud capacity for different workloads, generally using host
aggregates.

The issue with claiming back capacity from other workloads is a bit
trickier. The issue is I don't think you have defined where you get
that capacity back from? Maybe you want to look at giving some
workloads a higher priority over the constrained CPU resources? But
you will probably starve the little people out at random, which seems
bad. Maybe you want to have a concept of spot instances where they
can use your spare capacity until you need it, and you can just kill
them?

But maybe I am misunderstanding your use case, it's not totally clear to
me.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [neutron] Group-based Policy Sub-team Meetings

2013-11-14 Thread Kyle Mestery (kmestery)
On Nov 14, 2013, at 9:38 AM, Mohammad Banikazemi m...@us.ibm.com
 wrote:

 Kyle, 
 
 Thank you for organizing this.
 
 I think the original email you sent out did not solicit any comments (except 
 for possibly proposing a different time for the weekly meetings). So that is 
 probably why you have not heard from anybody (including me). So we are ready 
 to have the meeting but if the consensus is that people need more time to 
 prepare that is fine too.

Lets go with the time slot I've proposed, as no one objected.

 I think we need to set an agenda for our meeting (similar to what you do for 
 the ML2 calls) so we have a better idea of what we need to do during the 
 meeting. In the proposal, we have identified new object resources. Should we 
 start making those definitions and their relationships with other objects more 
 precise? Just a suggestion.
 
Can you add this to the agenda [1] for next week?

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

 Thanks,
 
 Mohammad
 
 
 Kyle Mestery (kmestery) ---11/13/2013 01:09:02 PM---On Nov 13, 
 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com  wrote:
 
 From: Kyle Mestery (kmestery) kmest...@cisco.com
 To:   OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, 
 Date: 11/13/2013 01:09 PM
 Subject:  Re: [openstack-dev] [neutron] Group-based Policy Sub-team 
 Meetings
 
 
 
 On Nov 13, 2013, at 10:36 AM, Stephen Wong s3w...@midokura.com
 wrote:
 
  Hi Kyle,
  
 So no meeting this Thursday?
  
 I am inclined to skip this week's meeting due to the fact I haven't heard many
 replies yet. I think a good starting point for people would be to review the
 BP [1] and Design Document [2] and provide feedback where appropriate.
 We should start to formalize what the APIs will look like at next week's 
 meeting,
 and the Design Document has a first pass at this.
 
 Thanks,
 Kyle
 
 [1] 
 https://blueprints.launchpad.net/neutron/+spec/group-based-policy-abstraction
 [2] 
 https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit?usp=sharing
 
  Thanks,
  - Stephen
  
  On Wed, Nov 13, 2013 at 7:11 AM, Kyle Mestery (kmestery)
  kmest...@cisco.com wrote:
  On Nov 13, 2013, at 8:58 AM, Stein, Manuel (Manuel) 
  manuel.st...@alcatel-lucent.com wrote:
  
  Kyle,
  
  I'm afraid your meeting vanished from the Meetings page [2] when user 
  amotiki reworked neutron meetings ^.^
  Is the meeting for Thu 1600 UTC still on?
  
  Ack, thanks for the heads up here! I have re-added the meeting. I only 
  heard
  back from one other person other than yourself, so at this point I'm 
  inclined
  to wait until next week to hold our first meeting unless I hear back from 
  others.
  
  A few heads-up questions (couldn't attend the HK design summit Friday 
  meeting):
  
  1) In the summit session Etherpad [3], ML2 implementation mentions 
  insertion of arbitrary metadata to hint to underlying implementation. Is 
  that (a) the plug-in reporting its policy-bound realization? (b) the 
  user further specifying what should be used? (c) both? Or (d) none of 
  that but just some arbitrary message of the day?
  
  I believe that would be (a).
  
  2) Would policies _always_ map to the old Neutron entities?
  E.g. when I have policies in place, can I query related network/port, 
  subnet/address, router elements on the API or are there no equivalents 
  created? Would the logical topology created under the policies be exposed 
  otherwise? for e.g. monitoring/wysiwyg/troubleshoot purposes.
  
  No, this is up to the plugin/MechanismDriver implementation.
  
  3) Do the chain identifier in your policy rule actions match to Service 
  Chain UUID in Service Insertion, Chaining and API [4]
  
  That's one way to look at this, yes.
  
  4) Are you going to describe L2 services the way group policies work? I 
  mean, why would I need a LoadBalancer or Firewall instance before I can 
  insert it between two groups when all that load balancing/firewalling 
  requires is nothing but a policy for group communication itself? - 
  regardless the service instance used to carry out the service.
  
  These are things I'd like to discuss at the IRC meeting each week. The goal
  would be to try and come up with some actionable items we can drive towards
  in both Icehouse-1 and Icehouse-2. Given how close the closing of 
  Icehouse-1
  is, we need to focus on this very fast if we want to have a measurable 
  impact in
  Icehouse-1.
  
  Thanks,
  Kyle
  
  
  Best, Manuel
  
  [2] 
  https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting
  [3] 
  https://etherpad.openstack.org/p/Group_Based_Policy_Abstraction_for_Neutron
  [4] 
  https://docs.google.com/document/d/1fmCWpCxAN4g5txmCJVmBDt02GYew2kvyRsh0Wl3YF2U/edit#
  
  -Original Message-
  From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com]
  Sent: Montag, 11. November 2013 19:41
  To: OpenStack 

Re: [openstack-dev] Congress: an open policy framework

2013-11-14 Thread Tim Hinrichs
I completely agree that making Congress successful will rely crucially on 
addressing performance and scalability issues.  Some thoughts...

1. We're definitely intending to cache data locally to avoid repeated API 
calls.  In fact, a prototype cache is already in place.  We haven't yet hooked 
up API calls (other than to AD).  We envision some data sources pushing us data 
(updates) and some data sources requiring us to periodically pull.  
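
(To make the push/pull split concrete, here is a rough standalone sketch -- not 
the actual prototype -- of a cached data source that accepts pushed updates and 
falls back to a pull only when the cache goes stale:

    import time

    class CachedDataSource(object):
        """Illustrative only: cache rows from one external data source."""

        def __init__(self, fetch_fn, max_age_seconds=30):
            # fetch_fn is whatever call pulls the current data,
            # e.g. a Nova or Neutron listing.
            self.fetch_fn = fetch_fn
            self.max_age = max_age_seconds
            self.rows = []
            self.last_refresh = 0

        def push(self, rows):
            # A pushing data source hands us updates directly.
            self.rows = list(rows)
            self.last_refresh = time.time()

        def get(self):
            # A pulling data source is re-queried only when the cache is stale.
            if time.time() - self.last_refresh > self.max_age:
                self.push(self.fetch_fn())
            return self.rows
)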

2. My main concern for scalability/performance is for proactive enforcement, 
where at least conceptually Congress is on the critical path for API calls.

One thought is that we could splice out, say, the network portion of the 
Congress policy and push it down into neutron, assuming neutron could enforce 
that policy.  This would at least eliminate cross-component communication.  It 
would require a policy engine on each of the OS components, but (a) there 
already is one on many components and (b) if there isn't, we can rely on 
reactive enforcement.

The downside with policy-caching on other OS components is the usual set of problems 
with staleness and data replication, e.g. maybe we'd end up copying all of 
nova's VM data into neutron so that neutron could enforce its policy.  But 
because we have reactive enforcement to rely on, we could always approximate 
the policy that we push down (conservatively) to catch the common mistakes and 
leave the remainder to reactive enforcement.  For example, we might be able to 
auto-generate the current policy.json files for each component from Congress's 
policy.
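
As a purely illustrative example (action names and checks below are made up, in 
oslo's policy.json style), such a conservatively generated fragment for one 
component might look like:

    # Hypothetical, hand-written illustration -- not the output of any
    # existing tool; action names and checks are invented.
    generated_nova_policy = {
        "compute:create": "rule:admin_or_owner",
        "compute:attach_volume": "role:admin",
    }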

Keeping Congress out of the critical path for every API call is one of the 
reasons it was designed to do reactive enforcement as well as proactive 
enforcement.

3. Another option is to build high-performance optimizations for certain 
fragments of the policy language.  Then the cloud architect can decide whether 
she wants to utilize a more expressive language whose performance is worse or a 
less expressive language whose performance is better.

Tim




- Original Message -
| From: Flavio Percoco fla...@redhat.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Thursday, November 14, 2013 6:05:46 AM
| Subject: Re: [openstack-dev] Congress: an open policy framework
| 
| On 14/11/13 04:40 -0800, Morgan Fainberg wrote:
| On Wed, Nov 13, 2013 at 10:40 AM, Tim Hinrichs
| thinri...@vmware.com wrote:
|  We're just getting started with Congress and understanding how it
|  will integrate with the OS ecosystem, but here's our current
|  thinking about how Congress relates to Oslo's policy engine and
|  to Keystone.  Comments and suggestions are welcome.
| 
| 
|  Congress and Oslo
|  
|  Three dimensions for comparison: policy language, data sources,
|  and policy engine.
| 
|  We've always planned to make Congress compatible with existing
|  policy languages like the one in oslo.  The plan is to build a
|  front-end for a number of policy languages/formats, e.g.
|  oslo-policy language, XACML, JSON, YAML, SQL, etc.  The idea
|  being that the syntax/language you use is irrelevant as long as
|  it can be mapped into Congress's native policy language.  As of
|  now, Congress is using Datalog, which is a variant of SQL and is
|  at least as expressive as all of the policy languages we've run
|  across in the cloud domain, including the oslo-policy language.
| 
|  In terms of the data sources you can reference in the policy,
|  Congress is designed to enable policies that reference arbitrary
|  data sources in the cloud.  For example, we could write a Nova
|  authorization policy that permits a new VM to be created if that
|  VM is connected to a network owned by a tenant (info stored in
|  Neutron) where the VM owner (info in the request) is in the same
|  group as the network owner (info stored in Keystone/LDAP).
|   Oslo's handles some of these data sources with its terminal
|  rules, but it's not involved in data integration to the same
|  extent Congress is.
| 
|  In terms of policy engines, Congress is intended to enforce
|  policies in 2 different ways: proactively (stopping policy
|  violations before they occur) and reactively (acting to eliminate
|  a violation after it occurs).  Ideally we wouldn't need reactive
|  enforcement, but there will always be cases where proactive
|  enforcement is not possible (e.g. a DOS attack brings app
|  latencies out of compliance).  The oslo-engine does proactive
|  enforcement only--stopping API calls before they violate the
|  policy.
| 
| 
| Does this mean all policy decisions need to ask this new service?
| There are many policy checks that occur across even a given action
| (in
| some cases).  Could this have a significant performance implication
| on
| larger scale cloud deployments?  I like the idea of having reactive
| (DOS prevention) policy enforcement as well as external (arbitrary)
| data to help make policy decisions, I don't want to 

Re: [openstack-dev] [PTL] Proposed Icehouse release schedule

2013-11-14 Thread Thierry Carrez
Anne Gentle wrote:
 Just a question, did you consider a week off post-summit? I want to
 ensure the tons of questions that come in about the schedule can be
 answered. I felt the weeks up to the summit were still quite busy, not
 just from a docs perspective, but from a PTL/scheduler/arranger
 perspective. Thoughts?

We could consider that for future cycles...

For this one we have 3 full weeks between release date (April 17) and
summit (May 12), which is a lot of time lost in the cycle already. The
idea is to take the opportunity to declare one of those weeks a
recommended vacation time.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-11-14

2013-11-14 Thread Shawn Hartsock

Greetings Stackers!

I hope you had a great time at the Summit! I'm expecting the VMwareAPI 
sub-team's priorities to shift around a bit as we digest the outcomes from the 
sessions. So, with that in mind, I'll stick to just updating on what reviews 
are in flight and where they are as far as our crack team of trusted reviewers 
is concerned. We'll worry about priority order more as we close on the 
milestones.

BTW: I've noticed a trend of folks starting to put VMware: in the titles of 
their reviews that are related to the drivers or need VMware subject matter 
expert's examination, that's a nice touch and will make searching easier for 
people. I encourage it. 

BBTW: Note that the meeting times are UTC and UTC does not have daylight 
saving time, so if you are in a country that observes such things your meeting 
times may have changed. I'm open to a new meeting time and format if people 
want to discuss it. 

Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
* If anything is missing, email me and I'll check it.
* We hang out in #openstack-vmware if you need to chat - it's not worth 
spamming the whole list

Happy stacking!


Ordered by fitness for review:

== needs one more +2/approval ==
* https://review.openstack.org/47743
title: 'VMWare: bug fix for Vim exception handling'
votes: +2:1, +1:6, -1:0, -2:0. +52 days in progress, revision: 10 is 24 
days old 
* https://review.openstack.org/53109
title: 'VMware: enable driver to work with postgres database'
votes: +2:1, +1:8, -1:0, -2:0. +22 days in progress, revision: 2 is 22 
days old 
* https://review.openstack.org/49305
title: 'VMware: fix snapshot failure when host in maintenance mode'
votes: +2:1, +1:7, -1:0, -2:0. +42 days in progress, revision: 15 is 12 
days old 

== ready for core ==
* https://review.openstack.org/54361
title: 'VMware: fix datastore selection when token is returned'
votes: +2:0, +1:6, -1:0, -2:0. +15 days in progress, revision: 5 is 14 
days old 

== needs review ==
* https://review.openstack.org/55038
title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:0, +1:4, -1:0, -2:0. +11 days in progress, revision: 2 is 10 
days old 
* https://review.openstack.org/55934
title: 'Always upload a snapshot as a preallocated disk'
votes: +2:0, +1:2, -1:0, -2:0. +2 days in progress, revision: 2 is 1 
days old 
* https://review.openstack.org/52645
title: 'VMware: Detach volume should not delete vmdk'
votes: +2:0, +1:4, -1:0, -2:0. +26 days in progress, revision: 14 is 1 
days old 
* https://review.openstack.org/55509
title: 'VMware: fix VM resize bug'
votes: +2:0, +1:3, -1:0, -2:0. +6 days in progress, revision: 1 is 6 
days old 
* https://review.openstack.org/53648
title: 'VMware: fix image snapshot with attached volume'
votes: +2:0, +1:3, -1:0, -2:0. +20 days in progress, revision: 1 is 20 
days old 
* https://review.openstack.org/55505
title: 'VMware: Handle cases when there are no hosts in cluster'
votes: +2:0, +1:1, -1:0, -2:0. +6 days in progress, revision: 1 is 6 
days old 
* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
votes: +2:0, +1:4, -1:0, -2:0. +10 days in progress, revision: 1 is 10 
days old 
* https://review.openstack.org/52630
title: 'VMware: fix bug when more than one datacenter exists'
votes: +2:0, +1:3, -1:0, -2:0. +26 days in progress, revision: 10 is 8 
days old 
* https://review.openstack.org/54808
title: 'VMware: fix bug for exceptions thrown in _wait_for_task'
votes: +2:0, +1:1, -1:0, -2:0. +13 days in progress, revision: 2 is 11 
days old 
* https://review.openstack.org/52557
title: 'VMware Driver update correct disk usage stat'
votes: +2:0, +1:0, -1:0, -2:0. +27 days in progress, revision: 1 is 27 
days old 
* https://review.openstack.org/43270
title: 'vmware driver selection of vm_folder_ref.'
votes: +2:0, +1:1, -1:0, -2:0. +83 days in progress, revision: 1 is 83 
days old 

== needs revision ==
* https://review.openstack.org/56278
title: 'VMware: bug in rebooting powered off instance'
votes: +2:0, +1:1, -1:1, -2:0. +0 days in progress, revision: 1 is 0 
days old 
* https://review.openstack.org/48544
title: 'VMWare - Fix when a image upload is interrupted it's not ...'
votes: +2:0, +1:2, -1:2, -2:0. +48 days in progress, revision: 15 is 8 
days old 
* https://review.openstack.org/52478
title: 'VMware: Refactor vim_util to reuse existing util method'
votes: +2:0, +1:2, 

Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread Joe Gordon
On Nov 14, 2013 5:16 AM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,

 I think that we have recently reached critical mass for the
 openstack-dev mailing-list, with 2267 messages posted in October, and
 November well on its way to pass 2000 again. Some of those are just
 off-topic (and I've been regularly fighting against them) but most of
 them are just about us covering an ever-increasing scope, stretching the
 definition of what we include in openstack development.

 Therefore I'd like to propose a split between two lists:

 *openstack-dev*: Discussions on future development for OpenStack
 official projects

 *stackforge-dev*: Discussions on development for stackforge-hosted
projects

 Non-official OpenStack-related projects would get discussed in
 stackforge-dev (or any other list of their preference), while
 openstack-dev would be focused on openstack official programs (including
 incubated  integrated projects).

 That means discussion about Solum, Mistral, Congress or Murano
 (stackforge/* repos in gerrit) would now live on stackforge-dev.
 Discussions about Glance, TripleO or Oslo libraries (openstack*/* repos
 on gerrit) would happen on openstack-dev. This will allow easier
 filtering and prioritization; OpenStack developers interested in
 tracking promising stackforge projects would subscribe to both lists.

 That will not solve all issues. We should also collectively make sure
 that *usage questions are re-routed* to the openstack general
 mailing-list, where they belong. Too many people still answer off-topic
 questions here on openstack-dev, which encourages people to be off-topic
 in the future (traffic on the openstack general ML has been mostly
 stable, with only 868 posts in October). With those actions, I hope that
 traffic on openstack-dev would drop back to the 1000-1500 range, which
 would be more manageable for everyone.

 Thoughts ?

++

How soon can we do this?


 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list

2013-11-14 Thread David Ripton

On 11/14/2013 08:21 AM, Julien Danjou wrote:

On Thu, Nov 14 2013, Thierry Carrez wrote:


Thoughts ?


I agree on the need to split, the traffic is getting huge.

As I'd have to subscribe to both openstack-dev and stackforge-dev, that
would not help me personally, but I think it can be an easy and first
step.


I don't think it's worth the bother.  openstack-dev would still receive 
most of the traffic.  Once you add back the traffic from people 
cross-posting, posting to the wrong list, yelling at people 
cross-posting or posting to the wrong list, etc. I'd expect 
openstack-dev's traffic to stay about the same.  It'll just be one more 
list for most of us to subscribe to.


The thing that would help with message volume would be splitting 
openstack-dev by subproject.  (Except for those who would need to follow 
most of the projects, who would still get just as much mail plus the 
extra noise from people posting wrong.)


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [style] () vs \ continuations

2013-11-14 Thread Joe Gordon
On Nov 14, 2013 6:58 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Nov 13, 2013 at 6:46 PM, Robert Collins robe...@robertcollins.net
wrote:

 Hi so - in http://docs.openstack.org/developer/hacking/

 it has as bullet point 4:
 Long lines should be wrapped in parentheses in preference to using a
 backslash for line continuation.

 I'm seeing in some reviews a request for () over \ even when \ is
 significantly clearer.

 I'd like us to avoid meaningless reviewer churn here: can we either:
  - go with PEP8 which also prefers () but allows \ when it is better
- and reviewers need to exercise judgement when asking for one or
other
  - make it a hard requirement that flake8 detects


 +1 for the non-human approach.

Humans are a bad match for this type of review work, sounds like we will
have to add this into hacking 0.9
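
For reference, the check itself is only a few lines. A rough standalone sketch
(not the actual hacking code -- a real version would register with flake8 and
handle backslashes inside string literals):

    import sys

    def check_no_backslash_continuation(physical_line):
        # Naive illustration: flag lines ending in an explicit backslash
        # continuation.  The check number/message are made up.
        stripped = physical_line.rstrip()
        if stripped.endswith('\\'):
            return (len(stripped) - 1,
                    "Hxxx: use () instead of \\ for line continuation")

    if __name__ == '__main__':
        for filename in sys.argv[1:]:
            with open(filename) as source:
                for num, line in enumerate(source, 1):
                    hit = check_no_backslash_continuation(line)
                    if hit:
                        offset, message = hit
                        print("%s:%d:%d: %s" % (filename, num, offset + 1, message))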




 My strong recommendation is to go with PEP8 and exercising of judgement.

 The case that made me raise this is this:
 folder_exists, file_exists, file_size_in_kb, disk_extents = \
 self._path_file_exists(ds_browser, folder_path, file_name)

 Wrapping that in brackets gets this;
 folder_exists, file_exists, file_size_in_kb, disk_extents = (
 self._path_file_exists(ds_browser, folder_path, file_name))


 The root of the problem is that it's a terribly named method with a
terrible return value... fix the underlying problem.



 Which is IMO harder to read - double brackets, but no function call,
 and no tuple: it's more ambiguous than \.

 from
https://review.openstack.org/#/c/48544/15/nova/virt/vmwareapi/vmops.py

 Cheers,
 Rob
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][Glance] OSLO update

2013-11-14 Thread Joe Gordon
This ML is not for review requests.

Please read
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

best,
Joe

sent on the go
On Nov 14, 2013 4:26 AM, Elena Ezhova eezh...@mirantis.com wrote:

 Hello all,

 I have made several patches that update modules in cinder/openstack/common
 from oslo which have not been reviewed for more than a month already. My
 colleague has the same problem with her patches in Glance.

 Probably it's not a top priority issue, but if oslo is not updated
 periodically in small bits it may become a problem in the future. What's
 more, it is much easier for a developer if oslo code is consistent in all
 projects.

 So, I would be grateful if someone reviewed these patches:
 https://review.openstack.org/#/c/48272/
 https://review.openstack.org/#/c/48273/
 https://review.openstack.org/#/c/52099/
 https://review.openstack.org/#/c/52101/
 https://review.openstack.org/#/c/53114/
 https://review.openstack.org/#/c/47581/

 Thanks,

 Elena

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?

Regarding web hook execution and cool down, I think the response should be 
something like 307 if the hook is on cool down with an appropriate retry-after 
header.

On Nov 14, 2013, at 10:57 AM, Randall Burt 
randall.b...@rackspace.com
 wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Version scheme

2013-11-14 Thread Murali Allada
Thanks for the clear explanation Monty.

-Murali





 On Nov 14, 2013, at 10:08 AM, Monty Taylor mord...@inaugust.com wrote:
 
 
 
 On 11/14/2013 10:36 AM, Murali Allada wrote:
 I'm not a big fan of using date information in the version number. Is
 there an advantage to doing that? Using a model like 0.0.1 makes it
 easier to communicate.
 
  A better approach might be to use Major.Minor.Revision.Build. If we
  want to use dates, Year.Month.Day.Build or
  Year.Minor.Revision.Build might be a better approach. Do any
 openstack projects use the build number in the version? or is there a
 way for the build process to insert the build number in there?
 
 To be clear, this isn't really a call to design a versioning scheme from
 scratch - there are two schemes currently in use, and solum should use
 one of them.
 
 The main reason to do 2014.1.0 is to align with OpenStack, so it depends
 on intent a little bit. The advantage to the Year.Minor.Revision is
 that, since OpenStack is on a date-based release cycle, it communicates
 that fact.
 
 The main reason to do a semver style Major.Minor.Patch scheme is to
 communicate api changes across releases. This is the reason we release
 our libraries using that scheme.
 
 In terms of mechanics, the way it works for both schemes is that the
 version produced is based on git tags. If a revision is tagged, that is
 the version that is produced in the tarball.
 
 If a version is NOT tagged, there are two approaches.
 
 Since the date-based versions have a predictable next version, we have
 intermediary versions marked as leading up to that version.
 Specifically, the form is:
 
 %(version_in_setup_cfg)s.dev%(num_revisions_since_last_tag)s.g%(git_short_sha)s
 
 the dev prefix is a PEP440 compliant indication that this is a
 development version that is leading towards the version indicated.
 
 For semver-based versions, intermediary versions are marked as following
 the previous release. So we get:
 
 %(most_recent_tag)s.%(num_revisions_since_last_tag)s.g%(git_short_sha)s
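
(To make those two forms concrete: with invented values -- say, seven commits
past the last tag at short sha a1b2c3d -- they expand as follows; a quick
illustration, not pbr code:

    values = {
        'version_in_setup_cfg': '2014.1.0',
        'most_recent_tag': '0.1.2',
        'num_revisions_since_last_tag': 7,
        'git_short_sha': 'a1b2c3d',
    }

    pre = '%(version_in_setup_cfg)s.dev%(num_revisions_since_last_tag)s.g%(git_short_sha)s'
    post = '%(most_recent_tag)s.%(num_revisions_since_last_tag)s.g%(git_short_sha)s'

    print(pre % values)   # -> 2014.1.0.dev7.ga1b2c3d
    print(post % values)  # -> 0.1.2.7.ga1b2c3d
)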
 
 I would honestly recommend aligning with OpenStack and putting 2014.1.0
 into the setup.cfg version line for solum itself and doing date-based
 releases. For python-solumclient, since it's a library, I recommend not
 listing a version in setup.cfg and doing semver-based versions. This way
 you'll be operating in the same way as the rest of the project.
 
 
 On Nov 14, 2013, at 8:23 AM, Noorul Islam K M noo...@noorul.com
 mailto:noo...@noorul.com
 wrote:
 
 
 Hello all,
 
 We need to decide on version scheme that we are going to use.
 
 Monty Taylor said the following in one of the comments for review [1]:
 
 Setting a version here enrolls solum in managing its version in a
 pre-release versioning manner, such that non-tagged versions will
 indicate that they are leading up to 0.0.1. If that's the model solum
 wants to do (similar to the server projects) then I recommend replacing
 0.0.1 with 2014.1.0.
 
 Regards,
 Noorul
 
 [1] https://review.openstack.org/#/c/56130/
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-14 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
Hello Jesse,

Thank you for the information. Would you be so kind as to provide a URL to the 
updated rcbops chef cookbooks for Quantum?

Regards,

Mark

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Thursday, November 14, 2013 12:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Nova SSL Apache2 Question

On 13 November 2013 23:39, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:
I finally found a set of web pages that has a working set of configuration 
files for the major OpenStack services  
http://andymc-stack.co.uk/2013/07/apache2-mod_wsgi-openstack-pt-2-nova-api-os-compute-nova-api-ec2/
  by Andy Mc. I skipped ceilometer and have the rest of the services working 
except quantum with self-signed certificates on a Grizzly-3 OpenStack instance. 
Now I am stuck trying to figure out how to get quantum to accept self-signed 
certificates.

My goal is to harden my Grizzly-3 OpenStack instance using SSL and self-signed 
certificates. Later I will do the same for Havana bits and use real/valid 
certificates.

I struggled with getting this all to work correctly for a few weeks, then 
eventually gave up and opted instead to use an Apache reverse proxy to 
front-end the native services. I just found that using an Apache/wsgi 
configuration doesn't completely work. It would certainly help if this 
configuration was implemented into the Openstack testing regime to help all the 
services become first-class citizens as a wsgi process behind Apache.
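
For context, the wsgi file mod_wsgi loads for a service is typically only a
couple of lines. A hedged sketch for the compute API, assuming the usual
/etc/nova/api-paste.ini path and the osapi_compute pipeline name -- the real,
complete files are the ones in the cookbooks mentioned below:

    # Hypothetical minimal WSGI entry point; real deployments also load
    # nova.conf options before building the application.
    from paste.deploy import loadapp

    application = loadapp('config:/etc/nova/api-paste.ini', name='osapi_compute')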

I would suggest that you review the wsgi files and vhost templates in the 
rcbops chef cookbooks for each service. They include my updates to Andy's 
original blog items to make things work properly.

I found that while Andy's stuff appears to work, it becomes noticeable that it 
works in a read-only fashion. I managed to get keystone/nova confirmed to work 
properly, but glance just would not work - I could never upload any images and 
if caching/management was turned off in the glance service then downloading 
images didn't work either.

Good luck - if you do get a fully working config it'd be great to get feedback 
on the adjustments you had to make to get it working.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Christopher Armstrong
On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt
randall.b...@rackspace.comwrote:

  Good stuff! Some questions/comments:

  If web hooks are associated with policies and policies are independent
 entities, how does a web hook specify the scaling group to act on? Does
 calling the web hook activate the policy on every associated scaling group?


Not sure what you mean by policies are independent entities. You may have
missed that the policy resource lives hierarchically under the group
resource. Policies are strictly associated with one scaling group, so when
a policy is executed (via a webhook), it's acting on the scaling group that
the policy is associated with.



  Regarding web hook execution and cool down, I think the response should
 be something like 307 if the hook is on cool down with an appropriate
 retry-after header.


Indicating whether a webhook was found or whether it actually executed
anything may be an information leak, since webhook URLs require no
additional authentication other than knowledge of the URL itself.
Responding with only 202 means that people won't be able to guess at random
URLs and know when they've found one.



  On Nov 14, 2013, at 10:57 AM, Randall Burt randall.b...@rackspace.com
  wrote:


  On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
 chris.armstr...@rackspace.com
  wrote:

  http://docs.heatautoscale.apiary.io/

  I've thrown together a rough sketch of the proposed API for autoscaling.
 It's written in API-Blueprint format (which is a simple subset of Markdown)
 and provides schemas for inputs and outputs using JSON-Schema. The source
 document is currently at
 https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


  Things we still need to figure out:

  - how to scope projects/domains. put them in the URL? get them from the
 token?


  This may be moot considering the latest from the keystone devs regarding
 token scoping to domains/projects. Basically, a token is scoped to a single
 domain/project from what I understood, so domain/project is implicit. I'm
 still of the mind that the tenant doesn't belong so early in the URI, since
 we can already surmise the actual tenant from the authentication context,
 but that's something for Openstack at large to agree on.

  - how webhooks are done (though this shouldn't affect the API too much;
 they're basically just opaque)

  Please read and comment :)


  --
  IRC: radix
 Christopher Armstrong
 Rackspace
   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][object] One question to the resource tracker session

2013-11-14 Thread Jiang, Yunhong

 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: Wednesday, November 13, 2013 3:22 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][object] One question to the resource
 tracker session
 
 On 11/13/13 at 11:12pm, Jiang, Yunhong wrote:
 Hi, Dan Smith and all,
I noticed the following statement in 'Icehouse tasks' in
 https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetr
 ics
 
  convert resource tracker to objects
 make resource tracker extensible
  no db migrations ever again!!
  extra specs to cover resources - use a name space
 
 How is it planned to achieve the 'no db migrations ever again'? Even
with the object, we still need to keep resource information in the database. And
when a new resource type is added, we either add a new column to the table,
or we merge all resource information into a single column as a json
string and parse it in the resource tracker object.
 
 You're right, it's not really achievable without moving to a schemaless
 persistence model.  I'm fairly certain it was added to be humorous and
 should not be considered an outcome of that session.

Andrew, thanks for the explanation. Not sure if anyone has interest in this 
task, otherwise I will take it.

--jyh

 
 
 Thanks
 --jyh
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [style] () vs \ continuations

2013-11-14 Thread John Griffith
On Thu, Nov 14, 2013 at 10:03 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Nov 14, 2013 6:58 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Nov 13, 2013 at 6:46 PM, Robert Collins
 robe...@robertcollins.net wrote:

 Hi so - in http://docs.openstack.org/developer/hacking/

 it has as bullet point 4:
 Long lines should be wrapped in parentheses in preference to using a
 backslash for line continuation.

 I'm seeing in some reviews a request for () over \ even when \ is
 significantly clearer.

 I'd like us to avoid meaningless reviewer churn here: can we either:
  - go with PEP8 which also prefers () but allows \ when it is better
- and reviewers need to exercise judgement when asking for one or
 other
  - make it a hard requirement that flake8 detects


 +1 for the non-human approach.

 Humans are a bad match for this type of review work, sounds like we will
 have to add this into hacking 0.9




 My strong recommendation is to go with PEP8 and exercising of judgement.

 The case that made me raise this is this:
 folder_exists, file_exists, file_size_in_kb, disk_extents = \
 self._path_file_exists(ds_browser, folder_path, file_name)

 Wrapping that in brackets gets this;
 folder_exists, file_exists, file_size_in_kb, disk_extents = (
 self._path_file_exists(ds_browser, folder_path, file_name))


 The root of the problem is that it's a terribly named method with a
 terrible return value... fix the underlying problem.



 Which is IMO harder to read - double brackets, but no function call,
 and no tuple: it's more ambiguous than \.

 from
 https://review.openstack.org/#/c/48544/15/nova/virt/vmwareapi/vmops.py

 Cheers,
 Rob
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 -Dolph

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

personally I don't see the big deal here, I think there can be some
judgement etc.  BUT it seems to me that this is an awful waste of
time.

Just automate it one way or the other and let reviewers actually focus
on something useful.  Frankly I could care less about line separation
and am much more concerned about bugs being introduced via patches
that reviewers didn't catch.  That's ok though, at least the line
continuations were correct.

Sorry, I shouldn't be a jerk but we seem to have rather pointless
debates as of late (spelling/grammar in comments etc etc).  IMO we
should all do our best on these things but really the focus here
should be on the technical components of the code.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][keystone] APIs, roles, request scope and admin-ness

2013-11-14 Thread Steven Hardy
On Thu, Nov 14, 2013 at 10:20:02AM -0600, Dolph Mathews wrote:
 On Sat, Nov 2, 2013 at 11:06 AM, Steven Hardy sha...@redhat.com wrote:
 
  Hi all,
 
  Looking to start a wider discussion, prompted by:
  https://review.openstack.org/#/c/54651/
  https://blueprints.launchpad.net/heat/+spec/management-api
  https://etherpad.openstack.org/p/heat-management-api
 
  Summary - it has been proposed to add a management API to Heat, similar in
  concept to the admin/public API topology used in keystone.
 
 
  I'm concerned that this may not be a pattern we want to propagate
  throughout
  OpenStack, and that for most services, we should have one API to access
  data,
  with the scope of the data returned/accessible defined by the roles held by
  the user (ie making proper use of the RBAC facilities afforded to us via
  keystone).
 
 
 Agree with the concern; Identity API v3 abandons this topology in favor of
 more granular access controls (policy.json) on a single API.
 
 From an HTTP perspective, API responses should vary according to the token
 used to access the API. Literally,
 
   Vary: X-Auth-Token
 
 in HTTP headers.
 
 
 
  In the current PoC patch, a users admin-ness is derived from the fact that
  they are accessing a specific endpoint, and that policy did not deny them
  access to that endpoint.  I think this is wrong, and we should use keystone
  roles to decide the scope of the request.
 
 
 ++ (although use of the word scope here is dangerous, as I think you mean
 something different from the usual usage?)

I was using scope to say the context of the request can affect what data
is returned, ie the filters we apply when processing it.

  The proposal seems to consider tenants as the top-level of abstraction,
  with
  the next level up being a global service provider admin, but this does not
  consider the keystone v3 concept of domains [1]
 
 
 v3 also allows domain-level roles to be inherited to all projects owned by
 that domain, so in effect-- it does (keystone just takes care of it).

Ok, thanks, that's useful info

  , or that you may wish to
  provide some of these admin-ish features to domain-admin users (who will
  adminster data accross multiple tenants, just like has been proposed), via
  the
  public-facing API.
 
  It seems like we need a way of scoping the request (via data in the
  context),
   based on a hierarchy of admin-ness, like:
 
  1. Normal user
 
 
 I assume normal user has some non-admin role on a project/tenant.

Yep, that's my assumption, just not a role associated with admin-ness.

snip
  E.g what data can we add to move from X-Roles in auth_token, to
  expressing roles in multiple tenants and domains?
 
 
 Tokens can only be scoped to a single project or domain, so that's your
 mapping. All X-Roles apply to the X-Project or X-Domain in context. I don't
 think we have a good roadmap to support a single authenticated request with
 multi-project authorization. The best solution I have is to pass an
 unscoped token that can be rescoped to two or more projects as needed.
 Trust-based tokens are explicitly scoped already.

So this is a piece of the puzzle I was missing until now, combined with the
fact that Domain scoped tokens do not imply authorization with all projects
within that domain.  Thanks for the IRC conversation which cleared that up!

Based on this revised understanding, it sounds like, for now at least, some
of the global management api requirements may be best served via some
client tools which make multiple API calls to get the required information,
on behalf of a user who has the necessary roles to access all the projects.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova SSL Apache2 Question

2013-11-14 Thread Miller, Mark M (EB SW Cloud - RD - Corvallis)
I believe I found it under nova-network.

Thanks,

Mark

From: Miller, Mark M (EB SW Cloud - RD - Corvallis)
Sent: Thursday, November 14, 2013 9:31 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Nova SSL Apache2 Question

Hello Jesse,

Thank you for the information. Would you be so kind as to provide a URL to the 
updated rcbops chef cookbooks for Quantum?

Regards,

Mark

From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
Sent: Thursday, November 14, 2013 12:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Nova SSL Apache2 Question

On 13 November 2013 23:39, Miller, Mark M (EB SW Cloud - RD - Corvallis) 
mark.m.mil...@hp.com wrote:
I finally found a set of web pages that has a working set of configuration 
files for the major OpenStack services  
http://andymc-stack.co.uk/2013/07/apache2-mod_wsgi-openstack-pt-2-nova-api-os-compute-nova-api-ec2/
  by Andy Mc. I skipped ceilometer and have the rest of the services working 
except quantum with self-signed certificates on a Grizzly-3 OpenStack instance. 
Now I am stuck trying to figure out how to get quantum to accept self-signed 
certificates.

My goal is to harden my Grizzly-3 OpenStack instance using SSL and self-signed 
certificates. Later I will do the same for Havana bits and use real/valid 
certificates.

I struggled with getting this all to work correctly for a few weeks, then 
eventually gave up and opted instead to use an Apache reverse proxy to 
front-end the native services. I just found that using an Apache/wsgi 
configuration doesn't completely work. It would certainly help if this 
configuration was implemented into the Openstack testing regime to help all the 
services become first-class citizens as a wsgi process behind Apache.

I would suggest that you review the wsgi files and vhost templates in the 
rcbops chef cookbooks for each service. They include my updates to Andy's 
original blog items to make things work properly.

I found that while Andy's stuff appears to work, it becomes noticeable that it 
works in a read-only fashion. I managed to get keystone/nova confirmed to work 
properly, but glance just would not work - I could never upload any images and 
if caching/management was turned off in the glance service then downloading 
images didn't work either.

Good luck - if you do get a fully working config it'd be great to get feedback 
on the adjustments you had to make to get it working.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
Khanh-Toan Tran khanh-toan.t...@cloudwatt.com wrote on 14/11/2013 
06:27:39 PM:

 It is interesting to see the development of the CPU entitlement 
 blueprint that Alex mentioned. It was registered in Jan 2013.
 Any idea whether it is still going on?

Yes. I hope we will be able to rebase and submit for review soon.

Regards,
Alex

 De : Alex Glikson [mailto:glik...@il.ibm.com] 
 Envoyé : jeudi 14 novembre 2013 16:13
 À : OpenStack Development Mailing List (not for usage questions)
 Objet : Re: [openstack-dev] [nova] Configure overcommit policy
 
 In fact, there is a blueprint which would enable supporting this 
 scenario without partitioning -- https://blueprints.launchpad.net/
 nova/+spec/cpu-entitlement 
 The idea is to annotate flavors with CPU allocation guarantees, and 
 enable differentiation between instances, potentially running on the same 
host.
 The implementation is augmenting the CoreFilter code to factor in 
 the differentiation. Hopefully this will be out for review soon. 
 
 Regards, 
 Alex
 
 
 
 
 
 From: John Garbutt j...@johngarbutt.com 
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org 
 Date: 14/11/2013 04:57 PM 
 Subject: Re: [openstack-dev] [nova] Configure overcommit policy 
 
 
 
 
 On 13 November 2013 14:51, Khanh-Toan Tran
 khanh-toan.t...@cloudwatt.com wrote:
  Well, I don't know what John means by modify the over-commit 
calculation in
  the scheduler, so I cannot comment.
 
 I was talking about this code:
 https://github.com/openstack/nova/blob/master/nova/scheduler/
 filters/core_filter.py#L64
 
  But I am not sure that's what you want.
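
(Roughly, that filter scales the host's physical vCPU count by the configured
over-commit ratio and checks the requested vCPUs against it. A simplified
paraphrase, not the actual nova code:

    def enough_cores(vcpus_total, vcpus_used, requested_vcpus,
                     cpu_allocation_ratio=16.0):
        # Simplified: the usable vCPU budget is the physical count
        # multiplied by the over-commit ratio.
        limit = vcpus_total * cpu_allocation_ratio
        return vcpus_used + requested_vcpus <= limit

    # With ratio 1.0, a 16-core host stops accepting once 16 vCPUs are claimed:
    print(enough_cores(16, 14, 4, cpu_allocation_ratio=1.0))  # False
)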
 
  The idea of choosing free host for Hadoop on the fly is rather 
complicated
  and contains several operations, namely: (1) assuring the host never 
get
  pass 100% CPU load; (2) identifying a host that already has a Hadoop 
VM
  running on it, or already 100% CPU commitment; (3) releasing the host 
from
  100% CPU commitment once the Hadoop VM stops; (4) possibly avoiding 
other
  applications to use the host (to economy the host resource).
 
  - You'll need (1) because otherwise your Hadoop VM would come short of
  resources after the host gets overloaded.
  - You'll need (2) because you don't want to restrict a new host while 
one of
  your 100% CPU committed hosts still has free resources.
  - You'll need (3) because otherwise your host would be forever 
restricted,
  and that is no longer on the fly.
  - You may need (4) because otherwise it'd be a waste of resources.
 
  The problem of changing CPU overcommit on the fly is that when your 
Hadoop
  VM is still running, someone else can add another VM in the same host 
with a
  higher CPU overcommit (e.g. 200%), (violating (1) ) thus effecting 
your
  Hadoop VM also.
  The idea of putting the host in the aggregate can give you (1) and 
(2). (4)
  is done by AggregateInstanceExtraSpecsFilter. However, it does not 
give you
  (3); which can be done with pCloud.
 
 Step 1: use flavors so nova can tell between the two workloads, and
 configure them differently
 
 Step 2: find capacity for your workload given your current cloud usage
 
 At the moment, most of our solutions involve reserving bits of your
 cloud capacity for different workloads, generally using host
 aggregates.
 
 The issue with claiming back capacity from other workloads is a bit
 trickier. The issue is I don't think you have defined where you get
 that capacity back from? Maybe you want to look at giving some
 workloads a higher priority over the constrained CPU resources? But
 you will probably starve the little people out at random, which seems
 bad. Maybe you want to have a concept of spot instances where they
 can use your spare capacity until you need it, and you can just kill
 them?
 
 But maybe I am misunderstanding your use case, it's not totally clear 
to me.
 
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 11:30 AM, Christopher Armstrong 
chris.armstr...@rackspace.com
 wrote:

On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt 
randall.b...@rackspace.com wrote:
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?


Not sure what you mean by policies are independent entities. You may have 
missed that the policy resource lives hierarchically under the group resource. 
Policies are strictly associated with one scaling group, so when a policy is 
executed (via a webhook), it's acting on the scaling group that the policy is 
associated with.

Whoops. Yeah, I missed that.



Regarding web hook execution and cool down, I think the response should be 
something like 307 if the hook is on cool down with an appropriate retry-after 
header.

Indicating whether a webhook was found or whether it actually executed anything 
may be an information leak, since webhook URLs require no additional 
authentication other than knowledge of the URL itself. Responding with only 202 
means that people won't be able to guess at random URLs and know when they've 
found one.

Perhaps, but I also miss important information as a legitimate caller as to 
whether or not my scaling action actually happened or I've been a little too 
aggressive with my curl commands. The fact that I get anything other than 404 
(which the spec returns if it's not a legit hook) means I've found *something* 
and can simply call it endlessly in a loop causing havoc. Perhaps the web hooks 
*should* be authenticated? This seems like a pretty large hole to me, 
especially if I can max someone's resources by guessing the right url.


On Nov 14, 2013, at 10:57 AM, Randall Burt 
randall.b...@rackspace.com
 wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong 
chris.armstr...@rackspace.com
 wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Zane Bitter

On 14/11/13 17:19, Christopher Armstrong wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling.
It's written in API-Blueprint format (which is a simple subset of
Markdown) and provides schemas for inputs and outputs using JSON-Schema.
The source document is currently at
https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the
token?
- how webhooks are done (though this shouldn't affect the API too much;
they're basically just opaque)


My 2c: the way I designed the Heat API was such that extant stacks can 
be addressed uniquely by name. Humans are pretty good with names, not so 
much with 128 bit numbers. The consequences of this for the design were:

 - names must be unique per-tenant
 - the tenant-id appears in the endpoint URL

However, the rest of OpenStack seems to have gone in a direction where 
the name is really just a comment field, everything is addressed only 
by UUID. A consequence of this is that it renders the tenant-id in the 
URL pointless, so many projects are removing it.


Unfortunately, one result is that if you create a resource and e.g. miss 
the Created response for any reason and thus do not have the UUID, there 
is now no safe, general automated way to delete it again. (There are 
obviously heuristics you could try.) To solve this problem, there is a 
proposal floating about for clients to provide another unique ID when 
making the request, which would render a retry of the request 
idempotent. That's insufficient, though, because if you decide to roll 
back instead of retry you still need a way to delete using only this ID.


So basically, that design sucks for both humans (who have to remember 
UUIDs instead of names) and machines (Heat). However, it appears that I 
am in a minority of one on this point, so take it with a grain of salt.



Please read and comment :)


A few comments...

#1 thing is that the launch configuration needs to be somehow 
represented. In general we want the launch configuration to be a 
provider template, but we'll want to create a shortcut for the obvious 
case of just scaling servers. Maybe we pass a provider template (or URL) 
as well as parameters, and the former is optional.


Successful creates should return 201 Created, not 200 OK.

Responses from creates should include the UUID as well as the URI. 
(Getting into minor details here.)
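
For example (field names, paths and response shape invented here, since the
draft is still in flux), a client creating a group would then see something
like:

    import json
    import requests

    TOKEN = 'a-valid-keystone-token'  # placeholder

    resp = requests.post(
        'http://heat.example.com/v1/groups',
        headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
        data=json.dumps({'name': 'web-workers', 'min_size': 1, 'max_size': 10}))

    assert resp.status_code == 201           # Created, not 200 OK
    group = resp.json()
    print(group['id'])                        # UUID of the new group
    print(resp.headers['Location'])           # URI of the new group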


Policies are scoped within groups, so do they need a unique id or would 
a name do?


I'm not sure I understand the webhooks part... webhook-exec is the thing 
that e.g. Ceilometer will use to signal an alarm, right? Why is it not 
called something like /groups/{group_id}/policies/{policy_id}/alarm ? 
(Maybe because it requires different auth middleware? Or does it?)


And the other ones are setting up the notification actions? Can we call 
them notifications instead of webhooks? (After all, in the future we 
will probably want to add Marconi support, and maybe even Mistral 
support.) And why are these attached to the policy? Isn't the 
notification connected to changes in the group, rather than anything 
specific to the policy? Am I misunderstanding how this works? What is 
the difference between 'uri' and 'capability_uri'?


You need to define PUT/PATCH methods for most of these also, obviously 
(I assume you just want to get this part nailed down first).


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][object] One question to the resource tracker session

2013-11-14 Thread Andrew Laski

On 11/14/13 at 05:37pm, Jiang, Yunhong wrote:



-Original Message-
From: Andrew Laski [mailto:andrew.la...@rackspace.com]
Sent: Wednesday, November 13, 2013 3:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][object] One question to the resource
tracker session

On 11/13/13 at 11:12pm, Jiang, Yunhong wrote:
Hi, Dan Smith and all,
I noticed the following statement in 'Icehouse tasks' in
https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetr
ics

convert resource tracker to objects
make resource tracker extensible
no db migrations ever again!!
extra specs to cover resources - use a name space

How is it planned to achieve the 'no db migrations ever again'? Even
with the object, we still need to keep resource information in the database. And
when a new resource type is added, we either add a new column to the table,
or we merge all resource information into a single column as a json
string and parse it in the resource tracker object.

You're right, it's not really achievable without moving to a schemaless
persistence model.  I'm fairly certain it was added to be humorous and
should not be considered an outcome of that session.


Andrew, thanks for the explanation. Not sure if anyone has interest in this 
task, otherwise I will take it.


There is a blueprint for part of this from Paul Murray, 
https://blueprints.launchpad.net/nova/+spec/make-resource-tracker-use-objects.  
So you could coordinate the work if you're interested.




--jyh




Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [style] () vs \ continuations

2013-11-14 Thread Monty Taylor


On 11/13/2013 08:08 PM, Robert Collins wrote:
 On 14 November 2013 13:59, Sean Dague s...@dague.net wrote:
 
 This is an area where we actually have consensus in our docs (have had
 for a while), the reviewer was being consistent with them, and it feels
 like you are reopening that for personal preference.
 
 Sorry that it feels that way. My personal code also uses ()
 overwhelmingly - so this isn't a personal agenda issue. I brought it
 up because the person that wrote the code had chosen to use \, and as
 far as I knew we didn't have a hard decision either way - and the
 style guide we have talks about preference, not requirement, but the review
 didn't distinguish between whether it's a suggestion or a requirement.
 I'm seeking clarity so I can review more effectively and so that our
 code doesn't end up consistent but hard to read.

I'd say we've got an expression of clarity here - which means that a
patch to the hacking guide to clarify the language on what our choice is,
as well as the addition of a hacking check to enforce it, would both be in
bounds.
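
For concreteness, the two forms under discussion look like this (made-up
example, not from any actual review):

    def some_function(a, b):
        return a + b

    first_argument, second_argument = 1, 2
    first_value, second_value = 3, 4

    # implicit continuation inside parentheses (what our guide prefers)
    result = some_function(first_argument,
                           second_argument)
    total = (first_value +
             second_value)

    # explicit backslash continuation (the form being discouraged)
    total = first_value + \
            second_value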

 Honestly I find \ at the end of a line ugly as sin, and completely
 jarring to read. I actually do like the second one better. I don't care
 enough to change a policy on it, but we do already have a policy, so it
 seems pretty pedantic, and not useful.
 
 Ok, that's interesting. Readability matters, and if most folks find that
 even this case - which is pretty much the one case where I would argue
 for \ - is still easier to read with (), then that's cool.
 
 Bringing up for debate the style guide every time it disagrees with your
 personal preference isn't a very effective use of people's time.
 Especially on settled matters.
 
 Totally not what I'm doing. I've been told that much of our style
 guide was copied lock, stock and barrel from some Google Python style
 guide, so I can't tell what is consensus and what is 'what someone
 copied down one day'. Particularly when there is no rationale included
 against the point - it's a black box and entirely opaque.
 
 -Rob
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] External authentication

2013-11-14 Thread Adam Young

On 11/14/2013 10:52 AM, Álvaro López García wrote:

Hi all,

During the review of [1] I had a look at the tests that are related
with external authentication (i.e. the usage of REMOTE_USER) in
Keystone and I realised that there is a bunch of them that are setting
external as one of the authentication methods. However, in
keystone.auth.controllers there is an explicit call to the external
methods whenever REMOTE_USER is set [2].

Should we call the external authentication only when external is set
(i.e. in [3]) regardless of the REMOTE_USER presence in the context?
I'd like to.  We made a decision to make the user explicitly enable 
External authentication in the config, but there is no reason that it 
would have to extend to the request body itself.
In theory we could do a token creation request with no body at all, the 
same way we do role assignments:


To create a project-scoped token:
PUT /auth/tokens/domain/{domid}/project/{projectid}

And to create a domain token:
PUT /auth/tokens/domain/{domid}


That would work very well with Basic-Auth or other external formats. The 
body would then only have to contain any mitigating factors, like a shorter 
expiry or a reduced set of roles.
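
To make that concrete, a client call against such an endpoint might look 
roughly like the sketch below - note the URL is only the proposal above, not 
an existing Keystone API, and the body contents are just my guess:

    import json
    import requests

    # Identity comes from the external (here Basic-Auth) credentials; the
    # body only carries mitigating factors such as a shorter expiry.
    resp = requests.put(
        "http://keystone.example.com:5000/v3/auth/tokens/domain/d1/project/p1",
        auth=("alice", "secret"),
        data=json.dumps({"token": {"expires_at": "2013-11-15T00:00:00Z"}}),
        headers={"Content-Type": "application/json"})
    token_id = resp.headers.get("X-Subject-Token")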







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-14 Thread Jeremy Stanley
On 2013-11-14 11:25:31 + (+), Salvatore Orlando wrote:
[...]
 The fact that webex is not a free and open source service is
 another aspect to take into account I'll now duck before stones
 start being thrown as I'm not really the guy who can play FOSS
 advocate.

Don't worry, I (and many others) will gladly step in front of any
barrage of stones hurled for suggesting we stick to free software
when collaborating on OpenStack.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Meeting skip for this week

2013-11-14 Thread Joshua Harlow
Since I just got back from HK yesterday, and I think others are still 
recovering from that trip, let's skip the weekly IRC meeting this week.

Also note that the US time has changed (due to the evil thing called DST).

Link: https://wiki.openstack.org/wiki/TaskFlow
Channel: #openstack-state-management

Summit videos/slides:
 - http://www.slideshare.net/harlowja/taskflow-27820295
 - 
http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/taskflow-an-openstack-library-that-helps-make-task-execution-easy-consistent-and-reliable

Summit sessions:
 - https://etherpad.openstack.org/p/icehouse-cinder-taskflow-next-steps
 - https://etherpad.openstack.org/p/icehouse-summit-taskflow-and-glance
 - https://etherpad.openstack.org/p/icehouse-summit-heat-workflow
 - https://etherpad.openstack.org/p/IcehouseConductorTasksNextSteps

Keep up the good work folks, zzz…

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][object] One question to the resource tracker session

2013-11-14 Thread Jiang, Yunhong


 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: Thursday, November 14, 2013 10:02 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][object] One question to the resource
 tracker session
 
 On 11/14/13 at 05:37pm, Jiang, Yunhong wrote:
 
  -Original Message-
  From: Andrew Laski [mailto:andrew.la...@rackspace.com]
  Sent: Wednesday, November 13, 2013 3:22 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova][object] One question to the
 resource
  tracker session
 
  On 11/13/13 at 11:12pm, Jiang, Yunhong wrote:
  Hi, Dan Smith and all,
I noticed followed statement in 'Icehouse tasks' in
 
 https://etherpad.openstack.org/p/IcehouseNovaExtensibleSchedulerMetr
  ics
  
convert resource tracker to objects
make resoruce tracker extensible
no db migrations ever again!!
extra specs to cover resources - use a name space
  
How is it planned to achieve the 'no db migrations ever again'?
 Even
  with the object, we still need keep resource information in database.
 And
  when new resource type added, we either add a new column to the
 table.
  Or it means we merge all resource information into a single column as
 json
  string and parse it in the resource tracker object?.
 
  You're right, it's not really achievable without moving to a schemaless
  persistence model.  I'm fairly certain it was added to be humorous and
  should not be considered an outcome of that session.
 
 Andrew, thanks for the explanation. Not sure anyone have interests on
 this task, otherwise I will take it.
 
 There is a blueprint for part of this from Paul Murray,
 https://blueprints.launchpad.net/nova/+spec/make-resource-tracker-use-
 objects.
 So you could coordinate the work if you're interested.

Yes, just noticed it and the first 2 sponsors. I will keep an eye on it.

--jyh

 
 
 --jyh
 
 
  
  Thanks
  --jyh
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipv6] IPv6 meeting - Thursdays 21:00UTC - #openstack-meeting-alt

2013-11-14 Thread Mark McClain

On Nov 13, 2013, at 12:46 PM, Collins, Sean (Contractor) 
sean_colli...@cable.comcast.com wrote:

 On Wed, Nov 13, 2013 at 10:20:55AM -0500, Shixiong Shang wrote:
 Thanks a bunch for finalizing the time! Sorry for my ignorance….how do we 
 usually run the meeting? On Webex or IRC channel? 
 
 IRC.
 
 I'm not opposed to Webex (other teams have used it before) - but it
 would involve more set-up. We'd need to publish recordings,
 so that there is a way for those that couldn't attend to review,
 similar to how the IRC meetings are logged.


Please use IRC.  It’s the community standard meeting platform and provides 
instantly searchable text logs.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit outcomes

2013-11-14 Thread Adam Young

On 11/14/2013 10:03 AM, Steven Hardy wrote:

On Wed, Nov 13, 2013 at 10:04:04AM -0600, Dolph Mathews wrote:

I guarantee there's a few things I'm forgetting, but this is my collection
of things we discussed at the summit and determined to be good things to
pursue during the icehouse timeframe. The contents represent a high level
mix of etherpad conclusions and hallway meetings.

https://gist.github.com/dolph/7366031

Looks good, but I have some feedback on items which were discussed (either
in the delegation session or in the hallway with ayoung/jlennox) and are
high priority for Heat; I don't see these captured in the page above:

Delegation:
- Need a way to create a secret derived from a trust (natively, not via
   ec2tokens extension), and it needs to be configurable such that it
   won't expire, or has a very long expiry time. ayoung mentioned a
   credential mechanism, but I'm not sure which BP he was referring to, so
   clarification appreciated.
I am not sure this is pointing in the right direction.  Trusts assume 
authentication from a separate source, like the token the trustee 
passes in when executing the trust.  Long term credentials should be 
used in conjunction with a trust, but separate from it.  I suspect that 
there is something that could be done with X509, Trusts, and Token 
Binding that would make sense here.


Something like:

1.  Identify client machine
2.  Generate new cert for client machine (key stays on client, gets 
signed by CA)

3.  Generate trust, and link trust to new cert.
4.  Client machine uses cert and trust to get tokens.






Client:
- We need a way to get the role-tenant pairs (not just the tenant-less
   role list) into the request context, so we can correctly scope API
   requests.  I raised this bug:
A token is scoped to something: project, domain, whatever. Providing 
tokens that are scoped wider is somewhat suspect.  What you want to do 
is to query the data from Keystone using an unscoped token.  But any 
token from a user sent back to Keystone (except a trust token) should be 
able to access this information.


Put another way: you need to query the role information for additional 
requests to Keystone.  A token is specifically for proving to a third 
party that you have access to the information. Since anything that is going 
to be done with this information is going to be validated by Keystone, 
it is OK to do it as a query against Keystone itself.


So, I think what you want is a way to query all roles for a user on all 
projects in all domains.  This can be done today by doing individual calls 
(enumerate domains, enumerate projects), which is chatty.  If it proves 
to be a performance bottleneck, we can optimize it into a single call.
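
Roughly like the sketch below with python-keystoneclient, assuming the v3 
managers (domains/projects/roles) behave the way I remember - the endpoint, 
token and user id are placeholders:

    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://keystone.example.com:35357/v3')
    user_id = 'some-user-id'

    # The "chatty" version: enumerate domains, then projects, then ask
    # for the role grants on each one.
    assignments = {}
    for domain in ks.domains.list():
        for project in ks.projects.list(domain=domain):
            roles = ks.roles.list(user=user_id, project=project)
            if roles:
                assignments[project.name] = [r.name for r in roles]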




   https://bugs.launchpad.net/python-keystoneclient/+bug/1250982

   Related to this thread (reminder - which you said you'd respond to ;):

   http://lists.openstack.org/pipermail/openstack-dev/2013-November/018201.html

   This topic came up again today related to tenant-scoped nova flavors:

   http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html

   Closely related to this bug I think:

   https://bugs.launchpad.net/keystone/+bug/968696

   I'd welcome discussion on how we solve the request-scoping issue
   openstack-wide, currently I'm thinking we need the role-tenant pairs (and
   probably role-domain pairs) in the request context, so we can correctly
   filter in the model_query when querying the DB while servicing the
   API request.

Thanks,

Steve




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

