Re: [openstack-dev] [Ceilometer] Is that possible to implement new APIs for horizon to show the usage report and charts?

2013-07-25 Thread Brooklyn Chen
Thanks for your reply.

1. Actually, /statistics is already used by Horizon to render the tables, and
grouping by resource is needed. The number of HTTP requests should be
reduced to speed up page loading.

2. /statistics can't do this. Horizon may need hundreds or thousands of
values to render a chart; it would be a disaster if that many requests were
sent to the Ceilometer API to render a single chart.
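
To make the request shape concrete, here is a rough sketch (not Horizon's actual
code) of pulling per-period statistics for one resource with
python-ceilometerclient; the credentials and resource id are placeholders, and
I'm assuming the client passes the API's period argument through. The missing
piece discussed in this thread is asking the API to group by resource so one
call can cover a whole chart.

# Rough sketch, not Horizon code: one statistics request per resource.
# Credentials, auth URL and resource id below are placeholders.
from ceilometerclient import client

cclient = client.get_client(2,
                            os_username='admin',
                            os_password='secret',
                            os_tenant_name='admin',
                            os_auth_url='http://127.0.0.1:5000/v2.0')

query = [{'field': 'resource_id', 'op': 'eq', 'value': 'some-resource-id'}]

# period=3600 buckets the samples into hourly statistics for charting.
for stat in cclient.statistics.list(meter_name='cpu_util', q=query, period=3600):
    print("%s avg=%s max=%s" % (stat.period_start, stat.avg, stat.max))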


On Tue, Jul 23, 2013 at 8:54 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jul 23 2013, Brooklyn Chen wrote:

 
  It would be helpful if ceilometer-api provided the following API:
 
  GET /v2/usages/disk/
 
  Parameters:  q(list(Query)) Filter rules for the resources to be
 returned.
  Return Type: list(Usage) A list of usage with different tenant,user,
  resource
 
  GET /v2/usages/disk/usage_id

 Did you try /v2/meter/meter-id/statistics ?
 I think /statistics is good enough *except* that it misses the ability
 to group the statistics by resource.

  2. We need gauge data like cpu_util to render stat charts.
  We have cumulative meters like disk.read.bytes and
  networking.incoming.bytes, but they cannot be used directly for drawing
  charts since their values are always increasing.

 The /statistics with the period= argument would allow you to do that as
 far as I can tell.

 --
 Julien Danjou
 # Free Software hacker # freelance consultant
 # http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-25 Thread Rosa, Andrea (HP Cloud Services)
I'd like to turn it off by default. As already pointed out in [1], I think the rate
limiting should be managed by something else (for example, load balancers) in
front of the API.

Regards
--
Andrea Rosa
[1] http://www.gossamer-threads.com/lists/openstack/operators/28599


From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 24 July 2013 23:39
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting 
by default

Hi all

I have proposed a patch to disable per-user rate limiting by default: 
https://review.openstack.org/#/c/34821/. And at Russell's request: does anyone
care, or prefer this to be enabled by default?

Here is some more context:

Earlier rate limiting discussion: 
http://www.gossamer-threads.com/lists/openstack/operators/28599
Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529
rate limiting is per process, and doesn't act as expected in a multi-process 
environment: https://review.openstack.org/#/c/36516/

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Extending policy checking to include target entities

2013-07-25 Thread David Chadwick
I have responded to your post, as I don't think it solves the identified
problem.


regards

David

On 24/07/2013 23:26, Tiwari, Arvind wrote:

I have added my proposal @ https://etherpad.openstack.org/api_policy_on_target.

Thanks,
Arvind

-Original Message-
From: Henry Nash [mailto:hen...@linux.vnet.ibm.com]
Sent: Wednesday, July 24, 2013 8:46 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Extending policy checking to include 
target entities

I think we should transfer this discussion to the etherpad for this blueprint: 
https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's make any 
further comments there, rather than here.

Henry
On 24 Jul 2013, at 00:29, Simo Sorce wrote:


On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:

...the problem is that if the object does not exist we might not be able to tell
whether the user is authorized or not (since authorization might depend on
attributes of the object itself), so how do we know whether to lie or not?


If the error you return is always 'Not Found', why do you care ?

Simo.


Henry
On 23 Jul 2013, at 21:23, David Chadwick wrote:




On 23/07/2013 19:02, Henry Nash wrote:

One thing we could do is:

- Return Forbidden or NotFound if we can determine the correct answer
- When we can't (i.e. the object doesn't exist), then return NotFound
unless a new config value 'policy_harden' (?) is set to true (default
false) in which case we translate NotFound into Forbidden.


I am not sure that this achieves your objective of no data leakage through
error codes, does it?

It's not a question of determining the correct answer or not; it's a question of
whether the user is authorised to see the correct answer or not.

regards

David


Henry
On 23 Jul 2013, at 18:31, Adam Young wrote:


On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we took was as follows

Lie to people who are not authorised.


Is that your verbiage?  I am going to reuse that quote, and I would
like to get the attribution correct.



So applying this approach to your situation, you could reply Not
Found to people who are authorised to see the object if it had
existed but does not, and Not Found to those not authorised to see
it, regardless of whether it exists or not. In this case, only those
who are authorised to see the object will get it if it exists. Those
not authorised cannot tell the difference between objects that don't
exist and those that do exist.


So, to try and apply this to a semi-real example:  There are two types
of URLs.  Ones that are like this:

users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global collection, and
in the second, against a scoped collection.

For unscoped, you have to treat all users as equal, and thus a 404
probably makes sense.

For a scoped collection we could return a 404 or a 403 Forbidden
(https://en.wikipedia.org/wiki/HTTP_403) based on the user's
credentials: all resources under domain/66DEADBEEF would show up
as 403s regardless of existence or not if the user had no roles in
the domain 66DEADBEEF. A user that would be allowed access to
resources in 66DEADBEEF would get a 403 only for an object that
existed but that they had no permission to read, and a 404 for a
resource that doesn't exist.
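
To make that concrete, a rough sketch in Python of the decision rule above; the
helper names and fake lookup table are made up for illustration, not Keystone
code:

from webob import exc

# Illustrative only: hypothetical helpers and data, not Keystone APIs.
FAKE_USERS = {('66DEADBEEF', '55FEEDBABECAFE'): {'name': 'demo'}}

def user_has_role_in_domain(context, domain_id):
    return domain_id in context.get('roles_by_domain', {})

def authorized_to_read(context, user):
    return True

def lookup_user(domain_id, user_id):
    return FAKE_USERS.get((domain_id, user_id))

def get_scoped_user(context, domain_id, user_id):
    if not user_has_role_in_domain(context, domain_id):
        # No standing in the domain at all: everything under it is 403,
        # so the existence of individual objects is not leaked.
        raise exc.HTTPForbidden()
    user = lookup_user(domain_id, user_id)
    if user is None:
        # Caller is scoped to the domain, so a plain 404 is safe here.
        raise exc.HTTPNotFound()
    if not authorized_to_read(context, user):
        # Exists, but this caller may not read it.
        raise exc.HTTPForbidden()
    return user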






regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
I have uploaded some example WIP code showing a proposed approach
for just a few API calls (one easy, one more complex). I'd
appreciate early feedback on this before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you are going to get
a target object before doing your policy check.  What do you do if
the object does not exist?  If you return NotFound, then someone
who was not authorized could troll for the existence of entities by
seeing whether they got NotFound or Forbidden. If, however, you
return Forbidden, then users who are authorized to, say, manage
users in a domain would always get Forbidden for objects that didn't
exist (since we can't know where the non-existent object was!).  So
this would modify the expected return codes.

- I really think we need some good documentation on how to build
keystone policy files.  I'm happy to take a first cut at such a
thing - what do you think the right place is for such documentation?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Stackalytics] 0.1 release

2013-07-25 Thread Gareth
A suggestion:

sorting bug numbers as int is much better than as string, because '112' < '8'
as strings but actually 112 > 8

http://stackalytics.com/companies/unitedstack
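
For illustration, the difference in Python:

# String sort vs numeric sort of the same values.
counts = ['112', '8', '27']
print(sorted(counts))           # ['112', '27', '8']   (lexicographic)
print(sorted(counts, key=int))  # ['8', '27', '112']   (numeric)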


On Thu, Jul 25, 2013 at 9:31 AM, Alex Freedland afreedl...@mirantis.comwrote:

 Roman,

 Thank you for your comment. I agree that it should not be the only way to
 look at the statistics, and that is why Stackalytics also measures the
 number of contributions and soon will add the number of reviews. I do,
 however, think it is a useful statistic because not all commits are created
 equal.

 To your argument that the developers will write longer code just for the
 sake of statistics, I think this will not happen en masse. First and
 foremost, the developers care about their reputations and, knowing that
 their code is peer-reviewed, very few will intentionally write inefficient
 code just to get their numbers up. Those few who choose this route
 will lose the respect of their peers and consequently will not be able to
 contribute as much.

 Also, in order to deal with the situations where people can manipulate the
 numbers, Stackalytics allows anyone in the community to correct the line
 count where it does not make sense.  (
 https://wiki.openstack.org/wiki/Stackalytics#Commits_metrics_corrections_and_a_common_sense_approach
 ).

 We welcome any other improvements and suggestions on how to make OpenStack
 statistics more transparent, meaningful and reliable.

 Alex Freedland




 On Tue, Jul 23, 2013 at 7:25 AM, Roman Prykhodchenko 
 rprikhodche...@mirantis.com wrote:

 I still think counting lines of code is evil because it might encourage
 some developers to write longer code just for statistics.

 On Jul 23, 2013, at 16:58 , Herman Narkaytis hnarkay...@mirantis.com
 wrote:

 Hello everyone!

 Mirantis (http://www.mirantis.com/) is pleased to announce the release
 of Stackalytics (http://www.stackalytics.com/) 0.1. You can find
 complete details on the Stackalytics wiki page
 (https://wiki.openstack.org/wiki/Stackalytics),
 but here are the brief release notes:

- Changed the internal architecture. Main features include advanced
real time processing and horizontal scalability.
- Got rid of all 3rd party non-Apache libraries and published the
source on StackForge under the Apache2 license.
- Improved release cycle tracking by using Git tags instead of
approximate date periods.
- Changed project classification to a two-level structure: OpenStack 
 (core,
incubator, documentation, other) and StackForge.
- Implemented correction mechanism that allows users to tweak metrics
for particular commits.
- Added a number of new projects (Tempest, documentation, Puppet
recipes).
- Added company affiliated contribution breakdown to the user's
profile page.

 We welcome you to read, look it over, and comment.

 Thank you!

 --
 Herman Narkaytis
 DoO Ru, PhD
 Tel.: +7 (8452) 674-555, +7 (8452) 431-555
 Tel.: +7 (495) 640-4904
 Tel.: +7 (812) 640-5904
  Tel.: +38(057)728-4215
 Tel.: +1 (408) 715-7897
 ext 2002
 http://www.mirantis.com

 This email (including any attachments) is confidential. If you are not
 the intended recipient you must not copy, use, disclose, distribute or rely
 on the information contained in it. If you have received this email in
 error, please notify the sender immediately by reply email and delete the
 email from your system. Confidentiality and legal privilege attached to
 this communication are not waived or lost by reason of mistaken delivery to
 you. Mirantis does not guarantee (that this email or the attachment's) are
 unaffected by computer virus, corruption or other defects. Mirantis may
 monitor incoming and outgoing emails for compliance with its Email Policy.
 Please note that our servers may not be located in your country.
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Challenges with highly available service VMs - port and security group options.

2013-07-25 Thread Samuel Bercovici
Hi,

I had to patch the security groups to add the following rule, otherwise 
broadcast is blocked:
-m pkttype --pkt-type multicast -j RETURN

Ex:
In /opt/stack/quantum/quantum/agent/linux/iptables_firewall.py

    def _allow_multicast_rule(self, iptables_rules):
        # Let multicast packets (e.g. VRRP advertisements) through.
        iptables_rules += ['-m pkttype --pkt-type multicast -j RETURN']
        return iptables_rules

    def _convert_sgr_to_iptables_rules(self, security_group_rules):
        iptables_rules = []
        self._drop_invalid_packets(iptables_rules)
        self._allow_established(iptables_rules)
        self._allow_multicast_rule(iptables_rules)


From: Aaron Rosen [mailto:aro...@nicira.com]
Sent: Wednesday, July 24, 2013 11:58 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List; sorla...@nicira.com; Avishay Balderman; 
gary.kot...@gmail.com
Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service
VMs - port and security group options.



On Wed, Jul 24, 2013 at 12:42 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:
Hi,

This might be apparent, but not to me.
Can you point to how broadcast can be turned on for a network/port?

There is currently no way to prevent it, so it's on by default.

As for
https://github.com/openstack/neutron/blob/master/neutron/extensions/portsecurity.py,
in NVP, does this totally disable port security on a port/network, or does it just
disable the MAC/IP checks and still allow the user-defined port security to
take effect?

Port security is currently obtained from the fixed_ips and mac_address field on 
the port. This removes the filtering done on fixed_ips and mac_address fields 
when disabled.

This looks like an extension only implemented by NVP, do you know if there are 
similar implementations for other plugins?

No, the other plugins do not currently have a way to disable spoofing 
dynamically (only globally disabled).


Regards,
-Sam.


From: Aaron Rosen [mailto:aro...@nicira.commailto:aro...@nicira.com]
Sent: Tuesday, July 23, 2013 10:52 PM
To: Samuel Bercovici
Cc: OpenStack Development Mailing List; 
sorla...@nicira.commailto:sorla...@nicira.com; Avishay Balderman; 
gary.kot...@gmail.commailto:gary.kot...@gmail.com

Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service
VMs - port and security group options.

I agree too. I've posted a work in progress of this here if you want to start 
looking at it: https://review.openstack.org/#/c/38230/
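
For illustration only, a hypothetical sketch of what setting an extra allowed
(e.g. VRRP) address on a port might look like from python-neutronclient; the
attribute name and payload shape here are assumptions, not necessarily what
that review implements:

# Hypothetical sketch: 'allowed_address_pairs' is an assumed attribute
# name/shape used only to illustrate the idea. Credentials are placeholders.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://127.0.0.1:5000/v2.0')

body = {'port': {'allowed_address_pairs': [{'ip_address': '10.0.0.100'}]}}
neutron.update_port('PORT-UUID', body)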

Thanks,

Aaron

On Tue, Jul 23, 2013 at 4:21 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:
Hi,

I agree that the AuthZ should be separated and the service provider should be
able to control this based on their model.

For service VMs which might be serving ~100-~1000 IPs and might use multiple MACs
per port, it would be better to turn this off altogether than to have
iptables rules with thousands of entries.
This is why I prefer to be able to turn off IP spoofing and MAC spoofing checks
altogether.

Still, for logical model / declarative reasons, an IP that can migrate between
different ports should be declared as such, and maybe the same from a MAC perspective.

Regards,
-Sam.








From: Salvatore Orlando [mailto:sorla...@nicira.commailto:sorla...@nicira.com]
Sent: Sunday, July 21, 2013 9:56 PM

To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron] Challenges with highly available service
VMs - port and security group options.



On 19 July 2013 13:14, Aaron Rosen 
aro...@nicira.commailto:aro...@nicira.com wrote:


On Fri, Jul 19, 2013 at 1:55 AM, Samuel Bercovici 
samu...@radware.commailto:samu...@radware.com wrote:

Hi,



I have completely missed this discussion as it did not have quantum/Neutron in
the subject (I have modified it now).

I think that the security group is the right place to control this.

I think that this might be only allowed to admins.


I think this shouldn't be admin-only; since tenants have control of their own
networks, they should be allowed to do this.

I reiterate my point that the authZ model for a feature should always be
completely separated from the business logic of the feature itself.
In my opinion there are grounds both for scoping it as admin-only and for
allowing tenants to use it; it might be better if we just let the policy engine
deal with this.


Let me explain what we need which is more than just disable spoofing.

1.   Be able to allow MACs which are not defined on the port level to
transmit packets (for example VRRP MACs) == turn off MAC spoofing checks

For this it seems you would need to implement the port security extension which 
allows one to enable/disable port spoofing on a port.

This would be one way of doing it. The other would probably be adding a list of 
allowed VRRP MACs, which should be possible with the blueprint pointed by Aaron.

2.   Be able to allow IPs which are not defined on the port level to 
transmit packets (for example, IP used for HA service that 

[openstack-dev] [savanna] Team meeting reminder July 25 18:00 UTC

2013-07-25 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting today as usual in the
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_July.2C_25

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20130725T18

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-25 Thread Derek Higgins
On 25/07/13 09:41, Chris Jones wrote:
 Hi
 
 On 24 July 2013 22:18, Derek Higgins der...@redhat.com
 mailto:der...@redhat.com wrote:
  - setup passwordless sudo or
 Doesn't sound like a super awesome option to me, it places an ugly
 security problem on anyone wanting to set this up anywhere, imo.
 
 I don't think it's any worse than the security implications of running
 di-b as root.
 
 Assuming I interpreted this option correctly, we're talking about giving
 some user blanket passwordless sudo, which seems like the kind of
 requirement that no sane sysadmin is going to be interested in granting
 without some seriously onerous precautions to protect against abuse/exploit.
 
 What's the advantage here over simply fixing di-b to work when invoked
 with sudo?

yes, I am talking about giving a user blanket passwordless sudo. I don't
think any sane sysadmin would give users the ability to run di-b on a
host that has any purpose other than to build images, so I am basically
saying that we should be using sudo inside di-b not as a security
measure but more as a measure to protect the host machine against
problems with buggy code. Running di-b with sudo would remove any
protection provided by the need to explicitly state when a command
requires root.

This all looks like we are taking our current setup with a sudoers file
and making it less secure, but our current sudoers file lets me do all
kinds of things, e.g.:

[stack@fido derekh]$ sudo head -n 1 /etc/shadow
[sudo] password for stack:

[stack@fido derekh]$ echo 'ALL ALL=(root) NOPASSWD: ALL' | sudo /bin/dd
of=/tmp/image.JZH7Krvy/mnt/../../../etc/sudoers.d/letmedoanything

[stack@fido derekh]$ sudo head -n 1 /etc/shadow
root:$6$snip/:15827:0:9:7:::

which only gives people an incorrect sense of security.

Thanks,
Derek.

 
 
 -- 
 Cheers,
 
 Chris
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-25 Thread Mark McClain

On Jul 25, 2013, at 4:01 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Jul 24 2013, Russell Bryant wrote:
 
 A practical approach would probably be:
 
 1) Prefer mock for new tests.
 
 2) Use suggestion #2 above to mitigate the Python 3 concern.
 
 3) Convert tests to mock over time, opportunistically, as tests are
 being updated anyway.  (Or if someone *really* wants to take this on as
 a project ...)
 
 Agreed. I will -1 every new patch coming with mox inside. :-)
 
 And I'll probably do some conversion for Oslo anyway.

The Neutron project has been enforcing this approach for about a year now.  
We're down to 8 files that still rely on Mox.
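
For anyone doing such conversions, a minimal before/after sketch (the class and
method names are made up, not taken from a real Neutron test):

# Illustrative mox -> mock conversion; names are invented for the example.
import unittest

import mock
import mox


class ClientStatusTestCase(unittest.TestCase):
    def test_get_status_with_mox(self):
        m = mox.Mox()
        client = m.CreateMockAnything()
        # Record the expected call and its return value, then replay.
        client.get_status('node-1').AndReturn('ACTIVE')
        m.ReplayAll()
        self.assertEqual('ACTIVE', client.get_status('node-1'))
        m.VerifyAll()

    def test_get_status_with_mock(self):
        client = mock.Mock()
        client.get_status.return_value = 'ACTIVE'
        self.assertEqual('ACTIVE', client.get_status('node-1'))
        client.get_status.assert_called_once_with('node-1')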

mark



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] QA discussions moving to openstack-dev mailing list

2013-07-25 Thread Sean Dague
Last week we decided at the QA meeting to deprecate the openstack-qa
list and move conversations to openstack-dev instead, using the
[qa] tag. We're still figuring out whether there will be an alias for
compatibility or not, but until then, please send your traffic over to
openstack-dev.


Thanks all!

-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Reminder - Weekly QA Meeting 17:00 UTC today (Thursday)

2013-07-25 Thread Sean Dague

So far we have the following proposed agenda items:

* Blueprints (sdague)
   - Current state of implementation
* WebDav status codes in nova? Consistently using the 404 or 422 on actions.
* Adding test cases with skip attribute vote? Exact rule (afazekas)
* py26 compatibility (afazekas)
* Consistent reviewing (afazekas)
* Critical Reviews (sdague)
* Separate heat job and slow tests in general (dkranz)

Please come join us in #openstack-meeting at 17:00 UTC if you are
interested in any of those topics. And please feel free to propose
additional topics on
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting before the meeting.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] 422 status codes from API

2013-07-25 Thread Sean Dague
Some new tempest tests have been showing up that push deeper into the
Nova API, and what's popping out seems to be a lot of places where we are
returning 422 status codes.


422 is a WebDAV-reserved code (WebDAV; RFC 4918), not part of the core
HTTP spec, and a lot of the other projects have purged it from their APIs.


It would be good to understand whether we consider this a bug that should get
fixed in v2, a v2 oversight that gets fixed in v3, or par for the
course, as that gives us some guidance on what to land in tempest and
what we file bugs on for nova.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-07-25 Thread Julien Danjou
On Thu, Jul 25 2013, Mark McClain wrote:

 The Neutron project has been enforcing this approach for about a year now.
 We're down to 8 files that still rely on Mox.

Awesome, we'll have to bring Python 3 badges at the next summit for
Neutron devs. ;)

-- 
Julien Danjou
/* Free Software hacker * freelance consultant
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 422 status codes from API

2013-07-25 Thread John Griffith
Just as a data point, we considered these a *bug* in Cinder and fixed them
as I recall.


On Thu, Jul 25, 2013 at 8:29 AM, Sean Dague s...@dague.net wrote:

 Some new tempest tests have been showing up that push deeper into the Nova
 API, and what's popping out seems to be a lot of places where we are returning 422
 status codes.

 422 is a WebDAV-reserved code (WebDAV; RFC 4918), not part of the core HTTP spec,
 and a lot of the other projects have purged it from their APIs.

 It would be good to understand whether we consider this a bug that should get
 fixed in v2, a v2 oversight that gets fixed in v3, or par for the course,
 as that gives us some guidance on what to land in tempest and what we file
 bugs on for nova.

 -Sean

 --
 Sean Dague
 http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] [qa] A tempest test can't pass gate-grenade-devstack-vm

2013-07-25 Thread Matthew Treinish
On Thu, Jul 25, 2013 at 10:16:54AM +0930, Christopher Yeoh wrote:
 On Wed, 24 Jul 2013 10:22:32 -0400
 Matthew Treinish mtrein...@kortar.org wrote:
  On Wed, Jul 24, 2013 at 10:48:19PM +0930, Christopher Yeoh wrote:
   On Wed, 24 Jul 2013 21:08:03 +0800
   Zhu Bo bo...@linux.vnet.ibm.com wrote:
   Just to expand on this a bit, from the logs it appears that within
   the grenade environment the compute_v3 endpoint can't be found.
   However the same test does pass within the normal tempest/devstack
   gate environment. We were wondering if there is perhaps some change
   that is required to add the compute_v3 endpoint into the grenade
   environment (more than the devstack changeset which has already
   been merged).
  
  So if I remember correctly grenade uses devstack but has to use a
  separate configuration because the defaults in devstack-gate are not
  going to be the same from grizzly to master. (a good example is heat)
  So you probably will have to make changes to ensure that it's getting
  enabled when grenade is enabled. You can look at:
  https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh
  to see if there is anything that needs to be added for grenade (I
  didn't see anything that would break v3 at first glance)
 
 Thanks we'll double check those as well. Talking to Sean last night it
 looks like it is most likely due to the api-paste file not being
 updated when upgrading nova in grenade. This is deliberately not done
 automatically because of api-paste being considered a config file more
 than code. But I've heard different opinions as well so if anyone else
 has an opinion please chime in :-) In the meantime we'll add some code
 to manually update the paste file to enable the v3 api when upgrading
 nova in grenade.
 
   But, I'm curious: is there a reason that the v3 tests in tempest don't
   use api version endpoint discovery? I did this for the glance tests so
   that if there isn't a v2 endpoint (or a v1 endpoint) then those tests
   are just skipped. Something similar should easily be implementable
   for nova's apis. This way the nova v3 api isn't a hard requirement for
   running tempest. (Which I don't think it should be)
 
  I don't think we should do auto-skipping of tests for v3 because of the
  risk of accidentally skipping them. A config option instead, perhaps.

That's a fair point, a config option would probably be better than auto
detection. That's the path we've gone down for all of the services in
general so we probably should extend it to api versions as well. We'll just
need to make sure it's easily auto-configurable from devstack.
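
As a sketch of what such a flag could look like (the option name and group
below are hypothetical, not tempest's actual config), something along these
lines registered with oslo.config:

# Hypothetical sketch of a tempest feature flag for the Nova v3 API;
# the option name and group are illustrative, not tempest's real config.
from oslo.config import cfg

compute_features_group = cfg.OptGroup(name='compute-feature-enabled',
                                      title='Enabled compute API features')

ComputeFeaturesOpts = [
    cfg.BoolOpt('api_v3',
                default=False,
                help='If false, skip all Nova v3 API tests'),
]

def register_compute_feature_opts(conf):
    conf.register_group(compute_features_group)
    conf.register_opts(ComputeFeaturesOpts, group=compute_features_group)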

 
  I do think v3 testing will need to become a hard requirement for
  tempest testing pretty soon. Otherwise we are going to regress pretty
  quickly given how easy it is to modify the v2 api without modifying the
  v3 one.

So I meant in general, not for gating. Of course we want v3 on every gate
run. All I meant by hard requirement is that while there is still a v2 api in
nova we should not force people running tempest to be running v3 on their
instance of nova. A config option or api endpoint version discovery both
accomplish this.

 
  I'm also not sure testing the v3 api in grenade is needed at this
  point since it wasn't in grizzly there really isn't an upgrade path
  so I don't think that there is any extra value in testing it in
  grenade too. (I might be wrong about this though)
 
 I think that's true, but there isn't an easy way to disable tests just
 for grenade is there?
 

If we add a config option for nova v3 enabled then we can just have grenade set
that to false. Right now you would have to manually exclude them. Also, I believe
that grenade only runs the smoke tests, so if you don't tag the v3 tests as smoke
they shouldn't be run by grenade either. (This will only be an issue for
neutron, which is the other use case of the smoke attr.)

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-25 Thread Bartosz Górski
First of all, sorry for the late reply. I needed some time to understand
your vision and check a few things.


On 07/24/2013 08:40 AM, Clint Byrum wrote:

Excerpts from Adrian Otto's message of 2013-07-23 21:22:14 -0700:

Clint,

On Jul 23, 2013, at 10:03 AM, Clint Byrum cl...@fewbar.com
  wrote:


Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:

On 07/23/2013 10:46 AM, Angus Salkeld wrote:

On 22/07/13 16:52 +0200, Bartosz Górski wrote:

Hi folks,

I would like to start a discussion about the blueprint I raised about
multi region support.
I would like to get feedback from you. If something is not clear or
you have questions do not hesitate to ask.
Please let me know what you think.

Blueprint:
https://blueprints.launchpad.net/heat/+spec/multi-region-support

Wikipage:
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat


What immediately looks odd to me is you have a MultiCloud Heat talking
to other Heats in each region. This seems like unnecessary
complexity to me.
I would have expected one Heat to do this job.
Yes. You are right. I'm seeing it now. One heat talking to another heat
service would be overkill and unnecessary complexity.
A better solution is to use one heat which will be talking directly to
the services (nova, glance, ...).


Also, the solution with two heat services (one for single region and one for
multi region) has a lot of common parts.
For example, single-region heat needs to create a dependency graph
where each node is a resource, and multi-region a graph where each node
is a template.

So in fact it is better to have only one, more powerful, heat service.

It should be possible to achieve this with a single Heat installation -
that would make the architecture much simpler.


Agreed that it would be simpler and is definitely possible.

However, consider that having a Heat in each region means Heat is more
resilient to failure. So focusing on a way to make multiple Heat's
collaborate, rather than on a way to make one Heat talk to two regions
may be a more productive exercise.

I agree with Angus, Steve Baker, and Randall on this one. We should aim for 
simplicity where practical. Having Heat services interacting with other Heat 
services seems like a whole category of complexity that's difficult to justify. 
If it were implemented as Steve Baker described, and the local Heat service 
were unavailable, the client may still have the option to use a Heat service in 
another region and still successfully orchestrate. That seems to me like a 
failure mode that's easier for users to anticipate and plan for.
Steve, I really like your concept of the context as a resource. What do
you think - how should we proceed with it to make it happen?


What looks weird to me is the concept that an orchestration service
deployed in one region can orchestrate other regions.
My understanding of regions was that they are separated and do not know
about each other. So the heat service which is
responsible for orchestrating multiple regions should not be deployed in any
of those regions but somewhere else.


Right now I also do not see a point in having a separate heat service in
each region.
One heat service with multi-region support, not deployed in any of the
existing regions (logically, not physically), looks fine to me.





I'm all for keeping the solution simple. However, I am not for making
it simpler than it needs to be to actually achieve its stated goals.


Can you further explain your perspective? What sort of failures would you 
expect a network of coordinated Heat services to be more effective with? Is 
there any way this would be more simple or more elegant than other options?

I expect partitions across regions to be common. Regions should be
expected to operate completely isolated from one another if need be. What
is the point of deploying a service to two regions, if one region's
failure means you cannot manage the resources in the standing region?

Active/Passive means you now have an untested passive heat engine in
the passive region. You also have a lot of pointers to update when the
active is taken offline or when there is a network partition. Also split
brain is basically guaranteed in that scenario.

Active/Active(/Active/...), where each region's Heat service collaborates
and owns its own respective pieces of the stack, means that on partition,
one is simply prevented from telling one region to scale/migrate/
etc. onto another one. It also affords a local Heat the ability to
replace resources in a failed region with local resources.

The way I see it working is actually pretty simple. One stack would
lead to resources in multiple regions. The collaboration I speak of
would simply be that if given a stack that requires crossing regions,
the other Heat is contacted and the same stack is deployed. Cross-region
attribute/ref sharing would need an efficient way to pass data about
resources as well.

Anyway, I'm not the one doing the work, so I'll step 

Re: [openstack-dev] [savanna] scalable architecture

2013-07-25 Thread Sergey Lukjanov
Hi Matt,

thank you for your comments. 

First of all, I want to say that personally I like the approach with agents
because of its much greater theoretical flexibility and scalability. I want to share
several overall comments about using agents. We've already discussed such an approach
several times, including at the previous meeting. Let's take a look at the pros and
cons of such an approach.

I see the following pros of using it:

1. we can provision several agents to each Hadoop cluster, and launching
large clusters will not affect other users;
2. agents will be deployed near the target VMs, so I/O could be
faster.

And here is a list of cons of using it:

1. we’ll need to add one more service to the architecture, because we’ll need 
to have savanna-engine to create initial virtual machines and then provision 
agents to them and use agents for all next operations with cluster;
2. we’ll need to move agents in case if we remove some VM from the Hadoop 
cluster, for example, if we deploy several agents to cluster, some of them 
should be deployed to workers that could be removed while cluster scaling;
3. agents should have an ability to interop with savanna, so, it’ll need to 
have access to MQ (that is impossible due to the security reasons and 
potentially VMs will not have access to network where MQ installed) or agents 
could use savanna-api to communicate, but in this case will need to have an 
auth mechanism for them that isn’t easy to do;
4. agents should provide an API to make savanna possible to pass some tasks to 
it, so there will be need to have not only internal RPC API but internal REST 
API for communication with agents;
5. we’ll need to add scheduling mechanism to schedule tasks to the right hosts;
6. agents will consume resources at machines, but it’s not a really problem I 
think.

I want to say one more time that the approach with agents looks much more
flexible to me, but it is much harder to implement. I think that we
should start work on scaling savanna in the 0.3 version and implement the simple
“pilot” approach with only engines, and not forget about the very big feature
that's the main one for the 0.3 version - Elastic Data Processing - and we think that
it's very important not to overestimate team bandwidth for the 0.3 version. It'll
prepare the base for future improvements. When we finish the work on both
EDP and the simple architecture approach we'll see the other
requirements for the architecture and task execution framework much more clearly, and then we can
take a look at the pros and cons again and understand the importance of agent usage.

Thank you for the comment about statistics; I'll update the architecture blueprint when
the wiki is unfrozen.

P.S. We can discuss more details on IRC meeting.

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Jul 25, 2013, at 8:05, Matthew Farrellee m...@redhat.com wrote:

 On 07/23/2013 12:32 PM, Sergey Lukjanov wrote:
 Hi evereyone,
 
 We’ve started working on upgrading Savanna architecture in version
 0.3 to make it horizontally scalable.
 
 The most part of information is in the wiki page -
 https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.
 
 Additionally there are several blueprints created for this activity -
 https://blueprints.launchpad.net/savanna?searchtext=ng-
 
 We are looking for comments / questions / suggestions.
 
 Some comments on Why not provision agents to Hadoop cluster's to provision 
 all other stuff?
 
 Re problems with scaling agents for launching large clusters - launching 
 large clusters may be resource intensive, those resources must be provided by 
 someone. They're either going to be provided by a) the hardware running the 
 savanna infrastructure or b) the instance hardware provided to the tenant. If 
 they are provided by (a) then the cost of launching the cluster is incurred 
 by all users of savanna. If (b) then the cost is incurred by the user trying 
 to launch the large cluster. It is true that some instance recommendations 
 may be necessary, e.g. if you want to run a 500 instance cluster then your
 head node should be large (vs medium or small). That sizing decision needs to 
 happen for (a) or (b) because enough virtual resources must be present to 
 maintain the large cluster after it is launched. There are accounting and 
 isolation benefits to (b).
 
 Re problems migrating agents while cluster is scaling - will you expand on 
 this point?
 
 Re unexpected resource consumers - during launch, maybe, during execution the 
 agent should be a minimal consumer of resources. sshd may also be an 
 unexpected resource consumer.
 
 Re security vulnerability - the agents should only communicate within the 
 instance network, primarily w/ the head node. The head node can relay 
 information to the savanna infrastructure outside the instances in the same 
 way savanna-api gets information now. So there should be no difference in 
 vulnerability assessment.
 
 Re support multiple 

[openstack-dev] [keystone] Is authorization not needed on the “/auth/tokens” API?

2013-07-25 Thread Tiwari, Arvind
Thanks David.

Since we are discussing authorization and access control, I would like to draw
a little attention to the bug below, which basically proposes an authorization check
on the identity:check_token, identity:validate_token and identity:revoke_token APIs:

https://bugs.launchpad.net/keystone/+bug/1186059

Due to the absence of a target on such API calls, authorization is not possible. I would
appreciate the community's thoughts on the bug.

Thanks,
Arvind




-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Thursday, July 25, 2013 4:10 AM
To: OpenStack Development Mailing List
Cc: Tiwari, Arvind
Subject: Re: [openstack-dev] [keystone] Extending policy checking to include 
target entities

I have responded to your post, as I don't think it solves the identified
problem.

regards

David

On 24/07/2013 23:26, Tiwari, Arvind wrote:
 I have added my proposal @ 
 https://etherpad.openstack.org/api_policy_on_target.

 Thanks,
 Arvind

 -Original Message-
 From: Henry Nash [mailto:hen...@linux.vnet.ibm.com]
 Sent: Wednesday, July 24, 2013 8:46 AM
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Extending policy checking to include 
 target entities

 I think we should transfer this discussion to the etherpad for this 
 blueprint: https://etherpad.openstack.org/api_policy_on_target

 I have summarised the views of this thread there already, so let's make any 
 further comments there, rather than here.

 Henry
 On 24 Jul 2013, at 00:29, Simo Sorce wrote:

 On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:
 ...the problem is that if the object does not exist we might not be able to
 tell whether the user is authorized or not (since authorization might depend
 on attributes of the object itself), so how do we know whether to lie or
 not?

 If the error you return is always 'Not Found', why do you care ?

 Simo.

 Henry
 On 23 Jul 2013, at 21:23, David Chadwick wrote:



 On 23/07/2013 19:02, Henry Nash wrote:
 One thing we could do is:

 - Return Forbidden or NotFound if we can determine the correct answer
 - When we can't (i.e. the object doesn't exist), then return NotFound
 unless a new config value 'policy_harden' (?) is set to true (default
 false) in which case we translate NotFound into Forbidden.

 I am not sure that this achieves your objective of no data leakage through 
 error codes, does it?

 It's not a question of determining the correct answer or not; it's a
 question of whether the user is authorised to see the correct answer or not.

 regards

 David

 Henry
 On 23 Jul 2013, at 18:31, Adam Young wrote:

 On 07/23/2013 12:54 PM, David Chadwick wrote:
 When writing a previous ISO standard the approach we took was as follows

 Lie to people who are not authorised.

 Is that your verbiage?  I am going to reuse that quote, and I would
 like to get the attribution correct.


 So applying this approach to your situation, you could reply Not
 Found to people who are authorised to see the object if it had
 existed but does not, and Not Found to those not authorised to see
 it, regardless of whether it exists or not. In this case, only those
 who are authorised to see the object will get it if it exists. Those
 not authorised cannot tell the difference between objects that don't
 exist and those that do exist.

 So, to try and apply this to a semi-real example:  There are two types
 of URLs.  Ones that are like this:

 users/55FEEDBABECAFE

 and ones like this:

 domain/66DEADBEEF/users/55FEEDBABECAFE


 In the first case, you are selecting against a global collection, and
 in the second, against a scoped collection.

 For unscoped, you have to treat all users as equal, and thus a 404
 probably makes sense.

 For a scoped collection we could return a 404 or a 403 Forbidden
 (https://en.wikipedia.org/wiki/HTTP_403) based on the user's
 credentials:  all resources under domain/66DEADBEEF  would show up
 as 403s regardless of existence or not if the user had no roles in
 the domain 66DEADBEEF.  A user that would be allowed access to
 resources in 66DEADBEEF  would get a 403 only for an object that
 existed but that they had no permission to read, and 404 for a
 resource that doesn't exist.





 regards

 David


 On 23/07/2013 16:40, Henry Nash wrote:
 Hi

 As part of bp
 https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target
 I have uploaded some example WIP code showing a proposed approach
 for just a few API calls (one easy, one more complex). I'd
 appreciate early feedback on this before I take it any further.

 https://review.openstack.org/#/c/38308/

 A couple of points:

 - One question is on how to handle errors when you are going to get
 a target object before doing your policy check.  What do you do if
 the object does not exist?  If you return NotFound, then someone
 who was not authorized could troll for the existence of entities by
 seeing whether they got NotFound or Forbidden. If, however, you
 return 

Re: [openstack-dev] [nova] 422 status codes from API

2013-07-25 Thread Russell Bryant
On 07/25/2013 10:29 AM, Sean Dague wrote:
 Some new tempest tests have been showing up that push deeper into the
 Nova API, and popping out seem to be a lot of places where we are
 returning 422 status codes.
 
 422 is WebDav reserved code, not in a proper HTTP spec (WebDAV; RFC
 4918), and a lot of the other projects have purged it out of their APIs.
 
 It would be good to understand if we consider this a bug that should get
 fixed in v2 / a v2 oversight that gets fixed in v3 / or par for the
 course, as that gives us some guidance on what to land in tempest, and
 what we file bugs on for nova.

I'm fine fixing it as a bug.

This is called out as a change generally considered OK in the
documented API change guidelines:

https://wiki.openstack.org/wiki/APIChangeGuidelines

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] scalable architecture

2013-07-25 Thread Joe Gordon
On Jul 23, 2013 12:34 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Hi evereyone,

 We’ve started working on upgrading Savanna architecture in version 0.3 to
make it horizontally scalable.

 The most part of information is in the wiki page -
https://wiki.openstack.org/wiki/Savanna/NextGenArchitecture.

 Additionally there are several blueprints created for this activity -
https://blueprints.launchpad.net/savanna?searchtext=ng-

 We are looking for comments / questions / suggestions.

It sounds like most of this can be built around Heat, except maybe the
REST API to Hadoop.  So why not use Heat for the deploy part?


 P.S. The another thing that we’re working on in Savanna 0.3 is EDP
(Elastic Data Processing).

 Thank you!

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-25 Thread Day, Phil
+1 to turning it off.  Having something that doesn't really work turned on by default,
now that we have a threaded API, is just wrong.

From: Rosa, Andrea (HP Cloud Services)
Sent: 25 July 2013 09:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate 
limiting by default

I'd like to turn it off by default. As already pointed out in [1], I think the rate
limiting should be managed by something else (for example, load balancers) in
front of the API.

Regards
--
Andrea Rosa
[1] http://www.gossamer-threads.com/lists/openstack/operators/28599


From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 24 July 2013 23:39
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting 
by default

Hi all

I have proposed a patch to disable per-user rate limiting by default: 
https://review.openstack.org/#/c/34821/. And at Russell's request: does anyone
care, or prefer this to be enabled by default?

Here is some more context:

Earlier rate limiting discussion: 
http://www.gossamer-threads.com/lists/openstack/operators/28599
Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529
rate limiting is per process, and doesn't act as expected in a multi-process 
environment: https://review.openstack.org/#/c/36516/

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Is authorization not needed on the “/auth/tokens” API?

2013-07-25 Thread David Chadwick

Hi Arvind

thanks for pointing me to this issue, which I was not aware of. I have
posted my twopence worth to the bug. Basically I agree with you that if
you want an audit of third parties revoking tokens, then you need both the token
of the subject being revoked and that of the third party requesting it.


regards

David

On 25/07/2013 16:18, Tiwari, Arvind wrote:

Thanks David.

Since we are discussing authorization and access control, I would
like to draw a little attention to the bug below, which basically
proposes an authorization check on identity:check_token,
identity:validate_token and identity:revoke_token APIs

https://bugs.launchpad.net/keystone/+bug/1186059

Due to the absence of a target on such API calls, authorization is not possible. I
would appreciate the community's thoughts on the bug.

Thanks, Arvind




-Original Message- From: David Chadwick
[mailto:d.w.chadw...@kent.ac.uk] Sent: Thursday, July 25, 2013 4:10
AM To: OpenStack Development Mailing List Cc: Tiwari, Arvind Subject:
Re: [openstack-dev] [keystone] Extending policy checking to include
target entities

I have responded to your post, as I don't think it solves the
identified problem.

regards

David

On 24/07/2013 23:26, Tiwari, Arvind wrote:

I have added my proposal @
https://etherpad.openstack.org/api_policy_on_target.

Thanks, Arvind

-Original Message- From: Henry Nash
[mailto:hen...@linux.vnet.ibm.com] Sent: Wednesday, July 24, 2013
8:46 AM To: OpenStack Development Mailing List Subject: Re:
[openstack-dev] [keystone] Extending policy checking to include
target entities

I think we should transfer this discussion to the etherpad for this
blueprint: https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's
make any further comments there, rather than here.

Henry On 24 Jul 2013, at 00:29, Simo Sorce wrote:


On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:

...the problem is that if the object does not exist we might
not be able to tell whether the user is authorized or not (since
authorization might depend on attributes of the object
itself), so how do we know whether to lie or not?


If the error you return is always 'Not Found', why do you care ?

Simo.


Henry On 23 Jul 2013, at 21:23, David Chadwick wrote:




On 23/07/2013 19:02, Henry Nash wrote:

One thing we could do is:

- Return Forbidden or NotFound if we can determine the
correct answer - When we can't (i.e. the object doesn't
exist), then return NotFound unless a new config value
'policy_harden' (?) is set to true (default false) in which
case we translate NotFound into Forbidden.


I am not sure that this achieves your objective of no data
leakage through error codes, does it?

It's not a question of determining the correct answer or not;
it's a question of whether the user is authorised to see the
correct answer or not.

regards

David


Henry On 23 Jul 2013, at 18:31, Adam Young wrote:


On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we
took was as follows

Lie to people who are not authorised.


Is that your verbiage?  I am going to reuse that quote,
and I would like to get the attribution correct.



So applying this approach to your situation, you could
reply Not Found to people who are authorised to see the
object if it had existed but does not, and Not Found to
those not authorised to see it, regardless of whether
it exists or not. In this case, only those who are
authorised to see the object will get it if it exists.
Those not authorised cannot tell the difference between
objects that don't exist and those that do exist.


So, to try and apply this to a semi-real example:  There
are two types of URLs.  Ones that are like this:

users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global
collection, and in the second, against a scoped
collection.

For unscoped, you have to treat all users as equal, and
thus a 404 probably makes sense.

For a scoped collection we could return a 404 or a 403
Forbidden https://en.wikipedia.org/wiki/HTTP_403 based
on the user's credentials:  all resources under
domain/66DEADBEEF  would show up as 403s regardless
of existence or not if the user had no roles in the
domain 66DEADBEEF.  A user that would be allowed
access to resources in 66DEADBEEF  would get a 403
only for an object that existed but that they had no
permission to read, and 404 for a resource that doesn't
exist.






regards

David


On 23/07/2013 16:40, Henry Nash wrote:

Hi

As part of bp
https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target



I have uploaded some example WIP code showing a proposed approach

for just a few API calls (one easy, one more
complex). I'd appreciate early feedback on this
before I take it any further.

https://review.openstack.org/#/c/38308/

A couple of points:

- One question is on how to handle errors when you
are going to get a 

[openstack-dev] [Netron] Allow OVS default veth MTU to be configured.

2013-07-25 Thread Jun Cheol Park
Neutron Core Reviewers,

Could you please review and approve the following bug fix?

https://review.openstack.org/#/c/27937/

Thanks,

-Jun
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-25 Thread Zane Bitter

On 25/07/13 17:08, Bartosz Górski wrote:

First of all, sorry for the late reply. I needed some time to understand
your vision and check a few things.

On 07/24/2013 08:40 AM, Clint Byrum wrote:

Excerpts from Adrian Otto's message of 2013-07-23 21:22:14 -0700:

Clint,

On Jul 23, 2013, at 10:03 AM, Clint Byrum cl...@fewbar.com
  wrote:


Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:

On 07/23/2013 10:46 AM, Angus Salkeld wrote:

On 22/07/13 16:52 +0200, Bartosz Górski wrote:

Hi folks,

I would like to start a discussion about the blueprint I raised
about
multi region support.
I would like to get feedback from you. If something is not clear or
you have questions do not hesitate to ask.
Please let me know what you think.

Blueprint:
https://blueprints.launchpad.net/heat/+spec/multi-region-support

Wikipage:
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat



What immediately looks odd to me is you have a MultiCloud Heat
talking
to other Heat's in each region. This seems like unnecessary
complexity to me.
I would have expected one Heat to do this job.

Yes. You are right. I'm seeing it now. One heat talking to other heat
service would be an overkill and unnecessary complexity.
Better solution is to use one heat which will be talking directly to
services (nova, glance, ...).


+1, and this is reasonably easy for Heat to do, simply by asking for a 
different region's service catalog from keystone.
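
As a rough illustration of that lookup (made-up catalog contents, and a
hand-rolled search rather than any particular client library's helper):

def endpoint_for(service_catalog, service_type, region):
    # service_catalog is the 'serviceCatalog' list from a Keystone v2
    # token response: one entry per service, each with its endpoints.
    for service in service_catalog:
        if service['type'] != service_type:
            continue
        for ep in service['endpoints']:
            if ep.get('region') == region:
                return ep['publicURL']
    raise LookupError('no %s endpoint in region %s' % (service_type, region))


catalog = [
    {'type': 'compute', 'name': 'nova',
     'endpoints': [
         {'region': 'RegionEast',
          'publicURL': 'http://east.example.com:8774/v2'},
         {'region': 'RegionWest',
          'publicURL': 'http://west.example.com:8774/v2'},
     ]},
]

print(endpoint_for(catalog, 'compute', 'RegionWest'))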


Also solution with two heat services (one for single region and one for
multi region) has a lot of common parts.
For example single region heat needs to create a dependencies graph
where each node is a resource and multi region a graph where each node
is template.


I'm not convinced about the need for this though.

I looked at your example on the wiki, and it just contains a bunch of 
East resources that reference each other and a bunch of West resources 
that reference each other and never the twain shall meet. And that seems 
inevitable - you can't, e.g. connect a Cinder volume in one region to a 
Nova server in another region. So I'm wondering why we would ever want 
to mix regions in a single template, with a single dependency graph, 
when it's not really meaningful to have dependencies between resources 
in different regions. There's no actual orchestration to do at that level.


It seems to me your example would have been better as two templates (or, 
even better, the same template launched in two different regions, since 
I couldn't detect any differences between East vs. West).


Note that there are plans in the works to make passing multiple files to 
Heat a more pleasant experience.


I think creating an OS::Heat::Stack resource with a Region property 
solves 95%+ of the problem, without adding or modifying any other resources.


cheers,
Zane.


So in fact it is better to have only one but more powerful heat service.

It should be possible to achieve this with a single Heat
installation -
that would make the architecture much simpler.


Agreed that it would be simpler and is definitely possible.

However, consider that having a Heat in each region means Heat is more
resilient to failure. So focusing on a way to make multiple Heat's
collaborate, rather than on a way to make one Heat talk to two regions
may be a more productive exercise.

I agree with Angus, Steve Baker, and Randall on this one. We should
aim for simplicity where practical. Having Heat services interacting
with other Heat services seems like a whole category of complexity
that's difficult to justify. If it were implemented as Steve Baker
described, and the local Heat service were unavailable, the client
may still have the option to use a Heat service in another region and
still successfully orchestrate. That seems to me like a failure mode
that's easier for users to anticipate and plan for.

Steve, I really like your concept with the context as a resource. How do
you think we should proceed with it to make it happen?

What looks weird to me is the concept that an orchestration service
deployed in one region can orchestrate other regions.
My understanding of regions was that they are separated and do not know
about each other. So the heat service which is
responsible for orchestrating multiple regions should not be deployed in any
of those regions but somewhere else.

Right now I also do not see a point for having separate heat service in
each region.
One heat service with multi region support not deployed in any of the
existing regions (logically not physically) looks fine for me.




I'm all for keeping the solution simple. However, I am not for making
it simpler than it needs to be to actually achieve its stated goals.


Can you further explain your perspective? What sort of failures would
you expect a network of coordinated Heat services to be more
effective with? Is there any way this would be more simple or more
elegant than other options?

I expect partitions across regions to be 

Re: [openstack-dev] [Swift] question on Application class concurrency; paste.app_factory mechanism

2013-07-25 Thread Pete Zaitcev
On Wed, 24 Jul 2013 02:31:45 +
Luse, Paul E paul.e.l...@intel.com wrote:

 I was thinking that each connection would get its own instance thus it
 would be safe to store connection-transient information there but I was
 surprised by my quick test.

Yeah, you have it tracked in __call__, typically hanging off controller.
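
A bare-bones WSGI illustration of that point (not Swift's actual classes):
the single Application instance only holds configuration shared by all
requests, while a Controller built inside __call__ carries the
request-transient state.

class Controller(object):
    # One of these is created per request, so it is safe to keep
    # connection/request-transient information on it.
    def __init__(self, app, path):
        self.app = app
        self.path = path                      # per-request state

    def handle(self, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('handled %s\n' % self.path).encode('utf-8')]


class Application(object):
    # A single instance of this serves many concurrent requests, so only
    # configuration and other shared, effectively read-only state
    # belongs here.
    def __init__(self, conf):
        self.conf = conf

    def __call__(self, environ, start_response):
        controller = Controller(self, environ.get('PATH_INFO', '/'))
        return controller.handle(start_response)


def app_factory(global_conf, **local_conf):
    # roughly the shape of what paste.app_factory points at
    conf = dict(global_conf)
    conf.update(local_conf)
    return Application(conf)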

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-25 Thread Bartosz Górski

On 07/25/2013 06:38 PM, Zane Bitter wrote:

On 25/07/13 17:08, Bartosz Górski wrote:

First of all sorry for the late reply. I needed some time to understand
your vision and check a few things.

On 07/24/2013 08:40 AM, Clint Byrum wrote:

Excerpts from Adrian Otto's message of 2013-07-23 21:22:14 -0700:

Clint,

On Jul 23, 2013, at 10:03 AM, Clint Byrum cl...@fewbar.com
  wrote:


Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:

On 07/23/2013 10:46 AM, Angus Salkeld wrote:

On 22/07/13 16:52 +0200, Bartosz Górski wrote:

Hi folks,

I would like to start a discussion about the blueprint I raised
about
multi region support.
I would like to get feedback from you. If something is not 
clear or

you have questions do not hesitate to ask.
Please let me know what you think.

Blueprint:
https://blueprints.launchpad.net/heat/+spec/multi-region-support

Wikipage:
https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat 





What immediately looks odd to me is you have a MultiCloud Heat
talking
to other Heat's in each region. This seems like unnecessary
complexity to me.
I would have expected one Heat to do this job.

Yes. You are right. I'm seeing it now. One heat talking to other heat
service would be an overkill and unnecessary complexity.
Better solution is to use one heat which will be talking directly to
services (nova, glance, ...).


+1, and this is reasonably easy for Heat to do, simply by asking for a 
different region's service catalog from keystone.


Also solution with two heat services (one for single region and one for
multi region) has a lot of common parts.
For example single region heat needs to create a dependencies graph
where each node is a resource and multi region a graph where each node
is template.


I'm not convinced about the need for this though.

I looked at your example on the wiki, and it just contains a bunch of 
East resources that reference each other and a bunch of West resources 
that reference each other and never the twain shall meet. And that 
seems inevitable - you can't, e.g. connect a Cinder volume in one 
region to a Nova server in another region. So I'm wondering why we 
would ever want to mix regions in a single template, with a single 
dependency graph, when it's not really meaningful to have dependencies 
between resources in different regions. There's no actual 
orchestration to do at that level.


It seems to me your example would have been better as two templates 
(or, even better, the same template launched in two different regions, 
since I couldn't detect any differences between East vs. West).


Note that there are plans in the works to make passing multiple files 
to Heat a more pleasant experience.


I think creating an OS::Heat::Stack resource with a Region property 
solves 95%+ of the problem, without adding or modifying any other 
resources.


We want to start from something simple. At the beginning we are assuming 
no dependencies between resources from different regions. Our first use 
case (the one on the wiki page) uses this assumption, which is why it 
can be easily split into two separate single-region templates.


Our goal is to support dependencies between resources from different 
regions. Our second use case (I will add it with more details to the 
wiki page soon) is similar to deploying a two-instance (app server + db 
server) WordPress setup across two different regions (app server in the 
first region and db server in the second). The regions will be connected 
to each other via a VPN connection. In this case the configuration of the 
app server depends on the db server: we need to know the IP address of the 
created DB server to properly configure the app server, which forces us to 
wait with creating the app server until the db server has been created.


More complicated use cases with load balancers and more regions are also 
on our minds.


Thanks,
Bartosz


cheers,
Zane.


So in fact it is better to have only one but more powerful heat service.

It should be possible to achieve this with a single Heat
installation -
that would make the architecture much simpler.


Agreed that it would be simpler and is definitely possible.

However, consider that having a Heat in each region means Heat is 
more

resilient to failure. So focusing on a way to make multiple Heat's
collaborate, rather than on a way to make one Heat talk to two 
regions

may be a more productive exercise.

I agree with Angus, Steve Baker, and Randall on this one. We should
aim for simplicity where practical. Having Heat services interacting
with other Heat services seems like a whole category of complexity
that's difficult to justify. If it were implemented as Steve Baker
described, and the local Heat service were unavailable, the client
may still have the option to use a Heat service in another region and
still successfully orchestrate. That seems to me like a failure mode
that's easier for users to anticipate and plan for.

Steve, I really like your concept with the context as a resource. 

Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-25 Thread Vishvananda Ishaya

On Jul 24, 2013, at 8:51 AM, Stefano Maffulli stef...@openstack.org wrote:

 Hello
 
 I have seen lots of discussions on blogs and twitter heating up around
 Amazon API compatibility and OpenStack. This seems like a recurring
 topic, often raised by pundits and recently joined by members of the
 community. I think it's time to bring the discussions inside our
 community to our established channels and processes. Our community has
 established ways to discuss and take technical decisions, from the more
 accessible General mailing list to the Development list to the Design
 Summits, the weekly project meetings, the reviews on gerrit and the
 governing bodies Technical Committee and Board of Directors.
 
 While we have not seen a large push in the community recently via
 contributions or deployments, Amazon APIs have been an option for
 deployments from the early days of OpenStack.
 
 I would like to have this discussion inside the established channels of
 our community and get the opinions from those that maintain that
 OpenStack should increase efforts for Amazon APIs compatibility, and
 ultimately it would be good to see code contributions.
 
 Do you think OpenStack should have an ongoing effort to imitate Amazon's
 API? If you think it should, how would you lead the effort?

Thanks for bringing this up Stefano. I think that there is a new driver
for amazon compatibility that has shown up recently: Netflix's efforts
to make their software stack Open Source[1]

The various projects that netflix has released are starting to get a lot
of attention from enterprises and companies wishing to become more agile.

Supporting other open source projects, especially ones that can be used
on top of OpenStack should be something that we encourage.

The greatest friction between netflix oss and openstack is lack of AWS
features[2] that are in use by their software (especially Asgard).

There are a couple of approaches here: one is to change the the other
open source software to use openstack apis natively. I think this
is best long term, but we have relatively little expertise in these
other projects, the easiest path forward is to do the best job we
can at supporting as many of the AWS apis as possible.

[1] http://netflix.github.io
[2] http://www.slideshare.net/adrianco/netflix-and-open-source/45

Vish

 
 
 /stef
 -- 
 Ask and answer questions on https://ask.openstack.org
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] removing sudoers.d rules from disk-image-builder

2013-07-25 Thread Chris Jones
Hi

On 25 July 2013 14:20, Derek Higgins der...@redhat.com wrote:

 which only gives people an incorrect sense of security.


I agree with your analysis of the effects of the sudoers file and I think
it makes a great argument for recommending people run the main command
itself with sudo, rather than a blanket passwordless sudo, but really all
we need to say is this tool needs to be run as root and let people make
their own decision :)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Does authorization not needed on “/auth/tokens” API??

2013-07-25 Thread Adam Young

On 07/25/2013 01:03 PM, Tiwari, Arvind wrote:

Thanks David for your comments.

I will try to fix it as per my suggestion in bug.

Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Thursday, July 25, 2013 10:27 AM
To: Tiwari, Arvind
Cc: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Does authorization not needed on 
“/auth/tokens” API??

Hi Arvind

thanks for pointing me to this issue, which I was not aware of. I have
posted my twopence worth to the bug. Basically I agree with you, that if
you want audit of third parties revoking tokens, then you need the token
of the subject to be revoked and the third party requesting it.

regards


Agreed. I would argue that the larger issue is Audit in general, that 
certain APIs should be auditable, and that audit should be integrated 
with the policy checks.




David

On 25/07/2013 16:18, Tiwari, Arvind wrote:

Thanks David.

Since we are discussing authorization and access control, I would
like to gain little attention on the below bug which basically
propose authorization check on  identity:check_token,
identity:validate_token and identity:revoke_token APIs

https://bugs.launchpad.net/keystone/+bug/1186059

Due to the absence of a target on such API calls, authorization is not possible. I
would appreciate the community's thoughts on the bug.

Thanks, Arvind




-Original Message- From: David Chadwick
[mailto:d.w.chadw...@kent.ac.uk] Sent: Thursday, July 25, 2013 4:10
AM To: OpenStack Development Mailing List Cc: Tiwari, Arvind Subject:
Re: [openstack-dev] [keystone] Extending policy checking to include
target entities

I have responded to your post, as I dont think it solves the
identified problem

regards

David

On 24/07/2013 23:26, Tiwari, Arvind wrote:

I have added my proposal @
https://etherpad.openstack.org/api_policy_on_target.

Thanks, Arvind

-Original Message- From: Henry Nash
[mailto:hen...@linux.vnet.ibm.com] Sent: Wednesday, July 24, 2013
8:46 AM To: OpenStack Development Mailing List Subject: Re:
[openstack-dev] [keystone] Extending policy checking to include
target entities

I think we should transfer this discussion to the etherpad for this
blueprint: https://etherpad.openstack.org/api_policy_on_target

I have summarised the views of this thread there already, so let's
make any further comments there, rather than here.

Henry On 24 Jul 2013, at 00:29, Simo Sorce wrote:


On Tue, 2013-07-23 at 23:47 +0100, Henry Nash wrote:

...the problem is that if the object does not exist we might
not be able to tell whether the user is authorized or not (since
authorization might depend on attributes of the object
itself), so how do we know whether to lie or not?

If the error you return is always 'Not Found', why do you care ?

Simo.


Henry On 23 Jul 2013, at 21:23, David Chadwick wrote:



On 23/07/2013 19:02, Henry Nash wrote:

One thing we could do is:

- Return Forbidden or NotFound if we can determine the
correct answer - When we can't (i.e. the object doesn't
exist), then return NotFound unless a new config value
'policy_harden' (?) is set to true (default false) in which
case we translate NotFound into Forbidden.

I am not sure that this achieves your objective of no data
leakage through error codes, does it?

Its not a question of determining the correct answer or not,
its a question of whether the user is authorised to see the
correct answer or not

regards

David

Henry On 23 Jul 2013, at 18:31, Adam Young wrote:


On 07/23/2013 12:54 PM, David Chadwick wrote:

When writing a previous ISO standard the approach we
took was as follows

Lie to people who are not authorised.

Is that your verbage?  I am going to reuse that quote,
and I would like to get the attribution correct.


So applying this approach to your situation, you could
reply Not Found to people who are authorised to see the
object if it had existed but does not, and Not Found to
those not authorised to see it, regardless of whether
it exists or not. In this case, only those who are
authorised to see the object will get it if it exists.
Those not authorised cannot tell the difference between
objects that dont exist and those that do exist

So, to try and apply this to a semi-real example:  There
are two types of URLs.  Ones that are like this:

users/55FEEDBABECAFE

and ones like this:

domain/66DEADBEEF/users/55FEEDBABECAFE


In the first case, you are selecting against a global
collection, and in the second, against a scoped
collection.

For unscoped, you have to treat all users as equal, and
thus a 404 probably makes sense.

For a scoped collection we could return a 404 or a 403
Forbidden https://en.wikipedia.org/wiki/HTTP_403 based
on the users credentials:  all resources under
domain/66DEADBEEF  would show up as 403s regardless
of existence or not if the user had no roles in the
domain 66DEADBEEF.  A user that would be allowed
access to resources in 66DEADBEEF  would get a 403
only for an object 

[openstack-dev] Python overhead for rootwrap

2013-07-25 Thread Joe Gordon
Hi All,

We have recently hit some performance issues with nova-network.  It turns
out the root cause of this was we do roughly 20 rootwrapped shell commands,
many inside of global locks. (https://bugs.launchpad.net/oslo/+bug/1199433)

It turns out starting python itself, has a fairly significant overhead when
compared to the run time of many of the binary commands we execute.

For example:

$ time python -c "print 'test'"
test

real 0m0.023s
user 0m0.016s
sys 0m0.004s


$ time ip a
...

real 0m0.003s
user 0m0.000s
sys 0m0.000s


While we have removed the extra overhead of using entry points, we are now
hitting the overhead of just shelling out to python.


While there are many possible ways to reduce this issue, such as reducing
the number of rootwrapped calls and making locks finer grained, I think it's
worth exploring alternatives to the current rootwrap model.

Any ideas?  I am sending this email out to get the discussion started.


best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-25 Thread Davanum Srinivas
+1 to turn it off

-- dims

On Thu, Jul 25, 2013 at 12:07 PM, Day, Phil philip@hp.com wrote:
 +1 to turning it off.  Having something that doesn’t really work on by
  default now that we have a threaded API is just wrong



 From: Rosa, Andrea (HP Cloud Services)
 Sent: 25 July 2013 09:35
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate
 limiting by default



 I’d like to turn it off by default, as already pointed in [1] I think the
 rate limiting should be managed by something else (for example load
 balancers)  in front of the API.



 Regards

 --

 Andrea Rosa

 [1] http://www.gossamer-threads.com/lists/openstack/operators/28599





 From: Joe Gordon [mailto:joe.gord...@gmail.com]
 Sent: 24 July 2013 23:39
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Openstack-dev][nova] Disable per-user rate
 limiting by default



 Hi all



 I have proposed a patch to disable per-user rate limiting by default:
 https://review.openstack.org/#/c/34821/. And on Russell's request  does
 anyone care or prefer this to be enabled by default?



 Here is some more context:



 Earlier rate limiting discussion:
 http://www.gossamer-threads.com/lists/openstack/operators/28599

 Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529

 rate limiting is per process, and doesn't act as expected in a multi-process
 environment: https://review.openstack.org/#/c/36516/



 best,

 Joe Gordon


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-07-25 Thread Mike Wilson
In my opinion:

1. Stop using rootwrap completely and get strong argument checking support
into sudo (regex).
2. Some sort of long lived rootwrap process, either forked by the service
that wants to shell out or a general purpose rootwrapd type thing.

I prefer #1 because it's surprising that sudo doesn't do this type of thing
already. It _must_ be something that everyone wants. But #2 may be quicker
and easier to implement, my $.02.

-Mike Wilson


On Thu, Jul 25, 2013 at 2:21 PM, Joe Gordon joe.gord...@gmail.com wrote:

 Hi All,

 We have recently hit some performance issues with nova-network.  It turns
 out the root cause of this was we do roughly 20 rootwrapped shell commands,
 many inside of global locks. (https://bugs.launchpad.net/oslo/+bug/1199433
 )

 It turns out starting python itself, has a fairly significant overhead
 when compared to the run time of many of the binary commands we execute.

 For example:

 $ time python -c "print 'test'"
 test

 real 0m0.023s
 user 0m0.016s
 sys 0m0.004s


 $ time ip a
 ...

 real 0m0.003s
 user 0m0.000s
 sys 0m0.000s


 While we have removed the extra overhead of using entry points, we are now
 hitting the overhead of just shelling out to python.


 While there are many possible ways to reduce this issue, such as reducing
 the number of rootwrapped calls and making locks finer grained, I think it's
 worth exploring alternatives to the current rootwrap model.

 Any ideas?  I am sending this email out to get the discussion started.


 best,
 Joe Gordon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-07-25 Thread Russell Bryant
On 07/25/2013 04:40 PM, Mike Wilson wrote:
 In my opinion:
 
 1. Stop using rootwrap completely and get strong argument checking
 support into sudo (regex).
 2. Some sort of long lived rootwrap process, either forked by the
 service that wants to shell out or a general purpose rootwrapd type thing.
 
 I prefer #1 because it's surprising that sudo doesn't do this type of
 thing already. It _must_ be something that everyone wants. But #2 may be
 quicker and easier to implement, my $.02.

We could do #1 and keep rootwrap around as the fallback if the local
version of sudo doesn't support what we need.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-07-25 Thread Thierry Carrez
Russell Bryant wrote:
 On 07/25/2013 04:40 PM, Mike Wilson wrote:
 In my opinion:

 1. Stop using rootwrap completely and get strong argument checking
 support into sudo (regex).
 2. Some sort of long lived rootwrap process, either forked by the
 service that wants to shell out or a general purpose rootwrapd type thing.

 I prefer #1 because it's surprising that sudo doesn't do this type of
 thing already. It _must_ be something that everyone wants. But #2 may be
 quicker and easier to implement, my $.02.
 
 We could do #1 and keep rootwrap around as the fallback if the local
 version of sudo doesn't support what we need.

It's not just regexp support, rootwrap basically lets you extend the
rules to be openstack-specific (custom filters). That feature is not
widely used yet but is the key to fine-grained privilege escalation in
the future. Also getting something new into sudo is (for good reasons)
quite difficult.

I would rather support solution 3: create a single, separate  executable
that does those 20 things that need to be done (can be a shell script
with some logic in it), and have rootwrap call that *once*. That way you
increase speed by 20 times without dumping the security model.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stackalytics] 0.1 release

2013-07-25 Thread Alex Freedland
Thank you Gareth, this makes total sense.

We will make sure to include this in the next release.

Alex Freedland
Mirantis, Inc.




On Thu, Jul 25, 2013 at 3:21 AM, Gareth academicgar...@gmail.com wrote:

 A suggestion:

 sorting bug numbers as int is much better than as string, because '112' < '8'
 as strings but actually 112 > 8

 http://stackalytics.com/companies/unitedstack
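
For illustration, in Python:

bug_counts = ['8', '112', '25']
print(sorted(bug_counts))            # ['112', '25', '8']  -- string sort
print(sorted(bug_counts, key=int))   # ['8', '25', '112']  -- int sort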


 On Thu, Jul 25, 2013 at 9:31 AM, Alex Freedland 
 afreedl...@mirantis.comwrote:

 Roman,

 Thank you for your comment. I agree that is should not be the only way to
 look at the statistics and that is why Stackalytics also measures the
 number of contributions and soon will add the number of reviews. I do,
 however, think it a useful statistic as because not all commits are created
 equal.

 To your argument that the developers will write longer code just for the
 sake of statistics, I think this will not happen en masse. First and
 foremost, the developers care about their reputations and knowing that
 their code is peer-reviewed, very few will intentionally write inefficient
 code just to get their numbers up. Those few who will choose this route
 will lose the respect of their peers and consequently will not be able to
 contribute as much.

 Also, in order to deal with the situations where people can manipulate
 the numbers, Stackalytics allows anyone in the community to correct the
 line count where it does not make sense.  (
 https://wiki.openstack.org/wiki/Stackalytics#Commits_metrics_corrections_and_a_common_sense_approach
 ).

 We welcome any other improvements and suggestions on how to make
 OpenStack statistics more transparent, meaningful and reliable.

 Alex Freedland




 On Tue, Jul 23, 2013 at 7:25 AM, Roman Prykhodchenko 
 rprikhodche...@mirantis.com wrote:

 I still think counting lines of code is evil because it might encourage
 some developers to write longer code just for statistics.

 On Jul 23, 2013, at 16:58 , Herman Narkaytis hnarkay...@mirantis.com
 wrote:

 Hello everyone!

 Mirantis http://www.mirantis.com/ is pleased to announce the release
 of Stackalytics http://www.stackalytics.com/ 0.1. You can find
 complete details on the Stackalytics 
 wikihttps://wiki.openstack.org/wiki/Stackalytics page,
 but here are the brief release notes:

- Changed the internal architecture. Main features include advanced
real time processing and horizontal scalability.
- Got rid of all 3rd party non-Apache libraries and published the
source on StackForge under the Apache2 license.
- Improved release cycle tracking by using Git tags instead of
approximate date periods.
- Changed project classification to a two-level structure: OpenStack 
 (core,
incubator, documentation, other) and StackForge.
- Implemented correction mechanism that allows users to tweak
metrics for particular commits.
- Added a number of new projects (Tempest, documentation, Puppet
recipes).
- Added company affiliated contribution breakdown to the user's
profile page.

 We welcome you to read, look it over, and comment.

 Thank you!

 --
 Herman Narkaytis
 DoO Ru, PhD
 Tel.: +7 (8452) 674-555, +7 (8452) 431-555
 Tel.: +7 (495) 640-4904
 Tel.: +7 (812) 640-5904
  Tel.: +38(057)728-4215
 Tel.: +1 (408) 715-7897
 ext 2002
 http://www.mirantis.com

 This email (including any attachments) is confidential. If you are not
 the intended recipient you must not copy, use, disclose, distribute or rely
 on the information contained in it. If you have received this email in
 error, please notify the sender immediately by reply email and delete the
 email from your system. Confidentiality and legal privilege attached to
 this communication are not waived or lost by reason of mistaken delivery to
 you. Mirantis does not guarantee (that this email or the attachment's) are
 unaffected by computer virus, corruption or other defects. Mirantis may
 monitor incoming and outgoing emails for compliance with its Email Policy.
 Please note that our servers may not be located in your country.
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Gareth

 *Cloud Computing, OpenStack, Fitness, Basketball*
 *OpenStack contributor*
 *Company: UnitedStack http://www.ustack.com*
 *My promise: if you find any spelling or grammar mistakes in my email
 from Mar 1 2013, notify me *
 *and I'll donate $1 or ¥1 to an open organization you specify.*

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [Glance] property protections -- final call for comments

2013-07-25 Thread Brian Rosmaita
After lots of discussion, I think we've come to a consensus on what property 
protections should look like in Glance.  Please reply with comments!

The blueprint: 
https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection

The full specification: 
https://wiki.openstack.org/wiki/Glance-property-protections
   (it's got a Prior Discussion section with links to the discussion etherpads)

A product approach to describing the feature: 
https://wiki.openstack.org/wiki/Glance-property-protections-product

cheers,
brian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-25 Thread Thierry Carrez
Stefano Maffulli wrote:
 I have seen lots of discussions on blogs and twitter heating up around
 Amazon API compatibility and OpenStack. This seems like a recurring
 topic, often raised by pundits and recently joined by members of the
 community. I think it's time to bring the discussions inside our
 community to our established channels and processes. Our community has
 established ways to discuss and take technical decisions, from the more
 accessible General mailing list to the Development list to the Design
 Summits, the weekly project meetings, the reviews on gerrit and the
 governing bodies Technical Committee and Board of Directors.
 
 While we have not seen a large push in the community recently via
 contributions or deployments, Amazon APIs have been an option for
 deployments from the early days of OpenStack.
 
 I would like to have this discussion inside the established channels of
 our community and get the opinions from those that maintain that
 OpenStack should increase efforts for Amazon APIs compatibility, and
 ultimately it would be good to see code contributions.

Like you say, all this needs is people to start putting resources where
their mouth is and pushing improvements through our regular channels:
proposing a blueprint, discussing it at our summits, signing up to do an
actionable piece of work and deliver it in one of our development
milestones.

I don't think anyone argues that having AWS compatibility would be a bad
thing, as long as we keep a local API that lets us exhibit features that
are not (yet) in AWS APIs when those features make sense.

Having a common internal layer upon which the various external APIs can
plug is also pretty common sense, the historical reason we don't have
that yet was that nobody signed up to do the work, while at the same
time Canonical signed up to do the AWSOME proxy. Since apparently this
effort was abandoned, the road is open again, just waiting for cars to
pass on it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-25 Thread Russell Bryant
On 07/25/2013 06:11 PM, Thierry Carrez wrote:
 Stefano Maffulli wrote:
 I have seen lots of discussions on blogs and twitter heating up around
 Amazon API compatibility and OpenStack. This seems like a recurring
 topic, often raised by pundits and recently joined by members of the
 community. I think it's time to bring the discussions inside our
 community to our established channels and processes. Our community has
 established ways to discuss and take technical decisions, from the more
 accessible General mailing list to the Development list to the Design
 Summits, the weekly project meetings, the reviews on gerrit and the
 governing bodies Technical Committee and Board of Directors.

 While we have not seen a large push in the community recently via
 contributions or deployments, Amazon APIs have been an option for
 deployments from the early days of OpenStack.

 I would like to have this discussion inside the established channels of
 our community and get the opinions from those that maintain that
 OpenStack should increase efforts for Amazon APIs compatibility, and
 ultimately it would be good to see code contributions.
 
 Like you say, all this needs is people to start putting resources where
 their mouth is and pushing improvements through our regular channels:
 proposing a blueprint, discussing it at our summits, signing up to do an
 actionable piece of work and deliver it in one of our development
 milestones.
 
 I don't think anyone argues that having AWS compatibility would be a bad
 thing, as long as we keep a local API that lets us exhibit features that
 are not (yet) in AWS APIs when those features make sense.
 
 Having a common internal layer upon which the various external APIs can
 plug is also pretty common sense, the historical reason we don't have
 that yet was that nobody signed up to do the work, while at the same
 time Canonical signed up to do the AWSOME proxy. Since apparently this
 effort was abandoned, the road is open again, just waiting for cars to
 pass on it.
 

If an external proxy (like AWSOME) is what you want, one of those
already exists (at least for the EC2 API).

http://deltacloud.apache.org/

It supports EC2 on the frontend and the OpenStack compute API on the
backend.  I'm not sure how using this compares to the EC2 implementation
in nova, though.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Discussing Amazon API compatibility [Nova][Swift]

2013-07-25 Thread Michael Still
On Fri, Jul 26, 2013 at 8:30 AM, Russell Bryant rbry...@redhat.com wrote:

 If an external proxy (like AWSOME) is what you want, one of those
 already exists (at least for the EC2 API).

 http://deltacloud.apache.org/

 It supports EC2 on the frontend and the OpenStack compute API on the
 backend.  I'm not sure how using this compares to the EC2 implementation
 in nova, though.

I am sceptical of the external proxy approach as there is a lot of
state to maintain (uuid to id mappings for example) which is hard to
do right in a proxy. I like the idea of the AWS APIs being secondary
APIs within nova. However, it's fair to say that there hasn't been much
work done on them recently.

Michael

--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python overhead for rootwrap

2013-07-25 Thread Michael Still
On Fri, Jul 26, 2013 at 7:43 AM, Thierry Carrez thie...@openstack.org wrote:

 I would rather support solution 3: create a single, separate  executable
 that does those 20 things that need to be done (can be a shell script
 with some logic in it), and have rootwrap call that *once*. That way you
 increase speed by 20 times without dumping the security model.

I worry about this script getting out of date compared with the nova
binary. What about an abstraction class around shell commands where
you specify what commands you want to run, then it exports a generated
shell script and executes it with root-wrap?

We'd of course have to pay attention to using secure temporary files
for the generated scripts, but we could ask for an OSSG bench audit of
those bits.
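
Purely as a sketch of that abstraction (it assumes the usual 'sudo
nova-rootwrap /etc/nova/rootwrap.conf ...' invocation, and it hand-waves the
rootwrap filter that would have to allow executing the generated script,
which is exactly the part needing the security review):

import os
import subprocess
import tempfile


class CommandBatch(object):
    def __init__(self):
        self.commands = []

    def add(self, *cmd):
        self.commands.append(' '.join(cmd))

    def run(self):
        # mkstemp creates the file readable/writable only by us.
        fd, path = tempfile.mkstemp(prefix='nova-batch-', suffix='.sh')
        try:
            with os.fdopen(fd, 'w') as script:
                script.write('#!/bin/sh\nset -e\n')
                script.write('\n'.join(self.commands) + '\n')
            os.chmod(path, 0o500)
            # One escalation (and one python start-up) for the whole batch.
            subprocess.check_call(['sudo', 'nova-rootwrap',
                                   '/etc/nova/rootwrap.conf', path])
        finally:
            os.unlink(path)


batch = CommandBatch()
batch.add('ip', 'link', 'set', 'br100', 'up')
batch.add('ip', 'addr', 'add', '10.0.0.1/24', 'dev', 'br100')
batch.run()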

Michael

--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stackalytics] 0.1 release [metrics]

2013-07-25 Thread Stefano Maffulli
On 07/23/2013 07:25 AM, Roman Prykhodchenko wrote:
 I still think counting lines of code is evil because it might encourage
 some developers to write longer code just for statistics.

Data becomes evil when you decide to use them for evil purposes :) I
don't think that lines of code is a bad metric per se: like any other
metric it becomes bad when used in an evile context. I'm getting more
and more convinced that it's a mistake to show ranks and classifications
in the dashboard and I'll be deleting all the ones that we may have on
http://activity.openstack.org. (see
https://bugs.launchpad.net/openstack-community/+bug/1205139)

Counting anything in OpenStack, from commits to number of reviews is not
a race, we don't need to *rank* top contributors. What we need is to
identify trends. Practical example: in the report for Grizzly, most
metrics put Red Hat and IBM visibly on top of many charts, while in
Folsom their contributions were much lower. The story of those numbers
was that IBM and Red Hat changed gear since Folsom and from 'involved'
became visibly and concretely 'committed'.  The story of those metrics
was not that Red Hat was first or second in some sort of race.

We should keep in mind that commits or bug resolutions to different
projects are not directly comparable, that line charts can damage the
appearance of some companies/people (lose face). Other charts need to
be explored (punch cards?) and avoid direct comparisons, maybe?

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-25 Thread Joshua Harlow
You mean process/forking API right?

Honestly, I'd sort of think the whole limits.py rate-limiting could also be 
turned off by default (or a warning logged) when multi-process nova-api is 
used, since that paste module actually returns the currently enforced limits 
(and how much remains), and on repeated calls to different processes those 
values will actually be different. This adds to the confusion that this 
in-memory, per-process rate-limiting solution creates, which also seems bad.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/limits.py

Maybe we should not have that code in nova in the future, idk.
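
A toy illustration of the shape of the problem (nothing nova-specific): each
forked API worker keeps its own in-memory counter, so the remaining allowance
a client is told depends on which process happened to answer.

import random


class Limiter(object):
    def __init__(self, limit):
        self.limit = limit
        self.used = 0                    # lives only in this process

    def hit(self):
        self.used += 1

    def remaining(self):
        return self.limit - self.used


# pretend we forked three API workers, each with its own Limiter
workers = dict((name, Limiter(limit=10))
               for name in ('worker-1', 'worker-2', 'worker-3'))

for _ in range(12):                      # 12 requests spread over the workers
    random.choice(list(workers.values())).hit()

for name, limiter in sorted(workers.items()):
    # Asking each worker for its limits gives a different answer, and none
    # of them reflects the 12 requests that were actually made.
    print('%s remaining: %d' % (name, limiter.remaining()))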

-Josh

From: Day, Phil philip@hp.commailto:philip@hp.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, July 25, 2013 9:07 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate 
limiting by default

+1 to turning it off.  Having something that doesn’t really work on by default 
now that we have a threaded API is just wrong

From: Rosa, Andrea (HP Cloud Services)
Sent: 25 July 2013 09:35
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate 
limiting by default

I’d like to turn it off by default, as already pointed in [1] I think the rate 
limiting should be managed by something else (for example load balancers)  in 
front of the API.

Regards
--
Andrea Rosa
[1]http://www.gossamer-threads.com/lists/openstack/operators/28599


From: Joe Gordon [mailto:joe.gord...@gmail.com]
Sent: 24 July 2013 23:39
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting 
by default

Hi all

I have proposed a patch to disable per-user rate limiting by default: 
https://review.openstack.org/#/c/34821/. And on Russell's request  does anyone 
care or prefer this to be enabled by default?

Here is some more context:

Earlier rate limiting discussion: 
http://www.gossamer-threads.com/lists/openstack/operators/28599
Related bug: https://bugs.launchpad.net/tripleo/+bug/1178529
rate limiting is per process, and doesn't act as expected in a multi-process 
environment: https://review.openstack.org/#/c/36516/

best,
Joe Gordon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] question on Application class concurrency; paste.app_factory mechanism

2013-07-25 Thread Luse, Paul E
Thanks for the reply Pete.  Don't quite follow you so will continue to do some 
reading and experimenting and see if I can't come up some additional 
questions...

Thx
Paul

-Original Message-
From: Pete Zaitcev [mailto:zait...@redhat.com] 
Sent: Thursday, July 25, 2013 9:51 AM
To: OpenStack Development Mailing List
Cc: Luse, Paul E
Subject: Re: [openstack-dev] [Swift] question on Application class concurrency; 
paste.app_factory mechanism

On Wed, 24 Jul 2013 02:31:45 +
Luse, Paul E paul.e.l...@intel.com wrote:

 I was thinking that each connection would get its own instance thus it 
 would be safe to store connection-transient information there but I 
 was surprised by my quick test.

Yeah, you have it tracked in __call__, typically hanging off controller.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-25 Thread John Griffith
Hey Everyone,

Something I've been kicking around for quite a while now but never really
been able to get around to is the idea of requiring that drivers in Cinder
run a qualification test and submit results prior to introduction in to
Cinder.

To elaborate a bit, the idea could start as something really simple like
the following:
1. We'd add a functional_qual option/script to devstack

2. Driver maintainer runs this script to setup devstack and configure it to
use their backend device on their own system.

3. Script does the usual devstack install/configure and runs the volume
pieces of the Tempest gate tests.

4. Grabs output and checksums of the directories in the devstack and
/opt/stack directories, bundles up the results for submission (a rough
sketch of this step follows the list)

5. Maintainer submits results
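
Purely as a sketch of step 4 above (file names and paths are placeholders,
not a proposed submission format), assuming the tempest output has already
been written to tempest-results.log:

import hashlib
import os
import tarfile


def checksum_tree(root):
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(65536), b''):
                    digest.update(chunk)
            sums[path] = digest.hexdigest()
    return sums


with open('checksums.txt', 'w') as out:
    for tree in ('/opt/stack', os.path.expanduser('~/devstack')):
        for path, digest in sorted(checksum_tree(tree).items()):
            out.write('%s  %s\n' % (digest, path))

with tarfile.open('driver-qual-results.tar.gz', 'w:gz') as bundle:
    bundle.add('checksums.txt')
    bundle.add('tempest-results.log')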

So why would we do this you ask?  Cinder is pretty heavy on the third party
driver plugin model which is fantastic.  On the other hand while there are
a lot of folks who do great reviews that catch things like syntax or logic
errors in the code, and unit tests do a reasonable job of exercising the
code it's difficult for folks to truly verify these devices all work.

I think it would be a very useful tool for initial introduction of a new
driver and even perhaps some sort of check that's run and submitted again
prior to milestone releases.

This would also drive some more activity and contribution in to Tempest
with respect to getting folks like myself motivated to contribute more
tests (particularly in terms of new functionality) in to Tempest.

I'd be interested to hear if folks have any interest or strong opinions on
this (positive or otherwise).  I know that some vendors like RedHat have
this sort of thing in place for certifications, and to be honest that
observation is something that caused me to start thinking about this again.

There are a lot of gaps here regarding how the submission process would
look, but we could start relatively simple and grow from there if it's
valuable or just abandon the idea if it proves to be unpopular and a waste
of time.

Anyway, I'd love to get feed-back from folks and see what they think.

Thanks,
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-25 Thread Russell Bryant
On 07/25/2013 08:44 PM, John Griffith wrote:
 Hey Everyone,
 
 Something I've been kicking around for quite a while now but never
 really been able to get around to is the idea of requiring that drivers
 in Cinder run a qualification test and submit results prior to
 introduction in to Cinder.

In general, a big +1 from me.  I've been thinking about similar things
for nova.  I think it's fair to set some requirements that ensure a base
level of functionality, quality, and a reasonably consistent user
experience.

I've been thinking along these lines for Nova, as well.  I also want to
require ongoing testing.  We have a wide range of functional test
coverage of the existing compute drivers [1].  I'd like to only accept
drivers if there is some sort of CI running against it with results that
are publicly available (Group A or B in [1]).  I posted about this a bit
before [2].

I really like the other thing Cinder has started doing, which is
requiring a base feature set for all drivers [3].  I'd like to do the
same for Nova, but haven't come up with a formalized proposal yet.

[1] https://wiki.openstack.org/wiki/HypervisorSupportMatrix
[2] http://lists.openstack.org/pipermail/openstack-dev/2013-July/011260.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2013-July/012119.html

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-25 Thread Joshua Harlow
100% agree; it's hard to handle these 3rd party types of drivers, but I think we 
need to find a way to test them that doesn't require having 
said 3rd party gear directly available.

Could it be possible to have CI gating be blocked/tested by individual 
subfolders of cinder?

For example, when the solidfire driver is modified, this would cause a 'trigger' 
to, say, solidfire (via some API), and solidfire could respond back saying 
whether said commit works.

Not sure if that’s feasible, but it does seem to be a similar situation in 
nova, neutron, cinder as more and more 3rd party 'driver-like' code appears.

From: John Griffith 
john.griff...@solidfire.commailto:john.griff...@solidfire.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, July 25, 2013 5:44 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [OpenStack][Cinder] Driver qualification

Hey Everyone,

Something I've been kicking around for quite a while now but never really been 
able to get around to is the idea of requiring that drivers in Cinder run a 
qualification test and submit results prior to introduction in to Cinder.

To elaborate a bit, the idea could start as something really simple like the 
following:
1. We'd add a functional_qual option/script to devstack

2. Driver maintainer runs this script to setup devstack and configure it to use 
their backend device on their own system.

3. Script does the usual devstack install/configure and runs the volume pieces 
of the Tempest gate tests.

4. Grabs output and checksums of the directories in the devstack and /opt/stack 
directories, bundles up the results for submission

5. Maintainer submits results

So why would we do this you ask?  Cinder is pretty heavy on the third party 
driver plugin model which is fantastic.  On the other hand while there are a 
lot of folks who do great reviews that catch things like syntax or logic errors 
in the code, and unit tests do a reasonable job of exercising the code it's 
difficult for folks to truly verify these devices all work.

I think it would be a very useful tool for initial introduction of a new driver 
and even perhaps some sort of check that's run and submitted again prior to 
milestone releases.

This would also drive some more activity and contribution in to Tempest with 
respect to getting folks like myself motivated to contribute more tests 
(particularly in terms of new functionality) in to Tempest.

I'd be interested to hear if folks have any interest or strong opinions on this 
(positive or otherwise).  I know that some vendors like RedHat have this sort 
of thing in place for certifications, and to be honest that observation is 
something that caused me to start thinking about this again.

There are a lot of gaps here regarding how the submission process would look, 
but we could start relatively simple and grow from there if it's valuable or 
just abandon the idea if it proves to be unpopular and a waste of time.

Anyway, I'd love to get feed-back from folks and see what they think.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-25 Thread Huang Zhiteng
Great idea and 100% agree.  It'd be even better if maintainers could publish
functional test results using their own back-ends on a regular basis
(weekly/bi-weekly test reports) to the 'openstack-dev' mailing list.


On Fri, Jul 26, 2013 at 9:05 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  100% agree, its hard to handle these 3rd party type of drivers but I
 think we need to find out a way that will test it in a way that doesn't
 require having said 3rd party gear directly available.

  Could it be possible to have CI gating be blocked/tested by individual
 subfolders of cinder.

  For example when the solidfire driver is modified, this would cause a
 'trigger' to say solidfire (via some API) that solidfire can respond with
 back saying said commit works.

  Not sure if that’s feasible, but it does seem to be a similar situation
 in nova, neutron, cinder as more and more 3rd party 'driver-like' code
 appears.

   From: John Griffith john.griff...@solidfire.com
 Reply-To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Date: Thursday, July 25, 2013 5:44 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [OpenStack][Cinder] Driver qualification

   Hey Everyone,

  Something I've been kicking around for quite a while now but never
 really been able to get around to is the idea of requiring that drivers in
 Cinder run a qualification test and submit results prior to introduction in
 to Cinder.

  To elaborate a bit, the idea could start as something really simple like
 the following:
 1. We'd add a functional_qual option/script to devstack

  2. Driver maintainer runs this script to setup devstack and configure it
 to use their backend device on their own system.

  3. Script does the usual devstack install/configure and runs the volume
 pieces of the Tempest gate tests.

  4. Grabs output and checksums of the directories in the devstack and
 /opt/stack directories, bundles up the results for submission

  5. Maintainer submits results

  So why would we do this you ask?  Cinder is pretty heavy on the third
 party driver plugin model which is fantastic.  On the other hand while
 there are a lot of folks who do great reviews that catch things like syntax
 or logic errors in the code, and unit tests do a reasonable job of
 exercising the code it's difficult for folks to truly verify these devices
 all work.

  I think it would be a very useful tool for initial introduction of a new
 driver and even perhaps some sort of check that's run and submitted again
 prior to milestone releases.

  This would also drive some more activity and contribution in to Tempest
 with respect to getting folks like myself motivated to contribute more
 tests (particularly in terms of new functionality) in to Tempest.

  I'd be interested to hear if folks have any interest or strong opinions
 on this (positive or otherwise).  I know that some vendors like RedHat have
 this sort of thing in place for certifications, and to be honest that
 observation is something that caused me to start thinking about this again.

  There are a lot of gaps here regarding how the submission process would
 look, but we could start relatively simple and grow from there if it's
 valuable or just abandon the idea if it proves to be unpopular and a waste
 of time.

  Anyway, I'd love to get feed-back from folks and see what they think.

  Thanks,
 John


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards
Huang Zhiteng
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-25 Thread John Griffith
On Thu, Jul 25, 2013 at 7:37 PM, yang, xing xing.y...@emc.com wrote:

 +1.  I like this idea.  With this qualification test, is each driver still
 required to have its own unit test?


Keep in mind this is just a proposal that I wanted to get feed-back on,
I'll likely submit something more concrete shortly.  That being said, no
this would NOT eliminate or preclude any current process or requirements.
 The idea here is to augment the existing process and provide some method
of visible functional testing compliance.


 Thanks,
 Xing


 On Jul 25, 2013, at 8:46 PM, John Griffith john.griff...@solidfire.com
 wrote:

 Hey Everyone,

 Something I've been kicking around for quite a while now but never really
 been able to get around to is the idea of requiring that drivers in Cinder
 run a qualification test and submit results prior to introduction in to
 Cinder.

 To elaborate a bit, the idea could start as something really simple like
 the following:
 1. We'd add a functional_qual option/script to devstack

 2. Driver maintainer runs this script to setup devstack and configure it
 to use their backend device on their own system.

 3. Script does the usual devstack install/configure and runs the volume
 pieces of the Tempest gate tests.

 4. Grabs output and checksums of the directories in the devstack and
 /opt/stack directories, bundles up the results for submission

 5. Maintainer submits results

 So why would we do this you ask?  Cinder is pretty heavy on the third
 party driver plugin model which is fantastic.  On the other hand while
 there are a lot of folks who do great reviews that catch things like syntax
 or logic errors in the code, and unit tests do a reasonable job of
 exercising the code it's difficult for folks to truly verify these devices
 all work.

 I think it would be a very useful tool for initial introduction of a new
 driver and even perhaps some sort of check that's run and submitted again
 prior to milestone releases.

 This would also drive more activity and contribution into Tempest by
 motivating folks like myself to contribute more tests (particularly for
 new functionality).

 I'd be interested to hear if folks have any interest or strong opinions on
 this (positive or otherwise).  I know that some vendors like Red Hat have
 this sort of thing in place for certifications, and to be honest, that
 observation is something that caused me to start thinking about this again.

 There are a lot of gaps here regarding how the submission process would
 look, but we could start relatively simple and grow from there if it's
 valuable or just abandon the idea if it proves to be unpopular and a waste
 of time.

 Anyway, I'd love to get feedback from folks and see what they think.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Alembic support

2013-07-25 Thread Adam Young
I've been looking into Alembic support.  It seems that there is one 
thing missing that I was counting on:  multiple migration repos. It 
might be supported, but the docs are thin, and reports vary.
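

For what it's worth, the thing I want to check is whether giving each
component its own version table gets us close enough.  Something like this in
each component's env.py -- just a sketch, the table name is made up, and it
assumes the version_table argument to context.configure() behaves the way the
docs suggest when several script directories share one database:

# Sketch of an env.py fragment for one extension's migrations.
from alembic import context
from sqlalchemy import engine_from_config, pool


def run_migrations_online():
    engine = engine_from_config(
        context.config.get_section(context.config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    connection = engine.connect()
    try:
        context.configure(
            connection=connection,
            target_metadata=None,  # this extension's metadata would go here
            version_table='oauth1_alembic_version')  # illustrative name
        with context.begin_transaction():
            context.run_migrations()
    finally:
        connection.close()


run_migrations_online()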


In the current Keystone implementation, we have a table like this:
mysql> desc migrate_version;
+-----------------+--------------+------+-----+---------+-------+
| Field           | Type         | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+-------+
| repository_id   | varchar(250) | NO   | PRI | NULL    |       |
| repository_path | text         | YES  |     | NULL    |       |
| version         | int(11)      | YES  |     | NULL    |       |
+-----------------+--------------+------+-----+---------+-------+


Right now we only have one row in there:

| keystone | /opt/stack/keystone/keystone/common/sql/migrate_repo |       0 |



However, we have been lumping all of our migrations together into a 
single repo, and we are just now looking to sort them out.  For example, 
Policy, Tokens, and Identity do not really need to share a database.  As 
such, they could go into separate migration repos, and it would keep 
changes to one from stepping on changes to another, avoiding the 
continuous rebasing problem we currently have.


In addition, we want to put each of the extensions into their own 
repos.  This happens to be an important time for that, as we have three 
extensions coming in that need SQL repos:  OAuth, KDS, and Attribute 
Mapping.
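

Mechanically, keeping the extensions on sqlalchemy-migrate for now would look
roughly like this -- a rough sketch only, with made-up paths, not the actual
Keystone code -- the point being that each repo ends up as its own row in
migrate_version, so extension migrations never have to rebase against the
main repo:

from migrate import exceptions as migrate_exceptions
from migrate.versioning import api as versioning_api

# One migrate_repo per extension; paths below are purely illustrative.
EXTENSION_REPOS = {
    'oauth1': '/opt/stack/keystone/keystone/contrib/oauth1/migrate_repo',
    'kds': '/opt/stack/keystone/keystone/contrib/kds/migrate_repo',
}


def sync_extension(db_url, extension):
    repo = EXTENSION_REPOS[extension]
    try:
        # Adds this repo's own row to migrate_version if it isn't there yet.
        versioning_api.version_control(db_url, repo)
    except migrate_exceptions.DatabaseAlreadyControlledError:
        pass
    # Runs only this repo's migrations, independently of the main repo.
    versioning_api.upgrade(db_url, repo)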


I think we should delay moving Keystone to Alembic until the end of 
Havana, or as the first commit in Icehouse.  That way, we have a clean 
cutover point. We can decide then whether to backport the Extension 
migrations or leave them under sqlalchemy-migrate. Mixing the two 
technologies side by side for a short period of time is going to be 
required, and I think we need to have a clear approach in place to avoid 
a mess.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Alembic support

2013-07-25 Thread Morgan Fainberg
+1 to getting the multiple repos in place.  Moving to Alembic later on in
H or even as the first commit of I should meet our goals to be on Alembic
in a reasonable timeframe.  This also allows us to ensure we aren't rushing
the work to get our migration repos over to Alembic.

I think that allowing the extensions to have their own repos sooner is
better, and if we end up with an extension that has more than 1 or 2
migrations, we have probably accepted code that is far from fully baked
(and we should evaluate how that code made it in).

I am personally in favor of making the first commit of Icehouse (barring
any major issue) the point at which we move to Alembic.  We can be
selective in taking extension modifications that add migration repos if it
is a major concern that moving to Alembic is going to be really painful.

Cheers,
Morgan Fainberg

On Thu, Jul 25, 2013 at 7:35 PM, Adam Young ayo...@redhat.com wrote:

 I've been looking into Alembic support.  It seems that there is one thing
 missing that I was counting on:  multiple migration repos. It might be
 supported, but the docs are thin, and reports vary.

 In the current Keystone implementation, we have a table like this:
 mysql> desc migrate_version;
 +-----------------+--------------+------+-----+---------+-------+
 | Field           | Type         | Null | Key | Default | Extra |
 +-----------------+--------------+------+-----+---------+-------+
 | repository_id   | varchar(250) | NO   | PRI | NULL    |       |
 | repository_path | text         | YES  |     | NULL    |       |
 | version         | int(11)      | YES  |     | NULL    |       |
 +-----------------+--------------+------+-----+---------+-------+


 Right now we only have one row in there:

 | keystone | /opt/stack/keystone/keystone/common/sql/migrate_repo |       0 |


 However, we have been lumping all of our migrations together into a single
 repo, and we are just now looking to sort them out.  For example, Policy,
 Tokens, and Identity do not really need to share a database.  As such, they
 could go into separate migration repos, and it would keep changes to one
 from stepping on changes to another, avoiding the continuous rebasing
 problem we currently have.

 In addition, we want to put each of the extensions into their own repos.
  This happens to be an important time for that, as we have three extensions
 coming in that need SQL repos:  OAuth, KDS, and Attribute Mapping.

 I think we should delay moving Keystone to Alembic until the end of
 Havana, or as the first commit in Icehouse.  That way, we have a clean
 cutover point. We can decide then whether to backport the Extension
 migrations or leave them under sqlalchemy-migrate. Mixing the two technologies
 side by side for a short period of time is going to be required, and I
 think we need to have a clear approach in place to avoid a mess.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev