Re: [openstack-dev] swift-bench 1.9.1-dev - AttributeError: Values instance has no attribute 'containers'

2013-07-10 Thread Chmouel Boudjnah
You probably want to report this on ssbench's GitHub issues.

Chmouel.

On Wed, Jul 10, 2013 at 4:22 AM, Snider, Tim tim.sni...@netapp.com wrote:
 I recently downloaded swift 1.9.1-dev.

 swift-bench gets the following error. What can I change to get this working
 successfully?

 Thanks,

 Tim



 root@controller21:~/ssbench-0.2.16# python -c 'import swift; print
 swift.__version__'
 1.9.1-dev
 root@controller21:~/ssbench-0.2.16#

 swift-bench -A http://localHost:8080/auth/v1.0 -K testing  -U test:tester -s
 10 -n 2 -g 1
 swift-bench 2013-07-09 19:17:00,338 INFO Auth version: 1.0
 Traceback (most recent call last):
   File "/usr/bin/swift-bench", line 149, in <module>
 controller.run()
   File "/root/swift/swift/common/bench.py", line 372, in run
 puts = BenchPUT(self.logger, self.conf, self.names)
   File "/root/swift/swift/common/bench.py", line 450, in __init__
 self.containers = conf.containers
 AttributeError: Values instance has no attribute 'containers'
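
 (Not from the thread: the AttributeError comes from optparse's Values
 object, which raises it for any option that was never defined. A minimal
 sketch of that behaviour, assuming nothing about swift-bench itself:)

 import optparse

 parser = optparse.OptionParser()
 parser.add_option('-s', dest='object_size')
 conf, _args = parser.parse_args(['-s', '10'])
 print conf.object_size   # prints: 10
 print conf.containers    # AttributeError: Values instance has no
                          #   attribute 'containers'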


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] GPG passphrase

2013-07-10 Thread Jobin Raju George
Hey!

I am trying to get the meters from ceilometer programmatically using Java
with the help of the SDKs provided by OpenStack.

While trying to execute the command mvn clean install or mvn clean install
assembly:assembly, it asks me for a GPG passphrase. Which passphrase
is it talking about?

If I just press enter, it gives me the following error message:
[ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-gpg-plugin:1.4:sign
(sign-artifacts) on project openstack-java-sdk: Exit code: 1 - [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException



Here are my specifications:


Apache Maven 3.0.5
Java version: 1.6.0_51, vendor: Sun Microsystems Inc.
Default locale: en_US, platform encoding: Cp1252
OS name: windows 7, version: 6.1, arch: amd64, family: windows
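
(A side note, not part of the original message: maven-gpg-plugin prompts for
the passphrase of the GPG key used to sign the build artifacts. Assuming the
project's pom.xml uses the plugin's standard skip property, signing can be
bypassed for a local build:)

mvn clean install -Dgpg.skip=true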

-- 

Thanks and regards,

Jobin Raju George

Third Year, Information Technology

College of Engineering Pune

Alternate e-mail: georgejr10...@coep.ac.in
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Quantum (Grizzly) setup on Fedora 18 - VM does not receive an IP address

2013-07-10 Thread Gopi Krishna B
We are unable to get an IP address when a VM gets launched, and the below
DHCP error is observed in the Dashboard logs.

The setup is done on Fedora 18 using OpenStack Grizzly. It's a 2-node setup,
with the Network+Controller node having 3 NICs, and the Compute node having 2
NICs. We tried configuring networking in VLAN mode.

em1 - mgmt network,
em2 - external/public network (br-ex is created on top of this iface)
eth1 - internal/data network (br-eth1 is created on top of this iface)
** *
plugin.ini
-
enable_tunneling = False
tenant_network_type = vlan
network_vlan_ranges = eth1:100:1000
integration_bridge = br-int
bridge_mappings = eth1:br-eth1,em2:br-ex
** *

The console log from the dashboards is as below.

Initializing random number generator... done.

Starting network...

udhcpc (v1.18.5) started

Sending discover...

Sending select for 192.168.120.2...

Sending select for 192.168.120.2...

Sending select for 192.168.120.2...

No lease, failing

WARN: /etc/rc3.d/S40network failed

cirros-ds 'net' up at 182.11

checking http://169.254.169.254/2009-04-04/instance-id

failed 1/20: up 182.13. request failed

failed 2/20: up 184.34. request failed

failed 3/20: up 186.36. request failed


Could anyone help us in resolving this issue? We have tried following
different links and options available on the internet, but could not resolve
this error. Let us know if further information is required to identify the
root cause.

Some more info follows; if someone could possibly identify the root cause, it
would be of great help to me.

From the tcpdump output, I could track the DHCP discover packet at the
tapXXX, qbrXXX, qvbXXX, qvoXXX, int-br-eth0 and phy-br-eth0 interfaces, but
not after that.

As per my understanding, the flow of packets should be: tapXX -> qbrXX
-> qvbXX -> qvoXX -> br-int -> int-br-eth0 -> phy-br-eth0 -> br-eth0 ->
eth0
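
(Not from the original message: a sketch of the usual way to see where the
packet stops, assuming the bridge and interface names from the plugin.ini
above:)

# on the compute node, with the names from plugin.ini above
ovs-vsctl show                          # check the br-int / br-eth1 patch ports
ovs-ofctl dump-flows br-eth1            # check the VLAN tag rewrite rules
tcpdump -n -i eth1 port 67 or port 68   # watch DHCP traffic leave the node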

So in this case, is there a missing security group rule which possibly
drops the packet?
I am not familiar with the iptables rules, so if I need to add any rules,
could you please help me in adding them.
-- 
Regards
Gopi Krishna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-10 Thread Thierry Carrez
Robert Collins wrote:
 Official Title: OpenStack Deployment
 PTL: Robert Collins robe...@robertcollins.net
 mailto:robe...@robertcollins.net
 Mission Statement:
   Develop and maintain tooling and infrastructure able to
   deploy OpenStack in production, using OpenStack itself wherever
   possible.
 
 I believe everyone is familiar with us, but just in case, here is some
 background: we're working on deploying OpenStack to bare metal using
 OpenStack components and cloud deployment strategies - such as Heat for
 service orchestration, Nova for machine provisioning, Neutron for network
 configuration, golden images for rapid deployment... etc etc. So far we
 have straightforward deployment of bare metal clouds both without Heat
 (so that we can bootstrap from nothing), and with Heat (for the
 bootstrapped layer), and are working on the KVM cloud layer at the moment.

Could you provide the other pieces of information mentioned at:
https://wiki.openstack.org/wiki/Governance/NewPrograms

Also if you want it discussed at the next TC meeting, please send a
heads-up (pointing to the -dev thread) to openstack-tc ML.

Thanks!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy 0.8 and Grizzly: heat and cinder failing

2013-07-10 Thread Thomas Goirand
On 07/10/2013 01:36 AM, David Ripton wrote:
 On 07/09/2013 12:46 PM, Thomas Goirand wrote:
 On 07/08/2013 08:32 PM, Sean Dague wrote:
 On 07/08/2013 04:50 AM, Mark McLoughlin wrote:
 On Mon, 2013-07-08 at 15:53 +0800, Thomas Goirand wrote:
 Hi,

 Since python-sqlalchemy 0.8.2 has been uploaded to Sid, Quantum is
 uninstallable there right now (see #715294).

 I am wondering: what's wrong with sqlalchemy >= 0.8, so that it is
 written explicitly in the requirements that we shouldn't use it? Is
 there a chance that having such a version of sqlalchemy will make
 all of
 OpenStack grizzly fail? What are the consequences? Would it be safe to
 simply patch the requirements file and ignore this?

 Don't really have a comment on the specifics, but ...

 The history of the 0.8 cap was the fact that 0.8beta1 (or something
 equiv) was uploaded near a freeze point. The < 0.8 cap didn't stop 0.8beta1 from
 being used (go version numbers).

 Also in 0.8 a piece was spun out as a separate library (I don't remember
 exactly which), which causes some build fails. Because it was around a
 freeze the cap was the right approach.

 However, projects really should be getting themselves on 0.8 in the
 Havana time frame. AFAIK it's really minor changes to work, so should be
 a simple review to bump it up.

  -Sean

 Indeed, most projects seem to work with the new SQLAlchemy. Though heat
 fails with a python crash dump:

   File
 "/home/zigo/sources/openstack/grizzly/heat/build-area/heat-2013.1.2/heat/db/sqlalchemy/models.py",
 line 32, in <module>
  class Json(types.TypeDecorator, types.MutableType):
 AttributeError: 'module' object has no attribute 'MutableType'

 Indeed, MutableType is gone in SQLAlchemy >= 0.8. I'm therefore unsure
 what to do to fix the heat package in Sid... :(
 Any help would be appreciated.
 
 Yeah, someone who understands the Heat model code well needs to make
 class Json no longer inherit from MutableType.  I hope it would be
 possible to do that in a backward-compatible way so it kept working with
 SQLAlchemy 0.7 in addition to working with 0.8.
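
 (Not the actual heat fix, just a minimal sketch of that direction,
 assuming heat's Json type only needs JSON (de)serialization:)

 import json
 from sqlalchemy.types import TypeDecorator, Text

 class Json(TypeDecorator):
     # Store the value as JSON text; no MutableType involved, so this
     # works on SQLAlchemy 0.7 as well as 0.8.
     impl = Text

     def process_bind_param(self, value, dialect):
         return json.dumps(value) if value is not None else None

     def process_result_value(self, value, dialect):
         return json.loads(value) if value is not None else None

 (The trade-off is that in-place mutations are no longer tracked
 automatically; callers have to reassign the attribute, or wrap the type
 with sqlalchemy.ext.mutable.)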
 
 There's also a big problem with Cinder:

   File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py",
 line 91, in runchange
  change.run(self.engine, step)
   File
 "/usr/lib/python2.7/dist-packages/migrate/versioning/script/py.py", line
 145, in run
  script_func(engine)
   File
 "/root/src/cinder/build-area/cinder-2013.1.2/cinder/db/sqlalchemy/migrate_repo/versions/002_quota_class.py",
 line 42, in upgrade
  _warn_on_bytestring=False),
 TypeError: __init__() got an unexpected keyword argument 'assert_unicode'

 Unit tests aren't run at all, and cinder refuses to install (because I'm
 doing the db_sync in the postinst, which fails).

 Help there would also be appreciated.

 AFAICT, these are the only packages affected by the SQLAlchemy upgrade.
 
 Old openstack DB migrations contained a lot of convert_unicode=True,
 unicode_error=None and _warn_on_bytestring=False in the Column
 creation code.  Dan Prince removed these from the Nova migrations in
 commit 93dec58156e , when he squashed all the migrations for Grizzly.
 Other projects still have them here and there, and that appears to be
 what is causing the above error.
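
 (Illustrative only, not the actual Cinder diff; the column name is
 hypothetical:)

 from sqlalchemy import Column, String

 # Before -- fails on SQLAlchemy 0.8, which removed these String() kwargs:
 # Column('quota_class', String(255, convert_unicode=True, assert_unicode=None,
 #                              unicode_error=None, _warn_on_bytestring=False))

 # After -- works on both 0.7 and 0.8:
 Column('quota_class', String(255))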
 
 I suspect, but haven't proven, that it's possible to get rid of them
 (because Nova did) and that getting rid of them will fix this problem
 (because Nova doesn't have the problem.)  Note that we don't like to
 modify database migrations because of compatibility concerns, so a
 change like this would need to receive extra review scrutiny to prove
 it couldn't break anything.
 
 I threw up a patch to excise these arguments from Cinder DB migrations.
 https://review.openstack.org/36302  I will add some comments about how
 scary this type of change is to the review.

Hey, thanks a lot David! I've used your patch and rebased it for my
Grizzly package (well, it's rather easy to just remove this
convert_unicode stuff), and then it built with no errors in the unit tests!
Package uploaded, Cinder case closed. Remaining is heat...

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-10 Thread Robert Collins
On 10 July 2013 20:01, Thierry Carrez thie...@openstack.org wrote:

 Robert Collins wrote:
  Official Title: OpenStack Deployment
  PTL: Robert Collins robe...@robertcollins.net
  mailto:robe...@robertcollins.net
  Mission Statement:
Develop and maintain tooling and infrastructure able to
deploy OpenStack in production, using OpenStack itself wherever
possible.
 
  I believe everyone is familiar with us, but just in case, here is some
  background: we're working on deploying OpenStack to bare metal using
  OpenStack components and cloud deployment strategies - such as Heat for
  service orchestration, Nova for machine provisioning Neutron for network
  configuration, golden images for rapid deployment... etc etc. So far we
  have straight forward deployment of bare metal clouds both without Heat
  (so that we can bootstrap from nothing), and with Heat (for the
  bootstrapped layer), and are working on the KVM cloud layer at the moment.

 Could you provide the other pieces of information mentioned at:
 https://wiki.openstack.org/wiki/Governance/NewPrograms

ack:

* Detailed mission statement (including why their effort is essential
to the completion of the OpenStack mission)

I think this is covered. In case it's not obvious: if you can't install
OpenStack easily, it becomes a lot harder to deliver to users. So
deployment is essential (and at the moment the market is assessing the
cost of deploying OpenStack at ~60K, so we need to make it a lot
cheaper).

* Expected deliverables and repositories

We'll deliver and maintain working instructions and templates for
deploying OpenStack.
Repositories that are 'owned' by Deployment today:
diskimage-builder
tripleo-image-elements
tripleo-heat-templates
os-apply-config
os-collect-config
os-refresh-config
toci [triple-o-CI][this is something we're discussing with infra about
where it should live... and mordred and jeblair disagree with each
other :)].
tripleo-incubator [we're still deciding if we'll have an actual CLI
tool or just point folk at the other bits, and in the interim stuff
lives here].


* How 'contribution' is measured within the program (by default,
commits to the repositories associated to the program)

Same as the rest of OpenStack: commits to any of these repositories, and
we need some way of recognising non-code contributions like extensive
docs/bug management etc., but we don't have a canned answer for the
non-code aspects.

* Main team members
Is this the initial review team, or something else? If it's the review
team, then me/Clint/Chris Jones/Devananda.
If something else, then I propose we start with those who have commits
in the last 6 months, namely [from a quick git check, this may be
imperfect]:
Me
Clint Byrum
Chris Jones
Ghe Rivero
Chris Krelle
Devananda van der Veen
Derek Higgins
Cody Somerville
Arata Notsu
Dan Prince
Elizabeth Krumbach
Joe Gordon
Lucas Alvares Gomes
Steve Baker
Tim Miller

Proposed initial program lead (PTL)
I think yours truly makes as much sense as anything :).

-Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service Type Framework implementation

2013-07-10 Thread Eugene Nikanorov
OK, given so much pressure on the db implementation, I think I'm just going to
post the in-memory implementation and we'll decide if it fits our needs.

Thanks,
Eugene.


On Wed, Jul 10, 2013 at 5:56 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mark


 2013/7/9 Mark McClain mark.mccl...@dreamhost.com:
 
  On Jul 9, 2013, at 5:37 PM, Nachi Ueno na...@ntti3.com wrote:
 
  We have two suboption for db api based solution
 
  Option4. REST API + DB with Preload with Conf
 
 https://docs.google.com/presentation/d/1v0nLTEsFOwWeYpYjpw4qe3QHB5lLZEE_b0TmmR5b7ic/edit#slide=id.gf14b7b30_00
 
  so IMO, we can drop  option3.
 
  I believe option4 is easy to implement.
 
 
  I'm not onboard with option 4 either.  At the last summit, we talked
 about making Neutron easier to deploy.  Using a database to sync
 configuration state adds complexity.  Having some values in a configuration
 and others in the database (even cached) is a recipe for a major headache.
  For the deployments running multiple instances of Neutron, they should be
 using Chef, Salt, etc. for managing their configs anyway.
 
  Using only configuration files (option 1) remains my preference.

 only configuration files (option 1)  is also acceptable for me.
 However, the headache continues even if we choose option1, because
 relation with service type
 and service resources are in the DB.

 Note that we still need to provide a way to add or remove service types.

 Option1-1)
   Allow creating a new relation if it appears in the conf.
   Remove the relation if it disappears from the conf.

   IMO, this will run into the same problem as the current implementation

 https://docs.google.com/a/ntti3.com/presentation/d/1v0nLTEsFOwWeYpYjpw4qe3QHB5lLZEE_b0TmmR5b7ic/edit#slide=id.gf0f4e2a2_1136

 Option1-2) Provide an admin REST API to enable/disable service types
 Allow creating a new relation if it is enabled by the API
  Remove the relation if it is disabled by the API

 This is my preference. And IMO, this is same as option4.

 Best
 Nachi




  mark
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTLs] Proposed simplification around blueprint tracking

2013-07-10 Thread Thierry Carrez
This process change is now completed for all integrated projects.

I updated the wiki so that it reflects the new usage of the blueprints
fields:

https://wiki.openstack.org/wiki/Blueprints

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How Nova-api should deal with secgroup identifier being UUID in Quantum ?

2013-07-10 Thread Jordan Pittier
Hi,
Any ideas on this one? Maybe I should forward this email to a more
specific ML?


On Thu, Jul 4, 2013 at 4:36 PM, Jordan Pittier jpitt...@octo.com wrote:

 Hi guys,

 As you may know :
 * with Quantum, secgroups are uniquely identified by UUID.
 * with Nova-Net, secgroups are uniquely identified by numerical ID.

 At the moment Nova-api, before calling Nova-Net or Quantum (see
 nova/api/openstack/compute/contrib/security_group*), performs some calls to
 validate_id(), defined in:
 * nova/network/security_group/quantum_driver.py for Quantum
 * nova/compute/api.py for Nova-Net

 Validate_id() raises an HTTPBadRequest in case the identifier is not a
 UUID for Quantum or an ID for Nova-Net.


 The first thing to notice is: (1) it's Nova-API that performs the
 identifier validation and raises the exception.


 This API mismatch breaks 4 Tempest tests (see
 bugs.launchpad.net/tempest/+bug/1182384) and could be confusing to the
 user as Sean Dague reported in this bug report.

 I see several approaches to deal with this :
 1) This API change can't be hidden; clients (and Tempest) must refer to
 security groups by their specific identifier, i.e. clients must be aware of
 the backing network implementation. (see review.openstack.org/#/c/29899/)
 2) Encapsulate all calls to validate_id() in a try/except HTTPBadRequest
 and raise an HTTPNotFound instead (exception translation; see the sketch
 after this list)
 3) Don't do any kind of validation, for neither Nova-Net nor Quantum. Some
 unit tests in test_quantum_security_groups.TestQuantumSecurityGroups must
 be adapted/removed (see review.openstack.org/#/c/35285/ patchsets 2 and 4
 for 2 different approaches). Let Quantum and Nova-Net deal with malformed
 inputs.
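
 (A minimal sketch of option 2, assuming the webob exceptions nova-api
 already uses; the wrapper name is hypothetical:)

 import webob.exc

 def validate_id_translated(driver, sg_id):
     # Translate the driver's "malformed id" complaint into "not found",
     # hiding the backend-specific id format from the API user.
     try:
         return driver.validate_id(sg_id)
     except webob.exc.HTTPBadRequest:
         raise webob.exc.HTTPNotFound()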

 What do you think?
 Thanks a lot!
  Jordan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceilometer XSD

2013-07-10 Thread Julien Danjou
On Tue, Jul 09 2013, Jobin Raju George wrote:

 The reason I initiated contributing the XSD was that most of the other
 components (nova, keystone, glance, etc.) have their XSDs in the
 repository, so assuming they were contributed by people like me (and updated
 from time to time by the same or other fellows), I would patch this XSD to
 the repository.

 However, since XSD's need to be updated as soon as the respective API's
 are, I am not sure whether they are updated by dedicated developers. Please
 let me know if there are such restrictions. Thanks for your time!

I think that what Doug meant is that since our API is autogenerated
from Python code, the XSD could likely be autogenerated from this code
too, if we had the right tool. It would therefore be better to build
such a tool than to maintain an XSD file by hand, however brave your
hands may be.
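
(A toy illustration of that idea, deriving the schema from the Python type
definitions rather than hand-maintaining it; nothing here is real
ceilometer code:)

class Meter(object):
    fields = {'name': 'xs:string', 'volume': 'xs:decimal'}

def to_xsd(cls):
    # Emit an xs:complexType with one xs:element per declared field.
    parts = ['<xs:complexType name="%s"><xs:sequence>' % cls.__name__]
    for fname, ftype in sorted(cls.fields.items()):
        parts.append('<xs:element name="%s" type="%s"/>' % (fname, ftype))
    parts.append('</xs:sequence></xs:complexType>')
    return ''.join(parts)

print to_xsd(Meter)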

-- 
Julien Danjou
-- Free Software hacker - freelance consultant
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceilometer XSD

2013-07-10 Thread Jobin Raju George
OK, fine. I agree, it's better since that would make sure both the XSD and
the API agree with each other. Thanks for your time!


On Wed, Jul 10, 2013 at 3:45 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jul 09 2013, Jobin Raju George wrote:

  The reason I initiated to contribute the XSD was most of the other
  components(nova, keystone, glance, etc.) have their XSD's in the
  repository, so assuming they were contributed by people like me(and
 updated
  from time to time by the same or other fellows), I would patch this XSD
 to
  the repository.
 
  However, since XSD's need to be updated as soon as the respective API's
  are, I am not sure whether they are updated by dedicated developers.
 Please
  let me know if there are such restrictions. Thanks for your time!

 I think that what Doug meant, is that since our API is autogenerated
 From Python code, the XSD could likely be autogenerated from this code
 too, if we had the right tool. It would therefore be better to build
 such a tool than to maintain an XSD file by hand, as far as brave your
 hands can be.

 --
 Julien Danjou
 -- Free Software hacker - freelance consultant
 -- http://julien.danjou.info




-- 

Thanks and regards,

Jobin Raju George

Third Year, Information Technology

College of Engineering Pune

Alternate e-mail: georgejr10...@coep.ac.in
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Removing OS_AUTH_SYSTEM

2013-07-10 Thread Chmouel Boudjnah
On Fri, Jun 21, 2013 at 5:46 PM, Monty Taylor mord...@inaugust.com wrote:
 This is some preliminary works to move novaclient to use
 keystoneclient instead of implementing its own[1] client to keystone.
 If the OS_AUTH_SYSTEM feature was really needed[2] we should then
 moving it to keystoneclient.

 Agree. If this is a general feature someone needs, we should move it to
 keystoneclient.

Agreed, I have started a review here which moves Alvaro's work from
novaclient to keystoneclient:

https://review.openstack.org/#/c/36427/

Let me know, Alvaro, if that would work for you before I polish it
(fixing the tests) and get it ready for review.

Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service Type Framework implementation

2013-07-10 Thread Akihiro MOTOKI
Hi,

Sorry for the late cut-in.

I agree that dynamic configuration through the API is not easy to implement.
For now, the conf-based approach without a database (option 1) looks like
the best way,
unless we need dynamic configuration through the API.

 1) From logic perspective service provider could be referenced by
(service_type, name) as it's unique primary key.
 2) From data normalization perspective it's better (and more convenient)
to have an unique ID in resource provider model.
 Obviously having ID works for DB implementation and doesn't work for
in-memory implementation.
 In other words, we can't use ID if we go with in-memory implementation.

I think an ID is not necessarily required.
In the DB approach, we can specify multiple fields as a primary key.
In the in-memory approach, we can use a json-serialized string as a key,
like json.dumps({'type': 'xxx', 'name': 'yyy'}).

In typical use cases,
(1) neutron-server retrieves a provider from the association table
(which is usually implemented on a database), and
(2) neutron-server determines a driver from the provider.
In this case, a dict-based approach does enough, I believe.
Is there any other typical access pattern?
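
(A toy sketch of that in-memory association, using a (type, name) tuple
key, equivalent in spirit to the json.dumps key above; all names are
illustrative:)

_providers = {}  # (service_type, name) -> driver

def add_provider(service_type, name, driver):
    _providers[(service_type, name)] = driver

def get_driver(service_type, name):
    # (1) the association gives us (type, name); (2) look up the driver.
    return _providers[(service_type, name)]

add_provider('LOADBALANCER', 'haproxy', 'path.to.HaproxyDriver')
print get_driver('LOADBALANCER', 'haproxy')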

 3) From data modelling perspective it's better to have ID in service
provider model as referencing models will be simpler and easier to maintain.

As long as we don't have more keys than type and name to identify providers,
the (type, name) combination looks simple enough.

A service provider is similar to a flavor in nova at some point:
a flavor represents a combination of many fields.
If there is a possible case where a provider definition has more unique
keys, the ID approach makes much more sense.

 4) From CLI perspective it's more convenient if resource has ID, it's a
common way of specifying resource.

From the API perspective, for an association from a resource to a provider,
the type is determined from the resource and what we need to specify is only
the name.
As long as we can identify a provider by (type, name),
there is no difference between using an ID and using a name.

Regarding a possible demerit of having no ID: it is difficult to specify a
particular provider to show its detail.
For now a provider has only a couple of visible fields (type, name, default)
through the API, so list-service-providers does enough and show-service-provider
does not provide more. (It just provides API consistency with other
resources.)

 5) From user perspective it's more convenient to specify the name of
service provider.
 But that is usually solved either by Horizon or by cli, like it's done
for networks/subnets where name of the object is specified.


Thanks,
Akihiro



2013/7/10 Eugene Nikanorov enikano...@mirantis.com

 Ok, having so much pressure on db implementation, I think I'm just going
 to post in-memory implementation and we'll decide if it will fit our needs.

 Thanks,
 Eugene.


 On Wed, Jul 10, 2013 at 5:56 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mark


 2013/7/9 Mark McClain mark.mccl...@dreamhost.com:
 
  On Jul 9, 2013, at 5:37 PM, Nachi Ueno na...@ntti3.com wrote:
 
  We have two suboption for db api based solution
 
  Option4. REST API + DB with Preload with Conf
 
 https://docs.google.com/presentation/d/1v0nLTEsFOwWeYpYjpw4qe3QHB5lLZEE_b0TmmR5b7ic/edit#slide=id.gf14b7b30_00
 
  so IMO, we can drop  option3.
 
  I believe option4 is easy to implement.
 
 
  I'm not onboard with option 4 either.  At the last summit, we talked
 about making Neutron easier to deploy.  Using a database to sync
 configuration state adds complexity.  Having some values in a configuration
 and others in the database (even cached) is a recipe for a major headache.
  For the deployments running multiple instances of Neutron, they should be
 using Chef, Salt, etc. for managing their configs anyway.
 
  Using only configuration files (option 1) remains my preference.

 only configuration files (option 1)  is also acceptable for me.
 However, the headache continues even if we choose option1, because
 relation with service type
 and service resources are in the DB.

 Note that we still need to provide way to add or remove service types.

 Option1-1)
Allow to create new relation if it appears in the conf.
Remove the relation if it is disappears from conf.

IMO, This will fall on same problem of current implementation

 https://docs.google.com/a/ntti3.com/presentation/d/1v0nLTEsFOwWeYpYjpw4qe3QHB5lLZEE_b0TmmR5b7ic/edit#slide=id.gf0f4e2a2_1136

 Option1-2) Provide admin rest api for enable/disable service types
 Allow to create new relation if it is enabled by API
  Remove the relation if it disabled by API

 This is my preference. And IMO, this is same as option4.

 Best
 Nachi




  mark
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-10 Thread Joe Gordon
On Mon, Jul 8, 2013 at 8:53 AM, Ilya Kharin ikha...@mirantis.com wrote:

 Hi all.

 In my opinion it is about things that can live in one form or another,
 because
 in some cases there is a need to place an instance in the same place where
 its
 block device is located or will be attached. Both solutions that let you
 do it,
 I mean filter and weigher, have a right to life. A lot depends on the
 requirements that are present when you start an instance:

 1.  In the case when you want to put them together preferably, there
 should be
 a weigher. If that fails, then the user should not worry about it, the
 instance will be started though not as optimally as wanted.
 2.  When it is important that they are together and nothing else is
 acceptable,
 then there should be a filter. Some applications that are built on top
 of
 OpenStack may require that instance must be together with a particular
 volume.


I question how 'cloudy' an architecture is that *requires* instances and
volumes to be on the same node.  If we treat instances as ephemeral and
volumes as persistent, having them live on the same node is a contradiction.

Also, I agree with Russell: I don't like scheduler hints, and don't want to
add more if we can help it.

So my vote is make this a weight and not a filter.
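
(Not from the thread: a rough sketch of what such a weigher might look
like; the class and method names follow nova's weigher convention of the
time, and the volume-host hint is hypothetical:)

from nova.scheduler import weights

class VolumeAffinityWeigher(weights.BaseHostWeigher):
    def _weigh_object(self, host_state, weight_properties):
        # Prefer, but never require, the host that holds the volume.
        volume_host = weight_properties.get('volume_host')  # hypothetical key
        return 1.0 if volume_host == host_state.host else 0.0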



 Thus, both the filter and the weighter can be used, and it would give a
 great
 flexibility to choose what and how to use, but both options will require to
 specify a particular volume via scheduler hints. On the other hand, it is
 possible to use block_device_mapping, which allows to select several
 devices.
 In the case of multiple devices it is not clear which of them to use for
 affinity choice, this issue can be resolved in the same way through a
 scheduler hint. Thus the special hint gives ability to use affinity more
 generally.

 --
 Ilya Kharin
 Software Engineer
 OpenStack Services
 Mirantis Inc.


 On Jul 4, 2013, at 11:56 AM, Álvaro López García 
 alvaro.lopez.gar...@cern.ch wrote:

  On Wed 03 Jul 2013 (18:24), Alexey Ovchinnikov wrote:
  Hi everyone,
 
  Hi Alexey.
 
  for some time I have been working on an implementation of a filter that
  would allow to force instances to hosts which contain specific volumes.
  A blueprint can be found here:
  https://blueprints.launchpad.net/nova/+spec/volume-affinity-filter
  and an implementation here:
  https://review.openstack.org/#/c/29343/
 
  This is something we were eager to see and that we were also aiming to
  implement in the future, so, great job!
 
  The filter works for LVM driver and now it picks either a host
 containing
  specified volume
  or nothing (thus effectively failing instance scheduling). Now it fails
  primarily when it can't find the volume. It has been
  pointed to me that sometimes it may be desirable not to fail instance
  scheduling but to run it anyway. However this softer behaviour fits
 better
  for weighter function. Thus I have registered a blueprint for the
  weighter function:
 
 https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function
 
  I was thinking about both the filter and the weighter working together.
 The
  former
  could be used in cases when we strongly need storage space associated
 with
  an
  instance and need them placed on the same host. The latter could be used
  when
  storage space is nice to have and preferably on the same host
  with an instance, but not so crucial as to have the instance running.
 
  During reviewing a question appeared whether we need the filter and
  wouldn't things be better
  if we removed it and had only the weighter function instead. I am not
 yet
  convinced
  that the filter is useless and needs to be replaced with the weighter,
  so I am asking for your opinion on this matter. Do you see usecases for
 the
  filter,
  or the weighter will answer all needs?
 
  I'd go with the weigher. We have some use cases for this volume
  affinity, but they always require to access to their data rather than
  failing. Moreover, the filter implies that the user has some knowledge
  on the infrastructure (i.e. that volumes and instances can be
  coallocated) and it is only available for LVM drivers, whereas the
  weigher should work transparently in all situations.
 
  Cheers,
  --
  Álvaro López García  al...@ifca.unican.es
  Instituto de Física de Cantabria http://alvarolopez.github.io
  Ed. Juan Jordá, Campus UC  tel: (+34) 942 200 969
  Avda. de los Castros s/n
  39005 Santander (SPAIN)
  _
  If you haven't used grep, you've missed one of the simple pleasures of
  life. -- Brian Kernighan
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] swift-bench 1.9.1-dev - AttributeError: Values instance has no attribute 'containers'

2013-07-10 Thread Snider, Tim
Oops, that's a swift-bench error, not ssbench.
Sorry.

From: Snider, Tim
Sent: Tuesday, July 09, 2013 9:23 PM
To: openstack-dev@lists.openstack.org
Subject: swift-bench 1.9.1-dev - AttributeError: Values instance has no 
attribute 'containers'


I recently downloaded swift 1.9.1-dev.

swift-bench gets the following error. What can I change to get this working
successfully?

Thanks,

Tim



root@controller21:~/ssbench-0.2.16#
python -c 'import swift; print swift.__version__'
1.9.1-dev
root@controller21:~/ssbench-0.2.16#

swift-bench -A http://localHost:8080/auth/v1.0 -K testing  -U test:tester -s 10 
-n 2 -g 1
swift-bench 2013-07-09 19:17:00,338 INFO Auth version: 1.0
Traceback (most recent call last):
  File "/usr/bin/swift-bench", line 149, in <module>
controller.run()
  File "/root/swift/swift/common/bench.py", line 372, in run
puts = BenchPUT(self.logger, self.conf, self.names)
  File "/root/swift/swift/common/bench.py", line 450, in __init__
self.containers = conf.containers
AttributeError: Values instance has no attribute 'containers'
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] volume affinity filter for nova scheduler

2013-07-10 Thread Russell Bryant
On 07/10/2013 08:34 AM, Robert Collins wrote:
 On 4 July 2013 03:54, Russell Bryant rbry...@redhat.com wrote:
 
 Thanks for starting this thread.

 I was pushing for the weight function.  It seems much more appropriate
 for a cloud environment than the filter.  It's an optimization that is
 always a good idea, so the weight function that works automatically
 would be good.  It's also transparent to users.

 Some things I don't like about the filter:

  - It requires specifying a scheduler hint

  - It's exposing a concept of co-locating volumes and instances on the
 same host to users.  This isn't applicable for many volume backends.  As
 a result, it's a violation of the principle where users ideally do not
 need to know or care about deployment details.
 
 We'll probably need something like this for Ironic with persistent
 volumes on machines - yes its a rare case, but when it matters, it
 matters a great deal.

I believe you, but I guess I'd like to better understand how this works
to make sure what gets added actually solves your use case.  Is there
already support for Cinder managed persistent volumes that live on
baremetal nodes?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ml2] ML2 Sub-Team Meeting tomorrow

2013-07-10 Thread Luke Gorrie
I also won't make the meeting today. I have now started writing code for
the Tail-f NCS mechanism driver based on Andre's great work. Thanks, Andre!


On 10 July 2013 06:53, Andre Pech ap...@aristanetworks.com wrote:

 Thanks Kyle,

 I'm unfortunately going to be on a plane during tomorrow's meeting, so
 wanted to send a quick update on the ML2 mechanism driver infrastructure (
 https://blueprints.launchpad.net/neutron/+spec/ml2-mechanism-drivers).
 Thanks to everyone for the latest rounds of review, I believe that I've
 incorporated all of the feedback so far and appreciate people taking
 another look. I'm available most of tomorrow so will try to be responsive
 to comments so that we can hopefully get this merged and into H2.

 Thanks
 Andre


 On Tue, Jul 9, 2013 at 7:20 PM, Kyle Mestery (kmestery) 
 kmest...@cisco.com wrote:

 Just a reminder, we have our weekly ML2 sub team meeting Wednesday at
 1400UTC. The agenda is on the meeting page here:

 https://wiki.openstack.org/wiki/Meetings/ML2

 We have a number of ML2 related blueprints in review with a hope of
 getting them in for H2 tomorrow, we'll focus on those in the meeting
 tomorrow for the most part.

 Thanks!
 Kyle
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal for new Program: OpenStack Deployment

2013-07-10 Thread Anne Gentle
Hi Robert,

What's your plan for documenting the efforts so that others can do this in
their environments? Is there any documentation currently for which you can
send links?

The Doc team is especially interested in configuration docs and
installation docs as those are the toughest to produce in a timely,
accurate manner now. We have a blueprint for automatically generating docs
from configuration options in the code. We are trying to determine a good
path for install docs for meeting the release deliverable -- much
discussion at
http://lists.openstack.org/pipermail/openstack-docs/2013-July/002114.html.
Your input welcomed.

Thanks,
Anne


On Wed, Jul 10, 2013 at 3:40 AM, Robert Collins
robe...@robertcollins.netwrote:

 On 10 July 2013 20:01, Thierry Carrez thie...@openstack.org wrote:
 
  Robert Collins wrote:
   Official Title: OpenStack Deployment
   PTL: Robert Collins robe...@robertcollins.net
   mailto:robe...@robertcollins.net
   Mission Statement:
 Develop and maintain tooling and infrastructure able to
 deploy OpenStack in production, using OpenStack itself wherever
 possible.
  
   I believe everyone is familiar with us, but just in case, here is some
   background: we're working on deploying OpenStack to bare metal using
   OpenStack components and cloud deployment strategies - such as Heat for
   service orchestration, Nova for machine provisioning Neutron for
 network
   configuration, golden images for rapid deployment... etc etc. So far we
   have straight forward deployment of bare metal clouds both without Heat
   (so that we can bootstrap from nothing), and with Heat (for the
   bootstrapped layer), and are working on the KVM cloud layer at the
 moment.
 
  Could you provide the other pieces of information mentioned at:
  https://wiki.openstack.org/wiki/Governance/NewPrograms

 ack:

 * Detailed mission statement (including why their effort is essential
 to the completion of the OpenStack mission)

 I think this is covered. In case its not obvious: if you can't install
 OpenStack easily, it becomes a lot harder to deliver to users. So
 deployment is essential (and at the moment the market is assessing the
 cost of deploying OpenStack at ~ 60K - so we need to make it a lot
 cheaper).

 * Expected deliverables and repositories

 We'll deliver and maintain working instructions and templates for
 deploying OpenStack.
 Repositories that are 'owned' by Deployment today
 diskimage-builder
 tripleo-image-elements
 triple-heat-templates
 os-apply-config
 os-collect-config
 os-refresh-config
 toci [triple-o-CI][this is something we're discussing with infra about
 where it should live... and mordred and jeblair disagree with each
 other :)].
 tripleo-incubator [we're still deciding if we'll have an actual CLI
 tool or just point folk at the other bits, and in the interim stuff
 lives here].


 * How 'contribution' is measured within the program (by default,
 commits to the repositories associated to the program)

 Same as rest of OpenStack : commits to any of these repositories, and
 we need some way of recognising non-code contributions like extensive
 docs/bug management etc, but we don't have a canned answer for the
 non-code aspects.

 * Main team members
 Is this the initial review team, or something else? If its the review
 team, then me/Clint/Chris Jones/Devananda.
 If something else then I propose we start with those who have commits
 in the last 6 months, namely [from a quick git check, this may be
 imperfect]:
 Me
 Clint Byrum
 Chris Jones
 Ghe Rivero
 Chris Krelle
 Devananda van der Veen
 Derek Higgins
 Cody Somerville
 Arata Notsu
 Dan Prince
 Elizabeth Krumbach
 Joe Gordon
 Lucas Alvares Gomes
 Steve Baker
 Tim Miller

 Proposed initial program lead (PTL)
 I think yours truely makes as much sense as anything :).

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Cloud Services

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problem with how python-neutronclient deals with Neutron exceptions

2013-07-10 Thread Akihiro MOTOKI
Basically #2 makes sense to me.

 2) Enrich/enhance the NeutronServer exceptions with a type and detail
properties.
 This way, when a NeutronServer exception is serialized and sent to
python-neutronclient,
 the specific NeutronClient exception can be raised (and then sent to
nova-api).

What granularity are you thinking of? I am not sure what information would
be passed as type and detail.
Could you give an example of what you have in mind?

Exceptions on the neutron-server side are classified into categories, and the
status code of a response
is determined based on these categories. There are cases where multiple
categories are mapped
to a single status code. If the granularity of the client exceptions is the
same as these categories,
it seems easy to map categories to client-side exceptions (by using
the type).

If more detailed granularity is needed, the situation becomes more
complicated.
If we map server-side exceptions to client-side exceptions one to one, then
whenever server-side exceptions change or increase, neutronclient needs to
catch up with them.
In addition, some exceptions are defined in extensions. This makes
one-to-one mapping
more difficult.

Any thought?
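
(Not the actual Neutron code: a minimal sketch of what option 2 amounts to
on the server side, using the key names the client-side handler looks for;
the class and field contents are illustrative:)

import json

class NeutronServerException(Exception):
    message = 'An unknown exception occurred.'

    def serialize(self):
        # The client's exception_handler_v20() could key off 'type' to pick
        # the specific NeutronClientException subclass to raise.
        return json.dumps({
            'type': self.__class__.__name__,   # e.g. 'PortInUse'
            'message': self.message,
            'detail': '',
        })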

Thanks,
Akihiro

2013/7/5 Jordan Pittier jpitt...@octo.com

 Hi guys,

 Nova-api makes extensive use of python-neutronclient. But the only
 exception from neutronclient that nova-api catches is the
 generic QuantumClientException (defined in
 quantumclient/common/exceptions.py).

 neutronclient has some more specific exceptions
 like PortInUseClient, NetworkNotFoundClient. But they are never raised
 because I believe there is a bug either in neutron server or in
 neutronclient, related to exceptions.

 In neutronclient, all exceptions are handled in
 neutronclient/v2_0/client.py::exception_handler_v20(). The code is supposed
 to catch a Neutron(Server)Exception and raise the corresponding specific
 NeutronClientException. In order to do that, NeutronClient expects the
 deserialized Neutron(Server)Exception to be a dictionary which has the
 keys 'type', 'message' and 'detail'. But these keys are never found in any
 Neutron(Server)Exceptions, so instead the generic NeutronClientException is
 raised.

 If you look at how Neutron(Server) exceptions are defined
 (quantum/quantum/common/exceptions.py), indeed there's no mention of 'type'
 or 'detail' (though 'message' is defined). So it's logical that
 neutronclient always raises the generic NeutronClientException.

 Also see this bug reports :
 https://bugs.launchpad.net/python-neutronclient/+bug/1178734
 https://bugs.launchpad.net/python-neutronclient/+bug/1187698


 What should we do about this ? :
 1) Pretty much nothing. Nova-api still catches only the generic
 NeutronClientException, and based the further processing of the exception
 on the correct status_code reported by neutronclient. In that case, we
 should clean the code of neutronclient::exception_handler_v20() because it
 has a lot of dead code.
 2) Enrich/enhance the NeutronServer exceptions with a type and detail
 properties. This way, when a NeutronServer exception is serialized and sent
 to python-neutronclient, the specific NeutronClient exception can be raised
 (and then sent to nova-api).
 3) You tell me :) !

 Thanks a lot
 Jordan




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Akihiro MOTOKI amot...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] common requirements

2013-07-10 Thread Thierry Carrez
John Griffith wrote:
 I ran into an issue with Cinder the other week and it turned out it was
 due to some conflicts with the keystoneclient version.
 
 Long story short, the issue is that Cinder did the straight sync over
 from the common-requirements file which placed an upper bound on
 keystoneclient.  It looks like other projects haven't done this and most
 don't have the upper bound set.  There are other mismatches however I
 was only working on the one.
 
 I was going to log a bug against all the projects to do a sync of
 requirements and test-requirements to get us all aligned.  Before doing
 so I thought I should check to see if there are known cases where the
 settings in the common requirements file don't work for you currently? 
 I understand that horizon for example has some upcoming requirements for
 keystoneclient 0.3.0, so that could be a reason to update the req's file
 first.
 
 Anyway if there are issues that folks already know about then I suppose
 the answer is to get that fixed up first and then we can start working
 towards getting all of the projects in sync.
 
 Thoughts, agreement, disagreement etc?

That sounds sane to me. You should push that early this week or wait
until after the havana-2 publication though... since late dep changes have a
high regression potential.
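
(For concreteness, not the actual lines from the files: the conflict John
describes comes from one project capping the client while another does not,
e.g.)

# cinder requirements.txt (capped):
python-keystoneclient>=0.2.0,<0.3.0
# another project (uncapped):
python-keystoneclient>=0.2.0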

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service Type Framework implementation

2013-07-10 Thread Akihiro MOTOKI
Thanks Eugene,

I have looked at your new patch. It looks nice.
The router-provider association can be merged into the service-provider
association,
but that can be done in another patch, and I can do it.

I am also working to support multiple router implementations in the NEC plugin
using this framework.
It is similar to Salvatore's distributed router patch.

Thanks,
Akihiro

2013/7/10 Salvatore Orlando sorla...@nicira.com

 Thanks Eugene,

 I am already looking at your new patch.
 Thankfully it seems that keeping providers in configuration files was not
 as hard as anticipated in previous rounds of reviews.

 I don't think what you did is a hack; I will rework the
 router-provider association extension in the distributed router patch or
 another patch.
 From my point of view, I think you can even remove that code altogether
 from your patch - if you don't feel happy about it.
 I will take care of restoring that extension afterwards; after all, it is
 outside of the scope of your blueprint.

 Salvatore


 On 10 July 2013 15:49, Eugene Nikanorov enikano...@mirantis.com wrote:

 Folks,

 I have put the initial in-memory implementation of service providers up for
 review.

 One of the 'hacks' I had to do is decoupling RouterServiceProviderBinding
 from the service provider.
 I've just removed foreign key to ServiceProviders table.
 I think this needs to be fixed in the patch which introduces the code
 which uses it (like the one published by Salvatore)

 Thanks,
 Eugene.



 On Wed, Jul 10, 2013 at 2:33 PM, Akihiro MOTOKI amot...@gmail.comwrote:

 Hi,

 Sorry for late cut-in,

 I agree that dynamic configuration through the API is not easy to
 implement.
 At now, conf-based approach without database (option-1) looks the best
 way unless we
 don't have needs for dynamic configuration thru the API.

  1) From logic perspective service provider could be referenced by
 (service_type, name) as it's unique primary key.
   2) From data normalization perspective it's better (and more
 convenient) to have an unique ID in resource provider model.
  Obviously having ID works for DB implementation and doesn't work for
 in-memory implementation.
  In other words, we can't use ID if we go with in-memory implementation.

 I think ID is not necessarily required.
 In DB approach, we can specify multiple fields as a primary key.
 In in-memory approach, we can use a json-serialized string as a key
 like json.dumps({'type': 'xxx', 'name': 'yyy'}).

 In typical use cases,
 (1) neutron-server retrieves a provider from assocation table
 (which is usually implemented on database)
 (2) neutron-server determines a driver from a provider.
 In this case, dict-based approach does enough I believe.
 Is there any other typical access pattern?

  3) From data modelling perspective it's better to have ID in service
 provider model as referencing models will be simpler and easier to maintain.

 As long as we don't have more keys than type and name to identify
 providers,
 (type, name) combination looks simple enough.

 service provider is similar to flavor in nova at some point.
 flavor represents a combination of many fields.
 If there is a possible case where a provider definition have more unique
 keys, ID approach makes sense much.

  4) From CLI perspective it's more convenient if resource has ID, it's
 a common way of specifying resource.

 API perspective for an association from a resource to a provider,
 a type is determined from a resource and what we need to specify is
 only name.
 As long as we can identify a provider by (type, name),
 there is no difference between using ID and using name.

 Regarding a possible demerit without ID, it is difficult to specify a
 specific provider to show its detail.
 At now a provider has only a couple of visible field (type, name,
 default)
 through API, so list-service-providers does enough and
 show-service-provider
 does not provide more. (It just provides API consistency with other
 resources.)

  5) From user perspective it's more convenient to specify the name of
 service provider.
  But that is usually solved either by Horizon or by cli, like it's done
 for networks/subnets where name of the object is specified.
 

 Thanks,
 Akihiro



 2013/7/10 Eugene Nikanorov enikano...@mirantis.com

 Ok, having so much pressure on db implementation, I think I'm just
 going to post in-memory implementation and we'll decide if it will fit our
 needs.

 Thanks,
 Eugene.


 On Wed, Jul 10, 2013 at 5:56 AM, Nachi Ueno na...@ntti3.com wrote:

 Hi Mark


 2013/7/9 Mark McClain mark.mccl...@dreamhost.com:
 
  On Jul 9, 2013, at 5:37 PM, Nachi Ueno na...@ntti3.com wrote:
 
  We have two suboption for db api based solution
 
  Option4. REST API + DB with Preload with Conf
 
 https://docs.google.com/presentation/d/1v0nLTEsFOwWeYpYjpw4qe3QHB5lLZEE_b0TmmR5b7ic/edit#slide=id.gf14b7b30_00
 
  so IMO, we can drop  option3.
 
  I believe option4 is easy to implement.
 
 
  I'm not onboard with option 4 either.  At the last summit, 

Re: [openstack-dev] [cinder] Bug Squash Day

2013-07-10 Thread thingee
Just a reminder: today is the Cinder bug squash day!

https://wiki.openstack.org/wiki/Cinder/BugSquashingDay/20130710


-Mike Perez!


On Fri, Jul 5, 2013 at 9:49 AM, John Griffith
john.griff...@solidfire.comwrote:

 Hey Everyone,

 So there have been a number of new bug reports coming in the last few days
 and I was thinking after the success of the squash day Avishay set up the
 other week for Cinder, next week might be a great time for a day to push on
 bugs before we cut H2.

 I'm thinking Wed July 10, any and all help is greatly appreciated and we
 can all sync up via IRC in #openstack-cinder.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Nachi Ueno
Hi folks

I would like to ask the review criteria in the community.

Should we use exceptions for non-exceptional cases when we can use
 parameter checking instead?

Example 1: Default value for an array index

try:
    value = list[5]
except IndexError:
    value = 'default_value'

This can also be written as:

 list_a[3] if len(list_a) > 3 else 'default_value'

"Ask for forgiveness, not permission" is one way in Python;
on the other hand, Google's Python style guide says:
-
Minimize the amount of code in a try/except block. The larger the body
of the try, the more likely that an exception will be raised by a line
of code that you didn't expect to raise an exception. In those cases,
the try/except block hides a real error.
---
http://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Exceptions

Personally, I prefer not to use exceptions for such cases.

Best
Nachi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Embrane's Neutron Plugin

2013-07-10 Thread Salvatore Orlando
Hi Ivar,

thanks for your interest in Openstack and Neutron.
A few replies inline; I hope you'll find them useful.

Salvatore


On 10 July 2013 02:40, Ivar Lazzaro i...@embrane.com wrote:

   Hi,

 My name is Ivar Lazzaro, I’m an Italian developer currently employed at
 Embrane.


  Embrane provides L3 to L7 network services, (including routing, load
 balancing, SSL offloads, firewalls and IPsec VPNs), and we have developed a
 Neutron plugin that we would like to share and contribute to Openstack[1].


That would be great!
the current policy for Neutron plugins is that each plugin should have a
member of the core team which will act as a 'maintainer'; this figure is
not required to be an 'expert' of the specific plugin technology. His
duties are mainly those of keeping track of bugs/blueprints, review code,
and interact with the developers.

 


  My experience with OpenStack started with the Essex edition, which I
 deployed and managed as a user. Embrane leverages any existing form of L2
 to offer connectivity at L3 and above, and therefore our interest in
 contributing to OpenStack grew as L3 (and above) capabilities started to be
 added to Neutron, leading to the realization of a Neutron plugin.


  I'd like to talk about it with you before blindly requesting a review,
 and get your feedback and advice in order to improve it at the most!


Sounds a very sensible approach, since we're already halfway through the
release cycle, and finding resources for reviewing code might not be the
easiest thing.

 


  The idea is to provide L3 connectivity in Openstack through our software
 platform, called heleos, obviously using a plugin to follow the Neutron
 workflow. Since we don't provide L2 connectivity (which is part of the core
 APIs as well), our plugin is going to work together with one of the
 existing ones, which will manage L2 connectivity and share all the information
 needed.


  Therefore, whenever a user chooses to use Embrane's Neutron plugin, he
 specifies one of the supported existing plugins in the configuration file,
 and L2 connectivity will be provided by that specific choice.

 At the current state, for instance, our plugin is able to work with the
 OpenVSwitch's so that:


  -create_network() will call OVS plugin;

 -create_port() will call OVS plugin;

 -create_router() will call Embrane's, which will use knowledge from the OVS
 plugin in order to provide L3 connectivity.


 It looks like your plugin is pretty much a derivative of the OVS plugin,
which replaces the L3 agent with Embrane's heleos.
I think this approach makes some sense, but in the medium/long term you
would like to be able to run your plugin on top of any L2 plugin.

There is a Neutron blueprint for that, and that is
https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routing-plugin
That blueprint is unfortunately a bit stuck at the moment.
It would be good for the whole community to understand whether we can
actually still merge it during the Havana timeframe.


  and so forth...

 The calls can be asynchronous (using Router status in a way similar to
 the LBaaS extension).


  Without going too much into details, that's all about the L3 plugin that
 we would like to share. We are also interested in sharing a LBaaS service
 plugin, but I'll do a different blueprint for that one.


I think it won't hurt to push your code as a draft on gerrit.

 

 All your feedback and comments are welcome :)


  Thanks,

 Ivar.


  [1] https://blueprints.launchpad.net/neutron/+spec/embrane-neutron-plugin

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Dolph Mathews
On Wed, Jul 10, 2013 at 1:01 PM, Nachi Ueno na...@ntti3.com wrote:

 HI folks

 I would like to ask the review criteria in the community.

 Should we use exception for non-exceptional cases when we can use
  parameter checking?

 Example1:  Default value for array index

 try:
     value = list_a[3]
 except IndexError:
     value = 'default_value'


I can't get past this specific example... how often do you find yourself
needing to do this, exactly? Generally when you use a list you either FIFO
/ LIFO or iterate through the whole thing in some fashion.

I'd be tempted to write it as dict(enumerate(my_list)).get(3,
'default_value') just because you're treating it like a mapping anyway.
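
For illustration (hypothetical list contents):

  >>> my_list = ['a', 'b']
  >>> dict(enumerate(my_list)).get(3, 'default_value')
  'default_value'
  >>> dict(enumerate(my_list)).get(1, 'default_value')
  'b'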



 This can also be written as,

  list_a[3] if len(list_a) > 3 else 'default_value'

 "ask for forgiveness, not permission" is one way in python,
 however, on the other hand, the google python code style guide says,
 -
 Minimize the amount of code in a try/except block. The larger the body
 of the try, the more likely that an exception will be raised by a line
 of code that you didn't expect to raise an exception. In those cases,
 the try/except block hides a real error.
 ---
 http://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Exceptions


+1 for this, but it's not really intended to provide an answer your
question of approach.




 Personally, I prefer not to use exception for such cases.

 Best
 Nachi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Mark McLoughlin
On Wed, 2013-07-10 at 11:01 -0700, Nachi Ueno wrote:
 HI folks
 
 I would like to ask the review criteria in the community.
 
 Should we use exception for non-exceptional cases when we can use
  parameter checking?
 
 Example1:  Default value for array index
 
 try:
     value = list_a[3]
 except IndexError:
     value = 'default_value'
 
 This can also be written as,

  list_a[3] if len(list_a) > 3 else 'default_value'
 
 "ask for forgiveness, not permission" is one way in python,
 however, on the other hand, the google python code style guide says,
 -
 Minimize the amount of code in a try/except block. The larger the body
 of the try, the more likely that an exception will be raised by a line
 of code that you didn't expect to raise an exception. In those cases,
 the try/except block hides a real error.
 ---
 http://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Exceptions

I don't think this statement contradicts the intent of EAFP.

 Personally, I prefer not to use exception for such cases.

My instinct is the same, but EAFP does seem to be the python way. There
are times I can tolerate the EAFP approach but, even then, I generally
think LBYL is cleaner.

I can live with something like this:

  try:
  return obj.foo
  except AttributeError:
  pass

but this is obviously broken:

  try:
  return self.do_something(obj.foo)
  except AttributeError:
  pass

since AttributeError will mask a typo with the do_something() call or an
AttributeError raised from inside do_something()

But I fail to see what's wrong with this:

  if hasattr(obj, 'foo'):
  return obj.foo

Cheers,
Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Embrane's Neutron Plugin

2013-07-10 Thread Kyle Mestery (kmestery)
On Jul 10, 2013, at 1:17 PM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Hi Ivar,
 
 thanks for your interest in Openstack and Neutron.
 A few replies inline; I hope you'll find them useful.
 
 Salvatore
 
 
 On 10 July 2013 02:40, Ivar Lazzaro i...@embrane.com wrote:
 Hi,
 
 My name is Ivar Lazzaro, I’m an Italian developer currently employed at 
 Embrane.
 
 
 
 Embrane provides L3 to L7 network services, (including routing, load 
 balancing, SSL offloads, firewalls and IPsec VPNs), and we have developed a 
 Neutron plugin that we would like to share and contribute to Openstack[1].
 
 
 That would be great!
 the current policy for Neutron plugins is that each plugin should have a 
 member of the core team which will act as a 'maintainer'; this figure is not 
 required to be an 'expert' of the specific plugin technology. His duties are 
 mainly those of keeping track of bugs/blueprints, reviewing code, and 
 interacting with the developers. 
 
 
 
 My experience with OpenStack started with the Essex edition, which I deployed 
 and managed as a user. Embrane leverages any existing form of L2 to offer 
 connectivity at L3 and above, and therefore our interest in contributing to 
 OpenStack grew as L3 (and above) capabilities started to be added to Neutron, 
 leading to the realization of a Neutron plugin.
 
 
 
 I'd like to talk about it with you before blindly requesting a review, and 
 get your feedback and advice in order to improve it at the most!
 
 
 Sounds like a very sensible approach, since we're already halfway through the 
 release cycle, and finding resources for reviewing code might not be the 
 easiest thing. 
 
 
 
 The idea is to provide L3 connectivity in Openstack through our software 
 platform, called heleos, obviously using a plugin to follow the Neutron 
 workflow. Since we don't provide L2 connectivity (which is part of the core 
 APIs as well), our plugin is going to work together with one of the existing 
 plugins, which will manage L2 connectivity and share all the information needed.
 
 
 
 Therefore, whenever a user chooses to use Embrane's Neutron plugin, he 
 specifies one of the supported existing plugins in the configuration file, 
 and L2 connectivity will be provided by that specific choice.
 
 At the current state, for instance, our plugin is able to work with the 
 OpenVSwitch's so that:
 
 
 
 -create_network() will call OVS plugin;
 
 -create_port() will call OVS plugin;
 
 -create_router() will call Embrane's, which will use knowledge from the OVS 
 plugin in order to provide L3 connectivity.
 
 
 
 It looks like your plugin is pretty much a derivative of the OVS plugin, 
 which replaces the L3 agent with Embrane's heleos.
 I think this approach makes some sense, but in the medium/long term you would 
 like to be able to run your plugin on top of any L2 plugin.
 
 There is a Neutron blueprint for that, and that is 
 https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routing-plugin
 That blueprint is unfortunately a bit stuck at the moment.
 It would be good for the whole community to understand whether we can 
 actually still merge it during the Havana timeframe.
  
I believe Bob is going to resurrect this work now, though he's on PTO at the 
moment. Expect a new version of this relatively soon, Salvatore. Also, I wanted 
to let Ivar know that this work Bob is doing would allow the heleos platform to 
work with any L2 plugin.

Thanks,
Kyle

 
 and so forth...
 
 The calls can be asynchronous (using Router status in a way similar to the 
 LBaaS extension).
 
 
 
 Without going too much into details, that's all about the L3 plugin that we 
 would like to share. We are also interested in sharing a LBaaS service 
 plugin, but I'll do a different blueprint for that one.
 
 
 I think it won't hurt to push your code as a draft on gerrit. 
 
 All your feedback and comments are welcome :)
 
 
 
 Thanks,
 
 Ivar.
 
 
 [1] https://blueprints.launchpad.net/neutron/+spec/embrane-neutron-plugin
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Nachi Ueno
Hi Mark

Thank you for your answering


 I don't think this statement contradicts the intent of EAFP.

I got it :)

 Personally, I prefer not to use exception for such cases.

 My instinct is the same, but EAFP does seem to be the python way. There
 are times I can tolerate the EAFP approach but, even then, I generally
 think LBYL is cleaner.

OK, so I'm worried about the case where one reviewer says to use LBYL and
another says to use EAFP.
So it would be great if we could have some criteria for this.

 I can live with something like this:

   try:
       return obj.foo
   except AttributeError:
       pass

 but this is obviously broken:

   try:
       return self.do_something(obj.foo)
   except AttributeError:
       pass

 since AttributeError will mask a typo with the do_something() call or an
 AttributeError raised from inside do_something()

 But I fail to see what's wrong with this:

   if hasattr(obj, 'foo'):
       return obj.foo

 Cheers,
 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Thomas Hervé
On Wed, Jul 10, 2013 at 8:32 PM, Mark McLoughlin mar...@redhat.com wrote:

 On Wed, 2013-07-10 at 11:01 -0700, Nachi Ueno wrote:
  Personally, I prefer not to use exception for such cases.


The key here is personally. I don't think we have to agree on all style
issues.



 My instinct is the same, but EAFP does seem to be the python way. There
 are times I can tolerate the EAFP approach but, even then, I generally
 think LBYL is cleaner.

 I can live with something like this:

   try:
       return obj.foo
   except AttributeError:
       pass

 but this is obviously broken:

   try:
       return self.do_something(obj.foo)
   except AttributeError:
       pass

 since AttributeError will mask a typo with the do_something() call or an
 AttributeError raised from inside do_something()

 But I fail to see what's wrong with this:

   if hasattr(obj, 'foo'):
       return obj.foo


hasattr is a bit dangerous as it catches more exceptions than it needs to.
See for example
http://stackoverflow.com/questions/903130/hasattr-vs-try-except-block-to-deal-with-non-existent-attributes/16186050#16186050
for an explanation.
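
For illustration, a minimal sketch of the pitfall under Python 2
semantics (the class and property are hypothetical):

    class Foo(object):
        @property
        def bar(self):
            raise ValueError('real bug')

    # On Python 2, hasattr() swallows *any* exception raised while
    # evaluating the attribute and simply reports False, hiding the
    # bug; catching AttributeError explicitly would let it propagate.
    print(hasattr(Foo(), 'bar'))   # prints False on Python 2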

-- 
Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Disable keystone authorization in ceilometer-api?

2013-07-10 Thread Pendergrass, Eric
Does anyone know how to disable keystone authorization in ceilometer-api?
Is there a ceilometer.conf option for this?

Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Nachi Ueno
Hi Thomas
Thank you for your reply.


 The key here is personally. I don't think we have to agree on all style
 issues.

"personally" is why I'm asking the community.
IMO, we should agree on style issues as much as we can (e.g. pep8, flake8)
for more consistent review.
However, I also agree it is hard to agree on all style issues, and sometimes
it is case by case.



 My instinct is the same, but EAFP does seem to be the python way. There
 are times I can tolerate the EAFP approach but, even then, I generally
 think LBYL is cleaner.

 I can live with something like this:

   try:
       return obj.foo
   except AttributeError:
       pass

 but this is obviously broken:

   try:
       return self.do_something(obj.foo)
   except AttributeError:
       pass

 since AttributeError will mask a typo with the do_something() call or an
 AttributeError raised from inside do_something()

 But I fail to see what's wrong with this:

   if hasattr(obj, 'foo'):
       return obj.foo


 hasattr is a bit dangerous as it catches more exceptions than it needs too.
 See for example
 http://stackoverflow.com/questions/903130/hasattr-vs-try-except-block-to-deal-with-non-existent-attributes/16186050#16186050
 for an explanation.

Thank you for sharing this.
I'll check whether the obj overrides __getattr__ or not during review.
# BTW, use of __getattr__ should also be minimized.

Thanks
Nachi.

 --
 Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread David Ripton

On 07/10/2013 02:01 PM, Nachi Ueno wrote:

HI folks

I would like to ask the review criteria in the community.

Should we use exception for non-exceptional cases when we can use
  parameter checking?

Example1:  Default value for array index

try:
    value = list_a[3]
except IndexError:
    value = 'default_value'

This can also be written as,

  list_a[3] if len(list_a) > 3 else 'default_value'


Both of these are fine.  Neither deserves to be banned.

But LBYL is often naive in the face of concurrency.  Just because 
something was true a microsecond ago doesn't mean it's still true. 
Exceptions are often more robust.
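
A minimal sketch of such a race (the path is hypothetical):

    import os

    path = '/tmp/example.txt'

    # LBYL: the file can vanish between the check and the open
    # (a time-of-check/time-of-use race), so this can still raise.
    if os.path.exists(path):
        with open(path) as f:
            data = f.read()

    # EAFP: a single attempt; a concurrent delete is handled the
    # same way as a file that never existed.
    try:
        with open(path) as f:
            data = f.read()
    except IOError:
        data = ''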


--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Agenda for tomorrow's meeting at 1900 UTC

2013-07-10 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Thursdays, 1900 UTC.

The next meeting is tomorrow, July 11.

Everyone is welcome. However, please take a minute to review the wiki
before attending for the first time:

  http://wiki.openstack.org/marconi

## Agenda (60 mins): ##

* Review actions from last time
* Marconi next steps (prioritize the following)
  * Bugs
  * ConnectionError handling, incl. while iterating over results
  * Input validation
  * Performance tuning
  * JSON home document
  * Error responses
  * Error codes
  * Unit test code coverage
  * SQLAlchemy backend
  * DevOps stuff (transaction IDs, logging, rate limiting, etc.)
* QA cluster remaining TODOs
* Rebooting python-marconiclient
* Open discussion (time permitting)


See also:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

Kurt (kgriffs)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Doug Hellmann
On Wed, Jul 10, 2013 at 3:57 PM, David Ripton drip...@redhat.com wrote:

 On 07/10/2013 02:01 PM, Nachi Ueno wrote:

 HI folks

 I would like to ask the review criteria in the community.

 Should we use exception for non-exceptional cases when we can use
   parameter checking?

 Example1:  Default value for array index

 try:
     value = list_a[3]
 except IndexError:
     value = 'default_value'

 This can also be written as,

   list_a[3] if len(list_a) > 3 else 'default_value'


 Both of these are fine.  Neither deserves to be banned.

 But LBYL is often naive in the face of concurrency.  Just because
 something was true a microsecond ago doesn't mean it's still true.
 Exceptions are often more robust.


getattr() takes a default and, as it is implemented in C, is thread-safe.
So:

  value = getattr(my_obj, 'might_not_be_there', 'default')

Of course, it's probably better to make sure you've always got the same
type of object in the first place but sometimes the attributes change
across versions of libraries.

For accessing elements of a sequence that may be too short,
itertools.chain() and itertools.islice() are useful.

>>> import itertools
>>> vals1 = ['a', 'b']
>>> a, b, c = itertools.islice(itertools.chain(vals1, ['c']), 3)
>>> a, b, c
('a', 'b', 'c')
>>> vals2 = ['a', 'b', 'd']
>>> a, b, c = itertools.islice(itertools.chain(vals2, ['c']), 3)
>>> a, b, c
('a', 'b', 'd')

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Review] Use of exception for non-exceptional cases

2013-07-10 Thread Dolph Mathews
On Wed, Jul 10, 2013 at 3:30 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Jul 10, 2013 at 3:57 PM, David Ripton drip...@redhat.com wrote:

 On 07/10/2013 02:01 PM, Nachi Ueno wrote:

 HI folks

 I would like to ask the review criteria in the community.

 Should we use exception for non-exceptional cases when we can use
   parameter checking?

 Example1:  Default value for array index

 try:
     value = list_a[3]
 except IndexError:
     value = 'default_value'

 This can also be written as,

   list_a[3] if len(list_a) > 3 else 'default_value'


 Both of these are fine.  Neither deserves to be banned.

 But LBYL is often naive in the face of concurrency.  Just because
 something was true a microsecond ago doesn't mean it's still true.
 Exceptions are often more robust.


 getattr() takes a default and, as it is implemented in C, is thread-safe.
 So:

   value = getattr(my_obj, 'might_not_be_there', 'default')

 Of course, it's probably better to make sure you've always got the same
 type of object in the first place but sometimes the attributes change
 across versions of libraries.

 For accessing elements of a sequence that may be too short,
 itertools.chain() and itertools.islice() are useful.

  >>> import itertools
  >>> vals1 = ['a', 'b']
  >>> a, b, c = itertools.islice(itertools.chain(vals1, ['c']), 3)
  >>> a, b, c
 ('a', 'b', 'c')
  >>> vals2 = ['a', 'b', 'd']
  >>> a, b, c = itertools.islice(itertools.chain(vals2, ['c']), 3)
  >>> a, b, c
 ('a', 'b', 'd')


++ every time I look at itertools it's doing something clever



 Doug


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna-all] merging savanna-extra elements

2013-07-10 Thread Russell Bryant
On 07/10/2013 03:56 PM, Matthew Farrellee wrote:
 Ivan,

snip

No comments on the email itself ... but I see that there's a separate
mailing list (savanna-all).  If the intention is for this project to
eventually apply for incubation, I would encourage discussions on
openstack-dev instead of somewhere else to provide more visibility into
what's going on.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova API extensions NOT to be ported to v3

2013-07-10 Thread Russell Bryant
On 07/10/2013 01:04 PM, Joe Gordon wrote:
 
 
 
 On Fri, Jun 28, 2013 at 1:31 PM, Christopher Yeoh cbky...@gmail.com wrote:
 
 Hi,
 
 The following is a list of API extensions for which there are no
 plans to port. Please shout if you think any of them needs to be!
 
 baremetal_nodes.py
 os_networks.py
 networks_associate.py
 os_tenant_networks.py
 virtual_interfaces.py
 createserverext.py
 floating_ip_dns.py
 floating_ip_pools.py
 floating_ips_bulk.py
 floating_ips.py
 cloudpipe.py
 cloudpipe_update.py
 volumes.py
 
 
 So this is a little late, but what about the fping extension? It was
 introduced in https://review.openstack.org/#/c/12133/.  But now that we
 have Heat, doesn't all instance monitoring belong in Heat?

Removing it sounds good to me.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-10 Thread Sean Dague
Yesterday in the very exciting run around to figure out why the gate was 
broken, we realized something interesting. Because of the way the gate 
processes pip requirements (one project at a time), on a current gate run 
we actually install and uninstall python-keystoneclient 4 times in a 
normal run, flipping back and forth from HEAD to 0.2.5.


http://paste.openstack.org/show/39880/ - shows what's going on

The net of this means that if any of the projects specify a capped 
client, it has the potential for preventing that client from being 
tested in the gate. This is very possibly part of the reason we ended up 
with a broken python-keystoneclient 0.3.0 released.


I think we need to get strict on projects and prevent them from capping 
their client requirements. That will also put burden on clients that 
they don't break backwards compatibility (which I think was a goal 
regardless). However there is probably going to be a bit of pain getting 
from where we are today, to this world.
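
To illustrate (version numbers hypothetical), the difference is a single
cap in a project's requirements file:

    # capped: the gate quietly falls back to the old release, so the
    # client's HEAD is never exercised against this project
    python-keystoneclient>=0.2.0,<0.3.0

    # uncapped: the gate installs and tests the newest release
    python-keystoneclient>=0.2.0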


This is both a heads up, and a time for discussion, before we start 
figuring out how to make this better in the gate.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-10 Thread Dolph Mathews
On Wednesday, July 10, 2013, Sean Dague wrote:

 Yesterday in the very exciting run around to figure out why the gate was
 broken, we realized something interesting. Because of the way the gate
 processes pip requirements (one project at a time), on a current gate run we
 actually install and uninstall python-keystoneclient 4 times in a normal
 run, flipping back and forth from HEAD to 0.2.5.

 http://paste.openstack.org/show/39880/ - shows what's going on

 The net of this means that if any of the projects specify a capped client,
 it has the potential for preventing that client from being tested in the
 gate. This is very possibly part of the reason we ended up with a broken
 python-keystoneclient 0.3.0 released.


 I think we need to get strict on projects and prevent them from capping
 their client requirements. That will also put burden on clients that they
 don't break backwards compatibility (which I think was a goal regardless).
 However there is probably going to be a bit of pain getting from where we
 are today, to this world.


Thanks for investigating the underlying issue! I think the same
policy should apply a bit further to any code we develop and consume
ourselves as a community (oslo.config, etc). I have no doubt that's the
standard we strive for, but it's all too easy to throw a cap into a
requirements file and forget about it.


 This is both a heads up, and a time for discussion, before we start
 figuring out how to make this better in the gate.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The danger of capping python-*clients in core projects, and forbidding it in the future

2013-07-10 Thread Joshua Harlow
A useful tool that anvil has built into it (thanks to aababilov). It might
be useful in this situation.

https://github.com/stackforge/anvil/tree/master/tools#multipip

It might be useful to use said tool (or a derivative) to detect this kind
of version conflict earlier rather than later??

It is used in anvil to detect the same kind of conflicts so that, when
multiple packages of openstack (say, of a given release) are installed
together, they will all play nicely with one another (at least with
respect to pip requirement versioning).

-Josh

On 7/10/13 2:42 PM, Sean Dague s...@dague.net wrote:

Yesterday in the very exciting run around to figure out why the gate was
broken, we realized something interesting. Because of the way the gate
processes pip requirements (one project at a time), on a current gate run
we actually install and uninstall python-keystoneclient 4 times in a
normal run, flipping back and forth from HEAD to 0.2.5.

http://paste.openstack.org/show/39880/ - shows what's going on

The net of this means that if any of the projects specify a capped
client, it has the potential for preventing that client from being
tested in the gate. This is very possibly part of the reason we ended up
with a broken python-keystoneclient 0.3.0 released.

I think we need to get strict on projects and prevent them from capping
their client requirements. That will also put burden on clients that
they don't break backwards compatibility (which I think was a goal
regardless). However there is probably going to be a bit of pain getting
from where we are today, to this world.

This is both a heads up, and a time for discussion, before we start
figuring out how to make this better in the gate.

   -Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Chalenges with highly available service VMs

2013-07-10 Thread Vishvananda Ishaya

On Jul 4, 2013, at 8:26 PM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 On 4 July 2013 23:42, Robert Collins robe...@robertcollins.net wrote:
 Seems like a tweak would be to identify virtual IPs as separate to the
 primary IP on a port:
 you don't need to permit spoofing of the actual host IP for each host in
 the HA cluster; you just need to permit spoofing of the virtual IP. This
 would be safer than disabling the spoofing rules, and avoid configuration
 errors such as setting the primary IP of one node in the cluster to be a
 virtual IP on another node - neutron would reject that, as it would already
 know that IP as a virtual IP.
 
 With apologies for diverting the topic somewhat, but for the use cases
 I have, I would actually like to be able to disable the antispoofing
 in its entirety.
 
 It used to be essential back when we had nova-network and all tenants
 ended up on one network.  It became less useful when tenants could
 create their own networks and could use them as they saw fit.
 
 It's still got its uses - for instance, it's nice that the metadata
 server can be sure that a request is really coming from where it
 claims - but I would very much like it to be possible to, as an
 option, explicitly disable antispoof - perhaps on a per-network basis
 at network creation time - and I think we could do this without
 breaking the security model beyond all hope of usefulness.

Per network and per port makes sense.

After all, this is conceptually the same as enabling or disabling
port security on your switch.

Vish

 -- 
 Ian.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Swift deep dive code overview of DiskFile object refactoring - G+ Hangout, Wed. July 17th, 3 PM EDT

2013-07-10 Thread Peter Portante
[Calendar invitation: OpenStack Swift deep dive code overview of DiskFile
object refactoring; G+ Hangouts, Wed Jul 17, 2013, 3:00-4:00 PM EDT
(19:00-20:00 UTC).]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Invitation: OpenStack Swift deep dive code overview of DiskFile objec... @ Wed Jul 17, 2013 3pm - 4pm (peter.a.porta...@gmail.com)

2013-07-10 Thread Peter Portante
[Calendar invitation (invite.ics attachment): same OpenStack Swift DiskFile
refactoring session; G+ Hangouts, Wed Jul 17, 2013, 3:00-4:00 PM EDT
(19:00-20:00 UTC).]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Best way to do something MySQL-specific?

2013-07-10 Thread Adam Young

On 07/09/2013 07:33 PM, Jay Pipes wrote:

On 07/08/2013 05:18 PM, Sean Dague wrote:

On 07/01/2013 01:35 PM, Clint Byrum wrote:

The way the new keystone-manage command token_flush works right now
is quite broken by MySQL and InnoDB's gap locking behavior:

https://bugs.launchpad.net/1188378

Presumably other SQL databases like PostgreSQL will have similar 
problems

with doing massive deletes, but I am less familiar with them.

I am trying to solve this in keystone, and my first attempt is here:

https://review.openstack.org/#/c/32044/

However, MySQL does not support using LIMIT in a sub-query that
is feeding an IN() clause, so that approach will not work. Likewise,
sqlalchemy does not support the MySQL specific extension to DELETE 
which

allows it to have a LIMIT clause.

Now, I can do some hacky things, like just deleting all of the expired
tokens from the oldest single second, but that could also potentially
be millions of tokens, and thus, millions of gaps to lock.

So, there is just not one way to work for all databases, and we have to
have a special mode for MySQL.

I was wondering if anybody has suggestions and/or examples of how to do
that with sqlalchemy.


Honestly, my answer is typically to ask Jay, he understands a lot of the
ways to get SQLA to do the right thing in mysql.


LOL, /me blushes.

In this case, I'd propose something like this, which should work fine 
for any database:


cutoff = timeutils.utcnow() - datetime.timedelta(seconds=60)  # one minute ago...

# DELETE in 500 record chunks
q = session.query(
    TokenModel.id).filter(
        TokenModel.expires < cutoff).limit(500)
while True:
    results = q.all()
    if len(results):
        ids_to_delete = [r[0] for r in results]
        session.query(TokenModel).filter(
            TokenModel.id.in_(ids_to_delete)).delete()
    else:
        break

Code not tested, use with caution, YMMV, etc etc...



It seems to me that it would still have the problem described in the 
original post.  Even if you are only deleting 500 at a time, all of the 
tokens from the original query will be locked...but I guess that the 
add new behavior will only have to check against a subset of the 
tokens in the database.


Are we really generating so many tokens that deleting the expired tokens 
once per second is prohibitive?  That points to something else being wrong.




Best,
-jay




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] How to write unit tests for db methods?

2013-07-10 Thread Adam Young

On 07/10/2013 06:56 AM, Akshat Kakkar wrote:

I have added 2 tables to keystone.


This should be done in a migration, and should be tested using the 
test_db_update.py file.


I have methods which do the read/write/update/delete of records in 
these tables.

Please explain.  We are not doing direct SQL, but rather using SQLAlchemy.


I want to write unit test for all this. These methods of mine inherit 
from keystone.common.sql and hence any call that these methods will 
make will go to the db returned by keystone.common.sql when creating a 
session. For writing a unit test this db should be a test db and not 
the production db. So, how can I have a session of test db? or is 
there altogether a different way of writing the unit test.

See test_backend_sql.py




*From:* Dolph Mathews dolph.math...@gmail.com
*To:* Akshat Kakkar the_aks...@yahoo.co.in; OpenStack Development 
Mailing List openstack-dev@lists.openstack.org

*Sent:* Tuesday, 9 July 2013 7:39 PM
*Subject:* Re: [openstack-dev] [Keystone] How to write unit tests for 
db methods?


I'm assuming you're referring to testing backend drivers as opposed to 
database migrations (tests/test_sql_upgrade.py).


Backend agnostic tests land in tests/test_backend.py. Backend-specific 
tests, overrides, etc belong in tests/test_backend_sql.py, 
tests/test_backend_kvs.py, etc.


Generally, you can't assume that keystone is backed by a database, 
however, as it's entirely possible to deploy without one.



 On Tue, Jul 9, 2013 at 10:55 AM, Akshat Kakkar the_aks...@yahoo.co.in wrote:


How to write unit tests in keystone for the methods which are
directly calling the backend db? I understand that for testing
purpose it should be a *fake db*, but how to do that in keystone?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Need help writing gate tests

2013-07-10 Thread Adam Young
I want to write 3 new Jenkins gate tests:   Run the Keystone unit tests 
against


1. A live LDAP server
2. MySQL
3. Postgresql

Right now, we know that the unit tests will fail against the live DBs, 
so we want those two to be non-voting.  The live LDAP one should use the 
schema as set up by devstack, and should be voting (it can be non-voting to 
start).


where do I start?  Do I need to do this in 
https://github.com/openstack-infra/config or 
http://ci.openstack.org/devstack-gate.html?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Best way to do something MySQL-specific?

2013-07-10 Thread Clint Byrum
Excerpts from Adam Young's message of 2013-07-10 19:14:45 -0700:
 On 07/09/2013 09:51 PM, Robert Collins wrote:
  PostgreSQL doesn't do gap locks, but instead you have to deal with 
  http://wiki.postgresql.org/wiki/SSI : the transaction that is deleting 
  1M rows, for instance, will have a query that may return rows which 
  another transaction is adding; if so one of the two will be rolled 
  back. This is in many ways equivalent from the point of view of 
  writing good SQL that will work well on both systems.
 
 This is not a problem with token cleanup path, though. Tokens are 
 cleaned up based on expiry time, a value that is written and never 
 changed.  Tokens should never be removed from the database until their 
 expiry has been hit, or valid tokens will be denied.
 

Note that when the memory limit is reached, memcached will delete the
least recently used key in a slab that is full, whether it has reached
its TTL or not. If backends have to guarantee tokens will live for the
life of the token, memcached is inappropriate without ensuring it never
fills up or using the -M flag, which will then break a basic assumption
that items over TTL will be removed to free up space.

I think at some point clients have to expect tokens that should still
be valid might not be, and look at invalid token as a re-triable error.
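
A minimal sketch of that client-side behavior (the exception and the
client methods are hypothetical stand-ins, not a real client API):

    class Unauthorized(Exception):
        """Hypothetical stand-in for a client's invalid-token error."""

    def call_with_token_retry(client, request):
        # First attempt with the cached token; if the server rejects
        # it, assume early eviction/expiry, re-authenticate, and
        # retry once.
        try:
            return client.send(request)
        except Unauthorized:
            client.refresh_token()
            return client.send(request)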

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Best way to do something MySQL-specific?

2013-07-10 Thread Clint Byrum
Excerpts from Adam Young's message of 2013-07-10 19:17:24 -0700:
 On 07/09/2013 07:33 PM, Jay Pipes wrote:
  On 07/08/2013 05:18 PM, Sean Dague wrote:
  On 07/01/2013 01:35 PM, Clint Byrum wrote:
  The way the new keystone-manage command token_flush works right now
  is quite broken by MySQL and InnoDB's gap locking behavior:
 
  https://bugs.launchpad.net/1188378
 
  Presumably other SQL databases like PostgreSQL will have similar 
  problems
  with doing massive deletes, but I am less familiar with them.
 
  I am trying to solve this in keystone, and my first attempt is here:
 
  https://review.openstack.org/#/c/32044/
 
  However, MySQL does not support using LIMIT in a sub-query that
  is feeding an IN() clause, so that approach will not work. Likewise,
  sqlalchemy does not support the MySQL specific extension to DELETE 
  which
  allows it to have a LIMIT clause.
 
  Now, I can do some hacky things, like just deleting all of the expired
  tokens from the oldest single second, but that could also potentially
  be millions of tokens, and thus, millions of gaps to lock.
 
  So, there is just not one way to work for all databases, and we have to
  have a special mode for MySQL.
 
  I was wondering if anybody has suggestions and/or examples of how to do
  that with sqlalchemy.
 
  Honestly, my answer is typically to ask Jay, he understands a lot of the
  ways to get SQLA to do the right thing in mysql.
 
  LOL, /me blushes.
 
  In this case, I'd propose something like this, which should work fine 
  for any database:
 
  cutoff = timeutils.utcnow() - datetime.timedelta(seconds=60)  # one minute ago...
 
  # DELETE in 500 record chunks
  q = session.query(
      TokenModel.id).filter(
          TokenModel.expires < cutoff).limit(500)
  while True:
      results = q.all()
      if len(results):
          ids_to_delete = [r[0] for r in results]
          session.query(TokenModel).filter(
              TokenModel.id.in_(ids_to_delete)).delete()
      else:
          break
 
  Code not tested, use with caution, YMMV, etc etc...
 
 
 It seems to me that it would still have the problem described in the 
 original post.  Even if you are only deleting 500 at a time, all of the 
 tokens from the original query will be locked...but I guess that the 
 add new behavior will only have to check against a subset of the 
 tokens in the database.
 

Because it is only a read, only the index being used must be locked,
and only the gaps in the range that were seen. So token.expires <= cutoff
will be locked. Nothing creating new tokens will run into this.

Of course, the locks can be avoided altogether with READ COMMITTED or
READ UNCOMMITTED. The problem only manifests in a SELECT when using
REPEATABLE READ.

 Are we really generating so many tokens that deleting the expired tokens 
 once per second is prohibitive?  That points to something else being wrong.
 

In a proof of concept deployment with somewhat constant load testing
I have seen an average of 20k - 40k tokens per hour expire. Seems to
me that tokens are just simply not being reused as that is about 10 -
20 times the number of actual operations done in that hour.

Running flush_tokens in a loop with sleep 1 would indeed work here. But
if I ever stop running that in a loop, even for 15 minutes, then the
tokens will back up and I'll be stuck with a massive delete again.

I will test Jay's solution, it is closest to what I originally attempted
but does not suffer from MySQL's limitations and will likely be a single
method that works reasonably well for all dbs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Need help writing gate tests

2013-07-10 Thread Adam Young

On 07/10/2013 11:01 PM, Clark Boylan wrote:

On Wed, Jul 10, 2013 at 7:32 PM, Adam Young ayo...@redhat.com wrote:

I want to write 3 new Jenkins gate tests:   Run the Keystone unit tests
against

1. A live LDAP server
2. MySQL
3. Postgresql

Right now, we know that the unit tests will fail against the live DBs, so we
want those two to be non-voting.  The live LDAP one should use the schema as
set up by devstack, and should be voting (it can be non-voting to start).

where do I start?  Do I need to do this in
https://github.com/openstack-infra/config or
http://ci.openstack.org/devstack-gate.html?


Adding a Jenkins job typically involves two pieces of config in
openstack-infra/config. First you need to add the job to the Jenkins
Job Builder config so that the job gets into Jenkins. This is done in
the files under
modules/openstack_project/files/jenkins_job_builder/config. There are
tons of examples in there and documentation can be found at
http://ci.openstack.org/jjb.html. The other config that is needed is
an update to the zuul layout.yaml file telling zuul when to run the
jobs. The layout file is at
modules/openstack_project/files/zuul/layout.yaml and documentation for
that can be found at http://ci.openstack.org/zuul.html.

Thanks.  I'll start reading up here.



Our CentOS 6 and Ubuntu Precise slaves (used to run python 2.6 and 2.7
unittests) have MySQL and PostgreSQL servers running on them and are
available to the unittests.


We currently use a separate user and database for tests than would be used 
for a live server.  The keystone server runs as the keystone DB user, and the 
unit tests run against the keystone_test database as the keystone_test 
user.  I assume I have to create these as part of the test setup?  I can 
possibly add the unit test runs to the current gate jobs, assuming I can 
get them to run cleanly. Would that make more sense?  I thought that the 
current gate jobs for MySQL and postgres were pretty much Tempest runs, 
and we don't currently have a way for tempest to run Keystone unit 
tests.  Does that make more sense than creating a whole separate gate 
job, or is the separate gate job the more scalable solution?



You can see how Nova makes use of these
servers at 
https://github.com/openstack/nova/blob/master/nova/tests/db/test_migrations.py#L31.
I prefer having opportunistic tests like Nova because it keeps the
number of special tests in our system down. If this isn't possible
because the tests don't currently pass you will probably want to add a
new test that runs something like `tox -evenv -- #command to run tests
against real DBs`.

Our CentOS 6 and Ubuntu Precise slaves do not currently have LDAP
servers running on them but we would be happy to add them. I don't
think disposable devstack slaves are necessary to do LDAP testing as
your LDAP tests should be able to assert a clean LDAP state before
testing.


Yes, that works fine. We wipe the LDAP clean before each test.  We still 
need to install the schema, as that is outside the realm of Keystone. 
Devstack does that now.  I assume we should make puppet scripts for that?



If that isn't possible (I could be completely wrong about my
previous statement) you will want to look at the
gate-pbr-devstack-vm-rawinstall in the Jenkins Job Builder config as
it will show you how to use a devstack node for something other than
running tempest.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Best way to do something MySQL-specific?

2013-07-10 Thread Adam Young

On 07/10/2013 11:11 PM, Clint Byrum wrote:

Excerpts from Adam Young's message of 2013-07-10 19:17:24 -0700:

On 07/09/2013 07:33 PM, Jay Pipes wrote:

On 07/08/2013 05:18 PM, Sean Dague wrote:

On 07/01/2013 01:35 PM, Clint Byrum wrote:

The way the new keystone-manage command token_flush works right now
is quite broken by MySQL and InnoDB's gap locking behavior:

https://bugs.launchpad.net/1188378

Presumably other SQL databases like PostgreSQL will have similar
problems
with doing massive deletes, but I am less familiar with them.

I am trying to solve this in keystone, and my first attempt is here:

https://review.openstack.org/#/c/32044/

However, MySQL does not support using LIMIT in a sub-query that
is feeding an IN() clause, so that approach will not work. Likewise,
sqlalchemy does not support the MySQL specific extension to DELETE
which
allows it to have a LIMIT clause.

Now, I can do some hacky things, like just deleting all of the expired
tokens from the oldest single second, but that could also potentially
be millions of tokens, and thus, millions of gaps to lock.

So, there is just not one way to work for all databases, and we have to
have a special mode for MySQL.

I was wondering if anybody has suggestions and/or examples of how to do
that with sqlalchemy.

Honestly, my answer is typically to ask Jay, he understands a lot of the
ways to get SQLA to do the right thing in mysql.

LOL, /me blushes.

In this case, I'd propose something like this, which should work fine
for any database:

cutoff = timeutils.utcnow() - datetime.timedelta(seconds=60)  # one minute ago...

# DELETE in 500 record chunks
q = session.query(
    TokenModel.id).filter(
        TokenModel.expires < cutoff).limit(500)
while True:
    results = q.all()
    if len(results):
        ids_to_delete = [r[0] for r in results]
        session.query(TokenModel).filter(
            TokenModel.id.in_(ids_to_delete)).delete()
    else:
        break

Code not tested, use with caution, YMMV, etc etc...


It seems to me that it would still have the problem described in the
original post.  Even if you are only deleting 500 at a time, all of the
tokens from the original query will be locked...but I guess that the
add new behavior will only have to check against a subset of the
tokens in the database.


Because it is only a read, only the index being used must be locked,
and only the gaps in the range that were seen. So token.expires <= cutoff
will be locked. Nothing creating new tokens will run into this.

Of course, the locks can be avoided altogether with READ COMMITTED or
READ UNCOMMITTED. The problem only manifests in a SELECT when using
REPEATABLE READ.


Is this something we can implement?  I realize it would be MySQL specific.
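
For what it's worth, a minimal sketch of selecting the isolation level via
SQLAlchemy (connection URL hypothetical; READ COMMITTED is also valid on
PostgreSQL, so this need not stay MySQL-specific):

    from sqlalchemy import create_engine

    # Run the token-flush session at READ COMMITTED so the range scan
    # does not take gap locks under InnoDB's default REPEATABLE READ.
    engine = create_engine('mysql://keystone:secret@localhost/keystone',
                           isolation_level='READ COMMITTED')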




Are we really generating so many tokens that deleting the expired tokens
once per second is prohibitive?  That points to something else being wrong.


In a proof of concept deployment with somewhat constant load testing
I have seen an average of 20k - 40k tokens per hour expire. Seems to
me that tokens are just simply not being reused as that is about 10 -
20 times the number of actual operations done in that hour.
Yeah, that is why we are trying to push python-keyring as a client side 
token caching solution. They should be reused.
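
A minimal sketch of what that caching looks like with python-keyring (the
key naming is hypothetical, and fetch_token stands in for a real auth call):

    import keyring

    def get_cached_token(auth_url, username, fetch_token):
        # Reuse a previously issued token for this endpoint/user if the
        # keyring has one; otherwise authenticate once and cache it.
        token = keyring.get_password(auth_url, username)
        if token is None:
            token = fetch_token()
            keyring.set_password(auth_url, username, token)
        return token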






Running flush_tokens in a loop with sleep 1 would indeed work here. But
if I ever stop running that in a loop, even for 15 minutes, then the
tokens will back up and I'll be stuck with a massive delete again.

I will test Jay's solution, it is closest to what I originally attempted
but does not suffer from MySQL's limitations and will likely be a single
method that works reasonably well for all dbs.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ceilometer XSD

2013-07-10 Thread Jobin Raju George
That's great, let's see if I find myself on #openstack-doc. Thanks!


On Thu, Jul 11, 2013 at 4:40 AM, Anne Gentle
annegen...@justwriteclick.com wrote:


 On Tue, Jul 9, 2013 at 10:29 AM, Doug Hellmann 
 doug.hellm...@dreamhost.com wrote:

 The instructions for getting set up to contribute to OpenStack in general
 are at https://wiki.openstack.org/wiki/How_To_Contribute (maybe you've
 already done that, I'm not sure).

 As far as this specific change goes, I would have to understand more
 about what it is that you want added to give specific advice. If we don't
 have the tools to make the XSD, I'm not sure how we will ensure it is kept
 up to date with changes to the API.


 To add some doc perspective and context - generally speaking, only two
 APIs have XSDs that I know of, hand created, Compute v2 and Identity v2.0.
 They are stored in the compute-api repo and the identity-api repo.

 The Compute v2 XSDs are published to
 http://docs.openstack.org/api/openstack-compute/2/xsd/.

 No other APIs have published such artifacts that I know of, but we could
 certainly work towards that goal if there are updated XSDs.

 Interested contributors, come to #openstack-doc Monday at 16:00 UTC during
 doc office hours.
 Thanks,
 Anne


 Doug


 On Mon, Jul 8, 2013 at 1:33 PM, Jobin Raju George jobin...@gmail.com wrote:

 Hey, all!

 I have prepared the XSD for ceilometer and would like to contribute it
 to the repositories on github. Can somebody help me out with the process to
 do this?


 On Mon, Jul 8, 2013 at 2:19 PM, Jobin Raju George jobin...@gmail.com wrote:

 Ok, that's fine. I am currently looking into it. Let's see if I
 get any clues. Thanks for your time!


 On Mon, Jul 8, 2013 at 1:32 PM, Julien Danjou jul...@danjou.info wrote:

 On Sat, Jul 06 2013, Jobin Raju George wrote:

  I am trying to use the meters provided by ceilometer to extract usage
  values from the VM's deployed using openstack.
 
  However, in order to programmatically do this I need the XSD files
 for
  ceilometer. I tried googling them, posting them on forums and even on
  launchpad but have not received any response; it would be great if
 you could
  provide me with the source/link/file of the XSD's.

 The XML API is provided automatically via WSME¹, and I've no idea if it
 provides any XSD automatically generated or something like that.

 ¹  https://pypi.python.org/pypi/WSME

 --
 Julien Danjou
 /* Free Software hacker * freelance consultant
http://julien.danjou.info */




 --

  Thanks and regards,

 Jobin Raju George

 Third Year, Information Technology

 College of Engineering Pune

 Alternate e-mail: georgejr10...@coep.ac.in




 --

 Thanks and regards,

 Jobin Raju George

 Third Year, Information Technology

 College of Engineering Pune

 Alternate e-mail: georgejr10...@coep.ac.in


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Anne Gentle
 annegen...@justwriteclick.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Thanks and regards,

Jobin Raju George

Third Year, Information Technology

College of Engineering Pune

Alternate e-mail: georgejr10...@coep.ac.in
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] mid-cycle sprint?

2013-07-10 Thread Robert Collins
Clint suggested we do a mid-cycle sprint at the weekly meeting a
fortnight ago, but ETIME and stuff - so I'm following up.

HP would be delighted to host a get-together of TripleO contributors
[or 'I will be contributing soon, honest'] folk.

We're proposing a late August / early Sept time - a couple weeks
before H3, so we can be dealing with features that have landed //
ensuring necessary features *do* land.

That would give a start date of the 19th or 26th August. Probable
venue of either Sunnyvale, CA or Seattle.

I need a rough count of numbers to kick off the approval and final
venue stuff w/in HP. I've cc'd some fairly obvious folk that should
come :)

So - who is interested and would come, and what constraints do you have?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev