[openstack-dev] [openstack][nova] an unit test problem

2013-09-04 Thread Wangpan
Hi experts,

I have an odd unit test issue in the commit 
https://review.openstack.org/#/c/44639/
the test results are here:
http://logs.openstack.org/39/44639/7/check/gate-nova-python27/4ddc671/testr_results.html.gz

The failing test is: 
nova.tests.compute.test_compute_api.ComputeCellsAPIUnitTestCase.test_delete_in_resized
I have two questions about this issue:
1) Why does it pass when I run it with 'testr run 
nova.tests.compute.test_compute_api.ComputeCellsAPIUnitTestCase.test_delete_in_resized'
 (and also with 'nosetests') in my local venv?
2) Why does the other test 
nova.tests.compute.test_compute_api.ComputeAPIUnitTestCase.test_delete_in_resized
 pass, even though it also inherits from the class '_ComputeAPIUnitTestMixIn'?

Since it passes in my local venv, I have no idea how to fix it. Can anybody 
give me some advice?
Thanks a lot!

2013-09-04



Wangpan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] I18n meeting tomorrow

2013-09-04 Thread Ying Chun Guo


Hi,


There will be an OpenStack I18n team meeting at 0700 UTC on Thursday (September
5th) in IRC channel #openstack-meeting.
This time we are using an Asia/Europe-friendly time. You are welcome to join
the meeting.

Over the previous several weeks, we have made good progress setting up the
infrastructure in Transifex.
We have a common glossary shared across all OpenStack projects, and we have
Horizon ready for translations.
Tomorrow is the string freeze date, so now is a very important time for
translating.
We want to make sure Horizon has a high-quality internationalized
release for the Havana version.
If you are interested in translations or tools, you are welcome to join us.

We will cover following topics this time:

   Action items from the last meeting
   Horizon I18n version release process
   Translated document publish process
   Open discussion


For more details, please look into
https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting.


You can also contact us through IRC channel #openstack-translation, or
mailing address: openstack-i...@list.openstack.org.
Please refer to our wiki page for more details:
https://wiki.openstack.org/wiki/I18nTeam


Best regards
Daisy


Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-04 Thread David Chadwick
If delegation (trusts) were enhanced to be role-based, then anyone with 
the same role as the initial delegator should be able to revoke the 
delegation.


regards

David


On 04/09/2013 05:02, Clint Byrum wrote:

Excerpts from Dolph Mathews's message of 2013-09-03 16:12:00 -0700:

On Tue, Sep 3, 2013 at 5:52 PM, Steven Hardy sha...@redhat.com wrote:


Hi,

I have a question for the keystone folks re the expected behavior when
deleting a trust.

Is it expected that you can only ever delete a trust as the user who
created it, and that you can *not* delete the trust when impersonating that
user using a token obtained via that trust?



We have some tests in keystone somewhat related to this scenario, but
nothing that asserts that specific behavior-

https://github.com/openstack/keystone/blob/master/keystone/tests/test_auth.py#L737-L763


The reason for this question, is for the Heat use-case, this may represent
a significant operational limitation, since it implies that the user who
creates the stack is the only one who can ever delete it.



I don't follow this implication-- can you explain further? I don't see how
the limitation above (if it exists) would impact this behavior or be a
blocker for the design below.



The way heatclient works right now, it will obtain a trust from
keystone, and then give that trust to Heat to use while it is managing
the stack. However, if this user was just one user in a team of users
who manage that stack, then when the stack is deleted, neither heat,
nor the user who is deleting the stack will be able to delete the trust
that was given to Heat.

This presents an operational hurdle for Heat users, as they will have to
have a stack owner user that is shared amongst a team. Otherwise they
may be stuck in a situation where the creating user is not available to
delete a stack that must be deleted for some reason.

Ideally, as a final operation with the trust, Heat (or the user performing
the delete) would be able to use the trust to delete the trust itself.






Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-04 Thread Steven Hardy
On Tue, Sep 03, 2013 at 06:12:00PM -0500, Dolph Mathews wrote:
 On Tue, Sep 3, 2013 at 5:52 PM, Steven Hardy sha...@redhat.com wrote:
 
  Hi,
 
  I have a question for the keystone folks re the expected behavior when
  deleting a trust.
 
  Is it expected that you can only ever delete a trust as the user who
  created it, and that you can *not* delete the trust when impersonating that
  user using a token obtained via that trust?
 
 
 We have some tests in keystone somewhat related to this scenario, but
 nothing that asserts that specific behavior-
 
 https://github.com/openstack/keystone/blob/master/keystone/tests/test_auth.py#L737-L763
 
 
  The reason for this question, is for the Heat use-case, this may represent
  a significant operational limitation, since it implies that the user who
  creates the stack is the only one who can ever delete it.
 
 
 I don't follow this implication-- can you explain further? I don't see how
 the limitation above (if it exists) would impact this behavior or be a
 blocker for the design below.

As outlined already by Clint, the way Heat uses trusts is:

- User requests stack creation, passes token or user/password to Heat
- Heat uses the user credentials to create a trust between the user and the
  Heat service user, the ID of which is encrypted and stored in our DB
  (instead of the credentials)
- We use the trust to perform lifecycle operations, e.g. adding a nova
  instance to an AutoScalingGroup; the Heat service user impersonates the
  user who created the stack
- The user deletes the stack, at which point we delete the trust

This final step is the problematic one - atm (unless I'm making a mistake,
which, as previously proven, is entirely possible! ;)) it seems that it is
impossible for anyone except the trustor to delete the trust, even if we
impersonate the trustor.

Even a tenant admin, it seems, cannot delete the trust.

  Current Heat behavior is to allow any user in the same tenant, provided
  they have the requisite roles, to delete the stack
 
 
 That seems like a reasonable design. With trusts, any user who has been
 delegated the requisite role on the same tenant should be able to delete
 the stack.

If this is the case, I'd very much like to see some curl examples of this
working, in particular these two scenarios:

- Deleting a trust by impersonating the trustor (using a token obtained
  with the trust you're about to delete, which will obviously be
  invalidated as soon as the delete completes)

- Any user other than the trustor deleting the trust, e.g. some other user
  in the same tenant

I'll create some minimal reproducers to try to illustrate the issue.
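Since curl examples were requested, here is a sketch of the two scenarios against the keystone v3 OS-TRUST API. The endpoint, trust ID, and token are placeholders, and the commands are printed rather than executed, so substitute real values before running them against a live keystone:

```shell
# Sketch only: placeholder endpoint, IDs and tokens -- the curl commands are
# printed, not executed, so substitute real values before trying them.
OS_AUTH_URL="http://localhost:5000/v3"   # assumed keystone v3 endpoint
TRUST_ID="aaaa1111"                      # placeholder trust ID
TOKEN="token-to-test-with"               # trustor, trust-scoped, or other user's token

# Scenario 1: obtain a trust-scoped token (consume the trust) before deleting
cat <<EOF
curl -X POST $OS_AUTH_URL/auth/tokens -H "Content-Type: application/json" -d '
  {"auth": {"identity": {"methods": ["token"], "token": {"id": "$TOKEN"}},
            "scope": {"OS-TRUST:trust": {"id": "$TRUST_ID"}}}}'
EOF

# Scenario 2: delete the trust with whatever token you hold
echo "curl -X DELETE $OS_AUTH_URL/OS-TRUST/trusts/$TRUST_ID -H 'X-Auth-Token: $TOKEN'"
```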

  which AFAICT atm will
  not be possible when using trusts.
 
 Similar to the above, I don't understand how trusts presents a blocker?

Hopefully the above clarifies things: we will either leak trusts, or have to
assert failure on stack delete, unless we can delete the trust on behalf of
the stack-creating (trustor) user when some other user in the tenant
performs the stack delete.

Thanks for any further info you can provide! :)

Steve



Re: [openstack-dev] [ceilometer] Wait a minute... I thought we were going to remove Alembic until Icehouse-1?

2013-09-04 Thread Julien Danjou
On Wed, Sep 04 2013, Jay Pipes wrote:

Hi Jay,

 So I went to do the work I said I was going to do at last week's Ceilometer
 meeting -- translate the 2 Alembic migrations in the Ceilometer source into
 SA-migrate migrations -- and then rebased my branch only to find 2 more
 Alembic migrations added in the last few days:

 https://review.openstack.org/#/c/42716/
 https://review.openstack.org/#/c/42715/

 I will note that there is no unit testing of either of these migrations,
 because neither of them runs on SQLite, which is what the unit tests use
 (improperly, IMHO).

Agreed. That's the reason I jumped in and submitted
https://review.openstack.org/#/c/44681/ to add devstack-gate to
Ceilometer, so we can catch this in the future.

I'm sorry these got in in the meantime; I didn't think about what you were
working on, or that it would affect you, when pushing the button.

 There is a unique constraint name in one of them (only
 apparently used in the PostgreSQL driver) that is inconsistent with the
 naming of unique constraints that is used in the other migration. Note that
 I am not in favor of the unique constraint naming convention of
 table_columnA0columnB0columnC0, as I've noted in the upstream oslo.db patch
 that adds a linter-style check for this convention:

 https://review.openstack.org/#/c/42307/2

Noted. I thought there was already some sort of convention around this.

 I thought we were going to translate the existing 2 Alembic migrations to
 SA-migrate migrations, and then do a switch to Alembic (removing the old
 SA-migrate versioning) in Icehouse-1? This was supposed to get us past the
 current mess of having both SA-migrate and Alembic migrations in the same
 source code base -- which is confusing a bunch of contributors who have
 written SA-migrate migrations.

 Can we have a decision on this please?

That was my understanding too, as I've also written a new migration
using SA-migrate.

 I thought the plan from last week was:

 1) Translate the 2 Alembic migrations to SA-Migrate migrations
 2) Remove Alembic support from Ceilometer
 3) Add unit tests (pretty much as-is from Glance) that would test the
 SA-migrate migrations in the unit tests as well as the MySQL and PostgreSQL
 testers in the gate
 4) Add SA-migrate migrations for the remainder of Havana
 5) Immediately after the cut of Havana final, do a cutover to Alembic from
 SA-migrate that would:
  a) Create an initial Alembic migration that would be the schema state of
 the Ceilometer database at the last cut of Havana
  b) Write a simple check for the migrate_version table in the database to
 check if the database was under SA-migrate control. If so, do nothing other
 than remove the migrate_version table
  c) Remove all the ceilometer/storage/sqlalchemy/migrate_repo/*

Sounds good to me.
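The migrate_version check in step 5b can be sketched as follows. sqlalchemy-migrate records schema state in a `migrate_version` table, so its presence tells us the database was under SA-migrate control; sqlite stands in here for the real configured engine:

```python
import sqlite3

# Sketch of step 5b: detect sqlalchemy-migrate's version-tracking table and,
# if present, simply drop it -- the schema itself is already at the final
# Havana state, so nothing else needs to change at cutover.
def under_sa_migrate_control(conn):
    row = conn.execute(
        "SELECT name FROM sqlite_master"
        " WHERE type = 'table' AND name = 'migrate_version'").fetchone()
    return row is not None

def cutover_to_alembic(conn):
    if under_sa_migrate_control(conn):
        conn.execute("DROP TABLE migrate_version")
```

The real check would go through the configured SQLAlchemy engine rather than raw sqlite, but the logic is the same and is naturally idempotent.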

Now we need to have the PostgreSQL migration fixed one way or another.
Svetlana wrote https://review.openstack.org/#/c/44539/ and I wrote
https://review.openstack.org/#/c/44691/ which try to fix the harm done.

I think the best call is to drop all of these and let your patch go in.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info




Re: [openstack-dev] [Climate] REST API proposal

2013-09-04 Thread Nikolay Starodubtsev
I've made some updates to the document. Please check it.


On Tue, Sep 3, 2013 at 10:00 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Done

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 On Sep 3, 2013, at 21:53, Jay Pipes jaypi...@gmail.com wrote:

  On 08/30/2013 04:37 AM, Nikolay Starodubtsev wrote:
  Hi, everyone!
  We have created a proposal for Climate REST API
 
 https://docs.google.com/document/d/1U36k5wk0sOUyLl-4Cz8tmk8RQFQGWKO9dVhb87ZxPC8/
  And we like to discuss it with everyone.
 
  If you enable commenting on the proposal, then we can put comments into
 the document for you to respond to.
 
  Best,
  -jay
 
 





[openstack-dev] [Glance] v2 api upload image-size issue with rbd backend store

2013-09-04 Thread Edward Hope-Morley
Hi,

I'm hitting an issue with v2 api upload() and not sure the best way to
fix it so would appreciate some opinions/suggestions.

https://bugs.launchpad.net/glance/+bug/1213880
https://bugs.launchpad.net/python-glanceclient/+bug/1220197

So, currently doing cinder upload-to-image fails with the v2 glance api and
the RBD backend store. This is because v2 uses upload() (as opposed to
update() in v1) and does not accept an image size. The v2 Glance api
upload() implementation checks the request content-length (which is
currently always zero), then tries to create an RBD image of size
zero and write to it, which fails. I have tried different solutions:

1. if image size is zero, resize for each chunk then write.

2. set content-length in glanceclient to size of image

Problem with 1 is that this implicitly disables 'Transfer-Encoding:
chunked' i.e. disables chunking. Problem with 2 is you get 2RTT of
network latency per write plus overhead of a resize.

So, I now think the best way to do this would be to modify the update
call to allow the glanceclient to send x-image-meta-size so that the
backend knows how big the image will be, create the image then write the
chunk(s) incrementally (kind of like the swift store).
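To make the cost of option 1 concrete, here is a sketch of the resize-before-each-chunk loop using a stand-in for the RBD image object; the names are illustrative, not the real rbd bindings:

```python
# Sketch of option 1 (resize before each chunk) with a fake in place of the
# RBD image.  Each resize counts as an extra round trip to the cluster, which
# is the per-write overhead described above.
class FakeRbdImage:
    def __init__(self):
        self.data = bytearray()
        self.resizes = 0

    def resize(self, new_size):
        self.resizes += 1      # one extra round trip per call

    def write(self, chunk, offset):
        self.data[offset:offset + len(chunk)] = chunk

def write_unknown_length(image, chunks):
    """Grow the image one chunk at a time when the total size is unknown."""
    offset = 0
    for chunk in chunks:
        image.resize(offset + len(chunk))   # resize first so the write fits
        image.write(chunk, offset)
        offset += len(chunk)
    return offset
```

With a known x-image-meta-size up front, the loop would instead issue a single resize (or create at the right size) and then only the writes.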

Suggestions?

Ed.



[openstack-dev] Session suggestions for the Icehouse Design Summit now open

2013-09-04 Thread Thierry Carrez
Hi everyone,

TL;DR:
The session suggestion website for the Icehouse Design Summit (which
will happen at the OpenStack Summit in Hong-Kong) is now open at:
http://summit.openstack.org/

Long version:

The Icehouse Design Summit is a specific event that is part of the overall
OpenStack Summit in Hong-Kong. It is different from classic tracks in
a number of ways.

* It happens all 4 days, from Tuesday morning to Friday evening.

* There are *no formal presentations or speakers*. The sessions at the
design summit are open discussions between contributors on a specific
development topic for the upcoming development cycle, generally
moderated by the PTL or the person who proposed the session. While it is
possible to prepare a few slides to introduce the current status and
kick-off the discussion, these should never be formal
speaker-to-audience presentations. If that's what you're after, the
presentations in the other tracks of the OpenStack Summit are for you.

* There is no community voting on the content. The Icehouse Design
Summit is split into multiple topics (one for each official OpenStack
Program), and the elected program PTL will be ultimately responsible for
selecting the content he deems important for the upcoming cycle. If you
want to be PTL in place of the PTL, we'll be holding elections for that
in the coming weeks :)

With all this in mind, please feel free to suggest topics of discussion
for this event. The website to do this is open at:

http://summit.openstack.org/

You'll need to go through Launchpad SSO to log on that site (same auth
we use for review.openstack.org and all our core development
infrastructure). If you're lost, try the Help link at the bottom of the
page. If all else fails, ping me.

Please take extra care when selecting the topic your suggestion belongs
in. You can see the complete list of topics at:

https://wiki.openstack.org/wiki/Summit/Icehouse

You have until mid-October to suggest sessions. Proposed session topics
will be reviewed by PTLs afterwards, potentially merged with other
suggestions before being scheduled.

You can also comment on proposed sessions to suggest scheduling
constraints or sessions it could be merged with.

More information about the Icehouse Design Summit can be found at:
https://wiki.openstack.org/wiki/Summit

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-04 Thread Steven Hardy
On Wed, Sep 04, 2013 at 09:49:48AM +0100, Steven Hardy wrote:
 This final step is the problematic step - atm (unless I'm making a mistake,
 which as previously proven is entirely possible! ;) it seems that it's
 impossible for anyone except the trustor to delete the trust, even if we
 impersonate the trustor.

Ok, apologies, after further testing, it appears I made a mistake and you
*can* delete the trust by impersonating the user.

The reason for the confusion is there's an odd issue when authenticating
the client using a trust_id.  If (and only if) the trust has
impersonation=True, you *must* specify the endpoint when initialising the
client, otherwise we do not get a token, we get a 401.

So I misinterpreted the authentication failure as a 401 on delete, because
I'd copied some code and changed impersonate from False to True, which
changes the required arguments when consuming the trust.  Seems like a bug?

I've created a gist containing an example which demonstrates the problem:

https://gist.github.com/hardys/6435299

I'm not sure if the bug is that the authentication works without the endpoint
when impersonate=False, or that it doesn't when impersonate=True.

Thanks!

Steve



[openstack-dev] [nova] review request

2013-09-04 Thread Yaguang Tang
Hi all,

I'd appreciate it if any of the nova-core reviewers could take a look at
https://review.openstack.org/#/c/39226/, as it adds keypair notification
events. This is more like a small feature than a bug fix, so I am calling for
review now in case it can't be accepted after feature freeze.

-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F


Re: [openstack-dev] [Ceilometer] Need help with HBase backend

2013-09-04 Thread Thomas Maddox
No worries at all! I was just curious. =] Sounds great. I appreciate your time.

On 9/3/13 5:56 PM, Stas Maksimov 
maksi...@gmail.com wrote:

Hi Thomas,

Not yet, sorry. But I'm working on it (in parallel!); I was having a bit of an
issue setting up a new env with devstack.

Will update you as soon as I have some results.

Thanks,
Stas





On 3 September 2013 23:00, Thomas Maddox 
thomas.mad...@rackspace.com wrote:
Hey Stas,

Were you ever able to get any answers on this? :)

Thanks!

-Thomas

On 8/12/13 9:42 AM, Thomas Maddox 
thomas.mad...@rackspace.com wrote:

Happens all of the time. I haven't been able to get a single meter stored. :(

From: Stas Maksimov maksi...@gmail.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Monday, August 12, 2013 9:34 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Is it sporadic or happens all the time?

In my case my Ceilometer VM was different from HBase VM, so I'm not sure if 
DHCP issues can affect localhost connections.

Thanks,
Stas

On 12 August 2013 15:29, Thomas Maddox 
thomas.mad...@rackspace.com wrote:
Hmmm, that's interesting.

That would affect an all-in-one deployment? It's referencing localhost right 
now; not distributed. My Thrift server is hbase://127.0.0.1:9090/. Or would 
that still affect it, because it's a software-facilitated localhost reference 
and I'm doing dev inside of a VM (in the cloud) rather than on a hardware 
host?

I really appreciate your help!

-Thomas

From: Stas Maksimov maksi...@gmail.com
Reply-To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date: Monday, August 12, 2013 9:17 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer] Need help with HBase backend

Aha, so here it goes. The problem was not caused by monkey-patching or 
multithreading issues, it was caused by the DevStack VM losing its connection 
and getting a new address from the DHCP server. Once I fixed the connection 
issues, the problem with eventlet disappeared.

Hope this helps,
Stas

On 12 August 2013 14:49, Stas Maksimov 
maksi...@gmail.com wrote:

Hi Thomas,

I definitely saw this before; iirc it was caused by monkey-patching somewhere 
else in ceilometer. It was fixed in the end before I submitted the hbase 
implementation.

At this moment unfortunately that's all I can recollect on the subject. I'll 
get back to you if I have an 'aha' moment on this. Feel free to contact me 
off-list regarding this hbase driver.

Thanks,
Stas.

Hey team,

I am working on a fix for retrieving the latest metadata on a resource rather 
than the first with the HBase implementation, and I'm running into some trouble 
when trying to get my dev environment to work with HBase. It looks like a 
concurrency issue when it tries to store the metering data. I'm getting the 
following error in my logs (summary):

2013-08-11 18:52:33.980 2445 ERROR ceilometer.collector.dispatcher.database 
[req-3b3c65c9-1a1b-4b5d-bba5-8224f074b176 None None] Second simultaneous read 
on fileno 7 detected.  Unless you really know what you're doing, make sure that 
only one greenthread can read any particular socket.  Consider using a 
pools.Pool. If you do know what you're doing and want to disable this error, 
call eventlet.debug.hub_prevent_multiple_readers(False)

Full traceback: http://paste.openstack.org/show/43872/

Has anyone else run into this lovely little problem? It looks like the 
implementation needs to use happybase.ConnectionPool, unless I'm missing 
something.
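The eventlet error above means two green threads ended up reading the same Thrift socket, and a pool fixes that by giving each caller exclusive use of one connection. happybase does ship a ConnectionPool built on this idea; to keep the sketch self-contained, here is the pattern with a dummy connection type instead of a real HBase client:

```python
import queue

# Generic connection-pool sketch.  FakeConnection stands in for a real Thrift
# connection so the example runs anywhere; the point is that each user checks
# a connection out, so no two threads ever share one socket.
class FakeConnection:
    def __init__(self, ident):
        self.ident = ident

class _Lease:
    def __init__(self, pool):
        self._pool = pool

    def __enter__(self):
        self._conn = self._pool.get()   # blocks until a connection is free
        return self._conn

    def __exit__(self, *exc):
        self._pool.put(self._conn)      # returned even if the body raised

class ConnectionPool:
    def __init__(self, size, factory):
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(factory(i))

    def connection(self):
        return _Lease(self._pool)
```

Two nested `with pool.connection()` blocks hand out two distinct connections, which is exactly what the "second simultaneous read" check is asking for.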

Thanks in advance for help! :)

-Thomas















Re: [openstack-dev] [keystone][heat] Question re deleting trusts via trust token

2013-09-04 Thread David Chadwick
You can always do anything by impersonating the user. This is why 
impersonation should never be sanctioned.


david


On 04/09/2013 11:45, Steven Hardy wrote:

Ok, apologies, after further testing, it appears I made a mistake and you
*can*  delete the trust by impersonating the user.




[openstack-dev] [Neutron] Question about create_port

2013-09-04 Thread Chandan Dutta Chowdhury
Hello All,

I am trying to make my neutron plugin configure a physical switch (using 
VLANs). While configuring the physical switch in create_port, I see a lot of 
create_port and delete_port calls appearing in server.log.
I am assuming this may be because the time required to configure/commit on 
the physical switch is high, and nova may be retrying the port creation 
(and deleting the port when a response does not arrive within a timeout 
period).

Is there a timeout value in neutron or nova that can be altered so that the 
client waits for create_port to finish instead of sending multiple 
create/delete port requests?
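If the plugin is reached from nova over the neutron HTTP API, the client-side timeout on nova's side is one knob to look at. In Havana-era nova this was, as far as I recall, neutron_url_timeout; treat the exact option name as an assumption and verify it against your release before relying on it:

```ini
# nova.conf -- option name assumed from Havana-era nova; verify before use.
[DEFAULT]
neutron_url_timeout = 120   ; seconds nova waits on a neutron API call (default 30)
```

Raising the timeout only hides the latency, though; if switch commits routinely take this long, doing the switch programming asynchronously from create_port may be the more robust fix.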

Thanks
Chandan




Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-04 Thread Russell Bryant
On 09/03/2013 09:27 PM, Bryan D. Payne wrote:
 
  How can someone use your code without a key manager?
 
 Some key management mechanism is required although it could be
 simplistic. For example, we’ve tested our code internally with
 an implementation of the key manager interface that returns a
 single, constant key.
 
 That works for testing but doesn't address: the current dearth of
 key management within OpenStack does not preclude the use of our
 existing work within a production environment 
 
 
 My understanding here is that users are free to use any key management
 mechanism that they see fit.  This can be a simple return a static key
 option.  Or it could be using something more feature rich like Barbican.
  Or it could be something completely home grown that is suited to a
 particular OpenStack deployment.
 
 I don't understand why we are getting hung up on having a key manager as
 part of OpenStack in order to accept this work.  Clearly there are other
 pieces of OpenStack that have external dependencies (message queues, to
 name one).

External dependencies are fine, obviously.  The difference is whether we
actually have code to interface with those external dependencies.  We
have code to talk to databases and message queues.  There's no code
right now to interface with anything for key management.

The request here is for something that allows this to be used without
having to modify or add code to Nova.

 
 I, for one, am looking forward to using this feature and would be very
 disappointed to see it pushed back for yet another release.
 

It's not like I'm happy about it, but it needs more code.

 
 Is a feature complete if no one can use it?  
 
 I am happy with a less than secure but fully functional key manager.
  But with no key manager that can be used in a real deployment, what
 is the value of including this code?
 
 
 Of course people can use it.  They just need to integrate with some
 solution of the deployment's choosing that provides key management
 capabilities.  And, of course, if you choose to not use the volume
 encryption then you don't need to worry about it at all.

As noted above, the integration effort takes code.  We need that code so
that the feature can be used.

 I've watched this feature go through many, many iterations throughout
 both the Grizzly and Havana release cycles.  The authors have been
 working hard to address everyone's concerns.  In fact, they have
 navigated quite a gauntlet to get this far.  And what they have now is
 an excellent, working solution.  Let's accept this nice security
 enhancement and move forward.

I agree that they have worked hard.  It's much appreciated.

We have held other features to this same standard.  See the discussion
about live snapshots / cloning fairly recently for one such example.  We
require that there be code in the tree that makes the feature usable.
That's where we are with this.

If a simple "return a static key" manager is deemed useful, I suspect that
could be put together in time.  From talking to Joel on IRC, it seemed that it
wasn't worth it.
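For scale, the "return a static key" manager under discussion could look roughly like this. It is suitable only for testing, since every volume shares one well-known key, and the create_key/get_key interface here is illustrative rather than the exact Nova key manager interface:

```python
# Minimal constant-key manager sketch -- testing only, never production: every
# volume shares one fixed, well-known key.  Interface names are assumptions,
# not Nova's actual key manager API.
class ConstantKeyManager:
    def __init__(self, key=b"\x00" * 32):
        self._key = key                 # one fixed 256-bit key for everything

    def create_key(self, context):
        return "static-key-id"          # the only key ID that ever exists

    def get_key(self, context, key_id):
        if key_id != "static-key-id":
            raise KeyError(key_id)
        return self._key
```

Something this small is the kind of in-tree code that would let the encryption feature be exercised without modifying Nova.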

This is one of those cases where we have to make a tough call, but after
reviewing the concerns raised, the answer is that without some
additional code to make it usable without modifications or additions,
the feature is deferred to Icehouse.

-- 
Russell Bryant



[openstack-dev] [qa] How to do nova v3 tests in tempest

2013-09-04 Thread Zhu Bo

hi,
  I'm working on bp:nova-v3-tests in tempest.  The nova tests in 
tempest have mostly been ported to v3 and submitted,
but we got feedback that there was massive code duplication, with a 
suggestion to do this by inheritance.
So I have sent another patch that does this by inheritance. The downside of 
this approach is that it is not easy to later drop the v2 client and tests.
I want to get more feedback on this blueprint to make sure we do 
this the right way. Which of the two is better, or is there

another, better way? I'd appreciate every suggestion and comment.

the first way to do this in separate files:
https://review.openstack.org/#/c/39609/ and 
https://review.openstack.org/#/c/39621/6


the second way to do this by inheritance.
https://review.openstack.org/#/c/44876/
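The inheritance approach can be sketched like this: shared test logic lives in a mixin, and each API version supplies its own client. The names are illustrative, not tempest's actual classes:

```python
import unittest

# Sketch of version-shared tests via inheritance (illustrative names).  The
# mixin holds the test logic once; each version-specific class binds a client.
class FakeV2Client:
    def list_servers(self):
        return {"servers": []}

class FakeV3Client(FakeV2Client):
    pass  # would override calls whose v3 request/response format differs

class ServersTestMixin:
    def test_list_servers(self):
        body = self.client.list_servers()
        self.assertIn("servers", body)

class ServersV2Test(ServersTestMixin, unittest.TestCase):
    client = FakeV2Client()

class ServersV3Test(ServersTestMixin, unittest.TestCase):
    client = FakeV3Client()
```

Dropping v2 later means deleting ServersV2Test and its client, but the mixin stays entangled with whichever version defined it first; that coupling is the trade-off against the duplicated-files approach.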

Thanks  Best Regards

Ivan




[openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Salvatore Orlando
Whenever I run devstack, keystone fails to start because dogpile.cache is
not installed. This is easily solved by installing it, but I wonder if it
should be in requirements.txt.
Also, since the cache appears to be disabled by default (and I'm not
enabling it in my localrc), I'm not sure why I am hitting this error, as I
would expect the caching module to not be loaded at all.

any help will be appreciated
Salvatore

keystone rev: ead4f98
devstack rev: 3644724


Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Dolph Mathews
On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando sorla...@nicira.com wrote:

 whenever I run devstack keystone fails to start because dogpile.cache is
 not installed; this is easily solved by installing it, but I wonder if it
 should be in requirements.txt
 Also, since the cache appears to be disabled by default (and I'm not
 enabling it in my localrc), I'm not sure why I am hitting this error, as I
 would expect the caching module to not be loaded at all.


That sounds like a bug! It should only be a hard requirement if
keystone.conf [cache] enabled=True


 any help will be appreciated
 Salvatore

 keystone rev: ead4f98
 devstack rev: 3644724






-- 

-Dolph


Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Dolph Mathews
On Wed, Sep 4, 2013 at 9:58 AM, David Stanek dsta...@dstanek.com wrote:



 On Wed, Sep 4, 2013 at 10:23 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando sorla...@nicira.com wrote:

 whenever I run devstack keystone fails to start because dogpile.cache
 is not installed; this is easily solved by installing it, but I wonder if
 it should be in requirements.txt
 Also, since the cache appears to be disabled by default (and I'm not
 enabling it in my localrc), I'm not sure why I am hitting this error, as I
 would expect the caching module to not be loaded at all.


 That sounds like a bug! It should only be a hard requirement if
 keystone.conf [cache] enabled=True



 Currently keystone.assignment.core imports keystone.common.cache, which ends
 up depending on dogpile.  The current implementation depends on dogpile
 even if caching isn't being used.


++ I just poked around with making it an optional dependency and it looks
like it would require quite a bit of refactoring... probably too much this
late in the cycle.





 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com





-- 

-Dolph


Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-04 Thread Russell Bryant
On 09/04/2013 10:26 AM, Dan Smith wrote:
 Hi all,
 
 As someone who has felt about as much pain as possible from the
 dual-maintenance of the v2 and v3 API extensions, I felt compelled to
 bring up one that I think we can drop. The baremetal extension was
 ported to v3 API before (I think) the decision was made to make v3
 experimental for Havana. There are a couple of patches up for review
 right now that make obligatory changes to one or both of the versions,
 which is what made me think about this.
 
 Since Ironic is on the horizon and was originally slated to deprecate
 the in-nova-tree baremetal support for Havana, and since v3 is only
 experimental in Havana, I think we can drop the baremetal extension for
 the v3 API for now. If Nova's baremetal support isn't ready for
 deprecation by the time we're ready to promote the v3 API, we can
 re-introduce it at that time. Until then, I propose we avoid carrying
 it for a soon-to-be-deprecated feature.
 
 Thoughts?

Sounds reasonable to me.  Anyone else have a differing opinion about it?

-- 
Russell Bryant



Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Dolph Mathews
On Wed, Sep 4, 2013 at 9:34 AM, Salvatore Orlando sorla...@nicira.com wrote:

 Is the cache module enabled on the devstack gate? If not, it's definitely
 an issue on my side - otherwise we would have known about this.


On a second look, dogpile.cache is actually required by requirements.txt,
so you *should* have it installed, even though it won't be used :-/

https://github.com/openstack/keystone/blob/6979ae010d1fa20caeda13c8f88cdf6dbfa259c6/requirements.txt#L22


 Salvatore


 On 4 September 2013 15:23, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando sorla...@nicira.com wrote:

 whenever I run devstack keystone fails to start because dogpile.cache
 is not installed; this is easily solved by installing it, but I wonder if
 it should be in requirements.txt
 Also, since the cache appears to be disabled by default (and I'm not
 enabling it in my localrc), I'm not sure why I am hitting this error, as I
 would expect the caching module to not be loaded at all.


 That sounds like a bug! It should only be a hard requirement if
 keystone.conf [cache] enabled=True


  any help will be appreciated
 Salvatore

 keystone rev: ead4f98
 devstack rev: 3644724






 --

 -Dolph








-- 

-Dolph


Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Salvatore Orlando
Is the cache module enabled on the devstack gate? If not, it's definitely
an issue on my side - otherwise we would have known about this.

Salvatore


On 4 September 2013 15:23, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando sorla...@nicira.com wrote:

 whenever I run devstack keystone fails to start because dogpile.cache is
 not installed; this is easily solved by installing it, but I wonder if it
 should be in requirements.txt
 Also, since the cache appears to be disabled by default (and I'm not
 enabling it in my localrc), I'm not sure why I am hitting this error, as I
 would expect the caching module to not be loaded at all.


 That sounds like a bug! It should only be a hard requirement if
 keystone.conf [cache] enabled=True


 any help will be appreciated
 Salvatore

 keystone rev: ead4f98
 devstack rev: 3644724






 --

 -Dolph





Re: [openstack-dev] Cinder Backup documentation - to which doc it should go?

2013-09-04 Thread Edward Hope-Morley
Hi Ronan,

I have a bug open (which I am guilty of letting slip) to amend to cinder
backup docs:

https://bugs.launchpad.net/openstack-manuals/+bug/1205359

I implemented the Ceph backup driver a while back and was intending to
have a cleanup of the backup section in the docs while adding info on
backup to Ceph. Did you get round to implementing your changes? If so
can we get info on ceph backup in there too? (if not I'll get my arse in
gear and do it myself).

Ed.

On 03/09/13 11:50, Ronen Kat wrote:

 I noticed the complaints about code submissions lacking appropriate
 documentation, so I am ready to do my part for Cinder backup.
 I have just one little question.
 Not being up to date on the current set of OpenStack manuals, and as I
 noticed that the block storage admin guide lost a lot of content, to which
 document(s) should I add the Cinder backup documentation?

 The documentation includes:
 1. Backup configuration
 2. General description of Cinder backup (commands, features, etc)
 3. Description of the available backup drivers

 Should all three go to the same place? Or different documents?

 Thanks,

 Regards,
 __
 Ronen I. Kat
 Storage Research
 IBM Research - Haifa
 Phone: +972.3.7689493
 Email: ronen...@il.ibm.com






Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-04 Thread Thierry Carrez
Russell Bryant wrote:
 On 09/04/2013 10:26 AM, Dan Smith wrote:
 Hi all,

 As someone who has felt about as much pain as possible from the
 dual-maintenance of the v2 and v3 API extensions, I felt compelled to
 bring up one that I think we can drop. The baremetal extension was
 ported to v3 API before (I think) the decision was made to make v3
 experimental for Havana. There are a couple of patches up for review
 right now that make obligatory changes to one or both of the versions,
 which is what made me think about this.

 Since Ironic is on the horizon and was originally slated to deprecate
 the in-nova-tree baremetal support for Havana, and since v3 is only
 experimental in Havana, I think we can drop the baremetal extension for
 the v3 API for now. If Nova's baremetal support isn't ready for
 deprecation by the time we're ready to promote the v3 API, we can
 re-introduce it at that time. Until then, I propose we avoid carrying
 it for a soon-to-be-deprecated feature.

 Thoughts?
 
 Sounds reasonable to me.  Anyone else have a differing opinion about it?

+1

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Morgan Fainberg
The dependency on dogpile.cache was an intentional design decision, driven by
the caching mechanism: caching is implemented with decorators, which run when
the decorated modules are imported.  It should have been installed in
devstack, since requirements.txt lists it.  Making it an optional import is
likely going to take a significant amount of refactoring.  My guess is that
it's going to be a bit late in the cycle, but we will see what comes up.
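
The import-time coupling described above can be illustrated with a simplified,
hedged sketch (this is not keystone's actual code; dogpile's real API is
`CacheRegion.cache_on_arguments`, stubbed here with a toy memoizer):

```python
# Why decorator-based caching creates a hard import-time dependency:
# the decorator must be importable when the decorated module is
# imported, even if caching is disabled at runtime.
import functools

CACHE_ENABLED = False  # stand-in for keystone.conf [cache] enabled


def cache_on_arguments(func):
    """Toy stand-in for dogpile.cache's CacheRegion.cache_on_arguments."""
    memo = {}

    @functools.wraps(func)
    def wrapper(*args):
        if not CACHE_ENABLED:
            return func(*args)        # caching disabled: straight call
        if args not in memo:
            memo[args] = func(*args)  # caching enabled: memoize
        return memo[args]

    return wrapper


class AssignmentManager(object):
    # The decorator executes here, at class-definition (i.e. import)
    # time -- so the module providing it must always be installed.
    @cache_on_arguments
    def get_project(self, project_id):
        return {'id': project_id}
```

Even with caching disabled, merely importing the module runs the decorator,
which is why the dependency cannot be dropped without refactoring.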

--Morgan Fainberg
IRC: morganfainberg


On Wed, Sep 4, 2013 at 8:21 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Sep 4, 2013 at 9:58 AM, David Stanek dsta...@dstanek.com wrote:



 On Wed, Sep 4, 2013 at 10:23 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando sorla...@nicira.com wrote:

 whenever I run devstack keystone fails to start because dogpile.cache
 is not installed; this is easily solved by installing it, but I wonder if
 it should be in requirements.txt
 Also, since the cache appears to be disabled by default (and I'm not
 enabling it in my localrc), I'm not sure why I am hitting this error, as I
 would expect the caching module to not be loaded at all.


 That sounds like a bug! It should only be a hard requirement if
 keystone.conf [cache] enabled=True



 Currently keystone.assignment.core imports keystone.common.cache, which
 ends up depending on dogpile.  The current implementation depends on
 dogpile even if caching isn't being used.


 ++ I just poked around with making it an optional dependency and it looks
 like it would require quite a bit of refactoring... probably too much this
 late in the cycle.





 --
 David
 blog: http://www.traceback.org
 twitter: http://twitter.com/dstanek
 www: http://dstanek.com





 --

 -Dolph





Re: [openstack-dev] upgrade tox - now with less slowness!

2013-09-04 Thread Dan Smith
 Because we landed a patch to tox upstream to use setup.py develop
 instead of sdist+install like our run_tests.sh scripts do - this means
 that with the new tox config changes, tox runs should be just as quick
 as run_tests.sh runs.

So. Freaking. Awesome.

--Dan



[openstack-dev] [qa] [neutron] Neutron tests in tempest

2013-09-04 Thread David Kranz
It's great that new neutron tests are being submitted to Tempest. There 
is an issue that the only active neutron tests in the gate are smoke 
tests. Until the full gate can be enabled, please tag any new neutron 
tests as 'smoke' so they run in the gate jobs.
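
A hedged sketch of how this kind of tagging works mechanically (the decorator
name mirrors tempest's `attr` helper, but this is an illustrative stand-in,
not tempest code): the decorator attaches metadata to the test method, and the
runner then selects tests by attribute.

```python
import unittest


def attr(**kwargs):
    """Attach attributes such as type='smoke' to a test method."""
    def decorator(f):
        for key, value in kwargs.items():
            setattr(f, key, value)
        return f
    return decorator


class TestRouters(unittest.TestCase):
    @attr(type='smoke')
    def test_create_router(self):
        self.assertTrue(True)  # placeholder for a real API call


# A runner can then select smoke tests by attribute:
smoke = [name for name in dir(TestRouters)
         if name.startswith('test')
         and getattr(getattr(TestRouters, name), 'type', None) == 'smoke']
```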

Thanks.

 -David



Re: [openstack-dev] upgrade tox - now with less slowness!

2013-09-04 Thread Morgan Fainberg
NICE!!


On Wed, Sep 4, 2013 at 11:05 AM, Dan Smith d...@danplanet.com wrote:

  Because we landed a patch to tox upstream to use setup.py develop
  instead of sdist+install like our run_tests.sh scripts do - this means
  that with the new tox config changes, tox runs should be just as quick
  as run_tests.sh runs.

 So. Freaking. Awesome.

 --Dan




Re: [openstack-dev] [nova] key management and Cinder volume encryption

2013-09-04 Thread Coffman, Joel M.
The following change provides a key manager implementation that reads a static 
key from the project's configuration: https://review.openstack.org/#/c/45103/

This key manager implementation naturally does not provide the same 
confidentiality that would be proffered by retrieving keys from a service like 
Barbican or a KMIP server, but it still provides protection against certain 
attacks like intercepting iSCSI traffic between the compute and storage host 
and lost / stolen disks.
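
A hedged sketch of what such a static-key manager could look like (the class
and method names here are illustrative, not the interface in the patch under
review):

```python
import binascii


class StaticKeyManager(object):
    """Serve one fixed key, read from configuration.

    Weaker than a real key management service (a single key for all
    volumes, stored in plain configuration), but still a defense
    against intercepted iSCSI traffic and lost or stolen disks.
    """

    def __init__(self, hex_key):
        self._key = binascii.unhexlify(hex_key)

    def get_key(self, context, key_id=None):
        # key_id is ignored: there is only one key.
        return self._key


# Demo only: a 128-bit all-zero key; a deployment would set a real value.
mgr = StaticKeyManager('00' * 16)
```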


From: Bryan D. Payne [mailto:bdpa...@acm.org]
Sent: Wednesday, September 04, 2013 9:47 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] key management and Cinder volume encryption


External dependencies are fine, obviously.  The difference is whether we
actually have code to interface with those external dependencies.  We
have code to talk to databases and message queues.  There's no code
right now to interface with anything for key management.

Ok, this makes sense.  I generally assume that people deploying OpenStack have 
some integration work to do anyway.  So, for me, writing a few python methods 
isn't much different than writing a configuration file.  Having said this, I do 
understand where you are coming from here.

I do believe that a static key configuration is a useful starting place for a 
lot of users.  I spoke with Joel this morning and I think he is going to try to 
put together an example key management driver that does this today.  Such a 
solution would allow deployers to use their existing orchestration tools to 
write a key to a configuration file.

Cheers,
-bryan


Re: [openstack-dev] upgrade tox - now with less slowness!

2013-09-04 Thread Monty Taylor


On 09/04/2013 01:22 PM, Dolph Mathews wrote:
 
 On Wed, Sep 4, 2013 at 10:56 AM, Monty Taylor mord...@inaugust.com wrote:
 
 Hey all!
 
 https://review.openstack.org/#/c/42178/2 has landed in nova, which means
 that nova now requires tox 1.6 or higher (for those folks using tox).
 We'll be wanting to port the same change to all of the projects, so if
 you use tox for anything, you'll want to go ahead and upgrade.
 
 Why?
 
 Because we landed a patch to tox upstream to use setup.py develop
 instead of sdist+install like our run_tests.sh scripts do - this means
 that with the new tox config changes, tox runs should be just as quick
 as run_tests.sh runs.
 
 Other than speed, it also gets more correctness, as before the process
 ran sdist with system python, even if that wasn't the python that was to
 be used for the test run.
 
 
 YAY! Can we kill run_tests.sh now?

Probably in another cycle - there are still things we need from the
testr ui that we have to punt to run_tests.sh for now. For instance,
running a single test in debug mode right now is WAY easier in run_tests.sh.
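
For reference, the tox 1.6 feature in play here is the `usedevelop` setting; a
minimal tox.ini fragment using it might look like:

```ini
# Requires tox >= 1.6: usedevelop makes tox install the project with
# "setup.py develop" instead of building an sdist and installing it,
# which is both faster and uses the target env's python for the install.
[tox]
minversion = 1.6

[testenv]
usedevelop = True
```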

 
 Enjoy!
 Monty
 
 
 
 
 
 -- 
 
 -Dolph
 
 
 



Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)

2013-09-04 Thread Matthew Farrellee

On 09/04/2013 04:06 PM, Andrei Savu wrote:

Hi guys -

I just started playing with Savanna a few days ago - I'm still
going through the code. Next week I want to start work on a plugin
that will deploy CDH using Cloudera Manager.

What process should I follow? I'm new to Launchpad / Gerrit. Should I
start by creating a blueprint and a bug / improvement request?


Savanna is following all OpenStack community practices so you can check 
out https://wiki.openstack.org/wiki/How_To_Contribute to get a good idea 
of what to do.


In short, yes please use launchpad and gerrit and create a blueprint.



Is there any public OpenStack deployment that I can use for testing?
Should 0.2 work with Grizzly at trystack.org?


0.2 will work with Grizzly. I've not tried trystack so let us know if it 
works.



Best,


matt




Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-04 Thread Brian Cline
Was any consensus on this ever reached? It appears both reviews are still open. 
I'm partial to review 37131, as it attacks the problem more concisely and, as 
mentioned, combines the efforts of the two earlier patches. I would echo 
Carl's sentiment that it's an easy review, minus the few minor behaviors 
discussed on the review thread today.

We feel very strongly about these making it into Havana -- being confined to a 
single neutron-server instance per cluster or region is a huge bottleneck: it 
is essentially the only controller process, and it suffers massive CPU churn 
in environments with constant instance turnover or sudden large batches of 
new instance requests.

In Grizzly, this behavior caused addresses not to be issued to some instances 
during boot, due to quantum-server thinking the DHCP agents timed out and were 
no longer available, when in reality they were just backlogged (waiting on 
quantum-server, it seemed).

Is it realistically looking like this patch will be cut for h3?

--
Brian Cline
Software Engineer III, Product Innovation

SoftLayer, an IBM Company
4849 Alpha Rd, Dallas, TX 75244
214.782.7876 direct  |  bcl...@softlayer.com
 

-Original Message-
From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com] 
Sent: Wednesday, August 28, 2013 3:04 PM
To: Mark McClain
Cc: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron] The three API server multi-worker process 
patches.

All,

We've known for a while now that some duplication of work happened with
respect to adding multiple worker processes to the neutron-server.  There
were a few mistakes made which led to three patches being done
independently of each other.

Can we settle on one and accept it?

I have changed my patch at the suggestion of one of the other two authors,
Peter Feiner, in an attempt to find common ground.  It now uses openstack
common code and is therefore more concise than any of the original
three, so it should be pretty easy to review.  I'll admit to some bias toward
my own implementation, but most importantly, I would like one of these
implementations to land and start seeing broad usage in the community
sooner rather than later.

Carl Baldwin

PS Here are the two remaining patches.  The third has been abandoned.

https://review.openstack.org/#/c/37131/
https://review.openstack.org/#/c/36487/
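
For readers unfamiliar with the approach, the general pre-fork pattern that
multi-worker API servers use can be sketched as follows (a simplified
illustration, not the actual neutron or oslo code): the parent binds one
listening socket, then forks N workers that all accept() on it, letting the
kernel spread incoming connections across processes.

```python
import os
import socket


def serve_forever(sock):
    # Each worker blocks in accept() on the shared listening socket.
    while True:
        conn, _addr = sock.accept()
        conn.sendall(b'handled by pid %d' % os.getpid())
        conn.close()


def start(num_workers=2):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('127.0.0.1', 0))  # ephemeral port, for the demo
    sock.listen(128)
    pids = []
    for _ in range(num_workers):
        pid = os.fork()
        if pid == 0:
            serve_forever(sock)  # child: inherits the listening socket
        pids.append(pid)
    return sock, pids            # parent: would normally monitor/reap
```

A real server's parent process would also restart dead workers and handle
signals; this sketch only shows the socket-sharing core of the idea.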





[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-04

2013-09-04 Thread Shawn Hartsock

It's feature-freeze eve. Here's where we sit tonight. Many of our blueprints 
are on the cusp of making it in. For reference, the feature freeze rules:
# https://wiki.openstack.org/wiki/FeatureFreeze


... blueprint reviews by readiness ...
Needs one more core review/approval:
* NEW, https://review.openstack.org/#/c/30282/ ,'VMware: Multiple cluster 
support using single compute service'

https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
core votes,1, non-core votes,4, down votes, 0

Ready for core reviewer:
* NEW, https://review.openstack.org/#/c/37819/ ,'VMware image clone strategy 
settings and overrides'
https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
core votes,0, non-core votes,6, down votes, 0

Needs reviews:
* NEW, https://review.openstack.org/#/c/35633/ ,'Enhance the vCenter driver to 
support FC volume attach'
https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
core votes,0, non-core votes,2, down votes, 0


Havana Blueprints by order of importance:
===
#1. 
https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
 
    * https://review.openstack.org/#/c/30282/ - one more +2

#2 https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support 
   * https://review.openstack.org/#/c/40105/ - Merged!
   * https://review.openstack.org/#/c/40245/ - has +2s waiting?
   * https://review.openstack.org/#/c/41387/ - needs +2s

#3 https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
   * https://review.openstack.org/#/c/41600/ - Merged!
   * https://review.openstack.org/#/c/43465/ - needs rebase (merge failed)

#4 
https://blueprints.launchpad.net/nova/+spec/deploy-vcenter-templates-from-vmware-nova-driver
   * https://review.openstack.org/#/c/34903/ - needs revision

#5 https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
   * https://review.openstack.org/#/c/37659/ - needs revision

#6 https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
   * https://review.openstack.org/#/c/37819/ - needs +2s
===


Meeting info:
* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
* If anything is missing, add 'hartsocks' as a reviewer to the patch so I can 
examine it.
* We hang out in #openstack-vmware if you need to chat.

# Shawn Hartsock



[openstack-dev] [Neutron] VPNaaS

2013-09-04 Thread Nachi Ueno
Hi folks

The VPNaaS DB, driver, and CLI support has been merged into Neutron.
# Heat support also looks to be merged!

This is a demo video:
http://www.youtube.com/watch?v=6qqCRqBwMUY

This is the latest guide on how to install VPN:
https://wiki.openstack.org/wiki/Quantum/VPNaaS/HowToInstall

The last part is Horizon support:
https://review.openstack.org/#/c/34882/

It would be great if you could help test the VPNaaS function and the
Horizon support.

It would also be helpful if you could test it with existing network gear.

Best
Nachi



[openstack-dev] [CI] can't receive mails from Jenkins

2013-09-04 Thread Gareth
Hi, all

I first noticed this problem about 15 hours ago. Activity on the reviews
below did not generate notification mails to me:

https://review.openstack.org/#/c/45081/
https://review.openstack.org/#/c/45031/

Is this a personal issue or a general one? Or an old, known problem?

-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*