Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Rochelle Grober
What I see in this conversation is that we are talking about multiple different 
user classes.

Infra-operators need as much info as possible, so that if a vendor driver is 
erring out, the dev-ops team can see it in the log.

Tenant-operators are a totally different class of user.  These folks need VM-based 
logs and virtual-network-based logs, etc., but should never see as far under 
the covers as the infra-ops *has* to see.

So this sounds like a security policy issue: what makes it into tenant logs and 
what stays in the data center.

There are *lots* of logs that are being generated.  It sounds like we need 
standards on what goes into which logs along with error codes, 
logging/reporting levels, criticality, etc.
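
One way to picture that separation (full detail in the infra log, only sanitized 
records in the tenant log) is with Python's stdlib logging. This is a minimal 
sketch, with hypothetical logger names and messages, not any project's actual 
logging config:

```python
import logging

class ListHandler(logging.Handler):
    """Collect formatted records in a list (stand-in for a real log sink)."""
    def __init__(self, sink):
        super().__init__()
        self.sink = sink

    def emit(self, record):
        self.sink.append(self.format(record))

infra_records, tenant_records = [], []

# Infra-ops log: everything, down to DEBUG-level driver tracebacks.
infra_log = logging.getLogger("cloud.infra")
infra_log.propagate = False
infra_log.setLevel(logging.DEBUG)
infra_log.addHandler(ListHandler(infra_records))

# Tenant-facing log: INFO and above only, and only sanitized messages.
tenant_log = logging.getLogger("cloud.tenant")
tenant_log.propagate = False
tenant_log.setLevel(logging.INFO)
tenant_log.addHandler(ListHandler(tenant_records))

# A vendor-driver failure: the full detail stays in the data center...
infra_log.debug("vendor driver X raised IOError: <full traceback here>")
# ...while the tenant sees an actionable, sanitized message with an error code.
tenant_log.info("instance build failed (error E1234); please retry")

print(infra_records)   # full detail, infra-only
print(tenant_records)  # sanitized tenant view
```

The policy question in this thread is then which records get routed to which 
sink, and at what level of detail.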

--Rocky

(bcc'ing the ops list so they can join this discussion, here)

-Original Message-
From: Sean Dague [mailto:s...@dague.net] 
Sent: Monday, February 02, 2015 8:19 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
 Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka security), 
 let's be careful how generous we are with the user and data we hand back. It 
 should give enough information to be useful but no more. I don't want to see 
 us opened to weird attack vectors because we're exposing internal state too 
 generously. 
 
 In short let's aim for a slow roll of extra info in, and evaluate each data 
 point we expose (about a failure) before we do so. Knowing more about a 
 failure is important for our users. Allowing easy access to information that 
 could be used to attack or to increase the impact of a DoS could be bad. 
 
 I think we can do it but it is important to not swing the pendulum too far 
 the other direction too fast (give too much info all of a sudden). 

Security by cloud obscurity?

I agree we should evaluate information sharing with security in mind.
However, the black boxing level we have today is bad for OpenStack. At a
certain point once you've added so many belts and suspenders, you can no
longer walk normally any more.

Anyway, let's stop having this discussion in the abstract and actually just
evaluate the cases in question as they come up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [oslo] oslo.versionedobjects repository is ready for pre-import review

2015-02-02 Thread Doug Hellmann
I’ve prepared a copy of nova.objects as oslo_versionedobjects in 
https://github.com/dhellmann/oslo.versionedobjects-import. The script to create 
the repository is part of the update to the spec in 
https://review.openstack.org/15.

Please look over the code so you are familiar with it. Dan and I have already 
talked about the need to rewrite the tests that depend on nova’s service code, 
so those are set to skip for now. We’ll need to do some work to make the lib 
compatible with python 3, so I’ll make sure the project-config patch does not 
enable those tests, yet.

Please post comments on the code here on the list in case I end up needing to 
rebuild that import repository.

I’ll give everyone a few days before removing the WIP flag from the infra 
change to import this new repository (https://review.openstack.org/151792).

Doug




Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-02 Thread Zane Bitter

On 30/01/15 02:19, Thomas Spatzier wrote:

From: Zane Bitter zbit...@redhat.com
To: openstack Development Mailing List

openstack-dev@lists.openstack.org

Date: 29/01/2015 17:47
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in

Heat


I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (e.g. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?


I am really not a keystone expert, so I don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times. I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding of lines of business in a company, or for onboarding customers
in a public cloud case. They would like to be able to have templates that
lay out the overall structure for authentication stuff, and then
parameterize it for each onboarding process.
If this is something to be enabled, that would be interesting to explore.


Thanks for the input everyone. I raised a spec + blueprint here:

https://review.openstack.org/152309

I don't have any immediate plans to work on this, so if anybody wants to 
grab it they'd be more than welcome :)


cheers,
Zane.



Re: [openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pavlo Shchelokovskyy
Hi Pratik,

what would be the aim of this templating? I ask since we in Heat try to
keep imperative logic (e.g. if-else) out of Heat templates, leaving
it to other services. Plus there is already a spec for a Heat template
function to repeat pieces of template structure [1].

I can definitely say that some other OpenStack projects that are consumers
of Heat will be interested - Trove already tries to use Jinja templates to
create Heat templates [2], and possibly Sahara and Murano might be
interested as well (I suspect though the latter already uses YAQL for that).

[1] https://review.openstack.org/#/c/140849/
[2]
https://github.com/openstack/trove/blob/master/trove/templates/default.heat.template
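
For context, the Trove template linked in [2] is essentially a Heat template 
preprocessed by Jinja before Heat ever parses it. Schematically, it looks 
something like this (a hedged sketch with invented resource and variable names, 
not Trove's actual template):

```yaml
# Jinja runs first: the {% for %} loop below is expanded into plain YAML
# before the result is handed to Heat. All names here are hypothetical.
heat_template_version: 2013-05-23
resources:
{% for name in volume_names %}
  volume_{{ loop.index }}:
    type: OS::Cinder::Volume
    properties:
      name: {{ name }}
      size: {{ volume_size }}
{% endfor %}
```

The loop is exactly the kind of repetition that the spec in [1] aims to express 
natively inside Heat instead.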

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 2, 2015 at 8:29 PM, Pratik Mallya pratik.mal...@rackspace.com
wrote:

 Hello Heat Developers,

 As part of an internal development project at Rackspace, I implemented a
 mechanism to allow using Jinja templating system in heat templates. I was
 hoping to give a talk on the same for the upcoming summit (which will be
 the first summit after I started working on openstack). Have any of you
 worked on, or are working on, something similar? If so, could you please contact
 me and we can maybe propose a joint talk? :-)

 Please let me know! It’s been interesting work and I hope the community
 will be excited to see it.

 Thanks!
 -Pratik



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Jeremy Stanley
On 2015-02-02 23:29:55 +0300 (+0300), Alexandre Levine wrote:
 I'll do that when I've got myself acquainted with the weekly meetings
 procedure (haven't actually bumped into it before) :)
[...]

Start from the https://wiki.openstack.org/wiki/Meetings page
preamble and follow the instructions linked from it.
-- 
Jeremy Stanley



Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Morgan Fainberg
Thank you for the heads up. 

—Morgan

-- 
Morgan Fainberg

On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) wrote:

Just FYI, in case there were any questions:

In addition to testing and reporting on Nova, the IBM PowerKVM CI system is now 
also testing against Keystone patches.

We are happy to also be testing keystone patches on PowerKVM, and will be 
adding other projects soon.

Regards,
Kurt Taylor (krtaylor)


Re: [openstack-dev] [oslo] oslo.versionedobjects repository is ready for pre-import review

2015-02-02 Thread Doug Hellmann


On Mon, Feb 2, 2015, at 04:33 PM, Doug Hellmann wrote:
 I’ve prepared a copy of nova.objects as oslo_versionedobjects in
 https://github.com/dhellmann/oslo.versionedobjects-import. The script to
 create the repository is part of the update to the spec in
 https://review.openstack.org/15.
 
 Please look over the code so you are familiar with it. Dan and I have
 already talked about the need to rewrite the tests that depend on nova’s
 service code, so those are set to skip for now. We’ll need to do some
 work to make the lib compatible with python 3, so I’ll make sure the
 project-config patch does not enable those tests, yet.
 
 Please post comments on the code here on the list in case I end up
 needing to rebuild that import repository.
 
 I’ll give everyone a few days before removing the WIP flag from the infra
 change to import this new repository
 (https://review.openstack.org/151792).

I filed bugs for a few known issues that we'll need to work on before
the first release: https://bugs.launchpad.net/oslo.versionedobjects

Doug



Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Jesse Pretorius
On 2 February 2015 at 16:29, Sean Dague s...@dague.net wrote:

 It's really easy to say someone should do this, but the problem is
 that none of the core team is interested, and neither is anyone else. Most
 of the people who were once interested are no longer active in
 OpenStack.

 EC2 compatibility does not appear to be part of the long term strategy
 for the project, hasn't been in a while (looking at the level of
 maintenance here). Ok, we should signal that so that new and existing
 users that believe that is a core supported feature realize it's not.

 The fact that there is some plan to exist out of tree is a bonus,
 however the fact that this is not a first class feature in Nova really
 does need to be signaled. It hasn't been.

 Maybe deprecation is the wrong tool for that, and marking EC2 as
 experimental and non supported in the log message is more appropriate.


I think that perhaps something that shouldn't be lost sight of is that the
users using the EC2 API are using it as-is. The only commitment that needs
to be made is to maintain the functionality that's already there, rather
than attempting to keep it up to scratch with newer functionality that's come
into EC2.

The stackforge project can perhaps be the incubator for the development of
a full replacement which is more up-to-date and acts more like a
translator. Once it's matured enough that the users want to use it instead
of the old in-tree EC2 API, then perhaps deprecation is the right option.

Between now and then, I must say that I agree with Sean - perhaps the best
strategy would be to make it clear somehow that the EC2 API isn't a fully
tested or up-to-date API.


[openstack-dev] [Trove] Schedule for Trove Mid-Cycle Sprint

2015-02-02 Thread Nikhil Manchanda
Hi folks:

I've updated the schedule for the Trove Mid-Cycle Sprint at
https://wiki.openstack.org/wiki/Sprints/TroveKiloSprint#Schedule
and have linked the slots on the time-table to the etherpads that we're
planning on using to track the discussion.

I've also updated the page with some more information about remote
participation in case you're not able to make it to the mid-cycle
location (Seattle, WA) in person.

Hope to see many of you tomorrow at the mid-cycle sprint.

Cheers,
Nikhil


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 01:21:31PM -0500, Andrew Laski wrote:
 
 On 02/02/2015 11:26 AM, Daniel P. Berrange wrote:
 On Mon, Feb 02, 2015 at 11:19:45AM -0500, Andrew Laski wrote:
 On 02/02/2015 05:58 AM, Daniel P. Berrange wrote:
 On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
 Thanks for bringing this up, Daniel.  I don't think it makes sense to have
 a timeout on live migration, but operators should be able to cancel it,
 just like any other unbounded long-running process.  For example, there's
 no timeout on file transfers, but they need an interface to report progress
 and to cancel them.  That would imply an option to cancel evacuation too.
 There has been periodic talk about a generic tasks API in Nova for 
 managing
 long running operations and getting information about their progress, but I
 am not sure what the status of that is. It would obviously be applicable to
 migration if that's a route we took.
 Currently the status of a tasks API is that it would happen after the API
 v2.1 microversions work has created a suitable framework in which to add
 tasks to the API.
 So is all work on tasks blocked by the microversions support ? I would have
 though that would only block places where we need to modify existing APIs.
 Are we not able to add APIs for listing / cancelling tasks as new APIs
 without such a dependency on microversions ?
 
 Tasks work is certainly not blocked on waiting for microversions. There is a
 large amount of non API facing work that could be done to move forward the
 idea of a task driving state changes within Nova. I would very likely be
 working on that if I wasn't currently spending much of my time on cells v2.

Ok, thanks for the info. So from the POV of migration, I'll focus on the
non-API stuff, and expect the tasks work to provide the API mechanisms.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Anita Kuno
On 02/02/2015 02:52 PM, Kurt Taylor wrote:
 Thanks Morgan, That's why I wanted to email.
And since we have over 100 third-party CI accounts, this is why this sort
of conversation can take place in channel rather than on the mailing list.

Everyone can attend meetings: https://wiki.openstack.org/wiki/Meetings
Permission is not required. Show up at the specified time and day in the
irc channel and introduce yourself.

Thank you,
Anita.

 We will gladly come to a
 meeting and formally request to comment and will turn off commenting on
 Keystone until then.
 
 Thanks,
 Kurt Taylor (krtaylor)
 
 On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:
 
 I assumed [my mistake] this was not commenting/reporting, just running
 against Keystone. I expect a more specific request to comment rather than a
 “hey we’re doing this” if commenting is what is desired.

 Please come to our weekly meeting if you’re planning on commenting/scoring
 on keystone patches.

 --
 Morgan Fainberg

 On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info)
 wrote:

 On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
 Thank you for the heads up.

 —Morgan

 --
 Morgan Fainberg

 On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com)
 wrote:

 Just FYI, in case there were any questions:

 In addition to testing and reporting on Nova, the IBM PowerKVM CI system
 is now also testing against Keystone patches.

 We are happy to also be testing keystone patches on PowerKVM, and will
 be adding other projects soon.

 Regards,
 Kurt Taylor (krtaylor)


 Requesting permission to comment on a new repo is best done at the
 weekly meeting of the project in question, not the mailing list.

 Thanks,
 Anita.



 




Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Stefano Maffulli
On Fri, 2015-01-30 at 23:05 +, Everett Toews wrote:
 To converge the OpenStack APIs to a consistent and pragmatic RESTful
 design by creating guidelines that the projects should follow. The
 intent is not to create backwards incompatible changes in existing
 APIs, but to have new APIs and future versions of existing APIs
 converge.

It's looking good already. I think it would be good also to mention the
end-recipients of the consistent and pragmatic RESTful design so that
whoever reads the mission is reminded why that's important. Something
like:

To improve the developer experience by converging the OpenStack APIs to
a consistent and pragmatic RESTful design. The working group
creates guidelines that all OpenStack projects should follow,
avoids introducing backwards-incompatible changes in existing
APIs, and promotes convergence of new APIs and future versions of
existing APIs.

more or less...

/stef




Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Michael Still
On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
alev...@cloudscaling.com wrote:
 Michael,

 I'm rather new here, especially in regard to communication matters, so I'd
 also be glad to understand how it's done and then I can drive it if it's ok
 with everybody.
 By saying EC2 sub team - who did you keep in mind? From my team 3 persons
 are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.

 From the technical point of view the transition plan could look somewhat
 like this (sequence can be different):

 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
 2. Contribute Tempest tests for EC2 functionality and employ them against
 nova's EC2.
 3. Write spec for required API to be exposed from nova so that we get full
 info.
 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
 6. Communicate and discover all of the existing questions and problematic
 points for the switching from existing EC2 API to the new one. Provide
 solutions or decisions about them.
 7. Do performance testing of the new stackforge/ec2 and provide fixes if any
 bottlenecks come up.
 8. Have all of the above prepared for the Vancouver summit and discuss the
 situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.

 Michael, I am still wondering, who's going to be responsible for timely
 reviews and approvals of the fixes and tests we're going to contribute to
 nova? So far this is the biggest risk. Is there any way to allow some of us
 to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.

I think given the level of interest here we can have a serious
discussion at Vancouver about whether EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael


 On 2/2/15 2:46 AM, Michael Still wrote:

 So, it's exciting to me that we seem to be developing more forward
 momentum here. I personally think the way forward is a staged
 transition from the in-nova EC2 API to the stackforge project, with
 testing added to ensure that we are feature complete between the two.
 I note that Soren disagrees with me here, but that's ok -- I'd like to
 see us work through that as a team based on the merits.

 So... It sounds like we have an EC2 sub team forming. How do we get
 that group meeting to come up with a transition plan?

 Michael

 On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas dava...@gmail.com
 wrote:

 Alex,

 Very cool. thanks.

 -- dims

 On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 alev...@cloudscaling.com wrote:

 Davanum,

 Now that the picture with both EC2 API solutions has cleared up a
 bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:

 Alexandre, Randy,

 Are there plans afoot to add support to switch on stackforge/ec2-api
 in devstack? Add tempest tests, etc.? CI would go a long way in
 alleviating concerns, I think.

 thanks,
 dims

 On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com
 wrote:

 As you know we have been driving forward on the stack forge project
 and
 it's our intention to continue to support it over time, plus
 reinvigorate
 the GCE APIs when that makes sense. So we're supportive of deprecating
 from Nova to focus on EC2 API in Nova.  I also think it's good for
 these
 APIs to be able to iterate outside of the standard release cycle.



 --Randy

 VP, Technology, EMC Corporation
 Formerly Founder & CEO, Cloudscaling (now a part of EMC)
 +1 (415) 787-2253 [google voice]
 TWITTER: twitter.com/randybias
 LINKEDIN: linkedin.com/in/randybias
 ASSISTANT: ren...@emc.com






 On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
 implementation is in a pretty sad state. I wont repeat all of those
 details here -- you can read the thread on openstack-dev for detail.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine


On 2/2/15 11:15 PM, Michael Still wrote:

On Mon, Feb 2, 2015 at 11:01 PM, Alexandre Levine
alev...@cloudscaling.com wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so I'd
also be glad to understand how it's done and then I can drive it if it's ok
with everybody.
By saying EC2 sub team - who did you keep in mind? From my team 3 persons
are involved.

I see the sub team as the way of keeping the various organisations who
have expressed interest in helping pulling in the same direction. I'd
suggest you pick a free slot on our meeting calendar and run an irc
meeting there weekly to track overall progress.


I'll do that when I've got myself acquainted with the weekly meetings 
procedure (haven't actually bumped into it before) :)



 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them against
nova's EC2.
3. Write spec for required API to be exposed from nova so that we get full
info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and problematic
points for the switching from existing EC2 API to the new one. Provide
solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if any
bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss the
situation there.

This sounds really good to me -- this is the sort of thing you'd be
tracking against in that irc meeting, although presumably you'd
negotiate as a group exactly what the steps are and who is working on
what.

Do you see transitioning users to the external EC2 implementation as a
final step in this list? I know you've only gone as far as Vancouver
here, but I want to be explicit about the intended end goal.


Yes, that's correct. The very final step, though, would be cleaning up 
nova from the EC2 stuff. But you're right, the major goal would be to 
make the external EC2 API production-ready and to have all of the necessary 
means for users to transition seamlessly (no downtime, no instance 
recreation required).

So I can point at least three distinct major milestones here:

1. EC2 API in nova is back and revived (no showstoppers, all of the 
currently employed functionality safe and sound, new tests added to 
check and ensure that).

2. External EC2 API is production-ready.
3. Nova is relieved of the EC2 stuff.

Vancouver is somewhere in between 1 and 3.



Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute to
nova? So far this is the biggest risk. Is there any way to allow some of us
to participate in the process?

Sean has offered here, for which I am grateful. Your team as it forms
should also start reviewing each other's work, as that will reduce the
workload somewhat for Sean and other cores.


We've already started.


I think given the level of interest here we can have a serious
discussion at Vancouver about whether EC2 should be nominated as a priority
task for the L release, which is our more formal way of cementing this
at the beginning of a release cycle.

Thanks again to everyone who has volunteered to help out with this.
35% of our users are grateful!

Michael



On 2/2/15 2:46 AM, Michael Still wrote:

So, it's exciting to me that we seem to be developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas dava...@gmail.com
wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
alev...@cloudscaling.com wrote:

Davanum,

Now that the picture with both EC2 API solutions has cleared up a
bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? Add tempest tests, etc.? CI would go a long way in
alleviating concerns, I think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com
wrote:

As you know we have been driving forward on the stack forge project
and
it's our intention to continue to support it over time, plus
reinvigorate
the GCE APIs when that makes sense. So we're supportive of 

Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean Dague
On 02/02/2015 04:20 PM, Mark McClain wrote:
 You’re right that the Mako dependency is really a side effect from Alembic.  
 We used jinja for templating radvd because it is used by the projects within 
 the OpenStack ecosystem and also used in VPNaaS.

Jinja is far more used in other parts of OpenStack from my recollection;
I think that's probably the preferred thing to consolidate on.

Alembic being different is fine, it's a dependent library.
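
For illustration, the radvd templating in question looks roughly like this with 
Jinja (a hedged sketch with invented variable names, assuming the third-party 
jinja2 package; this is not Neutron's actual template):

```python
from jinja2 import Template  # third-party: pip install Jinja2

# Hypothetical radvd.conf fragment in Jinja syntax; the variable names
# are invented for illustration and are not Neutron's real ones.
RADVD_TEMPLATE = Template("""\
interface {{ interface_name }}
{
   AdvSendAdvert on;
   {% for prefix in prefixes %}
   prefix {{ prefix }}
   {
        AdvOnLink on;
   };
   {% endfor %}
};
""")

conf = RADVD_TEMPLATE.render(
    interface_name="qr-1234abcd", prefixes=["2001:db8::/64"])
print(conf)
```

Mako's equivalent would use `${interface_name}`-style substitution; the question 
in this thread is only which syntax the tree standardizes on.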

-Sean

 mark
 
 
 On Feb 2, 2015, at 3:13 PM, Sean M. Collins s...@coreitpro.com wrote:

 Sorry, I should have done a bit more grepping before I sent the e-mail,
 since it appears that Mako is being used by alembic.

 http://alembic.readthedocs.org/en/latest/tutorial.html

 So, should we switch the radvd templating over to Mako instead?

 -- 
 Sean M. Collins

 


-- 
Sean Dague
http://dague.net



[openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
Just FYI, in case there were any questions:

In addition to testing and reporting on Nova, the IBM PowerKVM CI system is
now also testing against Keystone patches.

We are happy to also be testing keystone patches on PowerKVM, and will be
adding other projects soon.

Regards,
Kurt Taylor (krtaylor)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mark McClain
You’re right that the Mako dependency is really a side effect of Alembic. We
used jinja for templating radvd because it is used by projects within the
OpenStack ecosystem and also used in VPNaaS.

mark


 On Feb 2, 2015, at 3:13 PM, Sean M. Collins s...@coreitpro.com wrote:
 
 Sorry, I should have done a bit more grepping before I sent the e-mail,
 since it appears that Mako is being used by alembic.
 
 http://alembic.readthedocs.org/en/latest/tutorial.html
 
 So, should we switch the radvd templating over to Mako instead?
 
 -- 
 Sean M. Collins
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Morgan Fainberg
I assumed [my mistake] this was not commenting/reporting, just running against
Keystone. I would expect a more specific request to comment, rather than a
“hey, we’re doing this,” if commenting is what is desired.

Please come to our weekly meeting if you’re planning on commenting/scoring on 
keystone patches.

-- 
Morgan Fainberg

On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info) wrote:

On 02/02/2015 02:16 PM, Morgan Fainberg wrote:  
 Thank you for the heads up.  
  
 —Morgan  
  
 --  
 Morgan Fainberg  
  
 On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) 
 wrote:  
  
 Just FYI, in case there was any questions,  
  
 In addition to testing and reporting on Nova, the IBM PowerKVM CI system is 
 now also testing against Keystone patches.  
  
 We are happy to also be testing keystone patches on PowerKVM, and will be 
 adding other projects soon.  
  
 Regards,  
 Kurt Taylor (krtaylor)  
 __  
 OpenStack Development Mailing List (not for usage questions)  
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
  
  
  
 __  
 OpenStack Development Mailing List (not for usage questions)  
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
  
Requesting permission to comment on a new repo is best done at the  
weekly meeting of the project in question, not the mailing list.  

Thanks,  
Anita.  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean M. Collins
Sorry, I should have done a bit more grepping before I sent the e-mail,
since it appears that Mako is being used by alembic.

http://alembic.readthedocs.org/en/latest/tutorial.html

So, should we switch the radvd templating over to Mako instead?

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Anita Kuno
On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
 Thank you for the heads up. 
 
 —Morgan
 
 -- 
 Morgan Fainberg
 
 On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) 
 wrote:
 
 Just FYI, in case there was any questions,
 
 In addition to testing and reporting on Nova, the IBM PowerKVM CI system is 
 now also testing against Keystone patches.
 
 We are happy to also be testing keystone patches on PowerKVM, and will be 
 adding other projects soon.
 
 Regards,
 Kurt Taylor (krtaylor)
 __  
 OpenStack Development Mailing List (not for usage questions)  
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Requesting permission to comment on a new repo is best done at the
weekly meeting of the project in question, not the mailing list.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Jake Kugel
OK, thanks Sebastien and Valeriy.

Jake


Sebastien Han sebastien@enovance.com wrote on 02/02/2015 06:51:10 
AM:

 From: Sebastien Han sebastien@enovance.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 02/02/2015 06:54 AM
 Subject: Re: [openstack-dev] [Manila] Manila driver for CephFS
 
 I believe this will start somewhere after Kilo.
 
  On 28 Jan 2015, at 22:59, Valeriy Ponomaryov 
 vponomar...@mirantis.com wrote:
  
  Hello Jake,
  
  The main thing to mention is that the blueprint has no assignee. It was
  also created a long time ago, with no activity since. I have not heard of
  any intentions to work on it, nor seen even any drafts.
  
  So, I guess, it is open for volunteers.
  
  Regards,
  Valeriy Ponomaryov
  
  On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel jku...@us.ibm.com 
wrote:
  Hi,
  
  I see there is a blueprint for a Manila driver for CephFS here [1]. It
  looks like it was opened back in 2013 but is still in the Drafting state.
  Does anyone know more about its status?
  
  Thank you,
  -Jake
  
  [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
  
  
  
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
__
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Cheers.
 
 Sébastien Han
 Cloud Architect
 
 Always give 100%. Unless you're giving blood.
 
 Phone: +33 (0)1 49 70 99 72
 Mail: sebastien@enovance.com
 Address : 11 bis, rue Roquépine - 75008 Paris
 Web : www.enovance.com - Twitter : @enovance
 
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Joe Gordon
On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 I think the simple answer is yes. We (keystone) should emit
 notifications. And yes other projects should listen.

 The only thing really in discussion should be:

 1: soft delete or hard delete? Does the service mark it as orphaned, or
 just delete (leave this to nova, cinder, etc to discuss)

 2: how to cleanup when an event is missed (e.g rabbit bus goes out to
 lunch).



I disagree slightly: I don't think projects should directly listen to the
Keystone notifications; I would rather have the API be something from a
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code,
and keystone can easily modify/improve/upgrade the notification mechanism.
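For concreteness, the consumer side of such a mechanism could look roughly
like the sketch below. The "identity.project.deleted" event type and the
"resource_info" payload field are what keystone's basic notifications carry
today; the endpoint shape follows oslo.messaging's notification-listener
convention, but the callback wiring here is illustrative, not an existing
keystonemiddleware API.

```python
# Illustrative sketch only: an oslo.messaging-style notification endpoint
# that reacts to keystone project deletions by invoking a cleanup callback
# supplied by the consuming service (e.g. a hypothetical nova tenant_cleanup).
class ProjectDeletedEndpoint(object):
    def __init__(self, callback):
        self._callback = callback

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # keystone emits "identity.project.deleted" when a project goes away;
        # the deleted project's id rides in the payload's resource_info field
        if event_type == "identity.project.deleted":
            self._callback(payload.get("resource_info"))


# Exercising the endpoint directly (no message bus involved):
deleted = []
endpoint = ProjectDeletedEndpoint(deleted.append)
endpoint.info({}, "identity.host1", "identity.project.deleted",
              {"resource_info": "abc123"}, {})
print(deleted)  # ['abc123']
```

In a real deployment this endpoint would be handed to a notification
listener subscribed to keystone's notification topic, hiding the bus details
behind whatever registration API the keystone-owned library exposes.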




 --Morgan

 Sent via mobile

  On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org wrote:
 
  On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
  This came up in the operators mailing list back in June [1] but given
 the
  subject probably didn't get much attention.
 
  Basically there is a really old bug [2] from Grizzly that is still a
 problem
  and affects multiple projects.  A tenant can be deleted in Keystone even
  though other resources in other projects are under that project, and
 those
  resources aren't cleaned up.
 
  I agree this probably can be a major pain point for users. We've had to
 work around it
  in tempest by creating things like:
 
 
 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
  and
 
 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
 
  to ensure we aren't dangling resources after a run. But, this doesn't
 work in
  all cases either. (like with tenant isolation enabled)
 
  I also know there is a stackforge project that is attempting something
 similar
  here:
 
  http://git.openstack.org/cgit/stackforge/ospurge/
 
  It would be much nicer if the burden for doing this was taken off users
 and this
  was just handled cleanly under the covers.
 
 
  Keystone implemented event notifications back in Havana [3] but the
 other
  projects aren't listening on them to know when a project has been
 deleted
  and act accordingly.
 
  The bug has several people saying we should talk about this at the
 summit
  for several summits, but I can't find any discussion or summit sessions
  related back to the bug.
 
  Given this is an operations and cross-project issue, I'd like to bring
 it up
  again for the Vancouver summit if there is still interest (which I'm
  assuming there is from operators).
 
  I'd definitely support having a cross-project session on this.
 
 
  There is a blueprint specifically for the tenant deletion case but it's
  targeted at only Horizon [4].
 
  Is anyone still working on this? Is there sufficient interest in a
  cross-project session at the L summit?
 
  Thinking out loud, even if nova doesn't listen to events from keystone,
 we
  could at least have a periodic task that looks for instances where the
  tenant no longer exists in keystone and then take some action (log a
  warning, shutdown/archive/, reap, etc).
 
  There is also a spec for L to transfer instance ownership [5] which
 could
  maybe come into play, but I wouldn't depend on it.
 
  [1]
 http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
  [2] https://bugs.launchpad.net/nova/+bug/967832
  [3] https://blueprints.launchpad.net/keystone/+spec/notifications
  [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
  [5] https://review.openstack.org/#/c/105367/
 
  -Matt Treinish
  ___
  OpenStack-operators mailing list
  openstack-operat...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-02 Thread Morgan Fainberg
On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com) wrote:


On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg morgan.fainb...@gmail.com 
wrote:
I think the simple answer is yes. We (keystone) should emit notifications. 
And yes other projects should listen.

The only thing really in discussion should be:

1: soft delete or hard delete? Does the service mark it as orphaned, or just 
delete (leave this to nova, cinder, etc to discuss)

2: how to cleanup when an event is missed (e.g rabbit bus goes out to lunch).


I disagree slightly: I don't think projects should directly listen to the
Keystone notifications; I would rather have the API be something from a
keystone-owned library, say keystonemiddleware. So something like this:

from keystonemiddleware import janitor

keystone_janitor = janitor.Janitor()
keystone_janitor.register_callback(nova.tenant_cleanup)

keystone_janitor.spawn_greenthread()

That way each project doesn't have to include a lot of boilerplate code, and 
keystone can easily modify/improve/upgrade the notification mechanism.


Sure. I’d treat where that actually lives as an implementation detail. I’d be
fine with that being part of the Keystone Middleware package (probably
something separate from auth_token).

—Morgan

 

--Morgan

Sent via mobile

 On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org wrote:

 On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
 This came up in the operators mailing list back in June [1] but given the
 subject probably didn't get much attention.

 Basically there is a really old bug [2] from Grizzly that is still a problem
 and affects multiple projects.  A tenant can be deleted in Keystone even
 though other resources in other projects are under that project, and those
 resources aren't cleaned up.

 I agree this probably can be a major pain point for users. We've had to work 
 around it
 in tempest by creating things like:

 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
 and
 http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py

 to ensure we aren't dangling resources after a run. But, this doesn't work in
 all cases either. (like with tenant isolation enabled)

 I also know there is a stackforge project that is attempting something similar
 here:

 http://git.openstack.org/cgit/stackforge/ospurge/

 It would be much nicer if the burden for doing this was taken off users and 
 this
 was just handled cleanly under the covers.


 Keystone implemented event notifications back in Havana [3] but the other
 projects aren't listening on them to know when a project has been deleted
 and act accordingly.

 The bug has several people saying we should talk about this at the summit
 for several summits, but I can't find any discussion or summit sessions
 related back to the bug.

 Given this is an operations and cross-project issue, I'd like to bring it up
 again for the Vancouver summit if there is still interest (which I'm
 assuming there is from operators).

 I'd definitely support having a cross-project session on this.


 There is a blueprint specifically for the tenant deletion case but it's
 targeted at only Horizon [4].

 Is anyone still working on this? Is there sufficient interest in a
 cross-project session at the L summit?

 Thinking out loud, even if nova doesn't listen to events from keystone, we
 could at least have a periodic task that looks for instances where the
 tenant no longer exists in keystone and then take some action (log a
 warning, shutdown/archive/, reap, etc).

 There is also a spec for L to transfer instance ownership [5] which could
 maybe come into play, but I wouldn't depend on it.

 [1] 
 http://lists.openstack.org/pipermail/openstack-operators/2014-June/004559.html
 [2] https://bugs.launchpad.net/nova/+bug/967832
 [3] https://blueprints.launchpad.net/keystone/+spec/notifications
 [4] https://blueprints.launchpad.net/horizon/+spec/tenant-deletion
 [5] https://review.openstack.org/#/c/105367/

 -Matt Treinish
 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
Thanks Morgan, that's why I wanted to email. We will gladly come to a
meeting and formally request to comment, and will turn off commenting on
Keystone until then.

Thanks,
Kurt Taylor (krtaylor)

On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 I assumed [my mistake] this was not commenting/reporting, just running
 against Keystone. I expect a more specific request to comment rather than a
 “hey we’re doing this” if commenting is what is desired.

 Please come to our weekly meeting if you’re planning on commenting/scoring
 on keystone patches.

 --
 Morgan Fainberg

 On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info)
 wrote:

 On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
  Thank you for the heads up.
 
  —Morgan
 
  --
  Morgan Fainberg
 
  On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com)
 wrote:
 
  Just FYI, in case there was any questions,
 
  In addition to testing and reporting on Nova, the IBM PowerKVM CI system
 is now also testing against Keystone patches.
 
  We are happy to also be testing keystone patches on PowerKVM, and will
 be adding other projects soon.
 
  Regards,
  Kurt Taylor (krtaylor)
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Requesting permission to comment on a new repo is best done at the
 weekly meeting of the project in question, not the mailing list.

 Thanks,
 Anita.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Matt Riedemann



On 2/2/2015 3:52 PM, Kurt Taylor wrote:

Thanks Morgan, That's why I wanted to email. We will gladly come to a
meeting and formally request to comment and will turn off commenting on
Keystone until then.

Thanks,
Kurt Taylor (krtaylor)

On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg
morgan.fainb...@gmail.com mailto:morgan.fainb...@gmail.com wrote:

I assumed [my mistake] this was not commenting/reporting, just
running against Keystone. I expect a more specific request to
comment rather than a “hey we’re doing this” if commenting is what
is desired.

Please come to our weekly meeting if you’re planning on
commenting/scoring on keystone patches.

--
Morgan Fainberg

On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info
mailto:ante...@anteaya.info) wrote:


On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
 Thank you for the heads up.

 —Morgan

 --
 Morgan Fainberg

 On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com 
mailto:kurt.r.tay...@gmail.com) wrote:

 Just FYI, in case there was any questions,

 In addition to testing and reporting on Nova, the IBM PowerKVM CI system 
is now also testing against Keystone patches.

 We are happy to also be testing keystone patches on PowerKVM, and will be 
adding other projects soon.

 Regards,
 Kurt Taylor (krtaylor)
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Requesting permission to comment on a new repo is best done at the
weekly meeting of the project in question, not the mailing list.

Thanks,
Anita.

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Sorry for being naive, but what in Keystone is arch-specific such that 
it could be different on ppc64 vs x86_64?  Or is there more to PowerKVM 
CI than the name implies?


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with instance consoles and novnc

2015-02-02 Thread Chris Friesen

On 02/02/2015 01:27 PM, Mathieu Gagné wrote:

On 2015-02-02 11:36 AM, Chris Friesen wrote:

On 01/30/2015 06:26 AM, Jesse Pretorius wrote:


Have you tried manually updating the NoVNC and websockify files to later
versions from source?


We were already using a fairly recent version of websockify, but it
turns out that we needed to upversion the novnc package.



Which version are you using?


Pretty sure we're on 0.5.1 now.

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] unable to reproduce bug 1317363‏

2015-02-02 Thread bharath thiruveedula
Yeah sure

From: blak...@gmail.com
Date: Mon, 2 Feb 2015 11:09:08 -0800
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev][Neutron] unable to reproduce bug 1317363‏

The mailing list isn't a great place to discuss reproducing a bug. Post this 
comment on the bug report instead of the mailing list. That way the person who 
reported it and the ones who triaged it can see this information and respond. 
They might not be watching the dev mailing list as closely.


On Mon, Feb 2, 2015 at 10:17 AM, bharath thiruveedula bharath_...@hotmail.com 
wrote:



Hi,
I am Bharath Thiruveedula. I am new to OpenStack Neutron and networking. I am
trying to solve bug 1317363, but I am unable to reproduce it. The steps I
followed:

1) Created a network with external=True
2) Created a subnet for the above network with CIDR=172.24.4.0/24 and
   gateway-ip=172.24.4.5
3) Created the router
4) Set the gateway interface on the router
5) Tried to change the subnet gateway-ip, but got this error:
   Gateway ip 172.24.4.7 conflicts with allocation pool 172.24.4.6-172.24.4.254

I used this command:
   neutron subnet-update ff9fe828-9ca2-42c4-9997-3743d8fc0b0c --gateway-ip 172.24.4.7
Can you please help me with this issue?
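[Editorially, the rejection itself is easy to see in isolation: the requested
gateway sits inside the subnet's allocation pool. The snippet below is a
rough stand-in for the overlap check that Neutron's error message describes,
not Neutron's actual code.]

```python
import ipaddress

def gateway_conflicts(gateway, pool_start, pool_end):
    """Return True if the gateway IP falls inside the allocation pool."""
    gw = ipaddress.ip_address(gateway)
    return (ipaddress.ip_address(pool_start) <= gw
            <= ipaddress.ip_address(pool_end))

# 172.24.4.7 lies inside 172.24.4.6-172.24.4.254, hence the rejection;
# the original gateway 172.24.4.5 sits just outside the pool.
print(gateway_conflicts("172.24.4.7", "172.24.4.6", "172.24.4.254"))  # True
print(gateway_conflicts("172.24.4.5", "172.24.4.6", "172.24.4.254"))  # False
```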

-- Bharath Thiruveedula   

__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Christopher Yeoh
On Tue, Feb 3, 2015 at 9:05 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 01/29/2015 12:41 PM, Sean Dague wrote:

 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

 {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }

 That would let us surface more specific errors.

 snip


 Standardization here from the API WG would be really great.


 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information about
 the error?

 Something like:

 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

 That way is completely backwards compatible (since we wouldn't be changing
 response payloads) and we could handle i18n entirely via the HTTP help
 service running on errors.openstack.org.


So I'm +1 to adding the X-OpenStack-Error-Code header, assuming the error
code is unique across OpenStack APIs and has a fixed meaning (we never
change it; we create a new one if a project needs an error code that is
close to an existing one but a bit different).

The X-OpenStack-Error-Help-URI header I'm not sure about. We can't guarantee
that apps will have access to errors.openstack.org; is there an assumption
here that we'd build/ship an error translation service?
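[Either variant is cheap to consume client-side. A minimal sketch, assuming
the header names from Jay's example, which are proposals and not an existing
API:]

```python
# Sketch: client-side handling of the proposed (hypothetical) OpenStack
# error headers. Falls back to the bare HTTP status for deployments that
# do not emit them, which is what makes the scheme backwards compatible.
def classify_error(status_code, headers):
    code = headers.get("X-OpenStack-Error-Code")
    help_uri = headers.get("X-OpenStack-Error-Help-URI")
    return {
        "http_status": status_code,
        "error_code": int(code) if code is not None else None,
        "help": help_uri,  # may be unreachable; treat as advisory only
    }

resp = classify_error(409, {
    "X-OpenStack-Error-Code": "1234",
    "X-OpenStack-Error-Help-URI": "http://errors.openstack.org/1234",
})
print(resp["error_code"])  # 1234
```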

Regards,

Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence Phase 1 implementation plan

2015-02-02 Thread Zane Bitter

On 26/01/15 19:04, Angus Salkeld wrote:

On Sat, Jan 24, 2015 at 7:00 AM, Zane Bitter zbit...@redhat.com
mailto:zbit...@redhat.com wrote:
I'm also prepared to propose specs for all of these _if_ people
think that would be helpful. I see three options here:
  - Propose 18 fairly minimal specs (maybe in a single review?)


This sounds fine, but if possible group them a bit; 18 sounds like a lot,
and many of these look like small jobs. I am also open to using bugs for
smaller items. Basically this is just red tape, so whatever is the least
effort and makes it easiest to divide up the work.


OK, here are the specs:

https://review.openstack.org/#/q/status:open+project:openstack/heat-specs+branch:master+topic:convergence,n,z

Let's get reviewing (and implementing!) :)

cheers,
Zane.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joe Gordon
This broke grenade on stable/juno, here is the fix.

https://review.openstack.org/#/c/152333/

On Mon, Feb 2, 2015 at 10:56 AM, Joshua Harlow harlo...@outlook.com wrote:

 The Oslo team is pleased to announce the release of:

 TaskFlow 0.7.0: taskflow structured state management library.

 For more details, please see the git log history below and:

 http://launchpad.net/taskflow/+milestone/0.7.0

 Please report issues through launchpad:

 http://bugs.launchpad.net/taskflow/

 Notable changes
 

 * Using non-deprecated oslo.utils and oslo.serialization imports.
 * Added note(s) about publicly consumable types into docs.
 * Increase robustness of WBE producer/consumers by supporting and using
   the kombu provided feature to retry/ensure on transient/recoverable
   failures (such as timeouts).
 * Move the jobboard/job bases to a jobboard/base module and move the
   persistence base to the parent directory (this standardizes how all
   pluggable types have a similar base module in a similar location,
   making the layout of taskflow's codebase easier to understand/follow).
 * Add executor statistics: using taskflow.futures executors now provides
   a useful way to know the following about submitted work:

   --------------------------------------------------------------------------
   | Statistic | What it is                                                 |
   --------------------------------------------------------------------------
   | failures  | How many submissions ended up raising exceptions           |
   | executed  | How many submissions were executed (failed or not)         |
   | runtime   | Total runtime of all submissions executed (failed or not)  |
   | cancelled | How many submissions were cancelled before executing       |
   --------------------------------------------------------------------------
 * The taskflow logger module does not provide a logging adapter [bug]
 * Use monotonic time when/if available for stopwatches (py3.3+ natively
   supports this) and other time.time usage (where the usage of time.time
   only cares about the duration between two points in time).
 * Make all/most usage of type errors follow a similar pattern (exception
   cleanup).
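[The statistics feature above can be pictured with a stdlib stand-in. The
wrapper below is illustrative only and is not TaskFlow's implementation; it
tracks the same failures/executed/runtime counters the notes list (cancelled
is omitted for brevity).]

```python
# Illustrative stand-in (not TaskFlow's code): a ThreadPoolExecutor wrapper
# tracking the "failures", "executed" and "runtime" statistics described
# above. A real implementation would guard the counters with a lock; a
# couple of workers is fine for a sketch.
import time
from concurrent.futures import ThreadPoolExecutor

class CountingExecutor(object):
    def __init__(self, workers=2):
        self._ex = ThreadPoolExecutor(max_workers=workers)
        self.failures = 0
        self.executed = 0
        self.runtime = 0.0

    def submit(self, fn, *args):
        def wrapper():
            # monotonic time, as the release notes recommend for durations
            start = time.monotonic()
            try:
                return fn(*args)
            except Exception:
                self.failures += 1
                raise
            finally:
                self.executed += 1
                self.runtime += time.monotonic() - start
        return self._ex.submit(wrapper)

    def shutdown(self):
        self._ex.shutdown()
```

Submitting one succeeding and one failing callable, then waiting on both
futures, would leave executed == 2 and failures == 1.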

 Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
 ---

 NOTE: Skipping requirement commits...

 19f9674 Abstract out the worker finding from the WBE engine
 99b92ae Add and use a nicer kombu message formatter
 df6fb03 Remove duplicated 'do' in types documentation
 43d70eb Use the class defined constant instead of raw strings
 344b3f6 Use kombu socket.timeout alias instead of socket.timeout
 d5128cf Stopwatch usage cleanup/tweak
 2e43b67 Add note about publicly consumable types
 e9226ca Add docstring to wbe proxy to denote not for public use
 80888c6 Use monotonic time when/if available
 7fe2945 Link WBE docs together better (especially around arguments)
 f3a1dcb Emit a warning when no routing keys provided on publish()
 802bce9 Center SVG state diagrams
 97797ab Use importutils.try_import for optional eventlet imports
 84d44fa Shrink the WBE request transition SVG image size
 ca82e20 Add a thread bundle helper utility + tests
 e417914 Make all/most usage of type errors follow a similar pattern
 2f04395 Leave use-cases out of WBE developer documentation
 e3e2950 Allow just specifying 'workers' for WBE entrypoint
 66fc2df Add comments to runner state machine reaction functions
 35745c9 Fix coverage environment
 fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
 0672467 WBE documentation tweaks/adjustments
 55ad11f Add a WBE request state diagram + explanation
 45ef595 Tidy up the WBE cache (now WBE types) module
 1469552 Fix leftover/remaining 'oslo.utils' usage
 93d73b8 Show the failure discarded (and the future intention)
 5773fb0 Use a class provided logger before falling back to module
 addc286 Use explicit WBE object arguments (instead of kwargs)
 342c59e Fix persistence doc inheritance hierarchy
 072210a The gathered runtime is for failures/not failures
 410efa7 add clarification re parallel engine
 cb27080 Increase robustness of WBE producer/consumers
 bb38457 Move implementation(s) to there own sections
 f14ee9e Move the jobboard/job bases to a jobboard/base module
 ac5345e Have the serial task executor shutdown/restart its executor
 426484f Mirror the task executor methods in the retry action
 d92c226 Add back a 'eventlet_utils' helper utility module
 1ed0f22 Use constants for runner state machine event names
 bfc1136 Remove 'SaveOrderTask' and test state in class variables
 22eef96 Provide the stopwatch elapsed method a maximum
 3968508 Fix unused and conflicting variables
 2280f9a Switch to using 'oslo_serialization' vs 'oslo.serialization'
 d748db9 Switch to using 'oslo_utils' vs 'oslo.utils'
 9c15eff Add executor statistics
 bf2f205 Use oslo.utils reflection for class name
 9fe99ba Add split time capturing to the stop watch
 42a665d Use platform neutral line separator(s)
 eb536da Create and use a 

[openstack-dev] UpgradeImpact: Replacing swift_enable_net with swift_store_endpoint

2015-02-02 Thread Jesse Cook
+openstack-operators

On 2/2/15, 12:24 PM, Jesse Cook jesse.c...@rackspace.com wrote:

Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: swift_enable_snet. The default value of
  swift_enable_snet was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option swift_store_endpoint. The default value of
  swift_store_endpoint is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: swift_store_endpoint = https://www.example.com/v1/not_a_container

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520
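
For operators updating their deployment, the new option would be set in glance-api.conf roughly as follows (a sketch based only on the description above; check the merged review for the final option section and semantics):

```ini
[DEFAULT]
# If set, this endpoint is used instead of the storage URL returned
# in the auth response (e.g. to route traffic over an internal network).
swift_store_endpoint = https://www.example.com/v1/not_a_container

# Leave the option unset (default: None) to keep using the storage URL
# from the auth response.
# swift_store_endpoint =
```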

If you are using swift_enable_snet (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.

Thanks,

Jesse
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Jay Pipes

On 01/29/2015 12:41 PM, Sean Dague wrote:

Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
  "fault": "ComputeFeatureUnsupportedOnInstanceType",
  "message": "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.
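
To illustrate why a machine-readable fault name helps, a hypothetical client could dispatch on the stable fault string rather than pattern-matching the human-readable message (the payload shape follows the proposal above; the handler names are invented):

```python
import json

# Hypothetical error payload of the kind proposed above.
RESPONSE_BODY = json.dumps({
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this kind of "
               "instance type.",
})


def handle_error(body):
    """Dispatch on the stable fault name, not the free-form message."""
    error = json.loads(body)
    fault = error.get("fault", "Unknown")
    if fault == "ComputeFeatureUnsupportedOnInstanceType":
        # The client can do something smarter than blindly retrying.
        return "retry-with-different-flavor"
    # Unknown faults fall back to a generic failure path.
    return "fail"


print(handle_error(RESPONSE_BODY))  # retry-with-different-flavor
```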

snip


Standardization here from the API WG would be really great.


What about having a separate HTTP header that indicates the OpenStack 
Error Code, along with a generated URI for finding more information 
about the error?


Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be 
changing response payloads) and we could handle i18n entirely via the 
HTTP help service running on errors.openstack.org.
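
The header proposal can be sketched as a tiny response hook (the two header names come from the proposal itself; everything else, including the helper function, is hypothetical — real middleware would operate on the actual WSGI response object):

```python
ERROR_HELP_BASE = "http://errors.openstack.org"  # from the proposal


def add_error_headers(headers, error_code):
    """Attach the proposed OpenStack error headers to a response.

    `headers` is a plain dict of response headers for illustration.
    """
    headers["X-OpenStack-Error-Code"] = str(error_code)
    headers["X-OpenStack-Error-Help-URI"] = "%s/%d" % (
        ERROR_HELP_BASE, error_code)
    return headers


resp_headers = add_error_headers({"Content-Type": "application/json"}, 1234)
print(resp_headers["X-OpenStack-Error-Help-URI"])
# http://errors.openstack.org/1234
```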


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/02/2015 05:35 PM, Jay Pipes wrote:
 On 01/29/2015 12:41 PM, Sean Dague wrote:
 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

 {
   "fault": "ComputeFeatureUnsupportedOnInstanceType",
   "message": "This compute feature is not supported on this kind of
 instance type. If you need this feature please use a different instance
 type. See your cloud provider for options."
 }

 That would let us surface more specific errors.
 snip

 Standardization here from the API WG would be really great.
 
 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information
 about the error?
 
 Something like:
 
 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
 
 That way is completely backwards compatible (since we wouldn't be
 changing response payloads) and we could handle i18n entirely via the
 HTTP help service running on errors.openstack.org.

That could definitely be implemented in the short term, but if we're
talking about API WG long term evolution, I'm not sure why a standard
error payload body wouldn't be better.

Then if we are going to have global codes that are just numbers, we'll
also need a global naming registry. Which isn't a bad thing, just
someone will need to allocate the numbers in a separate global repo
across all projects.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mike Bayer


Sean Dague s...@dague.net wrote:

 On 02/02/2015 06:06 PM, Mike Bayer wrote:
 Sean Dague s...@dague.net wrote:
 
 On 02/02/2015 04:20 PM, Mark McClain wrote:
 You’re right that the Mako dependency is really a side effect from 
 Alembic.  We used jinja for templating radvd because it is used by the 
 projects within the OpenStack ecosystem and also used in VPNaaS.
 
 Jinja is far more used in other parts of OpenStack from my recollection;
 I think that's probably the preferred thing to consolidate on.
 
 Alembic being different is fine, it's a dependent library.
 
 
 there’s no reason not to have both installed. Tempita also gets 
 installed with a typical openstack setup.
 
 that said, if you use Mako, you get the creator of Mako on board to help as 
 he already works for openstack, for free!
 
 Sure, but the point is that it would be better to have the OpenStack
 code be consistent in this regard, as it makes for a more smooth
 environment.

stick with Jinja if that’s what projects are already using.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PowerKVM CI reporting

2015-02-02 Thread Kurt Taylor
On Mon, Feb 2, 2015 at 4:07 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 2/2/2015 3:52 PM, Kurt Taylor wrote:

 Thanks Morgan, That's why I wanted to email. We will gladly come to a
 meeting and formally request to comment and will turn off commenting on
 Keystone until then.

 Thanks,
 Kurt Taylor (krtaylor)

 On Mon, Feb 2, 2015 at 3:43 PM, Morgan Fainberg
 morgan.fainb...@gmail.com wrote:

 I assumed [my mistake] this was not commenting/reporting, just
 running against Keystone. I expect a more specific request to
 comment rather than a “hey we’re doing this” if commenting is what
 is desired.

 Please come to our weekly meeting if you’re planning on
 commenting/scoring on keystone patches.

 --
 Morgan Fainberg

 On February 2, 2015 at 1:41:08 PM, Anita Kuno (ante...@anteaya.info) wrote:

  On 02/02/2015 02:16 PM, Morgan Fainberg wrote:
  Thank you for the heads up.
 
  —Morgan
 
  --
  Morgan Fainberg
 
  On February 2, 2015 at 1:15:49 PM, Kurt Taylor (kurt.r.tay...@gmail.com) wrote:
 
  Just FYI, in case there was any questions,
 
  In addition to testing and reporting on Nova, the IBM PowerKVM CI
 system is now also testing against Keystone patches.
 
  We are happy to also be testing keystone patches on PowerKVM, and
 will be adding other projects soon.
 
  Regards,
  Kurt Taylor (krtaylor)


 Sorry for being naive, but what in Keystone is arch-specific such that it
 could be different on ppc64 vs x86_64?  Or is there more to PowerKVM CI
 than the name implies?


No, it's a good question. We plan on testing many different repos or
components in L1 and beyond. It is a quality statement really, to assure
anyone wanting to run OpenStack on a different platform that some set of
tests against some set of core components had been run.

We were starting with the L1 components with Nova first (as you know) and
adding from there. However, I of all people should know better than to turn
on comments for this new component without discussing it at the component's
meeting. I'm on the agenda for Keystone, please feel free to attend and
discuss.  https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

Thanks,
Kurt Taylor (krtaylor)

-- 

 Thanks,

 Matt Riedemann



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Ryan Moats
+1 to that idea...

Jay Pipes jaypi...@gmail.com wrote on 02/02/2015 04:35:36 PM:


 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information
 about the error?

 Something like:

 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

 That way is completely backwards compatible (since we wouldn't be
 changing response payloads) and we could handle i18n entirely via the
 HTTP help service running on errors.openstack.org.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Mike Bayer


Sean Dague s...@dague.net wrote:

 On 02/02/2015 04:20 PM, Mark McClain wrote:
 You’re right that the Mako dependency is really a side effect from Alembic.  
 We used jinja for templating radvd because it is used by the projects within 
 the OpenStack ecosystem and also used in VPNaaS.
 
 Jinja is far more used in other parts of OpenStack from my recollection;
 I think that's probably the preferred thing to consolidate on.
 
 Alembic being different is fine, it's a dependent library.


there’s no reason not to have both installed. Tempita also gets installed 
with a typical openstack setup.

that said, if you use Mako, you get the creator of Mako on board to help as he 
already works for openstack, for free!




 
   -Sean
 
 mark
 
 
 On Feb 2, 2015, at 3:13 PM, Sean M. Collins s...@coreitpro.com wrote:
 
 Sorry, I should have done a bit more grepping before I sent the e-mail,
 since it appears that Mako is being used by alembic.
 
 http://alembic.readthedocs.org/en/latest/tutorial.html
 
 So, should we switch the radvd templating over to Mako instead?
 
 -- 
 Sean M. Collins
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 -- 
 Sean Dague
 http://dague.net
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators] UpgradeImpact: Replacing swift_enable_net with swift_store_endpoint

2015-02-02 Thread Jesse Cook
+ openstack-operators

On 2/2/15, 12:24 PM, Jesse Cook jesse.c...@rackspace.com wrote:

Configuration options will change (https://review.openstack.org/#/c/146972/4):

- Removed config option: swift_enable_snet. The default value of
  swift_enable_snet was False [1]. The comments indicated not to change this
  default value unless you are Rackspace [2].

- Added config option swift_store_endpoint. The default value of
  swift_store_endpoint is None, in which case the storage url from the auth
  response will be used. If set, the configured endpoint will be used. Example
  values: swift_store_endpoint = https://www.example.com/v1/not_a_container

1. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L525
2. 
https://github.com/openstack/glance/blob/fd5a55c7f386a9d9441d5f1291ff6a92f7e6cc1b/etc/glance-api.conf#L520

If you are using swift_enable_snet (i.e. You changed the default config from 
False to True in your deployment) and you are not Rackspace, please respond to 
this thread. Note, this is very unlikely as it is a Rackspace only option and 
documented as such.

Thanks,

Jesse
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Qin Zhao
Agree with Sean. A short error name in the response body would be better for
applications that consume OpenStack. To my understanding, the
X-OpenStack-Error-Help-URI proposed by jpipes would be a URI pointing to an
error resolution method. Usually, a consumer application needn't load its
content.
On Feb 3, 2015 9:28 AM, Sean Dague s...@dague.net wrote:

 On 02/02/2015 05:35 PM, Jay Pipes wrote:
  On 01/29/2015 12:41 PM, Sean Dague wrote:
  Correct. This actually came up at the Nova mid cycle in a side
  conversation with Ironic and Neutron folks.
 
  HTTP error codes are not sufficiently granular to describe what happens
  when a REST service goes wrong, especially if it goes wrong in a way
  that would let the client do something other than blindly try the same
  request, or fail.
 
  Having a standard json error payload would be really nice.
 
  {
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this kind of
  instance type. If you need this feature please use a different instance
  type. See your cloud provider for options."
  }
 
  That would let us surface more specific errors.
  snip
 
  Standardization here from the API WG would be really great.
 
  What about having a separate HTTP header that indicates the OpenStack
  Error Code, along with a generated URI for finding more information
  about the error?
 
  Something like:
 
  X-OpenStack-Error-Code: 1234
  X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234
 
  That way is completely backwards compatible (since we wouldn't be
  changing response payloads) and we could handle i18n entirely via the
  HTTP help service running on errors.openstack.org.

 That could definitely be implemented in the short term, but if we're
 talking about API WG long term evolution, I'm not sure why a standard
 error payload body wouldn't be better.

 Then if we are going to have global codes that are just numbers, we'll
 also need a global naming registry. Which isn't a bad thing, just
 someone will need to allocate the numbers in a separate global repo
 across all projects.

 -Sean

 --
 Sean Dague
 http://dague.net


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Ryan Moats
Sigh... hit send too soon and forgot to sign...

+1 to that idea...

Ryan

Jay Pipes jaypi...@gmail.com wrote on 02/02/2015 04:35:36 PM:


 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information
 about the error?

 Something like:

 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

 That way is completely backwards compatible (since we wouldn't be
 changing response payloads) and we could handle i18n entirely via the
 HTTP help service running on errors.openstack.org.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] Use of asynchronous slaves in Nova (was: Deprecating use_slave in Nova)

2015-02-02 Thread Mike Bayer


Matthew Booth mbo...@redhat.com wrote:

 
 Based on my current (and still sketchy) understanding, I think we can
 define 3 classes of database node:
 
 1. Read/write
 2. Synchronous read-only
 3. Asynchronous read-only
 
 and 3 code annotations:
 
 * Writer (must use class 1)
 * Reader (prefer class 2, can use 1)
 * Async reader (prefer class 3, can use 2 or 1)
 
 The use cases for async would presumably be limited. Perhaps certain
 periodic tasks? Would it even be worth it?

Let’s suppose someone runs an openstack setup using a database with async 
replication.

Can openstack even make use of this outside of these periodic tasks, or is it 
the case that a stateless call to openstack (e.g. a web service call) can’t be 
tasked with knowing when it relies upon a previous web service call that may 
not have been synced?

Let’s suppose that an app has a web service call, and within that scope, it 
calls a function that does @writer, and then it calls a function that does 
@reader.   Even that situation, enginefacade could detect that within the new 
@reader call, we see a context being passed that we know was just used in a 
@writer - so even then, we could have the @reader upgrade to @writer if we know 
that reader slaves are async in a certain configuration.

But is that enough?   Or is it the case that a common operation calls upon 
multiple web service calls that are dependent on each other, with no indication 
between them to detect this, therefore all of these calls have to assume “I can 
only read from a slave if it’s synchronous”?

I think we really need to know what deployment styles we are targeting here.  
If most people use galera synchronous, that can be the primary platform, and 
the others simply won’t be able to promise very good utilization of async read 
slaves.

If that all makes sense.  If I read this a week from now I won’t understand 
what I’m talking about.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TaskFlow 0.7.0 released

2015-02-02 Thread Joshua Harlow

Thanks for that!

Much appreciated :-)

Joe Gordon wrote:

This broke grenade on stable/juno, here is the fix.

https://review.openstack.org/#/c/152333/

On Mon, Feb 2, 2015 at 10:56 AM, Joshua Harlow harlo...@outlook.com wrote:

The Oslo team is pleased to announce the release of:

TaskFlow 0.7.0: taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Notable changes


* Using non-deprecated oslo.utils and oslo.serialization imports.
* Added note(s) about publicly consumable types into docs.
* Increase robustness of WBE producer/consumers by supporting and using
   the kombu provided feature to retry/ensure on transient/recoverable
   failures (such as timeouts).
* Move the jobboard/job bases to a jobboard/base module and
   move the persistence base to the parent directory (standardizes how
   all pluggable types now have a similar base module in a similar
   location, making the layout of taskflow's codebase easier to
   understand/follow).
* Add executor statistics; the taskflow.futures executors now track the
   following useful statistics when submissions run through them:

   +-----------+-----------------------------------------------------------+
   | Statistic | What it is                                                |
   +-----------+-----------------------------------------------------------+
   | failures  | How many submissions ended up raising exceptions          |
   | executed  | How many submissions were executed (failed or not)        |
   | runtime   | Total runtime of all submissions executed (failed or not) |
   | cancelled | How many submissions were cancelled before executing      |
   +-----------+-----------------------------------------------------------+
* The taskflow logger module does not provide a logging adapter [bug]
* Use monotonic time when/if available for stopwatches (py3.3+ natively
   supports this) and other time.time usage (where the usage of time.time
   only cares about the duration between two points in time).
* Make all/most usage of type errors follow a similar pattern (exception
   cleanup).

Changes in /homes/harlowja/dev/os/taskflow 0.6.1..0.7.0
---

NOTE: Skipping requirement commits...

19f9674 Abstract out the worker finding from the WBE engine
99b92ae Add and use a nicer kombu message formatter
df6fb03 Remove duplicated 'do' in types documentation
43d70eb Use the class defined constant instead of raw strings
344b3f6 Use kombu socket.timeout alias instead of socket.timeout
d5128cf Stopwatch usage cleanup/tweak
2e43b67 Add note about publicly consumable types
e9226ca Add docstring to wbe proxy to denote not for public use
80888c6 Use monotonic time when/if available
7fe2945 Link WBE docs together better (especially around arguments)
f3a1dcb Emit a warning when no routing keys provided on publish()
802bce9 Center SVG state diagrams
97797ab Use importutils.try_import for optional eventlet imports
84d44fa Shrink the WBE request transition SVG image size
ca82e20 Add a thread bundle helper utility + tests
e417914 Make all/most usage of type errors follow a similar pattern
2f04395 Leave use-cases out of WBE developer documentation
e3e2950 Allow just specifying 'workers' for WBE entrypoint
66fc2df Add comments to runner state machine reaction functions
35745c9 Fix coverage environment
fc9cb88 Use explicit WBE worker object arguments (instead of kwargs)
0672467 WBE documentation tweaks/adjustments
55ad11f Add a WBE request state diagram + explanation
45ef595 Tidy up the WBE cache (now WBE types) module
1469552 Fix leftover/remaining 'oslo.utils' usage
93d73b8 Show the failure discarded (and the future intention)
5773fb0 Use a class provided logger before falling back to module
addc286 Use explicit WBE object arguments (instead of kwargs)
342c59e Fix persistence doc inheritance hierarchy
072210a The gathered runtime is for failures/not failures
410efa7 add clarification re parallel engine
cb27080 Increase robustness of WBE producer/consumers
bb38457 Move implementation(s) to there own sections
f14ee9e Move the jobboard/job bases to a jobboard/base module
ac5345e Have the serial task executor shutdown/restart its executor
426484f Mirror the task executor methods in the retry action
d92c226 Add back a 'eventlet_utils' helper utility module
1ed0f22 Use constants for runner state machine event names
bfc1136 Remove 'SaveOrderTask' and test state in 

[openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-02 Thread Steve Baker
A spec has been raised to add a config option to allow operators to 
choose whether to use the new convergence engine for stack operations. 
For some context you should read the spec first [1]


Rather than doing this, I would like to propose the following:
* Users can (optionally) choose which engine to use by specifying an 
engine parameter on stack-create (choice of classic or convergence)
* Operators can set a config option which determines which engine to use 
if the user makes no explicit choice
* Heat developers will set the default config option from classic to 
convergence when convergence is deemed sufficiently mature


I realize it is not ideal to expose this kind of internal implementation 
detail to the user, but choosing convergence _will_ result in different 
stack behaviour (such as multiple concurrent update operations) so there 
is an argument for giving the user the choice. Given enough supporting 
documentation they can choose whether convergence might be worth trying 
for a given stack (for example, a large stack which receives frequent 
updates)


Operators likely won't feel they have enough knowledge to make the call 
that a heat install should be switched to using all convergence, and 
users will never be able to try it until the operators do (or the 
default switches).
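
The proposed precedence (per-stack user parameter wins; operator config default otherwise) can be sketched as a small resolution function (the option/parameter names are illustrative, taken from the proposal above, not from Heat's code):

```python
def choose_engine(user_choice=None, config_default="classic"):
    """Resolve which Heat engine runs a stack operation.

    user_choice: optional per-stack 'engine' parameter from
    stack-create ('classic' or 'convergence'); config_default: the
    operator-set fallback used when the user makes no explicit choice.
    """
    valid = ("classic", "convergence")
    if user_choice is not None:
        if user_choice not in valid:
            raise ValueError("unknown engine: %s" % user_choice)
        return user_choice
    return config_default


print(choose_engine())                           # classic
print(choose_engine(user_choice="convergence"))  # convergence
```

When developers later flip the shipped default to "convergence", only `config_default` changes; the user-facing parameter keeps working unchanged.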


Finally, there are also some benefits to heat developers. Creating a 
whole new gate job to test convergence-enabled heat will consume its 
share of CI resource. I'm hoping to make it possible for some of our 
functional tests to run against a number of scenarios/environments. 
Being able to run tests under classic and convergence scenarios in one 
test run will be a great help (for performance profiling too).


If there is enough agreement then I'm fine with taking over and updating 
the convergence-config-option spec.


[1] 
https://review.openstack.org/#/c/152301/2/specs/kilo/convergence-config-option.rst


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiple template libraries being used in tree

2015-02-02 Thread Sean Dague
On 02/02/2015 06:06 PM, Mike Bayer wrote:
 
 
 Sean Dague s...@dague.net wrote:
 
 On 02/02/2015 04:20 PM, Mark McClain wrote:
 You’re right that the Mako dependency is really a side effect from Alembic. 
  We used jinja for templating radvd because it is used by the projects within 
 the OpenStack ecosystem and also used in VPNaaS.

 Jinja is far more used in other parts of OpenStack from my recollection;
 I think that's probably the preferred thing to consolidate on.

 Alembic being different is fine, it's a dependent library.
 
 
 there’s no reason not to have both installed. Tempita also gets installed 
 with a typical openstack setup.
 
 that said, if you use Mako, you get the creator of Mako on board to help as 
 he already works for openstack, for free!

Sure, but the point is that it would be better to have the OpenStack
code be consistent in this regard, as it makes for a more smooth
environment.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Brant Knudson
On Mon, Feb 2, 2015 at 4:35 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 01/29/2015 12:41 PM, Sean Dague wrote:

 Correct. This actually came up at the Nova mid cycle in a side
 conversation with Ironic and Neutron folks.

 HTTP error codes are not sufficiently granular to describe what happens
 when a REST service goes wrong, especially if it goes wrong in a way
 that would let the client do something other than blindly try the same
 request, or fail.

 Having a standard json error payload would be really nice.

  {
    "fault": "ComputeFeatureUnsupportedOnInstanceType",
    "message": "This compute feature is not supported on this kind of
  instance type. If you need this feature please use a different instance
  type. See your cloud provider for options."
  }

 That would let us surface more specific errors.

 snip


 Standardization here from the API WG would be really great.


 What about having a separate HTTP header that indicates the OpenStack
 Error Code, along with a generated URI for finding more information about
 the error?

 Something like:

 X-OpenStack-Error-Code: 1234
 X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

 That way is completely backwards compatible (since we wouldn't be changing
 response payloads) and we could handle i18n entirely via the HTTP help
 service running on errors.openstack.org.


Some of the suggested formats for an error document allow for multiple
errors, which would be useful in an input validation case since there may
be multiple fields that are incorrect (missing or wrong format).

One option to keep backwards compatibility is have both formats in the same
object. Keystone currently returns an error document like:

$ curl -X DELETE -H "X-auth-token: $TOKEN" \
  http://localhost:5000/v3/groups/lkdsajlkdsa/users/lkajfdskdsajf
{"error": {"message": "Could not find user: lkajfdskdsajf", "code": 404,
"title": "Not Found"}}

So an enhanced error document could have:

$ curl -X DELETE -H "X-auth-token: $TOKEN" \
  http://localhost:5000/v3/groups/lkdsajlkdsa/users/lkajfdskdsajf
{"error": {"message": "Could not find user: lkajfdskdsajf", "code": 404,
           "title": "Not Found"},
 "errors": [{"message": "Could not find group: lkdsajlkdsa", "id": "groupNotFound"},
            {"message": "Could not find user: lkajfdskdsajf", "id": "userNotFound"}]
}

Then when identity API 4 comes out we drop the deprecated error field.
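A client written against such a dual-format document could prefer the richer `errors` list and fall back to the legacy singular `error` field, so it works against both old and enhanced responses. This is a sketch, not Keystone client code.

```python
import json

def extract_errors(body):
    """Return a list of (error_id, message) pairs from either format."""
    doc = json.loads(body)
    if "errors" in doc:
        # Enhanced format: possibly multiple structured errors.
        return [(e.get("id"), e.get("message")) for e in doc["errors"]]
    # Legacy format: a single error object with no machine-readable id.
    err = doc.get("error", {})
    return [(None, err.get("message"))]

legacy = ('{"error": {"message": "Could not find user: x",'
          ' "code": 404, "title": "Not Found"}}')
print(extract_errors(legacy))
```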

- Brant



 Best,
 -jay


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Manila driver for CephFS

2015-02-02 Thread Sebastien Han
I believe this will start somewhere after Kilo.

 On 28 Jan 2015, at 22:59, Valeriy Ponomaryov vponomar...@mirantis.com wrote:
 
 Hello Jake,
 
 The main thing to mention is that the blueprint has no assignee.
 It was also created a long time ago and has seen no activity since.
 I have not heard of any intentions to work on it, and have not seen
 any drafts, either.
 
 So, I guess, it is open for volunteers.
 
 Regards,
 Valeriy Ponomaryov
 
 On Wed, Jan 28, 2015 at 11:30 PM, Jake Kugel jku...@us.ibm.com wrote:
 Hi,
 
 I see there is a blueprint for a Manila driver for CephFS here [1].  It
 looks like it was opened back in 2013 but still in Drafting state.  Does
 anyone know more status about this one?
 
 Thank you,
 -Jake
 
 [1]  https://blueprints.launchpad.net/manila/+spec/cephfs-driver
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Cheers.

Sébastien Han
Cloud Architect

Always give 100%. Unless you're giving blood.

Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] How to get compute host details

2015-02-02 Thread Kevin Benton
Your VM must be launched on the controller node then. In a multi-node setup
the controller will also act as a compute node unless you have disabled the
n-cpu service. The 'host' attribute is specifically to indicate where a
port is being used. It's not for anything else.
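In driver terms, that looks roughly like this. A real driver subclasses ML2's MechanismDriver from neutron.plugins.ml2.driver_api; the stub context below merely stands in for the object ML2 passes to driver methods, so this sketch runs standalone.

```python
class ExampleMechanismDriver(object):
    """Sketch of an ML2 mechanism driver reading the binding host."""

    def create_port_precommit(self, context):
        # context.host names the compute node that will use the port --
        # the only thing Neutron knows about that node.
        return "port %s bound on host %s" % (context.current["id"],
                                             context.host)

class StubPortContext(object):
    """Stand-in for the PortContext ML2 passes to drivers."""
    host = "compute-2"
    current = {"id": "port-1"}

print(ExampleMechanismDriver().create_port_precommit(StubPortContext()))
```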

On Mon, Feb 2, 2015 at 1:15 AM, Harshada Kakad harshada.ka...@izeltech.com
wrote:

 Thanks Kevin for reply.
 But the 'host' attribute returns the controller hostname, not the compute
 host name. I have a multi-node setup, and I want to know the compute host
 where the VM gets launched.

 On Mon, Feb 2, 2015 at 2:19 PM, Kevin Benton blak...@gmail.com wrote:

 ML2 makes the hostname available in the context it passes to the drivers
 via the 'host' attribute.[1] This is the only thing Neutron knows about the
 compute node using the port.

 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L776

 On Sun, Feb 1, 2015 at 10:11 PM, Harshada Kakad 
 harshada.ka...@izeltech.com wrote:

 Hi All,

 I am developing an ML2 driver and I need compute host details during port
 creation. That is, I have a multi-node setup, and when I launch a VM I
 want to find out, at port creation time, which compute node the VM was
 launched on. Can anyone please help me with this?

 Thanks in Advance.

 --
 Regards,
 Harshada Kakad
 Sr. Software Engineer
 C3/101, Saudamini Complex, Right Bhusari Colony, Paud Road, Pune –
 411013, India
 Mobile-9689187388
 Email-Id : harshada.ka...@izeltech.com
 website : www.izeltech.com

 Disclaimer
 The information contained in this e-mail and any attachment(s) to this
 message are intended for the exclusive use of the addressee(s) and may
 contain proprietary, confidential or privileged information of Izel
 Technologies Pvt. Ltd. If you are not the intended recipient, you are
 notified that any review, use, any form of reproduction, dissemination,
 copying, disclosure, modification, distribution and/or publication of this
 e-mail message, contents or its attachment(s) is strictly prohibited and
 you are requested to notify us the same immediately by e-mail and delete
 this mail immediately. Izel Technologies Pvt. Ltd accepts no liability for
 virus infected e-mail or errors or omissions or consequences which may
 arise as a result of this e-mail transmission.
 End of Disclaimer


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 02:45:53PM +0300, Alexandre Levine wrote:
 Daniel,
 
 On 2/2/15 12:58 PM, Daniel P. Berrange wrote:
 On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:
 Alex,
 
 
 
 Many thanks for the constructive approach. I've added an item to the list 
 for the Ops meetup in March to see who would be interested to help.
 
 
 
 As discussed on the change, it is likely that there would need to be some
 additional Nova APIs added to support the full EC2 semantics. Thus, there
 would need to be support from the Nova team to enable these additional
 functions. Having tables in the EC2 layer which get out of sync with those
 in the Nova layer would be a significant problem in production.
 Adding new APIs to Nova to support an out-of-tree EC2 impl is perfectly
 reasonable. Indeed, if there is data needed by EC2 that Nova doesn't
 provide already, chances are that providing this data would be useful to
 other regular users / client apps too. It just needs someone to submit a
 spec with details of exactly which functionality is missing. It shouldn't
 be hard for Nova cores to support it, given the desire to see the
 out-of-tree EC2 impl take over & the in-tree impl removed.
 
 We'll do the spec shortly.
 
 I think this would merit a good slot in the Vancouver design sessions so we 
 can
 also discuss documentation, migration, packaging, configuration management,
 scaling, HA, etc.
 I'd really strongly encourage the people working on this to submit the
 detailed spec for the new APIs well before the Vancouver design summit.
 Likewise, at least document somewhere the thoughts on upgrade path plans.
 We need to at least discuss & iterate on this a few times online, so that
 we can take advantage of the f2f time for any remaining harder parts of
 the discussion.
 
 We'll look into that once all of the subjects we can think of, or get
 questions about, are covered somewhere in docs or specs. By the way, how
 do you usually run those online discussions? I mean, what is the tooling?

I just mean discussions on this mailing list, or in the gerrit reviews
for the spec and/or patches

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Ryan Brown
On 01/30/2015 06:18 PM, Dean Troyer wrote:
 On Fri, Jan 30, 2015 at 4:57 PM, Everett Toews
 everett.to...@rackspace.com mailto:everett.to...@rackspace.com wrote:
 
 What is the API WG mission statement?
 
 
 It's more of a mantra than a Mission Statement(TM):
 
 Identify existing and future best practices in OpenStack REST APIs to
 enable new and existing projects to evolve and converge.
 

Identify existing and future pragmatic ideals in OpenStack REST APIs to
enable new and existing projects to evolve and converge.

I like it, but I'd like to get pragmatic in there somewhere. Just to
be clear we aren't just looking for pie-in-the-sky ideals, but ones that
can apply now/in the future.

 Tweetable, 126 chars!
 
 Plus, buzzword-bingo-compatible, would score 5 in my old corporate
 buzzwordlist...
 
 dt
 
 (Can you tell my flight has been delayed? ;)
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Dmitriy Shulyak
  But why add another interface when there is one already (the REST API)?

 I'm OK if we decide to use the REST API, but of course there are problems
 we should solve, like versioning, which is much harder to support than
 versioning in core serializers. Also, do you have any ideas how it can be
 implemented?


We need to think about deployment serializers not as part of Nailgun (the
Fuel data inventory), but as part of another layer which uses the Nailgun
API to generate deployment information. Let's take Ansible as an example,
with its dynamic inventory feature [1].
The Nailgun API can be used inside an Ansible dynamic inventory script to
generate the config that will be consumed by Ansible during deployment.

Such approach will have several benefits:
- cleaner interface (ability to use ansible as main interface to control
deployment and all its features)
- deployment configuration will be tightly coupled with deployment code
- no limitation on what sources to use for configuration, and how to
compute additional values from requested data

I want to emphasize that I am not considering Ansible as a solution for
Fuel; it serves only as an example of the architecture.
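The Ansible dynamic inventory idea can be made concrete with a short sketch. The /api/nodes endpoint and the "ip"/"roles" fields below are invented for illustration, not the real Nailgun API.

```python
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2 (current in 2015)

# Hypothetical inventory service endpoint.
NAILGUN = "http://10.20.0.2:8000"

def load_nodes():
    """Fetch node descriptions from the (hypothetical) inventory API."""
    return json.load(urlopen(NAILGUN + "/api/nodes"))

def build_inventory(nodes):
    """Group node IPs by role, in Ansible dynamic-inventory JSON form."""
    inventory = {}
    for node in nodes:
        for role in node.get("roles", []):
            inventory.setdefault(role, {"hosts": []})["hosts"].append(node["ip"])
    return inventory

# Demo with inline sample data instead of a live API call.
sample = [{"ip": "10.20.0.3", "roles": ["controller"]},
          {"ip": "10.20.0.4", "roles": ["compute", "cinder"]}]
print(json.dumps(build_inventory(sample), sort_keys=True))
```

An `ansible` invocation would call such a script with `-i inventory.py`, keeping deployment configuration coupled to deployment code rather than to Nailgun internals.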


 You run some code which gets the information from the API on the master
 node and then sets the information in tasks? Or are you going to run this
 code on OpenStack nodes? As you mentioned in the case of tokens, you
 should get the token right before you really need it, because of the
 expiry problem, but in that case you don't need any serializers; get the
 required token right in the task.


I think all information should be fetched before deployment.



 What is your opinion about serializing additional information in plugin
 code? How can it be done without exposing the DB schema?

 By exposing the data in a more abstract way, the way it's done right now
 for the current deployment logic.


I mean, what if a plugin wants to generate additional data, like
https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

[1] http://docs.ansible.com/intro_dynamic_inventory.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Chris Dent

On Fri, 30 Jan 2015, Everett Toews wrote:


To converge the OpenStack APIs to a consistent and pragmatic RESTful
design by creating guidelines that the projects should follow. The
intent is not to create backwards incompatible changes in existing
APIs, but to have new APIs and future versions of existing APIs
converge.


This is pretty good but I think it leaves unresolved the biggest
question I've had about this process: What's so great about
converging the APIs? If we can narrow or clarify that aspect, good
to go.

The implication with your statement above is that there is some kind
of ideal which maps, at least to some extent, across the rather
diverse set of resources, interactions and transactions that are
present in the OpenStack ecosystem. It may not be your intent but
the above sounds like we want all the APIs to be kinda similar in
feel or when someone is using an OpenStack-related API they'll be
able to share some knowledge between them with regard to how stuff
works.

I'm not sure how realistic^Wuseful that is when we're in an
environment with APIs with such drastically different interactions
as (to just select three) Swift, Nova and Ceilometer.

We've seen this rather clearly in the recent debates about handling
metadata.

Now, there's nothing in what you say above that actually straight
out disagrees with my response, but I think there's got to be some
way we can remove the ambiguity or narrow the focus. The need to
remove ambiguity is why the discussion of having a mission statement
came up.

I think where we want to focus our attention is:

* strict adherence to correct HTTP
* proper use of response status codes
* effective (and correct) use of media types
* some guidance on how to deal with change/versioning
* and _maybe_ a standard for providing actionable error responses
* setting not standards but guidelines for anything else

For most of that there is prior art and/or active conversation going
on outside the OpenStack world which ought to be useful fodder.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Daniel P. Berrange
On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:
 Alex,
 
 
 
 Many thanks for the constructive approach. I've added an item to the list for 
 the Ops meetup in March to see who would be interested to help.
 
 
 
 As discussed on the change, it is likely that there would need to be some
 additional Nova APIs added to support the full EC2 semantics. Thus, there
 would need to be support from the Nova team to enable these additional
 functions. Having tables in the EC2 layer which get out of sync with those
 in the Nova layer would be a significant problem in production.

Adding new APIs to Nova to support an out-of-tree EC2 impl is perfectly
reasonable. Indeed, if there is data needed by EC2 that Nova doesn't
provide already, chances are that providing this data would be useful to
other regular users / client apps too. It just needs someone to submit a
spec with details of exactly which functionality is missing. It shouldn't
be hard for Nova cores to support it, given the desire to see the
out-of-tree EC2 impl take over & the in-tree impl removed.

 I think this would merit a good slot in the Vancouver design sessions so we 
 can
 also discuss documentation, migration, packaging, configuration management,
 scaling, HA, etc.

I'd really strongly encourage the people working on this to submit the
detailed spec for the new APIs well before the Vancouver design summit.
Likewise, at least document somewhere the thoughts on upgrade path plans.
We need to at least discuss & iterate on this a few times online, so that
we can take advantage of the f2f time for any remaining harder parts of
the discussion.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][nova] Cinder backend for ephemeral disks?

2015-02-02 Thread Tobias Engelbert
Hi,
It was not re-proposed for Kilo, as there is foundational work ongoing in
Cinder to create a common library, Brick, that can be used by both Cinder
and Nova. There might be a chance to get it in during L*. It would be nice
to get some people together to work on it.
/Tobi

-Original Message-
From: Michael Still [mailto:mi...@stillhq.com] 
Sent: Monday, February 02, 2015 12:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder][nova] Cinder backend for ephemeral disks?

It looks like this was never re-proposed for Kilo. I am open to it being 
proposed for L* when that release opens for specs soon, but we need a developer 
to be advocating for it.

Michael

On Sun, Feb 1, 2015 at 5:22 PM, Adam Lawson alaw...@aqorn.com wrote:
 Question: it looks like this spec was abandoned; it's hard to tell if it
 is being addressed elsewhere. A good idea that received a -2 and was then
 ultimately abandoned due to the Juno freeze, I think.

 https://blueprints.launchpad.net/nova/+spec/nova-ephemeral-cinder


 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 08:24:20AM +1300, Robert Collins wrote:
 On 31 January 2015 at 05:47, Daniel P. Berrange berra...@redhat.com wrote:
  In working on a recent Nova migration bug
 
https://bugs.launchpad.net/nova/+bug/1414065
 
  I had cause to refactor the way the nova libvirt driver monitors live
  migration completion/failure/progress. This refactor has opened the
  door for doing more intelligent active management of the live migration
  process.
 ...
  What kind of things would be the biggest win from Operators' or tenants'
  POV ?
 
 Awesome. Couple thoughts from my perspective. Firstly, there's a bunch
 of situation dependent tuning. One thing Crowbar does really nicely is
 that you specify the host layout in broad abstract terms - e.g. 'first
 10G network link' and so on : some of your settings above like whether
 to compress page are going to be heavily dependent on the bandwidth
 available (I doubt that compression is a win on a 100G link for
 instance, and would be suspect at 10G even). So it would be nice if
 there was a single dial or two to set and Nova would auto-calculate
 good defaults from that (with appropriate overrides being available).

I wonder how such an idea would fit into Nova, since it doesn't really
have that kind of knowledge about the network deployment characteristics.

 Operationally avoiding trouble is better than being able to fix it, so
 I quite like the idea of defaulting the auto-converge option on, or
 perhaps making it controllable via flavours, so that operators can
 offer (and identify!) those particularly performance sensitive
 workloads rather than having to guess which instances are special and
 which aren't.

I'll investigate the auto-converge further to find out what the
potential downsides of it are. If we can unconditionally enable
it, it would be simpler than adding yet more tunables.

 Being able to cancel the migration would be good. Relatedly being able
 to restart nova-compute while a migration is going on would be good
 (or put differently, a migration happening shouldn't prevent a deploy
 of Nova code: interlocks like that make continuous deployment much
 harder).
 
 If we can't already, I'd like as a user to be able to see that the
 migration is happening (allows diagnosis of transient issues during
 the migration). Some ops folk may want to hide that of course.
 
 I'm not sure that automatically rolling back after N minutes makes
 sense : if the impact on the cluster is significant then 1 minute vs
 10 doesn't instrinsically matter: what matters more is preventing too
 many concurrent migrations, so that would be another feature that I
 don't think we have yet: don't allow more than some N inbound and M
 outbound live migrations to a compute host at any time, to prevent IO
 storms. We may want to log with NOTIFICATION migrations that are still
 progressing but appear to be having trouble completing. And of course
 an admin API to query all migrations in progress to allow API driven
 health checks by monitoring tools - which gives the power to manage
 things to admins without us having to write a probably-too-simple
 config interface.

Interesting, the point about concurrent migrations hadn't occurred to
me before, but it does of course make sense since migration is
primarily network bandwidth limited, though disk bandwidth is relevant
too if doing block migration.
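The concurrency cap Robert suggests could be prototyped along these lines. This is a toy sketch only: the cap value and host names are made up, and Nova would track this in scheduler or conductor state rather than a process-local semaphore.

```python
import threading

# Illustrative per-host cap on concurrent outbound live migrations.
MAX_OUTBOUND = 2
_outbound = {}

def try_start_migration(source_host):
    """Return True if a migration from source_host may start now."""
    sem = _outbound.setdefault(source_host,
                               threading.Semaphore(MAX_OUTBOUND))
    # Non-blocking acquire: refuse (or queue) the migration instead of
    # piling more network/disk IO onto an already-busy source host.
    return sem.acquire(False)

def finish_migration(source_host):
    """Release a migration slot when the migration completes or aborts."""
    _outbound[source_host].release()

print(try_start_migration("compute-1"))  # True
print(try_start_migration("compute-1"))  # True
print(try_start_migration("compute-1"))  # False: limit reached
```

An equivalent inbound cap on the destination host would guard against many sources converging on one target.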

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Abhishek Shrivastava
Hi all,

For the past few days I have been facing the problem of getting the image
upload error while installation of devstack in my CI. The devstack
installation is triggered in CI when someone does the checkin, and the
failure cause comes out to be the same.

Below is the log for the error:

2015-01-30 11:48:46.204 | + [[ 0 -ne 0 ]]
2015-01-30 11:48:46.205 | + image=/opt/stack/new/devstack/files/mysql.qcow2
2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ openvz ]]
2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vmdk ]]
2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vhd\.tgz ]]
2015-01-30 11:48:46.205 | + [[ http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.xen-raw\.tgz ]]
2015-01-30 11:48:46.205 | + local kernel=
2015-01-30 11:48:46.205 | + local ramdisk=
2015-01-30 11:48:46.206 | + local disk_format=
2015-01-30 11:48:46.206 | + local container_format=
2015-01-30 11:48:46.206 | + local unpack=
2015-01-30 11:48:46.206 | + local img_property=
2015-01-30 11:48:46.206 | + case $image_fname in
2015-01-30 11:48:46.210 | ++ basename /opt/stack/new/devstack/files/mysql.qcow2 .qcow2
2015-01-30 11:48:46.212 | + image_name=mysql
2015-01-30 11:48:46.212 | + disk_format=qcow2
2015-01-30 11:48:46.212 | + container_format=bare
2015-01-30 11:48:46.212 | + is_arch ppc64
2015-01-30 11:48:46.215 | ++ uname -m
2015-01-30 11:48:46.219 | + [[ x86_64 == \p\p\c\6\4 ]]
2015-01-30 11:48:46.219 | + '[' bare = bare ']'
2015-01-30 11:48:46.219 | + '[' '' = zcat ']'
2015-01-30 11:48:46.219 | + openstack --os-token ae76e3eb602749f4b2f1428fba21431e --os-url http://127.0.0.1:9292 image create mysql --public --container-format=bare --disk-format qcow2

2015-01-30 11:48:47.342 | ERROR: openstack <html>
2015-01-30 11:48:47.342 |  <head>
2015-01-30 11:48:47.342 |   <title>401 Unauthorized</title>
2015-01-30 11:48:47.342 |  </head>
2015-01-30 11:48:47.342 |  <body>
2015-01-30 11:48:47.342 |   <h1>401 Unauthorized</h1>
2015-01-30 11:48:47.343 |   This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />
2015-01-30 11:48:47.343 |
2015-01-30 11:48:47.343 |  </body>
2015-01-30 11:48:47.343 | </html> (HTTP 401)
2015-01-30 11:48:47.381 | + exit_trap
2015-01-30 11:48:47.381 | + local r=1
2015-01-30 11:48:47.382 | ++ jobs -p
2015-01-30 11:48:47.398 | + jobs='29629
2015-01-30 11:48:47.398 | 956'
2015-01-30 11:48:47.398 | + [[ -n 29629
2015-01-30 11:48:47.398 | 956 ]]
2015-01-30 11:48:47.398 | + [[ -n /opt/stack/new/devstacklog.txt.2015-01-30-155739 ]]
2015-01-30 11:48:47.398 | + [[ True == \T\r\u\e ]]
2015-01-30 11:48:47.399 | + echo 'exit_trap: cleaning up child processes'
2015-01-30 11:48:47.399 | exit_trap: cleaning up child processes
2015-01-30 11:48:47.399 | + kill 29629 956
2015-01-30 11:48:47.399 | ./stack.sh: line 434: kill: (956) - No such process
2015-01-30 11:48:47.399 | + kill_spinner
2015-01-30 11:48:47.399 | + '[' '!' -z '' ']'
2015-01-30 11:48:47.399 | + [[ 1 -ne 0 ]]
2015-01-30 11:48:47.399 | + echo 'Error on exit'
2015-01-30 11:48:47.399 | Error on exit
2015-01-30 11:48:47.400 | + [[ -z /opt/stack/new ]]
2015-01-30 11:48:47.400 | + /opt/stack/new/devstack/tools/worlddump.py -d /opt/stack/new
2015-01-30 11:48:47.438 | World dumping... see /opt/stack/new/worlddump-2015-01-30-114847.txt for details
2015-01-30 11:48:47.440 | df: '/run/user/112/gvfs': Permission denied
2015-01-30 11:48:47.468 | ./stack.sh: line 427: 29629 Terminated    _old_run_process $service $command
2015-01-30 11:48:47.469 | + exit 1

So, if anyone knows the solution for this problem please do reply.

-- 

Thanks & Regards,
Abhishek
Cloudbyte Inc. (http://www.cloudbyte.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][nova] Use of asynchronous slaves in Nova (was: Deprecating use_slave in Nova)

2015-02-02 Thread Matthew Booth
On 30/01/15 19:06, Mike Bayer wrote:
 
 
 Matthew Booth mbo...@redhat.com wrote:
 
 At some point in the near future, hopefully early in L, we're intending
 to update Nova to use the new database transaction management in
 oslo.db's enginefacade.

 Spec:
 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/kilo/make-enginefacade-a-facade.rst

 Implementation:
 https://review.openstack.org/#/c/138215/

 One of the effects of this is that we will always know when we are in a
 read-only transaction, or a transaction which includes writes. We intend
 to use this new contextual information to make greater use of read-only
 slave databases. We are currently proposing that if an admin has
 configured a slave database, we will use the slave for *all* read-only
 transactions. This would make the use_slave parameter passed to some
 Nova apis redundant, as we would always use the slave where the context
 allows.

 However, using a slave database has a potential pitfall when mixed with
 separate write transactions. A caller might currently:

 1. start a write transaction
 2. update the database
 3. commit the transaction
 4. start a read transaction
 5. read from the database

 The client might expect data written in step 2 to be reflected in data
 read in step 5. I can think of 3 cases here:

 1. A short-lived RPC call is using multiple transactions

 This is a bug which the new enginefacade will help us eliminate. We
 should not be using multiple transactions in this case. If the reads are
 in the same transaction as the write: they will be on the master, they
 will be consistent, and there is no problem. As a bonus, lots of these
 will be race conditions, and we'll fix at least some.

 2. A long-lived task is using multiple transactions between long-running
 sub-tasks

 In this case, for example creating a new instance, we genuinely want
 multiple transactions: we don't want to hold a database transaction open
 while we copy images around. However, I can't immediately think of a
 situation where we'd write data, then subsequently want to read it back
 from the db in a read-only transaction. I think we will typically be
 updating state, meaning it's going to be a succession of write transactions.

 3. Separate RPC calls from a remote client

 This seems potentially problematic to me. A client makes an RPC call to
 create a new object. The client subsequently tries to retrieve the
 created object, and gets a 404.

 Summary: 1 is a class of bugs which we should be able to find fairly
 mechanically through unit testing. 2 probably isn't a problem in
 practise? 3 seems like a problem, unless consumers of cloud services are
 supposed to expect that sort of thing.

 I understand that slave databases can occasionally get very behind. How
 behind is this in practise?

 How do we use use_slave currently? Why do we need a use_slave parameter
 passed in via rpc, when it should be apparent to the developer whether a
 particular task is safe for out-of-date data.

 Any chance they have some kind of barrier mechanism? e.g. block until
 the current state contains transaction X.

 General comments on the usefulness of slave databases, and the
 desirability of making maximum use of them?
 
 keep in mind that the big win we get from writer()/ reader() is that
writer() can remain pointing to one node in a Galera cluster, and
reader() can point to the cluster as a whole. reader() by default should
definitely refer to the cluster as a whole, that is, “use slave”.
 
 As for issue #3, galera cluster is synchronous replication. Slaves
don’t get “behind” at all. So to the degree that we need to
transparently support some other kind of master/slave where slaves do
get behind, perhaps there would be a reader(synchronous_required=True)
kind of thing; based on configuration, it would be known that
“synchronous” either means we don’t care (using galera) or that we
should use the writer (an asynchronous replication scheme).
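As a toy illustration of that routing idea (the class and flag names here are made up for the sketch, not the actual oslo.db enginefacade API):

```python
class ConnectionRouter:
    """Sketch of the writer()/reader() routing discussed above.

    Writes always pin to a single node; reads may fan out to the
    cluster, unless the caller needs read-your-writes consistency and
    replication is asynchronous."""

    def __init__(self, master, replicas, replication_synchronous):
        self.master = master
        self.replicas = replicas
        self.synchronous = replication_synchronous

    def writer(self):
        # Writes always go to one node (avoids Galera write conflicts).
        return self.master

    def reader(self, synchronous_required=False):
        # With synchronous replication (Galera) any node is safe.
        # With async replication, a read that must see its own writes
        # has to fall back to the writer.
        if synchronous_required and not self.synchronous:
            return self.master
        return self.replicas[0] if self.replicas else self.master

router = ConnectionRouter("db-master", ["db-replica-1"],
                          replication_synchronous=False)
assert router.writer() == "db-master"
assert router.reader() == "db-replica-1"
assert router.reader(synchronous_required=True) == "db-master"
```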

This sounds like the crux of the matter to me. After some (admittedly
cursory) reading, it seems that galera can use both synchronous and
asynchronous replication. Up until Friday I had only ever considered
synchronous replication, which would not be a problem.

I think opportunistically using synchronous slaves whenever possible
could only be a win. Are there any unpleasant practicalities which might
mean this isn't the case?

However, it sounds to me like there is at least some OpenStack
deployment in production using asynchronous slaves, otherwise the issue
of 'getting behind' wouldn't have come up. We need to understand:

* Are people actually using asynchronous slaves?
* If so, why did they choose to do that, and
* what are they using them for?

 
 All of this points to the fact that I really don’t think the
directives / flags should say anything about which specific database to
use; using a “slave” or not due to various concerns is dependent on
backend implementation and configuration. The purpose of 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Michael,

I'm rather new here, especially in regard to communication matters, so 
I'd also be glad to understand how it's done and then I can drive it if 
it's ok with everybody.
By saying EC2 sub team - who did you have in mind? From my team, 3 
people are involved.


From the technical point of view the transition plan could look 
somewhat like this (sequence can be different):


1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them 
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get 
full info.

4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and 
problematic points for the switching from existing EC2 API to the new 
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if 
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss 
the situation there.


Michael, I am still wondering, who's going to be responsible for timely 
reviews and approvals of the fixes and tests we're going to contribute 
to nova? So far this is the biggest risk. Is there any way to allow some 
of us to participate in the process?


Best regards,
  Alex Levine

On 2/2/15 2:46 AM, Michael Still wrote:

So, its exciting to me that we seem to developing more forward
momentum here. I personally think the way forward is a staged
transition from the in-nova EC2 API to the stackforge project, with
testing added to ensure that we are feature complete between the two.
I note that Soren disagrees with me here, but that's ok -- I'd like to
see us work through that as a team based on the merits.

So... It sounds like we have an EC2 sub team forming. How do we get
that group meeting to come up with a transition plan?

Michael

On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas dava...@gmail.com wrote:

Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
alev...@cloudscaling.com wrote:

Davanum,

Now that the picture with the both EC2 API solutions has cleared up a bit, I
can say yes, we'll be adding the tempest tests and doing devstack
integration.

Best regards,
   Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? add tempest tests etc? CI Would go a long way in
alleviating concerns i think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com wrote:

As you know we have been driving forward on the stackforge project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating
the EC2 API in Nova to focus efforts outside of it.  I also think it's good
for these APIs to be able to iterate outside of the standard release cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com






On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for the details.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

--
Rackspace Australia

___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-02-02 Thread Andrey Kurilin
Thanks for the summary, I'll try to send the first patch (maybe WIP) in a few days.

On Mon, Feb 2, 2015 at 1:43 PM, Christopher Yeoh cbky...@gmail.com wrote:



 On Sat, Jan 31, 2015 at 4:09 AM, Andrey Kurilin akuri...@mirantis.com
 wrote:

 Thanks for the answer. Can I help with implementation of novaclient part?


 Sure! Do you think its something you can get proposed into Gerrit by the
 end of the week or very soon after?
 It's the sort of timeframe we're looking for to get microversions enabled
 asap. I guess just let me know if it
 turns out you don't have the time.

 So I think a short summary of what is needed is:
 - if os-compute-api-version is not supplied don't send any header at all
 - it is probably worth doing a bit version parsing to see if it makes
 sense eg of format:
  r"^([1-9]\d*)\.([1-9]\d*|0)$" or "latest"
 - handle  HTTPNotAcceptable if the user asked for a version which is not
 supported
   (can also get a badrequest if its badly formatted and got through the
 novaclient filter)
 - show the version header information returned

 Regards,

 Chris


 On Wed, Jan 28, 2015 at 11:50 AM, Christopher Yeoh cbky...@gmail.com
 wrote:

 On Fri, 23 Jan 2015 15:51:54 +0200
 Andrey Kurilin akuri...@mirantis.com wrote:

  Hi everyone!
  After removing nova V3 API from novaclient[1], implementation of v1.1
  client is used for v1.1, v2 and v3 [2].
  Since we are moving to microversions, I wonder, do we need such a
  mechanism for choosing the API version (os-compute-api-version), or can we
  simply remove it, like in the proposed change - [3]?
  If we remove it, how should the microversion be selected?
 

 So since v3 was never officially released I think we can re-use
 os-compute-api-version for microversions which will map to the
 X-OpenStack-Compute-API-Version header. See here for details on what
 the header will look like. We need to also modify novaclient to handle
 errors when a version requested is not supported by the server.

 If the user does not specify a version number then we should not send
 anything at all. The server will run the default behaviour which for
 quite a while will just be v2.1 (functionally equivalent to v.2)


 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html


 
  [1] - https://review.openstack.org/#/c/138694
  [2] -
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
  [3] - https://review.openstack.org/#/c/149006
 




 --
 Best regards,
 Andrey Kurilin.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Daniel P. Berrange
On Fri, Jan 30, 2015 at 04:38:44PM -0600, Matt Riedemann wrote:
 
 
 On 1/30/2015 3:16 PM, Soren Hansen wrote:
 As I've said a couple of times in the past, I think the
 architecturally sound approach is to keep this inside Nova.
 
 The two main reasons are:
   * Having multiple frontend APIs keeps us honest in terms of
 separation between the different layers in Nova.
   * Having the EC2 API inside Nova ensures the internal data model is
 rich enough to feed the EC2 API. If some field's only use is to
 enable the EC2 API and the EC2 API is a separate component, it's not
 hard to imagine it being deprecated as well.
 
 I fear that deprecation is a one-way street, and I would like to ask for
 one more chance to resuscitate it in its current home.
 
 I could be open to a discussion about putting it into a separate
 repository, but having it functionally remain in its current place, if
 that's somehow easier to swallow.
 
 
 Soren Hansen | http://linux2go.dk/
 Ubuntu Developer | http://www.ubuntu.com/
 OpenStack Developer  | http://www.openstack.org/
 
 
 2015-01-28 20:56 GMT+01:00 Sean Dague s...@dague.net:
 The following review for Kilo deprecates the EC2 API in Nova -
 https://review.openstack.org/#/c/150929/
 
 There are a number of reasons for this. The EC2 API has been slowly
 rotting in the Nova tree, never was highly tested, implements a
 substantially older version of what AWS has, and currently can't work
 with any recent releases of the boto library (due to implementing an
 extremely old version of auth). This has given the misunderstanding that
 it's a first-class supported feature in OpenStack, which it hasn't been
 in quite some time. Deprecating honestly communicates where we stand.
 
 There is a new stackforge project which is getting some activity now -
 https://github.com/stackforge/ec2-api. The intent and hope is that is
 the path forward for the portion of the community that wants this
 feature, and that efforts will be focused there.
 
 Comments are welcomed, but we've attempted to get more people engaged to
 address these issues over the last 18 months, and never really had
 anyone step up. Without some real maintainers of this code in Nova (and
 tests somewhere in the community) it's really no longer viable.
 
  -Sean
 
 --
 Sean Dague
 http://dague.net
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Deprecation isn't a one-way street really, nova-network was deprecated for a
 couple of releases and then undeprecated and opened up again for feature
 development (at least for a short while until the migration to neutron is
 sorted out and implemented).

Nova-network was prematurely deprecated as the alternative was not fully
ready. That's a prime example of why we should not be deprecating EC2
right now either.

Deprecation is a mechanism by which you inform users that they should
stop using the current functionality and switch to $NEW-THING as the
replacement. In the case of nova-network they could not switch because
neutron did not offer feature parity at the time we were asking them
to switch (nor did it have an upgrade path for that matter). Likewise
in the case of the EC2 API, the alternative is not ready for users to
actually switch to at a production quality level.

What we are actually trying to tell users is that we think the out-of-tree
EC2 implementation is the long-term strategic direction for EC2
support with Nova, and that the current in-tree impl is not being actively
developed. That's a sensible thing to tell our users, but deprecation is
the wrong mechanism for this. It is a task best suited for release notes.
Keep deprecation available as a mechanism for telling users that the time
has come for them to actively switch their deployments to the new impl.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sat, Jan 31, 2015 at 03:55:23AM +0100, Vladik Romanovsky wrote:
 
 
 - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: openstack-dev@lists.openstack.org, 
  openstack-operat...@lists.openstack.org
  Sent: Friday, 30 January, 2015 11:47:16 AM
  Subject: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends
  
  In working on a recent Nova migration bug
  
https://bugs.launchpad.net/nova/+bug/1414065
  
  I had cause to refactor the way the nova libvirt driver monitors live
  migration completion/failure/progress. This refactor has opened the
  door for doing more intelligent active management of the live migration
  process.
  
  As it stands today, we launch live migration, with a possible bandwidth
  limit applied and just pray that it succeeds eventually. It might take
  until the end of the universe and we'll happily wait that long. This is
  pretty dumb really and I think we really ought to do better. The problem
  is that I'm not really sure what better should mean, except for ensuring
  it doesn't run forever.
  
  As a demo, I pushed a quick proof of concept showing how we could easily
  just abort live migration after say 10 minutes
  
https://review.openstack.org/#/c/151665/
  
  There are a number of possible things to consider though...
  
  First, how to detect when live migration isn't going to succeed.
  
   - Could do a crude timeout, eg allow 10 minutes to succeed or else.
  
   - Look at data transfer stats (memory transferred, memory remaining to
 transfer, disk transferred, disk remaining to transfer) to determine
 if it is making forward progress.
 
 I think this is a better option. We could define a timeout for the progress
 and cancel if there is no progress. IIRC there were similar debates about it
 in oVirt; we could do something similar:
 https://github.com/oVirt/vdsm/blob/master/vdsm/virt/migration.py#L430

That looks like quite a good implementation to follow. They are monitoring
progress and if they see progress stalling, then they wait a configurable
time before aborting. That should avoid prematurely aborting migrations
that are actually working, while avoiding migrations getting stuck forever.
They also have a global timeout which is based on the number of GB of RAM
the guest has, which is also a good idea compared to a one-size-fits-all
timeout.
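A minimal sketch of that stall-detection scheme (the names, the stall timeout, and the per-GB scaling factor below are illustrative, not nova's or oVirt's actual code):

```python
import time

class MigrationMonitor:
    """Abort a live migration only when it stops making forward
    progress for stall_timeout seconds, with a global deadline scaled
    by the guest's RAM size rather than one-size-fits-all."""

    def __init__(self, guest_ram_gb, stall_timeout=150, secs_per_gb=60):
        self.stall_timeout = stall_timeout
        self.global_deadline = time.monotonic() + guest_ram_gb * secs_per_gb
        self.best_remaining = float("inf")
        self.last_progress = time.monotonic()

    def should_abort(self, data_remaining, now=None):
        # data_remaining: bytes of memory+disk still to transfer, as
        # reported by the hypervisor's migration job stats.
        now = time.monotonic() if now is None else now
        if data_remaining < self.best_remaining:
            # Forward progress: reset the stall clock.
            self.best_remaining = data_remaining
            self.last_progress = now
        return (now - self.last_progress > self.stall_timeout
                or now > self.global_deadline)

m = MigrationMonitor(guest_ram_gb=4)
t0 = m.last_progress
assert not m.should_abort(1000, now=t0)            # making progress
assert not m.should_abort(900, now=t0 + 100)       # progress resets clock
assert m.should_abort(900, now=t0 + 100 + 151)     # stalled past timeout
```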

  Fourth there's a question of whether we should give the tenant user or
  cloud admin further APIs for influencing migration
  
   - Add an explicit API for cancelling migration ?
  
   - Add APIs for setting tunables like downtime, bandwidth on the fly ?
  
   - Or drive some of the tunables like downtime, bandwidth, or policies
 like cancel vs paused from flavour or image metadata properties ?
  
   - Allow operations like evacuate to specify a live migration policy
 eg switch non-live migrate after 5 minutes ?
  
 IMHO, an explicit API for cancelling migration is very much needed.
 I remember cases when migrations took about 8 or more hours, leaving the
 admins helpless :)

The oVirt hueristics should avoid that stuck scenario, but I do think
we need an API anyway.

 Also, I very much like the idea of having tunables and policy to set
 in the flavours and image properties.
 To allow the administrators to set these as a template in the flavour
 and also to let the users to update/override or request these options
 as they should know the best (hopefully) what is running in their guests.

We do need to make sure the administrators can always force migration
to succeed regardless of what the user might have configured, so they
can be ensured of emergency evacuation if needed.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Feodor Tersin
Hi Ken,

1. imageRef isn't the only attribute that could receive an image id. There
are kernelId, ramdiskId, and even bdm v2 as well. So we couldn't guess
which attribute has the invalid value.

2. Besides the NotFound case, there are other mixed cases, such as attaching
a volume: the mountpoint can be busy, or the volume can be used by another
instance - both cases generate a conflict error. Do you suggest using a
specially formatted message in all such cases (when the same HTTP error
code has several causes)? But to make working with the Nova API
straightforward, all messages would have to use this format, even in the
simplest cases.

3. How is a client to parse a localized message? A Nova API client cannot
simply switch to the en_US locale for its Nova requests, because it also has
to display the messages generated by Nova to the end user.



On Mon, Feb 2, 2015 at 8:28 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 2015-01-30 18:13 GMT+09:00 Simon Pasquier spasqu...@mirantis.com:
  On Fri, Jan 30, 2015 at 3:05 AM, Kenichi Oomichi 
 oomi...@mxs.nes.nec.co.jp
  wrote:
 
   -Original Message-
   From: Roman Podoliaka [mailto:rpodoly...@mirantis.com]
   Sent: Friday, January 30, 2015 2:12 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [api][nova] Openstack HTTP error codes
  
   Hi Anne,
  
   I think Eugeniya refers to a problem, that we can't really distinguish
   between two different  badRequest (400) errors (e.g. wrong security
   group name vs wrong key pair name when starting an instance), unless
   we parse the error description, which might be error prone.
 
  Yeah, current Nova v2 API (not v2.1 API) returns inconsistent messages
  in badRequest responses, because these messages are implemented at many
  places. But Nova v2.1 API can return consistent messages in most cases
  because its input validation framework generates messages
  automatically[1].
 
 
  When you say "most cases", you mean JSON schema validation only, right?
  IIUC, this won't apply to the errors described by the OP such as invalid
 key
  name, unknown security group, ...

 Yeah, right.
 I implied that in most cases and I have one patch[1] for covering them.
 By the patch, the sample messages also will be based on the same
 format and be consistent.
 The other choice we have is CamelCase exceptions, as in the first mail; that
 also is interesting.

 Thanks
 Ken Ohmichi

 ---
 [1]: https://review.openstack.org/#/c/151954

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-02-02 Thread Yuriy Taraday
On Mon Feb 02 2015 at 11:49:31 AM Julien Danjou jul...@danjou.info wrote:

 On Fri, Jan 30 2015, Yuriy Taraday wrote:

  That's great research! Under its impression I've spent most of last
  evening reading the PyMySQL sources. It looks like what it needs right
  now is not so much C speedups as plain old Python optimizations. The
  protocol parsing code seems very inefficient (chained struct.unpack's
  interleaved with data copying and util method calls that do the same
  struct.unpack with an unnecessary type check... wow...). That's a huge
  area for improvement. I think it's worth spending time on the coming
  vacation to fix these slowdowns. We'll see if they'll pay back the 10%
  slowdown people are talking about.

 With all my respect, you may be right, but I need to say it'd be better
 to profile and then optimize rather than spend time rewriting random
 parts of the code then hoping it's going to be faster. :-)


Don't worry, I do profile. Currently I use the mini-benchmark Mike provided
and am optimizing the hottest methods. I'm already getting 25% more speed in
this case, and that's not the limit. I will be posting pull requests to
PyMySQL soon.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-02 Thread Kevin Benton
The only thing this discussion has convinced me of is that allowing users
to change the fixed IP address on a neutron port leads to a bad
user-experience.

Not as bad as having to delete a port and create another one on the same
network just to change addresses though...

Even with an 8-minute renew time you're talking up to a 7-minute blackout
(87.5% of lease time before using broadcast).

I suggested 240 seconds renewal time, which is up to 4 minutes of
connectivity outage. This doesn't have anything to do with lease time and
unicast DHCP will work because the spoof rules allow DHCP client traffic
before restricting to specific IPs.
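For reference, the arithmetic behind these numbers follows the default DHCP client timers from RFC 2131: a client unicasts a renewal at T1 (50% of the lease) and only falls back to broadcast rebinding at T2 (87.5%).

```python
def dhcp_timers(lease_seconds):
    """Default RFC 2131 timers: renew (unicast) at 50% of the lease,
    rebind (broadcast) at 87.5%."""
    return {"t1_renew": lease_seconds * 0.5,
            "t2_rebind": lease_seconds * 0.875}

# 86400 s (1 day) lease: a client may not broadcast for 21 hours.
day = dhcp_timers(86400)
assert day["t1_renew"] == 43200.0    # 12 hours
assert day["t2_rebind"] == 75600.0   # 21 hours

# 480 s (8 min) lease: up to ~7 minutes before broadcast, the
# worst-case blackout quoted above.
assert dhcp_timers(480)["t2_rebind"] == 420.0
```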

 Most would have rebooted long before then, true?  Cattle not pets, right?

Only in an ideal world that I haven't encountered with customer
deployments. Many enterprise deployments end up bringing pets along where
reboots aren't always free. The time taken to relaunch programs and restore
state can end up being 10 minutes+ if it's something like a VDI deployment
or dev environment where someone spends a lot of time working on one VM.

Changing the lease time is just papering-over the real bug - neutron
doesn't support seamless changes in IP addresses on ports, since it totally
relies on the dhcp configuration settings a deployer has chosen.

It doesn't need to be seamless, but it certainly shouldn't be useless.
Connectivity interruptions can be expected with IP changes (e.g. I've seen
changes in elastic IPs on EC2 can interrupt connectivity to an instance for
up to 2 minutes), but an entire day of downtime is awful.

One of the things I'm getting at is that a deployer shouldn't be choosing
such high lease times and we are encouraging it with a high default. You
are arguing for infrequent renewals to work around excessive logging, which
is just an implementation problem that should be addressed with a patch to
your logging collector (de-duplication) or to dnsmasq (don't log renewals).

Documenting a VM reboot is necessary, or even deprecating this (you won't
like that) are sounding better to me by the minute.

If this is an approach you really want to go with, then we should at least
be consistent and deprecate the extra dhcp options extension (or at least
the ability to update ports' dhcp options). Updating subnet attributes like
gateway_ip, dns_nameserves, and host_routes should be thrown out as well.
All of these things depend on the DHCP server to deliver updated
information and are hindered by renewal times. Why discriminate against IP
updates on a port? A failure to receive many of those other types of
changes could result in just as severe of a connection disruption.


In summary, the information the DHCP server gives to clients is not static.
Unless we eliminate updates to everything in the Neutron API that results
in different DHCP lease information, my suggestion is that we include a new
option for the renewal interval and have the default set to 5 minutes. We can
leave the lease default to 1 day so the amount of time a DHCP server can be
offline without impacting running clients can stay the same.

On Fri, Jan 30, 2015 at 8:00 AM, Brian Haley brian.ha...@hp.com wrote:

 Kevin,

 The only thing this discussion has convinced me of is that allowing users
 to
 change the fixed IP address on a neutron port leads to a bad
 user-experience.
 Even with an 8-minute renew time you're talking up to a 7-minute blackout
 (87.5%
 of lease time before using broadcast).  This is time that customers are
 paying
 for.  Most would have rebooted long before then, true?  Cattle not pets,
 right?

 Changing the lease time is just papering-over the real bug - neutron
 doesn't
 support seamless changes in IP addresses on ports, since it totally relies
 on
 the dhcp configuration settings a deployer has chosen.  Bickering over the
 lease
 time doesn't fix that non-deterministic recovery for the VM.  Documenting
 a VM
 reboot is necessary, or even deprecating this (you won't like that) are
 sounding
 better to me by the minute.

 Is there anyone else that has used, or has customers using, this part of
 the
 neutron API?  Can they share their experiences?

 -Brian


 On 01/30/2015 07:26 AM, Kevin Benton wrote:
 But they will if we document it well, which is what Salvatore suggested.
 
  I don't think this is a good approach, and it's a big part of why I
 started this
  thread. Most of the deployers/operators I have worked with only read the
 bare
  minimum documentation to get a Neutron deployment working and they only
 adjust
  the settings necessary for basic functionality.
 
  We have an overwhelming amount of configuration options and adding a note
  specifying that a particular setting for DHCP leases has been optimized
 to
  reduce logging at the cost of long downtimes during port IP address
 updates is a
  waste of time and effort on our part.
 
 I think the current default value is also more indicative of something
  you'd find in your house, or at work - i.e. stable networks.
 
  

Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sun, Feb 01, 2015 at 11:20:08AM -0800, Noel Burton-Krahn wrote:
 Thanks for bringing this up, Daniel.  I don't think it makes sense to have
 a timeout on live migration, but operators should be able to cancel it,
 just like any other unbounded long-running process.  For example, there's
 no timeout on file transfers, but they need an interface report progress
 and to cancel them.  That would imply an option to cancel evacuation too.

There has been periodic talk about a generic tasks API in Nova for managing
long running operations and getting information about their progress, but I
am not sure what the status of that is. It would obviously be applicable to
migration if that's a route we took.

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Daniel P. Berrange
On Sun, Feb 01, 2015 at 03:03:45PM -0700, David Medberry wrote:
 I'll second much of what Rob said:
 API that indicated how many live-migrations (l-m) were going would be good.
 API that told you what progress (and start time) a given l-m had made would
 be great.
 API to cancel a given l-m would also be great. I think this is a preferred
 approach over an auto timeout (it would give us the tools we need to
 implement an auto timeout though.)
 
 I like the idea of trying auto-convergence (and agree it should be flavor
 feature and likely not the default.) I suspect this one needs some testing.
 It may be fine to automatically do this if it doesn't actually throttle the
 VM some 90-99% of the time.  (Presumably this could also increase the max
 downtime between cutover as well as throttling the vm.)

For reference the auto-convergance code in QEMU is this commit

  
http://git.qemu.org/?p=qemu.git;a=commit;h=7ca1dfad952d8a8655b32e78623edcc38a51b14a

If the migration operation is making good progress, it does not have any
impact on the guest. Periodically it checks the data transfer progress and
if the guest has dirtied more than 50% of the pages than were transferred
it'll start throttling. It throttles by simply preventing the guest
vCPUs from running for a period of time. So the guest will obviously get
a performance drop, but the migration is more likely (but not guaranteed)
to succeed.

From the QEMU level you can actually enable this on the fly, it seems, but
libvirt only lets it be set at startup of migration.
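The trigger condition can be sketched in a couple of lines (a simplified reading of the commit above; the real QEMU accounting and thresholds are more involved and tunable):

```python
def autoconverge_should_throttle(transferred_pages, dirtied_pages):
    """Sketch of the auto-convergence trigger: if the guest dirtied
    more than 50% of the pages transferred in the last iteration,
    the migration cannot converge, so start throttling the vCPUs."""
    return dirtied_pages * 2 > transferred_pages

# Guest dirtying slower than we transfer: leave it alone.
assert not autoconverge_should_throttle(transferred_pages=1000,
                                        dirtied_pages=400)
# Guest dirtying more than half of what we transfer: throttle.
assert autoconverge_should_throttle(transferred_pages=1000,
                                    dirtied_pages=600)
```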

Regards,
Daniel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-02-02 Thread Christopher Yeoh
On Sat, Jan 31, 2015 at 4:09 AM, Andrey Kurilin akuri...@mirantis.com
wrote:

 Thanks for the answer. Can I help with implementation of novaclient part?


Sure! Do you think it's something you can get proposed into Gerrit by the
end of the week or very soon after?
That's the sort of timeframe we're looking for to get microversions enabled
asap. Just let me know if it
turns out you don't have the time.

So I think a short summary of what is needed is:
- if os-compute-api-version is not supplied, don't send any header at all
- it is probably worth doing a bit of version parsing to see if it makes sense,
e.g. matching the format:
 r"^([1-9]\d*)\.([1-9]\d*|0)$" or the literal "latest"
- handle HTTPNotAcceptable if the user asked for a version which is not
supported
  (you can also get a BadRequest if it's badly formatted and got through the
novaclient filter)
- show the version header information returned
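The version check in the second bullet could look something like this. A minimal sketch only — the helper name is hypothetical, not actual novaclient code; the regex is the one quoted above:

```python
import re

# Accepts microversions like "2.1" or "10.0", or the literal "latest";
# rejects leading zeros in either part and otherwise malformed strings.
_VERSION_RE = re.compile(r"^([1-9]\d*)\.([1-9]\d*|0)$")


def validate_api_version(version: str) -> bool:
    """Return True if the requested microversion looks well-formed."""
    return version == "latest" or _VERSION_RE.match(version) is not None


assert validate_api_version("2.1")
assert validate_api_version("latest")
assert not validate_api_version("02.1")   # leading zero rejected
assert not validate_api_version("2")      # missing minor part
```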

Regards,

Chris


 On Wed, Jan 28, 2015 at 11:50 AM, Christopher Yeoh cbky...@gmail.com
 wrote:

 On Fri, 23 Jan 2015 15:51:54 +0200
 Andrey Kurilin akuri...@mirantis.com wrote:

  Hi everyone!
  After removing nova V3 API from novaclient[1], implementation of v1.1
  client is used for v1.1, v2 and v3 [2].
  Since we moving to micro versions, I wonder, do we need such
  mechanism of choosing api version(os-compute-api-version) or we can
  simply remove it, like in proposed change - [3]?
  If we remove it, how micro version should be selected?
 

 So since v3 was never officially released I think we can re-use
 os-compute-api-version for microversions which will map to the
 X-OpenStack-Compute-API-Version header. See here for details on what
 the header will look like. We need to also modify novaclient to handle
 errors when a version requested is not supported by the server.

 If the user does not specify a version number then we should not send
 anything at all. The server will run the default behaviour which for
 quite a while will just be v2.1 (functionally equivalent to v2)


 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/api-microversions.html


 
  [1] - https://review.openstack.org/#/c/138694
  [2] -
 
 https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
  [3] - https://review.openstack.org/#/c/149006
 




 --
 Best regards,
 Andrey Kurilin.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Bob Ball
Hi Abhishek,

This is bug https://launchpad.net/bugs/1415795 introduced by 
https://review.openstack.org/#/c/142967/ because Swift doesn't use oslo.config.

The fix is at https://review.openstack.org/#/c/151506/ which has not yet been 
approved, but if you can cherry-pick it for your CI it should get it working 
again.

Thanks,

Bob

From: Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
Sent: 02 February 2015 09:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Fwd: Image Upload error while installing devstack on 
CI slave machine.

Hi all,

For the past few days I have been facing an image upload error while 
installing devstack in my CI. The devstack installation is triggered in CI 
when someone does a check-in, and the failure cause always comes out to be 
the same.

Below is the log for the error:

2015-01-30 11:48:46.204 | + [[ 0 -ne 0 ]]
2015-01-30 11:48:46.205 | + image=/opt/stack/new/devstack/files/mysql.qcow2
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ openvz ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vmdk ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.vhd\.tgz ]]
2015-01-30 11:48:46.205 | + [[ 
http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2 =~ \.xen-raw\.tgz 
]]
2015-01-30 11:48:46.205 | + local kernel=
2015-01-30 11:48:46.205 | + local ramdisk=
2015-01-30 11:48:46.206 | + local disk_format=
2015-01-30 11:48:46.206 | + local container_format=
2015-01-30 11:48:46.206 | + local unpack=
2015-01-30 11:48:46.206 | + local img_property=
2015-01-30 11:48:46.206 | + case $image_fname in
2015-01-30 11:48:46.210 | ++ basename /opt/stack/new/devstack/files/mysql.qcow2 
.qcow2
2015-01-30 11:48:46.212 | + image_name=mysql
2015-01-30 11:48:46.212 | + disk_format=qcow2
2015-01-30 11:48:46.212 | + container_format=bare
2015-01-30 11:48:46.212 | + is_arch ppc64
2015-01-30 11:48:46.215 | ++ uname -m
2015-01-30 11:48:46.219 | + [[ x86_64 == \p\p\c\6\4 ]]
2015-01-30 11:48:46.219 | + '[' bare = bare ']'
2015-01-30 11:48:46.219 | + '[' '' = zcat ']'
2015-01-30 11:48:46.219 | + openstack --os-token 
ae76e3eb602749f4b2f1428fba21431e --os-url http://127.0.0.1:9292 image create 
mysql --public --container-format=bare --disk-format qcow2
2015-01-30 11:48:47.342 | ERROR: openstack <html>
2015-01-30 11:48:47.342 |  <head>
2015-01-30 11:48:47.342 |   <title>401 Unauthorized</title>
2015-01-30 11:48:47.342 |  </head>
2015-01-30 11:48:47.342 |  <body>
2015-01-30 11:48:47.342 |   <h1>401 Unauthorized</h1>
2015-01-30 11:48:47.343 |   This server could not verify that you are 
authorized to access the document you requested. Either you supplied the wrong 
credentials (e.g., bad password), or your browser does not understand how to 
supply the credentials required.<br /><br />
2015-01-30 11:48:47.343 |
2015-01-30 11:48:47.343 |  </body>
2015-01-30 11:48:47.343 | </html> (HTTP 401)
2015-01-30 11:48:47.381 | + exit_trap
2015-01-30 11:48:47.381 | + local r=1
2015-01-30 11:48:47.382 | ++ jobs -p
2015-01-30 11:48:47.398 | + jobs='29629
2015-01-30 11:48:47.398 | 956'
2015-01-30 11:48:47.398 | + [[ -n 29629
2015-01-30 11:48:47.398 | 956 ]]
2015-01-30 11:48:47.398 | + [[ -n 
/opt/stack/new/devstacklog.txt.2015-01-30-155739 ]]
2015-01-30 11:48:47.398 | + [[ True == \T\r\u\e ]]
2015-01-30 11:48:47.399 | + echo 'exit_trap: cleaning up child processes'
2015-01-30 11:48:47.399 | exit_trap: cleaning up child processes
2015-01-30 11:48:47.399 | + kill 29629 956
2015-01-30 11:48:47.399 | ./stack.sh: line 434: kill: (956) - No such process
2015-01-30 11:48:47.399 | + kill_spinner
2015-01-30 11:48:47.399 | + '[' '!' -z '' ']'
2015-01-30 11:48:47.399 | + [[ 1 -ne 0 ]]
2015-01-30 11:48:47.399 | + echo 'Error on exit'
2015-01-30 11:48:47.399 | Error on exit
2015-01-30 11:48:47.400 | + [[ -z /opt/stack/new ]]
2015-01-30 11:48:47.400 | + /opt/stack/new/devstack/tools/worlddump.py -d 
/opt/stack/new
2015-01-30 11:48:47.438 | World dumping... see 
/opt/stack/new/worlddump-2015-01-30-114847.txt for details
2015-01-30 11:48:47.440 | df: '/run/user/112/gvfs': Permission denied
2015-01-30 11:48:47.468 | ./stack.sh: line 427: 29629 Terminated  
_old_run_process $service $command
2015-01-30 11:48:47.469 | + exit 1

So, if anyone knows the solution for this problem please do reply.

--
Thanks & Regards,
Abhishek
Cloudbyte Inc.http://www.cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2015-02-02 Thread Simon Pasquier
Hello,
(resurrecting this old thread because I think I found the root cause)

The problem affects all OpenStack environments using Syslog, not only
Fuel-based installations: when use_syslog is true, the
logging_context_format_string and logging_default_format_string parameters
aren't taken into account (see [1] for details).
The issue is fixed in oslo.log but not in oslo-incubator/log (See [2]).
Depending on when the different projects synchronized with oslo-incubator
during the Juno timeframe, some of them are immune to the bug (from the
Fuel bug report: heat, glance and neutron). As such the bug will affect all
projects that don't switch to oslo.log during the Kilo cycle.

BR,
Simon

[1] https://bugs.launchpad.net/oslo.log/+bug/1399088
[2] https://review.openstack.org/#/c/151157/

On Fri, Dec 12, 2014 at 7:35 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 We have a high priority bug in 6.0:
 https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.

 Our openstack services used to send logs in a strange format with an extra
 copy of the timestamp and loglevel:
 == ./neutron-metadata-agent.log ==
 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
 neutron.common.config [-] Logging enabled!

 And we have a workaround for this. We hide extra timestamp and use second
 loglevel.

 In Juno some of the services have updated oslo logging and now send logs in
 simple format:
 == ./nova-api.log ==
 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
 /etc/nova/api-paste.ini

 In order to keep backward compatibility and deal with both formats we have
 a dirty workaround for our workaround:
 https://review.openstack.org/#/c/141450/

 As I see, our best choice here is to throw away all workarounds and show
 logs on UI as is. If service sends duplicated data - we should show
 duplicated data.

 Long term fix here is to update oslo.logging in all packages. We can do it
 in 6.1.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Image Upload error while installing devstack on CI slave machine.

2015-02-02 Thread Abhishek Shrivastava
Hi Bob,

Thanks for the reply, this is really a great help for me.

On Mon, Feb 2, 2015 at 3:11 PM, Bob Ball bob.b...@citrix.com wrote:

  Hi Abhishek,



 This is bug https://launchpad.net/bugs/1415795 introduced by
 https://review.openstack.org/#/c/142967/ because Swift doesn't use
 oslo.config.



 The fix is at https://review.openstack.org/#/c/151506/ which has not yet
 been approved, but if you can cherry-pick it for your CI it should get it
 working again.



 Thanks,



 Bob



 *From:* Abhishek Shrivastava [mailto:abhis...@cloudbyte.com]
 *Sent:* 02 February 2015 09:35
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] Fwd: Image Upload error while installing
 devstack on CI slave machine.



 Hi all,



 For the past few days I have been facing the problem of getting the image
 upload error while installation of devstack in my CI. The devstack
 installation is triggered in CI when someone does the checkin, and the
 failure cause comes out to be the same.



 Below is the log for the error:



 snip

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Daniel,

On 2/2/15 12:58 PM, Daniel P. Berrange wrote:

On Fri, Jan 30, 2015 at 07:57:08PM +, Tim Bell wrote:

Alex,



Many thanks for the constructive approach. I've added an item to the list for 
the Ops meetup in March to see who would be interested to help.



As discussed on the change, it is likely that there would need to be some 
additional
Nova APIs added to support the full EC2 semantics. Thus, there would need to 
support
from the Nova team to enable these additional functions.  Having tables in the 
EC2
layer which get out of sync with those in the Nova layer would be a significant
problem in production.

Adding new APIs to Nova to support an out-of-tree EC2 impl is perfectly 
reasonable.
Indeed if there is data needed by EC2 that Nova doesn't provide already, chances
are that providing this data would be useful to other regular users / client 
apps
too. It just really needs someone to submit a spec with details of exactly which
functionality is missing. It shouldn't be hard for Nova cores to support it, 
given
the desire to see the out-of-tree EC2 impl take over & the in-tree impl removed.


We'll do the spec shortly.



I think this would merit a good slot in the Vancouver design sessions so we can
also discuss documentation, migration, packaging, configuration management,
scaling, HA, etc.

I'd really strongly encourage the people working on this to submit the
detailed spec for the new APIs well before the Vancouver design summit.
Likewise at least document somewhere the thoughts on upgrade path plans.
We need to at least discuss & iterate on this a few times online, so that
we can take advantage of the f2f time for any remaining harder parts of
the discussion.


We'll see about that as well, once all of the subjects we can think of or 
get questions about are covered somewhere in docs or specs. By the way - 
how do you usually do those online discussions? I mean, what is the tooling?


Regards,
Daniel

Best regards,
  Alex Levine


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] K-2 Review-a-thon

2015-02-02 Thread Erlon Cruz
Hi Mike,

There are two features and a bug fix [1][2][3] for the HNAS driver that are
still not approved/targeted.
Is there anything missing for them to be approved?

Erlon


[1] https://blueprints.launchpad.net/cinder/+spec/hds-hnas-ssh-backend
[2] https://blueprints.launchpad.net/cinder/+spec/hds-hnas-pool-aware-sched
[3] https://bugs.launchpad.net/cinder/+bug/1402771

On Sat, Jan 31, 2015 at 7:30 PM, Mike Perez thin...@gmail.com wrote:

 * Why: We got a bit in the review queue. K-2 [1] cut is set to February
 5th.

 * When: February 2nd at 2:00 UTC [2] to February 5th at 2:00 UTC [3]
 or sooner if we finish!

 * Where: #openstack-cinder on freenode IRC. There will also be a
 posted google hangout link in channel and etherpad [4] since that
 really worked out in previous hackathons. Remember there is a limit,
 so please join only if you're really going to be participating. You
 also don't have to be core.

 I'm encouraging two cores to sign up for a review in the etherpad [4].
 If there are already two people to a review, try to move onto
 something else to avoid getting burnt out on efforts already spent on
 a review.

 Patch owners will also be receiving an email directly from me to be
 aware of this prime time to respond back to feedback and post
 revisions if necessary.

 --
 Mike Perez

 [1] - https://launchpad.net/cinder/+milestone/kilo-2
 [2] -
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150202T02&p1=1440
  [3] -
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150205T02&p1=1440
 [4] - https://etherpad.openstack.org/p/cinder-k2-priorities

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Schedule for rest of Kilo

2015-02-02 Thread Tony Breeds
On Tue, Jan 27, 2015 at 04:27:21AM +, Adrian Otto wrote:
 Tony,
 
 That would be terrific. Which iCal feed were you thinking of? I was planning 
 on making something similar to this:

Sorry Adrian, I was a little off-topic.

I was thinking that when you settle on a schedule for your regular team
meetings (and add them to [1]), I'll handle keeping them in sync with the
openstack iCal feed [2].

Yours Tony.

[1] https://wiki.openstack.org/wiki/Meetings
[2] 
https://www.google.com/calendar/embed?src=bj05mroquq28jhud58esggqmh4%40group.calendar.google.com&ctz=Iceland/Reykjavik


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove]how to enable trove in dashboard?

2015-02-02 Thread Li Tianqing
Sorry, I found it.


--

Best
Li Tianqing

At 2015-02-03 10:48:06, Li Tianqing jaze...@163.com wrote:

Hello,
   I first installed devstack, then installed trove from source code. After 
searching on the net, I could not find how to enable trove in the dashboard...
  Can someone point out how to?




--

Best
Li Tianqing


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Talk on Jinja Metatemplates for upcoming summit

2015-02-02 Thread Pratik Mallya
Hey Pavlo,

The main aim of this effort is to allow a more efficient template catalog 
management, not unlike what is given in [2]. As a service to our customers, 
Rackspace maintains a catalog of useful templates[3] which are also exposed to 
the user through the UI. The template authors of these templates had expressed 
difficulties in having to maintain several templates depending on resource 
availability, account-type etc., so they asked for the ability to use Jinja 
templating system to instead include everything in one Heat meta-template 
(Heat Template + Jinja, I’m not sure if that term is used for something else 
already :-) ). e.g. [4] shows a very simple case of having to choose between 
two templates depending upon the availability of Neutron on the network.

I hope that clarifies things a bit. Let me know if you have more questions!

Thanks!
-Pratik

[3] https://github.com/rackspace-orchestration-templates
[4] 
https://github.com/rackspace-orchestration-templates/jinja-test/blob/master/jinja-test.yaml
On Feb 2, 2015, at 1:44 PM, Pavlo Shchelokovskyy 
pshchelokovs...@mirantis.com wrote:

Hi Pratik,

what would be the aim for this templating? I ask since we in Heat try to keep 
the imperative logic like e.g. if-else out of heat templates, leaving it to 
other services. Plus there is already a spec for a heat template function to 
repeat pieces of template structure [1].

I can definitely say that some other OpenStack projects that are consumers of 
Heat will be interested - Trove already tries to use Jinja templates to create 
Heat templates [2], and possibly Sahara and Murano might be interested as well 
(I suspect though the latter already uses YAQL for that).

[1] https://review.openstack.org/#/c/140849/
[2] 
https://github.com/openstack/trove/blob/master/trove/templates/default.heat.template

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 2, 2015 at 8:29 PM, Pratik Mallya 
pratik.mal...@rackspace.com wrote:
Hello Heat Developers,

As part of an internal development project at Rackspace, I implemented a 
mechanism to allow using Jinja templating system in heat templates. I was 
hoping to give a talk on the same for the upcoming summit (which will be the 
first summit after I started working on openstack). Have any of you worked/ are 
working on something similar? If so, could you please contact me and we can 
maybe propose a joint talk? :-)

Please let me know! It’s been interesting work and I hope the community will be 
excited to see it.

Thanks!
-Pratik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] operators vs users for choosing convergence engine

2015-02-02 Thread Robert Collins
I think incremental adoption is a great principle to have and this
will enable that.

So +1

-Rob

On 3 February 2015 at 13:52, Steve Baker sba...@redhat.com wrote:
 A spec has been raised to add a config option to allow operators to choose
 whether to use the new convergence engine for stack operations. For some
 context you should read the spec first [1]

 Rather than doing this, I would like to propose the following:
 * Users can (optionally) choose which engine to use by specifying an engine
 parameter on stack-create (choice of classic or convergence)
 * Operators can set a config option which determines which engine to use if
 the user makes no explicit choice
 * Heat developers will set the default config option from classic to
 convergence when convergence is deemed sufficiently mature
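The proposed precedence (explicit user choice, falling back to the operator-set default) could be sketched like this. A hypothetical illustration of the proposal's logic only — the names and the validation are mine, not actual Heat code:

```python
# Engines named in the proposal above.
VALID_ENGINES = ("classic", "convergence")


def select_engine(user_choice=None, operator_default="classic"):
    """Pick the stack engine: the user's explicit choice wins,
    otherwise the operator-configured default applies."""
    engine = user_choice or operator_default
    if engine not in VALID_ENGINES:
        raise ValueError("engine must be one of %s" % (VALID_ENGINES,))
    return engine


assert select_engine() == "classic"
assert select_engine(user_choice="convergence") == "convergence"
assert select_engine(operator_default="convergence") == "convergence"
```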

 I realize it is not ideal to expose this kind of internal implementation
 detail to the user, but choosing convergence _will_ result in different
 stack behaviour (such as multiple concurrent update operations) so there is
 an argument for giving the user the choice. Given enough supporting
 documentation they can choose whether convergence might be worth trying for
 a given stack (for example, a large stack which receives frequent updates)

 Operators likely won't feel they have enough knowledge to make the call that
 a heat install should be switched to using all convergence, and users will
 never be able to try it until the operators do (or the default switches).

 Finally, there are also some benefits to heat developers. Creating a whole
 new gate job to test convergence-enabled heat will consume its share of CI
 resource. I'm hoping to make it possible for some of our functional tests to
 run against a number of scenarios/environments. Being able to run tests
 under classic and convergence scenarios in one test run will be a great help
 (for performance profiling too).

 If there is enough agreement then I'm fine with taking over and updating the
 convergence-config-option spec.

 [1]
 https://review.openstack.org/#/c/152301/2/specs/kilo/convergence-config-option.rst

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-02-02 Thread Everett Toews
On Feb 2, 2015, at 7:24 PM, Sean Dague s...@dague.net wrote:

On 02/02/2015 05:35 PM, Jay Pipes wrote:
On 01/29/2015 12:41 PM, Sean Dague wrote:
Correct. This actually came up at the Nova mid cycle in a side
conversation with Ironic and Neutron folks.

HTTP error codes are not sufficiently granular to describe what happens
when a REST service goes wrong, especially if it goes wrong in a way
that would let the client do something other than blindly try the same
request, or fail.

Having a standard json error payload would be really nice.

{
  "fault": "ComputeFeatureUnsupportedOnInstanceType",
  "message": "This compute feature is not supported on this kind of
instance type. If you need this feature please use a different instance
type. See your cloud provider for options."
}

That would let us surface more specific errors.
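A client could then branch on the machine-readable fault code instead of guessing from the HTTP status. A hedged sketch — the "fault"/"message" keys follow the example above; the helper name and fallbacks are illustrative, not a proposed API:

```python
import json


def parse_fault(body: str) -> tuple:
    """Extract the machine-readable fault code and the human-readable
    message from a standardized JSON error payload, falling back to
    generic values when the keys are absent."""
    data = json.loads(body)
    return (data.get("fault", "UnknownFault"),
            data.get("message", "No details provided."))


fault, message = parse_fault(
    '{"fault": "ComputeFeatureUnsupportedOnInstanceType",'
    ' "message": "This compute feature is not supported."}'
)
assert fault == "ComputeFeatureUnsupportedOnInstanceType"
```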
snip

Standardization here from the API WG would be really great.

What about having a separate HTTP header that indicates the OpenStack
Error Code, along with a generated URI for finding more information
about the error?

Something like:

X-OpenStack-Error-Code: 1234
X-OpenStack-Error-Help-URI: http://errors.openstack.org/1234

That way is completely backwards compatible (since we wouldn't be
changing response payloads) and we could handle i18n entirely via the
HTTP help service running on errors.openstack.org.

That could definitely be implemented in the short term, but if we're
talking about API WG long term evolution, I'm not sure why a standard
error payload body wouldn't be better.

Agreed. And using the "X-" prefix in headers has been deprecated for over 2 
years now [1]. I don't think we should be using it for new things.

Everett

[1] https://tools.ietf.org/html/rfc6648

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 2/3

2015-02-02 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)


1)  Remove direct nova DB/API access by Scheduler Filters - 
https://review.openstack.org/138444/

2)  Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo


--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Team meeting today

2015-02-02 Thread Kyle Mestery
Just a reminder, we'll have the weekly Neutron meeting [1] at 2100UTC in
#openstack-meeting today. We'll likely spend the majority of the time going
over any critical bugs, as well as covering BPs for Kilo-2 which have yet
to land this week. The other two standing items we'll discuss are the
nova-network to neutron migration, and the plugin decomposition.

Please feel free to add other items in the On Demand section of the
agenda [2].

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
[2] https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-02-02 Thread Chris Dent

On Thu, 29 Jan 2015, michael mccune wrote:

in a similar vein, i started to work on marking up the sahara and barbican 
code bases to produce swagger. for sahara this was a little easier as flask 
makes it simple to query the paths. for barbican i started a pecan-swagger[1] 
project to aid in marking up the code. it's still in infancy but i have a few 
ideas.


pecan-swagger looks cool but presumably pecan has most of the info
you're putting in the decorators in itself already? So, given an
undecorated pecan app, would it be possible to provide it to a function
and have that function output all the paths?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration ends

2015-02-02 Thread Vladik Romanovsky


- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: Robert Collins robe...@robertcollins.net
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org,
 openstack-operat...@lists.openstack.org
 Sent: Monday, 2 February, 2015 5:56:56 AM
 Subject: Re: [openstack-dev] [nova][libvirt] RFC: ensuring live migration 
 ends
 
 On Mon, Feb 02, 2015 at 08:24:20AM +1300, Robert Collins wrote:
  On 31 January 2015 at 05:47, Daniel P. Berrange berra...@redhat.com
  wrote:
   In working on a recent Nova migration bug
  
 https://bugs.launchpad.net/nova/+bug/1414065
  
   I had cause to refactor the way the nova libvirt driver monitors live
   migration completion/failure/progress. This refactor has opened the
   door for doing more intelligent active management of the live migration
   process.
  ...
   What kind of things would be the biggest win from Operators' or tenants'
   POV ?
  
  Awesome. Couple thoughts from my perspective. Firstly, there's a bunch
  of situation dependent tuning. One thing Crowbar does really nicely is
  that you specify the host layout in broad abstract terms - e.g. 'first
  10G network link' and so on: some of your settings above, like whether
  to compress pages, are going to be heavily dependent on the bandwidth
  available (I doubt that compression is a win on a 100G link for
  instance, and would be suspect at 10G even). So it would be nice if
  there was a single dial or two to set and Nova would auto-calculate
  good defaults from that (with appropriate overrides being available).
 
 I wonder how such an idea would fit into Nova, since it doesn't really
 have that kind of knowledge about the network deployment characteristics.
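For illustration, the kind of auto-calculated defaults Robert describes might look like this; the thresholds and setting names are invented for the sketch, not actual Nova options:

```python
# Hypothetical "single dial": derive live-migration tunables from the
# advertised bandwidth of the migration link. Values are illustrative.

def migration_tunables(link_gbps):
    return {
        # page compression burns CPU, so only enable it on slow links
        "compress_pages": link_gbps < 10,
        # allow a longer total migration time budget on slower links
        "max_migration_seconds": max(60, int(600 / link_gbps)),
    }

print(migration_tunables(1))    # slow link: compression on, long budget
print(migration_tunables(100))  # fast link: compression off, 60s floor
```

Overrides would still be available per the suggestion above; the dial only sets defaults.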
 
  Operationally avoiding trouble is better than being able to fix it, so
  I quite like the idea of defaulting the auto-converge option on, or
  perhaps making it controllable via flavours, so that operators can
  offer (and identify!) those particularly performance sensitive
  workloads rather than having to guess which instances are special and
  which aren't.
 
 I'll investigate the auto-converge further to find out what the
 potential downsides of it are. If we can unconditionally enable
 it, it would be simpler than adding yet more tunables.
 
  Being able to cancel the migration would be good. Relatedly being able
  to restart nova-compute while a migration is going on would be good
  (or put differently, a migration happening shouldn't prevent a deploy
  of Nova code: interlocks like that make continuous deployment much
  harder).
  
  If we can't already, I'd like as a user to be able to see that the
  migration is happening (allows diagnosis of transient issues during
  the migration). Some ops folk may want to hide that of course.
  
  I'm not sure that automatically rolling back after N minutes makes
  sense : if the impact on the cluster is significant then 1 minute vs
  10 doesn't intrinsically matter: what matters more is preventing too
  many concurrent migrations, so that would be another feature that I
  don't think we have yet: don't allow more than some N inbound and M
  outbound live migrations to a compute host at any time, to prevent IO
  storms. We may want to log with NOTIFICATION migrations that are still
  progressing but appear to be having trouble completing. And of course
  an admin API to query all migrations in progress to allow API driven
  health checks by monitoring tools - which gives the power to manage
  things to admins without us having to write a probably-too-simple
  config interface.
 
 Interesting, the point about concurrent migrations hadn't occurred to
 me before, but it does of course make sense since migration is
 primarily network bandwidth limited, though disk bandwidth is relevant
 too if doing block migration.

Indeed, a lot of time was spent investigating this topic (in oVirt, again),
and eventually it was decided to expose a config option and allow 3 concurrent
migrations by default.

https://github.com/oVirt/vdsm/blob/master/lib/vdsm/config.py.in#L126

 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Thierry Carrez
Daniel P. Berrange wrote:
 On Fri, Jan 30, 2015 at 04:38:44PM -0600, Matt Riedemann wrote:

 Deprecation isn't a one-way street really, nova-network was deprecated for a
 couple of releases and then undeprecated and opened up again for feature
 development (at least for a short while until the migration to neutron is
 sorted out and implemented).
 
 Nova-network was prematurely deprecated as the alternative was not fully
 ready. That's a prime example of why we should not be deprecating EC2
 right now either.
 
 Deprecation is a mechanism by which you inform users that they should
 stop using the current functionality and switch to $NEW-THING as the
 replacement. In the case of nova-network they could not switch because
 neutron did not offer feature parity at the time we were asking them
 to switch (nor did it have an upgrade path for that matter). Likewise
 in the case of the EC2 API, the alternative is not ready for users to
 actually switch to at a production quality level.
 
 What we are actually trying to tell users is that we think the out of tree
 EC2 implementation is the long term strategic direction of the EC2
 support with Nova, and that the current in tree impl is not being actively
 developed. That's a sensible thing to tell our users, but deprecation is
 the wrong mechanism for this. It is a task best suited for release notes.
 Keep deprecation available as a mechanism for telling users that the time
 has come for them to actively switch their deployments to the new impl.

I'm with Daniel on that one. We shouldn't deprecate until we are 100%
sure that the replacement is up to the task and that strategy is solid.

Today, we are just figuring out the strategy between the mainline EC2
support and the separated EC2 support repository, and we have some
promised resources to work on the issue. We have been there before (a
few times), and if we had deprecated the EC2 support on that promise
back then, we would have put ourselves in an odd corner. Today is not
really the best moment to deprecate. Announcing the proposed strategy,
however, is good information to send to our users.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Dan Smith
 I'm with Daniel on that one. We shouldn't deprecate until we are 100%
 sure that the replacement is up to the task and that strategy is solid.

My problem with this is: If there wasn't a stackforge project, what
would we do? Nova's in-tree EC2 support has been rotting for years now,
and despite several rallies for developers, no real progress has been
made to rescue it. I don't think that it's reasonable to say that if
there wasn't a stackforge project we'd just have to suck it up and
magically produce the developers to work on EC2; it's clear that's not
going to happen.

Thus, it seems to me that we need to communicate that our EC2 support is
going away. Hopefully the stackforge project will be at a point to
support users that want to keep the functionality. However, the fate of
our in-tree support seems clear regardless of how that turns out.

--Dan





Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 07:01 AM, Alexandre Levine wrote:
 Michael,
 
 I'm rather new here, especially in regard to communication matters, so
 I'd also be glad to understand how it's done and then I can drive it if
 it's ok with everybody.
  By saying EC2 sub team - whom did you have in mind? From my team, 3
  people are involved.
 
 From the technical point of view the transition plan could look somewhat
 like this (sequence can be different):
 
 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
 2. Contribute Tempest tests for EC2 functionality and employ them
 against nova's EC2.
 3. Write spec for required API to be exposed from nova so that we get
 full info.
 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
 6. Communicate and discover all of the existing questions and
 problematic points for the switching from existing EC2 API to the new
 one. Provide solutions or decisions about them.
 7. Do performance testing of the new stackforge/ec2 and provide fixes if
 any bottlenecks come up.
 8. Have all of the above prepared for the Vancouver summit and discuss
 the situation there.
 
 Michael, I am still wondering, who's going to be responsible for timely
 reviews and approvals of the fixes and tests we're going to contribute
  to nova? So far this is the biggest risk. Is there any way to allow some
 of us to participate in the process?

It would also be really helpful if there were reviews from your team on
any ec2-touching code.

https://review.openstack.org/#/q/file:%255Enova/api/ec2.*+status:open,n,z

There currently are only a few patches which touch ec2 that are ec2
function/bug related, and mostly don't have any scored reviews.
Especially this series -
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/ec2-volume-and-snapshot-tags,n,z


Which is a month old with no scoring.

-Sean

 
 Best regards,
   Alex Levine
 
 On 2/2/15 2:46 AM, Michael Still wrote:
 So, its exciting to me that we seem to developing more forward
 momentum here. I personally think the way forward is a staged
 transition from the in-nova EC2 API to the stackforge project, with
 testing added to ensure that we are feature complete between the two.
 I note that Soren disagrees with me here, but that's ok -- I'd like to
 see us work through that as a team based on the merits.

 So... It sounds like we have an EC2 sub team forming. How do we get
 that group meeting to come up with a transition plan?

 Michael

 On Sun, Feb 1, 2015 at 4:12 AM, Davanum Srinivas dava...@gmail.com
 wrote:
 Alex,

 Very cool. thanks.

 -- dims

 On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
 alev...@cloudscaling.com wrote:
 Davanum,

  Now that the picture with both EC2 API solutions has cleared up
 a bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:
 Alexandre, Randy,

 Are there plans afoot to add support to switch on stackforge/ec2-api
 in devstack? add tempest tests etc? CI Would go a long way in
 alleviating concerns i think.

 thanks,
 dims

 On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com
 wrote:
 As you know we have been driving forward on the stack forge
 project and
  it's our intention to continue to support it over time, plus
  reinvigorate
  the GCE APIs when that makes sense. So we're supportive of
  deprecating
  from Nova to focus on EC2 API in Nova.  I also think it's good for
 these
 APIs to be able to iterate outside of the standard release cycle.



 --Randy

 VP, Technology, EMC Corporation
  Formerly Founder & CEO, Cloudscaling (now a part of EMC)
 +1 (415) 787-2253 [google voice]
 TWITTER: twitter.com/randybias
 LINKEDIN: linkedin.com/in/randybias
 ASSISTANT: ren...@emc.com






 On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
  implementation is in a pretty sad state. I won't repeat all of those
 details here -- you can read the thread on openstack-dev for detail.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

 So, does the Foundation have a role here? The Nova team has
 failed to
 find someone to help us resolve these issues. Can the board perhaps
 find resources as the representatives of some of the largest
 contributors to OpenStack? Could the Foundation employ someone to
 help
  us out here?

 I suspect the correct plan is to work on getting the stackforge
 replacement finished, and ensuring that it is feature compatible
 with
 the Nova implementation. However, I don't want to preempt the design
 process -- there might be other ways forward here.

 I feel that a continued discussion which just repeats the last 18
  months won't actually fix the situation -- it's time to break out of
 that 

Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/30/2015 07:23 AM, Sandy Walsh wrote:


From: Johannes Erdfelt [johan...@erdfelt.com]
Sent: Thursday, January 29, 2015 9:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

On Thu, Jan 29, 2015, Morgan Fainberg morgan.fainb...@gmail.com wrote:

The concept that there is a utility that can (and in many cases
willfully) cause permanent, and in some cases irrevocable, data loss
from a simple command line interface sounds crazy when I try and
explain it to someone.

The more I work with the data stored in SQL, the more I think we
should really recommend the tried-and-true best practice when trying
to revert a migration: restore your DB to a known good state.

You mean like restoring from backup?

Unless your code deploy fails before it has any chance of running, then
you could have had new instances started or instances changed and then
restoring from backups would lose data.

If you meant another way of restoring your data, then there are
some strategies that downgrades could employ that don't lose data,
but there is nothing that can handle 100% of cases.

All of that said, for the Rackspace Public Cloud, we have never rolled
back our deploy. We have always rolled forward for any fixes we needed.


From my perspective, I'd be fine with doing away with downgrades, but
I'm not sure how to document that deployers should roll forward if they
have any deploy problems.

JE

Yep ... downgrades simply aren't practical with a SQL-schema based
solution. Too coarse-grained.

We'd have to move to a schema-less model, per-record versioning and
up-down conversion at the Nova Objects layer. Or, possibly introduce
more nodes that can deal with older versions. Either way, that's a big
hairy change.


Horse pocky!  Schemaless means an implied contract instead of an explicit
one.  That would be madness.  Please take the NoSQL good, SQL bad approach
out of the conversation, as absotutely (yes, absotutely) everything we have
here is doubly true for NoSQL; we just don't hammer on it as much.  We
don't even document the record formats in the NoSQL cases in Keystone, so
we can break them both willy and nilly, but have often found that we are
just stuck.  Usually we are only dealing with the token table, and so
we just dump the old tokens and shake our heads sadly.







The upgrade code is still required, so removing the downgrades (and
tests, if any) is a relatively small change to the code base.

The bigger issue is the anxiety the deployer will experience until a
patch lands.

-S
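The roll-forward discipline described above can be sketched with sqlite3 to keep it self-contained (Nova itself used sqlalchemy-migrate at the time): the upgrade is additive, and the downgrade is an explicit refusal rather than a lossy reversal.

```python
# Upgrade-only migration sketch: additive schema change plus an explicit
# refusal to downgrade. Table and column names are illustrative.

import sqlite3

def upgrade(conn):
    # additive change: existing rows are preserved
    conn.execute("ALTER TABLE instances ADD COLUMN hostname TEXT")

def downgrade(conn):
    # dropping the column would silently destroy data; roll forward instead
    raise NotImplementedError("downgrade unsupported: restore from backup "
                              "or apply a new forward migration")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO instances (id) VALUES (1)")
upgrade(conn)
columns = [row[1] for row in conn.execute("PRAGMA table_info(instances)")]
print(columns)  # ['id', 'hostname']
```

The refusal in downgrade() is one way to document the roll-forward expectation in the code itself.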

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Boris Pavlovic
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
 Thank you Sean.

 We'll be sending tons of EC2 Tempest tests for your attention shortly.
 How would you prefer them? In several reviews, I believe. Not in one,
 right?

 Best regards,
   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
 tests out of the Tempest tree as well and into a more dedicated place.
 Like as part of the stackforge project tree. Given that the right
 expertise would be there as well. It could use tempest-lib for some of
 the common parts.



The Rally team would be happy to accept some of the tests, and we also
support in-tree plugins.
So part of the tests (those that are only for hardcore functional testing
and not reusable in real life)
can stay in the ec2-api tree.

Best regards,
Boris Pavlovic


On Mon, Feb 2, 2015 at 7:39 PM, Sean Dague s...@dague.net wrote:

 On 02/02/2015 11:35 AM, Alexandre Levine wrote:
  Thank you Sean.
 
   We'll be sending tons of EC2 Tempest tests for your attention shortly.
  How would you prefer them? In several reviews, I believe. Not in one,
  right?
 
  Best regards,
Alex Levine

 So, honestly, I think that we should probably look at getting the ec2
 tests out of the Tempest tree as well and into a more dedicated place.
 Like as part of the stackforge project tree. Given that the right
 expertise would be there as well. It could use tempest-lib for some of
 the common parts.

 -Sean

 
  On 2/2/15 6:55 PM, Sean Dague wrote:
  On 02/02/2015 07:01 AM, Alexandre Levine wrote:
  Michael,
 
  I'm rather new here, especially in regard to communication matters, so
  I'd also be glad to understand how it's done and then I can drive it if
  it's ok with everybody.
   By saying EC2 sub team - whom did you have in mind? From my team, 3
   people are involved.
 
   From the technical point of view the transition plan could look
  somewhat
  like this (sequence can be different):
 
  1. Triage EC2 bugs and fix showstoppers in nova's EC2.
  2. Contribute Tempest tests for EC2 functionality and employ them
  against nova's EC2.
  3. Write spec for required API to be exposed from nova so that we get
  full info.
  4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
  5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
  6. Communicate and discover all of the existing questions and
  problematic points for the switching from existing EC2 API to the new
  one. Provide solutions or decisions about them.
  7. Do performance testing of the new stackforge/ec2 and provide fixes
 if
  any bottlenecks come up.
  8. Have all of the above prepared for the Vancouver summit and discuss
  the situation there.
 
  Michael, I am still wondering, who's going to be responsible for timely
  reviews and approvals of the fixes and tests we're going to contribute
   to nova? So far this is the biggest risk. Is there any way to allow some
  of us to participate in the process?
   I am happy to volunteer to shepherd these reviews. I'll try to keep an
  eye on them, and if something is blocking please just ping me directly
  on IRC in #openstack-nova or bring them forward to the weekly Nova
  meeting.
 
  -Sean
 
 
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Sahid Orentino Ferdjaoui
On Mon, Feb 02, 2015 at 10:44:09AM -0600, Chris Friesen wrote:
 Hi,
 
 I'm trying to make use of huge pages as described in
  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html.
 I'm running kilo as of Jan 27th.
 I've allocated 1 2MB pages on a compute node.  virsh capabilities on 
 that node contains:
 
  <topology>
    <cells num='2'>
      <cell id='0'>
        <memory unit='KiB'>67028244</memory>
        <pages unit='KiB' size='4'>16032069</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
  ...
      <cell id='1'>
        <memory unit='KiB'>67108864</memory>
        <pages unit='KiB' size='4'>16052224</pages>
        <pages unit='KiB' size='2048'>5000</pages>
        <pages unit='KiB' size='1048576'>1</pages>
 
 
 I then restarted nova-compute, I set hw:mem_page_size=large on a
 flavor, and then tried to boot up an instance with that flavor.  I
 got the error logs below in nova-scheduler.  Is this a bug?

Hello,

Launchpad.net would be more appropriate for
discussing something which looks like a bug.

  https://bugs.launchpad.net/nova/+filebug

According to your trace I would say you are running different versions
of Nova services.

BTW, please verify your version of libvirt. Hugepages are supported
starting with 1.2.8 (but this should definitely not fail so badly like
that).
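For reference, those checks might look like this on a compute node; the flavor name here is made up:

```shell
# check the libvirt version (huge page support needs >= 1.2.8)
libvirtd --version

# confirm per-NUMA-cell huge page counts reported by libvirt
virsh capabilities | grep -A3 "<pages"

# the flavor setting from the original report (Kilo-era CLI syntax)
nova flavor-key m1.hugepages set hw:mem_page_size=large
```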

s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/02/2015 12:54 AM, Christopher Yeoh wrote:
 
 
 On Sun, Feb 1, 2015 at 2:57 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 On 01/31/2015 05:24 AM, Duncan Thomas wrote:
  Hi
 
  This discussion came up at the cinder mid-cycle last week too,
  specifically in the context of 'Can we change the details text in an
  existing error, or is that an unacceptable API change'.
 
  I have to second security / operational concerns about exposing
 too much
  granularity of failure in these error codes.
 
  For cases where there is something wrong with the request (item out of
  range, invalid names, feature not supported, etc) I totally agree that
  we should have good, clear, parsable response, and standardisation
 would
  be good. Having some fixed part of the response (whether a numeric
 code
  or, as I tend to prefer, a CamelCaseDescription so that I don't
 have to
  go look it up) and a human readable description section that is
 subject
  to change seems sensible.
 
  What I would rather not see is leakage of information when something
  internal to the cloud goes wrong, that the tenant can do nothing
  against. We certainly shouldn't be leaking internal implementation
  details like vendor details - that is what request IDs and logs
 are for.
  The whole point of the cloud, to me, is that separation between the
  things a tenant controls (what they want done) and what the cloud
  provider controls (the details of how the work is done).
 
  For example, if a create volume request fails because cinder-scheduler
  has crashed, all the tenant should get back is 'Things are broken, try
  again later or pass request id 1234-5678-abcd-def0 to the cloud
 admin'.
   They shouldn't need to, or even be allowed to, care about the details
  of the
   failure; it is not their domain.
 
 Sure, the value really is in determining things that are under the
 client's control to do differently. A concrete one is a multi hypervisor
 cloud with 2 hypervisors (say kvm and docker). The volume attach
 operation to a docker instance (which presumably is a separate set of
 instance types) can't work. The user should be told that that can't work
 with this instance_type if they try it.
 
 That's actually user correctable information. And doesn't require a
 ticket to move forward.
 
 I also think we could have a detail level knob, because I expect the
 level of information exposure might be considered different in public
 cloud use case vs. a private cloud at an org level or a private cloud at
 a dept level.
 
 
 That could turn into a major compatibility issue if what we returned
 could (and
 probably would between public/private) change between clouds? If we want
 to encourage
 people to parse this sort of thing I think we need to settle on whether
 we send the
 information back or not for everyone. 

Sure, it's a theoretical concern. We're not going to get anywhere rat
holing on theoretical concerns though; let's get some concrete instances
out there to discuss.
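As one concrete instance, the fixed-code-plus-message shape Duncan suggested earlier in the thread might look like this; the field names and error code are illustrative, not a proposed standard:

```python
# Hypothetical standardized error body: a stable machine-parsable code,
# a human-readable message that may change, and the request id as a
# handle into operator-side logs.

import json

def error_body(code, message, request_id):
    return json.dumps({
        "error": {
            "code": code,              # fixed, safe for clients to parse
            "message": message,        # human-readable, free to evolve
            "request_id": request_id,  # correlates with data-center logs
        }
    })

body = error_body("VolumeAttachUnsupported",
                  "Volume attach is not supported for this instance type.",
                  "1234-5678-abcd-def0")
print(body)
```

Clients would branch on "code" only, which keeps the human text free to change without breaking the API contract.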

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 3 is dead, long live Python 3

2015-02-02 Thread Jeremy Stanley
After a long wait and much testing, we've merged a change[1] which
moves the remainder of Python 3.3 based jobs to Python 3.4. This is
primarily in service of getting rid of the custom workers we
implemented to perform 3.3 testing more than a year ago, since we
can now run 3.4 tests on normal Ubuntu Trusty workers (with the
exception of a couple bugs[2][3] which have caused us to temporarily
suspend[4] Py3K jobs for oslo.messaging and oslo.rootwrap).

I've personally tested `tox -e py34` on every project hosted in our
infrastructure which was gating on Python 3.3 jobs and they all
still work, so you shouldn't see any issues arise from this change.
If you do, however, please let the Infrastructure team know about it
as soon as possible. Thanks!

[1] https://review.openstack.org/151713
[2] https://launchpad.net/bugs/1367907
[3] https://launchpad.net/bugs/1382607
[4] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055270.html
-- 
Jeremy Stanley




Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-02-02 Thread Sean Dague
On 02/01/2015 06:20 PM, Morgan Fainberg wrote:
 Putting on my sorry-but-it-is-my-job-to-get-in-your-way hat (aka security), 
 let's be careful how generous we are with the user and data we hand back. It 
 should give enough information to be useful but no more. I don't want to see 
 us opened to weird attack vectors because we're exposing internal state too 
 generously. 
 
 In short let's aim for a slow roll of extra info in, and evaluate each data 
 point we expose (about a failure) before we do so. Knowing more about a 
 failure is important for our users. Allowing easy access to information that 
 could be used to attack / increase impact of a DOS could be bad. 
 
 I think we can do it but it is important to not swing the pendulum too far 
 the other direction too fast (give too much info all of a sudden). 

Security by cloud obscurity?

I agree we should evaluate information sharing with security in mind.
However, the black boxing level we have today is bad for OpenStack. At a
certain point once you've added so many belts and suspenders, you can no
longer walk normally any more.

Anyway, let's stop having this discussion in the abstract and actually just
evaluate the cases in question that come up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Alexandre Levine

Thank you Sean.

We'll be sending tons of EC2 Tempest tests for your attention shortly.
How would you prefer them? In several reviews, I believe. Not in one, right?

Best regards,
  Alex Levine

On 2/2/15 6:55 PM, Sean Dague wrote:

On 02/02/2015 07:01 AM, Alexandre Levine wrote:

Michael,

I'm rather new here, especially in regard to communication matters, so
I'd also be glad to understand how it's done and then I can drive it if
it's ok with everybody.
By saying EC2 sub team - whom did you have in mind? From my team, 3
people are involved.

 From the technical point of view the transition plan could look somewhat
like this (sequence can be different):

1. Triage EC2 bugs and fix showstoppers in nova's EC2.
2. Contribute Tempest tests for EC2 functionality and employ them
against nova's EC2.
3. Write spec for required API to be exposed from nova so that we get
full info.
4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
6. Communicate and discover all of the existing questions and
problematic points for the switching from existing EC2 API to the new
one. Provide solutions or decisions about them.
7. Do performance testing of the new stackforge/ec2 and provide fixes if
any bottlenecks come up.
8. Have all of the above prepared for the Vancouver summit and discuss
the situation there.

Michael, I am still wondering, who's going to be responsible for timely
reviews and approvals of the fixes and tests we're going to contribute
to nova? So far this is the biggest risk. Is there any way to allow some
of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-02-02 Thread michael mccune

On 02/02/2015 10:26 AM, Chris Dent wrote:

pecan-swagger looks cool but presumably pecan has most of the info
you're putting in the decorators in itself already? So, given an
undecorated pecan app, would it be possible to provide it to a function
and have that function output all the paths?



you are correct, pecan is storing most of the information we want in 
its controller metadata. i am working on the next version of 
pecan-swagger now that will reduce the need for so many decorators, and 
instead pull the endpoint information out of the pecan based controller 
classes.


in terms of having a completely undecorated pecan app, i'm not sure 
that's possible just yet due to the object-dispatch routing used by 
pecan. in the next version of pecan-swagger i'm going to reduce the 
decorators to only be needed on controller classes, but i'm not sure 
that it will be possible to reduce further as there will need to be some 
way to learn the route path hierarchy.


i suppose in the future it might be advantageous to create a pecan 
controller base class that could help inform the routing structure, but 
this would still need to be added to current pecan projects.
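A rough sketch of that direction, using a hypothetical API rather than the real pecan-swagger interface: a single class decorator records each controller's route segment, and a walker emits swagger-style paths from the registered controllers' HTTP-verb-like methods.

```python
# Hypothetical minimal pecan-swagger-style markup: one decorator per
# controller class, then a walker that collects (path, operation) pairs.

_registry = []

def path(segment):
    """Class decorator: tag a controller with its route segment."""
    def wrap(cls):
        cls._swagger_segment = segment
        _registry.append(cls)
        return cls
    return wrap

def collect_paths():
    """Emit (path, operation) pairs for every registered controller."""
    pairs = []
    for cls in _registry:
        for name in ('get', 'get_all', 'post', 'put', 'delete'):
            if callable(getattr(cls, name, None)):
                pairs.append(('/' + cls._swagger_segment, name))
    return pairs

@path('clusters')
class ClusterController(object):
    def get_all(self):
        return []
    def post(self, data):
        return data

print(collect_paths())  # [('/clusters', 'get_all'), ('/clusters', 'post')]
```

Learning the route hierarchy without any decorator would still require inspecting pecan's object-dispatch routing, as noted above.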



mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 11:35 AM, Alexandre Levine wrote:
 Thank you Sean.
 
  We'll be sending tons of EC2 Tempest tests for your attention shortly.
 How would you prefer them? In several reviews, I believe. Not in one,
 right?
 
 Best regards,
   Alex Levine

So, honestly, I think that we should probably look at getting the ec2
tests out of the Tempest tree as well and into a more dedicated place.
Like as part of the stackforge project tree. Given that the right
expertise would be there as well. It could use tempest-lib for some of
the common parts.

-Sean

 
 On 2/2/15 6:55 PM, Sean Dague wrote:
 On 02/02/2015 07:01 AM, Alexandre Levine wrote:
 Michael,

 I'm rather new here, especially in regard to communication matters, so
 I'd also be glad to understand how it's done and then I can drive it if
 it's ok with everybody.
 By saying EC2 sub team - who did you keep in mind? From my team 3
 persons are involved.

  From the technical point of view the transition plan could look
 somewhat
 like this (sequence can be different):

 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
 2. Contribute Tempest tests for EC2 functionality and employ them
 against nova's EC2.
 3. Write spec for required API to be exposed from nova so that we get
 full info.
 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
 6. Communicate and discover all of the existing questions and
 problematic points for the switching from existing EC2 API to the new
 one. Provide solutions or decisions about them.
 7. Do performance testing of the new stackforge/ec2 and provide fixes if
 any bottlenecks come up.
 8. Have all of the above prepared for the Vancouver summit and discuss
 the situation there.

 Michael, I am still wondering, who's going to be responsible for timely
 reviews and approvals of the fixes and tests we're going to contribute
 to nova? So far this is the biggest risk. Is there any way to allow some
 of us to participate in the process?
 I am happy to volunteer to shepherd these reviews. I'll try to keep an
 eye on them, and if something is blocking please just ping me directly
 on IRC in #openstack-nova or bring them forward to the weekly Nova
 meeting.

 -Sean

 
 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-02-02 Thread Evgeniy L
Hi Dmitry,

I've read about inventories and I'm not sure it's what we really need;
an inventory gives you a kind of node-discovery mechanism, but what we
need is to get some abstract data and convert it into a more
task-friendly format.

In another thread I've mentioned Variables [1] in Ansible; it probably
fits better than inventory from an architecture point of view.

With this functionality a plugin will be able to get the required
information from Nailgun via the REST API and pass it into a specific task.

But it's not the way to go for the core deployment. I would like to remind
you what we had two years ago: we had Nailgun, which passed the information
in format A to the Orchestrator (Astute), and then the Orchestrator
converted it into a second format B. It was horrible from a debugging point
of view; it's always hard when you have to look in several places to figure
out what you get as a result. You have a pretty similar design suggestion,
which divides serialization logic between Nailgun and another layer in task
scripts.

Thanks,

[1] http://docs.ansible.com/playbooks_variables.html#registered-variables

On Mon, Feb 2, 2015 at 5:05 PM, Dmitriy Shulyak dshul...@mirantis.com
wrote:


  But why to add another interface when there is one already (rest api)?

 I'm ok if we decide to use REST API, but of course there is a problem
 which
 we should solve, like versioning, which is much harder to support than
 versioning
 in core-serializers. Also do you have any ideas how it can be implemented?


 We need to think about deployment serializers not as part of nailgun (fuel
 data inventory), but - part of another layer which uses nailgun api to
 generate deployment information. Lets take ansible for example, and
 dynamic inventory feature [1].
 Nailgun API can be used inside of ansible dynamic inventory to generate
 config that will be consumed by ansible during deployment.

 Such approach will have several benefits:
 - cleaner interface (ability to use ansible as main interface to control
 deployment and all its features)
 - deployment configuration will be tightly coupled with deployment code
 - no limitation on what sources to use for configuration, and how to
 compute additional values from requested data
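The dynamic-inventory architecture described above might look roughly like this. fetch_nodes() is stubbed here; a real script would GET something like http://<master>:8000/api/nodes from Nailgun, and both that URL and the payload shape are assumptions for illustration, not the actual Nailgun contract.

```python
# Sketch of an Ansible dynamic inventory backed by a deployment REST API.
# Ansible runs the script with --list and expects inventory JSON on stdout.
import json

def fetch_nodes():
    # Stand-in for a REST call to Nailgun's node list (hypothetical payload).
    return [
        {"hostname": "node-1", "ip": "10.20.0.3", "roles": ["controller"]},
        {"hostname": "node-2", "ip": "10.20.0.4", "roles": ["compute"]},
    ]

def build_inventory(nodes):
    """Convert node data into Ansible's dynamic-inventory JSON structure."""
    inventory = {"_meta": {"hostvars": {}}}
    for node in nodes:
        for role in node["roles"]:
            inventory.setdefault(role, {"hosts": []})["hosts"].append(node["hostname"])
        inventory["_meta"]["hostvars"][node["hostname"]] = {"ansible_host": node["ip"]}
    return inventory

if __name__ == "__main__":
    print(json.dumps(build_inventory(fetch_nodes()), indent=2))
```

the benefit claimed above shows up in the shape of the code: all serialization lives in one place, next to the deployment tooling, instead of being split across Nailgun and the orchestrator.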

 I want to emphasize that i am not considering ansible as solution for
 fuel, it serves only as example of architecture.


 You run some code which gets the information from the API on the master
 node and then sets the information in tasks? Or are you going to run this
 code on OpenStack nodes? As you mentioned in the case of tokens, you
 should get the token right before you really need it, because of the
 expiry problem, but in that case you don't need any serializers; get the
 required token right in the task.


 I think all information should be fetched before deployment.



  What is your opinion about serializing additional information in
 plugins code? How it can be done, without exposing db schema?

 By exposing the data in a more abstract way, the way it's done right now
 for the current deployment logic.


 I mean, what if a plugin wants to generate additional data, like
 https://review.openstack.org/#/c/150782/? Will the schema still be exposed?

 [1] http://docs.ansible.com/intro_dynamic_inventory.html






Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-02-02 Thread Adam Young

On 01/29/2015 03:11 PM, Mike Bayer wrote:


Morgan Fainberg morgan.fainb...@gmail.com wrote:


Are downward migrations really a good idea for us to support? Is this downward 
migration path a sane expectation? In the real world, would any one really 
trust the data after migrating downwards?

It’s a good idea for a migration script to include a rudimentary downgrade 
operation to complement the upgrade operation, if feasible.  The purpose of 
this downgrade is from




Except that it is code we need to maintain and support.  I think we 
are making more work for ourselves than the value these scripts provide 
justifies.

  a practical standpoint helpful when locally testing a specific, typically 
small series of migrations.

A downgrade however typically only applies to schema objects, and not so much 
data.   It is often impossible to provide downgrades of data changes as it is 
likely that a data upgrade operation was destructive of some data.  Therefore, 
when dealing with a full series of real world migrations that include data 
migrations within them, downgrades are typically impossible.   I’m getting the 
impression that our migration scripts have data migrations galore in them.

So I am +1 on establishing a policy that the deployer of the application would 
not have access to any “downgrade” migrations, and -1 on removing “downgrade” 
entirely from individual migrations.   Specific migration scripts may return 
NotImplemented for their downgrade if it's really not feasible, but for things 
like table and column changes where autogenerate has already rendered the 
downgrade, it’s handy to keep at least the smaller ones working.










[openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Chris Friesen
Hi,

I'm trying to make use of huge pages as described in
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-large-pages.html.
I'm running kilo as of Jan 27th.

I've allocated 1 2MB pages on a compute node.  virsh capabilities on that 
node contains:

<topology>
  <cells num='2'>
    <cell id='0'>
      <memory unit='KiB'>67028244</memory>
      <pages unit='KiB' size='4'>16032069</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>
...
    <cell id='1'>
      <memory unit='KiB'>67108864</memory>
      <pages unit='KiB' size='4'>16052224</pages>
      <pages unit='KiB' size='2048'>5000</pages>
      <pages unit='KiB' size='1048576'>1</pages>


I then restarted nova-compute, I set hw:mem_page_size=large on a flavor, and 
then tried to boot up an instance with that flavor.  I got the error logs below 
in nova-scheduler.  Is this a bug?


Feb  2 16:23:10 controller-0 nova-scheduler Exception during message handling: 
Cannot load 'mempages' in the base class
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139, in 
inner
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/manager.py, line 86, in 
select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py, line 
67, in select_destinations
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/filter_scheduler.py, line 
138, in _schedule
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
filter_properties, index=num)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/host_manager.py, line 391, 
in get_filtered_hosts
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher hosts, 
filter_properties, index)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/filters.py, line 77, in 
get_filtered_objects
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher list_objs 
= list(objs)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/filters.py, line 43, in filter_all
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher if 
self._filter_one(obj, filter_properties):
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/filters/__init__.py, line 
27, in _filter_one
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher return 
self.host_passes(obj, filter_properties)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py,
 line 45, in host_passes
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
limits_topology=limits))
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/virt/hardware.py, line 1161, in 
numa_fit_instance_to_host
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell, limit_cell)
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/nova/virt/hardware.py, line 851, in 
_numa_fit_instance_cell
2015-02-02 16:23:10.746 37521 TRACE oslo.messaging.rpc.dispatcher 
host_cell, instance_cell)

Re: [openstack-dev] [Heat][Keystone] Native keystone resources in Heat

2015-02-02 Thread Adam Young

On 01/30/2015 02:19 AM, Thomas Spatzier wrote:

From: Zane Bitter zbit...@redhat.com
To: openstack Development Mailing List

openstack-dev@lists.openstack.org

Date: 29/01/2015 17:47
Subject: [openstack-dev] [Heat][Keystone] Native keystone resources in

Heat

I got a question today about creating keystone users/roles/tenants in
Heat templates. We currently support creating users via the
AWS::IAM::User resource, but we don't have a native equivalent.

IIUC keystone now allows you to add users to a domain that is otherwise
backed by a read-only backend (i.e. LDAP). If this means that it's now
possible to configure a cloud so that one need not be an admin to create
users then I think it would be a really useful thing to expose in Heat.
Does anyone know if that's the case?

I think roles and tenants are likely to remain admin-only, but we have
precedent for including resources like that in /contrib... this seems
like it would be comparably useful.

Thoughts?

I am really not a keystone expert,

I am!  But when I grow up, I want to be a fireman!

so don't know what the security
implications would be, but I have heard the requirement or wish to be able
to create users, roles etc. from a template many times.
Should be possible.  LDAP can be read only, but these things can all go 
into SQL, and just have a loose coupling with the LDAP entities.




I've talked to
people who want to explore this for onboarding use cases, e.g. for
onboarding of lines of business in a company, or for onboarding customers
in a public cloud case. They would like to be able to have templates that
lay out the overall structure for authentication stuff, and then
parameterize it for each onboarding process.


Those domains, users, projects, etc. would all go into SQL.  The only 
case to use LDAP would be if their remote organization already had an 
LDAP system that contained users, and they wanted to reuse it.  There 
are issues there, and I suspect Federation (SAML) will be the mechanism 
of choice for these types of integrations, not LDAP.



If this is something to be enabled, that would be interesting to explore.

Regards,
Thomas


cheers,
Zane.











Re: [openstack-dev] problems with huge pages and libvirt

2015-02-02 Thread Jay Pipes
This is a bug that I discovered when fixing some of the NUMA related 
nova objects. I have a patch that should fix it up shortly.


This is what happens when we don't have any functional testing of stuff 
that is merged into master...


Best,
-jay

On 02/02/2015 11:44 AM, Chris Friesen wrote:
 [snip]
Re: [openstack-dev] [neutron] high dhcp lease times in neutron deployments considered harmful (or not???)

2015-02-02 Thread Brian Haley
Kevin,

I think we are finally converging.  One of the points I've been trying to make
is that users are playing with fire when they start playing with some of these
port attributes, and given the tool we have to work with (DHCP), the
instantiation of these changes cannot be made seamlessly to a VM.  That's life
in the cloud, and most of these things can (and should) be designed around.

On 02/02/2015 06:48 AM, Kevin Benton wrote:
 The only thing this discussion has convinced me of is that allowing users
 to change the fixed IP address on a neutron port leads to a bad
 user-experience.
 
 Not as bad as having to delete a port and create another one on the same
 network just to change addresses though...
 
 Even with an 8-minute renew time you're talking up to a 7-minute blackout
 (87.5% of lease time before using broadcast).
 
 I suggested 240 seconds renewal time, which is up to 4 minutes of
 connectivity outage. This doesn't have anything to do with lease time and
 unicast DHCP will work because the spoof rules allow DHCP client traffic
 before restricting to specific IPs.
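The numbers being traded here follow directly from RFC 2131's default timers: a client unicasts a renewal at T1 = 50% of the lease and falls back to broadcast rebinding at T2 = 87.5%, assuming the "8-minute" figure refers to the lease length.

```python
# DHCP timing arithmetic per RFC 2131 defaults: renew (unicast) at 50% of
# the lease, rebind (broadcast) at 87.5%. The worst-case window for a VM to
# keep using a stale address runs up to the T2 rebind point.

def dhcp_timers(lease_seconds):
    return {
        "renew_t1": lease_seconds * 0.5,     # unicast renewal attempt
        "rebind_t2": lease_seconds * 0.875,  # fall back to broadcast
    }

eight_min = dhcp_timers(8 * 60)      # T1 = 240s, T2 = 420s
one_day = dhcp_timers(24 * 3600)     # the 1-day default: T2 = 21 hours
print(eight_min["rebind_t2"] / 60)   # -> 7.0 (minutes)
```

this is why the lease-time default matters so much: with a 1-day lease the same worst case stretches to most of a day, which is the downtime the thread is arguing about.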

The unicast DHCP will make it to the wire, but if you've renumbered the subnet
either a) the DHCP server won't respond because its IP has changed as well; or
b) the DHCP server won't respond because there is no mapping for the VM on its
old subnet.

 Most would have rebooted long before then, true?  Cattle not pets, right?
 
 Only in an ideal world that I haven't encountered with customer deployments. 
 Many enterprise deployments end up bringing pets along where reboots aren't 
 always free. The time taken to relaunch programs and restore state can end
 up being 10 minutes+ if it's something like a VDI deployment or dev
 environment where someone spends a lot of time working on one VM.

This would happen if the AZ their VM was in went offline as well, at which point
they would change their design to be more cloud-aware than it was.  Let's not
heap all the blame on neutron - the user is tasked with vetting that their
decisions meet the requirements they desire by thoroughly testing it.

 Changing the lease time is just papering-over the real bug - neutron
 doesn't support seamless changes in IP addresses on ports, since it totally 
 relies on the dhcp configuration settings a deployer has chosen.
 
 It doesn't need to be seamless, but it certainly shouldn't be useless. 
 Connectivity interruptions can be expected with IP changes (e.g. I've seen 
 changes in elastic IPs on EC2 can interrupt connectivity to an instance for
 up to 2 minutes), but an entire day of downtime is awful.

Yes, I agree, an entire day of downtime is bad.

 One of the things I'm getting at is that a deployer shouldn't be choosing
 such high lease times and we are encouraging it with a high default. You are
 arguing for infrequent renewals to work around excessive logging, which is
 just an implementation problem that should be addressed with a patch to your
 logging collector (de-duplication) or to dnsmasq (don't log renewals).

My #1 deployment problem was around control-plane upgrade, not logging:

During a control-plane upgrade or outage, having a short DHCP lease time will
take all your VMs offline.  The old value of 2 minutes is not a realistic value
for an upgrade, and I don't think 8 minutes is much better.  Yes, when DHCP is
down you can't boot a new VM, but as long as customers can get to their existing
VMs they're pretty happy and won't scream bloody murder.

 Documenting a VM reboot is necessary, or even deprecating this (you won't
 like
 that) are sounding better to me by the minute.
 
 If this is an approach you really want to go with, then we should at least
 be consistent and deprecate the extra dhcp options extension (or at least
 the ability to update ports' dhcp options). Updating subnet attributes like 
 gateway_ip, dns_nameservers, and host_routes should be thrown out as well. All
 of these things depend on the DHCP server to deliver updated information and
 are hindered by renewal times. Why discriminate against IP updates on a port?
 A failure to receive many of those other types of changes could result in
 just as severe of a connection disruption.

How about a big (*) next to all the things that could cause issues?  :)  We've
completely loaded the gun exposing all these attributes to the general user
when only the network-aware power-user should be playing with them.

(*) Changing these attributes could cause VMs to become unresponsive for a long
period of time depending on the deployment settings, and should be used with
caution.  Sometimes a VM reboot will be required to re-gain connectivity.

 In summary, the information the DHCP server gives to clients is not static. 
 Unless we eliminate updates to everything in the Neutron API that results in 
 different DHCP lease information, my suggestion is that we include a new
 option for the renewal interval and have the default set 5 minutes. We can
 leave the lease default to 1 day so the 

Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-02-02 Thread Daniel P. Berrange
On Mon, Feb 02, 2015 at 07:44:24AM -0800, Dan Smith wrote:
  I'm with Daniel on that one. We shouldn't deprecate until we are 100%
  sure that the replacement is up to the task and that strategy is solid.
 
 My problem with this is: If there wasn't a stackforge project, what
 would we do? Nova's in-tree EC2 support has been rotting for years now,
 and despite several rallies for developers, no real progress has been
 made to rescue it. I don't think that it's reasonable to say that if
 there wasn't a stackforge project we'd just have to suck it up and
 magically produce the developers to work on EC2; it's clear that's not
 going to happen.

I think that is exactly what we would have to do. We exist as a project
to serve the needs of our users and it seems pretty clear from the survey
results that users are deploying the EC2 impl in significant numbers,
so to just remove it would essentially be ignoring what our users want
from the project. If we're saying it is reasonable to ignore what our
users want, then this project is frankly doomed.

 Thus, it seems to me that we need to communicate that our EC2 support is
 going away. Hopefully the stackforge project will be at a point to
 support users that want to keep the functionality. However, the fate of
 our in-tree support seems clear regardless of how that turns out.

If the external EC2 support doesn't work out for whatever reason, then
I don't think the fate of the in-tree support is at all clear. I think
it would have a very strong case for continuing to exist.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-02-02 Thread Sean Dague
On 02/02/2015 07:01 AM, Alexandre Levine wrote:
 Michael,
 
 I'm rather new here, especially in regard to communication matters, so
 I'd also be glad to understand how it's done and then I can drive it if
 it's ok with everybody.
 By saying EC2 sub team - who did you keep in mind? From my team 3
 persons are involved.
 
 From the technical point of view the transition plan could look somewhat
 like this (sequence can be different):
 
 1. Triage EC2 bugs and fix showstoppers in nova's EC2.
 2. Contribute Tempest tests for EC2 functionality and employ them
 against nova's EC2.
 3. Write spec for required API to be exposed from nova so that we get
 full info.
 4. Triage and fix all of the existing nova's EC2 bugs worth fixing.
 5. Set up Tempest testing of the stackforge/ec2 (if that's possible).
 6. Communicate and discover all of the existing questions and
 problematic points for the switching from existing EC2 API to the new
 one. Provide solutions or decisions about them.
 7. Do performance testing of the new stackforge/ec2 and provide fixes if
 any bottlenecks come up.
 8. Have all of the above prepared for the Vancouver summit and discuss
 the situation there.
 
 Michael, I am still wondering, who's going to be responsible for timely
 reviews and approvals of the fixes and tests we're going to contribute
 to nova? So far this is the biggest risk. Is there any way to allow some
 of us to participate in the process?

I am happy to volunteer to shepherd these reviews. I'll try to keep an
eye on them, and if something is blocking please just ping me directly
on IRC in #openstack-nova or bring them forward to the weekly Nova meeting.

-Sean

-- 
Sean Dague
http://dague.net


