Re: [openstack-dev] Which repo should the API WG use?

2015-01-31 Thread James E. Blair
Kevin L. Mitchell kevin.mitch...@rackspace.com writes:

 On Fri, 2015-01-30 at 22:33 +, Everett Toews wrote:
 It was suggested that the API WG use the openstack-specs [1] and/or
 the api-wg [2] repo to publish its guidelines. We’ve already arrived
 at the consensus that we should only use 1 repo [3]. So the purpose of
 this thread is to decide...
 
 Should the API WG use the openstack-specs repo or the api-wg repo?
 
 Let’s discuss.

 Well, the guidelines are just that: guidelines.  They don't implicitly
 propose changes to any OpenStack projects, just provide guidance for
 future API changes.  Thus, I think they should go in a repo separate
 from any of our *-specs repos; to me, a spec provides documentation of a
 change, and is thus independent of the guidelines.

Hi,

As a user of OpenStack I find the APIs inconsistent with each other.  My
understanding is that the API wg hopes to change this (thanks!).  Since the
current reality is almost certainly not going to be completely in
alignment with the result of the wg, I think some software will
necessarily have to change.

Consider the logging spec -- it says logs should look like this and use
these levels under these circumstances.  Many projects do not match
that at the moment, and will need changes.  I can imagine something
similar with the API wg.

Perhaps with APIs, things are a bit more complex and, in addition to a
cross-project spec, we would need individual project specs to say in
order to get foo's API consistent with the guidelines, we will need to
make these changes and support these behaviors during a deprecation
period.  If that's the case, we can certainly put that level of detail
in an individual project spec repo while keeping the cross-project spec
focused on what things _should_ look like.

At any rate, I think it is important that eventually the result of the
API wg causes technical change to happen, and as such, I think the
openstack-specs repo seems like a good place.  I believe that
openstack-specs also provides a good place for reference documentation
like this (and logging guidelines, etc) to be published indefinitely for
current and new projects.

-Jim



Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-01-31 Thread Sean Dague
On 01/31/2015 05:24 AM, Duncan Thomas wrote:
 Hi
 
 This discussion came up at the cinder mid-cycle last week too,
 specifically in the context of 'Can we change the details text in an
 existing error, or is that an unacceptable API change'.
 
 I have to second security / operational concerns about exposing too much
 granularity of failure in these error codes.
 
 For cases where there is something wrong with the request (item out of
 range, invalid names, feature not supported, etc.) I totally agree that
 we should have a good, clear, parsable response, and standardisation would
 be good. Having some fixed part of the response (whether a numeric code
 or, as I tend to prefer, a CamelCaseDescription so that I don't have to
 go look it up) and a human-readable description section that is subject
 to change seems sensible.
 
 What I would rather not see is leakage of information when something
 internal to the cloud goes wrong, that the tenant can do nothing
 against. We certainly shouldn't be leaking internal implementation
 details like vendor details - that is what request IDs and logs are for.
 The whole point of the cloud, to me, is that separation between the
 things a tenant controls (what they want done) and what the cloud
 provider controls (the details of how the work is done).
 
 For example, if a create volume request fails because cinder-scheduler
 has crashed, all the tenant should get back is 'Things are broken, try
 again later or pass request id 1234-5678-abcd-def0 to the cloud admin'.
 They shouldn't need to, or even be allowed to, care about the details of
 the failure; it is not their domain.

Sure, the value really is in determining things that are under the
client's control to do differently. A concrete one is a multi-hypervisor
cloud with 2 hypervisors (say kvm and docker). The volume attach
operation to a docker instance (which presumably is a separate set of
instance types) can't work. The user should be told that it can't work
with this instance_type if they try it.

That's actually user correctable information. And doesn't require a
ticket to move forward.
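
To make this concrete, here is a minimal sketch of what such a
user-correctable, machine-parsable error body could look like (the code,
flavor name, and field names are purely illustrative, not an agreed
format; the request id is Duncan's example):

    {
        "code": "VolumeAttachUnsupportedInstanceType",
        "message": "Volume attach is not supported for instance_type 'docker.small'.",
        "request_id": "1234-5678-abcd-def0"
    }

The fixed "code" is the stable part a client can match on, the "message"
is the human-readable text that may change, and the request id ties the
failure back to the operator's logs.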

I also think we could have a detail level knob, because I expect the
level of information exposure might be considered different in a public
cloud use case vs. a private cloud at an org level or a private cloud at
a department level.
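
If we went that way, a hypothetical sketch of such a knob as a
deployer-facing oslo.config option (the option name and level names
below are invented for illustration; nothing like this exists today):

    from oslo_config import cfg

    # Hypothetical knob -- option name and levels are invented for
    # illustration only; this is not an existing option in any project.
    detail_opts = [
        cfg.StrOpt('error_detail_level',
                   default='public',
                   choices=['public', 'org', 'dept'],
                   help='How much failure detail to expose in API error '
                        'responses returned to tenants.'),
    ]
    cfg.CONF.register_opts(detail_opts)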

-Sean

 
 
 
 On 30 January 2015 at 02:34, Rochelle Grober rochelle.gro...@huawei.com wrote:
 
 Hi folks!
 
 Changed the tags a bit because this is a discussion for all projects
 and dovetails with logging rationalization/standards/
 
 At the Paris summit, we had a number of sessions on logging that kept
 circling back to Error Codes.  But these codes would not be HTTP
 codes; rather, as others have pointed out, codes related to the
 calling entities and referring entities and the actions that
 happened or didn't.  Format suggestions were gathered from the
 Operators and from some senior developers.  The Logging Working
 Group is planning to put forth a spec for discussion on formats and
 standards before the Ops mid-cycle meetup.
 
 Working from a Glance proposal on error codes: 
 https://review.openstack.org/#/c/127482/ and discussions with
 operators and devs, we have a strawman to propose.  We also have a
 number of requirements from Ops and some Devs.
 
 Here is the basic idea:
 
 Code for logs would have four segments:

     Segment                 Format                Ex. 1   Ex. 2
     Project                 [A-Z][A-Z][A-Z]       CIN     GLA
     Vendor/Component        [{0-9}|{A-Z}][A-Z]    NA      0A
     Error catalog number    four digits           0001    0051
     Criticality             [0-9]                 2       3

 Ex. 1: CIN-NA-0001-2  (Cinder, NetApp driver error no. 0001, criticality 2)
 Ex. 2: GLA-0A-0051-3  (Glance, API error no. 0051, criticality 3)
 Three letters for the project; either a two-letter vendor code or 0 plus
 a letter for an internal component of the project (like API=0A,
 Controller=0C, etc.); a four-digit error number, which could be subsetted
 for even finer granularity; and a criticality number.
 
 This is for logging purposes and tracking down root cause faster for
 operators, but if an error is generated, why not use the same codes
 internally in the code as externally in the logs?  This also
 allows for a unique message to be associated with the error code
 that is more 

Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-31 Thread Alexandre Levine

Davanum,

Now that the picture with both EC2 API solutions has cleared up a 
bit, I can say yes, we'll be adding the tempest tests and doing devstack 
integration.


Best regards,
  Alex Levine

On 1/31/15 2:21 AM, Davanum Srinivas wrote:

Alexandre, Randy,

Are there plans afoot to add support to switch on stackforge/ec2-api
in devstack? Add tempest tests, etc.? CI would go a long way in
alleviating concerns, I think.

thanks,
dims

On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com wrote:

As you know we have been driving forward on the stackforge project and
it's our intention to continue to support it over time, plus reinvigorate
the GCE APIs when that makes sense. So we're supportive of deprecating
from Nova to focus on EC2 API in Nova.  I also think it's good for these
APIs to be able to iterate outside of the standard release cycle.



--Randy

VP, Technology, EMC Corporation
Formerly Founder & CEO, Cloudscaling (now a part of EMC)
+1 (415) 787-2253 [google voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
ASSISTANT: ren...@emc.com






On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:


Hi,

as you might have read on openstack-dev, the Nova EC2 API
implementation is in a pretty sad state. I won't repeat all of those
details here -- you can read the thread on openstack-dev for the details.

However, we got here because no one is maintaining the code in Nova
for the EC2 API. This is despite repeated calls over the last 18
months (at least).

So, does the Foundation have a role here? The Nova team has failed to
find someone to help us resolve these issues. Can the board perhaps
find resources as the representatives of some of the largest
contributors to OpenStack? Could the Foundation employ someone to help
us out here?

I suspect the correct plan is to work on getting the stackforge
replacement finished, and ensuring that it is feature compatible with
the Nova implementation. However, I don't want to preempt the design
process -- there might be other ways forward here.

I feel that a continued discussion which just repeats the last 18
months won't actually fix the situation -- it's time to break out of
that mode and find other ways to try and get someone working on this
problem.

Thoughts welcome.

Michael

--
Rackspace Australia









Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-31 Thread Alexandre Levine

Matt,

Ideally (when we remove all the workarounds), the code should depend
only on public APIs and oslo. But for the first few releases, while some
additional functionality is being exposed in Nova for us to remove the
workarounds, we might be dependent on particular releases - or, if it's
done via extensions or versioning and we can see what we're dealing with,
we can also stay independent in terms of releases.


Best regards,
  Alex Levine
On 1/31/15 1:37 AM, Matt Riedemann wrote:



On 1/29/2015 5:52 AM, Alexandre Levine wrote:

Thomas,

I'm the lead of the team working on it.
The project is in a release-candidate state and the EC2 (non-VPC) part
is just being finished, so there are no tags or branches yet. Also, we
were not sure what we should do with it, since we were told that
it'll have a chance of going in as a part of nova eventually. So we've
created a spec and blueprint, and only now has the discussion started.
Whatever the decisions, we're ready to follow. If the first thing to get
it closer to customers is to create a package (now it can only be
installed from sources, obviously) and a tag is required for it, then
that's what we should do.

So, bottom line: we're not sure ourselves what the best way forward is. Do
we put a tag (in what format? 1.0? m1? 2015.1.rc1?)? Or do we create a
branch?
My thinking now is to just put a tag - something like 1.0.rc1.
What do you think?

Best regards,
   Alex Levine

On 1/29/15 2:13 AM, Thomas Goirand wrote:

On 01/28/2015 08:56 PM, Sean Dague wrote:

There is a new stackforge project which is getting some activity now -
https://github.com/stackforge/ec2-api. The intent and hope is that this is
the path forward for the portion of the community that wants this
feature, and that efforts will be focused there.

I'd be happy to provide a Debian package for this, however, there's not
even a single git tag there. That's not so nice for tracking issues.
Who's working on it?

Also, is this supposed to be branch-less? Or will it follow
juno/kilo/l... ?

Cheers,

Thomas Goirand (zigo)





How dependent is this code on current nova master?  For example, is 
there a rebase or something that happens, or do things change in nova on 
master that affect this repo so it has to adjust, like what happens with 
the nova-docker driver repo in stackforge?


If so, then I'd think it more closely aligns with the openstack 
release schedule and tagging/branching scheme, at least until it's 
completely independent.







Re: [openstack-dev] [OpenStack Foundation] Finding people to work on the EC2 API in Nova

2015-01-31 Thread Davanum Srinivas
Alex,

Very cool. thanks.

-- dims

On Sat, Jan 31, 2015 at 1:04 AM, Alexandre Levine
alev...@cloudscaling.com wrote:
 Davanum,

 Now that the picture with both EC2 API solutions has cleared up a bit, I
 can say yes, we'll be adding the tempest tests and doing devstack
 integration.

 Best regards,
   Alex Levine

 On 1/31/15 2:21 AM, Davanum Srinivas wrote:

 Alexandre, Randy,

 Are there plans afoot to add support to switch on stackforge/ec2-api
 in devstack? Add tempest tests, etc.? CI would go a long way in
 alleviating concerns, I think.

 thanks,
 dims

 On Fri, Jan 30, 2015 at 1:24 PM, Bias, Randy randy.b...@emc.com wrote:

 As you know we have been driving forward on the stackforge project and
 it's our intention to continue to support it over time, plus reinvigorate
 the GCE APIs when that makes sense. So we're supportive of deprecating
 from Nova to focus on EC2 API in Nova.  I also think it's good for these
 APIs to be able to iterate outside of the standard release cycle.



 --Randy

 VP, Technology, EMC Corporation
 Formerly Founder & CEO, Cloudscaling (now a part of EMC)
 +1 (415) 787-2253 [google voice]
 TWITTER: twitter.com/randybias
 LINKEDIN: linkedin.com/in/randybias
 ASSISTANT: ren...@emc.com






 On 1/29/15, 4:01 PM, Michael Still mi...@stillhq.com wrote:

 Hi,

 as you might have read on openstack-dev, the Nova EC2 API
 implementation is in a pretty sad state. I won't repeat all of those
 details here -- you can read the thread on openstack-dev for the details.

 However, we got here because no one is maintaining the code in Nova
 for the EC2 API. This is despite repeated calls over the last 18
 months (at least).

 So, does the Foundation have a role here? The Nova team has failed to
 find someone to help us resolve these issues. Can the board perhaps
 find resources as the representatives of some of the largest
 contributors to OpenStack? Could the Foundation employ someone to help
 us out here?

 I suspect the correct plan is to work on getting the stackforge
 replacement finished, and ensuring that it is feature compatible with
 the Nova implementation. However, I don't want to preempt the design
 process -- there might be other ways forward here.

 I feel that a continued discussion which just repeats the last 18
 months won't actually fix the situation -- it's time to break out of
 that mode and find other ways to try and get someone working on this
 problem.

 Thoughts welcome.

 Michael

 --
 Rackspace Australia









-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-01-31 Thread James E. Blair
Sean Dague s...@dague.net writes:

 On 01/31/2015 05:24 AM, Duncan Thomas wrote:
 What I would rather not see is leakage of information when something
 internal to the cloud goes wrong, that the tenant can do nothing
 against. We certainly shouldn't be leaking internal implementation
 details like vendor details - that is what request IDs and logs are for.
 The whole point of the cloud, to me, is that separation between the
 things a tenant controls (what they want done) and what the cloud
 provider controls (the details of how the work is done).

 Sure, the value really is in determining things that are under the
 client's control to do differently. A concrete one is a multi-hypervisor
 cloud with 2 hypervisors (say kvm and docker). The volume attach
 operation to a docker instance (which presumably is a separate set of
 instance types) can't work. The user should be told that it can't work
 with this instance_type if they try it.

I agree that we should find the right balance.  Some anecdata from
infra-as-a-user: we have seen OpenStack sometimes unable to allocate a
public IP address for our servers when we cycle them too quickly with
nodepool.  That shows up as an opaque error for us, and it's only by
chatting with the operators that we know what's going on; yet there
might be things we can do to reduce the occurrence (like rebuild nodes
instead of destroying them; delay before creating again; etc.).

So I would suggest that when we search for the sweet spot of how much
detail to include, we be somewhat generous with the user, who, after all,
is likely to be technically competent and frustrated if they are
replacing systems that they can control and diagnose with a black box
that has a habit of saying no at random times for no discernible
reason.

-Jim



Re: [openstack-dev] Deprecation of in tree EC2 API in Nova for Kilo release

2015-01-31 Thread M Ranga Swami Reddy
As I see it, the real issue here is maintainers for the EC2 API code.
If that's the case, create a sub-group with core members in it who
will be responsible for maintaining this code, like other projects.

On Sat, Jan 31, 2015 at 2:54 PM, Alexandre Levine
alev...@cloudscaling.com wrote:
 Matt,

 Ideally (when we remove all the workarounds), the code should depend
 only on public APIs and oslo. But for the first few releases, while some
 additional functionality is being exposed in Nova for us to remove the
 workarounds, we might be dependent on particular releases - or, if it's
 done via extensions or versioning and we can see what we're dealing with,
 we can also stay independent in terms of releases.

 Best regards,
   Alex Levine

 On 1/31/15 1:37 AM, Matt Riedemann wrote:



 On 1/29/2015 5:52 AM, Alexandre Levine wrote:

 Thomas,

 I'm the lead of the team working on it.
 The project is in a release-candidate state and the EC2 (non-VPC) part
 is just being finished, so there are no tags or branches yet. Also, we
 were not sure what we should do with it, since we were told that
 it'll have a chance of going in as a part of nova eventually. So we've
 created a spec and blueprint, and only now has the discussion started.
 Whatever the decisions, we're ready to follow. If the first thing to get
 it closer to customers is to create a package (now it can only be
 installed from sources, obviously) and a tag is required for it, then
 that's what we should do.

 So, bottom line: we're not sure ourselves what the best way forward is. Do
 we put a tag (in what format? 1.0? m1? 2015.1.rc1?)? Or do we create a
 branch?
 My thinking now is to just put a tag - something like 1.0.rc1.
 What do you think?

 Best regards,
Alex Levine

 On 1/29/15 2:13 AM, Thomas Goirand wrote:

 On 01/28/2015 08:56 PM, Sean Dague wrote:

 There is a new stackforge project which is getting some activity now -
 https://github.com/stackforge/ec2-api. The intent and hope is that this is
 the path forward for the portion of the community that wants this
 feature, and that efforts will be focused there.

 I'd be happy to provide a Debian package for this, however, there's not
 even a single git tag there. That's not so nice for tracking issues.
 Who's working on it?

 Also, is this supposed to be branch-less? Or will it follow
 juno/kilo/l... ?

 Cheers,

 Thomas Goirand (zigo)









 How dependent is this code on current nova master?  For example, is there
 a rebase or something that happens, or do things change in nova on master
 that affect this repo so it has to adjust, like what happens with the
 nova-docker driver repo in stackforge?

 If so, then I'd think it more closely aligns with the openstack release
 schedule and tagging/branching scheme, at least until it's completely
 independent.






[openstack-dev] [cinder] K-2 Review-a-thon

2015-01-31 Thread Mike Perez
* Why: We've got quite a bit in the review queue. The K-2 [1] cut is set for February 5th.

* When: February 2nd at 2:00 UTC [2] to February 5th at 2:00 UTC [3]
or sooner if we finish!

* Where: #openstack-cinder on freenode IRC. There will also be a
Google Hangout link posted in the channel and the etherpad [4], since that
really worked out well in previous hackathons. Remember there is a limit,
so please join only if you're really going to be participating. You
also don't have to be core.

I'm encouraging two cores to sign up for a review in the etherpad [4].
If there are already two people on a review, try to move on to
something else to avoid duplicating the effort already spent on
that review.

Patch owners will also be receiving an email directly from me so they
are aware of this prime time to respond to feedback and post
revisions if necessary.

--
Mike Perez

[1] - https://launchpad.net/cinder/+milestone/kilo-2
[2] - 
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150202T02p1=1440
[3] - 
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150205T02p1=1440
[4] - https://etherpad.openstack.org/p/cinder-k2-priorities



[openstack-dev] [cinder] Kilo Deadlines

2015-01-31 Thread Mike Perez
* Blueprint/Spec approval deadline - February 15th
* Code freeze for all features - March 10th

After the blueprint/spec approval deadline has passed, you may
request an exception by:

1) Emailing the OpenStack dev mailing list with "[cinder] Request Spec
Freeze Exception" in the subject.
2) The spec is then reviewed the usual way, but should be a high priority to get in.

These deadlines were agreed on in the Cinder IRC meeting [1].

--
Mike Perez

[1] - 
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-01-14-16.00.log.html#l-303



Re: [openstack-dev] [infra] Nominating Elizabeth K. Joseph for infra-core and root

2015-01-31 Thread Cody A.W. Somerville
Huge +1 from me!

Congratulations Elizabeth!

On Fri, Jan 30, 2015 at 12:20 PM, James E. Blair cor...@inaugust.com
wrote:

 Hi,

 The Infrastructure program has a unique three-tier team structure:
 contributors (that's all of us!), core members (people with +2 ability
 on infra projects in Gerrit) and root members (people with
 administrative access).  Read all about it here:

   http://ci.openstack.org/project.html#team

 Elizabeth K. Joseph has been reviewing a significant number of infra
 patches for some time now.  She has taken on a number of very large
 projects, including setting up our Git server farm, adding support for
 infra servers running on CentOS, and setting up the Zanata translation
 system (and all of this without shell access to production machines).

 She understands all of our servers, regardless of function, size, or
 operating system.  She has frequently spoken publicly about the unique
 way in which we perform systems administration, articulating what we are
 doing and why in a way that inspires us as much as others.

 Due to her strong systems administration background, I am nominating her
 for both infra-core and infra-root simultaneously.  I expect many of us
 are looking forward to seeing her insight and direction applied with +2s
 but also equally excited for her to be able to troubleshoot things when
 our best-laid plans meet reality.

 Please respond with any comments or concerns.

 Thanks, Elizabeth, for all your work!

 -Jim





-- 
Cody A.W. Somerville


[openstack-dev] [Cinder][nova] Cinder backend for ephemeral disks?

2015-01-31 Thread Adam Lawson
Question: it looks like this spec was abandoned, and it's hard to tell if it
is being addressed elsewhere. It seemed like a good idea that received a -2
and was ultimately abandoned due to the Juno freeze, I think.

https://blueprints.launchpad.net/nova/+spec/nova-ephemeral-cinder


Re: [openstack-dev] [nova][gate][stable] How eventlet 0.16.1 broke the gate

2015-01-31 Thread Joshua Harlow
For those that have been following this escapade I wrote up the 
following, hopefully it's useful to someone ;)


https://github.com/harlowja/pippin#how-it-works

Example(s) of actual runs are also @

https://github.com/harlowja/pippin/tree/master/examples

^ Does not include the full output files (since it's a lot of 
information/data that wouldn't fit easily on git...)


-Josh

Joshua Harlow wrote:

Cool,

I've got to try that out today to see what it's doing.

I've also shoved my little program up @
https://github.com/harlowja/pippin (the pip-tools one is definitely more
elegantly coded than mine, haha).

Feel free to fork it (modify, run, or ...)

Basic instructions to use it:

https://github.com/harlowja/pippin#pippin

-Josh

Bailey, Darragh wrote:

You may find the code for pip-compile
https://github.com/nvie/pip-tools/tree/future of interest for this, as I
think they may already have a solution for the deep dependency analysis.


I've started experimenting with it for git-upstream because GitPython has
had a habit of breaking stuff through a couple of releases now :-(


What I like is:
* Doesn't require an extra tool before using 'pip install'
** Some may want to regen the dependencies, but it's optional and the
common python dev approach is retained
* Stable releases are guaranteed to use the versions of dependencies
they were released and verified against
* Improves on the guarantee of gated branch CI
** The idea that if you sync with upstream any test failures are due to
your local changes
** Which is not always true if updated deps can break stuff


On the flip side:
* You remain exposed to security issues in python code until you
manually update
* Development cycle doesn't move forward automatically, may not see
compatibility issues until late when forced to move forward one of the
deps


I think the cons can be handled by some additional CI jobs to update the
pins on a regular basis, pass them through the standard gates, and
potentially auto-approve during development cycles if they pass (we're
already getting the latest matching versions, so no big diff here). Some
decisions on the trade-off around whether this should be done
automatically for stable releases, or periodically with manual approval,
would have to be made.


Did I say how much I like the fact that it doesn't require another tool
before just being able to use 'pip install'?


To experiment with it:
virtualenv .venv/pip-tools
source .venv/pip-tools/bin/activate
pip install git+https://github.com/nvie/pip-tools.git@future
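
For anyone who hasn't tried it, the workflow is roughly: keep the loose
requirements in requirements.in and let pip-compile write the pinned
requirements.txt (the versions shown below are illustrative, not real
output):

$ cat requirements.in
six
taskflow
$ pip-compile requirements.in   # resolves and writes requirements.txt
$ cat requirements.txt
six==1.9.0                      # illustrative pins
taskflow==0.7.1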

Regards,
Darragh Bailey

Nothing is foolproof to a sufficiently talented fool - Unknown

On 22/01/15 03:45, Joshua Harlow wrote:

A slightly better version that starts to go deeper (and downloads
dependencies of dependencies and extracts their egg_info to get at
these dependencies...)

https://gist.github.com/harlowja/555ea019aef4e901897b

Output @ http://paste.ubuntu.com/9813919/

When ran on the same 'test.txt' mentioned below...

Happy hacking!

-Josh

Joshua Harlow wrote:

A run that shows more of the happy/desired path:

$ cat test.txt
six1
taskflow0.5
$ python pippin.py -r test.txt
Initial package set:
- six ['1']
- taskflow ['0.5']
Deep package set:
- six ['==1.9.0']
- taskflow ['==0.4.0']

-Josh

Joshua Harlow wrote:

Another thing that I just started whipping together:

https://gist.github.com/harlowja/5e39ec5ca9e3f0d9a21f

The idea for the above is to use pip to download dependencies, but
figure out what versions will work using our own resolver (and our own
querying of 'http://pypi.python.org/pypi/%s/json') that just does a
very
deep search of all requirements (and requirements of requirements...).

The idea for that is that the probe() function in that gist will
'freeze' a single requirement, then dive down into further requirements
and ensure compatibility while that 'diving' (aka, recursion into
further requirements) is underway. If an incompatibility is found then
the recursion will back-track and try to freeze a different version of
the desired package (and repeat...).
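
To sketch that back-tracking idea in miniature (this is not pippin's
actual code; a tiny hard-coded index stands in for PyPI and exact pins
stand in for real requirement specifiers):

# Schematic back-tracking resolver: freeze one package at a version,
# recurse into its dependencies, and back up to try another version on
# conflict.  TOY_INDEX stands in for PyPI; versions/deps are made up.
TOY_INDEX = {
    "taskflow": {"0.5.0": {"six": "1.9.0"},
                 "0.4.0": {"six": "1.8.0"}},
    "six": {"1.9.0": {}, "1.8.0": {}},
}

def probe(remaining, frozen):
    if not remaining:
        return dict(frozen)                # everything is pinned consistently
    name, rest = remaining[0], remaining[1:]
    for version, deps in TOY_INDEX.get(name, {}).items():
        if name in frozen and frozen[name] != version:
            continue                       # conflicts with an earlier pin
        if any(frozen.get(dep, dep_ver) != dep_ver
               for dep, dep_ver in deps.items()):
            continue                       # a dependency conflicts; try next version
        trial = dict(frozen)
        trial[name] = version
        trial.update((dep, dep_ver) for dep, dep_ver in deps.items()
                     if dep not in trial)
        result = probe(rest + list(deps), trial)
        if result is not None:
            return result                  # a consistent set was found
    return None                            # back-track: no version of 'name' fits

if __name__ == "__main__":
    print(probe(["taskflow", "six"], {}))  # e.g. {'taskflow': '0.5.0', 'six': '1.9.0'}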

To me this kind of deep finding would be a potential way of making this
work in a way that basically only uses pip for downloading (and does the
deep matching/probing on our own), since once the algorithm above no
longer needs to backtrack and finds a matching set of requirements that
will all work together, the program can exit (and this set can then be
used as the master set for openstack; at that point we might have to
tell people not to use pip, or to only use pip --download to fetch the
compatible versions).

It's not completed but it could be complementary to what others are
working on; feel free to hack away :)

So far the following works:

$ cat test.txt
six1
taskflow1

$ python pippin.py -r test.txt
Initial package set:
- six ['1']
- taskflow ['1']
Traceback (most recent call last):
  File "pippin.py", line 168, in <module>
    main()
  File "pippin.py", line 162, in main
    matches = probe(initial, {})
  File "pippin.py", line 139, in probe
    result = probe(requirements, 

Re: [openstack-dev] [Product] [all][log] Openstack HTTP error codes

2015-01-31 Thread Duncan Thomas
Hi

This discussion came up at the cinder mid-cycle last week too, specifically
in the context of 'Can we change the details text in an existing error, or
is that an unacceptable API change'.

I have to second security / operational concerns about exposing too much
granularity of failure in these error codes.

For cases where there is something wrong with the request (item out of
range, invalid names, feature not supported, etc.) I totally agree that we
should have a good, clear, parsable response, and standardisation would be
good. Having some fixed part of the response (whether a numeric code or, as
I tend to prefer, a CamelCaseDescription so that I don't have to go look it
up) and a human-readable description section that is subject to change
seems sensible.

What I would rather not see is leakage of information when something
internal to the cloud goes wrong, that the tenant can do nothing against.
We certainly shouldn't be leaking internal implementation details like
vendor details - that is what request IDs and logs are for. The whole point
of the cloud, to me, is that separation between the things a tenant
controls (what they want done) and what the cloud provider controls (the
details of how the work is done).

For example, if a create volume request fails because cinder-scheduler has
crashed, all the tenant should get back is 'Things are broken, try again
later or pass request id 1234-5678-abcd-def0 to the cloud admin'. They
shouldn't need to, or even be allowed to, care about the details of the
failure; it is not their domain.



On 30 January 2015 at 02:34, Rochelle Grober rochelle.gro...@huawei.com
wrote:

 Hi folks!

 Changed the tags a bit because this is a discussion for all projects and
 dovetails with logging rationalization/standards/

 At the Paris summit, we had a number of sessions on logging that kept
 circling back to Error Codes.  But these codes would not be HTTP codes;
 rather, as others have pointed out, codes related to the calling entities
 and referring entities and the actions that happened or didn't.  Format
 suggestions were gathered from the Operators and from some senior
 developers.  The Logging Working Group is planning to put forth a spec for
 discussion on formats and standards before the Ops mid-cycle meetup.

 Working from a Glance proposal on error codes:
 https://review.openstack.org/#/c/127482/ and discussions with operators
 and devs, we have a strawman to propose.  We also have a number of
 requirements from Ops and some Devs.

 Here is the basic idea:

 Code for logs would have four segments:

     Segment                 Format                Ex. 1   Ex. 2
     Project                 [A-Z][A-Z][A-Z]       CIN     GLA
     Vendor/Component        [{0-9}|{A-Z}][A-Z]    NA      0A
     Error catalog number    four digits           0001    0051
     Criticality             [0-9]                 2       3

 Ex. 1: CIN-NA-0001-2  (Cinder, NetApp driver error no. 0001, criticality 2)
 Ex. 2: GLA-0A-0051-3  (Glance, API error no. 0051, criticality 3)
 Three letters for the project; either a two-letter vendor code or 0 plus
 a letter for an internal component of the project (like API=0A,
 Controller=0C, etc.); a four-digit error number, which could be subsetted
 for even finer granularity; and a criticality number.
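
 As a rough illustration only (using the strawman format above; the helper
 names are made up and don't exist in any project), such a code could be
 composed and parsed along these lines:

     import re

     # Strawman PROJECT-COMPONENT-NUMBER-CRITICALITY codes, e.g. CIN-NA-0001-2.
     # These helpers are illustrative only; they don't exist in any project.
     CODE_RE = re.compile(r'^([A-Z]{3})-([0-9A-Z][A-Z])-([0-9]{4})-([0-9])$')

     def build_code(project, component, number, criticality):
         return '%s-%s-%04d-%d' % (project, component, number, criticality)

     def parse_code(code):
         match = CODE_RE.match(code)
         if not match:
             raise ValueError('unrecognized error code: %r' % code)
         project, component, number, criticality = match.groups()
         return {'project': project, 'component': component,
                 'number': int(number), 'criticality': int(criticality)}

     # build_code('CIN', 'NA', 1, 2)  -> 'CIN-NA-0001-2'
     # parse_code('GLA-0A-0051-3')    -> {'project': 'GLA', 'component': '0A',
     #                                    'number': 51, 'criticality': 3}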

 This is for logging purposes and tracking down root cause faster for
 operators, but if an error is generated, why not use the same codes
 internally in the code as externally in the logs?  This also allows for a
 unique message to be associated with the error code that is more
 descriptive and that can be pre-translated.  Again, for logging purposes,
 the error code would not be part of the message payload, but part of the
 headers.  Referrer IDs and other info would still be expected in the
 payload of the message and could include instance ids/names, NICs or VIFs,
 etc.  The message header handling is code in Oslo.log and, when using the
 Oslo.log library, will be easy to use.

 Since this discussion came up, I thought I needed to get this info out to
 folks and advertise that anyone will be able to comment on the spec to
 drive it to agreement.  I will be advertising it here and on the Ops and
 Product-WG mailing lists.  I'd also like to invite anyone who wants to
 participate in discussions to join them.  We'll be starting a bi-weekly or
 weekly IRC meeting (also announced in the stated MLs) in February.

 And please realize that other than Oslo.log, the changes to make the
 errors more useable will be almost entirely community created standards
 with community created tools to help enforce them.  None of which exist
 yet, FYI.

 --RockyG






 From: Eugeniya Kudryashova