Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate) [metrics]

2015-02-10 Thread Mark Voelker

The voice of operators/users/deployers in this conversation should be reflected 
through the entity that they are paying to provide operational cloud services.  

Let’s be careful here: I hope you didn’t mean to say that 
operators/users/deployers voices should only be heard when they pay a vendor to 
get OpenStack (and I don’t think you did, but it read that way a bit).  

It's those directly consuming the code from openstack.org that are responsible 
here because they are the ones directly making money by either providing 
public/private cloud services, or reselling a productized OpenStack or 
providing consulting services and the like.

Sure, I agree those folks certainly have an interest...but I don’t believe it’s 
solely their responsibility and that the development community has none.  If 
the development community has no responsibility for maintaining stable code, 
why have stable branches at all?  If we aren’t incentivizing contributions to 
stable code, we’re encouraging forking, IMHO.  There’s a balance to be struck 
here.  I think what’s being voiced in this thread is that we haven’t gotten to 
that place yet where there are good incentives to contribute to stable branch 
(not just backporting fixes, but dealing with gate problems, etc. as well) and 
we’d like to figure out how to improve that situation.

At Your Service,

Mark T. Voelker
OpenStack Architect

On Feb 10, 2015, at 11:21 AM, Dean Troyer dtro...@gmail.com wrote:

On Tue, Feb 10, 2015 at 9:20 AM, Kevin Bringard (kevinbri) kevin...@cisco.com 
wrote:
ATC is only being given to folks committing to the current branch 
(https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
 
Secondly, it's difficult to get Stackalytics credit for backports, as the 
preferred method is to cherry-pick the code, and that keeps the original 
author's name.
 
My fear is that we're going in a direction where trunk is the sole focus and 
we're subsequently going to lose the support of the majority of the operators 
and enterprises, at which point we'll be a fun research project, but little more.

[I've cherry-picked above what I think are the main points here... not directed 
at you Kevin.]


This is not Somebody Else's Problem.

Stable maintenance is Not Much Fun, no question.  Those who have demanded the 
loudest that we (the development community) maintain these stable branches need 
to be the ones supporting it the most. (I have no idea how that matches up 
today, so I'm not pointing out anyone in particular.) 

* ATC credit should be given; stable branch maintenance is a contribution to 
the project, no question.

* I have a bit of a problem with Stackalytics being an issue, partially 
because that is not what should be driving corporate contributions and resource 
allocation.  But it does.  Relying on a system with known anomalies like the 
cherry-pick problem gets imperfect results.

* The vast majority of the OpenStack contributors are paid to do their work by 
a (most likely) Foundation member company.  These companies choose how to 
allocate their resources, some do quite well at scratching their particular 
itches, some just make a lot of noise.  If fun is what drives them to select 
where they apply resources, then they will reap what they sow.

The voice of operators/users/deployers in this conversation should be reflected 
through the entity that they are paying to provide operational cloud services.  
It's those directly consuming the code from openstack.org that are responsible 
here because they are the ones directly making money by either providing 
public/private cloud services, or reselling a productized OpenStack or 
providing consulting services and the like.

This should not stop users/operators from contributing information, 
requirements or code in any way.  But if they have to go around their vendor 
then that vendor has failed them.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Nova][Neutron] Thoughts on the nova-neutron interface

2015-01-28 Thread Mark Voelker

If the problem is too many round trips or the interaction being too chatty

This is a good point: is the main issue that we feel the interaction is too 
chatty, or that it’s too slow?  I seem to hear people gravitating toward one or 
the other when this topic comes up, and the two issues may have somewhat 
different solutions.  After all: in my experience, two Italians talking about 
football can say an awful lot in a very small window of time. =)  If the 
problem is chattiness, we may look at bulk ops, intent-based metadata, 
pipelining, etc.  If it’s slowness, then we probably want a deeper look at what 
bits of the operation are slow.  Mark McClain and Ian and I were chatting about 
this the other day and suspect something like offline token validation or token 
windowing (e.g. attempting to prune out roundtrips to keystone) could go a long 
way on that front.
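
For readers who haven't run into the token-windowing idea before, here's a 
rough sketch of the concept.  Everything below is illustrative assumption 
only (names, numbers, and caching policy), not Keystone's actual mechanism:

    import time

    # Illustrative only: cache successful token validations for a short
    # window so repeated service-to-service calls carrying the same token
    # don't each cost a round trip to Keystone.
    _CACHE = {}    # token -> (validation_result, expires_at)
    _WINDOW = 300  # seconds; a trade-off against prompt revocation

    def validate_token(token, validate_remote):
        # validate_remote stands in for the real Keystone validation call.
        now = time.time()
        hit = _CACHE.get(token)
        if hit and hit[1] > now:
            return hit[0]                # inside the window: no Keystone trip
        result = validate_remote(token)  # one round trip to Keystone
        _CACHE[token] = (result, now + _WINDOW)
        return result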

At Your Service,

Mark T. Voelker

On Jan 28, 2015, at 12:52 AM, Brent Eagles beag...@redhat.com wrote:

On 25/01/2015 11:00 PM, Ian Wells wrote:
Lots of open questions in here, because I think we need a long conversation
on the subject.

On 23 January 2015 at 15:51, Kevin Benton blak...@gmail.com wrote:

It seems like a change to using internal RPC interfaces would be pretty
unstable at this point.

Can we start by identifying the shortcomings of the HTTP interface and see
if we can address them before making the jump to using an interface which
has been internal to Neutron so far?


I think the protocol being used is a distraction from the actual
shortcomings.

Firstly, you'd have to explain to me why HTTP is so much slower than RPC.
If HTTP is incredibly slow, can it be sped up?  If RPC is moving the data
around using the same calls, what changes?  Secondly, the problem seems
more that we make too many roundtrips - which would be the same over RPC -
and if that's true, perhaps we should be doing bulk operations - which is
not transport-specific.

I agree. If the problem is too many round trips or the interaction being too 
chatty, I would expect moving towards more service-oriented APIs - where HTTP 
tends to be appropriate. I think we should focus on better separation of 
concerns, and on approaches such as bulk operations, using notifications where 
cross-process synchronization for a task is required. Exploring transport 
alternatives seems premature until after we are satisfied that our house is in 
order architecture-wise.
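
As an editorial aside, here's a minimal sketch of what the bulk-operations 
idea looks like in practice.  Endpoint, token, and field values are 
placeholders; as I understand it, Neutron's API accepts a list under the 
plural key for bulk creation:

    import requests

    NEUTRON = "http://neutron.example.com:9696"   # placeholder endpoint
    HEADERS = {"X-Auth-Token": "TOKEN"}           # placeholder token

    def create_ports_bulk(network_id, count):
        # One POST with a list under the plural key creates all the ports,
        # so N ports cost one round trip instead of N.
        body = {"ports": [{"network_id": network_id} for _ in range(count)]}
        resp = requests.post(NEUTRON + "/v2.0/ports", json=body,
                             headers=HEADERS)
        resp.raise_for_status()
        return resp.json()["ports"]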

Furthermore, I have some off-the-cuff concerns over claims that HTTP is 
slower than RPC in our case. I'm actually used to arguing that RPC is faster 
than HTTP, but based on how our RPCs work, I find such an argument 
counter-intuitive. Our REST API calls are direct client-server requests with 
GETs returning results immediately. Our RPC calls involve AMQP and a messaging 
queue server, with requests and replies encapsulated in separate messages. If 
no reply is required, then the RPC *might* be dispatched more quickly from the 
client side as it is simply a message being queued. The actual servicing of the 
request (server-side dispatch, or upcall in broker parlance) happens some 
time later - meaning possibly never. If the RPC has a return value, then the 
client must wait for the return reply message, which again involves an AMQP 
message being constructed, published and queued, then finally consumed. At the 
very least, this implies latency dependent on the relative location and 
availability of the queue server.

As an aside (meaning you might want to skip this part), one way our RPC 
mechanism might be better than REST over HTTP calls is in the cost of 
constructing and encoding requests and replies. However, this is more a 
function of how requests are encoded and less of how they are sent. Changing how 
request payloads are constructed would close that gap. Again, reducing the 
number of requests required to do something would reduce the significance of 
any differences here. Unless the difference between the two methods were 
enormous (like double, or an order of magnitude) then reducing the number of 
calls to perform a task still has more gain than switching methods. Another 
difference might be in how well the transport implementation scales. I would 
consider disastrous scaling characteristics a pretty compelling argument.
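
To put some (entirely invented) numbers on that argument:

    # All numbers invented for illustration.
    per_call_http = 0.005   # 5 ms per call over HTTP
    per_call_rpc = 0.004    # 4 ms per call: a 20% "faster" transport

    chatty_calls = 50       # calls in today's chatty interaction
    bulked_calls = 5        # same work expressed as bulk operations

    print(chatty_calls * per_call_http)   # 0.25 s today
    print(chatty_calls * per_call_rpc)    # 0.20 s after switching transports
    print(bulked_calls * per_call_http)   # 0.025 s after reducing call count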

I absolutely do agree that Neutron should be doing more of the work, and
Nova less, when it comes to port binding.  (And, in fact, I'd like that we
stopped considering it 'Nova-Neutron' port binding, since in theory another
service attaching stuff to the network could request a port be bound; it
just happens at the moment that it's always Nova.)

One other problem, not yet raised, is that Nova doesn't express its needs
when it asks for a port to be bound, and this is actually becoming a
problem for me right now.  At the moment, Neutron knows, almost
psychically, what binding type Nova will accept, and hands it over; Nova
then deals 

Re: [openstack-dev] EOL and Stable Contributions (was Juno is flubber at the gate)

2015-02-10 Thread Mark Voelker

The sentiment that Kevin is expressing here has come up informally at past 
Operators’ meetups as well, which makes sense given that relatively few 
operators are chasing trunk vs. using a stable release.  I would hypothesize 
that there’s probably a fair bit of interest among operators in having 
well-maintained stable branches, but there are disincentives that keep them from 
pitching in more.  Let’s see if we can bring that to light a bit—I’ve added an 
item on the etherpad to discuss this in Philadelphia at the Operators’ midcycle 
meetup in a few weeks. [1]  If folks who are attending aren’t familiar with the 
current stable branch policies and team structure, you may want to read through 
the wiki first. [2]

[1] https://etherpad.openstack.org/p/PHL-ops-meetup

[2] https://wiki.openstack.org/wiki/StableBranch

At Your Service,

Mark T. Voelker
OpenStack Architect

On Feb 10, 2015, at 10:20 AM, Kevin Bringard (kevinbri) kevin...@cisco.com 
wrote:

Since this is sort of a topic change, I opted to start a new thread. I was 
reading over the Juno is Fubar at the Gate thread, and this bit stood out to 
me:

So I think it's time we called the icehouse branch done and marked it EOL. We
originally conditioned the longer support window on extra people stepping
forward to keep things working. I believe this latest issue is just the latest
indication that this hasn't happened. Issue 1 listed above is being caused by
the icehouse branch during upgrades. The fact that a stable release was pushed
at the same time things were wedged on the juno branch is just the latest
evidence to me that things aren't being maintained as they should be. Looking at
the #openstack-qa irc log from today or the etherpad about trying to sort this
issue should be an indication that no one has stepped up to help with the
maintenance and it shows given the poor state of the branch.

Most specifically: 

We originally conditioned the longer support window on extra people stepping 
forward to keep things working ... should be an indication that no one has 
stepped up to help with the maintenance and it shows given the poor state of 
the branch.

I've been talking with a few people about this very thing lately, and I think 
much of it is caused by what appears to be our actively discouraging people 
from working on it. Most notably, ATC is only being given to folks committing 
to the current branch 
(https://ask.openstack.org/en/question/45531/atc-pass-for-the-openstack-summit/).
 Secondly, it's difficult to get Stackalytics credit for backports, as the 
preferred method is to cherry-pick the code, and that keeps the original 
author's name. I've personally gotten a few commits into stable, but have 
nothing to show for it in Stackalytics (if I'm doing it wrong, I'm happy to 
be corrected).

My point here isn't to complain that I, or others, are not getting credit, but 
to point out that I don't know what we expected to happen to stable branches 
when we actively disincentivize people from working on them. Working on 
hardening old code is generally far less interesting than working on the cool 
shiny new features, and many of the production-hardening issues we run into 
aren't uncovered until the code is run at scale, which in turn usually means a 
big company that likely isn't chasing trunk.

My fear is that we're going in a direction where trunk is the sole focus and 
we're subsequently going to lose the support of the majority of the operators 
and enterprises, at which point we'll be a fun research project, but little more.

-- Kevin


Re: [openstack-dev] [all] OpenStack options for visually impaired contributors

2015-03-05 Thread Mark Voelker
Hi John,

I’m not visually impaired myself, but have you taken a look at Gertty?  It’s a 
console-based interface so you may be able to take advantage of other console 
customizations you’ve made (font sizing for example) and it has options that 
allow you to set color palettes (e.g. to increase contrast), set hotkeys rather 
than using mouse clicks, etc.

https://github.com/stackforge/gertty

At Your Service,

Mark T. Voelker

On Mar 5, 2015, at 2:46 PM, John Wood john.w...@rackspace.com wrote:

 Hello folks,
 
 I'm curious what tools visually impaired OpenStack contributors have found
 helpful for performing Gerrit reviews (the UI is difficult to scan,
 especially for in-line code comments) and for Python development in
 general?
 
 Thanks,
 John
 
 




Re: [openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

2015-03-27 Thread Mark Voelker
Inline…


On Mar 27, 2015, at 11:48 AM, Assaf Muller amul...@redhat.com wrote:

 
 
 - Original Message -
 On 03/27/2015 05:22 AM, Thierry Carrez wrote:
 snip
 Part of it is corner (or simplified) use cases not being optimally
 served by Neutron, and I think Neutron could more aggressively address
 those. But the other part is ignorance and convenience: that Neutron
 thing is a scary beast, last time I looked into it I couldn't make sense
 of it, and nova-network just works for me.
 
 That is why during the Ops Summit we discussed coming up with a
 migration guide that would explain the various ways you can use Neutron
 to cover nova-network use cases, demystify a few dark areas, and outline
 the step-by-step manual process you can go through to migrate from one
 to the other.
 
 We found a dev/ops volunteer for writing that migration guide but he was
 unfortunately not allowed to spend time on this. I heard we have new
 volunteers, but I'll let them announce what their plans are, rather than
 put words in their mouth.
 
 This migration guide can happen even if we follow the nova-net spinout
 plan (for the few who want to migrate to Neutron), so this is a
 complementary solution rather than an alternative. Personally I doubt
 there would suddenly be enough people interested in nova-net development
 to successfully spin it out and maintain it. I also agree with Russell
 that long-term fragmentation at this layer of the stack is generally not
 desirable.
 
 I think if you boil everything down, you end up with 3 really important
 differences.
 
 1) neutron is a fleet of services (it's very micro-service) and every
 service requires multiple and different config files. Just configuring
 the fleet is a beast if it is not devstack (and even if it is)
 
 2) neutron assumes the primary interesting thing to you is tenant-secured
 self-service networks. This is actually explicitly not interesting to a
 lot of deployments for policy, security, or political reasons/restrictions.
 
 3) neutron's open source backend defaults to OVS (largely because of #2). OVS
 is its own complicated engine that you need to learn to debug. While
 Linux bridge has challenges, it's also something that anyone who's
 worked with Linux & virtualization for the last 10 years has some
 experience with.
 
 (also, the devstack setup code for neutron is a rat's nest, as it was
 mostly not paid attention to. This means it's been 0 help in explaining
 anything to people trying to do neutron. For better or worse devstack is
 our executable manual for a lot of these things)
 
 so that being said, I think we need to talk about minimum viable
 neutron as a model and figure out how far away that is from n-net. This
 week at the QA Sprint, Dean, Sean Collins, and I have spent some time
 hashing it out, hopefully with something to show by the end of the week.
 This will be the new devstack code for neutron (the old lib/neutron is
 moved to lib/neutron-legacy).
 
 Default setup will be provider networks (which means no tenant
 isolation). For that you should only need neutron-api, -dhcp, and -l2.
 So #1 is made a bunch better. #2 not a thing at all. And for #3 we'd
 like to revert back to linux bridge for the base case (though first code
 will probably be OVS because that's the happy path today).
 
 
 Looking at the latest user survey, OVS looks to be 3 times as popular as

3x as popular *with existing Neutron users* though.  Not people that are still 
running nova-net.  I think we have to bear in mind here that when we’re looking 
at user survey results we’re looking at a general audience of OpenStack users, 
and what we’re trying to solve on this thread is a specific subset of that 
audience.  The very fact that those people are still running nova-net may be a 
good indicator that they don’t find the Neutron choices that lots of other 
people have made to be a good fit for their particular use cases (else they’d 
have switched by now).  We got some reinforcement of this idea during 
discussion at the Operator’s Midcycle Meetup in Philadelphia: the feedback from 
nova-net users that I heard was that OVS+Neutron was too complicated and too 
hard to debug compared to what they have today, hence they didn’t find it a 
compelling option.  

Linux Bridge is, in the eyes of many folks in that room, a simpler model in 
terms of operating and debugging, so I think it’s likely a very reasonable 
option for this group of users.  However, in the interest of ensuring that 
those operators have a chance to chime in here, I’ve added openstack-operators 
to the thread.

At Your Service,

Mark T. Voelker


 Linux bridge for production deployments. Having LB as the default seems
 like an odd choice. You also wouldn't want to change the default before
 LB is tested at the gate.
 
 Mixin #1: NEUTRON_BRIDGE_WITH=OVS
 
 First optional layer being flip from linuxbridge - ovs. That becomes
 one bite sized thing to flip over once you understand it.
 
 Mixin #2: self service networks
 
 This will 

Re: [openstack-dev] [all] Does OpenStack need a common solution for DLM? (was: Re: [Cinder] A possible solution for HA Active-Active)

2015-08-03 Thread Mark Voelker

On Aug 3, 2015, at 6:09 PM, Flavio Percoco fla...@redhat.com wrote:

On 03/08/15 19:48 +0200, Gorka Eguileor wrote:
On Mon, Aug 03, 2015 at 03:42:48PM +, Fox, Kevin M wrote:
I'm usually for abstraction layers, but they don't always pay off very well due 
to catering to the lowest common denominator.

Let's clearly define the problem space first. IFF the problem space can be fully 
implemented using Tooz, then let's do that. Then the operator can choose. If 
Tooz can't and won't handle the problem space, then we're trying to fit a square 
peg in a round hole.

What do you mean by clearly define the problem space?  We know what we
want; we just need to agree on the compromises we are willing to make:
use a DLM and make admins' life a little harder (only for those that
deploy A-A) but have an A-A solution earlier, or postpone A-A
functionality but make their life easier.

And we already know that Tooz is not the Holy Grail and will not perform
the miracle of giving Cinder HA A-A.  It is only a piece of the problem,
so there's nothing to discuss there, and it's not a square peg in a
round hole, because it fits perfectly for what it is intended. But once
you have filled that square hole you need another peg, the round one for
the round hole.

If people are expecting to find one thing that fixes everything and
gives us HA A-A on its own, then I believe they are a little bit lost.

As confusing as it seems, we've now moved from talking about just
Cinder to understanding whether this is a problem many projects have
and whether we can find a solution that will work for most of them.
Therefore, I've renamed this thread to make this more evident.

Now, so far we have:

- Ironic has an internal distributed lock and it uses a hash-ring
- Ceilometer uses tooz
- Several projects use a file lock or some other fashion of
distributed lock.
- *Add yours here*

Possibly worth mentioning here is that some projects have DLMs in use at the 
plugin/driver layer as well.  For example, the Neutron plugin for NSXv is using 
Tooz:

https://review.openstack.org/#/c/188015/
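
For those unfamiliar with Tooz, the locking pattern in question looks roughly 
like this.  This is a minimal sketch only: the backend URL, member id, and 
lock name are placeholders, not what the NSXv plugin actually uses:

    from tooz import coordination

    # The backend is pluggable: zookeeper://, redis://, memcached://, etc.
    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'worker-1')  # unique member id
    coord.start()

    lock = coord.get_lock(b'edge-gateway-42')  # names the shared resource
    with lock:   # blocks until this member holds the distributed lock
        pass     # ...mutate the shared resource safely...

    coord.stop()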

At Your Service,

Mark T. Voelker


Each one of these projects has a specific use-case that doesn't
necessarily overlap. I'd like to see those cases listed somewhere.
We've done this in the past already and I believe we can do it now as
well. As I've mentioned in another thread, Gorka has done this for
Cinder already now we need to do it for other services too. Even if
your project has a DLM in place, it'd be good to know what problem you
solved with it as it may be a problem that other projects have as
well.

As a community, we've gotten by without adding a new service
for DLMs thus far. I'm not saying we don't need one but, as mentioned
in other threads, let's give this some more thought before we add a new
service that'll make deploying and maintaining OpenStack harder.

Flavio

From: Gorka Eguileor [gegui...@redhat.com]
Sent: Monday, August 03, 2015 1:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] A possible solution for HA
Active-Active

On Mon, Aug 03, 2015 at 10:22:42AM +0200, Thierry Carrez wrote:
 Flavio Percoco wrote:
  [...]
  So, to summarize, I love the effort behind this. But, as others have
  mentioned, I'd like us to take a step back, run this across teams and
  come up with an opinionated solution that would work for everyone.
 
  Starting this discussion now would allow us to prepare enough material
  to reach an agreement in Tokyo and work on a single solution for
  Mitaka. This sounds like a good topic for a cross-project session.

 +1

 The last thing we want is to rush a solution that would only solve a
 particular project use case. Personally I'd like us to pick the simplest
 solution that can solve most of the use cases. Each of the solutions
 bring something to the table -- Zookeeper is mature, Consul is
 featureful, etcd is lean and simple... Let's not dive into the best
 solution but clearly define the problem space first.

 --
 Thierry Carrez (ttx)


I don't see those as different solutions from the point of view of
Cinder; they are different implementations of the same solution:
using a DLM to lock resources.

We keep circling back to the fancy names like moths to a flame, when we
are still discussing whether we need or want a DLM for the solution.  I
think we should stop doing that; we need to decide on the solution from
an abstract point of view (like you say, define the problem space) and
not get caught up in discussions of which one of those is best.  If we
end up deciding to use a DLM, which is unlikely, then we can look into
available drivers in Tooz and if we are not convinced with the ones we
have (Redis, ZooKeeper, etc.) then we discuss which one we should be
using instead and just add it to Tooz.

-- 
@flaper87
Flavio Percoco

Re: [openstack-dev] [all] Does OpenStack need a common solution for DLM? (was: Re: [Cinder] A possible solution for HA Active-Active)

2015-08-04 Thread Mark Voelker
On Aug 3, 2015, at 6:09 PM, Flavio Percoco fla...@redhat.com wrote:
 
 On 03/08/15 19:48 +0200, Gorka Eguileor wrote:
 On Mon, Aug 03, 2015 at 03:42:48PM +, Fox, Kevin M wrote:
 I'm usually for abstraction layers, but they don't always pay off very well 
 due to catering to the lowest common denominator.
 
 Let's clearly define the problem space first. IFF the problem space can be 
 fully implemented using Tooz, then let's do that. Then the operator can 
 choose. If Tooz can't and won't handle the problem space, then we're trying 
 to fit a square peg in a round hole.
 
 What do you mean by clearly define the problem space?  We know what we
 want; we just need to agree on the compromises we are willing to make:
 use a DLM and make admins' life a little harder (only for those that
 deploy A-A) but have an A-A solution earlier, or postpone A-A
 functionality but make their life easier.
 
 And we already know that Tooz is not the Holy Grail and will not perform
 the miracle of giving Cinder HA A-A.  It is only a piece of the problem,
 so there's nothing to discuss there, and it's not a square peg in a
 round hole, because it fits perfectly for what it is intended. But once
 you have filled that square hole you need another peg, the round one for
 the round hole.
 
 If people are expecting to find one thing that fixes everything and
 gives us HA A-A on its own, then I believe they are a little bit lost.
 
 As confusing as it seems, we've now moved from talking about just
 Cinder to understanding whether this is a problem many projects have
 and whether we can find a solution that will work for most of them.
 Therefore, I've renamed this thread to make this more evident.
 
 Now, so far we have:
 
 - Ironic has an internal distributed lock and it uses a hash-ring
 - Ceilometer uses tooz
 - Several projects use a file lock or some other fashion of
 distributed lock.
 - *Add yours here*



/me adds a couple more here and fixes formatting

From an operator’s point of view, it may be worth noting some other parts of 
various projects that could use (or already do use) the same systems that 
provide DLM capabilities: *if* those systems solve well for multiple use cases, 
it may make operators’ lives easier to coalesce around them when possible.  E.g.: 
fewer moving parts, less to debug, more shared logic.  From that perspective, 
Neutron has also had some discussion around tooz in the recent past for 
agent monitoring and state awareness.  This thread captures some of people’s 
thinking:

http://lists.openstack.org/pipermail/openstack-dev/2015-April/061268.html

And there’s a related spec for agent monitoring using tooz as an alternative to 
heartbeat/DB mechanisms currently in review here:

https://review.openstack.org/#/c/174438/

It may also be worth noting that some of the original thinking behind this came 
from Nova’s adoption of Zookeeper for ServiceGroups several years ago:

https://blueprints.launchpad.net/nova/+spec/zk-service-heartbeat

However when the Neutron community began discussing this, the idea of using 
tooz (which didn’t exist back when Nova implemented the ServiceGroup API’s) 
rather than using Zookeeper directly came up in review and seemed to make a lot 
more sense to everyone.  
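
For the curious, the group-membership approach being discussed looks roughly 
like this with tooz.  A sketch only: the URLs, group/member names, and the 
exception spelling are my assumptions based on tooz of this era, not the 
proposed Neutron code:

    import time

    from tooz import coordination

    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'l3-agent-host-7')
    coord.start()

    # Join a well-known group; liveness then becomes a property of the
    # coordination session rather than heartbeat rows in the database.
    try:
        coord.create_group(b'neutron-agents').get()
    except coordination.GroupAlreadyExist:
        pass
    coord.join_group(b'neutron-agents').get()

    while True:
        coord.heartbeat()   # keep the session (and membership) alive
        time.sleep(1)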

And as I’ve noted, there are individual plugins that have already adopted tooz 
specifically for distributed locking, such as the NSXv plugin for Neutron:

https://review.openstack.org/#/c/188015/

At Your Service,

Mark T. Voelker



 
 Each one of these projects has a specific use-case that doesn't
 necessarily overlap. I'd like to see those cases listed somewhere.
 We've done this in the past already and I believe we can do it now as
 well. As I've mentioned in another thread, Gorka has done this for
 Cinder already now we need to do it for other services too. Even if
 your project has a DLM in place, it'd be good to know what problem you
 solved with it as it may be a problem that other projects have as
 well.
 
 As a community, we've gotten by without adding a new service
 for DLMs thus far. I'm not saying we don't need one but, as mentioned
 in other threads, let's give this some more thought before we add a new
 service that'll make deploying and maintaining OpenStack harder.
 
 Flavio
 
 From: Gorka Eguileor [gegui...@redhat.com]
 Sent: Monday, August 03, 2015 1:43 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Cinder] A possible solution for HA
 Active-Active
 
 On Mon, Aug 03, 2015 at 10:22:42AM +0200, Thierry Carrez wrote:
  Flavio Percoco wrote:
   [...]
   So, to summarize, I love the effort behind this. But, as others have
   mentioned, I'd like us to take a step back, run this across teams and
   come up with an opinionated solution that would work for everyone.
  
   Starting this discussion now would allow us to prepare enough material
   to reach an agreement in Tokyo and work on a single solution for
   Mitaka. This sounds like 

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Mark Voelker
On Nov 4, 2015, at 4:41 PM, Gregory Haynes  wrote:
> 
> Excerpts from Clint Byrum's message of 2015-11-04 21:17:15 +:
>> Excerpts from Joshua Harlow's message of 2015-11-04 12:57:53 -0800:
>>> Ed Leafe wrote:
 On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims
 
 I thought that the operators at that session made it very clear that they 
 would *not* run any Java applications, and that if OpenStack required a 
 Java app to run, they would no longer use it.
 
 I like the idea of using Zookeeper as the DLM, but I don't think it should 
 be set up as a default, even for devstack, given the vehement opposition 
 expressed.
 
>>> 
>>> What should be the default then?
>>> 
>>> As for 'vehement opposition' I didn't see that as being there, I saw a 
>>> small set of people say 'I don't want to run java or I can't run java', 
>>> some comments about requiring the use of Oracle's JVM (which isn't correct: 
>>> OpenJDK works for folks that I have asked in the zookeeper community and 
>>> elsewhere) and the rest of the folks were ok with it...
>>> 
>>> If people want an alternate driver, propose it IMHO...
>>> 
>> 
>> The few operators who stated this position are very much appreciated
>> for standing up and making it clear. It has helped us not step into a
>> minefield with a native ZK driver!
>> 
>> Consul is the most popular second choice, and should work fine for the
>> use cases we identified. It will not be sufficient if we ever have
>> a use case where many agents must lock many resources, since Consul
>> does not offer a way to grant lock access in a fair manner (ZK does,
>> and we're not aware of any others that do actually). Using Consul or
>> etcd for this case would result in situations where lock waiters may
>> wait _forever_, and will likely wait longer than they should at times.
>> Hopefully we can simply avoid the need for this in OpenStack altogether.
>> 
>> I do _not_ think we should wait for constrained operators to scream
>> at us about ZK to write a Consul driver. It's important enough that we
>> should start documenting all of the issues we expect to see with Consul
>> (it's not widely packaged, for instance) and writing a driver with its
>> own devstack plugin.
>> 
>> If there are Consul experts who did not make it to those sessions,
>> it would be greatly appreciated if you can spend some time on this.
>> 
>> What I don't want to see happen is we get into a deadlock where there's
>> a large portion of users who can't upgrade and no driver to support them.
>> So lets stay ahead of the problem, and get a set of drivers that works
>> for everybody!
>> 
> 
> One additional note - out of the three possible options I see for tooz
> drivers in production (zk, consul, etcd) we currently only have drivers
> for ZK. This means that unless new drivers are created, when we depend
> on tooz we will be requiring folks deploy zk.
> 
> It would be *awesome* if some folks stepped up to create and support at
> least one of the alternate backends.
> 
> Although I am a fan of the ZK solution, I have an old WIP patch for
> creating an etcd driver. I would like to revive and maintain it, but I
> would also need one more maintainer per the new rules for in-tree
> drivers…

For those following along at home, said WIP etcd driver patch is here:

https://review.openstack.org/#/c/151463/

And said rules are at:

https://review.openstack.org/#/c/240645/

And FWIW, I too am personally fine with ZK as a default for devstack.
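
As an illustration of the fairness point Clint makes above, this is roughly 
what the ZK lock recipe looks like from Python via kazoo (host and path are 
placeholders).  Under the hood it queues waiters with ephemeral sequential 
znodes, which is what makes the grant order fair:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')   # placeholder host
    zk.start()

    # kazoo's Lock implements the standard ZK lock recipe: each waiter
    # creates an ephemeral sequential znode and watches its predecessor,
    # so locks are granted in arrival order and no waiter starves.
    lock = zk.Lock('/locks/volume-1234', 'worker-a')
    with lock:
        pass    # ...critical section...

    zk.stop()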

At Your Service,

Mark T. Voelker

> 
> Cheers,
> Greg
> 



Re: [openstack-dev] [glance] The current state of glance v2 in public clouds

2015-09-15 Thread Mark Voelker
As another data point, I took a poke around the OpenStack Marketplace [1] this 
morning and found:

* 1 distro/appliance claims v1 support
* 3 managed services claim v1 support
* 3 public clouds claim v1 support

And everyone else claims v2 support.  I’d encourage vendors to check their 
Marketplace data for accuracy…if something’s wrong there, reach out to 
ecosys...@openstack.org to enquire about fixing it.  If you simply aren’t 
listed on the Marketplace and would like to be, check out [2].

[1] https://www.openstack.org/marketplace/
[2] http://www.openstack.org/assets/marketplace/join-the-marketplace.pdf

At Your Service,

Mark T. Voelker



> On Sep 15, 2015, at 7:32 AM, Monty Taylor  wrote:
> 
> Hi!
> 
> In some of our other discussions, there have been musings such as "people 
> want to..." or "people are concerned about..." Those are vague and 
> unsubstantiated. Instead of "people" - I thought I'd enumerate actual data 
> that I have personally empirically gathered.
> 
> I currently have an account on 12 different public clouds:
> 
> Auro
> CityCloud
> Dreamhost
> Elastx
> EnterCloudSuite
> HP
> OVH
> Rackspace
> RunAbove
> Ultimum
> UnitedStack
> Vexxhost
> 
> 
> (if, btw, you have a public cloud that I did not list above, please poke me 
> and let's get me an account so that I can make sure you're listed/supported 
> in os-client-config and also so that I don't make sweeping generalizations 
> without you)
> 
> In case you care- those clouds cover US, Canada, Sweden, UK, France, Germany, 
> Netherlands, Czech Republic and China.
> 
> Here's the rundown:
> 
> 11 of the 12 clouds run Glance v2; 1 has only Glance v1
> 11 of the 12 clouds support image-create, 1 uses tasks
> 8 of the 12 support qcow2, 3 require raw, 1 requires vhd
> 
> Use this data as you will.
> 
> Monty
> 



Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-29 Thread Mark Voelker




> On Sep 29, 2015, at 12:36 PM, Matt Fischer  wrote:
> 
> 
> 
> I agree with John Griffith. I don't have any empirical evidence to back
> my "feelings" on that one but it's true that we weren't able to enable
> Cinder v2 until now.
> 
> Which makes me wonder: When can we actually deprecate an API version? I
> *feel* we are fast to jump on the deprecation when the replacement isn't
> 100% ready yet for several versions.
> 
> --
> Mathieu
> 
> 
> I don't think it's too much to ask that versions can't be deprecated until 
> the new version is 100% working, passing all tests, and the clients (at least 
> python-xxxclients) can handle it without issues. Ideally I'd like to also 
> throw in the criteria that devstack, rally, tempest, and other services are 
> all using and exercising the new API.
> 
> I agree that things feel rushed.


FWIW, the TC recently created an assert:follows-standard-deprecation tag.  Ivan 
linked to a thread in which Thierry asked for input on it, but FYI the final 
language as it was approved last week [1] is a bit different than originally 
proposed.  It now requires one release plus 3 linear months of 
deprecated-but-still-present-in-the-tree as a minimum, and recommends at least 
two full stable releases for significant features (an entire API version would 
undoubtedly fall into that bucket).  It also requires that a migration path 
will be documented.  However to Matt’s point, it doesn’t contain any language 
that says specific things like:

In the case of major API version deprecation:
* $oldversion and $newversion must both work with [cinder|nova|whatever]client 
and openstackclient during the deprecation period.
* It must be possible to run $oldversion and $newversion concurrently on the 
servers to ensure end users don’t have to switch overnight. 
* Devstack uses $newversion by default.
* $newversion works in Tempest/Rally/whatever else.

What it *does* do is require that a thread be started here on 
openstack-operators [2] so that operators can provide feedback.  I would hope 
that feedback like “I can’t get clients to use it so please don’t remove it 
yet” would be taken into account by projects, which seems to be exactly what’s 
happening in this case with Cinder v1.  =)

I’d hazard a guess that the TC would be interested in hearing about whether you 
think that plan is a reasonable one (and given that TC election season is upon 
us, candidates for the TC probably would too).

[1] https://review.openstack.org/#/c/207467/
[2] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59

At Your Service,

Mark T. Voelker





Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-28 Thread Mark Voelker
On Sep 28, 2015, at 9:03 AM, Doug Hellmann  wrote:
> 
> Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>> On 28 September 2015 at 12:10, Sean Dague  wrote:
>>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
 Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +:
> On Sep 25, 2015, at 1:56 PM, Doug Hellmann  wrote:
>> 
>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +:
>>> 
> 
> Ah.  Thanks for bringing that up, because I think this may be an area 
> where there’s some misconception about what DefCore is set up to do 
> today.  In its present form, the Board of Directors has structured 
> DefCore to look much more at trailing indicators of market acceptance 
> rather than future technical direction.  More on that over here. [1]
 
 And yet future technical direction does factor in, and I'm trying
 to add a new heuristic to that aspect of consideration of tests:
 Do not add tests that use proxy APIs.
 
 If there is some compelling reason to add a capability for which
 the only tests use a proxy, that's important feedback for the
 contributor community and tells us we need to improve our test
 coverage. If the reason to use the proxy is that no one is deploying
 the proxied API publicly, that is also useful feedback, but I suspect
 we will, in most cases (glance is the exception), say "Yeah, that's
 not how we mean for you to run the services long-term, so don't
 include that capability."
>>> 
>>> I think we might also just realize that some of the tests are using the
>>> proxy because... that's how they were originally written.
>> 
>> From my memory, thats how we got here.
>> 
>> The Nova tests needed to use an image API. (i.e. list images used to
>> check the snapshot Nova, or similar)
>> 
>> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>> it being the only widely deployed option.
> 
> Right, and I want to make sure it's clear that I am differentiating
> between "these tests are bad" and "these tests are bad *for DefCore*".
> We should definitely continue to test the proxy API, since it's a
> feature we have and that our users rely on.
> 
>> 
>>> And they could be rewritten to use native APIs.
>> 
>> +1
>> Once Glance v2 is available.
>> 
>> Adding Glance v2 as advisory seems a good step to help drive more adoption.
> 
> I think we probably don't want to rewrite the existing tests, since
> that effectively changes the contract out from under existing folks
> complying with DefCore.  If we need new, parallel, tests that do
> not use the proxy to make more suitable tests for DefCore to use,
> we should create those.
> 
>> 
>>> I do agree that "testing proxies" should not be part of Defcore, and I
>>> like Doug's idea of making that a new heuristic in test selection.
>> 
>> +1
>> Thats a good thing to add.
>> But I don't think we had another option in this case.
> 
> We did have the option of leaving the feature out and highlighting the
> discrepancy to the contributors so tests could be added. That
> communication didn't really happen, as far as I can tell.
> 
 Sorry, I wasn't clear. The Nova team would, I expect, view the use of
 those APIs in DefCore as a reason to avoid deprecating them in the code
 even if they wanted to consider them as legacy features that should be
 removed. Maybe that's not true, and the Nova team would be happy to
 deprecate the APIs, but I did think that part of the feedback cycle we
 were establishing here was to have an indication from the outside of the
 contributor base about what APIs are considered important enough to keep
 alive for a long period of time.
>>> I'd also agree with this. Defcore is a wider contract that we're trying
>>> to get even more people to write to because that cross section should be
>>> widely deployed. So deprecating something in Defcore is something I
>>> think most teams, Nova included, would be very reluctant to do. It's
>>> just asking for breaking your users.
>> 
>> I can't see us removing the proxy APIs in Nova any time soon,
>> regardless of DefCore, as it would break too many people.
>> 
>> But personally, I like dropping them from Defcore, to signal that the
>> best practice is to use the Glance v2 API directly, rather than the
>> Nova proxy.
>> 
>> Maybe they are just marked deprecated, but still required, although
>> that sounds a bit crazy.
> 
> Marking them as deprecated, then removing them from DefCore, would let
> the Nova team make a technical decision about what to do with them
> (maybe they get spun out into a separate service, maybe they're so
> popular you just keep them, whatever).

So, here’s that Who’s On First thing again.  Just to clarify: Nova does not 
need Capabilities to be removed from Guidelines in order to make technical 
decisions about what to do with a feature 

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-28 Thread Mark Voelker
FWIW, the most popular client libraries in the last user survey[1] other than 
OpenStack’s own clients were: libcloud (48 respondents), jClouds (36 
respondents), Fog (34 respondents), php-opencloud (21 respondents), DeltaCloud 
(which has been retired by Apache and hasn’t seen a commit in two years, but 17 
respondents are still using it), pkgcloud (15 respondents), and OpenStack.NET 
(14 respondents).  Of those:

* libcloud appears to support the nova-volume API but not the cinder API: 
https://github.com/apache/libcloud/blob/trunk/libcloud/compute/drivers/openstack.py#L251

* jClouds appears to support only the v1 API: 
https://github.com/jclouds/jclouds/tree/jclouds-1.9.1/apis/openstack-cinder/src/main/java/org/jclouds

* Fog also appears to only support the v1 API: 
https://github.com/fog/fog/blob/master/lib/fog/openstack/volume.rb#L99

* php-opencloud appears to only support the v1 API: 
https://php-opencloud.readthedocs.org/en/latest/services/volume/index.html

* DeltaCloud I honestly haven’t looked at since it’s thoroughly dead, but I 
can’t imagine it supports v2.

* pkgcloud has beta-level support for Cinder but I think it’s v1 (may be 
mistaken): https://github.com/pkgcloud/pkgcloud/#block-storagebeta and 
https://github.com/pkgcloud/pkgcloud/tree/master/lib/pkgcloud/openstack/blockstorage

* OpenStack.NET does appear to support v2: 
http://www.openstacknetsdk.org/docs/html/T_net_openstack_Core_Providers_IBlockStorageProvider.htm

Now, it’s anyone’s guess whether users of those client libraries actually try 
to use them for volume operations (anecdotally I know a few 
clouds I help support are using client libraries that only support v1), and 
some users might well be using more than one library or mixing in code they 
wrote themselves.  But most of the above that support Cinder do seem to rely on 
v1.  Some management tools also appear to still rely on the v1 API (such as 
RightScale: 
http://docs.rightscale.com/clouds/openstack/openstack_config_prereqs.html ).  
From that perspective it might be useful to keep it around a while longer and 
disable it by default.  Personally I’d probably lean that way, especially given 
that folks here on the ops list are still reporting problems too.

That said, v1 has been deprecated since Juno, and the Juno release notes said 
it was going to be removed [2], so there’s a case to be made that there’s been 
plenty of fair warning too I suppose.

[1] 
http://superuser.openstack.org/articles/openstack-application-developers-share-insights
[2] https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Upgrade_Notes_7
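
For anyone wondering what explicitly targeting one version or the other looks 
like in client code, here's a minimal sketch with python-cinderclient.  The 
credentials are placeholders, and the exact signature varied a bit across 
client releases of this era:

    from cinderclient import client

    # Pin the API version explicitly rather than relying on whatever the
    # catalog's "volume" endpoint happens to point at.
    cinder = client.Client('2',              # or '1' against a v1 endpoint
                           'demo',           # username
                           'secret',         # password
                           'demo-project',   # tenant/project
                           'http://keystone.example.com:5000/v2.0')

    for vol in cinder.volumes.list():
        print(vol.id, vol.status)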

At Your Service,

Mark T. Voelker



> On Sep 28, 2015, at 7:17 PM, Sam Morrison  wrote:
> 
> Yeah we’re still using v1 as the clients that are packaged with most distros 
> don’t support v2 easily.
> 
> Eg. with Ubuntu Trusty they have version 1.1.1, I just updated our “volume” 
> endpoint to point to v2 (we have a volumev2 endpoint too) and the client 
> breaks.
> 
> $ cinder list
> ERROR: OpenStack Block Storage API version is set to 1 but you are accessing 
> a 2 endpoint. Change its value through --os-volume-api-version or 
> env[OS_VOLUME_API_VERSION].
> 
> Sam
> 
> 
>> On 29 Sep 2015, at 8:34 am, Matt Fischer  wrote:
>> 
>> Yes, people are probably still using it. Last time I tried to use V2 it 
>> didn't work because the clients were broken, and then it went back on the 
>> bottom of my to do list. Is this mess fixed?
>> 
>> http://lists.openstack.org/pipermail/openstack-operators/2015-February/006366.html
>> 
>> On Mon, Sep 28, 2015 at 4:25 PM, Ivan Kolodyazhny  wrote:
>> Hi all,
>> 
>> As you may know, we've got 2 APIs in Cinder: v1 and v2. Cinder v2 API was 
>> introduced in Grizzly and the v1 API has been deprecated since Juno.
>> 
>> After [1] is merged, Cinder API v1 is disabled in gates by default. We've 
>> got a filed bug [2] to remove Cinder v1 API at all.
>> 
>> 
>> According to the Deprecation Policy [3] it looks like we are OK to remove it. 
>> But I would like to ask Cinder API users if any still use API v1.
>> Should we remove it entirely in the Mitaka release or just disable it by 
>> default in the cinder.conf?
>> 
>> AFAIR, only Rally doesn't support API v2 now and I'm going to implement it 
>> asap.
>> 
>> [1] https://review.openstack.org/194726 
>> [2] https://bugs.launchpad.net/cinder/+bug/1467589
>> [3] 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073576.html
>> 
>> Regards,
>> Ivan Kolodyazhny
>> 

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 12:00 PM, Chris Hoge  wrote:
> 
>> 
>> On Sep 25, 2015, at 6:59 AM, Doug Hellmann  wrote:
>> 
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
 
 On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  
 wrote:
 
 Hi Melanie
 
 In general, images created by glance v1 API should be accessible using v2 
 and
 vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with 
 an image was
 causing incompatibility. These fixes were back-ported to stable/kilo.
 
 Thanks
 Sabari
 
 [1] - https://bugs.launchpad.net/glance/+bug/1447215
 [2] - https://bugs.launchpad.net/bugs/1419823 
 [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
 
 
 On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
 Hi All,
 
 I have been looking and haven't yet located documentation about how to 
 upgrade from glance v1 to glance v2.
 
 From what I understand, images and snapshots created with v1 can't be 
 listed/accessed through the v2 api. Are there instructions about how to 
 migrate images and snapshots from v1 to v2? Are there other 
 incompatibilities between v1 and v2?
 
 I'm asking because I have read that glance v1 isn't defcore compliant and 
 so we need all projects to move to v2, but the incompatibility from v1 to 
 v2 is preventing that in nova. Is there anything else preventing v2 
 adoption? Could we move to glance v2 if there's a migration path from v1 
 to v2 that operators can run through before upgrading to a version that 
 uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here: 
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>> capabilities [1] as “advisory” (e.g. not required now but might be
>> in the future unless folks provide feedback as to why it shouldn’t
>> be) in it’s next Guideline, which is due to go the Board of Directors
>> in January and will cover Juno, Kilo, and Liberty [2].   The Nova image
>> API’s are already required [3][4].  As discussion began about which
>> Glance capabilities to include and whether or not to keep the Nova
>> image API’s as required, it was pointed out that the many ways images
>> can currently be created in OpenStack is problematic from an
>> interoperability point of view in that some clouds use one and some use
>> others.  To be included in a DefCore Guideline, capabilities are scored
>> against twelve Criteria [5], and need to achieve a certain total to be
>> included.  Having a bunch of different ways to deal with images
>> actually hurts the chances of any one of them meeting the bar because
>> it makes it less likely that they’ll achieve several criteria.  For
>> example:
>>> 
>>> One of the criteria is “widely deployed” [6].  In the case of images, 
>>> the Nova image-create API and Glance v2 are both pretty widely deployed 
>>> [7]; Glance v1 isn’t, and at least one cloud uses none of those but instead 
>>> uses the import task API.
>>> 
>>> Another criteria is “atomic” [8] which basically means the capability is 
>>> unique and can’t be built out of other required capabilities.  Since the 
>>> Nova image-create API is already required and effectively does the same 
>>> thing as glance v1 and v2’s image create API’s, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I agree in general. At the time the standard was approved, the
> only API we had available to us (because only nova code was
> being considered for inclusion) was the proxy.
> 
> We’re looking at v2 as the required api going forward, but
> as has been mentioned before, the nova proxy requires that
> v1 be present as a non-public api. Not the best situation in
> the world, and I’m personally looking forward to Glance,
> Cinder, and Neutron becoming explicitly required APIs in
> DefCore.
> 

Also worth pointing out here: when we talk about “doing the same thing” from a 
DefCore perspective, we’re essentially talking about what’s exposed to the end 
user, not how that’s implemented in OpenStack’s source code.  So from an end 
user’s perspective:

If I call nova image-create, I get an image in my cloud.  If I call the Glance 
v2 API to create an image, I also get an image in my cloud.  I neither see nor 
care that Nova is actually talking to Glance in the background, because if I’m 
writing code that uses the OpenStack API’s, I need to pick which one of those 
two API’s to make my code call upon to put an image in my cloud.  Or, in the 
worst case, I have to write a bunch of if/else loops into my code 

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 10:42 AM, Andrew Laski  wrote:
> 
> On 09/25/15 at 09:59am, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
>>> >
>>> > On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  
>>> > wrote:
>>> >
>>> > Hi Melanie
>>> >
>>> > In general, images created by glance v1 API should be accessible using v2 
>>> > and
>>> > vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with 
>>> > an image was
>>> > causing incompatibility. These fixes were back-ported to stable/kilo.
>>> >
>>> > Thanks
>>> > Sabari
>>> >
>>> > [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> > [2] - https://bugs.launchpad.net/bugs/1419823
>>> > [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
>>> >
>>> >
>>> > On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
>>> > Hi All,
>>> >
>>> > I have been looking and haven't yet located documentation about how to 
>>> > upgrade from glance v1 to glance v2.
>>> >
>>> > From what I understand, images and snapshots created with v1 can't be 
>>> > listed/accessed through the v2 api. Are there instructions about how to 
>>> > migrate images and snapshots from v1 to v2? Are there other 
>>> > incompatibilities between v1 and v2?
>>> >
>>> > I'm asking because I have read that glance v1 isn't defcore compliant and 
>>> > so we need all projects to move to v2, but the incompatibility from v1 to 
>>> > v2 is preventing that in nova. Is there anything else preventing v2 
>>> > adoption? Could we move to glance v2 if there's a migration path from v1 
>>> > to v2 that operators can run through before upgrading to a version that 
>>> > uses v2 as the default?
>>> 
>>> Just to clarify the DefCore situation a bit here:
>>> 
>>> The DefCore Committee is considering adding some Glance v2
>> capabilities [1] as “advisory” (e.g. not required now but might be
>> in the future unless folks provide feedback as to why it shouldn’t
>> be) in its next Guideline, which is due to go to the Board of Directors
>> in January and will cover Juno, Kilo, and Liberty [2].   The Nova image
>> API’s are already required [3][4].  As discussion began about which
>> Glance capabilities to include and whether or not to keep the Nova
>> image API’s as required, it was pointed out that the many ways images
>> can currently be created in OpenStack are problematic from an
>> interoperability point of view in that some clouds use one and some use
>> others.  To be included in a DefCore Guideline, capabilities are scored
>> against twelve Criteria [5], and need to achieve a certain total to be
>> included.  Having a bunch of different ways to deal with images
>> actually hurts the chances of any one of them meeting the bar because
>> it makes it less likely that they’ll achieve several criteria.  For
>> example:
>>> 
>>> One of the criteria is “widely deployed” [6].  In the case of images, the 
>>> Nova image-create API and Glance v2 are both pretty widely deployed [7]; 
>>> Glance v1 isn’t, and at least one cloud uses none of those but instead 
>>> uses the import task API.
>>> 
>>> Another criterion is “atomic” [8], which basically means the capability is 
>>> unique and can’t be built out of other required capabilities.  Since the 
>>> Nova image-create API is already required and effectively does the same 
>>> thing as glance v1 and v2’s image create API’s, the latter lose points.
>> 
>> This seems backwards. The Nova API doesn't "do the same thing" as
>> the Glance API, it is a *proxy* for the Glance API. We should not
>> be requiring proxy APIs for interop. DefCore should only be using
>> tests that talk directly to the service that owns the feature being
>> tested.
> 
> I completely agree with this.  I will admit to having some confusion as to 
> why Glance capabilities have been tested through Nova and I know others have 
> raised this same thought within the process.

Because it turns out that’s how most of the world is dealing with images.

Generally speaking, the nova image API and glance v2 API’s have roughly equal 
adoption among public and private cloud products, but among the client SDK’s 
people are using to interact with OpenStack the nova image API’s have much 
better adoption (see notes in previous message for details).  So we gave the 
world lots of different ways to do the same thing and the world has strongly 
adopted two of them (with reasonable evidence that the Nova image API is 
actually the most-adopted of the lot).  If you’re looking for the most 
interoperable way to create an image across lots of different OpenStack clouds 
today, it’s actually through Nova.

At Your Service,

Mark T. Voelker

> 
>> 
>> Doug
>> 
>>> 
>>> Another criterion is “future direction” [9].  Glance v1 gets no points here 
>>> since v2 is the current API, has been for a while, and there’s even been 
>>> some work on v3 already.
>>> 
>>> There is also a criterion for “used by clients” [11]. 

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 1:56 PM, Doug Hellmann <d...@doughellmann.com> wrote:
> 
>> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +0000:
>> On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosma...@rackspace.com> 
>> wrote:
>>> 
>>> I'd like to clarify something.
>>> 
>>> On 9/25/15, 12:16 PM, "Mark Voelker" <mvoel...@vmware.com> wrote:
>>> [big snip]
>>>> Also worth pointing out here: when we talk about “doing the same thing”
>>>> from a DefCore perspective, we’re essentially talking about what’s
>>>> exposed to the end user, not how that’s implemented in OpenStack’s source
>>>> code.  So from an end user’s perspective:
>>>> 
>>>> If I call nova image-create, I get an image in my cloud.  If I call the
>>>> Glance v2 API to create an image, I also get an image in my cloud.  I
>>>> neither see nor care that Nova is actually talking to Glance in the
>>>> background, because if I’m writing code that uses the OpenStack API’s, I
>>>> need to pick which one of those two API’s to make my code call upon to
>>>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>>>> of if/else loops into my code because some clouds I want to use only
>>>> allow one way and some allow only the other.
>>> 
>>> The above is a bit inaccurate.
>>> 
>>> The nova image-create command does give you an image in your cloud.  The
>>> image you get, however, is a snapshot of an instance that has been
>>> previously created in Nova.  If you don't have an instance, you cannot
>>> create an image via that command.  There is no provision in the Compute
>>> (Nova) API to allow you to create an image out of bits that you supply.
>>> 
>>> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
>>> register them as an image which you can then use to boot instances from by
>>> using the Compute API.  But note that if all you have available is the
>>> Images API, you cannot create an image of one of your instances.
>>> 
>>>> So from that end-user perspective, the Nova image-create API indeed does
>>>> “do the same thing” as the Glance API.
>>> 
>>> They don't "do the same thing".  Even if you have full access to the
>>> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
>>> create an image of an instance, which is by far the largest use-case for
>>> image creation.  You can't do it through Glance, because Glance doesn't
>>> know anything about instances.  Nova has to know about Glance, because it
>>> needs to fetch images for instance creation, and store images for
>>> on-demand images of instances.
>> 
>> Yup, that’s fair: this was a bad example to pick (need moar coffee I guess). 
>>  Let’s use image-list instead. =)
> 
> From a "technical direction" perspective, I still think it's a bad

Ah.  Thanks for bringing that up, because I think this may be an area where 
there’s some misconception about what DefCore is set up to do today.  In its 
present form, the Board of Directors has structured DefCore to look much more 
at trailing indicators of market acceptance than at future technical 
direction.  More on that over here. [1] 



> situation for us to be relying on any proxy APIs like this. Yes,
> they are widely deployed, but we want to be using glance for image
> features, neutron for networking, etc. Having the nova proxy is
> fine, but while we have DefCore using tests to enforce the presence
> of the proxy we can't deprecate those APIs.


Actually that’s not true: DefCore can totally deprecate things too, and can do 
so in response to the technical community deprecating things.  See my comments 
in this review [2].  Maybe I need to write another post about that...

/me envisions the title being “Who’s on First?”


> 
> What do we need to do to make that change happen over the next cycle
> or so?

There are several things that can be done:

First, if you don’t like the Criteria or the weights that the various Criteria 
carry today, we can suggest changes to them.  The Board of Directors will 
ultimately have to approve that change, but we can certainly ask (I think 
there’s plenty of evidence that our Directors listen to the community’s 
concerns).  There’s actually already some early discussion about that now, 
though most of the energy is going into other things at the moment (because 
deadlines).  See post above for links.

Second, we certainly could consider changes to the Capabilities that are 
currently required.  That happens

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Mark Voelker
On Sep 25, 2015, at 1:24 PM, Brian Rosmaita <brian.rosma...@rackspace.com> 
wrote:
> 
> I'd like to clarify something.
> 
> On 9/25/15, 12:16 PM, "Mark Voelker" <mvoel...@vmware.com> wrote:
> [big snip]
>> Also worth pointing out here: when we talk about “doing the same thing”
>> from a DefCore perspective, we’re essentially talking about what’s
>> exposed to the end user, not how that’s implemented in OpenStack’s source
>> code.  So from an end user’s perspective:
>> 
>> If I call nova image-create, I get an image in my cloud.  If I call the
>> Glance v2 API to create an image, I also get an image in my cloud.  I
>> neither see nor care that Nova is actually talking to Glance in the
>> background, because if I’m writing code that uses the OpenStack API’s, I
>> need to pick which one of those two API’s to make my code call upon to
>> put an image in my cloud.  Or, in the worst case, I have to write a bunch
>> of if/else loops into my code because some clouds I want to use only
>> allow one way and some allow only the other.
> 
> The above is a bit inaccurate.
> 
> The nova image-create command does give you an image in your cloud.  The
> image you get, however, is a snapshot of an instance that has been
> previously created in Nova.  If you don't have an instance, you cannot
> create an image via that command.  There is no provision in the Compute
> (Nova) API to allow you to create an image out of bits that you supply.
> 
> The Image (Glance) APIs (both v1 and v2) allow you to supply the bits and
> register them as an image which you can then use to boot instances from by
> using the Compute API.  But note that if all you have available is the
> Images API, you cannot create an image of one of your instances.
> 
>> So from that end-user perspective, the Nova image-create API indeed does
>> “do the same thing” as the Glance API.
> 
> They don't "do the same thing".  Even if you have full access to the
> Images v1 or v2 API, you will still have to use the Compute (Nova) API to
> create an image of an instance, which is by far the largest use-case for
> image creation.  You can't do it through Glance, because Glance doesn't
> know anything about instances.  Nova has to know about Glance, because it
> needs to fetch images for instance creation, and store images for
> on-demand images of instances.

Yup, that’s fair: this was a bad example to pick (need moar coffee I guess).  
Let’s use image-list instead. =)

At Your Service,

Mark T. Voelker


> 
> 
>> At Your Service,
>> 
>> Mark T. Voelker
> 
> Glad to be of service, too,
> brian
> 
> 


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-24 Thread Mark Voelker
> 
> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
> 
> Hi Melanie
> 
> In general, images created by glance v1 API should be accessible using v2 and
> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with an 
> image was
> causing incompatibility. These fixes were back-ported to stable/kilo.
> 
> Thanks
> Sabari
> 
> [1] - https://bugs.launchpad.net/glance/+bug/1447215
> [2] - https://bugs.launchpad.net/bugs/1419823 
> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
> 
> 
> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
> Hi All,
> 
> I have been looking and haven't yet located documentation about how to 
> upgrade from glance v1 to glance v2.
> 
> From what I understand, images and snapshots created with v1 can't be 
> listed/accessed through the v2 api. Are there instructions about how to 
> migrate images and snapshots from v1 to v2? Are there other incompatibilities 
> between v1 and v2?
> 
> I'm asking because I have read that glance v1 isn't defcore compliant and so 
> we need all projects to move to v2, but the incompatibility from v1 to v2 is 
> preventing that in nova. Is there anything else preventing v2 adoption? Could 
> we move to glance v2 if there's a migration path from v1 to v2 that operators 
> can run through before upgrading to a version that uses v2 as the default?

Just to clarify the DefCore situation a bit here: 

The DefCore Committee is considering adding some Glance v2 capabilities [1] as 
“advisory” (e.g. not required now but might be in the future unless folks 
provide feedback as to why it shouldn’t be) in its next Guideline, which is 
due to go to the Board of Directors in January and will cover Juno, Kilo, and 
Liberty [2].  The Nova image API’s are already required [3][4].  As discussion 
began about which Glance capabilities to include and whether or not to keep the 
Nova image API’s as required, it was pointed out that the many ways images can 
currently be created in OpenStack are problematic from an interoperability point 
of view in that some clouds use one and some use others.  To be included in a 
DefCore Guideline, capabilities are scored against twelve Criteria [5], and 
need to achieve a certain total to be included.  Having a bunch of different 
ways to deal with images actually hurts the chances of any one of them meeting 
the bar because it makes it less likely that they’ll achieve several criteria.  
For example:

One of the criteria is “widely deployed” [6].  In the case of images, the Nova 
image-create API and Glance v2 are both pretty widely deployed [7]; Glance v1 
isn’t, and at least one cloud uses none of those but instead uses the import 
task API.

Another criterion is “atomic” [8], which basically means the capability is unique 
and can’t be built out of other required capabilities.  Since the Nova 
image-create API is already required and effectively does the same thing as 
glance v1 and v2’s image create API’s, the latter lose points.

Another criterion is “future direction” [9].  Glance v1 gets no points here 
since v2 is the current API, has been for a while, and there’s even been some 
work on v3 already.

There is also a criterion for “used by clients” [11].  Unfortunately both Glance 
v1 and v2 fall down pretty hard here: of all the client libraries users reported 
in the last user survey, it appears that only one other than the OpenStack 
clients supports Glance v2 and one supports Glance v1, while the rest all rely 
on the Nova API's.  Even within OpenStack we don’t 
necessarily have good adoption since Nova still uses the v1 API to talk to 
Glance and OpenStackClient didn’t support image creation with v2 until this 
week’s 1.7.0 release. [13]
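
(As a toy illustration of the scoring mechanics described above: a capability 
only makes the cut if its criteria scores add up past the bar.  The weights and 
threshold below are made up for the example; the real Criteria and scores live 
in the defcore repo.)

    # Toy sketch of scoring a capability against weighted Criteria.
    # Weights and the threshold are hypothetical, for illustration only;
    # they are not the real DefCore numbers.
    CRITERIA_WEIGHTS = {
        "widely_deployed": 3,
        "atomic": 1,
        "future_direction": 2,
        "used_by_clients": 1,
    }
    THRESHOLD = 5  # hypothetical inclusion bar

    def qualifies(met_criteria):
        """met_criteria: set of criterion names a capability satisfies."""
        total = sum(weight for name, weight in CRITERIA_WEIGHTS.items()
                    if name in met_criteria)
        return total >= THRESHOLD

    # e.g. Glance v1: not widely deployed, not the future direction.
    print(qualifies({"atomic", "used_by_clients"}))  # -> False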

So, it’s a bit problematic that v1 is still being used even within the project 
(though it did get slightly better this week).  It’s highly unlikely at this 
point that it makes any sense for DefCore to require OpenStack Powered products 
to expose v1 to end users.  Even if DefCore does end up requiring Glance v2 to 
be exposed to end users, that doesn’t necessarily mean Nova couldn’t continue 
to use v1: OpenStack Powered products wouldn’t be required to expose v1 to end 
users, but if the nova image-create API remains required then they’d have to 
expose it at least internally to the cloud.  But….really?  That’s still sort of 
an ugly position to be in, because at the end of the day that’s still a lot 
more moving parts than are really necessary and that’s not particularly good 
for operators, end users, developers who want interoperable ways of doing 
things, or pretty much anybody else.  

So basically: yes, it would be *lovely* if we could all get behind fewer ways 
of dealing with images. [10]  

[1] https://review.openstack.org/#/c/213353/
[2] http://git.openstack.org/cgit/openstack/defcore/tree/2016.next.json#n8
[3] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json#n23

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Mark Voelker
On Jun 14, 2016, at 7:28 PM, Monty Taylor  wrote:
> 
> On 06/14/2016 05:42 PM, Doug Hellmann wrote:
>> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
 Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>> Last year, in response to Nova micro-versioning and extension updates[1],
>> the QA team added strict API schema checking to Tempest to ensure that
>> no additional properties were added to Nova API responses[2][3]. In the
>> last year, at least three vendors participating in the OpenStack Powered
>> Trademark program have been impacted by this change, two of which
>> reported this to the DefCore Working Group mailing list earlier this 
>> year[4].
>> 
>> The DefCore Working Group determines guidelines for the OpenStack Powered
>> program, which includes capabilities with associated functional tests
>> from Tempest that must be passed, and designated sections with associated
>> upstream code [5][6]. In determining these guidelines, the working group
>> attempts to balance the future direction of development with lagging
>> indicators of deployments and user adoption.
>> 
>> After a tremendous amount of consideration, I believe that the DefCore
>> Working Group needs to implement a temporary waiver for the strict API
>> checking requirements that were introduced last year, to give downstream
>> deployers more time to catch up with the strict micro-versioning
>> requirements determined by the Nova/Compute team and enforced by the
>> Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and with verifying that things behave in the same manner
> between multiple clouds, then doing this would be a big step backwards. The 
> fundamental disconnect
> here is that the vendors who have implemented out of band extensions or 
> were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**
 
 This is a temporary measure to address the fact that a large number
 of existing tests changed their behavior, rather than having new
 tests added to enforce this new requirement. The result is deployments
 that previously passed these tests may no longer pass, and in fact
 we have several cases where that's true with deployers who are
 trying to maintain their own standard of backwards-compatibility
 for their end users.
>>> 
>>> That's not what happened though. The API hasn't changed and the tests 
>>> haven't
>>> really changed either. We made our enforcement on Nova's APIs a bit 
>>> stricter to
>>> ensure nothing unexpected appeared. For the most part these tests work on any 
>>> version
>>> of OpenStack. (we only test it in the gate on supported stable releases, 
>>> but I
>>> don't expect things to have drastically shifted on older releases) It also
>>> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, 
>>> the
>>> only case it ever fails is when you run something extra, not from the 
>>> community,
>>> either as an extension (which themselves are going away [1]) or another 
>>> service
>>> that wraps nova or imitates nova. I'm personally not comfortable saying 
>>> those
>>> extras are ever part of the OpenStack APIs.
>>> 
 We have basically three options.
 
 1. Tell deployers who are trying to do the right for their immediate
   users that they can't use the trademark.
 
 2. Flag the related tests or remove them from the DefCore enforcement
   suite entirely.
 
 3. Be flexible about giving consumers of Tempest time to meet the
   new requirement by providing a way to disable the checks.
 
 Option 1 goes against our own backwards compatibility policies.
>>> 
>>> I don't think backwards compatibility policies really apply to what we 
>>> define
>>> as the set of tests that as a community we are saying a vendor has to pass 
>>> to
>>> say they're OpenStack. From my perspective as a community we either take a 
>>> hard
>>> stance on this and say to be considered an interoperable cloud (and to get 
>>> the
>>> trademark) you have to actually have an interoperable product. We slowly 
>>> ratchet
>>> up the requirements every 6 months, there isn't any implied backwards
>>> compatibility in doing that. You passed in the past but not in the newer 
>>> stricter
>>> guidelines.
>>> 
>>> Also, even if I did think it applied, we're not talking about a change which
>>> would fall into breaking that. The change was introduced a year and half ago
>>> during kilo and landed a year ago during 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-15 Thread Mark Voelker

> On Jun 15, 2016, at 8:01 AM, Sean Dague <s...@dague.net> wrote:
> 
> On 06/15/2016 12:14 AM, Mark Voelker wrote:
> 
>> 
>> It is perhaps important to note here that DefCore seems to have two 
>> meanings to a lot of people I talk to today: it’s a mark of interoperability 
>> (the OpenStack Powered badge that says certain capabilities of this cloud 
>> behave like other clouds bearing the mark) and it gives a cloud the ability 
>> to call itself OpenStack (e.g. you can get a trademark/logo license 
>> agreement from the Foundation).  
>> 
>> The OpenStack Powered program currently covers Icehouse through Mitaka.  
>> Right now, that includes releases that were still on the Nova 2.0 API.  API 
>> extensions were a supported thing [1] back in 2.0 and it was even explicitly 
>> documented that they allowed for additional attributes in the responses and 
>> “vendor specific niche functionality [1]”.  The change to the Tempest tests 
>> [2] applied to the 2.0 API as well as 2.1 with the intent of preventing 
>> further changes from getting into the 2.0 API at the gate, which totally 
>> makes sense as a gate test.  If those same tests are used for DefCore 
>> purposes, it does change what vendors need to do to be compliant with the 
>> Guidelines rather immediately--even on older releases of OpenStack using 
>> 2.0, which could be problematic (as noted elsewhere already [3]).
> 
> Right, that's fair. And part of why I think the pass* makes sense.
> Liberty is the introduction of microversions on by default for clouds
> from Nova upstream configs.
> 
>> So, through the interoperability lens: I think many folks acknowledge that 
>> supporting extensions lead to a lot of variance between clouds, and that was 
>> Not So Awesome for interoperability.  IIRC part of the rationale for 
>> switching to microversions with a single monotonic counter and deprecating 
>> extensions [4] was to set a course for eliminating a lot of that behavioral 
>> variance.
>> 
>> From the “ability to call yourself OpenStack” lens: it feels sort of wrong 
>> to tell a cloud that it can’t claim to be OpenStack because it’s running a 
>> version that falls within the bounds of the Powered program with the 2.0 API 
>> (when extensions weren't deprecated) and using the extension mechanism that 
>> 2.0 supported for years.
> 
> To be clear, extensions weren't a part of the 2.0 API, they were a part
> of the infrastructure. It's a subtle but different point. Nova still
> supports the 2.0 API, but on different infrastructure, which
> doesn't/won't support extensions.
> 
> Are people registering new Kilo (or earlier) clouds in the system today?
> By the time folks get to Newton, none of that is going to work anyway in
> code.

Yes.  As recently as January we had a Juno-based public cloud run into this 
issue [1].  And, just to highlight again: there's a pretty long tail here since 
the OpenStack Powered program covers a lot of releases.  

[1] 
http://lists.openstack.org/pipermail/defcore-committee/2016-January/000986.html

> 
> In an ideal world product teams would be close enough to upstream code
> and changes to see all this coming and be on top of it. In the real
> world, a lot of these teams are like a year (or more) behind, which

+1.  If folks aren’t aware, it may be worth pointing out that as of the User 
Survey from two months ago, Kilo was the most popular release and Juno was the 
third most popular [2].  I don’t presume to know why so many deployments are on 
older releases, but I suspect the rationale is different for different folks.  
Maybe for some it’s because people are choosing to use products and a lot of 
those are based on older releases.  Maybe for others it’s because people are 
deliberately choosing older stable branches.  Maybe for some it’s because 
they’ve decided that upgrading every six months or following master more 
closely just isn’t a good strategy for them.  Maybe for others it’s something 
else entirely.  But again, you’re right: in the real world the tail is long 
today.

[2] https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
(figure 4.3/4.4)

> actually makes Defcore with a pass* an ideal alternative communication
> channel to express that products are coming up to a cliff, and should
> start working on plans now.

Agreed.

> 
>> I think that’s part of what makes this issue tricky for a lot of folks.
>> 
>> [1] http://docs.openstack.org/developer/nova/v2/extensions.html
> 
> It's an unfortunate accident of bugs in our publishing system that that
> URL still exists. That was deleted in Oct 2015. I'll look at getting it
> properly cleaned up.
> 

Awesome, thanks Sean

Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-06-01 Thread Mark Voelker

> On Jun 1, 2016, at 12:27 PM, Armando M.  wrote:
> 
> 
> 
> On 1 June 2016 at 02:28, Thierry Carrez  wrote:
> Armando M. wrote:
> Having looked at the recent commit volume that has been going into the
> *-aas repos, I am considering changing the release model for
> neutron-vpnaas, neutron-fwaas, neutron-lbaas
> from release:cycle-with-milestones [1] to
> release:cycle-with-intermediary [2]. This change will allow us to avoid
> publishing a release at fixed times when there's nothing worth releasing.
> 
> I commented on the review, but I think it's easier to discuss this here...
> 
> Beyond changing the release model, what you're proposing here is to remove 
> functionality from an existing deliverable ("neutron" which was a combination 
> of openstack/neutron and openstack/neutron-*aas, released together) and 
> making the *aas things separate deliverables.
> 
> All I wanted to do is change the release model of the *-aas projects, without 
> side effects. I appreciate that the governance structure doesn't seem to 
> allow this easily, and I am looking for guidance.
>   
> 
> From a Defcore perspective, the trademark programs include the "neutron" 
> deliverable. So the net effect for DefCore is that you remove functionality 
> -- and removing functionality from a Defcore-used project needs extra care 
> and heads-up time.
> 
> To the best of my knowledge none of the *-aas projects are part of defcore, 
> and since [1] shows no vpn, fw, or lb capabilities, current or planned, I 
> thought I was on the safe side.
> 

Thanks for checking.  You are correct: LBaaS, VPNaaS, and FWaaS capabilities 
are not present in existing Board-approved DefCore Guidelines, nor have they 
been proposed for the next one. [2]

[2] http://git.openstack.org/cgit/openstack/defcore/tree/next.json

At Your Service,

Mark T. Voelker
DefCore Committee Co-Chair

> 
> It's probably fine to remove *-aas from the neutron deliverable if there is 
> no Defcore capability or designated section there (current or planned). 
> Otherwise we need to have a longer conversation that is likely to extend 
> beyond the release model deadline tomorrow.
> 
> I could not see one in [1]
>  
> [1] https://github.com/openstack/defcore/blob/master/2016.01.json 
> 
> 
> 
> -- 
> Thierry Carrez (ttx)
> 
> 


[openstack-dev] [heat][swift] New TC Resolutions on DefCore Tests

2016-06-01 Thread Mark Voelker
Hi Everyone,

At today’s DefCore Committee meeting, we discussed a couple of newly-approved 
TC resolutions and wanted to take a quick moment to make folks aware of them in 
case they weren’t already.  These new resolutions may impact what capabilities 
and tests projects ask to have included in future DefCore Guidelines:

2016-05-04 Recommendation on API Proxy Tests for DefCore
https://governance.openstack.org/resolutions/20160504-defcore-proxy-tests.html
https://review.openstack.org/312719

2016-05-04 Recommendation on Location of Tests for DefCore
https://governance.openstack.org/resolutions/20160504-defcore-test-location.html
https://review.openstack.org/312718

The latter resolution is probably one that will be of the most interest to 
projects who are looking to add new Capabilities to DefCore Guidelines.  
RefStack has been able to handle tests that live in Tempest or that live in 
project trees but use the Tempest plugin interface for some time now, and 
the DefCore Committee has generally advised project teams that either was 
acceptable.  In the new resolution, the TC "encourages the DefCore committee to 
consider it an indication of future technical direction that we do not want 
tests outside of the Tempest repository used for trademark enforcement, and 
that any new or existing tests that cover capabilities they want to consider 
for trademark enforcement should be placed in Tempest.  Project teams should 
work with the DefCore committee to move any existing tests that need to move as 
a result of this policy.”  

At present, I don’t think any tests in existing Board-approved DefCore 
Guidelines will need to move as a result of this resolution—however I am aware 
of a few teams that were interested in having in-project-tree tests used in the 
future (hence I’ve added [heat] and [swift] to the subject line).  Hopefully 
those folks were already aware of the new resolution and are making plans 
accordingly, but we thought it would be best to send out a quick communiqué 
just to be sure since this is a change in guidance since the last Summit.  As a 
reminder, our next round of identifying new candidate Capabilities won’t begin 
for a couple of months yet [2], so there’s some time for project teams to 
discuss what (if any) actions they wish to take.

[1] 
http://eavesdrop.openstack.org/meetings/defcore/2016/defcore.2016-06-01-16.00.html
[2] 
http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015B.rst#n10

At Your Service,

Mark T. Voelker





Re: [openstack-dev] OpenStack Trove Ocata Midcycle, NYC, August 25 and 26.

2016-06-22 Thread Mark Voelker
[echoing what I just said on the openstack-operators ML since they just 
announced an Ops Midcycle in NYC also on the 25th and 26th]

Hi Folks,

FYI for those that may not be aware, that’s also the week of OpenStack East.  
OpenStack East runs August 23-24 also in New York City at the Playstation 
Theater.  If you’re coming to town for the Trove Ocata Midcycle, you may want 
to make a week of it.  Earlybird pricing for OpenStack East is still available 
but prices increase tomorrow:

http://www.openstackeast.com/

At Your Service,

Mark T. Voelker
(wearer of many hats, one of which is OpenStack East steering committee member)



> On Jun 22, 2016, at 9:53 AM, Amrith Kumar  wrote:
> 
> The Trove midcycle will be held in midtown NYC, thanks to IBM for hosting the 
> event, on August 25th and 26th.
> 
> If you are interested in attending, please join the Trove meeting today, 2pm 
> Eastern Time (#openstack-meeting-alt) and register at 
> http://www.eventbrite.com/e/openstack-trove-ocata-midcycle-tickets-26197358003 .
> 
> An etherpad for proposing sessions is at 
> https://etherpad.openstack.org/p/ocata-trove-midcycle
> 
> This will be a two-day event (not three days as we have done in the past) so 
> we will start early on 25th and go as late as we can on 26th recognizing that 
> people who have to travel out of NYC may want to get late flights (9pm, 10pm) 
> on Friday. 
> 
> Thanks,
> 
> -amrith
> 


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Mark Voelker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512



On Jun 16, 2016, at 2:25 PM, Matthew Treinish  wrote:

On Thu, Jun 16, 2016 at 02:15:47PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-16 13:56:31 -0400:
On Thu, Jun 16, 2016 at 12:59:41PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
Top posting one note and direct comments inline, I’m proposing
this as a member of the DefCore working group, but this
proposal itself has not been accepted as the forward course of
action by the working group. These are my own views as the
administrator of the program and not that of the working group
itself, which may independently reject the idea outside of the
response from the upstream devs.

I posted a link to this thread to the DefCore mailing list to make
that working group aware of the outstanding issues.

On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:

On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
Last year, in response to Nova micro-versioning and extension updates[1],
the QA team added strict API schema checking to Tempest to ensure that
no additional properties were added to Nova API responses[2][3]. In the
last year, at least three vendors participating in the OpenStack Powered
Trademark program have been impacted by this change, two of which
reported this to the DefCore Working Group mailing list earlier this year[4].

The DefCore Working Group determines guidelines for the OpenStack Powered
program, which includes capabilities with associated functional tests
from Tempest that must be passed, and designated sections with associated
upstream code [5][6]. In determining these guidelines, the working group
attempts to balance the future direction of development with lagging
indicators of deployments and user adoption.

After a tremendous amount of consideration, I believe that the DefCore
Working Group needs to implement a temporary waiver for the strict API
checking requirements that were introduced last year, to give downstream
deployers more time to catch up with the strict micro-versioning
requirements determined by the Nova/Compute team and enforced by the
Tempest/QA team.
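
(For concreteness, the "strict API schema checking" at issue works roughly like 
the following minimal sketch.  It uses the generic jsonschema library rather 
than Tempest's actual validation code, and the schema itself is a made-up 
illustration: with additionalProperties disabled, any extra vendor-added 
attribute in a response fails validation.)

    # Minimal sketch of strict response checking with the jsonschema
    # library.  Tempest's real schemas are more elaborate; this one is a
    # made-up illustration.
    import jsonschema

    server_schema = {
        "type": "object",
        "properties": {
            "id": {"type": "string"},
            "name": {"type": "string"},
        },
        "required": ["id", "name"],
        # This is what makes the checking "strict": any attribute not
        # declared above causes a validation failure.
        "additionalProperties": False,
    }

    # A response carrying a vendor-added attribute fails the check.
    response = {"id": "abc123", "name": "vm1", "vendor:extra": "bar"}
    try:
        jsonschema.validate(response, server_schema)
    except jsonschema.exceptions.ValidationError as err:
        print("strict check failed:", err.message)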

I'm very much opposed to this being done. If we're actually concerned with
interoperability and with verifying that things behave in the same manner
between multiple clouds, then doing this would be a big step backwards. The
fundamental disconnect
here is that the vendors who have implemented out of band extensions or were
taking advantage of previously available places to inject extra attributes
believe that doing so means they're interoperable, which is quite far from
reality. **The API is not a place for vendor differentiation.**

This is a temporary measure to address the fact that a large number
of existing tests changed their behavior, rather than having new
tests added to enforce this new requirement. The result is deployments
that previously passed these tests may no longer pass, and in fact
we have several cases where that's true with deployers who are
trying to maintain their own standard of backwards-compatibility
for their end users.

That's not what happened though. The API hasn't changed and the tests haven't
really changed either. We made our enforcement on Nova's APIs a bit stricter to
ensure nothing unexpected appeared. For the most part these tests work on any version
of OpenStack. (we only test it in the gate on supported stable releases, but I
don't expect things to have drastically shifted on older releases) It also
doesn't matter which version of the API you run, v2.0 or v2.1. Literally, the
only case it ever fails is when you run something extra, not from the community,
either as an extension (which themselves are going away [1]) or another service
that wraps nova or imitates nova. I'm personally not comfortable saying those
extras are ever part of the OpenStack APIs.

We have basically three options.

1. Tell deployers who are trying to do the right for their immediate
 users that they can't use the trademark.

2. Flag the related tests or remove them from the DefCore enforcement
 suite entirely.

3. Be flexible about giving consumers of Tempest time to meet the
 new requirement by providing a way to disable the checks.

Option 1 goes against our own backwards compatibility policies.

I don't think backwards compatibility policies really apply to what we define
as the set of tests that as a community we are saying a vendor has to pass to
say they're OpenStack. From my perspective as a 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-20 Thread Mark Voelker

> On Jun 20, 2016, at 8:46 AM, Doug Hellmann  wrote:
> 
> Excerpts from Mark Voelker's message of 2016-06-16 20:33:36 +0000:
> 
>> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>> 
>> 
>>> I don't think DefCore actually needs to change old versions of Tempest,
>>> but maybe Chris or Mark can verify that?
>> 
>> So if I’m grokking this correctly, there’s kind of two scenarios
>> being painted here.  One is the “LCD” approach where we use the
>> $osversion-eol version of Tempest, where $osversion matches the
>> oldest version covered in a Guideline.  The other is to use the
>> start-of-$osversion version of Tempest where $osversion is the
>> OpenStack version after the most recent one in the Guideline.  The
>> former may result in some fairly long-lived flags, and the latter
>> is actually not terribly different than what we do today I think.
>> Let me try to talk through both...
>> 
>> In some cases, tests get flagged in the Guidelines because of
>> bugs in the test or because the test needs refactoring.  The
>> underlying Capabilities that those tests are testing actually work
>> fine.  Once we identify such an issue, the test can be fixed…in
>> master.  Under the first scenario, this potentially creates some
>> very long-lived flags:
>> 
>> 2016.01 is the most current Guideline right now and covers Juno, Kilo,
>> Liberty (and Mitaka after it was released).  It’s one of the two
>> Guidelines that you can use if you want an OpenStack Powered license
>> from the Foundation.  Say $vendor wants to run it against their shiny new
>> Mitaka cloud.  They run the Juno EOL version of Tempest (tag=8),
>> they find a test issue, and we flag it.  A few weeks later, a fix
>> lands in Tempest.  Several months later the next Guideline rolls
>> around: the oldest covered release is Kilo and we start telling
>> people to use the Kilo-EOL version of Tempest.  That doesn’t have
>> the fix, so the flag stays.  Another six months goes by and we get
>> a Guideline and we’re up to the Liberty-EOL version of Tempest.  No
>> fix, flag stays.  Six more months, and now we’re at Mitaka-EOL, and
>> that's the first version that includes the fix.
>> 
>> Generally speaking long lived flags aren’t so great because it
>> means the tests are not required…which means there’s less or no
>> assurance that the capabilities they test for actually work in the
>> clouds that adhere to those Guidelines.  So, the worst-case scenario
>> here looks kind of ugly.
>> 
>> As Matt correctly pointed out though, the capabilities DefCore
>> selects for are generally pretty stable API’s that are long-lived
>> across many releases, so we haven’t run into a lot of issues running
>> pretty new versions of Tempest against older clouds to date.  In
>> fact I’m struggling to think of a time we’ve flagged something
>> because someone complained the test wasn’t runnable against an older
>> release covered by the Guideline in question.  I can think of plenty
>> of times where we’ve flagged something due to a test issue though…keep
>> in mind we’re still in pretty formative times with DefCore here
>> where these tests are starting to be used in a new way for the first
>> time.  Anyway, as Matt points out we could potentially use a much
>> newer Tempest tag: tag=11 (which is the start of Newton development
>> and is a roughly 2 month old version of Tempest).  Next Guideline
>> rolls around, we use the tag for start-of-ocata, and we get the fix
>> and can drop the flag.
>> 
>> Today, RefStack client by default checks out a specific SHA of
>> Tempest [1] (it actually did use a tag at some point in the past,
>> and still can).  When we see a fix for a flagged test go in, we or
>> the Refstack folks can do a quick test to make sure everything’s
>> in order and then update that SHA to match the version with the
>> fix.  That way we’re relatively sure we have a version that works
>> today, and will work when we drop the flag in the next Guideline
>> too.  When we finalize that next Guideline, we also update the
>> test-repositories section of the new Guideline that Matt pointed
>> to earlier to reflect the best-known version on the day the Guideline
>> was sent to the Board for approval.  One added benefit of this
>> approach is that people running the tests today may get a version
>> of Tempest that includes a fix for a flagged test.  A flagged test
>> isn’t required, but it does get run—and now will show a passing
>> result, so we have data that says “this provider actually does
>> support this capability (even though it’s flagged), and the test
>> does indeed seem to be working."
>> 
>> 
>> So, that’s actually not hugely different from the second scenario
>> I think?  Or did I miss something there?
> 
> What I was proposing is that we keep the certification rules in
> sync with OpenStack versions. If a vendor stays on an old version
> of OpenStack, they can keep using the old (matching) version of
> Tempest to certify.  It's not clear from 

Re: [openstack-dev] [defcore] Determine latest set of tests for a given release

2016-01-14 Thread Mark Voelker
[+defcore-committee]

This depends a little on what your objective is.  If you’re looking for the 
tests that a product must pass today if it wants an OpenStack Powered 
logo/trademark agreement, you’ll want to look at either of the two most 
recently-approved DefCore Guidelines (currently 2015.05 and 2015.07, though the 
Board will be voting on 2016.01 by the end of the month).  If you just want to 
find out what Guidelines might have covered a product built on an arbitrary 
OpenStack release in the past, you’ll need to go straight to the JSON.  The two 
most recently approved Guidelines are generally listed on the Foundation’s 
interop page if that’s helpful:

http://www.openstack.org/interop/

If you’re looking for more programmatic methods, the .json files are the 
authoritative data source.  In particular you’ll want to check these keys:

  "status": "approved”,   # can be draft, review, approved or superseded [see 
2015B C6.3]

and:

  "releases": ["icehouse", "juno", "kilo”], # array of releases, lower case 
(generally three releases)

The schema for the JSON files is documented here:

http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/schema

The schema version used is also listed in the JSON files themselves with this 
key:

"schema": "1.4”,

The tests for a given Guideline are also in the 20xx.json files, and as a 
convenience there are also required/flagged lists in plaintext in each 
Guideline’s working directory, such as:

http://git.openstack.org/cgit/openstack/defcore/tree/2015.07
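
(As a minimal sketch of the programmatic route, using the "status" and 
"releases" keys described above.  The file names here are hypothetical; point 
it at whichever 20xx.json Guideline files you’ve fetched from the defcore 
repo.)

    # Minimal sketch: find approved Guidelines covering a given release.
    import json

    def guidelines_for_release(paths, release):
        matches = []
        for path in paths:
            with open(path) as f:
                data = json.load(f)
            if (data.get("status") == "approved"
                    and release in data.get("releases", [])):
                matches.append(path)
        return matches

    # Hypothetical local copies of Guideline files:
    print(guidelines_for_release(["2015.05.json", "2015.07.json"], "kilo"))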

If you’re writing code to grok all that sort of info though, I suspect you 
could re-use (or at least take inspiration from) a lot of the code that’s 
already been written into RefStack, since it can already parse most or all of 
the above (see the Community Results section of refstack.openstack.org or its 
corresponding git repo).  Hope that helps!

At Your Service,

Mark T. Voelker



> On Jan 14, 2016, at 12:02 PM, Hugh Saunders  wrote:
> 
> Hi All, 
> What's the most reliable way to determine the latest set of required defcore 
> tests for a given release? 
> 
> For example, I would currently use 2015.07/2015.07.required.txt but I don't 
> want to have to update that url each time there is a defcore release.
> 
> I could parse 20*.json in the root of the defcore repo, but that seems 
> brittle.
> 
> Thanks. 
> 
> --
> Hugh Saunders
> 
> 
> 
> 
> -- 
> --
> Hugh Saunders 


Re: [openstack-dev] [All] IRC Mishaps

2017-02-09 Thread Mark Voelker
I work with lots of different clouds and any time I switch focus to a terminal 
I have to figure out which cloud its environment is set up to use.  My 
terminal emulator has the same color scheme as my IRC client, so I’ve probably 
typed “set | grep OS_” into IRC accidentally about a million times over the 
years. 

Another fun one is that several times over the years I’ve gotten pings on IRC 
because someone was trying to copy/paste a large section of text from an IRC 
meeting into an email (usually to continue a conversation that was started 
during a meeting after we ran out of time).  Several times someone has 
accidentally pasted the text right back into their IRC client instead of their 
email client, so everyone in the meeting got notified that they’d been 
mentioned again.

At Your Service,

Mark T. Voelker



> On Feb 9, 2017, at 10:25 AM, Jeremy Stanley  wrote:
> 
> On 2017-02-09 10:10:28 +1100 (+1100), Michael Still wrote:
>> At a previous employer we had a policy that all passwords started
>> with "/" because of the sheer number of times someone typed the
>> root password into a public IRC channel.
> [...]
> 
> I can't even keep count of the number of times I and others on the
> Infra team get requests to redact passwords or similar sensitive
> information from our published IRC logs. Across a community the size
> of ours it's a pretty frequent occurrence (and of course we have to
> remind them that there's not a lot of point since we can't scrub
> anything from everyone's personal channel buffers/logs anyway).
> -- 
> Jeremy Stanley
> 


Re: [openstack-dev] [Horizon][OSC][Nova][Neutron] Launch instance with Floating IP

2016-09-12 Thread Mark Voelker
> 
> On Sep 8, 2016, at 3:11 PM, Sean Dague  wrote:
> 
> On 09/08/2016 09:04 AM, Andrew Laski wrote:
>> 
>> 
>> On Thu, Sep 8, 2016, at 07:18 AM, Sean Dague wrote:
>>> On 09/07/2016 07:34 PM, Andrew Laski wrote:
 
 
 On Wed, Sep 7, 2016, at 06:54 PM, Martin Millnert wrote:
> On Thu, 2016-09-08 at 09:34 +1200, Adrian Turjak wrote:
> 3) core functionality should IMO require as few API calls as possible,
> to as few components as possible, while keeping REST data models etc.
> intact, [1][2]
 
 I agree that it should require as few API calls as possible but maybe we
 disagree about what core functionality is. Or to put it another way,
 what is core functionality depends on your perspective.
 
 I subscribe to the plumbing and porcelain approach
 (https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain)
 and believe that Nova should be part of the plumbing. So while I fully
 agree with a small number of API calls to do simple tasks I don't
 believe that orchestrating network setups is core functionality in Nova
 but is core to OpenStack.
>>> 
>>> Personally, I think that the role of Nova is to give you a functional
>>> compute unit. If you can't talk to that over the network, that doesn't
>>> seem very functional to me. For complicated setups it's fine that you
>>> need to do complicated things, but "I would like a working server with
>>> network" doesn't feel like it's a complicated ask from the user.
>> 
>> I'd really like to agree here, however it seems that it is actually a
>> somewhat complicated ask from the user due to the diverse network setups
>> used in practice.
>> 
>> But I was responding to the idea of booting an instance with a
>> floating-ip which to me goes beyond setting up a simple default network
>> for an instance. And I think therein lies the problem, there seems to be
>> some disagreement as to what a simple default network setup should be.
> 
> I agree that floating-ip auto allocation may be over that line, as
> floating ips are specifically constructs that are designed to not be
> tied to servers for the lifecycle of the server. Their value comes in
> having the floating ip last longer than the server.
> 
> But, there is also something here about wanting to make sure you have
> publicly accessable servers (which don't require floating ips specifically).
> 

/me chimes in late because still unearthing from a lot of work travel

Just to piggyback on a lot of what’s been said here, I completely agree that 
getting external connectivity to an instance is tricky today due to the variety 
of networking models in use.  That that’s both a strength (in that we have a 
rich enough platform to accommodate lots of different models to suit lots of 
different use cases) and a weakness (in that we end up having conversations 
like this and don’t have an interoperable way for users of multiple clouds to 
get external connectivity).  As a matter of fact, this very issue was recently 
mentioned to the Board as one of the top interoperability issues that the 
DefCore Committee sees in the wild today:

https://github.com/openstack/defcore/blob/master/doc/source/periodic_reports/fall_2016.rst#issue-3-external-network-connectivity

For what it’s worth, the DefCore Committee debated adding floating IP’s to the 
interoperability Guidelines that products have to follow in order to use the 
OpenStack name/trademark last fall, and ended up rejecting the idea—mostly for 
the reasons listed in this thread (e.g. there are a lot of other models and a 
lot of clouds don’t actually use floating IP’s so they aren’t 
interoperable, etc).

Also worth pointing out: the idea of having an administrative setting to 
auto-boot instances with a publicly accessible IP (whether that’s on a provider 
network or whether instances are auto-allocated a floating IP or whatever) is 
possibly less than ideal from an interoperability point of view, because end 
users tend to not have good ways of discovering administratively-set config 
settings.  At a minimum we’d want users to be able to programmatically discover 
which position that knob was set to.

> There seems to be a space about sane default networking for users.
> Get-me-a-network worked through part of it. There might be a good follow
> on for some more standard models of "I really want internet accessible
> system". Neutron is not limited to granting this via floating ips like
> Nova Net was.

+1.  Get-me-a-network made it easier to get an instance booted up and attached 
to “a” network while alleviating part of the complexity (e.g. various 
underlying network models) of doing so.  What DefCore is seeing from an 
interoperability 

Re: [openstack-dev] [keystone][defcore][refstack] Removal of the v2.0 API

2017-03-02 Thread Mark Voelker


> On Mar 1, 2017, at 6:01 PM, Rodrigo Duarte  wrote:
> 
> On Wed, Mar 1, 2017 at 7:10 PM, Lance Bragstad  wrote:
> During the PTG, Morgan mentioned that there was the possibility of keystone 
> removing the v2.0 API [0]. This thread is a follow up from that discussion to 
> make sure we loop in the right people and do everything by the books.
> 
> The result of the session [1] listed the following work items: 
> - Figure out how we can test the removal and make the job voting (does the 
> v3-only job count for this)?
> 
> We have two v3-only jobs, one only runs keystone's tempest plugin tests - 
> which are specific to federation (it configures a federated environment using 
> mod_shib) - and another one (non-voting) that runs tempest, I believe the 
> latter can be a good way to initially validate the v2.0 removal.
>  
> - Reach out to defcore and refstack communities about removing v2.0 (which is 
> partially what this thread is doing)

Yup, we actually talked a bit about this in the past couple of weeks.  I’ve 
CC'd Luz who is playing point on capabilities scoring for the 2017.08 Guideline 
for Identity to make extra sure she’s aware. =)

At Your Service,

Mark T. Voelker
InteropWG Co-chair

> 
> Outside of this thread, what else do we have to do from a defcore perspective 
> to make this happen?
> 
> Thanks for the time!
> 
> [0] https://review.openstack.org/#/c/437667/
> [1] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
> 
> 
> 
> 
> 
> -- 
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com



[openstack-dev] [tc][qa][all]Potential New Interoperability Programs: The Current Thinking

2017-06-09 Thread Mark Voelker
Hi Everyone,

Happy Friday!  There have been a number of discussions (at the PTG, at 
OpenStack Summit, in Interop WG and Board of Directors meetings, etc) over the 
past several months about the possibility of creating new interoperability 
programs in addition to the existing OpenStack Powered program administered by 
the Interop Working Group (formerly the DefCore Committee).  In particular, 
there have been a lot of discussions lately [1] about where to put tests 
associated with trademark programs with respect to some existing TC guidance 
[2] and community goals for Queens [3].  Although these potential new programs 
have been discussed in a number of places, it’s a little hard to keep tabs on 
where we’re at with them unless you’re actively following the Interop WG.

Given the recent discussions on openstack-dev, I thought it might be useful to 
try to brain dump our current thinking on what these new programs might look 
like into a document somewhere that people could point at in discussions, 
rather than discussing abstracts and working off memories from prior meetings.  
To that end, I took a first stab at it this week, which you can find here:

https://review.openstack.org/#/c/472785/

Needless to say, this is just a draft meant to get some of the ideas out of 
neurons and onto electrons, so please don’t take it as firm consensus; rather, 
consider it a snapshot of what we’re currently thinking and an invitation to 
collaborate.  I expect that other members of the Interop Working
Group will be leaving comments in Gerrit as we hash through this, and we’d love 
to have input from other folks in the community as well.  These programs 
potentially touch a lot of you (in fact, almost all of you) in some way or 
another, so we’re happy to hear your input as we work on evolving the interop 
programs.  Quite a lot has happened over the past couple of years, so we hope 
this will help folks understand where we came from and think about whether we 
want to make changes going forward.  

By the way, for those of you who might find an HTML-rendered document easier to 
read, click on the “gate-interop-docs-ubuntu-xenial” link in the comments left 
by Jenkins and then on “Extension Programs - Current Direction”.  Thanks, and 
have a great weekend!

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-May/thread.html#117657
[2] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html
[3] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html

At Your Service,

Mark T. Voelker


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-02-27 Thread Mark Voelker
Hi Greg,

Only the tests listed in the Guidelines are required to pass in order to get an 
OpenStack Powered logo and trademark usage license from the OpenStack 
Foundation (you must also use the designated sections of upstream code 
specified in the Guideline documents).  However, vendors are strongly 
encouraged to run all of the tests: doing so gives the Interop Working Group 
data about how many products support capabilities that aren’t on the required 
list today but might be considered in the future.  If you have any questions 
about the process, please contact inte...@openstack.org and the Foundation 
staff will be happy to help!
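
If you want to see exactly which tests a given Guideline requires, the RefStack 
server exposes them via its API.  A sketch in Python (the URL pattern and query 
parameters reflect the author’s understanding of the RefStack API and should be 
double-checked against the current docs):

    import requests

    guideline = "2017.09"  # hypothetical target guideline
    url = (f"https://refstack.openstack.org/api/v1/guidelines/{guideline}/tests"
           "?target=platform&type=required&alias=true&flag=false")

    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    # The endpoint returns one test ID per line.
    tests = resp.text.splitlines()
    print(len(tests), "required tests for the", guideline, "Platform program")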

At Your Service,

Mark T. Voelker



> On Feb 26, 2018, at 11:22 PM, Waines, Greg  wrote:
> 
>  
> * I have a commercial OpenStack product that I would like to claim 
> compliance with RefStack.
> * Is it sufficient to claim compliance with only the “OpenStack 
> Powered Platform” TESTS?
>   - i.e. https://refstack.openstack.org/#/guidelines
>   - i.e. the ~350-ish compute + object-storage tests
> * OR
> * Should I be using the COMPLETE API Test Set?
>   - i.e. the > 1,000 tests from various domains that get run if you do not 
> specify a test-list
>  
> Greg.
>  
>  
