Re: [openstack-dev] [zaqar-ui][zaqar] Nominating Thai Tran for zaqar-ui core

2016-03-19 Thread Ryan Brown

On 03/18/2016 06:24 AM, Fei Long Wang wrote:

Hi team,

I would like to propose adding Thai Tran (tqtran) to the Zaqar UI core
team. Thai has done amazing work since the beginning of the Zaqar UI project.
He is currently the most active contributor on Zaqar UI projects for
the last 90 days[1]. If no one objects, I'll proceed and add him in a week
from now.

[1] http://stackalytics.com/report/contribution/zaqar-ui/90


+1, and thank you for your work on Zaqar's UI!

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Zaqar] Nominating Eva Balycheva for Zaqar core

2016-03-01 Thread Ryan Brown

On 02/29/2016 07:47 PM, Fei Long Wang wrote:

Hi all,

I would like to propose adding Eva Balycheva (Eva-i) for the Zaqar core
team. Eva has been an awesome contributor since joining the Zaqar team.
She is currently the most active non-core reviewer on Zaqar projects for
the last 90 days[1]. During this time, she's been contributing to many
different areas:

1. Websocket binary support
2. Zaqar Configuration Reference docs
3. Zaqar client
4. Zaqar benchmarking

Eva has got a good eye for review and contributed a lot of wonderful
patches[2]. I think she would make an excellent addition to the team. If
no one objects, I'll proceed and add her in a week from now.


+1, cheers!

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



[openstack-dev] Foundation Sponsorship for tox/pytest Sprint

2016-02-23 Thread Ryan Brown
Since every OpenStack project uses tox, would it be possible to have the 
foundation donate a little bit to the tox/pytest team to enable a sprint 
on both projects?


There's an IndieGoGo (which seems to be yet another crowdfunding site) 
https://www.indiegogo.com/projects/python-testing-sprint-mid-2016#/


While it's not directly an OpenStack project, I think it'd be worth 
supporting since we depend on them so heavily.


Individuals can also donate, and I encourage that too. I donated 100 USD 
because tox saves me loads of time when working on OpenStack, and I use 
py.test for projects at work and at play. If OpenStack pays your salary, 
consider giving the tox/pytest team a slice.


Cheers,
Ryan

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [api] create openstack/service-registry

2016-02-12 Thread Ryan Brown

On 02/12/2016 07:48 AM, Everett Toews wrote:

Hi All,

I need to air a concern over the create openstack/service-registry
[1] patch which aims to create an OpenStack service type registry
project to act as a dedicated registry location for service types in
OpenStack under the auspices of the API Working Group.

My concern is that it gets the API Working Group partially into the
big tent governance game. Personally I have zero interest in big tent
governance. I want better APIs, not to become embroiled in
governance. That said, I do fully recognize that by their nature APIs
in general (not just OpenStack) play a large role in governance.

The purpose of this email is not to dissuade the API WG from taking
on this responsibility. In fact, now that we've got a lot of
experience authoring guidelines and shepherding them through the
guideline process, it's time the WG evolved. My purpose is to simply
make sure we go into this with eyes wide open and understand the
consequences of doing so.

Thanks, Everett

[1] https://review.openstack.org/#/c/278612/


You're not wrong - it does involve us a little more in governance, but I 
think the value there (better namespacing, etc.) is something we can agree 
the API WG both does and should care about.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] the trouble with names

2016-02-05 Thread Ryan Brown

On 02/05/2016 01:00 PM, Sean Dague wrote:

On 02/05/2016 12:16 PM, Ryan Brown wrote:

On 02/05/2016 09:08 AM, michael mccune wrote:

On 02/04/2016 12:57 PM, Hayes, Graham wrote:

On 04/02/2016 15:40, Ryan Brown wrote:

[snipped lots]

This isn't a perfect solution, but maybe instead of projects.yml there
could be a `registry.yml` project that would (of course) have all the
project.yml "in-tent" projects, but also merge in external project
requests for namespaces?


Where ever it is stored, could this be a solid place for the api-wg to
codify the string that should be shown in the catalog / headers /
other places by services?



this seems like a reasonable approach, the big downside might be
grooming the "dibs" list. we could have projects that expect to go
somewhere, register their name, then never achieve "lift-off". in these
cases we would need to release those names back into the free pool.


There could be some kind of reaping process, say every January send an
email to every project with outstanding "dibs" to check that they still
exist and want that name.

I think a 1 year TTL would be a good starting spot, does that help?

[snipped more]


Personally, I don't feel like reservations should exist for non-OpenStack
projects. That's just squatting and locks away resources from
actual OpenStack projects.


Yeah, but I feel like there would be a lot of benefit to some kind of 
system where not-openstack-yet projects could say "we want to use X as a 
generic name" so it's not a surprise when two projects show up and want 
in the tent, but then one of them has to go change its service.


For example, I think "containers" will be one of those words that 
everyone wants to use (buzzbuzzbuzzbuzz). Having at least a way for 
projects to say "hm, someone else wants this" would be nice.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] the trouble with names

2016-02-05 Thread Ryan Brown

On 02/05/2016 09:08 AM, michael mccune wrote:

On 02/04/2016 12:57 PM, Hayes, Graham wrote:

On 04/02/2016 15:40, Ryan Brown wrote:

[snipped lots]

This isn't a perfect solution, but maybe instead of projects.yml there
could be a `registry.yml` project that would (of course) have all the
project.yml "in-tent" projects, but also merge in external project
requests for namespaces?


Where ever it is stored, could this be a solid place for the api-wg to
codify the string that should be shown in the catalog / headers /
other places by services?



this seems like a reasonable approach, the big downside might be
grooming the "dibs" list. we could have projects that expect to go
somewhere, register their name, then never achieve "lift-off". in these
cases we would need to release those names back into the free pool.


There could be some kind of reaping process, say every January send an 
email to every project with outstanding "dibs" to check that they still 
exist and want that name.


I think a 1 year TTL would be a good starting spot, does that help?

[snipped more]

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-05 Thread Ryan Brown

On 02/05/2016 05:57 AM, Thierry Carrez wrote:

Hi everyone,

Even before OpenStack had a name, our "Four Opens" principles were
created to define how we would operate as a community. The first open,
"Open Source", added the following precision: "We do not produce 'open
core' software". What does this mean in 2016 ?

Back in 2010 when OpenStack was started, this was a key difference with
the other open source cloud platform (Eucalyptus) which was following an
Open Core strategy with a crippled community edition and an "enterprise
version". OpenStack was then the property of a single entity
(Rackspace), so giving strong signals that we would never follow such a
strategy was essential to form a real community.

Fast-forward today, the open source project is driven by a non-profit
independent Foundation, which could not even do an "enterprise edition"
if it wanted to. However, member companies build "enterprise products"
on top of the Apache-licensed upstream project. And we have drivers that
expose functionality in proprietary components. So what does it mean to
"not do open core" in 2016 ? What is acceptable and what's not ? It is
time for us to refresh this.

My personal take on that is that we can draw a line in the sand for what
is acceptable as an official project in the upstream OpenStack open
source effort. It should have a fully-functional, production-grade open
source implementation. If you need proprietary software or a commercial
entity to fully use the functionality of a project or getting serious
about it, then it should not be accepted in OpenStack as an official
project. It can still live as a non-official project and even be hosted
under OpenStack infrastructure, but it should not be part of
"OpenStack". That is how I would interpret "no open core" in OpenStack
2016.

Of course, the devil is in the details, especially around what I mean by
"fully-functional" and "production-grade". Is it just an API/stability
thing, or does performance/scalability come into account ? There will
always be some subjectivity there, but I think it's a good place to start.

Comments ?


If a project isn't fully functional* then why would we accept it at all? 
Imagine this scenario:


1) Heat didn't exist
2) A project exactly like Heat, which lets you use templates to create 
resources to a specification, applies to join OpenStack
3) BUT, if you don't buy Proprietary Enterprise Template Parsing 
Platform 9, a product of Shed Cat Enterprise Leopards**, you can't parse 
templates longer than 200 characters.


Would *any* TC count that as a project that could join under our current 
system? I don't think so. The TC (and community) would say something 
along the lines of "WTF are you thinking? Go read the 4 opens and try again"


I don't think adding "no open core" would change a decision the 
future-community and future-TC might make, because they will be elected 
by the aforementioned community. Adding buzz-requirements like "must be 
fully-functional, production grade, webscale open-cloud softwidgets" 
isn't going to help future-us.


Footnotes:

* in my view, an OpenStack product that requires you to pay a vendor is 
as functional as an OpenStack product chock-full of syntax errors


** Shed Cat Enterprise Leopard is strictly fictional, and not based on 
any company that currently exists or ever has existed.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-05 Thread Ryan Brown
 of components (services or drivers).  The
Poppy situation doesn't seem to be a case of open washing anything,
or holding back features in order to sell a more advanced version.
It happens that for Poppy to be useful, you have to buy another
service for it to talk to (and to serve your data), but all of the
Poppy code is actually open and there are several services to choose
from.  There is no "better" version of Poppy available for sale,
if you buy a PoppyCDN subscription.

So, is Poppy "open core"?

Doug

[1] https://review.openstack.org/#/c/273756/
[2] 
http://stackalytics.com/?project_type=all&release=all&module=poppy&metric=commits


I'd say no, Poppy is an open source project/product that makes it easier 
to use the different vendors of commodity services, and that's not a 
reason to boot it.


Sidebar: I recall a few years ago that Rackspace's CDN offering was 
based on Akamai, so read into that whatever you want I guess. They at 
least used to have a relation with Akamai at some point.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] the trouble with names

2016-02-04 Thread Ryan Brown

On 02/04/2016 09:32 AM, michael mccune wrote:

On 02/04/2016 08:33 AM, Thierry Carrez wrote:

Hayes, Graham wrote:

On 04/02/2016 13:24, Doug Hellmann wrote:

Excerpts from Hayes, Graham's message of 2016-02-04 12:54:56 +:

On 04/02/2016 11:40, Sean Dague wrote:

2) Have a registry of "common" names.

Upside, we can safely use common names everywhere and not fear
collision down the road.

Downside, yet another contention point.

A registry would clearly be under TC administration, though all the
heavy lifting might be handed over to the API working group. I still
imagine collision around some areas might be contentious.


++ to a central registry. It could easily be added to the
projects.yaml
file, and is a single source of truth.


Although I realized that the projects.yaml file only includes official
projects right now, which would mean new projects wouldn't have a place
to register terms. Maybe that's a feature?


That is a good point - should we be registering terms for non-tent
projects? Or do projects get terms when they get accepted into the tent?


I don't see why we would register terms for non-official projects. I
don't see under what authority we would do that, or where it would end.
So yes, that's a feature.



i have a question about this, as new, non-official, projects start to
spin up there will be questions about the naming conventions they will
use within the project as to headers and the like. given that the
current guidance trend in the api-wg is towards using "service type" in
these cases, how would these projects proceed?

(i'm not suggesting these projects should be registered, just curious)


This isn't a perfect solution, but maybe instead of projects.yml there 
could be a `registry.yml` project that would (of course) have all the 
project.yml "in-tent" projects, but also merge in external project 
requests for namespaces?


Say there's an LDAP aaS project, it could ask to reserve "directory" or 
whatever and have a reasonable hope that when they're tented they'll be 
able to use it. This would help avoid having multiple projects expecting 
to use the same name, while also not meaning we force anyone to use or 
not use some name.


Effectively, it's a gerrit-backed version of "dibs".


I think solution 2 is the best. To avoid too much contention, that can
easily be delegated to the API WG, and escalated to the TC for
resolution only in case of conflict between projects (or between a
project and the API WG).



i'm +1 for solution 2 as well. as to the api-wg participation in the
name registration side of things , i don't have an objection but i am
very curious to hear Everett's and Chris' opinions.

regards,
mike



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [api] service type vs. project name for use in headers

2016-01-27 Thread Ryan Brown

On 01/27/2016 03:31 PM, Dean Troyer wrote:

On Wed, Jan 27, 2016 at 1:47 PM, michael mccune wrote:

i am not convinced that we would ever need to have a standard on how
these names are chosen for the header values, or if we would even
need to have header names that could be deduced. for me, it would be
much better for the projects use an identifier that makes sense to
them, *and* for each project to have good api documentation.


I think we would be better served in selecting these things thinking
about the API consumers first.  We already have enough for them to wade
through, the API-WG is making great gains in herding those particular
cats, I would hate to see giving back some of that here.

so, instead of using examples where we have header names like
"OpenStack-Some-[SERVICE_TYPE]-Header", maybe we should suggest
"OpenStack-Some-[SERVICE_TYPE or PROJECT_NAME]-Header" as our guideline.


I think the listed reviews have it right, only referencing service
type.  We have attempted to reduce the visible surface area of project
names in a LOT of areas, I do not think this is one that needs to be an
exception to that.


+1, I prefer service type over project name. Among other benefits, it 
leaves room for multiple implementations without being totally baffling 
to consumers.



Projects will do what they are going to do, sometimes in spite of
guidelines.  This does not mean that the guidelines need to bend to
match that practice when it is at odds with larger concerns.

In this case, the use of service type as the primary identifier for
endpoints and API services is well established, and is how the service
catalog has and will always work.

dt




Re: [openstack-dev] [all][tc] Stabilization cycles: Elaborating on the idea to move it forward

2016-01-21 Thread Ryan Brown

On 01/21/2016 06:23 AM, Chris Dent wrote:

On Wed, 20 Jan 2016, Flavio Percoco wrote:


- It was mentioned that some folks receive bonuses for landed features


In this thread we've had people recoil in shock at this ^ one...


- Economic impact on companies/market because no new features were
added (?)


...but I have to say it was this ^ one that gave me the most concern.

At the opensource project level I really don't think this should be
something we're actively worrying about. What we should be worrying
about is if OpenStack is any good. Often "good" will include features,
but not all the time.

Let the people doing the selling worry about the market, if they
want. That stuff is, or at least should be, on the other side of a
boundary.


I'm certain that they will worry about the market.

But look at where contributions come from. A glance at stackalytics says 
that only 11% of contributors are independent, meaning companies account 
for 89% of the contributions. Whether we acknowledge it at the project 
level or not, features and "the OpenStack market" are going to be a 
priority for some portion of those 89% of contributions.


Those contributors also want OpenStack to be "good" but they also have 
deadlines to meet internally. Having a freeze upstream for stabilization 
is going to put downstream development into overdrive, no doubt. That 
would be a poor precedent to set, given where the bulk of 
contributions come from.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [neutron][api] GET call with huge argument list

2016-01-20 Thread Ryan Brown
So having a URI too long error is, in this case, likely an indication 
that you're requesting too many things at once.


You could:
1. Request 100 at a time in parallel
2. Find a query that would give you all those networks & page through 
the reply
3. Page through all the user's networks and filter client-side

How is the user supposed to be assembling this giant UUID list? I'd 
think it would be easier for them to specify a query (e.g. "get usage 
data for all my production subnets" or something).
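
For illustration, option 1 could look roughly like this with python-requests
(the endpoint and the net-id parameter are lifted from your curl example
below; everything else is a placeholder):

import requests

def fetch_in_batches(base_url, token, net_ids, batch_size=100):
    results = []
    for i in range(0, len(net_ids), batch_size):
        # Repeat the net-id query parameter for each ID in this batch so
        # no single URI gets anywhere near the length limit.
        params = [('net-id', net_id) for net_id in net_ids[i:i + batch_size]]
        resp = requests.get(base_url, params=params,
                            headers={'Accept': 'application/json',
                                     'X-Auth-Token': token})
        resp.raise_for_status()
        results.append(resp.json())
    return results

Each call stays well under any proxy's URI limit, and the batches can be
fired off in parallel if latency matters.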


On 01/19/2016 06:59 PM, Shraddha Pandhe wrote:

Hi folks,


I am writing a Neutron extension which needs to take 1000s of
network-ids as argument for filtering. The CURL call is as follows:

curl -i -X GET
'http://hostname:port/neutron/v2.0/extension_name.json?net-id=fffecbd1-0f6d-4f02-aee7-ca62094830f5&net-id=fffeee07-4f94-4cff-bf8e-a2aa7be59e2e'
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: "


The list of net-ids can go up to 1000s. The problem is, with such large
url, I get the "Request URI too long" error. I don't want to update this
limit as proxies can have their own limits.

What options do I have to send 1000s of network IDs?

1. -d '{}' is not a recommended option for GET call and wsgi Controller
drops the data part when routing the request.

2. Use POST instead of GET? I will need to write the get_
logic inside create_resource logic for this to work. It's a hack, but
complies with HTTP standard.






--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

2016-01-13 Thread Ryan Brown



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-13 Thread Ryan Brown

On 01/13/2016 04:42 AM, Jamie Hannaford wrote:

I've recently been gathering feedback about the Magnum API and one of
the things that people commented on​ was the global /containers
endpoints. One person highlighted the danger of UUID collisions:


"""

It takes a container ID which is intended to be unique within that
individual cluster. Perhaps this doesn't matter, considering the surface
for hash collisions. You're running a 1% risk of collision on the
shorthand container IDs:


In [13]: import math

In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))

In [15]: n(.01, 0x1000000000000)
Out[15]: 2378620.6298183016


(this comes from the Birthday Attack -
https://en.wikipedia.org/wiki/Birthday_attack)


The main reason I questioned this is that we're not in control of how
the hashes are created whereas each Docker node or Swarm cluster will
pick a new ID under collisions. We don't have that guarantee when
aggregating across.


The use case that was outlined appears to be aggregation and reporting.
That can be done in a different manner than programmatic access to
single containers.​

"""


Representing a resource without reference to its parent resource also
goes against the convention of many other OpenStack APIs.


Nesting a container resource under its parent bay would mitigate both of
these issues:


/bays/{uuid}/containers/{uuid}​


I'd like to get feedback from folks in the Magnum team and see if
anybody has differing opinions about this.


Jamie


I'm not a member of the Magnum community, but I am on the API working 
group, so my opinions come from a slightly different perspective.


Nesting resources is not a "bad" thing, and as long as containers will 
always be in bays (from what I understand of the Magnum architecture, 
this is indeed true) then nesting them makes sense.


Of course, it's a big change and will have to be communicated to users & 
client libraries, probably via a version bump.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Ryan Brown

On 01/12/2016 08:24 AM, Dan Prince wrote:

On Tue, 2016-01-12 at 12:52 +0100, Jiri Tomasek wrote:

On 01/11/2016 04:51 PM, Dan Prince wrote:

Background info:
[snip]




In similar manner, is Mistral able to provide a way to get workflow
in
progress data? E.g. We have a Mistral workflow for nodes
introspection.
This workflow contains several actions calling to ironic and
ironic-inspector apis. GUI calls Mistral API to trigger the workflow
and
now it needs to have a way to track the progress of that workflow.
How
would this be achieved?


I've been running 'mistral execution-list' and then you can watch
(poll) for the relevant execution items to finish running. Sure,
polling isn't great but I'd say let's start with this perhaps.


I think forcing GUI to poll the APIs that are
called in the actions and try to implement logic that estimates what
state the workflow is to report about it is not a valid solution. We
need the workflow API (Mistral) to provide a lets say web sockets
connection and push the status of actions along with relevant data,
so
GUI can listen to those.


I don't think Mistral has a websockets implementation. I think Zaqar
does though, and I think perhaps one way we could go about this sort of
notification might be to integrate our workflows with a Zaqar queue or
something. GUI would listen to a Zaqar queue for example... and the
workflow (as it executes) would post things to a specific queue.
Perhaps this is opt-in, only if a Zaqar queue is provided to a given
workflow. FWIW, integrating Mistral w/ Zaqar actions would likely be
quite easy.


Heat currently has a similar system for stack actions. The user provides 
a zaqar queue in the environment file, and every resource action 
generates a notification on that queue. I think a similar system would 
work for Mistral jobs.


For "domain-specific" notifications like receiving a notification for 
certain discovery stages, the TripleO custom actions could also publish 
messages to a queue so users get something more granular than "this 
Mistral job finished."



Alternately we could look at what it would take to add websocket
support to Mistral directly.


I really don't think adding websockets to Mistral would be worth it - 
Zaqar is a much better solution for notifications since it already has 
websocket, webhook, and even email (yes, that's right) notification options.




I am about to implement your nodes workflow in the GUI to test how it
works.

Jirka


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Ryan Brown
irectly?

What is the response from the Mistral action in the workflow? Lets say
we'd use Mistral to get a list of available environments (we do this in
tripleo-common now) So we call Mistral API to trigger a workflow that
has single action which gets the list of environments. Is mistral able
to provide this list as a response, or it is able just to trigger a
workflow?

In similar manner, is Mistral able to provide a way to get workflow in
progress data? E.g. We have a Mistral workflow for nodes introspection.
This workflow contains several actions calling to ironic and
ironic-inspector apis. GUI calls Mistral API to trigger the workflow and
now it needs to have a way to track the progress of that workflow. How
would this be achieved? I think forcing GUI to poll the APIs that are
called in the actions and try to implement logic that estimates what
state the workflow is to report about it is not a valid solution. We
need the workflow API (Mistral) to provide a lets say web sockets
connection and push the status of actions along with relevant data, so
GUI can listen to those.


I can see a few options for progress polling/push.

Mistral:
1. Send job to REST API (we didn't have to build)
2. It spins off as many jobs as it needs (we build)
3. Poll Mistral to see how many are done

TripleO API:
1. Build the REST API (we build)
2. Send requests to start introspection (we build)
3. Poll TripleO API until things are done

Mistral + Zaqar:
1. Send job to REST API (we didn't have to build)
2. It spins off as many jobs as it needs (we build)
3. Send updates to Zaqar (we didn't have to build)
4. Get websocket updates to the GUI from Zaqar (we didn't have to build)

Zaqar is (of course) designed to deliver updates like this, so every 
project on the face of the planet doesn't have to rebuild websocket 
notifications, which is a good thing.
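
To make that last option concrete, the GUI side could be as small as
something like this (using the websocket-client library; the queue name is
made up, and the exact authenticate/subscribe payload should come from
Zaqar's v2 websocket docs rather than from my memory):

import json
import websocket  # pip install websocket-client

ws = websocket.create_connection('ws://undercloud.example.com:9000')

# Illustrative only: the real handshake (auth token, Client-ID, queue
# subscription) follows Zaqar's v2 websocket protocol.
ws.send(json.dumps({
    'action': 'subscription_create',
    'headers': {'X-Auth-Token': 'TOKEN', 'Client-ID': 'tripleo-gui'},
    'body': {'queue_name': 'tripleo-deployment', 'ttl': 3600},
}))

while True:
    update = json.loads(ws.recv())
    print(update)  # hand the progress update to the UI instead of printing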



I am about to implement your nodes workflow in the GUI to test how it
works.

Jirka



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [TripleO] Driving workflows with Mistral

2016-01-12 Thread Ryan Brown

On 01/12/2016 10:50 AM, Jiri Tomasek wrote:

On 01/12/2016 04:22 PM, Ryan Brown wrote:

On 01/12/2016 06:52 AM, Jiri Tomasek wrote:

On 01/11/2016 04:51 PM, Dan Prince wrote:

Background info:

We've got a problem in TripleO at the moment where many of our
workflows can be driven by the command line only. This causes some
problems for those trying to build a UI around the workflows in that
they have to duplicate deployment logic in potentially multiple places.
There are specs up for review which outline how we might solve this
problem by building what is called TripleO API [1].

Late last year I began experimenting with an OpenStack service called
Mistral which contains a generic workflow API. Mistral supports
defining workflows in YAML and then creating, managing, and executing
them via an OpenStack API. Initially the effort was focused around the
idea of creating a workflow in Mistral which could supplant our
"baremetal introspection" workflow which currently lives in python-
tripleoclient. I create a video presentation which outlines this effort
[2]. This particular workflow seemed to fit nicely within the Mistral
tooling.



More recently I've turned my attention to what it might look like if we
were to use Mistral as a replacement for the TripleO API entirely. This
brings forth the question of would TripleO be better off building out
its own API... or would relying on existing OpenStack APIs be a better
solution?

Some things I like about the Mistral solution:

- The API already exists and is generic.

- Mistral already supports interacting with many of the OpenStack API's
we require [3]. Integration with keystone is baked in. Adding support
for new clients seems straightforward (I've had no issues in adding
support for ironic, inspector, and swift actions).

- Mistral actions are pluggable. We could fairly easily wrap some of
our more complex workflows (perhaps those that aren't easy to replicate
with pure YAML workflows) by creating our own TripleO Mistral actions.
This approach would be similar to creating a custom Heat resource...
something we have avoided with Heat in TripleO but I think it is
perhaps more reasonable with Mistral and would allow us to again build
out our YAML workflows to drive things. This might allow us to build
off some of the tripleo-common consolidation that is already underway
...

- We could achieve a "stable API" by simply maintaining input
parameters for workflows in a stable manner. Or perhaps workflows get
versioned like a normal API would be as well.

- The purist part of me likes Mistral quite a bit. It fits nicely with
the deploy OpenStack with OpenStack. I sort of feel like if we have to
build our own API in TripleO part of this vision has failed and could
even be seen as a massive technical debt which would likely be hard to
build a community around outside of TripleO.

- Some of the proposed validations could perhaps be implemented as new
Mistral actions as well. I'm not convinced we require TripleO API just
to support a validations mechanism yet. Perhaps validations seem hard
because we are simply trying to do them in the wrong places anyway?
(like for example perhaps we should validate network connectivity at
inspection time rather than during provisioning).

- Power users might find a workflow built around a Mistral API more
easy to interact with and expand upon. Perhaps this ends up being
something that gets submitted as a patchset back to the TripleO that we
accept into our upstream "stock" workflow sets.



Last week we landed the last patches [4] to our undercloud to enable
installing Mistral by simply setting: enable_mistral = true in
undercloud.conf. NOTE: you'll need to be using a recent trunk repo from
Delorean so that you have the recently added Mistral packages for this
to work. Although the feature is disabled by default, this should enable
those wishing to tinker with Mistral as a new TripleO undercloud
service an easy path forwards.

[1] https://review.openstack.org/#/c/230432
[2] https://www.youtube.com/watch?v=bnAT37O-sdw
[3] http://git.openstack.org/cgit/openstack/mistral/tree/mistral/action
s/openstack/mapping.json
[4] https://etherpad.openstack.org/p/tripleo-undercloud-workflow




Hi, I have a few questions:

Is Mistral action able to access/manipulate local files? E.g. access the
templates installed at undercloud's
/usr/share/openstack-tripleo-heat-templates?


I believe with mistral there would be an intermediate step of
uploading the templates to Swift first. Heat can read templates from
swift, and any mistral workflows would be able to read the templates
out, modify them, and save back to swift.


Correct, but from the Mi

Re: [openstack-dev] [heat] Client checking of server version

2016-01-07 Thread Ryan Brown

On 01/06/2016 12:05 PM, Zane Bitter wrote:

On 05/01/16 16:37, Steven Hardy wrote:

On Mon, Jan 04, 2016 at 03:53:07PM -0500, Jay Dobies wrote:

I ran into an issue in a review about moving environment resolution from
client to server [1]. It revolves around clients being able to access
older
versions of servers (that's a pretty simplistic description; see [2]
for the
spec).

Before the holiday, Steve Hardy and I were talking about the
complications
involved. In my case, there's no good way to differentiate an older
server
from a legitimate error.

Since the API isn't versioned to the extent that we can leverage that
value,
I was looking into using the template versions call. Something along the
lines of:

   supported_versions = hc.template_versions.list()
   version_nums = [i.to_dict()['version'].split('.')[1]
                   for i in supported_versions]
   mitaka_or_newer = [i for i in version_nums if i >= '2016-04-08']

Yes, I'm planning on cleaning that up before submitting it :)

What I'm wondering is if I should make this into some sort of
generalized
utility method in the client, under the assumption that we'll need
this sort
of check in the future for the same backward compatibility requirements.

So a few questions:

1. Does anyone strongly disagree to checking supported template
versions as
a way of determining the specifics of the server API.


Ok, so some valid concerns have been raised over deriving things using
the
HOT version (although I do still wonder if the environment itself
should be
versioned, just like the templates, then we could rev the environment
verion and say it supports a list, vs changing anything in the API, but
that's probably a separate discussion).

Taking a step back for a moment, the original discussion was around
providing transparent access to the new interface via heatclient, but
that
isn't actually a hard requirement - the old interface works fine for many
users, so we could just introduce a new interface (which would eventually
become the default, after all non-EOL heat versions released support the
new API argument):

Currently we do:

heat stack-create foo -f foo.yaml -e a.yaml -e b.yaml

And this implies some client-side resolution of the multiple -e
arguments.

-e is short for "--environment-file", but we could introduce a new
format,
e.g "-E", short for "--environment-files":


I agree with Zane, this looks like a usability (and backwards compat) 
nightmare.


Not only do you have to get over everyone's muscle memory of typing `-e` 
(I've got it bad) but also all the scripts folks have that use heatclient.


Then there's the docs: "If ... blah blah ... then use -E, 
otherwise use -e" will be a pretty fat stumbling block for folks that 
use different deploys of OpenStack (say, a Juno prod cloud and a Kilo 
staging cloud) if they want to use Heat templates on both.
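
As for the generalized utility method Jay asked about, a rough sketch based
on his snippet (the name and exact shape are made up, and it still leans on
template versions as the capability probe):

def server_supports(hc, minimum='2016-04-08'):
    # Lexicographic comparison works because the versions are YYYY-MM-DD.
    versions = [v.to_dict()['version'].split('.')[1]
                for v in hc.template_versions.list()]
    return any(version >= minimum for version in versions)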



heat stack-create foo -f foo.yaml -E a.yaml -E b.yaml

This option would work the same way as the current interface, but it
would
pass the files unmodified for resolution inside heat (by using the new
API
format), and as it's opt-in, it's leaving all the current heatclient
interfaces alone without any internal fallback logic?


That would certainly work, but it sounds like a usability/support
nightmare :(

Is there a reason we wouldn't consider bumping the API version to 1.1
for this? We'll have to figure out how to do it some time.

cheers,
Zane.



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-06 Thread Ryan Brown
 a well trodden and
tested path.

I'm not entirely sure how it maps to our template configuration method
though.  Storing a bunch of template blobs in the database feels a
little square peg, round hole to me, and might undo a lot of the
benefits of using the database in the first place.


I don't follow this point. In the way we access Swift, everything is
essentially
a blob of text also.

Now, on the other hand, having a database to store basic data like
metadata on plans and such might be a useful thing regardless of where
we store the templates themselves.  We could also use it for locking
just fine IMHO - TripleO isn't a tool targeted to have cloud-scale
number of users.  It's a deployer tool with a relatively limited number
of users per installation, so the scaling of a locking mechanism isn't a
problem I necessarily think we need to solve.  We have way bigger
scaling issues than that to tackle before I think we would hit the limit
of a database-locking scheme.

> - Invest time in building something on Swift.
> - Glance was transitioning to be an Artifact store. I don't know the
> status of
>   this or if it would meet our needs.
>
> Any input, ideas or suggestions would be great!
>
> Thanks,
> Dougal
>
>
> [1]:

>http://eavesdrop.openstack.org/meetings/tripleo/2015/tripleo.2015-12-15-14.03.log.html#l-89
> [2]:

>https://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-overcloud-deployment-library.html
> [3]:https://review.openstack.org/#/c/257481/
>



--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [TripleO] Removing the Tuskar repos

2016-01-04 Thread Ryan Brown

On 12/22/2015 12:44 PM, Jason Rist wrote:

On 12/22/2015 04:38 AM, Dougal Matthews wrote:

Hi all,

I mentioned this at the meeting last week, but wanted to get wider input.
As far as I can tell from the current activity, there is no work going into
Tuskar and it isn't being tested with CI. This means the code is becoming
more stale quickly and likely wont work soon (if not already).

TripleO common is working towards solving the same problems that Tuskar
attempted and can be seen as the replacement for Tuskar. [1][2]

Are there any objections to it's removal? This would include the tuskar,
python-tuskarclient and tuskar-ui repos. We would also need to remove it
from instack-undercloud and tripleo-image-elements.

I'll start to beginning the cleanup process sometime in min/late January if
there are no objections.

Cheers,
Dougal


[1]:
https://specs.openstack.org/openstack/tripleo-specs/specs/mitaka/tripleo-overcloud-deployment-library.html
[2]: https://review.openstack.org/230432





+1


+1
--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [Zaqar] OpenStack Tokyo Summit Summary

2015-11-10 Thread Ryan Brown

On 11/06/2015 01:36 PM, Doug Hellmann wrote:

Excerpts from Fei Long Wang's message of 2015-11-07 01:31:09 +1300:

Greetings,

Firstly, thank you for everyone joined Zaqar sessions at Tokyo summit.
We definitely made some great progress for those working sessions. Here
are the high level summary and those are basically our Mitaka
priorities. I may miss something so please feel free to comment/reply
this mail.

Sahara + Zaqar
-

We have a great discussion with Ethan Gafford from Sahara team. Sahara
team is happy to use Zaqar to fix some potential security issues. The
main user case will be covered in Mitaka is protecting tenant guest and
data from administrative user. So what Zaqar team needs to do in Mitaka
is completing the zaqar client function gaps for v2 to support signed
URL, which will be used by Sahara guest agent. Ethan will create a spec
in Sahara to track this work. This is a POC of what it'd look like to
have a guest agent in Sahara on top of Zaqar. The Sahara team has not
decided to use Zaqar yet but this would be the bases for that discussion.

Horizon + Zaqar
--

We used 1 horizon work session and 1 Zaqar work session to discuss this
topic. The main user case we would like to address is the async
notification so that Horizon won't have to poll the other OpenStack
components(e.g. Nova, Glance or Cinder) per second to get the latest
status. And I'm really happy to see we worked out a basic plan by
leveraging Zaqar's notification and websocket.

1. Implement a basic filter for Zaqar subscription, so that Zaqar can
decide if the message should be posted/forwarded to the subscriber when
there is a new message posted the queue. With this feature, Horizon will
only be notified by its interested notifications.

https://blueprints.launchpad.net/zaqar/+spec/suport-filter-for-subscription

2. Listen OpenStack notifications

We may need more discussion about this to make sure if it should be in
the scope of Zaqar's services. It could be separated process/service of
Zaqar to listen/collect interested notifications/messages and post them
in to particular Zaqar queues. It sounds very interesting and useful but
we need to define the scope carefully for sure.


The Telemetry team discussed splitting out the code in Ceilometer that
listens for notifications to make it a more generic service that accepts
plugins to process events. One such plugin might be an interface to
filter events and republish them to Zaqar, so if you're interested in
working on that you might want to coordinate with the Telemetry team to
avoid duplicating effort.


That would be great; I'm pretty strongly opposed to adding message 
filtering because that adds to the complexity of subscriptions pretty 
significantly.



Pool Group and Flavor
-

Thanks MD MADEEM proposed this topic so that we have a chance to review
the design of pool, pool group and flavor. Now the pool group and flavor
has a 1:1 mapping relationship and the pool group and pool has a 1:n
mapping relationship. But end user don't know the existence of pool, so
flavor is the way for end user to select what kind of storage(based on
capabilities) he want to use. Since pool group can't provide more
information than flavor so it's not really necessary, so we decide to
deprecate/remove it in Mitaka. Given this is hidden from users (done
automatically by Zaqar), there won't be an impact on the end user and
the API backwards compatibility will be kept.

https://blueprints.launchpad.net/zaqar/+spec/deprecate-pool-group

Zaqar Client


Some function gaps need to be filled in Mitaka. Personally, I would rate
the client work as the 1st priority of M since it's very key for the
integration with other OpenStack components. For v1.1, the support for
pool and flavor hasn't been completed. For v2, we're stilling missing
the support for subscription and signed URL.

https://blueprints.launchpad.net/zaqar/+spec/finish-client-support-for-v1.1-features


+1


SqlAlchemy Migration
-

Now we're missing the db migration support for SqlAlchemy, the control
plane driver. We will fix it in M as well.

https://blueprints.launchpad.net/zaqar/+spec/sqlalchemy-migration


Would this just be the addition of alembic (how heat and other projects 
handle it)?
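
For reference, a minimal alembic migration module looks roughly like this
(the revision id and table are invented here; the real ones would come from
`alembic revision` and Zaqar's sqlalchemy models):

"""add queues table

Revision ID: 3a1b2c4d5e6f
Revises: None
"""
from alembic import op
import sqlalchemy as sa

revision = '3a1b2c4d5e6f'
down_revision = None


def upgrade():
    op.create_table(
        'queues',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(64), nullable=False),
        sa.Column('project', sa.String(64)),
        sa.Column('metadata', sa.Text()),
    )


def downgrade():
    op.drop_table('queues')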



Guys, please contribute this thread to fill the points/things I missed
or pop up in #openstack-zaqar channel directly with questions and
suggestions.






--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-02 Thread Ryan Brown

On 11/02/2015 02:22 PM, Sean M. Collins wrote:

On Mon, Nov 02, 2015 at 03:19:42AM EST, Daniel Mellado wrote:

Also you could set up this var

LOGDAYS=1

to limit the amount of log, althougt setting the LOGDIR to /dev/null
should work too.


That is only useful if you are doing a new run of stack.sh - it won't
handle the issue of the log file growing in size.

I mean I guess the real answer is to configure logrotate. ugh.


Alternatively, you can set up a cronjob to do this:

du -sh LOGDIR | cut -f1 | grep -q G && \
 find LOGDIR -type f -exec truncate --size 0 {} \;

That'll cut all the logfiles down to size when the total size of your 
log directory exceeds 1 GB. I have it set to run every 3 hours on my vms.
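
If shell one-liners aren't your thing, roughly the same idea in Python
(LOGDIR and the 1 GiB threshold are whatever fits your setup):

import os

LOGDIR = os.path.expanduser('~/devstack/logs')
LIMIT = 1024 ** 3  # truncate everything once the directory tops 1 GiB

paths, total = [], 0
for root, _dirs, files in os.walk(LOGDIR):
    for name in files:
        path = os.path.join(root, name)
        paths.append(path)
        total += os.path.getsize(path)

if total > LIMIT:
    for path in paths:
        open(path, 'w').close()  # truncate in place, keep the file around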


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [api][ironic] some questions about tags

2015-10-22 Thread Ryan Brown

On 10/21/2015 11:14 PM, Tan, Lin wrote:

Hi guys,

Ironic is implementing the tags stuff:
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/nodes-tagging,n,z

And this work is follow the guidelines of API Workgroup:
http://specs.openstack.org/openstack/api-wg/guidelines/tags.html

But I have two doubts about the guideline:
1. Can we support partially updating the Tag List using PATCH? I see there is an 
option to add/delete an individual tag, but it is still using PUT. What's the 
disadvantage here for PATCH?


The major disadvantage of PATCH is that there isn't an official standard 
for partial representation of things like lists. There are things like 
JSONPatch[1] that provide guidance, but since OpenStack hasn't really 
made a choice, we haven't put up a guideline.


For information on what we *have* done regarding PATCH, see the HTTP 
Methods guide[2].



2. Can we update the tag as well? For example, in Gmail we can rename a label 
if necessary, which is much more friendly to me. But currently, this is not 
supported by the guide. The only way to support this is to cache the tags' 
entities and retag them in the python client, and I don't think that's a good way.


The idea behind our guideline is to specify the way the API should allow 
users to access the tags on their resources. You can (of course!) extend 
functionality and add the ability to rename tags if you'd like.


If you were to use PATCH, then renaming a tag could look like:

step 1: get servers with tag X
step 2: PATCH
  [
{"op": "replace", "path": "/tags/1", "value": "Y"}
  ]

That would replace the existing tag with the new one, if you were using 
JSONPatch.
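
On the wire that rename is a single request; a hedged sketch with
python-requests (the node URL is hypothetical, and it assumes the service
accepts JSON Patch documents):

import json
import requests

patch = [{"op": "replace", "path": "/tags/1", "value": "Y"}]
resp = requests.patch(
    "http://ironic.example.com/v1/nodes/NODE_UUID/tags",  # hypothetical URL
    data=json.dumps(patch),
    headers={"Content-Type": "application/json-patch+json",
             "X-Auth-Token": "TOKEN"},
)
resp.raise_for_status()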



Best Regards,

Tan


1: http://jsonpatch.com/
2: 
http://specs.openstack.org/openstack/api-wg/guidelines/http.html#http-methods

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] conflicting names in python-openstackclient: could we have some exception handling please?

2015-10-07 Thread Ryan Brown

On 10/06/2015 05:15 PM, Thomas Goirand wrote:

Hi,

tl;dr: let's add a exception handling so that python-*client having
conflicting command names isn't a problem anymore, and "openstack help"
always work as much as it can.


Standardizing on "openstack <project> <noun[s]> verb" would likely be 
the best solution for both the immediate problem and for the broader 
"naming stuff" issue.


Sharing a flat namespace is a recipe for pain with a growing number of 
projects. Devs and users are unlikely to use every project, so they 
probably won't notice conflicts naturally except in cases like Horizon.


If we look over the fence at AWS, you'll note that their nice unified 
CLI that stops the non-uniform `awk` bloodshed is namespaced.


- aws s3 ...
- aws cloudformation ...
- aws ec2 ...

A flat namespace was a mostly-fine idea when all integrated projects 
were expected to put their CLI in-tree in openstackclient. There were 
reviews, and discussions about what noun belonged to whom.


Noun conflict will only get worse: lots of projects will share words 
like stack, domain, user, container, address, and so on.


Namespaces are one honking great idea -- let's do more of those!


Longer version:

This is just a suggestion for contributors to python-openstackclient.

I saw a few packages that had conflicts with the namespace of others
within openstackclient. To the point that typing "openstack help" just
fails. Here's an example:

# openstack help
[ ...]
   project create  Create new project
   project delete  Delete project(s)
   project list   List projects
   project setSet project properties
   project show   Display project details
Could not load EntryPoint.parse('ptr_record_list =
designateclient.v2.cli.reverse:ListFloatingIPCommand')
'ArgumentParser' object has no attribute 'debug'

This first happened to me with saharaclient. Lucky, upgrading to latest
version fixed it. Then I had the problem with zaqarclient, which I fixed
with a few patches to its setup.cfg. Then now designate, but this time,
patching setup.cfg doesn't seem to cut it (ie: after changing the name
of the command, "openstack help" just fails).

Note: I don't care which project is at fault, this isn't the point here.
The point is that command name conflicts aren't handled (see below)
which is the problem.


+1, this isn't a problem specific to any project, it's systemic with 
flat namespacing.



With Horizon being a large consumer of nearly all python-*client
packages, removing one of them also removes Horizon in my CI which is
not what I want to (or can) do to debug a tempest problem. End of the
story: since Liberty b3, I never could have "openstack help" to work
correctly in my CI... :(


O.O That's unfortunate.


Which leads me to write this:

Since we have a very large number of projects, with each and every one of
them adding new commands to openstackclient, it would be really nice if we
could have some kind of checks to make sure that conflicts are either 1/
not possible or 2/ handled gracefully.


To your (1) we could have a gate job that installs all the clients and 
fails on conflicts.
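
A rough sketch of what that gate check could do (assuming, as I believe is
the case, that OSC plugins register their commands under entry point groups
named openstack.*):

import collections
import pkg_resources

def osc_command_conflicts():
    # Map every command name registered under an 'openstack.*' entry point
    # group to the set of distributions claiming it.
    owners = collections.defaultdict(set)
    for dist in pkg_resources.working_set:
        for group, entry_points in dist.get_entry_map().items():
            if group.startswith('openstack.'):
                for name in entry_points:
                    owners[name].add(dist.project_name)
    return {name: dists for name, dists in owners.items() if len(dists) > 1}

if __name__ == '__main__':
    conflicts = osc_command_conflicts()
    for name, dists in sorted(conflicts.items()):
        print('%s is claimed by: %s' % (name, ', '.join(sorted(dists))))
    raise SystemExit(1 if conflicts else 0)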


The downside of doing that without addressing the namespace problem is 
that there will be inconsistent conventions everywhere. Zaqar will have 
"openstack queue " but "openstack message flavor ..." which creates 
the sort of confusion a unified client is supposed to avoid.


A central reservation process for nouns won't really scale, but 
namespacing will because we *already* namespace projects.



Your thoughts?
Cheers,

Thomas Goirand (zigo)

P.S: It wasn't the point of this message, but do we have a fix for
designateclient? It'd be nice to have this fixed before Liberty is out.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Ryan Brown

On 09/30/2015 03:10 AM, Anant Patil wrote:

Hi,

One of remaining items in convergence is detecting and handling engine
(the engine worker) failures, and here are my thoughts.

Background: Since the work is distributed among heat engines, by some
means heat needs to detect the failure and pick up the tasks from failed
engine and re-distribute or run the task again.

One of the simple way is to poll the DB to detect the liveliness by
checking the table populated by heat-manage. Each engine records its
presence periodically by updating current timestamp. All the engines
will have a periodic task for checking the DB for liveliness of other
engines. Each engine will check for timestamp updated by other engines
and if it finds one which is older than the periodicity of timestamp
updates, then it detects a failure. When this happens, the remaining
engines, as and when they detect the failures, will try to acquire the
lock for in-progress resources that were handled by the engine which
died. They will then run the tasks to completion.


Implementing our own locking system, even a "simple" one, sounds like a 
recipe for major bugs to me. I agree with your assessment that tooz is a 
better long-run decision.



Another option is to use a coordination library like the community owned
tooz (http://docs.openstack.org/developer/tooz/) which supports
distributed locking and leader election. We use it to elect a leader
among heat engines and that will be responsible for running periodic
tasks for checking state of each engine and distributing the tasks to
other engines when one fails. The advantage, IMHO, will be simplified
heat code. Also, we can move the timeout task to the leader which will
run time out for all the stacks and sends signal for aborting operation
when timeout happens. The downside: an external resource like
Zookeeper/memcached etc. is needed for leader election.


That's not necessarily true. For single-node installations (devstack, 
TripleO underclouds, etc) tooz offers file and IPC backends that don't 
need an extra service. Tooz's MySQL/PostgreSQL backends only provide 
distributed locking functionality, so we may need to depend on the 
memcached/redis/zookeeper backends for multi-node installs.


Even if tooz doesn't provide everything we need, I'm sure patches would 
be welcome.
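
For illustration, the single-node case with tooz's file backend is tiny; a
rough sketch (the path, member id, and lock name below are made up):

from tooz import coordination

# The file backend needs no extra service; swap the URL for a
# memcached/redis/zookeeper backend on multi-node installs.
coordinator = coordination.get_coordinator(
    'file:///var/lib/heat/locks', b'engine-1')
coordinator.start()

lock = coordinator.get_lock(b'stack-1234-resource-42')
with lock:
    pass  # work on the in-progress resource goes here

coordinator.stop()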



In the long run, IMO, using a library like tooz will be useful for heat.
A lot of boiler plate needed for locking and running centralized tasks
(such as timeout) will not be needed in heat. Given that we are moving
towards distribution of tasks and horizontal scaling is preferred, it
will be advantageous to use them.

Please share your thoughts.

- Anant

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Ryan Brown

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common I have
to grep through the projects I know that use it to make sure I don't break
anything.


The API working group exists, but they focus on REST APIs so they don't 
have any guidelines on library APIs.



Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public 
might be too broad. For example, that would make all of libutils 
ostensibly a "stable" interface. I don't think that's what we want, 
especially this early in the lifecycle.


In heatclient, we present the "heatclient.client" and "heatclient.exc" 
modules as the main public API, and put the versioned implementations in 
per-version modules.


heatclient
|- client
|- exc
\- v1
  |- client
  |- resources
  |- events
  |- services

I think versioning the public API is the way to go, since it will make 
it easier to maintain backwards compatibility while new needs/uses evolve.
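
For tripleo-common, the top level could be a thin version dispatcher. An
entirely hypothetical sketch (the module names are made up, just to show
the pattern):

# tripleo_common/client.py (hypothetical)
def Client(version, *args, **kwargs):
    """Return a client for the requested public API version."""
    if str(version) == '1':
        # hypothetical versioned package
        from tripleo_common.v1 import client as v1_client
        return v1_client.Client(*args, **kwargs)
    raise ValueError("Unsupported API version: %s" % version)

Everything under the versioned package can then be refactored freely while
the top-level module stays stable.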


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-28 Thread Ryan Brown

On 09/26/2015 12:04 AM, Joshua Harlow wrote:

+1 from me, although I thought heat was supposed to be this thing?

Maybe there should be a 'warm' project or something ;)

Or we can call it 'bbs' for 'building block service' (obviously not
bulletin board system); ask said service to build a set of blocks into
well defined structures and let it figure out how to make that happen...

This most definitely requires cross-project agreement though, so
I'd hope we can reach that somehow (before creating a halfway done new
orchestration thing that is halfway integrated with a bunch of other
apis that do one quarter of the work in ten different ways).


Indeed, I don't understand what need heat is failing to fulfill here.
A user can easily have a template that contains a single server and a
volume.


Heat's job is to be an API that lets you define a result[1] and then 
calls the APIs of whatever projects provide those things.


1: in this case, the result is "a working server with network and storage"
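
For example, something along these lines (the image/flavor names, endpoint,
and token are placeholders; the heatclient usage follows its documented
interface):

import yaml
from heatclient.client import Client

template = """
heat_template_version: 2013-05-23
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-21      # placeholder image name
      flavor: m1.small
  volume:
    type: OS::Cinder::Volume
    properties:
      size: 10
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: {get_resource: server}
      volume_id: {get_resource: volume}
"""

heat = Client('1', endpoint='http://heat.example.com:8004/v1/TENANT_ID',
              token='TOKEN')
heat.stacks.create(stack_name='server-with-volume',
                   template=yaml.safe_load(template))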


Duncan Thomas wrote:

I think there's a place for yet another service breakout from nova -
some sort of light-weight platform orchestration piece, nothing as
complicated or complete as heat, nothing that touches the inside of a
VM, just something that can talk to cinder, nova and neutron (plus I
guess ironic and whatever the container thing is called) and work
through long running / cross-project tasks. I'd probably expect it to
provide a task style interface, e.g. a boot-from-new-volume call returns
a request-id that can then be polled for detailed status.

The existing nova API for this (and any other nova APIs where this makes
sense) can then become a proxy for the new service, so that tenants are
not affected. The nova apis can then be deprecated in slow time.

Anybody else think this could be useful?


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-25 Thread Ryan Brown

On 09/25/2015 10:44 AM, Ihar Hrachyshka wrote:

Hi all,

releases are approaching, so it’s the right time to start some bike
shedding on the mailing list.

Recently I got pointed out several times [1][2] that I violate our
commit message requirement [3] for the message lines that says:
"Subsequent lines should be wrapped at 72 characters.”

I agree that very long commit message lines can be bad, f.e. if they
are 200+ chars. But <= 79 chars?.. Don’t think so. Especially since
we have a 79-char limit for the code.


The default "git log" display shows the commit message already indented, 
and the tab may display as 8 spaces I suppose. I believe the 72 limit is 
derived from 80-8 (terminal width minus tab width).


I don't know how many folks use 80-char terminals (I use side-by-side 
110-column terms). Having some limit to prevent 200+ is reasonable, but 
I think it's pedantic to -1 a patch due to a 78-char commit message line.



We had a check for the line lengths in openstack-dev/hacking before
but it was killed [4] as per openstack-dev@ discussion [5].

I believe commit message lines of <=80 chars are absolutely fine and
should not get -1 treatment. I propose to raise the limit for the
guideline on wiki accordingly.

Comments?

[1]: https://review.openstack.org/#/c/224728/6//COMMIT_MSG [2]:
https://review.openstack.org/#/c/227319/2//COMMIT_MSG [3]:
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure



[4]: https://review.openstack.org/#/c/142585/

[5]:
http://lists.openstack.org/pipermail/openstack-dev/2014-December/thread.html#52519

 Ihar






--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][keystone][openstackclient] Standards for object name attributes and filtering

2015-08-31 Thread Ryan Brown
On 08/27/2015 11:28 PM, Chen, Wei D wrote:
> 
> I agree that returning 400 is a good idea; that way the client user would
> know what happened.
> 

+1, I think a 400 is the sensible choice here. It'd be much more likely
to help devs catch their errors.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-06 Thread Ryan Brown
On 08/06/2015 01:53 PM, Christopher Aedo wrote:
 Today during the app-catalog IRC meeting we talked about hosting Heat
 templates for contributors.  Right now someone who wants to create
 their own templates can easily self-host them on github, but until
 they get people pointed at it, nobody will know about their work on
 that template, and getting guidance and feedback from all the people
 who know Heat well takes a fair amount of effort.

Discoverability is a problem, but so is ownership in the shared repo
case. There's also the heat-templates repo, which has some example
content and such.

 What do you think about us creating a new repo (app-catalog-heat
 perhaps), and collectively we could encourage those interested in
 contributing Heat templates to host them there?  Ideally members of
 the Heat community would become reviewers of the content, and give
 guidance and feedback. 

I think being able to review something requires a lot more than "hey, we
have a central/shared repo", including having some shared purpose and
knowledge of the goal.

Of course, people with heat knowledge can look at templates and say
things like "well, that's not valid YAML", but that's not really a code
review. I'd much rather see folks come to IRC or ask.openstack with
specific questions so we can 1) answer them or 2) improve our docs.

Having a shared repo of "here are some heat templates" doesn't strike me
as incredibly useful, especially if the templates don't all go together
and make one big thingy.

 It would also allow us to hook into OpenStack
 CI so these templates could be tested, and contributors would have a
 better sense of the utility/portability of their templates.  Over time
 it could lead to much more exposure for all the useful Heat templates
 people are creating.
 
 Thoughts?
 
 -Christopher

What do you imagine these templates being for? Are people creating
little reusable snippets/nested stacks that can be incorporated into
someone else's infrastructure? Or standalone templates for stuff like
"here, instant mongodb cluster"?

Also, the obvious question for the central repo is: how does reviewing
work? Are heat cores expected to also be cores on this new repo, or do we
maybe just take anything that gets 5 +1's?

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][api] Response when a illegal body is sent

2015-07-23 Thread Ryan Brown
On 07/23/2015 12:13 PM, Jay Pipes wrote:
 On 07/23/2015 10:53 AM, Bunting, Niall wrote:
 Hi,

 Currently when a body is passed to an API operation that explicitly
 does not allow bodies Glance throws a 500.

 Such as in this bug report:
 https://bugs.launchpad.net/glance/+bug/1475647 This is an example of
 a GET however this also applies to other requests.

 What should Glance do rather than throwing a 500? Should it return a
 400, as the user provided an illegal body?
 
 Yep, this.

+1, this should be a 400. It would also be acceptable (though less
preferable) to ignore any body on GET requests and execute the request
as normal.
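
Either way the check is tiny; a hypothetical sketch using webob (not
Glance's actual middleware):

import webob
import webob.exc

def reject_unexpected_body(req):
    """Raise a 400 if a body is sent on a request that shouldn't have one."""
    if req.method == 'GET' and (req.content_length or 0) > 0:
        raise webob.exc.HTTPBadRequest(
            explanation="GET requests must not include a body")

# e.g.: reject_unexpected_body(webob.Request.blank('/v2/images'))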

 Best,
 -jay
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] conditional resource exposure - second thoughts

2015-07-14 Thread Ryan Brown
On 07/14/2015 02:46 PM, Clint Byrum wrote:
 Excerpts from Pavlo Shchelokovskyy's message of 2015-07-14 11:34:37 -0700:
 Hi Heaters,

 currently we already expose to the user only resources for services
 deployed in the cloud [1], and soon we will do the same based on whether
 actual user roles allow creating specific resources [2]. Here I would like
 to get your opinion on some of my thoughts regarding behavior of
 resource-type-list, resource-type-show and template-validate with these new
 features.

 resource-type-list
 We already (or soon will) hide resources unavailable in the cloud / for the
 user from the listing. But what if we add an API flag, e.g. --all, to
 show all resources registered in the engine? That would give any user a
 glimpse of what their Orchestration service can manage in principle, so
 they can nag the cloud operator to install additional OpenStack components
 or give them required roles :)

 
 There are more variables that lead to a resource being hidden than
 the catalog. The version of Heat, whether the plugin is available,
 libraries installed on the server, etc. The canonical place for all
 things possible, and the place that users should be encouraged to use,
 is the documentation of Heat. These API's should only be for inspection
 of what is available on the Heat service one is talking to.

I'd agree with Clint.

Think about a user that says to themselves "Hey, I want to see *all* the
Heat resources!" Will they:

1) Google "heat resources openstack" or similar, landing at our docs
page with the list in a nice, human-friendly format.
2) Look in the API endpoint docs and find a flag to show disabled
resources. Then call that endpoint and read the JSON response.

I think the answer is going to be (1) for the vast majority of users.

 template-validate
 Right now Heat is failing validation for templates containing resource
 types not registered in the engine (e.g. typo). Should we also make this
 call available services- and roles-sensitive? Or should we leave a way for
 a user to check validity of any template with any in principle supported
 resources?

 
 I believe the current behavior is correct. The idea is to be able to
 edit a template, and then validate it on all the clouds you want to push
 it to.

Maybe it would be valuable to distinguish (when failing a validation)
between "Resource Foo::Bar is not a thing at all" and "Resource
OS::Zaqar::Queue is disabled on this cloud".

I'm not sure if validate currently makes that distinction, but it likely
should.
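
Something like this (purely hypothetical names, not Heat's actual
validation code) is the distinction I mean:

class StackValidationFailed(Exception):
    pass

def check_resource_type(registered_types, enabled_types, type_name):
    """Tell 'no such type' apart from 'type disabled on this cloud'."""
    if type_name not in registered_types:
        raise StackValidationFailed(
            "Unknown resource type: %s" % type_name)
    if type_name not in enabled_types:
        raise StackValidationFailed(
            "Resource type %s is not available on this cloud" % type_name)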

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Should project name be uppercase or lowercase?

2015-07-08 Thread Ryan Brown
On 07/07/2015 10:36 PM, Anne Gentle wrote:
 Which do you think we should use, uppercase or lowercase, for
 representing project names?
 
 
 I'll be patching the governance repo with some guidelines we have been
 using to make them official. The projects.yaml file is the reference
 point for the service catalog and documentation, and the doc team
 maintains a lookup list on the wiki page referenced above.
 
 Conventions for service names: 
 
   * Uppercase first letter of first word only.
   * Do not use OpenStack in the name of the service.
   * Use module if it is consuming other services (such as heat).

This may not be a great place to bring this up, but I don't really see
how consuming other services makes heat/ceilometer a "module".

Most projects consume other services (they must use keystone at the
barest minimum, trove uses nova, glance uses swift, etc.), which would
make darn near every service a "module". It doesn't seem to be a valuable
distinction, and I know that within the heat team, we say "service".

Do operators, users, or developers find this distinction useful? Calling
everything "[blank] service" seems like it'd be more consistent.


-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-06-01 Thread Ryan Brown
On 06/01/2015 04:34 AM, Flavio Percoco wrote:
 On 31/05/15 18:12 +, Ildikó Váncsa wrote:
 Hi All,

 I would like to ask about the user-facing notifications part of the
 list. Do you have a roadmap for this? Is this driven by the Zaqar
 team? What are the next steps?
 
 Hey,
 
 This will require cross-project efforts and we (the Zaqar team) would
 love to get some help here. If anyone is willing to drive the spec and
 sync, the Zaqar team would be happy to help with other tasks.
 
 I think it'd be really beneficial to start doing it in one of the
 services (Heat?) as a PoC, while we discuss the cross-project spec and
 feed it with the things we'll learn from the PoC.
 
 Would you like to help with this?

Indeed it will require tons of cross-project work. Heat is going to try
to get Zaqar notifications to our users in Liberty, and I've posted
specs for the message format side of the problem[1][2]. There is also a
cross-project spec for user notifications[3].

Heat plans on adopting this format after sufficient review, and we
welcome other projects to look over the format and provide feedback to
help it work for their use case as well.

[1]: https://review.openstack.org/#/c/186436/
[2]: https://review.openstack.org/#/c/186555/
[3]: https://review.openstack.org/#/c/185822/

 ...[snip]...

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Redis as Messaging System

2015-05-27 Thread Ryan Brown
On 05/27/2015 04:12 AM, Clint Byrum wrote:
 
 Excerpts from Ryan Brown's message of 2015-05-26 05:48:14 -0700:
 Zaqar provides an option to use Redis as a backend, and Zaqar provides
 pubsub messaging.

 
 Please please please do not mistake under-the-cloud messaging, which
 oslo.messaging is intended to facilitate, for user-facing messaging,
 which Zaqar is intended to facilitate.
 
 Under the cloud, you have one tenant, and simply communicating directly
 with Redis will suffice.

Ah, I see what I missed. I read "messaging for OpenStack" as "messaging
service for OpenStack tenants", not "messaging for OpenStack internally".

Good catch Clint,
Ryan

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar][all] Zaqar will stay... Lots of work ahead

2015-05-26 Thread Ryan Brown
On 05/26/2015 04:28 AM, Flavio Percoco wrote:
 Greetings,
 
 TL;DR: Thanks everyone for your feedback. Based on the discussed plans
 at the summit - I'll be writing more about these later - Zaqar will
 stick around and play its role in the community.

\o/

 [snip]
 ==
 
 Next Steps
 ==
 
 In light of the above, and as mentioned in the TL;DR, Zaqar will stick
 around and the team, as promised, will focus on making those
 integrations happen. The team is small, which means we'll carefully
 pick the tasks we'll be spending time on.
 
 As a first step, we should restore our meetings and get to work right
 away. To favor our contributors in NZ, next week's meeting will be at
 21:00 UTC and we'll keep it at that time for 2 weeks.

For those who didn't know what day the Zaqar meetings are normally, they
are on Mondays according to the OpenStack calendar (next meeting on June 1).

 For the Zaqar team (and folks interested), I'll be sending out further
 emails to sync on the work to do.
 
 Special thanks to all the folks that showed interest, participated in
 sessions, and are committed to making this happen.
 
 Let's now make it happen,
 Flavio

Thanks for the mail Flavio,
Ryan

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Redis as Messaging System

2015-05-26 Thread Ryan Brown
Zaqar provides an option to use Redis as a backend, and Zaqar provides
pubsub messaging.

On 05/23/2015 03:09 PM, Todd Crane wrote:
 Does anybody know of a way (either current projects or future plans)
 to use Redis PubSub as the messaging system for OpenStack?
 
 -Todd

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Proposing Michael McCune as an API Working Group core

2015-05-11 Thread Ryan Brown
On 05/11/2015 04:18 PM, Everett Toews wrote:
 I would like to propose Michael McCune (elmiko) as an API Working Group core.

Not core, but +1 from me!

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Ryan Brown
On 05/06/2015 02:07 PM, Jay Pipes wrote:
 Adding [api] topic. API WG members, please do comment.
 
 On 05/06/2015 08:01 AM, Sean Dague wrote:
 On 05/06/2015 07:11 AM, Chris Dent wrote:
 On Wed, 6 May 2015, Sean Dague wrote:

 All other client errors, just be a 400. And use the emerging error
 reporting json to actually tell the client what's going on.

 Please do not do this. Please use the 4xx codes as best as you
 possibly can. Yes, they don't always match, but there are several of
 them for reasons™ and it is usually possible to find one that sort
 of fits.

I agree with Jay here: there are only 100 error codes in the 400
namespace, and (way) more than 100 possible errors. The general 400 is
perfectly good as a catch-all where the user can be expected to read the
JSON error response for more information, and the other error codes
should be used to make it easier for folks to distinguish specific
conditions.

Let's take the 403 case. If you are denied with your credentials,
there's no error handling you can do that's going to fix that.

 Using just 400 is bad for a healthy HTTP ecosystem. Sure, for the
 most part people are talking to OpenStack through official clients
 but a) what happens when they aren't, b) is that the kind of world
 we want?

 I certainly don't. I want a world where the HTTP APIs that OpenStack
 and other services present actually use HTTP and allow a diversity
 of clients (machine and human).

Wanting other clients to be able to plug right in is why we try to be
RESTful and make error codes that are usable by any client (see the
error codes and messages specs). Using Conflict and Forbidden codes
in addition to good error messages will help, if they denote very
specific conditions that the user can act on.

 Absolutely. And the problem is there is not enough namespace in the HTTP
 error codes to accurately reflect the error conditions we hit. So the
 current model means the following:

 If you get any error code, it means multiple failure conditions. Throw
 it away, grep the return string to decide if you can recover.

 My proposal is to be *extremely* specific for the use of anything
 besides 400, so there is only 1 situation that causes that to arise. So
 403 means a thing, only one thing, ever. Not 2 kinds of things that you
 need to then figure out what you need to do.

Agreed

 If you get a 400, well, that's multiple kinds of errors, and you need to
 then go conditional.

 This should provide a better experience for all clients, human and
 machine.
 
 I agree with Sean on this one.
 
 Using response codes effectively makes it easier to write client code
 that is either simple or is able to use generic libraries effectively.

 Let's be honest: OpenStack doesn't have a great record of using HTTP
 effectively or correctly. Let's not make it worse.

 In the case of quota, 403 is fairly reasonable because you are in
 fact Forbidden from doing the thing you want to do. Yes, with the
 passage of time you may very well not be forbidden so the semantics
 are not strictly matching but it is more immediately expressive yet
 not quite as troubling as 409 (which has a more specific meaning).

 Except it's not, because you are saying to use 403 for 2 issues (Don't
 have permissions and Out of quota).

 Turns out, we have APIs for adjusting quotas, which your user might have
 access to. So part of 403 space is something you might be able to code
 yourself around, and part isn't. Which means you should always ignore it
 and write custom logic client side.

 Using something beyond 400 is *not* more expressive if it has more than
 one possible meaning. Then it's just muddy. My point is that all errors
 besides 400 should have *exactly* one cause, so they are specific.
 
 Yes, agreed.
 
 I think Sean makes an excellent point that if you have more than one
 condition that results in a 403 Forbidden, it actually does not make things
 more expressive. It actually just means both humans and clients need to now
 delve deeper into the error context to determine if this is something
 they actually don't have permission to do, or whether they've exceeded
 their quota but otherwise have permission to do some action.
 
 Best,
 -jay
 
 p.s. And, yes, Chris, I definitely do see your side of the coin on this.
 It's nuanced, and a grey area...
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Changing 403 Forbidden to 400 Bad Request for OverQuota was: [nova] Which error code should we return when OverQuota

2015-05-06 Thread Ryan Brown
 about it from the other direction: the
client asked for too much stuff (something wrong with the request) and
so a 400 seemed quite reasonable.

 4xx means client side error ("The 4xx (Client Error) class of status
 code indicates that the client seems to have erred."), so arguably
 over quota doesn't really work in _any_ 4xx because the client made no
 error, the service just has a quota lower than they need. We don't
 want to go down the non 4xx road at this time, so given our choices
 403 is the one that most says "the server won't have it". It doesn't
 really fit in with 4xx at all, just like over quota doesn't!
 -=-=-
 
 Moving away from that specific issue I think it important that we
 get our use of HTTP right. Whether we use 403 or 400 for quota is
 not going to break the world. Using 400 for nearly everything,
 however, may because it will make us seem like yet another system
 that couldn't be bothered to do things correctly and, you know,
 there are enough of those out there. Let's not be one of those.
 
 If we follow the global specs for HTTP (where possible) and not our
 own little derivations then we don't have to be responsible for
 creating and maintaining our own special rules, we just use the
 rules that exist.
 So yeah, my argument basically comes down to: There's a spec, let's
 follow it as much as possible.
 
 p.s. And, yes, Chris, I definitely do see your side of the coin on
 this. It's nuanced, and a grey area...
 
  (╯°□°)╯︵ ┻━┻
 
 (not really throwing a table, I just think it is fun to paste that
 rather evocative image)
 
 
 
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-04-30 Thread Ryan Brown
On 04/30/2015 01:54 PM, Clint Byrum wrote:
 * Go's import and build system is rather odd. Python's weird distribution
   issues are at least well known in the OpenStack community. This is
   actually the main reason I've never gravitated toward Go, as I feel
   it is trying to be magical rather than logical. I imagine there are
   others who are also nervous about that.

I don't think it tries to be magical, it tries to be developer
friendly by default, and lets your VCS handle versioning your
dependencies. It just so happens version control systems are great at that.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Ryan Brown
On 04/28/2015 12:42 PM, Doug Hellmann wrote:
 Excerpts from Jeremy Stanley's message of 2015-04-28 16:21:17 +:
 On 2015-04-28 16:30:21 +0100 (+0100), Chris Dent wrote:
 [...]
 What's important to avoid is the blog postings being only reporting of
 conclusions. They also need to be invitations to participate in the
 discussions. Yes, the mailing list, gerrit and meeting logs have some
 of the ongoing discussions but often, without a nudge, people won't
 know.
 [...]

 Perhaps better visibility for the meeting agenda would help? As in
 "these are the major topics we're planning to cover in the upcoming
 meeting, everyone is encouraged to attend" sort of messaging?
 Blogging that might be a bit obnoxious, not really sure (I'm one of
 those luddites who prefer mailing lists to blogs so tend not to
 follow the latter anyway).
 
 The agenda is managed in the wiki [1] and Thierry sends a copy to
 the openstack-tc mailing list every week. Maybe those should come
 here to this list, with the topic tag [tc], instead?
 
 Doug
 
 [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
 

I'd much prefer it on openstack-dev with the [tc] tag, I wasn't aware
the agenda was emailed out at all.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please stop reviewing code while asking questions

2015-04-24 Thread Ryan Brown
On 04/24/2015 03:00 PM, Julien Danjou wrote:
 I like that point and I agree with you. The problem, as someone already
 stated, is that these people are rarely on IRC and sometimes just never
 reply on the review. Right, maybe next time I'll chase them down via
 email. Sometimes I wish we were a little more conservative about who
 could do code review, but well.

I'm pretty heavily against limiting who can code review. There are some
less-than-helpful reviewers about, but putting up barriers is the wrong
way to go about fixing it.

Education is the way to go, and it's ok if there's some nominal level of
somewhat unhelpful reviews so long as, when possible, we try to teach
those reviewers how they can be more helpful.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API WG Meeting Time

2015-04-23 Thread Ryan Brown
On 04/23/2015 01:12 PM, michael mccune wrote:
 On 03/31/2015 10:13 PM, Everett Toews wrote:
 Ever since daylight savings time it has been increasing difficult for
 many API WG members to make it to the Thursday 00:00 UTC meeting time.

 Do we change it so there’s only the Thursday 16:00 UTC meeting time?
 
 this topic was brought up again at today's meeting (April 23, 2015), and
 we are still searching for a good alternate time.
 
 one proposal that came up was for an early morning PST time on thursdays
 for the meeting. i'm guessing this would be around the 12:00-13:00 UTC
 range.
 
 we would still love to hear more thoughts from folks in Australia/Asia
 time zones to see if we can arrive at something that will accommodate
 the most number of interested parties.
 
 regards,
 mike

I'd be more than happy with a 1200UTC-1300UTC meeting (I'm US-based).

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-21 Thread Ryan Brown
On 04/20/2015 07:12 PM, Steve Baker wrote:
 On 21/04/15 11:01, Angus Salkeld wrote:


 On Tue, Apr 21, 2015 at 8:38 AM, Fox, Kevin M <kevin@pnnl.gov> wrote:

 As an Op, a few things that come to mind in that category are:
  * RDO packaging (stated earlier). If it's not easy to install, it's
 not going to be deployed as much. I haven't installed it yet,
 because I haven't had time to do much other than yum install it...
  * Horizon UI
  * Heat Resources. (Some basic stuff like create/delete queue to
 go along with the stack. also link #1 below)


 Here you go:
 https://github.com/openstack/heat/tree/master/contrib/heat_zaqar
 One thing we need to do in Vancouver is come up with criteria for moving
 resources from contrib into the main tree. Previously this was whether
 the project was integrated. As a starter I would suggest something like:
 1. project is in the openstack git namespace
 2. the client library is synced with global-requirements.txt
 3. the resources are complete enough to be useful in template authoring
 We need to think about potential for integration testing in the gate too.

I think scenario/functional tests should be table stakes to graduate a
resource from contrib/.

  

 Horizon has a discovery aspect to it. If users don't know a
 service is available, it's hard for them to use it. Even with the
 most simple UI of Create/Delete/List queues, discovery is handled.

Absolutely agreed. Especially in a service like Zaqar where the vast
majority of usage isn't by humans in a web interface, the horizon bit
becomes mostly a dashboard/auditing/testing destination instead of a
primary interface.

 [snip]

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][pbr] getting 0.11 out the door. Maybe even 1.0

2015-04-20 Thread Ryan Brown
On 04/20/2015 08:22 AM, Victor Stinner wrote:
 I believe Redhat patch it out. I don't think they should need to,
 since we have explicit knobs for distros to use.
 
 pbr pulls pip which we don't want in RHEL. Example of patches in RDO:
 
 https://github.com/redhat-openstack/nova/commit/a19939c8f9a7b84b8a4d713fe3d26949e5664089
 https://github.com/redhat-openstack/python-keystoneclient/commit/e02d529a87aef8aaca0616c8ee81de224bf1f52a
 https://github.com/redhat-openstack/neutron/commit/85302b75362df30270383e3a373e60e81b1b2384
 (well, it's always the same change)
 
 Can't we enhance pbr to build (source/wheel) distributions of applications 
 which don't depend on pbr? Basically implement these patches in pbr?
 
 I read somewhere that pkg_resources may also be used to get the version.
 

You're absolutely correct, here's a quick snippet to get the version as
a string:

import pkg_resources
pkg_resources.get_distribution('nova').version

Where, of course, s/nova/any-package-name/

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Call for adoption (or exclusion?)

2015-04-20 Thread Ryan Brown
On 04/20/2015 02:22 PM, Michael Krotscheck wrote:
 What's the difference between openstack/zaqar and stackforge/cue?
 Looking at the projects, it seems like zaqar is a ground-up
 implementation of a queueing system, while cue is a provisioning api for
 queuing systems that could include zaqar, but could also include rabbit,
 zmq, etc...
 
 If my understanding of the projects is correct, the latter is far more
 versatile, and more in line with similar openstack approaches like
 trove. Is there a use case nuance I'm not aware of that warrants
 duplicating efforts? Because if not, one of the two should be retired
 and development focused on the other.
 
 Note: I do not have a horse in this race. I just feel it's strange that
 we're building a thing that can be provisioned by the other thing.
 

Well, with Trove you can provision databases, but the MagnetoDB project
still provides functionality that trove won't.


The Trove : MagnetoDB and Cue : Zaqar comparison fits well.

Trove provisions one instance of X (some database) per tenant, where
MagnetoDB is one instance (collection of hosts to do database things)
that serves many tenants.

Cue's goal is I have a not-very-multitenant message bus (rabbit, or
whatever) and makes that multitenant by provisioning one per tenant,
while Zaqar has a single install (of as many machines as needed) to
support messaging for all cloud tenants. This enables great stuff like
cross-tenant messaging, better physical resource utilization in
sparse-tenant cases, etc.

As someone who wants to adopt Zaqar, I'd really like to see it continue
as a project because it provides things other message broker approaches
don't.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-08 Thread Ryan Brown
On 04/07/2015 02:34 PM, Sandy Walsh wrote:
 
 Tooling in general seems to be moving towards richer event data as well.
 The logging tools (Loggly/Logstash/PaperTrail/zillions of others) are
 intended to take your unstructured logs and turn them into events, so
 why not have Heat output structured events that we can present to the
 user with Ceilometer rather than sending log lines (through syslog or
 otherwise) and using tooling to reassemble them into events later.
 
 The trend in the monitoring space seems to be:
 
 1. Alarms are issued from Metrics as Events. 
 (events can issue alarms too, but conventional alarming is metric based)
 2. Multiple events are analyzed to produce Metrics (stream processing)
 3. Go to Step 1
 

Indeed. I sort of envisioned heat sending out events that are then
consumed both as metrics and by the user (where appropriate). In
StackTach I can see that being implemented as

         /-- resource events --> other tools
Heat --> Winchester --- notifications stream --> user
         \-- metrics stream --> alerts --/


 TL;DR: I think what we really want is a place to send and react to
 *events*. Logs are a way to do that, of course, but the Ceilometer way
 sounds pretty attractive.
 
 The difference is structured vs. unstructured data. Elasticsearch-based
 solutions tend to ignore structure by looking at keywords. Some solutions,
 like TopLog, infer a structure by gleaning regexs from logs. 
 
 Events start as structured data. More so, we're looking at establishing 
 AVRO-based schema definitions on top of these events (slow progress).

Yeah, I'd really like to have a schema for Heat events so we can have a
single event stream and repackage events for different consumption goals
(metrics, notifications, programmatic interaction, etc).
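
Purely as illustration (not a proposed schema, just the kind of facets I
mean), an update event could carry something like:

event = {
    'event_type': 'orchestration.stack.update.end',  # illustrative type name
    'stack_id': 'c9f7-example',                      # made-up value
    'state': 'UPDATE_COMPLETE',
    'num_replaced_resources': 2,
    'num_deleted_resources': 0,
    'autoscaling_actions': [],
}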

 If anything we should consider changing the logging library to use structured 
 messages. Specifically:
 
 log("The %(foo)s did %(thing)s" % {'foo':'x', 'thing':'action'})
 it would be
 log({'message': "The %(foo)s did %(thing)s", 'foo':'x', 'thing':'action'})
 
 Which can still be formatted for conventional logs, but also sent as a
 notification or as a higher-level structure to things like ES, TopLog, etc.
 The driver can decide. 
 
 * CloudWatch has you send unstructured log messages, then build filters
 to parse them into quantifiable events, then set alarms on those metrics.
 
 Having to build filters is a relatively error-prone approach compared to the
 methods described above. 

I wasn't saying *we* should do the unstructured message + regex filters
strategy, I was just pointing out the CW solution for folks who hadn't
used it.

 [snip]
 
 The Fujitsu team have already added logging support to Monasca (with an 
 elasticsearch backend) and HP is currently adding StackTach.v3 support for
 notification-event conversion as well as our Winchester event stream 
 processing engine. Also, this is based on Kafka vs. RabbitMQ, which has better
 scaling characteristics for this kind of data.

Oooh, I'll have a look into that, Kafka as an event bus sounds like a
good fit. I have the same concern Angus voiced earlier about Zaqar
though. What's the deployment of StackTach.v3 across OpenStack
installations? Is it mostly deployed for Helion/Rackspace, or are
smaller deployers using it as well?

 
 This could be extended to richer JSON events that include the stack,
 resources affected in the update, stats like num-deleted-resources or
 num-replaced-resources, autoscaling actions, and info about stack errors.
 
 Some of these sound more like a metrics than notifications. We should be 
 careful not to misuse the two. 

I think they're events, and have facets that are quantifiable as metrics
(num-replaced-resources on an update action) and that should be
user-visible (update is complete, or autoscaling actions taken).

 Is there a way for users as-is to view those raw notifications, not just
 the indexed k/v pairs?
 
 In StackTach.v3 we ship the raw notifications to HDFS for archiving, but
 expose the reduced event via the API. The message-id links the two.
 
 Lots more here: http://www.stacktach.com

Thanks! I'll have to read up.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-08 Thread Ryan Brown
On 04/08/2015 09:12 AM, Flavio Percoco wrote:
 On 08/04/15 08:59 -0400, Doug Hellmann wrote:
 Excerpts from Robert Collins's message of 2015-04-07 10:43:30 +1200:
 On 7 April 2015 at 05:11, Joe Gordon joe.gord...@gmail.com wrote:
 
 
  On Mon, Apr 6, 2015 at 8:39 AM, Dolph Mathews
 dolph.math...@gmail.com
  wrote:
 
 
  On Mon, Apr 6, 2015 at 10:26 AM, Boris Pavlovic
 bo...@pavlovic.me wrote:
 
  Jay,
 
 
  Not far, IMHO. 100ms difference in startup time isn't something we
  should spend much time optimizing. There's bigger fish to fry.
 
 
  I agree that priority of this task shouldn't be critical or even
 high,
  and that there are other places that can be improved in OpenStack.
 
  In other hand this one is as well big source of UX issues that we
 have in
  OpenStack..
 
  For example:
 
   1) You would like to run some command X times where X is pretty big
   (admins like to do this via bash loops). If you can execute all of them
   in 1 minute and not 10, you will get a happier end user.
 
 
  +1 I'm fully in support of this effort. Shaving 100ms off the
 startup time
  of a frequently used library means that you'll save that 100ms
 over and
  over, adding up to a huge win.
 
 
 
  Another data point on how slow our libraries/CLIs can be:
 
  $ time openstack -h
  snip
  real0m2.491s
  user0m2.378s
  sys 0m0.111s


 pbr should be snappy - taking 100ms to get the version is wrong.

 I have always considered pbr a packaging/installation time tool, and not
 something that would be used at runtime. Why are we using pbr to get the
 version of an installed package, instead of asking pkg_resources?
 
 Just wanted to +1 the above.
 
 I've also considered pbr a packaging/install tool. Furthermore, I
 believe having it as a runtime requirement makes packagers life more
 complicated because that means pbr will obviously need to be added as
 a runtime requirement for that package.
 

RDO actually patches out calls to pbr to avoid the runtime requirement,
FWIW.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] how to send messages (and events) to our users

2015-04-07 Thread Ryan Brown
On 04/07/2015 10:10 AM, gordon chung wrote:
 
 - Specific user oriented log messages (distinct from our normal
 operator logs)
 - Deprecation messages (if they are using old resource
 properties/template features)
 - Progress and resource state changes (an application doesn't want to
 poll an api for a state change)
 - Automated actions (autoscaling events, time based actions)
 - Potentially integrated server logs (from in guest agents)
 

Angus mentions that we want user messages but I'd argue that an events
interface (not to be confused with our current stack events) would be a
great fit. We could define schemas (building on API-WG's error message
guide[1]) for heat events so users can programmatically interact with
heat w/o polling all the time.

I think moving slightly up the abstraction ladder from "here's a log
message" to "here's a structured event with extra metadata too" would be
great, and would do a better job helping users than unstructured log messages.
The difference between the AWS options is instructive here because the
older service (CloudWatch) is log-line oriented*, while the newer
service (CloudTrail) provides structured events.

Tooling in general seems to be moving towards richer event data as well.
The logging tools (Loggly/Logstash/PaperTrail/zillions of others) are
intended to take your unstructured logs and turn them into events, so
why not have Heat output structured events that we can present to the
user with Ceilometer rather than sending log lines (through syslog or
otherwise) and using tooling to reassemble them into events later.

TL;DR: I think what we really want is a place to send and react to
*events*. Logs are a way to do that, of course, but the Ceilometer way
sounds pretty attractive.

* CloudWatch has you send unstructured log messages, then build filters
to parse them into quantifiable events, then set alarms on those metrics.

 is the idea that Heat would build events from the logs or would you want
 to send the log messages to another service to be process? so for
 example, Nova doesn't send all logs messages to the queue but they do
 send a set of messages relating to certain actions and errors that occur
 (beyond just CRUD events). as the use cases above seem to target
 specific actions/logs and not all logs, i would think the processing
 could be done on the initiators service end and not on the consumer end.
 
 to give an example of what Ceilometer is capable of; Ceilometer
 currently takes JSON messages from the MQ from *most* services and from
 there we capture the entire raw notification and index on a select set
 of key-value pairs. i think it's entirely possible to take in non-json
 log messages and build an indexer around that if needed.
 

I don't think it would be too hard for us to package up events like
stack state transitions, failures (with as much debug info as is
reasonable), autoscaling actions, etc.

(please pardon any gross terminology-mangling, I'm not very familiar
with Ceilometer)

We already use notifications to send what I'd term sparse events that
only include the affected stack ID, meaning there's not much to slice on
in Ceilometer.

This could be extended to richer JSON events that include the stack,
resources affected in the update, stats like num-deleted-resources or
num-replaced-resources, autoscaling actions, and info about stack errors.

Is there a way for users as-is to view those raw notifications, not just
the indexed k/v pairs?

Thanks,
Ryan

[1]: https://review.openstack.org/#/c/167793/

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API WG Meeting Time

2015-04-01 Thread Ryan Brown
On 03/31/2015 10:13 PM, Everett Toews wrote:
 Ever since daylight savings time it has been increasing difficult for
 many API WG members to make it to the Thursday 00:00 UTC meeting
 time.
 
 Do we change it so there’s only the Thursday 16:00 UTC meeting time?
 
 On a related note, I can’t make it to tomorrow’s meeting. Can someone
 else please #startmeeting?
 
 Thanks, Everett

+1 for moving to only 16:00UTC

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-27 Thread Ryan Brown
On 03/27/2015 06:57 AM, Pavlo Shchelokovskyy wrote:
 Hi,
 
 On Thu, Mar 26, 2015 at 10:26 PM, Zane Bitter zbit...@redhat.com
 mailto:zbit...@redhat.com wrote:
 
 [snip]
 
 3) move the integration tests to a separate repo and use it as git
 submodule in the main tree. The main reasons not to do it as far
 as I've
 collected are not being able to provide code change and test in
 the same
 (or dependent) commits, and lesser reviewers' attention to a
 separate repo.
 
 
 -0
 
 I'm not sure what the advantage is here, and there are a bunch of
 downsides (basically, I agree with Ryan). Unfortunately I missed the
 IRC discussion, can you elaborate on how decoupling to this degree
 might help us?
 
 
 Presumably this could enable a more streamlined packaging and publishing
 of the test suite (e.g. to PyPI). But I agree, right now it is not really
 needed given the downsides, I just brought it up as an extreme
 separation case to collect more opinions.
 
 Given the feedback we have in the thread, I will proceed with the first
 point as this should have immediate benefit for the duration of the test
 job and already give help to those who want to package the test suite
 separately. Distutils stuff can be added later.
 
 Best regards, 
 Pavlo Shchelokovskyy

If we only do 1 and 2, not 3, we get all the benefits (separate package,
streamlined publishing, etc) without having to deal with the submodule
disadvantages I (and you) mentioned earlier.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is yaml-devel still needed for Devstack

2015-03-27 Thread Ryan Brown
On 03/27/2015 09:04 AM, Sean Dague wrote:
 On 03/27/2015 09:01 AM, Adam Young wrote:
 I recently got Devstack to run on RHEL.  In doing so, I had to hack
 around the dependency on yaml-devel (I just removed it from devstack's
 required packages)

 There is no yaml-devel in EPEL or the main repos for RHEL7.1/Centos7.

 Any idea what the right approach is to this moving forward?  Is this
 something that is going to bite us in RDO packaging?

 The dependency is a general one:
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/files/rpms/general#n25


 So I don't know what actually needs it.  I find it interesting that
 Fedora does not seem to have it, either, but  I've had no problem
 running devstack on Fedora 21.  Can we remove this dependency, or at
 least move it closer to where it is needed?
 
 pyyaml will use libyaml to C-accelerate yaml parsing. It's not strictly
 required, but there may be performance implications.

Since it's a soft requirement, should we patch devstack to reflect that?
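
(For anyone who wants to check whether their PyYAML actually picked up the
C acceleration, a quick test:)

try:
    # CSafeLoader only exists when PyYAML was built against libyaml.
    from yaml import CSafeLoader  # noqa
    print("libyaml acceleration available")
except ImportError:
    print("falling back to the pure-Python loader")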

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Decoupling Heat integration tests from Heat tree

2015-03-26 Thread Ryan Brown
On 03/26/2015 10:38 AM, Pavlo Shchelokovskyy wrote:
 Hi all,
 
 following IRC discussion here is a summary of what I propose can be
 done in this regard, in the order of increased decoupling:
 
 1) make a separate requirements.txt for integration tests and modify
 the tox job to use it. The code of these tests is pretty much
 decoupled already, not using any modules from the main heat tree. The
 actual dependencies are mostly api clients and test framework. Making
 this happen should decrease the time needed to setup the tox env and
 thus speed up the test run somewhat.

+1 for this

 2) provide separate distutils' setup.py/setup.cfg to ease packaging
 and installing this test suite to run it against an already deployed
 cloud (especially scenario tests seem to be valuable in this
 regard).

I quite like this idea; the value here is pretty apparent and in the
spirit of the separate requirements.txt.

 3) move the integration tests to a separate repo and use it as git 
 submodule in the main tree. The main reasons not to do it as far as
 I've collected are not being able to provide code change and test in
 the same (or dependent) commits, and lesser reviewers' attention to a
 separate repo.

It's also important for the local development workflow to have an up-to-date
version of the project's tests, and having them shuffled out to a
submodule would make it exceptionally easy to forget a "submodule pull"
and end up missing tests. This is, of course, in addition to your (all
valid) reasons for avoiding submodules.

 
 What do you think about it? Please share your comments.
 
 Best regards,
 
 Pavlo Shchelokovskyy Software Engineer Mirantis Inc www.mirantis.com
 http://www.mirantis.com

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] What do people think of YAPF (like gofmt, for python)?

2015-03-26 Thread Ryan Brown
On 03/26/2015 02:15 PM, Kiall Mac Innes wrote:
 I have no clue how I managed to send that last email encrypted -
 Apologies :)
 
 Re YAPF, or autoformatting, I've very little opinion..
 
 But - I gave YAPF a go against the Designate codebase with the stock config:
 
   258 files changed, 5242 insertions(+), 5691 deletions(-)
 
 Getting changes like that into the various projects won't be easy, even
 if the core team is happy to just +A without reviewing for potential
 issues, a massive percentage of in-progress reviews will fail to merge
 and need manual rebasing.
 
 For companies with internal forks/patches - those will likely all have
 to be redone too..

Ooof, that's huge. I love the *idea* of having everything formatted
consistently if we can configure it to be less aggressive, but a change that
size is a pretty major burden for everyone involved.
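
Something minimal like this in a .style.yapf might keep the churn down (I'd
have to double-check the exact option names against the YAPF docs):

  [style]
  based_on_style = pep8
  column_limit = 79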

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [API] Do we need to specify follow the HTTP RFCs?

2015-02-12 Thread Ryan Brown
On 02/12/2015 01:08 PM, Jay Pipes wrote:
 On 02/12/2015 01:01 PM, Chris Dent wrote:
 I meant to get to this in today's meeting[1] but we ran out of time
 and based on the rest of the conversation it was likely to lead to a
 spiral of different interpretations, so I thought I'd put it up here.

 $SUBJECT says it all: When writing guidelines to what extent do we
 think we should be recapitulating the HTTP RFCs and restating things
 said there in a form applicable to OpenStack APIs?

 For example should we say:

  Here are some guidelines, for all else please refer to RFCs
  7230-5.

 Or should we say something like:

  Here are some guidelines, including:

  If your API has a resource at /foo which responds to an authentic
  request with method GET but not with method POST, PUT, DELETE or
 PATCH
  then when an authentic request is made to /foo that is not a GET it
 must
  respond with a 405 and must include an Allow header listing the
  currently support methods.[2]

 I ask because I've been fleshing out my gabbi testing tool[3] by running
 it against a variety of APIs. Gabbi makes it very easy to write what I
 guess the officials call negative tests -- Throw some unexpected but
 well-
 formed input, see if there is a reasonable response -- just by making
 exploratory inquiries into the API and then traversing the discovered
 links
 with various methods and content types.

 What I've found is too often the response is not reasonable. Some of
 the problems arise from the frameworks being used, in other cases it
 is the implementing project.

 We can fix the existing stuff in a relatively straightforward but
 time consuming fashion: Use tools like gabbi to make more negative tests,
 fix the bugs as they come up. Same as it ever was.

 For new stuff, however, does there need to be increased awareness of
 the rules and is it the job of the working group to help that
 increasing along?
 
 I think it's definitely the role of the API WG to identify places in our
 API implementations that are not following the rules, yes.
 
 I think paraphrasing particular parts of RFCs would be my preference,
 along with examples of bad or incorrect usage.
 
 Best,
 -jay

+1. I think the way to go would be:

"We suggest (pretty please) that you comply with RFCs 7230-5, and if you have
any questions, ask us. Also, here are some examples of usage that is/isn't
RFC compliant, for clarity."
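
For instance, something along these lines for the 405 case Chris mentioned
(the resource path and error body are made up):

  DELETE /v2/widgets/1234 HTTP/1.1
  Host: api.example.org

  HTTP/1.1 405 Method Not Allowed
  Allow: GET, HEAD
  Content-Type: application/json

  {"error": {"code": 405, "message": "Method not allowed for this resource"}}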

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-12 Thread Ryan Brown
On 02/10/2015 08:01 AM, Everett Toews wrote:
 On Feb 9, 2015, at 9:28 PM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
 On 02/02/2015 02:51 PM, Stefano Maffulli wrote:
 On Fri, 2015-01-30 at 23:05 +, Everett Toews wrote:
 To converge the OpenStack APIs to a consistent and pragmatic RESTful
 design by creating guidelines that the projects should follow. The
 intent is not to create backwards incompatible changes in existing
 APIs, but to have new APIs and future versions of existing APIs
 converge.

 It's looking good already. I think it would be good also to mention the
 end-recipients of the consistent and pragmatic RESTful design so that
 whoever reads the mission is reminded why that's important. Something
 like:

 To improve developer experience converging the OpenStack API to
 a consistent and pragmatic RESTful design. The working group
 creates guidelines that all OpenStack projects should follow,
 avoids introducing backwards incompatible changes in existing
 APIs and promotes convergence of new APIs and future versions of
 existing APIs.

 After reading all the mails in this thread, I've decided that Stef's
 suggested mission statement above is the one I think best represents
 what we're trying to do.

 That said, I think it should begin To improve developer experience
 *by* converging ... :)
 
 +1 
 
 I think we could be even more explicit about the audience. 
 
 To improve developer experience *of API consumers by* converging the
 OpenStack API to a consistent and pragmatic RESTful design. The working
 group creates guidelines that all OpenStack projects should
 follow, avoids introducing backwards incompatible changes in
 existing APIs, and promotes convergence of new APIs and future versions
 of existing APIs.
 
 I’m not crazy about the term API consumer and could bike shed a bit on
 it. The problem being that alternative terms for API consumer have
 been taken in OpenStack land. “developer” is used for contributor
 developers building OpenStack itself, “user” is used for operators
 deploying OpenStack, and “end user” has too many meanings. “API
 consumer” makes it clear what side of the API the working group audience
 falls on.

I wouldn't mind "API user"; I think it conveys the intent but doesn't sound
as stilted as "API consumer".

 I also like dtroyer’s idea of a Tweetable mantra but I think we need to
 distill that mantra _from_ a longer mission statement. If we constrained
 the mission statement to = 140 chars at the outset, we’d be losing
 valuable information that’s vital in communicating our intent. And if we
 can’t fully communicate our intent in a mission statement then it
 doesn’t have as much value.
 
 Thanks,
 Everett
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][API] Filtering by metadata values

2015-02-12 Thread Ryan Brown
On 02/10/2015 10:03 PM, Angus Salkeld wrote:
 On Wed, Feb 11, 2015 at 8:20 AM, Miguel Grinberg
 miguel.grinb...@gmail.com mailto:miguel.grinb...@gmail.com wrote:
 
 Hi,
 
 We had a discussion yesterday on the Heat channel regarding patterns
 for searching or filtering entities by its metadata values. This is
 in relation to a feature that is currently being implemented in Heat
 called “Stack Tags”.
 
 The idea is that Heat stack lists can be filtered by these tags, so
 for example, any stacks that you don’t want to see you can tag as
 “hidden”, then when you request a stack list you can specify that
 you only want stacks that do not have the “hidden” tag.
 
 
 Some background: the author initially just asked for a "hidden" field, but
 it seemed like there were many more use cases that could be fulfilled by
 having generic tags on the stack REST resource. This is a really nice
 feature from a UI perspective.

Tagging would be incredibly useful for larger heat deployments.
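
For filtering, I'd picture something along these lines (the parameter names
are purely illustrative, whatever the API WG guideline settles on):

  GET /v1/{tenant_id}/stacks?tags=production,web
  GET /v1/{tenant_id}/stacks?not_tags=hidden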


 We were trying to find other similar usages of tags and/or metadata
 within OpenStack projects, where these are not only stored as data,
 but are also used in database queries for filtering. A quick search
 revealed nothing, which is surprising.
 
 
 I have spotted nova's instance tags that look like the kinda beast we
 are after:
   -  https://blueprints.launchpad.net/nova/+spec/tag-instances
   -  https://review.openstack.org/#/q/topic:bp/tag-instances,n,z
 
  
 
 
 Is there anything I may have missed? I would like to know if there
 anything even remotely similar, so that we don’t build a new wheel
 if one already exists for this.
 
 
 So we wanted to bring this up as there is a API WG and the concept of
 tags and filtering should be consistent
 and we don't want to run off and do something that the WG really doesn't
 like.
 

I'll bring up this thread in the API-WG meeting, which happens to be in
20 minutes (11 AM EST), in case anyone sees this soon enough to attend.

 But it looks like this needs a bit more fleshing out:
  
 http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#filtering
 
 Should we just follow nova's instance tags, given the lack of definition
 in api-wg?
 
 Regards
 Angus
 
 Thanks,
 
 Miguel
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [API WG] Environment status

2015-02-12 Thread Ryan Brown
On 02/12/2015 10:52 AM, Georgy Okrokvertskhov wrote:
 I think this is a good case for API WG as statuses of entities should be
 consistent among OpenStack APIs. As I recall, we are mixing two
 different statuses for environments. The first dimension for environment
 status is its content status: NEW, CONFIGURING, READY
 The second dimension is deployment status after Murano engine executes
 deployment action for this environment with statuses: NOT DEPLOYED,
 DEPLOYING, SUCCESS, FAIL
 
 Thanks
 Gosha
 
Two places we use these pretty extensively are Heat and Ceilometer.

Ceilometer: OK, INSUFFICIENT_DATA, ALARM

In Heat we have the notion of statuses and actions. An action is something
you do; a status is how the action has resolved.

Actions: INIT, CREATE, DELETE, UPDATE, ROLLBACK, SUSPEND, RESUME, ADOPT,
SNAPSHOT, CHECK

Status: IN_PROGRESS, FAILED, COMPLETE

This is presented to the user as a combined string (e.g. INIT_COMPLETE or
ROLLBACK_FAILED).

 On Thu, Feb 12, 2015 at 7:38 AM, Andrew Pashkin apash...@mirantis.com
 mailto:apash...@mirantis.com wrote:
 
 Initially my interest in this problem was caused by the bug [1]: once an
 environment was deployed with a failure, it stays forever with that status
 even if it has been deployed successfully later.
 
 For now status determination happens in three stages:
 
 - If at least one of all sessions of env, regardless of version, is in
   DEPLOYING/DELETING or DEPLOY_FAILURE/DELETE_FAILURE states - state
 of that
   session taken as state of environment. Sessions prioritized by
 version and
   midification date.
 - If there is no such sessions, but  at least one is in OPENED state -
   environment state will be PENDING.
 - Otherwise environment will have READY status.
 
 Accordingly to that - once session was deployed with failure - it stays
 in that
 status even if it was deployed successfully later.
 
 In UI statuses are taken directly from API except another calculated
 status
 NEW. Environment matches status NEW if it has version = 0 and it has
 apps with
 status 'pending' [2].
 
 After discussion inside Mirantis Murano team we came to these thoughts:
 - We need to remove statuses that does not related to deployment
 (PENDING).
 - Introduce NEVER_DEPLOYED (or NEW) status.
 - Change READY to DEPLOYED.
 - Possibly we need to keep state in Environment table in DB and no
 calculate it
   queriyng session every time.
 
 The PENDING status was needed to indicate that another user is doing
 something with the environment. But it was decided that this information
 must be placed somewhere else, so as not to conflict with the deployment
 status. Another proposal was to additionally show whether the environment
 has some dirty/not-deployed changes in it.
 
 First of all let's discuss quick fix of the alghorithm of
 Environment status
 matching [3]. I propose to take status of last modified session as
 status of an
 environment.
 
 At second let's discuss overall situation and more extensive changes
 that we
 might want introduce.
 
 
 [1] https://bugs.launchpad.net/murano/+bug/1413260
 [2]
 
 https://github.com/stackforge/murano-dashboard/blob/master/muranodashboard/environments/api.py#L140-140
 [3]
 
 https://github.com/stackforge/murano/blob/017d25f49e60e18365a50756f524e15f8d4fa78a/murano/db/services/environments.py#L62
 
 --
 With kind regards, Andrew Pashkin.
 cell phone - +7 (985) 898 57 59 tel:%2B7%20%28985%29%20898%2057%2059
 Skype - waves_in_fluids
 e-mail - apash...@mirantis.com mailto:apash...@mirantis.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com http://www.mirantis.com/
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack] [Heat][convergence] Formal model for SyncPoints

2015-02-11 Thread Ryan Brown
Oops, sent this to openstack@ not openstack-dev@.

On 02/10/2015 04:52 PM, Ryan Brown wrote:
 Heat team,
 
 After looking at Zane's prototype[1] and the ML threads on SyncPoint, I
 thought it'd be helpful to make a model of *just* the logic around
 resource locking. Hopefully, it'll help during implementation to be able
 to see just the logic without the stubbed-out data store etc from the
 prototype.
 
 The model[2] is the absolute minimum I could implement and still capture
 what syncpoints are supposed to do. A stack is still a collection of
 resources, but resources themselves are reduced to just their IDs. Locks
 are still at the resource level and dependency trees are followed as in
 Zane's prototype.
 
 Modeling it has confirmed the things we've already worked out about
 SyncPoints and sharing workload across workers. Resource-level locking
 is the best granularity for workers, and there isn't a case the model
 checker was able to generate that breaks the resource dependency order
 or ends with multiple holders of a given SyncPoint.
 
 It also confirms that the architecture we have is going to have serious
 database overhead for managing these locks. We've been over that, and
 there isn't much of a way around it without adding some other shared
 synchronization system.
 
 If you'll be implementing convergence, please take a read over the PDF
 version[3] and feedback/questions are (of course) welcome.
 
 Cheers,
 Ryan
 
 [1]: https://github.com/zaneb/heat-convergence-prototype/tree/resumable
 [2]: https://github.com/ryansb/heat-tla-model
 [3]: https://github.com/ryansb/heat-tla-model/blob/master/Heat.pdf
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The API WG mission statement

2015-02-02 Thread Ryan Brown
On 01/30/2015 06:18 PM, Dean Troyer wrote:
 On Fri, Jan 30, 2015 at 4:57 PM, Everett Toews
 everett.to...@rackspace.com mailto:everett.to...@rackspace.com wrote:
 
 What is the API WG mission statement?
 
 
 It's more of a mantra than a Mission Statement(TM):
 
 Identify existing and future best practices in OpenStack REST APIs to
 enable new and existing projects to evolve and converge.
 

Identify existing and future pragmatic ideals in OpenStack REST APIs to
enable new and existing projects to evolve and converge.

I like it, but I'd like to get "pragmatic" in there somewhere. Just to be
clear, we aren't just looking for pie-in-the-sky ideals, but for ones that
can apply now and in the future.

 Tweetable, 126 chars!
 
 Plus, buzzword-bingo-compatibile, would score 5 in my old corporate
 buzzwordlist...
 
 dt
 
 (Can you tell my flight has been delayed? ;)
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com mailto:dtro...@gmail.com
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Next meeting agenda

2015-01-29 Thread Ryan Brown
On 01/29/2015 10:20 AM, Sean Dague wrote:
 On 01/29/2015 07:17 AM, Anne Gentle wrote:


 On Thu, Jan 29, 2015 at 4:10 AM, Thierry Carrez thie...@openstack.org
 mailto:thie...@openstack.org wrote:

 Everett Toews wrote:
  A couple of important topics came up as a result of attending the
  Cross Project Meeting. I’ve added both to the agenda for the next
  meeting on Thursday 2015/01/29 at 16:00 UTC.
 
  https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
 
  The first is the suggestion from ttx to consider using
  openstack-specs [1] for the API guidelines.

 Precision: my suggestion was to keep the api-wg repository for the
 various drafting stages, and move to openstack-specs when it's ready to
 be recommended and request wider community comments. Think Draft and
 RFC stages in the IETF process :)


 Oh, thanks for clarifying, I hadn't understood it that way. 

 To me, it seems more efficient to get votes and iterate in one repo
 rather than going through two iterations and two review groups. What do
 others think?
 Thanks,
 Anne
 
 Honestly, I'm more a fan of the one repository approach. Jumping
 repositories means the history gets lost, and you have to restart a
 bunch of conversations. That happened in the logging jump from nova -
 openstack-specs, which probably created 3 - 4 months additional delay in
 the process.
 
   -Sean
 

+1 for a single repo; commit history isn't as important for specs as
unifying the discussion around them.

I think whether a spec is merged or unmerged is a good enough indicator of
its draft status, and that wouldn't be improved by having a draft/final repo
split.

___
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Where to keep data about stack breakpoints?

2015-01-12 Thread Ryan Brown
On 01/12/2015 10:29 AM, Tomas Sedovic wrote:
 Hey folks,
 
 I did a quick proof of concept for a part of the Stack Breakpoint
 spec[1] and I put the does this resource have a breakpoint flag into
 the metadata of the resource:
 
 https://review.openstack.org/#/c/146123/
 
 I'm not sure where this info really belongs, though. It does sound like
 metadata to me (plus we don't have to change the database schema that
 way), but can we use it for breakpoints etc., too? Or is metadata
 strictly for Heat users and not for engine-specific stuff?

I'd rather not store it in metadata so we don't mix user metadata with
implementation-specific-and-also-subject-to-change runtime metadata. I
think this is a big enough feature to warrant a schema update (and I
can't think of another place I'd want to put the breakpoint info).

 I also had a chat with Steve Hardy and he suggested adding a STOPPED
 state to the stack (this isn't in the spec). While not strictly
 necessary to implement the spec, this would help people figure out that
 the stack has reached a breakpoint instead of just waiting on a resource
 that takes a long time to finish (the heat-engine log and event-list
 still show that a breakpoint was reached but I'd like to have it in
 stack-list and resource-list, too).
 
 It makes more sense to me to call it PAUSED (we're not completely
 stopping the stack creation after all, just pausing it for a bit), I'll
 let Steve explain why that's not the right choice :-).

+1 to PAUSED. To me, STOPPED implies an end state (which a breakpoint is
not).

For sublime end user confusion, we could use BROKEN. ;)

 Tomas
 
 [1]:
 http://specs.openstack.org/openstack/heat-specs/specs/juno/stack-breakpoint.html
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation

2014-12-12 Thread Ryan Brown



-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Nailgun] Web framework

2014-12-02 Thread Ryan Brown
On 12/02/2014 09:55 AM, Igor Kalnitsky wrote:
 Hi, Sebastian,
 
 Thank you for raising this topic again.
 
 [snip]
 
 Personally, I'd like to use Flask instead of Pecan, because first one
 is more production-ready tool and I like its design. But I believe
 this should be resolved by voting.
 
 Thanks,
 Igor
 
 On Tue, Dec 2, 2014 at 4:19 PM, Sebastian Kalinowski
 skalinow...@mirantis.com wrote:
 Hi all,

 [snip explanation+history]

 Best,
 Sebastian

Given that Pecan is used for other OpenStack projects and has plenty of
builtin functionality (REST support, sessions, etc) I'd prefer it for a
number of reasons.

1) Wouldn't have to pull in plugins for standard (in Pecan) things
2) Pecan is built for high-traffic services, whereas Flask is aimed at much
smaller projects
3) Already used by other OpenStack projects, so common patterns can be
reused as oslo libs

Of course, the Flask community seems larger (though the average Flask
project seems pretty small).

I'm not sure what determines "production readiness", but it seems to me
that Fuel developers fall more into Pecan's target audience than Flask's.
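
For a feel of the built-in REST support, a minimal Pecan controller looks
roughly like this (the resource names are made up, not actual Nailgun code):

  from pecan import expose
  from pecan.rest import RestController

  class NodesController(RestController):

      @expose('json')
      def get_all(self):
          # a real handler would query the Nailgun DB here
          return {'nodes': []}

      @expose('json')
      def get_one(self, node_id):
          return {'id': node_id}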

My $0.02,
Ryan

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Using Job Queues for timeout ops

2014-11-13 Thread Ryan Brown
On 11/13/2014 09:58 AM, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-11-13 05:54:03 -0800:
 On 13/11/14 03:29, Murugan, Visnusaran wrote:

 [snip]

 3.Migrate heat to use TaskFlow. (Too many code change)

 If it's just handling timed triggers (maybe this is closer to #2) and 
 not migrating the whole code base, then I don't see why it would be a 
 big change (or even a change at all - it's basically new functionality). 
 I'm not sure if TaskFlow has something like this already. If not we 
 could also look at what Mistral is doing with timed tasks and see if we 
 could spin some of it out into an Oslo library.

 
 I feel like it boils down to something running periodically checking for
 scheduled tasks that are due to run but have not run yet. I wonder if we
 can actually look at Ironic for how they do this, because Ironic polls
 power state of machines constantly, and uses a hash ring to make sure
 only one conductor is polling any one machine at a time. If we broke
 stacks up into a hash ring like that for the purpose of singleton tasks
 like timeout checking, that might work out nicely.

+1

Using a hash ring is a great way to shard tasks. I think the most
sensible way to add this would be to make timeout polling a
responsibility of the Observer instead of the engine.
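
Very roughly, the sharding idea looks like this (nothing like Ironic's actual
implementation, just a sketch):

  import bisect
  import hashlib

  def _hash(key):
      return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

  class HashRing(object):
      def __init__(self, workers, replicas=32):
          # each worker gets several points on the ring to smooth distribution
          self._ring = sorted((_hash('%s-%d' % (w, i)), w)
                              for w in workers for i in range(replicas))
          self._keys = [k for k, _ in self._ring]

      def worker_for(self, stack_id):
          # a stack maps to the next point on the ring, wrapping around
          idx = bisect.bisect(self._keys, _hash(stack_id)) % len(self._ring)
          return self._ring[idx][1]

  ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
  print(ring.worker_for('some-stack-uuid'))  # only that worker polls this stack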

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-12 Thread Ryan Brown


On 11/12/2014 07:50 AM, Zane Bitter wrote:
 On 12/11/14 06:48, Angus Salkeld wrote:
 (it's nice that there would be a consistent user experience
 between these projects -mistral/heat/murano).
 
 It's actually potentially horrible, because you introduce potential
 quoting issues when you embed mistral workbooks in Heat templates or
 pass Heat templates to Murano.

I think the intention here was to say there are two paths forward.

1) Add an intrinsic for first_nonnull, and every other task.
2) Add YAQL to let users build their own expressions.

And that (1) involves users learning the Heat way of munging their data
with intrinsics, while (2) encourages them to learn YAQL, which they can
use across Heat, Mistral, and Murano. This would be a better experience
since users would (in theory) spend less time looking up the domain
language for task X.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-11 Thread Ryan Brown
On 11/11/2014 01:12 PM, Clint Byrum wrote:
 Excerpts from Alexis Lee's message of 2014-11-10 09:34:13 -0800:
 Zane Bitter said on Fri, Nov 07, 2014 at 12:35:09AM +0100:
 Crazy thought: why not just implement conditionals? We had a
 proto-spec for them started at one point...

 I didn't know that was on the table :)

 How about we support YAQL expressions? https://github.com/ativelkov/yaql
 Plus some HOFs (higher-order functions) like cond, map, filter, foldleft
 etc?

 Here's first_nonnull:

   config:
 Fn::Select
   - 0
   filter:
 - yaql: $.0 != null
 - item1
 - itemN

 Making the 'yaql' function eponymous means we can easily plug other
 expression languages in later if we choose.

Adding yaql (or similar, e.g. jsonpointer) seems like it would be a good
idea, and shouldn't add more complexity than it's worth.

 Cool. I think this aligns nicely with my suggestion at the summit that
 we also allow writing functions in javascript. This would allow people
 to write their own first_nonnull/coalesce until we have a chance to add
 it:
 
   config:
 embed:
   lang: javascript
   args: - item1
 - item2
   code: |
 for (arg in args) {
   if (arg !== null) {
 return arg
   }
 }
 return null;
 
 I'd really love to have this functionality but I doubt I'll have time to
 spec and land it. Does anybody else think this functionality would be a
 good way to allow template authors flexibility that they desire?

I am strongly against allowing arbitrary JavaScript functions, for
complexity reasons. It's already difficult enough to get meaningful errors
when you mess up your YAML syntax.

AIUI, the functionality many users would want JavaScript embedding for would
be better served either by something like yaql or by making vendored HOT
functions possible.

Vendored HOT functions would look something like X-Vendor::YourNamespace, and
could be managed similarly to resource plugins (via stevedore). It's a very
rough idea, but I like it much better than adding JavaScript.
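
As a purely hypothetical sketch (the entry point group doesn't exist today,
and reusing heat.engine.function.Function as the base class is just an
assumption on my part):

  # setup.cfg of the vendor package (hypothetical entry point group):
  # [entry_points]
  # heat.template.functions =
  #     X-Vendor::Acme::FirstNonNull = acme_heat.functions:FirstNonNull

  from heat.engine import function

  class FirstNonNull(function.Function):
      def result(self):
          # return the first resolved argument that isn't null
          for arg in function.resolve(self.args):
              if arg is not None:
                  return arg
          return None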

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] image requirements for Heat software config

2014-10-14 Thread Ryan Brown
inline responses

On 10/14/2014 01:13 PM, Thomas Spatzier wrote:
 
 Hi all,
 
 I have been experimenting a lot with Heat software config to  check out
 what works today, and to think about potential next steps.
 I've also worked on an internal project where we are leveraging software
 config as of the Icehouse release.
 
 I think what we can do now from a user's perspective in a HOT template is
 really nice and resonates well also with customers I've talked to.
 One of the points where we are constantly having issues, and also got some
 push back from customers, are the requirements on the in-instance tools and
 the process of building base images.
 One observation is that building a base image with all the right stuff
 inside sometimes is a brittle process; the other point is that a lot of
 customers do not like a lot of requirements on their base images. They want
 to maintain one set of corporate base images, with as little modification
 on top as possible.
 
 Regarding the process of building base images, the currently documented way
 [1] of using diskimage-builder turns out to be a bit unstable sometimes.
 Not because diskimage-builder is unstable, but probably because it pulls in
 components from a couple of sources:
 #1 we have a dependency on implementation of the Heat engine of course (So
 this is not pulled in to the image building process, but the dependency is
 there)
 #2 we depend on features in python-heatclient (and other python-* clients)
 #3 we pull in implementation from the heat-templates repo
 #4 we depend on tripleo-image-elements
 #5 we depend on os-collect-config, os-refresh-config and os-apply-config
 #6 we depend on diskimage-builder itself
 
 Heat itself and python-heatclient are reasonably well in synch because
 there is a release process for both, so we can tell users with some
 certainty that a feature will work with release X of OpenStack and Heat and
 version x.z.y of python-heatclient. For the other 4 sources, success
 sometimes depends on the time of day when you try to build an image
 (depending on what changes are currently included in each repo). So
 basically there does not seem to be a consolidated release process across
 all that is currently needed for software config.
 
 The ideal solution would be to have one self-contained package that is easy
 to install on various distributions (an rpm, deb, MSI ...).

It would be simple enough to make an RPM metapackage that just installs the
deps. The definition of "self-contained" I'm using here is "one install
command", not "ships its own vendored Python and every module".
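
Something as small as this would do it (the package names are illustrative,
not a spec we actually ship):

  Name:      heat-software-config-agents
  Version:   0.1
  Release:   1%{?dist}
  Summary:   Meta-package pulling in the in-instance agents for Heat software config
  License:   ASL 2.0
  BuildArch: noarch
  Requires:  os-collect-config os-refresh-config os-apply-config
  Requires:  python-heatclient cloud-init

  %description
  Installs the agents used by Heat software deployments; contains no files itself.

  %files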

 Secondly, it would be ideal to not have to bake additional things into the
 image but doing bootstrapping during instance creation based on an existing
 cloud-init enabled image. For that we would have to strip requirements down
 to a bare minimum required for software config. One thing that comes to my
 mind is the cirros software config example [2] that Steven Hardy created.
 It is admittedly no up to what one could do with an image built according
 to [1] but on the other hand is really slick, whereas [1] installs a whole
 set of things into the image (some of which do not really seem to be needed
 for software config).

I like this option much better, actually. I doubt many deployers would have
complaints, since cloud-init is pretty much standard. The downside is that it
wouldn't be all that feasible to include bootstrap scripts for every
platform.

Maybe it would be enough to be able to bootstrap one or two popular distros
(Ubuntu, Fedora, CentOS, etc.) and accept patches for other platforms.
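
e.g. a minimal #cloud-config that bootstraps the agents at boot instead of
baking them in (the package names and flags are illustrative and
distro-dependent):

  #cloud-config
  packages:
    - python-pip
  runcmd:
    - pip install os-collect-config os-refresh-config os-apply-config
    - os-collect-config --one-time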

 
 Another issue that comes to mind: what about operating systems not
 supported by diskimage-builder (Windows), or other hypervisor platforms?
 
 Any, not really suggestions from my side but more observations and
 thoughts. I wanted to share those and raise some discussion on possible
 options.
 
 Regards,
 Thomas
 
 [1]
 https://github.com/openstack/heat-templates/blob/master/hot/software-config/elements/README.rst
 [2]
 https://github.com/openstack/heat-templates/tree/master/hot/software-config/example-templates/cirros-example
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-16 Thread Ryan Brown
 into all the 
 other documentation that gets built, and I do understand your point in 
 that context. But versioning the standard library of plugins as if it 
 were a monolithic, always-available thing seems wrong to me.

 
 Yeah I think it is too optimistic in retrospect.
 
 We do the same thing with HOT's internals, so why not also
 do the standard library this way?

 The current process for HOT is for every OpenStack development cycle 
 (Juno is the first to use this) to give it a 'version' string that is 
 the expected date of the next release (in the future), and continuous 
 deployers who use the new one before that date are on their own (i.e. 
 it's not considered stable). So not really comparable.

 
 I think there's a difference between a CD operator making it available,
 and saying they support it. Just like a new API version in OpenStack, it
 may be there, but they may communicate to users it is alpha until after
 it gets released upstream. I think that is the same for this, and so I
 think that using the version number is probably fine.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-16 Thread Ryan Brown
On 09/16/2014 09:49 AM, Ryan Brown wrote:
 
 (From Zane's other message)
 snip
 I think the first supported release is probably the right information
 to add.
 snip
 
 I feel like for anything with nonzero upgrade effort (and upgrading your
 openstack install takes significantly more than 0 effort units) you can
 never assume everyone is running the latest (or even a recent) revision.
 That's why projects often host docs versions *way* back.
 
 The SQLAlchemy project hosts docs back to 2012[1] and also has latest[2]
 docs that are updated continuously. I think the way to support the most
 use cases would be to have docs for each release as well as continue to
 have CI update docs.
 
 For a URL structure I could see docs.o.o/developer/heat/latest and
 d.o.o/heat/VER where VER can be either a semver release (2014.2,
 etc) or a release name (icehouse, havana, etc). The strategy SQLA and
 other projects use is to feature a release date prominently at the top
 of the page, so users can look and say Oh, Juno isn't released yet, so
 this feature won't be in my Icehouse cloud.
 
 [1] http://docs.sqlalchemy.org/en/rel_0_6/core/index.html
 [2] http://docs.sqlalchemy.org/en/latest/core/index.html
 
 

Also, most projects that use readthedocs.org have a dropdown on every docs
page that links to that page at different releases. I think it would greatly
improve the discoverability of documentation for prior releases.

Sorry for doubling up messages,
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Bringing back auto-abandon

2014-09-11 Thread Ryan Brown
On 09/10/2014 06:32 PM, James E. Blair wrote:
 James Polley j...@jamezpolley.com writes:
 Incidentally, that is the query in the Wayward Changes section of the
 Review Inbox dashboard (thanks Sean!); for nova, you can see it here:
 
   
 https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:review-inbox-dashboard
 
 The key here is that there are a lot of changes in a lot of different
 states, and one query isn't going to do everything that everyone wants
 it to do.  Gerrit has a _very_ powerful query language that can actually
 help us make sense of all the changes we have in our system without
 externalizing the cost of that onto contributors in the form of
 forced-abandoning of changes.  Dashboards can help us share the
 knowledge of how to get the most out of it.
 
   https://review.openstack.org/Documentation/user-dashboards.html
   https://review.openstack.org/Documentation/user-search.html
 
 -Jim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Also, if you don't feel the existing dashboards scratch your project's
particular itch, there's always gerrit-dash-creator[1] to help you make one
that fits your needs.

[1]: https://github.com/stackforge/gerrit-dash-creator

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-03 Thread Ryan Brown
On 09/03/2014 09:35 AM, Sylvain Bauza wrote:
 Re: ZNC as a service, I think it's OK provided the implementation is
 open-sourced with openstack-infra repo group, as for Gerrit, Zuul and
 others.
 The only problem I can see is how to provide IRC credentials to this, as
 I don't want to share my creds up to the service.
 
 -Sylvain
There are more than just adoption (user trust) problems. An Open Source
implementation wouldn't solve the liability concerns, because users
would still have logs of their (potentially sensitive) credentials and
conversations on servers run by OpenStack Infra.

This is different from Gerrit/Zuul etc which just display code/changes
and run/display tests on those public items. There isn't anything
sensitive to be leaked there. Storing credentials and private messages
is a different story, and would require much more security work than
just storing code and test results.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] znc as a service (was Re: [nova] Is the BP approval process broken?)

2014-09-02 Thread Ryan Brown
On 09/02/2014 02:50 PM, Stefano Maffulli wrote:
 On 08/29/2014 11:17 AM, John Garbutt wrote:
 After moving to use ZNC, I find IRC works much better for me now, but
 I am still learning really.
 
 There! this sentence has two very important points worth highlighting:
 
 1- when people say IRC they mean IRC + a hack to overcome its limitation
 2- IRC+znc is complex, not many people are used to it
 
 I never used znc, refused to install, secure and maintain yet another
 public facing service. For me IRC is: be there when it happens or read
 the logs on eavesdrop, if needed.
 
 Recently I found out that there are znc services out there that could
 make things simpler but they're not easy to join (at least the couple I
 looked at).
 
 Would it make sense to offer znc as a service within the openstack project?
 

I would worry a lot about privacy/liability if OpenStack were to provide
ZNCaaS. Not being on infra I can't speak definitively, but I know I
wouldn't be especially excited about hosting  securing folks' private
data.

Eavesdrop just records public meetings, and the logs are 100% public so
no privacy headaches. Many folks using OpenStack's ZNCaaS would be in
other channels (or at least would receive private messages) and
OpenStack probably shouldn't take responsibility for keeping all those safe.

Just my 0.02 USD.
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Ryan Brown
Swift does have some guarantees around read-after-write consistency, but for
Heat I think the best bet would be the X-Newest[1] header, which has been in
Swift for a very, very long time. The downside is that (IIUC) it queries all
storage nodes for the object. It does not provide a hard guarantee[2], but it
does at least try *harder* to return the most recent version.

We could also (assuming it was turned on) use object versioning to
ensure that the most up to date version of the metadata was used, but I
think X-Newest is the way to go.

[1]: https://lists.launchpad.net/openstack/msg06846.html
[2]:
https://ask.openstack.org/en/question/26403/does-x-newest-apply-to-getting-container-lists-and-object-lists-also-dlo/
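
e.g. (the account/container/object names here are made up):

  curl -H "X-Auth-Token: $TOKEN" -H "X-Newest: true" \
       https://swift.example.com/v1/AUTH_tenant/heat-metadata/deployments.json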

On 08/27/2014 11:41 AM, Zane Bitter wrote:
 On 27/08/14 11:04, Steven Hardy wrote:
 On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
 I am little bit skeptical about using Swift for this use case
 because of
 its eventual consistency issue. I am not sure Swift cluster is
 good to be
 used for this kind of problem. Please note that Swift cluster may
 give you
 old data at some point of time.

 This is probably not a major problem, but it's certainly worth
 considering.

 My assumption is that the latency of making the replicas consistent
 will be
 small relative to the timeout for things like SoftwareDeployments, so all
 we need is to ensure that instances  eventually get the new data, act on
 
 That part is fine, but if they get the new data and then later get the
 old data back again... that would not be so good.
 
 it, and send a signal back to Heat (again, Heat eventually getting it via
 Swift will be OK provided the replication delay is small relative to the
 stack timeout, which defaults to one hour)

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-27 Thread Ryan Brown


On 08/27/2014 12:15 PM, Kurt Griffiths wrote:
 On 8/25/14, 9:50 AM, Ryan Brown rybr...@redhat.com wrote:
 
 I'm actually quite partial to roles because, in my experience, service
 accounts rarely have their credentials rotated more than once per eon.
 Having the ability to let instances grab tokens would certainly help
 Heat, especially if we start using Zaqar (the artist formerly known as
 marconi).

 
 According to AWS docs, IAM Roles allow you to Define which API actions
 and resources the application can use after assuming the role.” What would

Optimally, you'd be able (as a user) to generate tokens with subsets of your
permissions (e.g. if you're an admin, you can create non-admin
tokens/tempurls).

Implementing this seems (from where I'm sitting) like it would take a lot of
help from the Keystone team.

 it take to implement this in OpenStack? Currently, Keystone roles seem to
 be more oriented toward cloud operators, not end users. This quote from
 the Keystone docs[1] is telling:
 
 If you wish to restrict users from performing operations in, say,
 the Compute service, you need to create a role in the Identity
 Service and then modify /etc/nova/policy.json so that this role is
 required for Compute operations.

I wasn't aware that this was how role permissions worked. Thank you for
including that info.
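
For my own notes, a rule in /etc/nova/policy.json would look roughly like
this (the action and role names are illustrative):

  {
      "compute:start": "role:operator",
      "compute:stop": "role:operator"
  }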

 
 On 8/25/14, 9:49 AM, Zane Bitter zbit...@redhat.com wrote:
 
 In particular, even if a service like Zaqar or Heat implements their own
 authorisation (e.g. the user creating a Zaqar queue supplies lists of
 the accounts that are allowed to read or write to it, respectively), how
 does the user ensure that the service accounts they create will not have
 access to other OpenStack APIs? IIRC the default policy.json files
 supplied by the various projects allow non-admin operations from any
 account with a role in the project.

 
 It seems like end users need to be able to define custom roles and
 policies.
 
 Some example use cases for the sake of discussion:
 
 1. App developer sends a request to Zaqar to create a queue named
“customer-orders
 2. Zaqar creates a queue named customer-orders
 3. App developer sends a request to Keystone to create a role, role-x,
for App Component X
 4. Keystone creates role-x
 5. App developer sends requests to Keystone to create a service user,
“user-x” and associate it with role-x
 6. Keystone creates user-x and gives it role-x
 7. App developer sends a request to Zaqar to create a policy,
“customer-orders-observer”, and associate that policy with role-x. The
policy only allows GETing (listing) messages from the customer-orders
queue
 8. Zaqar creates customer-orders-observer and notes that it is associated
with role-x
 
 Later on...
 
 1. App Component X sends a request to Zaqar, including an auth token
 2. Zaqar sends a request to Keystone asking for roles associated with the
given token
 3. Keystone returns one or more roles, including role-x
 4. Zaqar checks for any user-defined policies associated with the roles,
including role-x, and finds customer-orders-observer
 5. Zaqar verifies that the requested operation is allowed according to
customer-orders-observer
 
 We should also compare and contrast this with signed URLs ala Swift’s
 tempurl. For example, service accounts do not have to be created or
 managed in the case of tempurl.

Perhaps there would be a way to have a more generic (Keystone-wide) version
of similar functionality. Even if there weren't any scoping support, it would
still be exceptionally useful.

This is starting to sound like it's worth drafting a blueprint for, or at
least looking through existing BPs to see if there's something that fits.

 
 --Kurt
 
 [1]: http://goo.gl/5UBMwR [http://docs.openstack.org]
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-25 Thread Ryan Brown


On 08/22/2014 05:35 PM, Zane Bitter wrote:

 On AWS the very first thing a user does is create a bunch of IAM
 accounts so that they virtually never have to use the credentials
 associated with their natural person ever again. There are both user
 accounts and service accounts - the latter IIUC have
 automatically-rotating keys. Is there anything like this planned in
 Keystone? Zaqar is likely only the first (I guess second, if you count
 Heat) of many services that will need it.
 

The only auto-rotation in AWS is through roles[1], which are separate
from users.

User:
* Is a real person or a service account
* Can generate temporary tokens with a subset of their perms
* Has static credentials (access keys, username/password, MFA)

Role:
* Has no static credentials
* Is granted to an instance on launch
* Temporary tokens are provided to instance by instance metadata service

I'm actually quite partial to roles because, in my experience, service
accounts rarely have their credentials rotated more than once per eon.
Having the ability to let instances grab tokens would certainly help
Heat, especially if we start using Zaqar (the artist formerly known as
marconi).


[1]:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
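
For reference, inside an instance the temporary credentials show up via the
metadata service, roughly like this (role name made up, values elided):

  $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/app-role
  {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "...",
    "Token": "...",
    "Expiration": "2014-08-26T00:00:00Z"
  }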
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Ryan Brown


On 08/04/2014 07:18 PM, Yuriy Taraday wrote:
 Hello, git-review users!
 
 snip
 0. create new local branch;
 
 master: M--
  \
 feature:  *
 
 1. start hacking, doing small local meaningful (to you) commits;
 
 master: M--
  \
 feature:  A-B-...-C
 
 2. since hacking takes tremendous amount of time (you're doing a Cool
 Feature (tm), nothing less) you need to update some code from master, so
 you're just merging master in to your branch (i.e. using Git as you'd
 use it normally);
 
 master: M---N-O-...
  \\\
 feature:  A-B-...-C-D-...
 
 3. and now you get the first version that deserves to be seen by
 community, so you run 'git review', it asks you for desired commit
 message, and poof, magic-magic all changes from your branch is
 uploaded to Gerrit as _one_ change request;
 
 master: M---N-O-...
  \\\E* = uploaded
 feature:  A-B-...-C-D-...-E
 
 snip

+1, this is definitely a feature I'd want to see.

Currently I run two branches, bug/LPBUG#-local and bug/LPBUG#, where the
-local branch holds my full history of the change and the other branch is the
squashed version I send out to Gerrit.
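
In practice that looks something like (bug number made up):

  $ git checkout -b bug/1234567-local master   # messy, full history lives here
  ... hack, commit, merge master as needed ...
  $ git checkout -b bug/1234567 master
  $ git merge --squash bug/1234567-local
  $ git commit                                 # one reviewable commit
  $ git review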

Cheers,
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Ryan Brown


On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
 On 05/08/2014 13:06, Ryan Brown wrote:
 -1 to this as git-review default behaviour. Ideally, branches should be
 identical between Gerrit and local Git.

Probably not as default behaviour (people who don't want that workflow
would be driven mad!), but I think enough folks would want it that it
should be available as an option.

 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has
 to be done on patch A, then the local branch can be rebased
 interactively on last master, edit patch A by doing an intermediate
 patch, then squash the change, and pick the later patches (B to E)
 
 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how
 many dependencies are outdated in Gerrit because developers work on
 separate branches for each single commit while they could work locally
 on a single branch? I can't imagine how often errors could happen if
 we don't force by default to squash commits before sending them to Gerrit)
 
 -Sylvain
 
 Cheers,
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I am well aware this may be straying into feature creep territory, and
it wouldn't be terrible if this weren't implemented.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-17 Thread Ryan Brown

On 07/17/2014 03:33 AM, Steven Hardy wrote:

On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:

On 16/07/14 23:48, Manickam, Kanagaraj wrote:

SNIP
*Resource*

Status & action should be enum of predefined status


+1


Rsrc_metadata - make full name resource_metadata


-0. I don't see any benefit here.


Agreed



I'd actually be in favor of the change from rsrc -> resource; I feel like
"rsrc" is a pretty opaque abbreviation.


--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev