Thinking about this further, the interesting question to me is how much
logic we aim to encapsulate behind an API. For example, one of the simpler
CLI commands we have in RDO-Manager (which is moving upstream[1]) is to
run introspection on all of the Ironic nodes. This involves a series of
commands that need to be run in order, and it can take upwards of 20
minutes, depending on how many nodes you have. However, this does just
communicate with Ironic (and ironic inspector) so is it worth hiding
behind an API? I am inclined to say that it is so we can make the end
result as easy to consume as possible but I think it might be difficult
to draw the line in some cases.
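The ordered sequence described above (prepare each node, kick off introspection, poll until everything finishes) might be sketched roughly like this. The client objects and method names here are illustrative stand-ins, not the real python-ironicclient or ironic-inspector-client APIs:

```python
import time

def introspect_all(ironic, inspector, poll_interval=0.01):
    """Run introspection on every registered node, in order:
    move each node to a manageable state, start inspection,
    poll until all nodes have finished, then release them."""
    nodes = ironic.list_nodes()
    for node in nodes:
        ironic.set_provision_state(node, 'manage')  # must happen first
        inspector.start_introspection(node)         # then kick off inspection
    pending = set(nodes)
    while pending:                                  # poll until all complete
        for node in list(pending):
            if inspector.get_status(node)['finished']:
                pending.discard(node)
        if pending:
            time.sleep(poll_interval)
    for node in nodes:
        ironic.set_provision_state(node, 'provide')  # return nodes to the pool
    return nodes
```

The point is less the specific calls and more that this multi-step, long-running orchestration is exactly the kind of logic that either lives behind an API or has to be reimplemented by every consumer.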

The question then arises: what would this API look like? Generally
speaking, I feel it looks like a workflow API: it shouldn't offer
many (or any?) unique features; rather, it manages the process of
performing a series of operations across multiple APIs. There have been
attempts at doing this within OpenStack before in a more general case,
I wonder what we can learn from those.

This is where my head is too. The OpenStack-on-OpenStack approach means we get to leverage the existing tools, and users can apply their existing knowledge of the products.

But what I think an API will provide is guidance on how to achieve that (the big argument there being whether this should be done in an API or through documentation). It coaches new users and integrators on how to make all of the underlying pieces play together to accomplish certain things.

To your question on that Ironic call, I'm split on how I feel.

On one hand, I really like the idea of the TripleO API being able to support an OpenStack deployment entirely on its own. You may want to go directly to some undercloud tools for certain edge cases, but for the most part you should be able to accomplish the goal of deploying OpenStack through the TripleO APIs.

But that's not necessarily what TripleO wants to be. I've seen the sentiment that it is only a set of tools for deploying OpenStack, in which case a single API isn't really what it's looking to provide. I still think we need some sort of documentation to guide integrators instead of saying "look at the REST API docs for these 5 projects", but that documentation is lighter weight than having pass-through calls in a TripleO API.

Unfortunately, as undesirable as these are, they're sometimes necessary
in the world we currently live in. The only long-term solution to this
is to put all of the logic and state behind a ReST API where it can be
accessed from any language, and where any state can be stored
appropriately, possibly in a database. In principle that could be
accomplished either by creating a tripleo-specific ReST API, or by
finding native OpenStack undercloud APIs to do everything we need. My
guess is that we'll find a use for the former before everything is ready
for the latter, but that's a discussion for another day. We're not there
yet, but there are things we can do to keep our options open to make
that transition in the future, and this is where tripleo-common comes in.

I submit that anything that adds logic or state to the client should be
implemented in the tripleo-common library instead of the client plugin.
This offers a couple of advantages:

- It provides a defined boundary between code that is CLI-specific and
code that is shared between the CLI and GUI, which could become the
model for a future ReST API once it has stabilised and we're ready to
take that step.
- It allows for an orderly transition when that happens - we can have a
deprecation period during which the tripleo-common library is imported
into both the client and the (future, hypothetical) ReST API.


OpenStack Development Mailing List (not for usage questions)

