Hi Brooklyners-

In the code we currently have two routes for users to install new
blueprints:

(1) upload a catalog YAML file to /v1/catalog

(2) install a bundle with catalog.bom in the root

The feature (2) is disabled by default, but I'd like to move towards
enabling it.  This will make it easier to create nicely structured BOM
files, because scripts etc. can be taken out of the BOM and stored as
files in the same bundle.  (URLs of the form
 `classpath://scripts/install.sh`  resolve against the bundle's classpath.)
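To make (2) concrete, here is a sketch of what such a bundle might contain.  All names are illustrative; the exact BOM keys depend on the entity type used (files.install is a SoftwareProcess config key, shown here just to illustrate a classpath reference):

```
my-project.zip
├── catalog.bom
└── scripts/
    └── install.sh
```

with a catalog.bom along these lines:

```
brooklyn.catalog:
  version: "1.0.0-SNAPSHOT"
  items:
  - id: my-entity
    item:
      type: org.apache.brooklyn.entity.software.base.VanillaSoftwareProcess
      brooklyn.config:
        files.install:
          classpath://scripts/install.sh: install.sh
```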

As a first step in #485 [1] I do a few things:

(A) add a utility class BundleMaker that lets us create and modify
bundles/ZIPs, making it easier to do the things we might want to do with
bundles, especially in testing

(B) add an endpoint to the REST API which allows uploading a bundle ZIP

(C) accept bundle symbolic name and version in that REST API to facilitate
uploading non-bundle ZIPs where the OSGi MANIFEST.MF is automatically
generated
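As a sketch of what (A)-(C) enable end to end, the following zips up a blueprint directory in memory.  The commented-out upload shows roughly what the REST call could look like; note the endpoint path, parameter names, and content type there are my guesses for illustration, not the actual API.

```python
# Sketch of the workflow enabled by (A)-(C): zip a directory containing a
# catalog.bom plus supporting files, then POST it to the Brooklyn REST API.
import io
import os
import zipfile

def zip_directory(path: str) -> bytes:
    """Create an in-memory ZIP of `path`, with entry names relative to it."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(path):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, path))
    return buf.getvalue()

# Uploading would then be something like the following (requires a running
# Brooklyn server; URL and parameter names are hypothetical):
#   import requests
#   requests.post("http://localhost:8081/v1/catalog",
#                 params={"bundleName": "my.project",
#                         "bundleVersion": "1.0.0-SNAPSHOT"},
#                 headers={"Content-Type": "application/x-zip"},
#                 data=zip_directory("/home/me/my/project"))
```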

With this PR, if you have a directory on your local file system with
scripts and config files, and a BOM which refers to them, you can just ZIP
that up and upload it, specifying the bundle name, so a YAML blueprint
author never needs to touch any Java-isms.

Where I see this going is a development workflow where a user can edit
files locally and upload the ZIP to have them installed; if they make
changes locally they can POST it again to have the catalog items updated
(because the default version is a SNAPSHOT).  We could also:

(D) have `br catalog add ~/my/project/ --name my.project` create a ZIP and
POST it, with bundle name metadata, so essentially the user's process is
just to run that whenever they make a change

(E) have a mechanism whereby deployed entities based on an affected
blueprint are optionally migrated to the new code, so if you've changed an
enricher the changes are picked up, or if, say, a launch.sh script has
changed, a restart will run the new code

The above are fairly straightforward programmatically (although good user
interaction with (E) needs some thought).  So I think we can pretty quickly
get to a much smoother dev workflow.


That's the highlight of this message.  You can jump to the end, unless
you're interested in some important but low-level details...


I'm also tempted by:

(F) Integration with web-based IDE and/or Brooklyn reading and writing
straight from GitHub -- but this seems like a lot of work and I'm not
convinced it's much better than (D) workflow-wise

Before we can change (2) to be the default, or start widely using the POST
a ZIP feature, we need to sort out some issues to do with persistence and
reloading:

(G) Bundles installed via this mechanism are not currently persisted, so if
you move to a different Brooklyn server using the same backing store,
you'll lose those bundles

(H) On rebind, bundles aren't always activated when needed, meaning items
can't be loaded

(I) We persist the individual catalog items as YAML, so we end up with two
records -- the YAML from the catalog.bom in the bundle, and the YAML
persisted for the item.  This isn't a problem per se, but it's something to
think about, and it causes some surprising behaviour.  In particular, if
you delete the persisted YAML for an item, the bundle is still there, so
after a full rebind the item comes back rather than staying deleted.

One idea which might be useful is:

(J) Introduce a catalogGroupId field on catalog items; this would do two
things.  First, if you try to delete an item with such a record, you'll be
encouraged to delete all the items in that group (perhaps deleting an
individual one would be disallowed), with the effect of deleting the
bundle if the group comes from a bundle.  Second, when resolving types we
would search first for items with the same catalogGroupId, so that e.g. if
I install MyCluster:1.0 and MyNode:1.0 in the same group, the former can
refer simply to "MyNode", but if I install a 2.0 version of that group,
the 1.0 cluster still loads the 1.0 node -- this has bitten people in the
past.
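The resolution rule in (J) can be sketched as a toy model: prefer a match from the requester's own group, and only fall back to the newest match anywhere.  All names and the simplistic lexical version comparison below are invented for illustration.

```python
# Toy model of the (J) resolution rule: items carry a hypothetical
# catalogGroupId, and lookups prefer items from the requester's group.
from typing import NamedTuple, Optional, List

class CatalogItem(NamedTuple):
    symbolic_name: str
    version: str      # compared lexically in this sketch, for simplicity
    group_id: str

def resolve(items: List[CatalogItem], name: str,
            requester_group: Optional[str]) -> Optional[CatalogItem]:
    matches = [i for i in items if i.symbolic_name == name]
    # First preference: items installed in the same group as the requester.
    same_group = [i for i in matches if i.group_id == requester_group]
    candidates = same_group or matches
    return max(candidates, key=lambda i: i.version, default=None)

catalog = [
    CatalogItem("MyNode", "1.0", "my.project:1.0"),
    CatalogItem("MyNode", "2.0", "my.project:2.0"),
]
# A MyCluster:1.0 in group my.project:1.0 still resolves the 1.0 MyNode,
# even though a 2.0 is installed alongside it:
assert resolve(catalog, "MyNode", "my.project:1.0").version == "1.0"
```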

There is a related Brooklyn upgrade problem worth mentioning, which the
above might help with:

(K) If I migrate from Brooklyn 10 to 11 when it comes out, certain
entities that shipped with v10 will no longer be available, since we
don't include them in v11; an upgrade could include rules that certain
groupIds need to be updated, or it could search for and attempt to apply
the updates automatically
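If an upgrade shipped declarative rules of the kind (K) describes, applying them could be as simple as the following toy sketch; the rule contents and function name are invented here:

```python
# Toy sketch of (K): an upgrade ships rules mapping old catalog groupIds
# to their replacements; anything without a rule is left untouched.
# The mapping below is an invented example, not a real migration.
UPGRADE_RULES = {
    "my.project:1.0": "my.project:2.0",
}

def upgraded_group_id(group_id: str) -> str:
    """Return the replacement groupId if a rule matches, else the original."""
    return UPGRADE_RULES.get(group_id, group_id)
```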


Quite a lot here -- we don't need to solve it all now, but I wanted to:

* Share the current thinking

* Get opinions on the general dev workflow suggested by (D)


Thanks for feedback -- and if we like this direction, help with (D) would be appreciated!

Best
Alex



[1] https://github.com/apache/brooklyn-server/pull/485
