> Svet, if instead we tried to infer it from the catalog.bom, would we require
> some additional metadata within the .bom file? Or would we use the catalog
> item's id + version? I'm not convinced by the latter - it would mean some
> .bom files would work and others wouldn't (e.g. if the .bom had multiple
> items with different versions). Better to support the explicit approach IMO.
I imagine it would be additional metadata. On the other hand I don't see a
technical reason why we need an explicit symbolicName and version - they can
be auto-generated.

Svet.

> On 20.12.2016 г., at 17:50, Aled Sage <[email protected]> wrote:
>
> Hi all,
>
> +1
>
> (D) sounds good. What version are you imagining the bundle would be, if one
> runs `br catalog add ~/my/project/ --name com.example.myproject`?
>
> ---
> I like the idea of uploading a plain zip (rather than only supporting OSGi
> bundles) - that makes it simpler for non-java folk. The use of OSGi becomes
> a (hidden) implementation detail to many users.
>
> ---
> If auto-generating the manifest, I think we need the user to be explicit
> about symbolic name and version. Having these supplied in the REST api call
> (as Alex suggests) would achieve that.
>
> Svet, if instead we tried to infer it from the catalog.bom, would we require
> some additional metadata within the .bom file? Or would we use the catalog
> item's id + version? I'm not convinced by the latter - it would mean some
> .bom files would work and others wouldn't (e.g. if the .bom had multiple
> items with different versions). Better to support the explicit approach IMO.
>
> ---
> For E ("have a mechanism whereby deployed entities based on an affected
> blueprint are optionally migrated to the new code"), that feels like a
> separate discussion. It could equally apply to a pure YAML .bom file that
> has been added to the catalog.
>
> I suggest we discuss that in a separate email thread.
>
> ---
> For (G), it's an interesting suggestion from Svet to make use of Karaf
> Cellar for HA nodes. I'm hesitant (e.g. if restarting a standalone Brooklyn
> node whose VM has died, then it adds big additional requirements for what
> constitutes the "persisted state"). On the other hand, it's good to use
> well-established technologies rather than re-inventing things!
>
> An alternative ("pure brooklyn") approach could be to write the bundle to
> persisted state; on rebind, we'd install + activate those bundles.
>
> ---
> For "catalogGroupId", I agree with Svet that in the initial use-case this
> can be an implementation detail.
>
> It could be set as the bundle's symbolic name + version: everything from
> the bundle should be deleted at once, along with the bundle.
>
> Longer term, I can see how exposing "catalogGroupId" to the user could
> support more use-cases (e.g. for several catalog items from different
> bundles to work together). I don't think we should try to support that yet.
>
> Aled
>
>
> On 19/12/2016 17:19, Geoff Macartney wrote:
>> hi Alex,
>>
>> this looks like a good feature to have, I shall look at the PR as soon as
>> I can.
>>
>> The catalog.bom scanner feature was initially enabled by default, but we
>> had to disable it because it turned out not to work properly with rebind.
>> I don't think it should be a lot of work to fix that but it hasn't been
>> something we've got round to yet. This would be a great opportunity to
>> look back at that.
>>
>> Some random thoughts:
>>
>> re (C), if we are going to treat the zips as bundles, my gut feel is that
>> we should insist on a manifest and get the metadata from it. It doesn't
>> feel to me like it makes much sense to allow a zip file without a
>> MANIFEST.MF but convey the intended bundle metadata to Brooklyn via HTTP
>> headers. And rather than infer bundle metadata I think it's better to ask
>> users to be explicit about what their intentions are. To make users' lives
>> easier, we could add a command to br to generate the manifest (locally)
>> with correct syntax, so that the manifest is in the right place, rather
>> than have br add the data to the "upload" request headers.
>>
>> re. (D) will be glad to have a look at it
>>
>> re. (E) it would certainly need to be optional - maybe keep it as an
>> explicit separate command ('upgrade'?)
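To make the manifest discussion above concrete, here is a minimal sketch of what a locally-generated OSGi manifest (as Geoff's proposed `br` helper might produce) could look like, using the standard `java.util.jar.Manifest` API. The class name and the example symbolic name/version are illustrative, not part of any actual Brooklyn or `br` code; the `Bundle-SymbolicName`, `Bundle-Version`, and `Bundle-ManifestVersion` headers are the standard OSGi ones.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

/** Sketch: generate a minimal OSGi MANIFEST.MF for a plain ZIP of catalog files. */
public class ManifestSketch {
    static Manifest osgiManifest(String symbolicName, String version) {
        Manifest mf = new Manifest();
        Attributes attrs = mf.getMainAttributes();
        // Manifest-Version must be set, or Manifest.write() emits an empty file.
        attrs.put(Attributes.Name.MANIFEST_VERSION, "1.0");
        // Standard OSGi headers identifying the bundle.
        attrs.putValue("Bundle-ManifestVersion", "2");
        attrs.putValue("Bundle-SymbolicName", symbolicName);
        attrs.putValue("Bundle-Version", version);
        return mf;
    }

    public static void main(String[] args) throws IOException {
        // Write META-INF-style manifest next to the project being zipped.
        try (OutputStream out = new FileOutputStream("MANIFEST.MF")) {
            osgiManifest("com.example.myproject", "1.0.0.SNAPSHOT").write(out);
        }
    }
}
```

Generating this file locally, as Geoff suggests, keeps the metadata inside the ZIP itself rather than smuggling it through HTTP headers.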
>>
>> (F) it does seem like a lot of work but might be nice for users who are
>> not keen on command lines.
>>
>> G - I: we'll definitely need to pay close attention to persistence and
>> rebind; I wonder also about HA operation, are there any additional
>> implications?
>>
>> (J) I think it would be good to treat all the files from a jar, sorry
>> bundle, as an atomic group - cleaner that way perhaps than allowing
>> delete/update of individual entries from a bundle on a piecemeal basis.
>> Rest support on delete catalog could warn about related catalog entries
>> being deleted and ask for a "--force" param to confirm.
>>
>> Geoff
>>
>>
>> On Fri, 16 Dec 2016 at 15:24 Svetoslav Neykov <[email protected]> wrote:
>>
>>> +1
>>>
>>> Some thoughts:
>>> * (A) add a utility class BundleMaker
>>> Sounds very similar to
>>> https://ops4j1.jira.com/wiki/display/ops4j/Tinybundles
>>> Looking at the code it's much more focused on zip files so I guess
>>> there's not much overlap, but worth keeping in mind
>>> * (C) accept bundle symbolic name and version
>>> Why require them at all? Could infer them from the catalog.bom in some
>>> way - maybe require those properties to be in there. If not present are
>>> they really needed?
>>> * (G) Bundles installed via this mechanism are not persisted currently &
>>> (I) We persist the individual catalog items as YAML, so we end up with
>>> two records
>>> Suggest marking the catalog items coming from bundles as
>>> non-persistable. Then try to share the bundles between HA nodes. (Karaf
>>> Cellar?)
>>> * (J) Introduce a catalogGroupId field on catalog items;
>>> Agree this could be useful and I like the idea of deleting the bundle
>>> altogether with the catalog items. From user's perspective I don't see
>>> the need for an extra field (i.e. it's an implementation detail).
>>>
>>> Svet.
>>>
>>>
>>>> On 16.12.2016 г., at 12:50, Alex Heneveld <[email protected]> wrote:
>>>>
>>>> Hi Brooklyners-
>>>>
>>>> In the code we currently have two routes for users to install new
>>>> blueprints:
>>>>
>>>> (1) upload a catalog YAML file to /v1/catalog
>>>>
>>>> (2) install a bundle with catalog.bom in the root
>>>>
>>>> The feature (2) is disabled by default, but I'd like to move towards
>>>> enabling it. This will make it easier to create nicely structured BOM
>>>> files because scripts etc can be taken out of the BOM, stored as files
>>>> in the same bundle. (Because URLs of the form
>>>> `classpath://scripts/install.sh` use the bundle's classpath to resolve.)
>>>>
>>>> As a first step in #485 [1] I do a few things:
>>>>
>>>> (A) add a utility class BundleMaker that lets us create and modify
>>>> bundles/zips, to make it easier to do things we might want to with
>>>> bundles, especially for testing
>>>>
>>>> (B) add an endpoint to the REST API which allows uploading a bundle ZIP
>>>>
>>>> (C) accept bundle symbolic name and version in that REST API to
>>>> facilitate uploading non-bundle ZIPs where the OSGi MANIFEST.MF is
>>>> automatically generated
>>>>
>>>> With this PR, if you have a directory on your local file system with
>>>> scripts and config files, and a BOM which refers to them, you can just
>>>> ZIP that up and upload it, specifying the bundle name so that a YAML
>>>> blueprint author never needs to touch any java-isms.
>>>>
>>>> Where I see this going is a development workflow where a user can edit
>>>> files locally and upload the ZIP to have that installed, and if they
>>>> make changes locally they can POST it again to have catalog items
>>>> updated (because default version is a SNAPSHOT).
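The "ZIP that up and upload it" step above can be sketched in plain Java. This is not Brooklyn's actual BundleMaker class, just an illustration of the idea using only JDK zip APIs; the class and method names are hypothetical.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

/** Sketch: zip up a local project directory (catalog.bom + scripts) for upload.
 *  NOT Brooklyn's real BundleMaker; an illustration of the workflow only. */
public class ZipSketch {
    static void zipDirectory(Path dir, OutputStream dest) throws IOException {
        try (ZipOutputStream zip = new ZipOutputStream(dest);
             Stream<Path> walk = Files.walk(dir)) {
            List<Path> files = walk.filter(Files::isRegularFile).sorted().toList();
            for (Path p : files) {
                // Store entries relative to the project root, e.g. "scripts/install.sh",
                // so classpath:// URLs in the BOM resolve against the bundle root.
                zip.putNextEntry(new ZipEntry(dir.relativize(p).toString().replace('\\', '/')));
                Files.copy(p, zip);
                zip.closeEntry();
            }
        }
    }
}
```

The resulting ZIP would then be POSTed to the new REST endpoint, with the symbolic name and version supplied as request parameters per (C) so a manifest can be generated server-side.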
>>>> We could also:
>>>>
>>>> (D) have `br catalog add ~/my/project/ --name my.project` create a ZIP
>>>> and POST it, with bundle name metadata, so essentially the user's
>>>> process is just to run that whenever they make a change
>>>>
>>>> (E) have a mechanism whereby deployed entities based on an affected
>>>> blueprint are optionally migrated to the new code, so if you've changed
>>>> an enricher the changes are picked up, or if say a launch.sh script has
>>>> changed, a restart will run the new code
>>>>
>>>> The above are fairly straightforward programmatically (although good
>>>> user interaction with (E) needs some thought). So I think we can pretty
>>>> quickly get to a much smoother dev workflow.
>>>>
>>>>
>>>> That's the highlight of this message. You can jump to the end, unless
>>>> you're interested in some important but low-level details...
>>>>
>>>>
>>>> I'm also tempted by:
>>>>
>>>> (F) Integration with a web-based IDE and/or Brooklyn reading and writing
>>>> straight from GitHub -- but this seems like a lot of work and I'm not
>>>> convinced it's much better than (D) workflow-wise
>>>>
>>>> Before we can change (2) to be the default, or start widely using the
>>>> POST a ZIP feature, we need to sort out some issues to do with
>>>> persistence and reloading:
>>>>
>>>> (G) Bundles installed via this mechanism are not persisted currently, so
>>>> if you move to a different Brooklyn using the same backing store, you'll
>>>> lose those bundles
>>>>
>>>> (H) On rebind, bundles aren't always activated when needed, meaning
>>>> items can't be loaded
>>>>
>>>> (I) We persist the individual catalog items as YAML, so we end up with
>>>> two records — the YAML from the catalog.bom in the bundle, and the YAML
>>>> persisted for the item. This isn't a problem per se, but something to
>>>> think about, and sometimes surprising behaviour.
>>>> In particular if you
>>>> delete the persisted YAML, the bundle is still there, so after a full
>>>> rebind the item is no longer deleted.
>>>>
>>>> One idea which might be useful is:
>>>>
>>>> (J) Introduce a catalogGroupId field on catalog items; this will do two
>>>> things: if you try to delete an item with such a record, you'll be
>>>> encouraged to delete all such items (maybe disallowed from deleting an
>>>> individual one), with the effect of deleting the bundle if it comes from
>>>> a bundle; and when resolving types we search first for items with the
>>>> same catalogGroupId (so that e.g. if I install MyCluster:1.0 and
>>>> MyNode:1.0 in the same group, the former can refer simply to "MyNode",
>>>> but if I install a 2.0 version of that group, the 1.0 cluster still
>>>> loads the 1.0 node -- this has bitten people in the past)
>>>>
>>>> There is a related Brooklyn upgrade problem worth mentioning, which the
>>>> above might help with, where:
>>>>
>>>> (K) If I migrate from Brooklyn 10 to 11 when it comes out, I'll no
>>>> longer have certain entities that were at v10, since we don't include
>>>> those; an upgrade could include rules that certain groupIds need to be
>>>> updated, or it can search and attempt to automatically apply the updates
>>>>
>>>>
>>>> Quite a lot here and we don't need to solve it all, but I wanted to:
>>>>
>>>> * Share the current thinking
>>>>
>>>> * Get opinions on the general dev workflow suggested by (D)
>>>>
>>>>
>>>> Thanks for feedback -- and if we like it, help with (D) would be
>>>> appreciated!
>>>>
>>>> Best
>>>> Alex
>>>>
>>>>
>>>> [1] https://github.com/apache/brooklyn-server/pull/485
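The (J) lookup rule -- prefer items from the caller's own catalogGroupId before falling back to the newest match -- can be sketched as below. The `CatalogItem` record and `resolve` method are hypothetical stand-ins, not Brooklyn's real type-registry model, and the fallback here uses a naive version comparison just to illustrate the two-pass idea.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Sketch of the proposed (J) resolution rule: same-group items win,
 *  otherwise fall back to the highest-versioned item of that name.
 *  CatalogItem is a hypothetical model, not a Brooklyn class. */
public class GroupResolveSketch {
    record CatalogItem(String name, String version, String catalogGroupId) {}

    static Optional<CatalogItem> resolve(String name, String callerGroupId,
                                         List<CatalogItem> catalog) {
        // First pass: items registered under the caller's own group, so
        // MyCluster:1.0 referring to "MyNode" finds MyNode:1.0, not 2.0.
        Optional<CatalogItem> sameGroup = catalog.stream()
                .filter(i -> i.name().equals(name)
                          && i.catalogGroupId().equals(callerGroupId))
                .findFirst();
        if (sameGroup.isPresent()) return sameGroup;
        // Fallback: any item with that name; naive "highest version" pick.
        return catalog.stream()
                .filter(i -> i.name().equals(name))
                .max(Comparator.comparing(CatalogItem::version));
    }
}
```

With group-scoped lookup first, installing a 2.0 group alongside a 1.0 group cannot silently rewire the 1.0 cluster onto the 2.0 node, which is exactly the failure mode the thread says has bitten people.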
