Jeff,

P2 has certainly come a long way, and it is of interest to us. Currently its APIs are not frozen, and, if I understand correctly, it does not support uses clauses. It is a technology we will continue to monitor.

In any case I'd like a clear understanding of how such synchronization should work at an OSGi level regardless of implementation technology. It's the kind of knowledge I want in my toolbox.

In a response to Marcel's comments I outlined that I think I have an understanding of how to synchronize start levels, singletons, and "normal" fragments.

Extension bundles (fragments of the system bundle) are still somewhat opaque to me: how can they force the framework to restart, and how do I detect and respond to that?

I'm also stumped as to how optional dependencies can be detected and forced to rewire if the optional dependency gets provisioned. The PackageAdmin service gives you the current wires but "loses" whether they came from optional imports. I don't want to be parsing that kind of metadata at runtime, and I'd prefer not to use platform implementation-specific services / APIs.

In your demo does the P2-driven synchronization compare the before / after profiles or does it happen at the OSGi runtime level? In other words, does the profile include extra information from the provisioning process like the fact that there was an optional dependency so that you can unresolve (update) specific bundles for that edge case? Also, how does it deal with extension bundles that can require a framework restart?

Thanks!

/djk

On May 4, 2009, at 9:54 AM, Jeff McAffer wrote:

David,

I made a small movie that describes some of what you are talking about and how you can do that with p2 and some extensions for remote/decoupled management. See
  
http://eclipsesource.com/en/resources/presentations/remote-provisioning-with-p2/

You may have seen elements of this at EclipseCon/OSGi DevCon. Basically p2 takes care of most of the issues you point out, and we have extended it with an administrator that maintains a "profile" of the various clients (these are synchronized dynamically). You can tweak / change the administrator's profile as you like, complete with start levels, fragments, etc., and when a client shows up the two profiles are synchronized and the client's software is changed accordingly. The example in the video is kept simple for clarity; much more sophisticated UIs, workflows, and operations are similarly possible.

Jeff

Jeff McAffer | CTO | EclipseSource | +1 613 851 4644
[email protected] | http://eclipsesource.com



Kemper David wrote:
I'm looking for guidance on how to deliver on OSGi's promise of enabling incremental deployment in non-trivial runtimes. This email is long because I want to provide enough detail for some edge case discussions.

We have our own (arguably idiosyncratic) approach to provisioning an OSGi runtime. We have an external runtime model that holds the normative list of bundles, and when it changes we reconcile the OSGi runtime to this list of normative bundles. Note that our provisioning can mix together multiple logical applications that can pull in and wire multiple versions of the same plugin and packages.

This is very straightforward the first time the framework starts: there is no cached state, so we install all our bundles from the model and start them. If the model hasn't changed when we stop and restart the framework, the framework restores its running state from the previous run, and synchronizing to the unchanged model is a no-op.

If there *are* changes to the model, things get more interesting. My underlying question is: how can we best synchronize the runtime to our model with the fewest disruptions?

Edge cases appear to involve:

- start levels
- singletons
- fragments
- extensions (fragments of the system bundle)
- optional dependencies

My first cut installs and starts bundles that are in the model but not in the runtime, uninstalls bundles that are in the framework but not in the model, then does a PackageAdmin.refreshPackages(null). My thinking here is that I want the new bundles to be available when the refresh "cuts over" to the new content so there is as little disruption as possible.
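A minimal sketch of that first-cut reconciliation, assuming model and runtime are each represented as sets of bundle locations (the SyncPlan class and its names are hypothetical; the real version would then drive BundleContext.installBundle, Bundle.uninstall, and PackageAdmin.refreshPackages from the two sets):

```java
import java.util.*;

public class SyncPlan {
    final Set<String> toInstall = new LinkedHashSet<>();
    final Set<String> toUninstall = new LinkedHashSet<>();

    // Diff the normative model against what is currently in the framework.
    static SyncPlan plan(Set<String> modelLocations, Set<String> runtimeLocations) {
        SyncPlan p = new SyncPlan();
        for (String loc : modelLocations)
            if (!runtimeLocations.contains(loc)) p.toInstall.add(loc);
        for (String loc : runtimeLocations)
            if (!modelLocations.contains(loc)) p.toUninstall.add(loc);
        return p;
    }

    public static void main(String[] args) {
        SyncPlan p = plan(
            new HashSet<>(Arrays.asList("a-1.0.jar", "b-2.0.jar")),
            new HashSet<>(Arrays.asList("a-1.0.jar", "c-1.0.jar")));
        System.out.println(p.toInstall);   // [b-2.0.jar]
        System.out.println(p.toUninstall); // [c-1.0.jar]
    }
}
```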

Start levels can be messy. If I were king I would declare by fiat that all our code needs to properly handle starting in any order, and not use the start level service at all. Alas, I am not king. When I'm processing the new bundles I actually install and start them in ascending start level order so that, given the current framework start level, even new bundles get installed "in the right order."
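The install-ordering part is simple to state in code. A sketch, assuming a hypothetical ModelBundle record that pairs a bundle location with its model-assigned start level:

```java
import java.util.*;

// Hypothetical model entry: a bundle location plus its assigned start level.
class ModelBundle {
    final String location;
    final int startLevel;
    ModelBundle(String location, int startLevel) {
        this.location = location;
        this.startLevel = startLevel;
    }
}

public class StartLevelOrder {
    // Install (and start) new bundles in ascending start-level order so that,
    // given the framework's current start level, even new bundles come up
    // "in the right order".
    static List<ModelBundle> installOrder(List<ModelBundle> newBundles) {
        List<ModelBundle> ordered = new ArrayList<>(newBundles);
        ordered.sort(Comparator.comparingInt(b -> b.startLevel));
        return ordered;
    }
}
```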

This basic approach fails for singletons. Imagine singleton v1 in the runtime. The model changes to replace it with singleton v2. I can install v2 with v1 still there, but it won't resolve, and I can't start it until after v1 is uninstalled. [Side question: do I need to refresh packages after v1 uninstall before I can start v2?] This implies that I should uninstall old bundles before I install the new ones. What does this imply about how long dependent bundles will be "down"?
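One could at least detect the singleton-replacement case up front by reading each incoming bundle's Bundle-SymbolicName header and checking it against what is installed. A sketch of the header parsing (the SymbolicName class is a hypothetical helper, and the parsing is deliberately naive):

```java
// Minimal parser for a Bundle-SymbolicName header value such as
// "org.example.app; singleton:=true". Only the singleton directive is
// inspected; other directives and attributes are ignored.
public class SymbolicName {
    final String name;
    final boolean singleton;

    SymbolicName(String headerValue) {
        String[] parts = headerValue.split(";");
        this.name = parts[0].trim();
        boolean s = false;
        for (int i = 1; i < parts.length; i++)
            if (parts[i].trim().replace(" ", "").equals("singleton:=true"))
                s = true;
        this.singleton = s;
    }
}
```

Any incoming singleton whose symbolic name matches an already-installed singleton would then go into an "uninstall old version first" bucket, accepting the window of downtime that implies for dependents.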

Fragments are also problematic. Whether a fragment will attach to a resolved host bundle depends on many factors, including the host's fragment-attachment directive, the fragment dependencies, and what the fragment exports. If the synchronization doesn't want to crawl the implications of all this, would I need to unresolve all candidate host bundles first? Furthermore, unresolving a host bundle that is resolved doesn't appear possible without first uninstalling it (because stopping it will just move it to the resolved state).

Extension bundles typically cause the entire framework to need a restart. This gets really tricky: when you do a refresh packages and there is a new extension bundle, the framework will actually stop, preventing you from doing any further operations.

Optional dependencies are also a killer. Consider package a in bundle A with an optional dependency on package b in bundle B. Assume we install bundle A without bundle B. Bundle A resolves, we run, and there is much rejoicing. Later we update the model to keep bundle A but add bundle B, plus bundle C, which wants to use the optional functionality through A. Well, A is already resolved, so it won't automatically rewire just because we install B, and so C can't use the optional dependency. How do we go about even detecting this?
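Detecting it without framework-specific APIs seems to require exactly the metadata parsing one would rather avoid: scanning each resolved bundle's Import-Package header for resolution:=optional clauses and checking whether a newly provisioned bundle now exports one of those packages. A rough sketch of the header side of that (the class name is hypothetical, and the splitting is naive: it does not handle quoted directive values that contain commas or semicolons):

```java
import java.util.*;

public class OptionalImports {
    // Return the package names declared with resolution:=optional in an
    // Import-Package header value, e.g. "a.b,c.d;resolution:=optional".
    static Set<String> optionalPackages(String importPackageHeader) {
        Set<String> result = new LinkedHashSet<>();
        for (String clause : importPackageHeader.split(",")) {
            String[] parts = clause.split(";");
            boolean optional = false;
            for (int i = 1; i < parts.length; i++)
                if (parts[i].trim().replace(" ", "").equals("resolution:=optional"))
                    optional = true;
            if (optional) result.add(parts[0].trim());
        }
        return result;
    }
}
```

If any package in that set is exported by a bundle arriving in the new model, the importing bundle would be a candidate for an update/refresh so it can rewire to the now-available provider.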

Are there other edge cases to think about?

Given these edge cases, how do we actually deliver "incremental update" without some very nasty wiring and dependency grunging? Is there an easy approach that "just works"?

/djk
_______________________________________________
OSGi Developer Mail List
[email protected]
https://mail.osgi.org/mailman/listinfo/osgi-dev