On Tuesday, November 4, 2003, at 07:34 AM, Aaron Mulder wrote:
I thought we were going to use the JMX ObjectNames to identify deployments, rather than keeping an actual repository somewhere?
That is what I thought. We can easily find all deployments with an ObjectName pattern query.
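For what it's worth, the pattern query really is a one-liner. A minimal sketch, assuming a hypothetical "geronimo.deployment" domain with a "module" key (not necessarily the actual naming convention we end up with):

```java
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class DeploymentQuery {
    // Trivial MBean standing in for a deployed module.
    public interface DummyMBean { }
    public static class Dummy implements DummyMBean { }

    public static void main(String[] args) throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();

        // Hypothetical naming convention: every deployment registers under
        // the "geronimo.deployment" domain with a "module" key.
        server.registerMBean(new Dummy(),
            new ObjectName("geronimo.deployment:module=foo.jar"));
        server.registerMBean(new Dummy(),
            new ObjectName("geronimo.deployment:module=bar.war"));

        // One pattern query finds every deployment; no separate repository needed.
        Set<ObjectName> deployments =
            server.queryNames(new ObjectName("geronimo.deployment:*"), null);
        System.out.println(deployments.size()); // 2
    }
}
```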
I guess
one of the questions to answer on that score is whether we could put the
status as a property in the ObjectName and change it as the module is
started, stopped, etc. If so, then it's almost a no-brainer to do it that
way. If not, then it would be more work to pull out the right subset of
deployed modules.
That won't work. Dependencies are static and so are notification listeners, so whenever you changed states you would have to reestablish all dependencies and every listener would have to re-register.
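To make the objection concrete: a listener (like a dependency edge) is registered against one exact ObjectName, so encoding status in the name forces an unregister/re-register on every state change and silently orphans the listener. A sketch with hypothetical names:

```java
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationListener;
import javax.management.ObjectName;

public class StatusInNameProblem {
    public interface ModuleMBean { }
    public static class Module extends NotificationBroadcasterSupport
            implements ModuleMBean { }

    public static void main(String[] args) throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();
        ObjectName running = new ObjectName(
            "geronimo.deployment:module=app.jar,status=running");
        server.registerMBean(new Module(), running);

        // A listener (or dependency) is bound to this exact name.
        NotificationListener listener =
            (notification, handback) -> System.out.println("notified");
        server.addNotificationListener(running, listener, null, null);

        // Putting the new status in the name forces a rename, i.e. an
        // unregister followed by a register under a different ObjectName:
        server.unregisterMBean(running);
        ObjectName stopped = new ObjectName(
            "geronimo.deployment:module=app.jar,status=stopped");
        server.registerMBean(new Module(), stopped);

        // The listener registration died with the old name; nothing is
        // listening to the renamed MBean until everyone re-registers by hand.
        System.out.println(server.isRegistered(running)); // false
        System.out.println(server.isRegistered(stopped)); // true
    }
}
```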
I disagree with your assertion that the server should pull code on
a distribute -- among other things, that assumes that the server can
freely contact that client, which I don't think is necessarily true, or a
good idea. However I do agree that the distribute call shouldn't block --
I asked earlier if there was a way to stream data to the server over JMX
and got no answer, so I think we need to set up a standard servlet to
receive the data or something.
Agree. We need some sort of bulk file transfer system. I think we should really look at using WebDAV for this.
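As a sketch of what the receiving end could look like: WebDAV is layered on plain HTTP PUT, so streaming an archive to the server needs nothing exotic. Everything below (the context path, the use of the JDK's built-in HttpServer as a stand-in receiver) is hypothetical, not our actual design:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ModuleUpload {
    public static void main(String[] args) throws Exception {
        // Stand-in for the server-side receiver: accepts PUT and stores the bytes.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        Path store = Files.createTempFile("deploy", ".jar");
        server.createContext("/deploy", exchange -> {
            Files.copy(exchange.getRequestBody(), store,
                StandardCopyOption.REPLACE_EXISTING);
            exchange.sendResponseHeaders(201, -1); // 201 Created, no body
            exchange.close();
        });
        server.start();

        // Client side: stream the archive with a plain HTTP PUT,
        // the verb WebDAV itself builds on.
        byte[] archive = "fake jar bytes".getBytes();
        URL target = new URL("http://localhost:"
            + server.getAddress().getPort() + "/deploy/app.jar");
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.getOutputStream().write(archive);

        System.out.println(conn.getResponseCode());           // 201
        System.out.println(Files.readAllBytes(store).length); // 14

        server.stop(0);
    }
}
```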
As I said, "registered" files are tracked. More accurately, a scanner checks
these files and upon modification executes a command. By now, two files are
automatically created in order to trigger either the redeployment or the
undeployment of a deployment. For instance, when an archive module –
archive.jar in the example - is deployed, the following files are created,
if required:
archive.jar_REDEPLOY: if one "touch"es this file, then the deployment is
redeployed; and
archive.jar_UNDEPLOY: if one "touch"es this file, then the deployment is
undeployed.
These two files are here only as a development convenience.
-1 I really dislike this idea. The biggest problem is that this code assumes it can write to the hot deploy directory, but the real reason I dislike it is that it removes the normal file semantics we are all used to. If you want to redeploy, you simply touch the main deployment file, and if you want to undeploy, you remove the file.
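Those semantics are easy to sketch: track each deployment file's last-modified time; a newer timestamp means redeploy, a missing file means undeploy. Class and method names below are hypothetical, not the actual scanner:

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class DeployScanner {
    private final Map<File, Long> tracked = new HashMap<>();

    public void register(File deployment) {
        tracked.put(deployment, deployment.lastModified());
    }

    public void scan() {
        // Iterate over a copy so entries can be removed while scanning.
        for (Map.Entry<File, Long> e : new HashMap<>(tracked).entrySet()) {
            File f = e.getKey();
            if (!f.exists()) {
                tracked.remove(f);
                undeploy(f);                        // file removed -> undeploy
            } else if (f.lastModified() > e.getValue()) {
                tracked.put(f, f.lastModified());
                redeploy(f);                        // file touched -> redeploy
            }
        }
    }

    protected void undeploy(File f) { System.out.println("undeploy " + f.getName()); }
    protected void redeploy(File f) { System.out.println("redeploy " + f.getName()); }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("archive", ".jar");
        DeployScanner scanner = new DeployScanner();
        scanner.register(f);
        f.setLastModified(f.lastModified() + 2000); // "touch" the main file
        scanner.scan();                             // prints "redeploy ..."
        f.delete();
        scanner.scan();                             // prints "undeploy ..."
    }
}
```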
At the end of the day, I believe that DeploymentController should not be the
entry point for triggering a start, stop, redeployment or undeployment. Why?
Because one knows which planner has mounted a deployment, and hence one can
bypass DeploymentController.
You are assuming that only one planner was involved in the deployment. What about EAR and other meta deployments?
BTW, I tried to refactor ServiceDeploymentPlanner and this is not so simple:
ServiceDeploymentPlanner is a kernel service. If one wants to align it with
the proposed approach, then one must re-package a lot of classes from the
core sub-project into the kernel sub-project. For instance, as the proposed
approach defines a class which extends AbstractManagedContainer, one needs
to re-package the Container, Component, AbstractManagedComponent, et cetera
classes.
Why is ServiceDeploymentPlanner a Container now? I dislike the idea of pulling this stuff into kernel unless it is absolutely necessary.
-dain