So far I'm with Aaron on this one. I think we need to try it out and see how it works in practice. There's a good chance that for small numbers of jobs the GBean approach will be very convenient. If not, we'll try something else.

thanks
david jencks

On Jun 14, 2006, at 9:35 PM, Aaron Mulder wrote:

Once again, if you're managing a massive number of jobs, then a specialized
tool set makes sense.

However, if you're dealing with a small number of jobs for a single
application, what part of this is too complex for an administrator to
handle?

deployer.sh deploy my-app-jobs.jar
deployer.sh stop MyAppJobs
deployer.sh start MyAppJobs
deployer.sh redeploy my-app-jobs.jar
...

You can do that against a remote machine, redeploy a newer JAR with
updated code for the jobs, etc.  I think that's going to be a lot
easier to work with than trying to figure out how to set the classpath
and send updated code for a job stored in a database!

Thanks,
   Aaron

P.S. Recall that no one needs to *write* a GBean in this scenario --
you write a *Job* and the deployer handles the rest under the covers.
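
For illustration, here's roughly what such a Job might look like (a
sketch against the Quartz 1.x Job interface; the class name and the
work it does are made up):

    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;

    // "NightlyReportJob" is just an invented example name.
    public class NightlyReportJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // Real work goes here -- e.g. generate and mail a report.
            System.out.println("Running " + context.getJobDetail().getName());
        }
    }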

On 6/15/06, Matt Hogstrom <[EMAIL PROTECTED]> wrote:
Thinking about this from an operational perspective (not a developer's), I think the database approach makes a lot of sense. If the jobs are hosted in a DB they can be managed directly from a GUI, which would make more sense to the operators or other folks who aren't developers. It strikes me as a bit
heavy to have each job as a GBean in the config.xml.

Based on what I know it seems to make a lot of sense that there is a GBean that bootstraps the scheduler container, and that container manages the jobs. I was thinking back to the premise you outlined in another e-mail thread, Aaron, that said we should be as easy as a Mac. I think one GBean per job is not quite in line with that simple premise (I'm thinking of non-developers here).
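
To make that concrete, a rough sketch of what such a bootstrap GBean
might look like (assuming Quartz's default scheduler factory; the class
name is invented and error handling is elided):

    import org.apache.geronimo.gbean.GBeanInfo;
    import org.apache.geronimo.gbean.GBeanInfoBuilder;
    import org.apache.geronimo.gbean.GBeanLifecycle;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    // "QuartzSchedulerGBean" is an invented name for this sketch.
    public class QuartzSchedulerGBean implements GBeanLifecycle {
        private Scheduler scheduler;

        public void doStart() throws Exception {
            // Bring up the scheduler container; it manages the jobs from here on.
            scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();
        }

        public void doStop() throws Exception {
            scheduler.shutdown();
        }

        public void doFail() {
            try { scheduler.shutdown(); } catch (Exception ignored) { }
        }

        public static final GBeanInfo GBEAN_INFO =
            GBeanInfoBuilder.createStatic(QuartzSchedulerGBean.class).getBeanInfo();

        public static GBeanInfo getGBeanInfo() {
            return GBEAN_INFO;
        }
    }

Jobs themselves would then be registered with the scheduler rather than appearing one-per-entry in config.xml.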

The people who will be adding and removing jobs are most likely not developers and will not have
the same skill set.

Does this make sense?

Matt

Aaron Mulder wrote:
> Yeah, I'm not imagining using this approach for "thousands" of jobs.
> Though in truth, it shouldn't take an XML parser an unreasonable
> amount of time to write a thousand (or ten thousand) elements, so I'm
> not sure there would be a huge problem in having more entries in
> config.xml.
>
> Anyway, if you have thousands of jobs, I'd recommend a dedicated tool
> or GUI.  If you have an application with "some" but not "thousands" of
> jobs, I imagine it would be nice to have a convenient way to deploy
> and manage them through Geronimo.  To me, these are not overlapping
> use cases.  I don't know where to draw the line in between, but I
> think we can clarify what we're targeting with each approach and let
> the developer decide which to take.
>
> Thanks,
>    Aaron
>
> On 6/14/06, Dain Sundstrom <[EMAIL PROTECTED]> wrote:
>> On Jun 12, 2006, at 8:11 PM, John Sisson wrote:
>>
>> > How scalable would this be?  I would imagine there would be
>> > applications that may create thousands of jobs (possibly per day).
>> > Wouldn't startup be slow if we had to de-serialize thousands of
>> > jobs at startup in the process of loading all the GBeans that
>> > represent the jobs.  Not having looked at Quartz myself, it seems
>> > it would be much better to have jobs in a database.  For example,
>> > thousands of jobs could be created that aren't to be executed until
>> > next year.   I would expect that a job management engine would
>> > optimize processing, e.g. only read jobs from the database into
>> > memory that are to be executed today or in the next hour.
>>
>>
>> And that config.xml file is going to get mighty large, so every time
>> someone makes a small change we are writing out all the jobs...
>>
>> -dain
>>
>
>
>
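
(As a concrete illustration of John's database suggestion above: Quartz
ships a JDBC-backed job store that keeps jobs and triggers in database
tables and pulls into memory only what is about to fire. A rough sketch
of wiring one up -- Quartz 1.x property names, with placeholder
data-source details:)

    import java.util.Properties;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class DatabaseBackedScheduler {
        public static Scheduler start() throws Exception {
            Properties props = new Properties();
            props.put("org.quartz.scheduler.instanceName", "MyAppScheduler");
            props.put("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.put("org.quartz.threadPool.threadCount", "5");
            // Keep jobs and triggers in database tables instead of RAM.
            props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.put("org.quartz.jobStore.driverDelegateClass",
                      "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            props.put("org.quartz.jobStore.dataSource", "jobsDS");
            // Connection details for "jobsDS" -- every value below is a placeholder.
            props.put("org.quartz.dataSource.jobsDS.driver", "org.apache.derby.jdbc.EmbeddedDriver");
            props.put("org.quartz.dataSource.jobsDS.URL", "jdbc:derby:jobsDB;create=true");
            props.put("org.quartz.dataSource.jobsDS.user", "app");
            props.put("org.quartz.dataSource.jobsDS.password", "app");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            return scheduler;
        }
    }

(With a store like that, a restart doesn't have to deserialize every job up front, which speaks to John's startup concern.)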

