Hi Claus,

There is a miscommunication - we do not need a special classloader helper, I think.

The issue is that when a Camel Blueprint bundle containing a camel-quartz2 route is undeployed, *the Quartz job data is not deleted from the DB when clustered Quartz is used.* And we deliberately do not want to delete the job data when the route is stopped via the RoutePolicySupport class, because the main intent of clustered Quartz is job recovery.

- The scheduler is shut down (QuartzComponent.doStop()) when there are no more jobs (i.e. when the scheduler is not shared across camel context bundles), and that case works fine.
- But if the scheduler configuration / scheduler instance is shared across camel-quartz2 routes / bundles, the scheduler keeps running.
- When that scheduler acquires the next triggers, it also picks up the trigger belonging to the undeployed bundle, and then tries to execute it by loading the CamelJob class from the uninstalled bundle via CascadingClassLoaderHelper.
- If it cannot load the class for that trigger, it throws an exception and the remaining triggers are not executed in that cycle - so we get misfires.

Please refer to line no. 876 of org.quartz.impl.jdbcjobstore.StdJDBCDelegate.java - Quartz throws an exception there and does not proceed further if the job class cannot be loaded:

    job.setJobClass(loadHelper.loadClass(rs.getString(COL_JOB_CLASS)));

As a workaround:

1. I have written an OSGi EventHandler service that listens to the 'bundle undeploy' events that get published.
2. When the OSGi bundle using camel-quartz2 is undeployed, it removes the corresponding job data from the DB. (A sketch is appended at the end of this mail.)

If camel-quartz2 could handle this itself, it would be simpler for end users.

Separately, there is an issue in camel's QuartzEndpoint.addJobInScheduler(). We were getting misfires on some nodes of the cluster because of the following:

a) If the trigger does not exist in the DB, the endpoint schedules the job.
b) But this is not an atomic transaction - between the lookup of the trigger in the DB and the call to scheduleJob(), another node in the cluster may have created the same trigger, so scheduleJob() fails with ObjectAlreadyExistsException.
c) Misfires then happen on that cluster node, because the Quartz component / the camel context itself never gets started.

The proposed change is the catch block for ObjectAlreadyExistsException below:

    private void addJobInScheduler() throws Exception {
        // Add or use existing trigger to/from scheduler
        Scheduler scheduler = getComponent().getScheduler();
        Trigger trigger = scheduler.getTrigger(triggerKey);
        if (trigger == null) {
            JobDetail jobDetail = createJobDetail();
            trigger = createTrigger(jobDetail);
            updateJobDataMap(jobDetail);
            // Schedule it now. Remember that the scheduler might not be started yet,
            // but we can schedule now.
            try {
                Date nextFireDate = scheduler.scheduleJob(jobDetail, trigger);
                if (LOG.isInfoEnabled()) {
                    LOG.info("Job {} (triggerType={}, jobClass={}) is scheduled. Next fire date is {}",
                            new Object[]{trigger.getKey(), trigger.getClass().getSimpleName(),
                                         jobDetail.getJobClass().getSimpleName(), nextFireDate});
                }
            } catch (ObjectAlreadyExistsException e) {
                // Double-check: in clustered mode another node may have stored
                // the job & trigger between our lookup and scheduleJob().
                if (!getComponent().isClustered()) {
                    throw e;
                } else {
                    trigger = scheduler.getTrigger(triggerKey);
                    if (trigger == null) {
                        throw new SchedulerException("Trigger could not be found in quartz scheduler.");
                    }
                }
            }
        } else {
            ensureNoDupTriggerKey();
        }
    }

Can this correction be made in QuartzEndpoint.java?

Thanks,
Lakshmi
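
PS: For reference, below is a minimal sketch along the lines of points 1-2 above, not the exact code. It rests on several assumptions: that Event Admin re-publishes framework bundle events on the org/osgi/framework/BundleEvent/UNINSTALLED topic, that the shared Scheduler is exported as an OSGi service, and that jobs are registered in a group named after the owning bundle - the group mapping in particular is hypothetical and has to be adapted to the actual job naming used by the routes.

    import java.util.Dictionary;
    import java.util.Hashtable;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.event.Event;
    import org.osgi.service.event.EventConstants;
    import org.osgi.service.event.EventHandler;
    import org.quartz.JobKey;
    import org.quartz.Scheduler;
    import org.quartz.impl.matchers.GroupMatcher;

    public class QuartzJobCleanupActivator implements BundleActivator {

        public void start(final BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            // Event Admin re-publishes framework bundle events under this topic.
            props.put(EventConstants.EVENT_TOPIC, "org/osgi/framework/BundleEvent/UNINSTALLED");

            context.registerService(EventHandler.class.getName(), new EventHandler() {
                public void handleEvent(Event event) {
                    String symbolicName = (String) event.getProperty(EventConstants.BUNDLE_SYMBOLICNAME);
                    try {
                        // Assumption: the shared Scheduler is available as an OSGi service.
                        ServiceReference<Scheduler> ref = context.getServiceReference(Scheduler.class);
                        if (ref == null) {
                            return;
                        }
                        Scheduler scheduler = context.getService(ref);
                        // Hypothetical mapping: jobs are assumed to live in a group
                        // named after the owning bundle; adapt to your naming scheme.
                        for (JobKey jobKey : scheduler.getJobKeys(GroupMatcher.jobGroupEquals(symbolicName))) {
                            // deleteJob removes the job and all of its triggers
                            // from the (clustered JDBC) job store.
                            scheduler.deleteJob(jobKey);
                        }
                    } catch (Exception e) {
                        // cleanup is best-effort; log and continue
                    }
                }
            }, props);
        }

        public void stop(BundleContext context) {
            // the service registration is cleaned up automatically when this bundle stops
        }
    }

A plain BundleListener registered via BundleContext.addBundleListener() would also work and avoids the Event Admin dependency; the Event Admin variant is shown here because that is the approach I described above.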
The issue was because on the un-deployment of 1 camel blueprint bundle (with camel quartz2 route),* the quartz job data is not deleted from db - if it is clustered quartz.* Unfortunately, we do not want to delete the job data, when the route is stopped using RoutePolicySupport class, as the main intent from clustered quartz is job recovery. - The scheduler will be shut down (QuartzComponent: doStop()) if there are no more jobs (if the scheduler is not shared across camel context bundles) & it works fine. - But if the scheduler configuration / scheduler instance is shared across camel quartz routes / bundles, the scheduler continues to run. - When the scheduler acquires next trigger, the trigger related to undeployed bundle is also obtained & then it tries to execute that trigger by executing CamelJob class from uninstalled bundle, using CascadingClassLoaderHelper. - If it cannot load the class for that trigger, it throws exception and the rest of the triggers do not get executed at that time - So we get misfires. Please refer to line no. 876 in org.quartz.impl.jdbcjobstore.StdJDBCDelegate.java - this quartz class throws exception, if job class is not loaded and does not proceed further. job.setJobClass(loadHelper.loadClass(rs .getString(COL_JOB_CLASS))); 1. I have written an osgi EventHandler service that will listen to 'bundle undeploy' events, that get published. 2. If the osgi bundle related to camel quartz2 is undeployed, it will remove the corresponding job data from DB. If this can be handled by camel quartz2, it will become simple for end-users. a) There is an issue in camel QuartzEndpoint.java in addJobInScheduler(). We were getting misfires in some nodes of the cluster, due to below issue. a) If the trigger does not exist in DB, it tries to schedule the job b) But this is not an atomic transaction - After the call to find a trigger from DB is made, some other node in the cluster could have created the trigger, resulting in ObjectAlreadyExistsException when call to schedule job is made c) Then misfires happen in that cluster node, as the Quartz component / camel context itself does not get started. private void addJobInScheduler() throws Exception { // Add or use existing trigger to/from scheduler Scheduler scheduler = getComponent().getScheduler(); JobDetail jobDetail; Trigger trigger = scheduler.getTrigger(triggerKey); if (trigger == null) { jobDetail = createJobDetail(); trigger = createTrigger(jobDetail); updateJobDataMap(jobDetail); // Schedule it now. Remember that scheduler might not be started it, but we can schedule now. try{ Date nextFireDate = scheduler.scheduleJob(jobDetail, trigger); if (LOG.isInfoEnabled()) { LOG.info("Job {} (triggerType={}, jobClass={}) is scheduled. Next fire date is {}", new Object[] {trigger.getKey(), trigger.getClass().getSimpleName(), jobDetail.getJobClass().getSimpleName(), nextFireDate}); } } * catch(ObjectAlreadyExistsException e){ //double-check if Some other VM might has already stored the job & trigger in clustered mode if(!(getComponent().isClustered())){ throw e; } else{ trigger = scheduler.getTrigger(triggerKey); if(trigger==null){ throw new SchedulerException("Trigger could not be found in quartz scheduler."); } } }* } else { ensureNoDupTriggerKey(); } Can the above correction in QuartzComponent.java be made? Thanks, Lakshmi -- View this message in context: http://camel.465427.n5.nabble.com/Quartz-job-data-deletion-in-clustered-quartz2-tp5757508p5758806.html Sent from the Camel - Users mailing list archive at Nabble.com.