Hi All,

With the introduction of Quartz's clustering, the following changes will
also be introduced.

TaskServerCount

The TaskServerCount was maintained for scenarios where all members in the
cluster fail/are shut down. In such a situation, without the
TaskServerCount, the first node to come back up would schedule all the
tasks on itself because at that particular moment only that node is
available. The TaskServerCount ensured that when all the nodes fail/are
shut down, rescheduling begins only after the number of nodes specified by
the TaskServerCount have come back up, so that the tasks are balanced
across them.

This approach would not be needed with the introduction of Quartz’s
clustering features, since Quartz will be handling the load balancing when
more nodes join, rescheduling the tasks on the newly joined nodes as
required. However, Quartz's load balancing is currently near-random, which
may limit how evenly the tasks are distributed across the nodes.

TaskLocationResolver

The TaskLocationResolver interface and its concrete implementations
RandomTaskLocationResolver, RoundRobinTaskLocationResolver, and
RuleBasedTaskLocationResolver are no longer used, since Quartz itself
decides which nodes the tasks are scheduled on and makes the
load-balancing decisions.

Quartz Properties File and Shared Database

The quartz.properties file is used to initialize the scheduler factory. For
Quartz's clustering with the JDBC-JobStore to work, all members of the
cluster need to use a copy of the same quartz.properties file (please refer
to [1] for an example properties file and more details). Currently, with
ntask, we default to a set of standard Quartz properties if a
quartz.properties file has not been introduced in the /conf/etc/ directory,
irrespective of whether the task manager is standalone or clustered.

With Quartz clustering we would need to introduce additional properties
such as the following:

org.quartz.jobStore.isClustered set to true
org.quartz.jobStore.clusterCheckinInterval (how often, in milliseconds, an
instance "checks in" with the other members of the cluster)
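A clustered quartz.properties could then look roughly like the following
sketch; the scheduler name, thread count, and interval values are
illustrative placeholders, and [1] describes the full set of options:

```properties
# All cluster members must use the same scheduler name;
# instance ids are generated automatically per node.
org.quartz.scheduler.instanceName = NTaskScheduler
org.quartz.scheduler.instanceId = AUTO

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10

# JDBC-JobStore with clustering enabled, coordinating via the shared database.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.jobStore.dataSource = quartzDS
```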

In order to provide a default configuration in a clustered scenario, we
could maintain a different set of default values to use when the task
manager is clustered, and initialize the scheduler factory with these
properties if the user hasn't specified custom ones. Since the nodes need
to share a database, we also need to provide a datasource configuration in
this properties file. The datasource configuration can either be specified
in the quartz.properties file itself, or we can specify a JNDI URL to
perform a lookup:

org.quartz.jobStore.dataSource = quartzDS
org.quartz.dataSource.quartzDS.jndiURL

where quartzDS is the name of the datasource configured for the shared
database.
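The fallback behaviour described above (use the user's quartz.properties if
present, otherwise a set of defaults appropriate to the deployment mode)
could be sketched as follows; the path, class and method names, and default
values here are illustrative assumptions, not the actual ntask
implementation:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Sketch: prefer the user-supplied conf/etc/quartz.properties, otherwise
// fall back to built-in defaults chosen per deployment mode. The defaults
// shown are placeholders, not the actual ntask values.
public class QuartzPropertiesLoader {

    static Properties defaultProperties(boolean clustered) {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.threadCount", "10");
        if (clustered) {
            // Clustering requires the JDBC-JobStore and a shared database.
            props.setProperty("org.quartz.jobStore.class",
                    "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.setProperty("org.quartz.jobStore.isClustered", "true");
            props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");
        } else {
            // Standalone task managers can keep the in-memory job store.
            props.setProperty("org.quartz.jobStore.class",
                    "org.quartz.simpl.RAMJobStore");
        }
        return props;
    }

    static Properties loadQuartzProperties(Path confDir, boolean clustered)
            throws IOException {
        Path userFile = confDir.resolve("etc").resolve("quartz.properties");
        if (Files.exists(userFile)) {
            // User-supplied configuration wins; it must be identical on all nodes.
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(userFile.toFile())) {
                props.load(in);
            }
            return props;
        }
        return defaultProperties(clustered);
    }

    public static void main(String[] args) throws IOException {
        Properties props = loadQuartzProperties(Paths.get("conf"), true);
        System.out.println(props.getProperty("org.quartz.jobStore.isClustered"));
    }
}
```

The resulting Properties object would then be passed to Quartz's
StdSchedulerFactory(Properties) constructor to initialize the scheduler
factory.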

Specifying the full datasource configuration in the quartz.properties file
would not work, since this would require us to hard-code the database name,
username, password, etc. in the default properties. We could instead
introduce a default datasource (e.g. quartzDS) and provide a default
datasource configuration in master-datasources.xml. A user can override
these defaults by specifying the required configurations in a
/conf/etc/quartz.properties file, which needs to be the same across the
nodes.
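As a sketch, the default datasource entry in master-datasources.xml might
look something like the following; the JNDI name, connection URL, driver,
and credentials are placeholders, not proposed defaults:

```xml
<datasource>
    <name>quartzDS</name>
    <description>Shared datasource for Quartz clustering</description>
    <jndiConfig>
        <name>jdbc/quartzDS</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/quartzdb</url>
            <username>quartz_user</username>
            <password>quartz_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```

Quartz would then locate this datasource through the JNDI name given in
org.quartz.dataSource.quartzDS.jndiURL.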

Possible errors on the user’s part with the introduction of this
quartz.properties file would include the following, and would need to be
emphasized when setting up/troubleshooting:

   - User adds a quartz.properties file on some nodes but not on others, or
     adds different properties files on different nodes.
   - User doesn’t enable clustering in quartz.properties.

[1]
http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html

Feedback would be appreciated.

Thank you,
Maryam

On Thu, Sep 14, 2017 at 1:22 PM, Maryam Ziyad <[email protected]> wrote:

> Hi All,
>
> With the requirement to remove Hazelcast from task scheduling related
> implementation in ESB/EI clusters, due to possible issues in partition
> scenarios, it was decided to introduce an alternative to Hazelcast in ntask
> [1]. Currently ntask uses Hazelcast for coordinator election (where the
> oldest member is elected as the coordinator) and it is this coordinator
> that handles rescheduling of tasks (fail-over). ntask also uses the
> Hazelcast distributed executor service (IExecutorService) with task
> scheduling calls, to schedule tasks on specific members.
>
> Currently ntask already uses Quartz to schedule the tasks, but fail-over
> is handled explicitly by ntask. Scheduling related calls are executed on
> specific members via the distributed executor service by ntask.
>
> We are currently working on introducing Quartz's own clustering features
> which provide the required fail-over and load balancing functionality by
> coordinating via a database [2]. Please find attached a summary of the
> functionality/capabilities provided by Quartz clustering (summarizing [2]).
>
> With the introduction of Quartz's own clustering features, the
> ClusterGroupCommunicator in ntask will not be needed for task scheduling
> since the identification of nodes that have left, and the rescheduling of
> tasks that were being executed by them will be handled by Quartz. The other
> task scheduling related calls such as pausing and resuming will also be
> handled by Quartz and thus would not require identification of the
> particular member executing the task and then executing the relevant call
> by ntask.
>
> [1] "Updated Invitation: RDBMS based Coordinator Election for EI/ESB -
> Design Review @ Tue Aug 22, 2017 4pm - 5pm (Maryam Ziyad)"
> [2] http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html
>
> Feedback would be appreciated.
>
> Thank you,
> Maryam
> --
> *Maryam Ziyad Mohamed*
> Software Engineer | WSO2
>



-- 
*Maryam Ziyad Mohamed*
Software Engineer | WSO2
_______________________________________________
Architecture mailing list
[email protected]
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture