Hi Abhishek,

Can you please provide some additional details:


   - What version of Helix are you using?
   - What do you mean by "hard coded the task to nodes in cluster"? Does
   this mean you are setting a custom IdealState? (See the sketch below.)

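If it is the second case, manual placement is usually expressed through a
CUSTOMIZED ideal state. A rough sketch with HelixAdmin (the cluster,
resource, and instance names below are placeholders):

  import org.apache.helix.HelixAdmin;
  import org.apache.helix.manager.zk.ZKHelixAdmin;
  import org.apache.helix.model.IdealState;

  public class CustomIdealStateExample {
    public static void main(String[] args) {
      // Placeholder names; substitute your own cluster/resource/instances.
      String zkAddr = "localhost:2181";
      String cluster = "MyCluster";
      String resource = "MyResource";

      HelixAdmin admin = new ZKHelixAdmin(zkAddr);
      IdealState is = admin.getResourceIdealState(cluster, resource);

      // CUSTOMIZED mode tells the controller to honor the explicit
      // partition -> instance -> state map below instead of computing
      // the placement itself.
      is.setRebalanceMode(IdealState.RebalanceMode.CUSTOMIZED);
      is.setPartitionState("MyResource_0", "node1_12918", "MASTER");
      is.setPartitionState("MyResource_0", "node2_12918", "SLAVE");

      admin.setResourceIdealState(cluster, resource, is);
    }
  }
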
You have to subscribe to the list (email: [email protected])
to reply to the user mailing list. We are also available on IRC at
#apachehelix if it's easier to get more information there.

Thanks,
Kishore G


On Thu, Apr 2, 2015 at 6:20 AM, Abhishek Ghosh <[email protected]>
wrote:

> Hi,
>
> We are trying to use the task scheduling framework at the cluster level.
> The use case is that we should be able to divide a job into tasks and
> assign each task to a particular cluster. Is this possible? If yes, can
> you please provide some documentation on how this can be achieved?
>
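As a rough sketch (assuming a recent 0.6.x release with the task
framework; the workflow, job, command, and resource names are
placeholders, and "manager" is an already-connected HelixManager), a job
is divided into tasks and submitted through TaskDriver roughly like this:

  import org.apache.helix.HelixManager;
  import org.apache.helix.task.JobConfig;
  import org.apache.helix.task.TaskDriver;
  import org.apache.helix.task.Workflow;

  public class SubmitJobExample {
    public static void submit(HelixManager manager) {
      JobConfig.Builder job = new JobConfig.Builder()
          // Command registered with a TaskFactory on the participants.
          .setCommand("MyTaskCommand")
          // One task is generated per partition of the target resource.
          .setTargetResource("MyResource");

      Workflow workflow = new Workflow.Builder("MyWorkflow")
          .addJob("MyJob", job)
          .build();

      // The TaskDriver submits the workflow to the controller for scheduling.
      new TaskDriver(manager).start(workflow);
    }
  }
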
> So far we are able to schedule a job and assign its tasks to the nodes.
> We have configured the clusters with the MasterSlave state model and
> restricted the tasks to run on the master, so whenever a master goes
> down, the node that has transitioned from slave to master takes up the
> task. To keep a task assigned to a particular cluster we have essentially
> hardcoded the tasks to nodes in the cluster (based on the external view
> of the resource).
>
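If I understand the setup correctly, the hardcoding based on the external
view may not be necessary: a targeted job can be restricted to the
instances currently holding the MASTER replica. A minimal sketch, with
placeholder command and resource names:

  import java.util.Collections;

  import org.apache.helix.task.JobConfig;

  public class MasterOnlyJobExample {
    public static JobConfig.Builder masterOnlyJob() {
      return new JobConfig.Builder()
          .setCommand("MyTaskCommand")
          .setTargetResource("MyResource")
          // Run each task only on the instance currently holding the MASTER
          // replica of the matching partition; when mastership moves after a
          // failure, the task placement follows on the next scheduling pass.
          .setTargetPartitionStates(Collections.singleton("MASTER"));
    }
  }
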
> We want to scale up this application so that a node can serve more than
> one task at a time. Is this possible? If yes, can you please provide
> documentation (including Target Partitioner, rebalancer, and
> provisioner)? Also, can you please give some info on how to use
> TaskCallbackContext for various events as a controller.
>
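On concurrency and callbacks: JobConfig.Builder also exposes a
per-instance concurrency setting (setNumConcurrentTasksPerInstance in
recent releases), and TaskCallbackContext is what your TaskFactory
receives on the participant side each time Helix creates a task. A sketch
of the usual registration, with an illustrative command name:

  import java.util.HashMap;
  import java.util.Map;

  import org.apache.helix.HelixManager;
  import org.apache.helix.task.Task;
  import org.apache.helix.task.TaskCallbackContext;
  import org.apache.helix.task.TaskFactory;
  import org.apache.helix.task.TaskResult;
  import org.apache.helix.task.TaskStateModelFactory;

  public class ParticipantTaskSetup {
    // Register a factory for the command used in the JobConfig. Helix hands
    // each new task a TaskCallbackContext, from which the job/task
    // configuration and the HelixManager are available to the running task.
    public static void register(HelixManager manager) {
      Map<String, TaskFactory> factories = new HashMap<>();
      factories.put("MyTaskCommand", new TaskFactory() {
        @Override
        public Task createNewTask(TaskCallbackContext context) {
          return new Task() {
            @Override
            public TaskResult run() {
              // context.getJobConfig() / context.getTaskConfig() carry the
              // per-job and per-task settings; do the actual work here.
              return new TaskResult(TaskResult.Status.COMPLETED, "done");
            }

            @Override
            public void cancel() {
              // Stop work when the controller cancels the task.
            }
          };
        }
      });

      manager.getStateMachineEngine().registerStateModelFactory(
          "Task", new TaskStateModelFactory(manager, factories));
    }
  }
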
> Thanks,
> Abhishek
>
