Hi,

Yes, actually I've created a custom scheduler for this case as well. I think
I can take the supervisor information & executor#getTasks from there too.
Thanks for the idea!
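For anyone following along, the same-cluster filtering I have in mind looks roughly like the sketch below. It is plain Java so it stands alone: the task→supervisor-group map is a stand-in for what the custom scheduler would actually expose, and all names here are illustrative, not Storm API.

```java
import java.util.*;

public class ClusterAwareGroupingSketch {

    // Given a (hypothetical) map from task id to the supervisor group
    // ("A" or "B") hosting it, keep only the target tasks that live in
    // the same group as the sending task.
    static List<Integer> sameGroupTasks(Map<Integer, String> taskToGroup,
                                        int sourceTask,
                                        List<Integer> targetTasks) {
        String group = taskToGroup.get(sourceTask);
        List<Integer> result = new ArrayList<>();
        for (int t : targetTasks) {
            if (group != null && group.equals(taskToGroup.get(t))) {
                result.add(t);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, String> taskToGroup = new HashMap<>();
        taskToGroup.put(1, "A");
        taskToGroup.put(2, "A");
        taskToGroup.put(3, "B");
        taskToGroup.put(4, "B");
        // Task 1 is in group A, so only task 2 qualifies.
        System.out.println(sameGroupTasks(taskToGroup, 1, Arrays.asList(2, 3, 4))); // prints [2]
    }
}
```

In a real CustomStreamGrouping this filter would run inside chooseTasks(), with the group map populated once in prepare() from whatever channel the scheduler uses to publish its placement decisions.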

On Fri, May 22, 2015 at 11:01 AM, Matthias J. Sax <
[email protected]> wrote:

> Hi,
>
> I think it is a tricky problem you want to solve. The TopologyContext
> object does not give you enough information to get it done. Maybe you
> can get it done by implementing a custom scheduler.
>
> An example of how to implement a custom scheduler is given here:
>
> https://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
>
>
> -Matthias
>
> On 05/21/2015 08:58 PM, Ken Danniswara wrote:
> > Hi,
> >
> > I have a question about creating a CustomStreamGrouping. Is it possible
> > to do a mapping from all targetTasks to the workers where they are located?
> >
> > For example, in the prepare() method I can build a mapping from TaskID
> > to ComponentID (e.g. Map<TaskID, ComponentID>). But rather than only
> > the componentID string, I need higher-level information, such as the
> > workers or (if possible) even their supervisors. What I'm trying to do
> > is divide the supervisors/workers into 2 different clusters
> > (Supervisors_A and Supervisors_B) and only send the stream within the
> > same cluster. So I need to know where the targetTasks are located.
> >
> > At first, I was thinking of sending the information from Nimbus, which
> > has the full picture. But then I'm unsure how to get that information
> > there before the prepare() method in my CustomGrouping class is
> > called.
> >
> > Has anyone run into a similar problem? Thank you
> >
> > Best Regards,
> > Ken Danniswara
>
>
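As a footnote to the quoted question: the Map<TaskID, ComponentID> can be built in prepare() from the context's per-task component lookup (assuming the WorkerTopologyContext#getComponentId(taskId) accessor here). A self-contained sketch, with the Storm context replaced by a plain map so the snippet compiles on its own:

```java
import java.util.*;

public class TaskComponentMapSketch {

    // Build the Map<TaskID, ComponentID> for the given target tasks.
    // The contextIds map stands in for WorkerTopologyContext#getComponentId;
    // in a real CustomStreamGrouping the lookup would use the context
    // object passed to prepare().
    static Map<Integer, String> buildTaskToComponent(Map<Integer, String> contextIds,
                                                     List<Integer> targetTasks) {
        Map<Integer, String> taskToComponent = new TreeMap<>();
        for (int task : targetTasks) {
            taskToComponent.put(task, contextIds.get(task));
        }
        return taskToComponent;
    }

    public static void main(String[] args) {
        Map<Integer, String> contextIds = new HashMap<>();
        contextIds.put(3, "bolt-a");
        contextIds.put(4, "bolt-b");
        System.out.println(buildTaskToComponent(contextIds, Arrays.asList(3, 4)));
    }
}
```

As the thread notes, going beyond component ids (to workers or supervisors) is not covered by this context, which is why a custom scheduler is suggested above.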
