Use a custom scheduler to run the spout and bolt on the designated machines. See this: http://xumingming.sinaapp.com/885/twitter-storm-how-to-develop-a-pluggable-scheduler/
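As a rough sketch of the idea (assuming Storm 0.9.x package names; the topology name "my-topology", the spout component id "file-spout" and the supervisor metadata value "source-host" are placeholders you would replace), a scheduler that pins the spout to one supervisor and lets the default EvenScheduler place everything else could look like this:

package example;

import java.util.List;
import java.util.Map;

import backtype.storm.scheduler.Cluster;
import backtype.storm.scheduler.EvenScheduler;
import backtype.storm.scheduler.ExecutorDetails;
import backtype.storm.scheduler.IScheduler;
import backtype.storm.scheduler.SupervisorDetails;
import backtype.storm.scheduler.Topologies;
import backtype.storm.scheduler.TopologyDetails;
import backtype.storm.scheduler.WorkerSlot;

public class PinnedSpoutScheduler implements IScheduler {

    @Override
    public void prepare(Map conf) {
        // nothing to configure in this sketch
    }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        TopologyDetails topology = topologies.getByName("my-topology");

        if (topology != null && cluster.needsScheduling(topology)) {
            // executors still waiting for a slot, grouped by component id
            Map<String, List<ExecutorDetails>> pending =
                cluster.getNeedsSchedulingComponentToExecutors(topology);
            List<ExecutorDetails> spoutExecutors = pending.get("file-spout");

            if (spoutExecutors != null) {
                // find the supervisor whose storm.yaml carries
                // supervisor.scheduler.meta with name: "source-host"
                for (SupervisorDetails supervisor : cluster.getSupervisors().values()) {
                    Map meta = (Map) supervisor.getSchedulerMeta();
                    if (meta != null && "source-host".equals(meta.get("name"))) {
                        List<WorkerSlot> slots = cluster.getAvailableSlots(supervisor);
                        if (!slots.isEmpty()) {
                            // bind all spout executors to a free slot on that node
                            cluster.assign(slots.get(0), topology.getId(), spoutExecutors);
                        }
                        break;
                    }
                }
            }
        }

        // hand everything that is left to the default scheduler
        new EvenScheduler().schedule(topologies, cluster);
    }
}

You then point nimbus at it with storm.scheduler: "example.PinnedSpoutScheduler" in storm.yaml, and tag the source.host supervisor with supervisor.scheduler.meta (e.g. name: "source-host") so the scheduler can recognize it. The same approach works for pinning a sink bolt to dest.host.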
On 1/9/14, Harald Kirsch <[email protected]> wrote:
> Hi all,
>
> suppose you need to process input available on source.host and the
> results should finally end up on dest.host. A Storm topology shall do
> the processing.
>
> It is easy to write a spout that fetches the data and emits it into the
> topology. Similarly, a sink bolt can write the result somewhere.
>
> But now suppose that the data is available only locally on source.host
> in the file system. Is it possible and natural to make source.host a
> machine in the Storm cluster but somehow make sure that *only* the
> spout is executed on source.host? Similarly, would it be possible to
> bind a sink bolt to one specific machine, the dest.host?
>
> If this is not possible or not a preferred way to do it, are there any
> specific techniques used to provide input to a spout beyond whatever
> remote access methods happen to be available (smb, nfs, ssh, http)?
>
> Thanks for any hints,
> Harald.
