I would like to collect events from two sources and channel them to a single
final sink. The first flow streams data to Spark; those same events have also
been streamed to an HDFS endpoint in a separate flow from JMS. I'd like to be
able to use the events stored in HDFS for replay, by dropping specific event
files into a local spooling directory when needed.

I've tried several configurations: an agent (a1) running
jms => file-channel => avro-sink and an agent (a2) running
spool-dir => file-channel => avro-sink, both feeding an agent (a3) running
avro-source => file-channel => spark-sink. All three agents run on the same
host, which I suspect is the issue because of the ports. I get an
invalid-configuration error for the flow on a3 saying the port is already in
use. Is that the problem? Do you have any examples of a fan-in topology being
used for replay?
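For concreteness, here is a sketch of what I'm attempting. All hostnames, ports, and directory paths below are placeholders, not my real values; my understanding is that only the Avro source on a3 should bind the port, while the Avro sinks on a1 and a2 merely connect to it:

```properties
# a1: jms => file-channel => avro-sink
a1.sources = jms-src
a1.channels = ch1
a1.sinks = avro1
a1.sources.jms-src.type = jms
# ... JMS connection factory / destination properties elided ...
a1.sources.jms-src.channels = ch1
a1.channels.ch1.type = file
# checkpointDir/dataDir elided; must be unique per channel on a shared host
a1.sinks.avro1.type = avro
a1.sinks.avro1.channel = ch1
a1.sinks.avro1.hostname = localhost
a1.sinks.avro1.port = 4141

# a2: spool-dir => file-channel => avro-sink (connects to the same port)
a2.sources = spool-src
a2.channels = ch2
a2.sinks = avro2
a2.sources.spool-src.type = spooldir
a2.sources.spool-src.spoolDir = /var/flume/replay
a2.sources.spool-src.channels = ch2
a2.channels.ch2.type = file
a2.sinks.avro2.type = avro
a2.sinks.avro2.channel = ch2
a2.sinks.avro2.hostname = localhost
a2.sinks.avro2.port = 4141

# a3: avro-source (binds 4141 exactly once) => file-channel => spark-sink
a3.sources = avro-in
a3.channels = ch3
a3.sinks = spark
a3.sources.avro-in.type = avro
a3.sources.avro-in.bind = 0.0.0.0
a3.sources.avro-in.port = 4141
a3.sources.avro-in.channels = ch3
a3.channels.ch3.type = file
a3.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
a3.sinks.spark.channel = ch3
a3.sinks.spark.hostname = localhost
a3.sinks.spark.port = 9999
```

Since all three agents share one host, I've also made sure each file channel would use its own checkpoint and data directories, which I've elided above.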

Thank you!
James T. Shofner

