Hello all.

I am trying to receive events from Flume and process them on a Spark
cluster using Spark Streaming, following the Spark Streaming Programming
Guide
<https://spark.incubator.apache.org/docs/latest/streaming-programming-guide.html>.

If I understand correctly, the FlumeReceiver listens on a certain
address:port, and the Flume Avro sink pushes events to that address. If the
receiver moves to another worker, that address changes and the Flume flow
breaks. For example, if I set up a Flume sink to send to 192.168.1.11:4141,
then the FlumeReceiver has to stay on 192.168.1.11 for this to work.
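
For reference, here is roughly how I create the stream (a minimal sketch
modeled on the FlumeEventCount example; the application name and the
host/port are just the example values from above):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumeCount")
    val ssc = new StreamingContext(conf, Seconds(2))

    // The receiver binds to this host:port, so the Flume Avro sink
    // only reaches it while the receiver runs on 192.168.1.11.
    val stream = FlumeUtils.createStream(ssc, "192.168.1.11", 4141)

    // Count the events in each batch, as FlumeEventCount does.
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}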

I have tried the example FlumeEventCount. It works in local mode but fails
to run in cluster mode.
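
For what it's worth, I am launching the example roughly like this (the
master URL and addresses are placeholders for my setup, and the package
name is the one from my Spark build):

# local mode, works:
./bin/run-example org.apache.spark.streaming.examples.FlumeEventCount local[2] 192.168.1.11 4141

# cluster mode, fails:
./bin/run-example org.apache.spark.streaming.examples.FlumeEventCount spark://master:7077 192.168.1.11 4141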

How can I make FlumeInputDStream work in cluster mode?

Thank you.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-FlumeInputDStream-in-spark-cluster-tp1604.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.