Hi,

This is my summary of the gap between expected behavior and actual behavior.

FlumeEventCount spark://<spark_master_hostname>:7077 <address> <port>

Expected: an 'agent' listening on (bound to) <address>:<port>. In the
context of Spark, this agent should run on one of the slaves, specifically
the slave whose IP/hostname is <address>.

Observed: a random slave is chosen from the pool of available slaves. In a
cluster environment, that slave is therefore likely not the one with the
actual <address>, which in turn causes the 'Fail to bind to ...' error:
the slave running the code that tries to bind to <address>:<port> has a
different IP.
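The underlying OS behavior can be reproduced outside Spark: a host can only
bind a listening socket to an address assigned to one of its own interfaces.
This is a minimal sketch (not Spark code); can_bind is a hypothetical helper,
and 203.0.113.1 is a documentation-only address standing in for another
slave's IP.

```python
import socket

def can_bind(address: str, port: int = 0) -> bool:
    """Try to bind a TCP socket to address:port; return True on success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((address, port))
        return True
    except OSError:
        # e.g. EADDRNOTAVAIL: this machine does not own that address
        return False
    finally:
        s.close()

print(can_bind("127.0.0.1"))    # local loopback: bind succeeds
print(can_bind("203.0.113.1"))  # another host's address: bind fails
```

This is exactly what happens when the FlumeReceiver lands on the wrong
slave: it tries to bind <address>, which belongs to a different machine.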



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-FlumeInputDStream-in-spark-cluster-tp1604p2990.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
