I currently have a 4-node Spark setup: 1 master and 3 workers running in
Spark standalone mode. I am stress testing a Spark application I wrote that
reads data from Kafka and writes it into Redshift. I'm pretty happy with the
throughput (reading about 6k messages per second out of Kafka), but I've
noticed just from watching top on the worker nodes that one node seems
totally underutilized. It stays at near-zero load and its memory profile
never changes.

For instance, on the two workers that are doing work I see their free memory
go from 3.0G to about 1.5G as I start loading data into Kafka, but the 3rd
node stays at 3G free.

Have I misconfigured something? I followed the standalone setup guide, and I
see all 3 workers registered with all of their cores reported as in use.


