Hello Spark fellows :),
I think I need some help understanding how .cache and task input work within a
job.
I have a 7 GB input matrix in HDFS that I load using .textFile(). I also have
a config file which contains an array of 12 Logistic Regression Model
parameters, loaded as an Array[String]; let's call it models.
I then apply each model to each line (as a LabeledPoint) of my matrix,
as follows:
val matrix = sc.textFile(/* HDFS path to matrix */)
  // ... parse each line to build an RDD[(String, LabeledPoint)]

models.map { model =>
  val (weights, intercept) = // parse model, which is an Array[String], into a
                             // Vector and intercept for LogisticRegressionModel
  val rl = new LogisticRegressionModel(weights, intercept)
  rl.setThreshold(0.5)
  matrix.flatMap { case (_, point) =>
    rl.predict(point.features) match {
      case 1.0 => Seq("cool")
      case 0.0 => Seq()
    }
  }
}.reduce(_ ++ _)
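For context, the thresholded prediction each model performs boils down to a dot product plus intercept passed through a sigmoid, then compared against the 0.5 threshold. A minimal plain-Scala sketch of that logic (no Spark; the weights and features are made-up illustration values):

```scala
// Sketch of what LogisticRegressionModel.predict does with a 0.5 threshold:
// sigmoid(w . x + intercept), mapped to 1.0 or 0.0.
object PredictSketch {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  def predict(weights: Array[Double], intercept: Double, features: Array[Double]): Double = {
    // Margin = dot product of weights and features, plus the intercept.
    val margin = weights.zip(features).map { case (w, x) => w * x }.sum + intercept
    if (sigmoid(margin) > 0.5) 1.0 else 0.0
  }

  def main(args: Array[String]): Unit = {
    val weights = Array(0.8, -0.4)
    println(predict(weights, 0.1, Array(2.0, 1.0)))  // positive margin -> 1.0
    println(predict(weights, 0.1, Array(-2.0, 1.0))) // negative margin -> 0.0
  }
}
```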
It seems natural to cache the matrix, since otherwise it will be read from
HDFS 12 times, once per model.
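The re-read behaviour can be illustrated without Spark at all. A plain-Scala sketch (hypothetical loadMatrix standing in for the HDFS read) counting how many times the expensive load runs with and without an in-memory copy:

```scala
// Why caching matters here: without it, every pass over the "matrix"
// re-runs the expensive load, once per model.
object CacheSketch {
  // One full load per model: 12 models -> 12 reads.
  def uncachedReads(nModels: Int): Int = {
    var reads = 0
    def loadMatrix(): Seq[Int] = { reads += 1; Seq(1, 2, 3) } // stands in for the HDFS read
    (1 to nModels).foreach(_ => loadMatrix().sum)
    reads
  }

  // Load once, reuse the in-memory copy: analogous to matrix.cache()
  // followed by the first action that materialises it.
  def cachedReads(nModels: Int): Int = {
    var reads = 0
    def loadMatrix(): Seq[Int] = { reads += 1; Seq(1, 2, 3) }
    val cached = loadMatrix()
    (1 to nModels).foreach(_ => cached.sum)
    reads
  }
}
```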
Sooooo... I launch my job on a 3-machine YARN cluster, using 18 executors with
4 GB of memory and 1 core each.
When I don't cache the matrix, the job executes in 12 minutes, and in the
Spark UI I can see that each task has a 128 MB Hadoop input, which is expected.
When I cache the matrix before the models.map part, the first tasks process
data from Hadoop input, and the matrix ends up completely stored in memory
(verified in the Storage tab of the Spark UI). Unfortunately, the job then
takes 48 minutes instead of 12, because very few tasks actually read directly
from memory afterwards: most tasks have network input and NODE_LOCAL locality
level, and those tasks take three times as long as tasks with Hadoop input or
memory input.
Can you confirm my initial thoughts, namely that:
        * there are 18 executors on 3 machines, so 6 executors per machine;
        * one partition of the matrix RDD is stored on one executor;
        * when a task needs to compute a partition that is in memory, the
scheduler tries to allocate it to the executor that stores the partition;
        * if that executor is already busy with a task, the task goes to
another executor on the same machine and "downloads" the partition, hence the
network input?
If that is the case, how would you deal with the problem?
        * Answer 1: a higher number of cores per executor? (That got me a
"Container [pid=55355,containerID=container_1422284274724_0066_01_000010] is
running beyond physical memory limits" error from YARN, sadly.)
        * Answer 2: a higher spark.locality.wait? Each task takes about 8
seconds, and the default is 3 seconds.
        * Answer 3: replicate the partitions?
        * Answer 4: something only you guys know that I am not aware of?
        * Bonus answer: don't cache, it is not needed here.
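On the bonus answer: one way to avoid both the cache and the 12 passes is to invert the loops, applying all 12 models inside a single pass over the matrix (with the parsed models shipped to the executors, e.g. via a broadcast variable). A plain-Scala sketch of the inversion (no Spark; predicts is a hypothetical stand-in for rl.predict against the threshold), showing that one pass over the points yields the same results as one pass per model:

```scala
object SinglePassSketch {
  // Hypothetical stand-in for model.predict(features) exceeding the threshold.
  def predicts(model: Double, point: Double): Boolean = point * model > 0.5

  // One pass over the data, all models applied per point. In Spark this would
  // be matrix.flatMap { point => models.flatMap(...) }, reading HDFS once.
  def singlePass(models: Seq[Double], points: Seq[Double]): Seq[String] =
    points.flatMap(p => models.collect { case m if predicts(m, p) => "cool" })

  // The original shape: one full pass per model, results concatenated.
  def multiPass(models: Seq[Double], points: Seq[Double]): Seq[String] =
    models.flatMap(m => points.collect { case p if predicts(m, p) => "cool" })
}
```

Both produce the same number of "cool" hits; the single-pass version just reads each point once instead of once per model.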
Regards,
Fanilo
________________________________
This e-mail and the documents attached are confidential and intended solely for
the addressee; it may also be privileged. If you receive this e-mail in error,
please notify the sender immediately and destroy it. As its integrity cannot be
secured on the Internet, the Worldline liability cannot be triggered for the
message content. Although the sender endeavours to maintain a computer
virus-free network, the sender does not warrant that this transmission is
virus-free and will not be liable for any damages resulting from any virus
transmitted.