I am running a standalone Spark cluster with 2 workers, each with 2 cores.
I am loading and processing a relatively large chunk of data, and I am
getting task failures " ". From posts and discussions on the mailing
list, it seems the failures could be related to the amount of data being
processed per partition; if I have understood correctly, I should use
smaller partitions (but more of them)?!

Is there any way to set the number of partitions dynamically, either in
"spark-env.sh" or in the submitted Spark application?
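For context, this is the kind of thing I have tried so far: passing the
parallelism setting at submit time. This is only a sketch, and the class
name, jar name, and the value 16 are placeholders, not my real job:

```shell
# Raise the default number of partitions Spark uses after shuffle
# operations (e.g. reduceByKey, groupByKey). 16 here is an example value;
# a common starting point is 2-4 partitions per core in the cluster.
spark-submit \
  --conf spark.default.parallelism=16 \
  --class com.example.MyApp \
  my-app.jar
```

Within the application itself, I understand one can also pass a
`minPartitions` argument to `sc.textFile(...)` when loading, or call
`rdd.repartition(n)` on an existing RDD to split it into more, smaller
partitions, but I am unsure which of these is the right approach here.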


best,
/Shahab
