Hi,

I'm trying to migrate a map-reduce program to Spark, and as part of that I
ported the code from Java to Scala. The map-reduce program basically loads an
HDFS file and, for each line in the file, applies several transformation
functions available in various external libraries.
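Roughly, the Spark version looks like the sketch below (the class and path
names are placeholders for my actual ones):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.hadoop.io.Text

  // Stand-in for one of the external-library transformation classes;
  // like the real ones, it does not implement Serializable.
  class SomeExternalTransformer {
    def transform(t: Text): String = t.toString.toUpperCase
  }

  object MigrationJob {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("mr-migration"))

      // Created on the driver but referenced inside map(), so Spark has to
      // serialize them into the task closure -- which is where it fails.
      val transformer = new SomeExternalTransformer()
      val text = new Text() // hadoop's Text is not Serializable either

      sc.textFile("hdfs:///path/to/input")
        .map { line =>
          text.set(line)
          transformer.transform(text)
        }
        .saveAsTextFile("hdfs:///path/to/output")
    }
  }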

When I execute this on Spark, it throws "Task not serializable" exceptions
for each and every class used from these external libraries. I added
serialization to the few classes that are within my scope, but there are
several other classes that are out of my scope, like
org.apache.hadoop.io.Text.
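For the classes under my control, the fix was just to mark them serializable,
along these lines (MyWrapper stands in for one of my own classes):

  // Marking my own wrapper Serializable stops Spark from complaining
  // about this particular class when it ships the closure.
  class MyWrapper extends Serializable {
    def clean(line: String): String = line.trim // stand-in logic
  }

But obviously I can't do that for classes like Text that I don't own.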

How can I overcome these exceptions?

~Sarath.
