Github user brkyvz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9367#discussion_r43927641
  
    --- Diff: core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
    @@ -366,6 +367,72 @@ class SparkSubmitSuite
         }
       }
     
    +  // SPARK-11195
    +  test("classes are correctly loaded when tasks fail") {
    +    // Compile a simple jar that throws a user defined exception on the driver
    +    val tempDir = Utils.createTempDir()
    +    val srcDir = new File(tempDir, "repro/")
    +    srcDir.mkdirs()
    +    // scalastyle:off line.size.limit
    +    val mainSource = new JavaSourceFromString(new File(srcDir, "MyJob").getAbsolutePath,
    +      """package repro;
    +        |
    +        |import java.util.*;
    +        |import java.util.regex.*;
    +        |import org.apache.spark.*;
    +        |import org.apache.spark.api.java.*;
    +        |import org.apache.spark.api.java.function.*;
    +        |
    +        |public class MyJob {
    +        |  public static class MyException extends Exception {
    +        |  }
    +        |
    +        |  public static void main(String[] args) {
    +        |    SparkConf conf = new SparkConf();
    +        |    JavaSparkContext sc = new JavaSparkContext(conf);
    +        |
    +        |    JavaRDD rdd = sc.parallelize(Arrays.asList(new Integer[]{1}), 1).map(new Function<Integer, Boolean>() {
    +        |      public Boolean call(Integer x) throws MyException {
    +        |        throw new MyException();
    +        |      }
    +        |    });
    +        |
    +        |    try {
    +        |      rdd.collect();
    +        |
    +        |      assert(false); // should be unreachable
    +        |    } catch (Exception e) {
    +        |      // the driver should not have any problems resolving the exception class and determining
    +        |      // why the task failed.
    +        |
    +        |      Pattern unknownFailure = Pattern.compile(".*Lost task.*: UnknownReason.*", Pattern.DOTALL);
    +        |      Pattern expectedFailure = Pattern.compile(".*Lost task.*: repro.MyJob\\$MyException.*", Pattern.DOTALL);
    +        |
    +        |      assert(!unknownFailure.matcher(e.getMessage()).matches());
    +        |      assert(expectedFailure.matcher(e.getMessage()).matches());
    +        |    }
    +        |  }
    +        |}
    +      """.stripMargin)
    +    // scalastyle:on line.size.limit
    +    val sparkJar = "../assembly/target/scala-2.10/spark-assembly-1.5.1-hadoop2.2.0.jar"
    --- End diff --
    
    Why do you need to compile using Spark? Can't you just create your own exception, compile it, and pass that to Spark Submit? Then move all of the code here down to something like `SimpleApplicationTest`, where you create your exception through reflection and then throw it?
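    
    Rough sketch of what I mean (untested; `UserClassLoadingTest`, `repro.MyException`, and the message checks are placeholders). The exception class would be compiled on its own, with no Spark dependency, and shipped via `--jars`, so the app only touches it through reflection:
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext, SparkException}
    
    object UserClassLoadingTest {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf())
        try {
          sc.parallelize(Seq(1), 1).map { _ =>
            // Instantiate the user-defined exception through reflection, so this
            // file has no compile-time dependency on it. "repro.MyException" is a
            // placeholder for whatever class the test compiles into the extra jar.
            val clazz = Thread.currentThread().getContextClassLoader
              .loadClass("repro.MyException")
            throw clazz.newInstance().asInstanceOf[Exception]
          }.collect()
          throw new IllegalStateException("collect() should have thrown")
        } catch {
          case e: SparkException =>
            // The driver should resolve the user class when reporting the task
            // failure, instead of falling back to UnknownReason.
            assert(!e.getMessage.contains("UnknownReason"))
            assert(e.getMessage.contains("repro.MyException"))
        } finally {
          sc.stop()
        }
      }
    }
    ```
    
    That way the test should only need something like `TestUtils.createJarWithClasses` for the tiny exception class, and the hard-coded assembly jar path goes away.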
