Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13962#discussion_r69178533
  
    --- Diff: yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala ---
    @@ -259,6 +265,43 @@ private[spark] class SaveExecutorInfo extends SparkListener {
       }
     }
     
    +private object YarnClusterDriverWithFailure extends Logging with Matchers {
    +
    +  val WAIT_TIMEOUT_MILLIS = 10000
    +
    +  def main(args: Array[String]): Unit = {
    +    if (args.length != 1) {
    +      // scalastyle:off println
    +      System.err.println(
    +        s"""
    +        |Invalid command line: ${args.mkString(" ")}
    +        |
    +        |Usage: YarnClusterDriver [result file]
    +        """.stripMargin)
    +      // scalastyle:on println
    +      System.exit(1)
    +    }
    +
    +    val sc = new SparkContext(new SparkConf()
    +      .set("spark.extraListeners", classOf[SaveExecutorInfo].getName)
    +      .setAppName("yarn \"test app\" 'with quotes' and \\back\\slashes and $dollarSigns"))
    +    val conf = sc.getConf
    +    val status = new File(args(0))
    +    var result = "failure"
    +    try {
    +      val data = sc.parallelize(1 to 4, 4).collect().toSet
    --- End diff --
    
    Do you need to do any of this? Why not just throw an exception at this point? You also don't need to bother with code to write the result file, since you want the app to fail anyway.
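    
    A minimal sketch of the simplification being suggested (this is not the actual patch; the app name and the exact point where the exception is thrown are assumptions for illustration): let the driver throw after the job runs, so the YARN application fails on its own, with no result-file bookkeeping.
    
    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    
    // Hypothetical simplified driver, per the suggestion above: no status
    // file, no "failure" string tracking -- just throw and let the app fail.
    private object YarnClusterDriverWithFailure {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf()
          .set("spark.extraListeners", classOf[SaveExecutorInfo].getName)
          .setAppName("yarn test app with failure"))
        try {
          // Run a trivial job first so executors actually come up.
          sc.parallelize(1 to 4, 4).collect()
          // Throwing here is enough: the driver exits abnormally and the
          // YARN report shows a failed final status, which is what the
          // test wants to observe.
          throw new IllegalStateException("failing on purpose")
        } finally {
          sc.stop()
        }
      }
    }
    ```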

