Jork Zijlstra created SPARK-19628:

             Summary: Duplicate Spark jobs in 2.1.0
                 Key: SPARK-19628
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.1.0
            Reporter: Jork Zijlstra
             Fix For: 2.0.1
         Attachments: spark2.0.1.png, spark2.1.0.png

After upgrading to Spark 2.1.0 we noticed that duplicate jobs are executed. Going back to Spark 2.0.1, they are gone again.

import org.apache.spark.sql._

object DoubleJobs {
  def main(args: Array[String]) {

    System.setProperty("hadoop.home.dir", "/tmp");

    val sparkSession: SparkSession = SparkSession.builder
      .appName("spark session example")
      .config("spark.driver.maxResultSize", "6G")
      .config("spark.sql.orc.filterPushdown", true)
      .config("spark.sql.hive.metastorePartitionPruning", true)
      .getOrCreate()

    sparkSession.sqlContext.setConf("spark.sql.orc.filterPushdown", "true")

    val paths = Seq(
      ""//some orc source
    )

    // Read the ORC source at the given path into a DataFrame
    def dataFrame(path: String): DataFrame = {
      sparkSession.read.orc(path)
    }

    // Trigger an action per path; on 2.1.0 each of these shows up as two jobs in the UI
    paths.foreach(path => {
      dataFrame(path).show()
    })
  }
}
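
A rough way to confirm the extra jobs without the UI screenshots (just a sketch, using the public status tracker API; this was not part of the original report) would be to count the submitted jobs at the end of main():

    // Sketch: count the jobs Spark actually submitted, reusing the sparkSession from above.
    // getJobIdsForGroup(null) returns all known jobs not associated with a job group.
    // We would expect one job per path on 2.0.1 and twice that on 2.1.0.
    val jobCount = sparkSession.sparkContext.statusTracker.getJobIdsForGroup(null).length
    println(s"Jobs submitted: $jobCount")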
