spark git commit: [SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling setConf

2015-12-15 Thread andrewor14
Repository: spark
Updated Branches:
  refs/heads/branch-1.6 08aa3b47e -> 9e4ac5645


[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling setConf

This is a continuation of SPARK-12056, where the change is applied to SqlNewHadoopRDD.scala.

andrewor14 FYI

Author: tedyu 

Closes #10164 from tedyu/master.

(cherry picked from commit f725b2ec1ab0d89e35b5e2d3ddeddb79fec85f6d)
Signed-off-by: Andrew Or 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9e4ac564
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9e4ac564
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9e4ac564

Branch: refs/heads/branch-1.6
Commit: 9e4ac56452710ddd8efb695e69c8de49317e3f28
Parents: 08aa3b4
Author: tedyu 
Authored: Tue Dec 15 18:15:10 2015 -0800
Committer: Andrew Or 
Committed: Tue Dec 15 18:15:53 2015 -0800

--
 .../apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/9e4ac564/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
--
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
index 56cb63d..eea780c 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
@@ -148,14 +148,14 @@ private[spark] class SqlNewHadoopRDD[V: ClassTag](
       }
       inputMetrics.setBytesReadCallback(bytesReadCallback)
 
-      val attemptId = newTaskAttemptID(jobTrackerId, id, isMap = true, split.index, 0)
-      val hadoopAttemptContext = newTaskAttemptContext(conf, attemptId)
       val format = inputFormatClass.newInstance
       format match {
         case configurable: Configurable =>
           configurable.setConf(conf)
         case _ =>
       }
+      val attemptId = newTaskAttemptID(jobTrackerId, id, isMap = true, split.index, 0)
+      val hadoopAttemptContext = newTaskAttemptContext(conf, attemptId)
       private[this] var reader: RecordReader[Void, V] = null
 
       /**

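The reordering above can be sketched with minimal stand-in classes (hypothetical, for illustration only; the real Hadoop `Configuration`, `TaskAttemptContext`, and `Configurable` APIs differ, and the key name below is invented). The premise of the fix: if the attempt context effectively snapshots the configuration when it is constructed, any keys that a `Configurable` input format sets in `setConf` afterwards are invisible to it, so the context must be created only after `setConf` has run.

```scala
// Hypothetical stand-ins illustrating why setConf must run before the
// attempt context is created. Not the real Hadoop API.
object SetConfOrderingSketch {
  final class Conf {
    private val entries = scala.collection.mutable.Map.empty[String, String]
    def set(key: String, value: String): Unit = entries(key) = value
    def get(key: String): Option[String] = entries.get(key)
  }

  // Assume the context snapshots the conf at construction time.
  final class AttemptContext(conf: Conf) {
    val schema: Option[String] = conf.get("example.read.schema")
  }

  // Plays the role of Configurable.setConf: mutates the conf it is handed.
  final class Format {
    def setConf(conf: Conf): Unit = conf.set("example.read.schema", "id:int")
  }

  // Old order: context first, then setConf -- the snapshot misses the key.
  def contextBeforeSetConf(): Option[String] = {
    val conf = new Conf
    val ctx = new AttemptContext(conf)
    new Format().setConf(conf)
    ctx.schema
  }

  // Fixed order (this commit): setConf first, then the context.
  def setConfBeforeContext(): Option[String] = {
    val conf = new Conf
    new Format().setConf(conf)
    new AttemptContext(conf).schema
  }

  def main(args: Array[String]): Unit = {
    println(contextBeforeSetConf()) // None
    println(setConfBeforeContext()) // Some(id:int)
  }
}
```

Under these assumptions the old ordering loses the key the format installs, while the fixed ordering observes it, which is the behavior the commit restores.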

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling setConf

2015-12-15 Thread andrewor14
Repository: spark
Updated Branches:
  refs/heads/master 840bd2e00 -> f725b2ec1


[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling setConf

This is a continuation of SPARK-12056, where the change is applied to SqlNewHadoopRDD.scala.

andrewor14 FYI

Author: tedyu 

Closes #10164 from tedyu/master.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f725b2ec
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f725b2ec
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f725b2ec

Branch: refs/heads/master
Commit: f725b2ec1ab0d89e35b5e2d3ddeddb79fec85f6d
Parents: 840bd2e
Author: tedyu 
Authored: Tue Dec 15 18:15:10 2015 -0800
Committer: Andrew Or 
Committed: Tue Dec 15 18:15:10 2015 -0800

--
 .../apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f725b2ec/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
--
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
index 56cb63d..eea780c 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SqlNewHadoopRDD.scala
@@ -148,14 +148,14 @@ private[spark] class SqlNewHadoopRDD[V: ClassTag](
       }
       inputMetrics.setBytesReadCallback(bytesReadCallback)
 
-      val attemptId = newTaskAttemptID(jobTrackerId, id, isMap = true, split.index, 0)
-      val hadoopAttemptContext = newTaskAttemptContext(conf, attemptId)
       val format = inputFormatClass.newInstance
       format match {
         case configurable: Configurable =>
           configurable.setConf(conf)
         case _ =>
       }
+      val attemptId = newTaskAttemptID(jobTrackerId, id, isMap = true, split.index, 0)
+      val hadoopAttemptContext = newTaskAttemptContext(conf, attemptId)
       private[this] var reader: RecordReader[Void, V] = null
 
       /**

