Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18810179
--- Diff:
network/common/src/main/java/org/apache/spark/network/client/SluiceClient.java
---
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2576#discussion_r18810202
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcTableOperations.scala
---
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18810206
--- Diff:
network/common/src/main/java/org/apache/spark/network/client/SluiceClient.java
---
@@ -0,0 +1,161 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-58994169
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21716/consoleFull)
for PR 2607 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-58994175
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18810254
--- Diff:
network/common/src/main/java/org/apache/spark/network/client/SluiceClientHandler.java
---
@@ -0,0 +1,155 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-58994264
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21718/consoleFull)
for PR 2241 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2792#issuecomment-58994396
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2785#issuecomment-58994523
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2785#issuecomment-58994519
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21715/consoleFull)
for PR 2785 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18810388
--- Diff:
network/common/src/main/java/org/apache/spark/network/protocol/request/ClientRequestEncoder.java
---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-58995591
@marmbrus from a build perspective this LGTM with the caveat that right now
it's only passing Hive compatibility for 0.12 tests and may require further
modification to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-58995827
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21719/consoleFull)
for PR 2607 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18810924
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/ConfigProvider.java
---
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2779
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2779#issuecomment-58996078
I merged this.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811176
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/IOMode.java ---
@@ -15,15 +15,13 @@
* limitations under the License.
*/
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811237
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/JavaUtils.java ---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811630
--- Diff:
network/common/src/main/java/org/apache/spark/network/protocol/response/ServerResponse.java
---
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811627
--- Diff:
network/common/src/main/java/org/apache/spark/network/protocol/request/ClientRequest.java
---
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811635
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/NettyUtils.java ---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811639
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/NettyUtils.java ---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2753#discussion_r18811632
--- Diff:
network/common/src/main/java/org/apache/spark/network/util/NettyUtils.java ---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/2793
[CORE]codeStyle: uniform ConcurrentHashMap define in StorageLevel.scala
with other places
You can merge this pull request into a Git repository by running:
$ git pull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-58999204
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21720/consoleFull)
for PR 2793 at commit
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2792#issuecomment-58999763
Jenkins, retest this please.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-58999716
This also seems too trivial to bother with. It replaces a single fully
qualified class name with an import, which doesn't even simplify anything.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2792#issuecomment-5893
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21721/consoleFull)
for PR 2792 at commit
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-59000376
But what about the other places that use ConcurrentHashMap? All of them seem
to use an import; shouldn't they be unified?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59001081
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-59001076
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21718/consoleFull)
for PR 2241 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-59001175
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2607#issuecomment-59001167
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21719/consoleFull)
for PR 2607 at commit
Github user Ishiihara commented on the pull request:
https://github.com/apache/spark/pull/2723#issuecomment-59001540
test this please
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-59002181
That's not what the PR does though. How many occurrences are there to
change?
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-59003525
Well, only this place uses a single fully qualified class name; the other
places use an import, like line 25 in SortShuffleManager.scala, line 602 in
Utils.scala, etc.
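The style question in this thread (a single fully qualified class name versus an import) can be illustrated with a minimal sketch. The field name and type parameters below are hypothetical, not taken from StorageLevel.scala:

```scala
// Before: a single fully qualified reference, inconsistent with other files
//   val cache = new java.util.concurrent.ConcurrentHashMap[Int, String]()
// After: import once, matching how the rest of the codebase references it
import java.util.concurrent.ConcurrentHashMap

object StyleSketch {
  val cache = new ConcurrentHashMap[Int, String]()

  def main(args: Array[String]): Unit = {
    cache.put(1, "MEMORY_ONLY")
    println(cache.get(1)) // prints MEMORY_ONLY
  }
}
```

Either form compiles to the same bytecode; the PR argues only for consistency with the other call sites.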
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2785#discussion_r18814130
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/RandomForest.scala ---
@@ -175,6 +175,7 @@ private class RandomForest (
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/2785#discussion_r18814133
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/impl/DecisionTreeMetadata.scala
---
@@ -17,6 +17,8 @@
package
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-59006920
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21720/consoleFull)
for PR 2793 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2793#issuecomment-59006927
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user chouqin commented on the pull request:
https://github.com/apache/spark/pull/2780#issuecomment-59007084
@manishamde thanks for your comments. I will adjust my code after #2785
gets merged.
As for performance, yes, this is slower than the current implementation,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2792#issuecomment-59007665
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2792#issuecomment-59007655
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21721/consoleFull)
for PR 2792 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2782#issuecomment-59008076
Hey Patrick, You are right about that. We can make TaskContext an interface
if we only allow TaskContextHelper.get() instead of TaskContext.get(). And then
maybe I
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2784#issuecomment-59013875
Hey Aaron,
I increased the interval because it's just noise anyway. We don't intend
to use Akka's failure detector because we have our own heartbeat tracking
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/2794
Add afterExecute for handleConnectExecutor
Sorry. I found that I forgot to add `afterExecute` for
`handleConnectExecutor` in #2593.
You can merge this pull request into a Git repository by
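The `afterExecute` mentioned here is the `java.util.concurrent.ThreadPoolExecutor` hook. This is a generic sketch of surfacing task errors through that hook, not the actual `handleConnectExecutor` code from the PR:

```scala
import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}
import java.util.concurrent.atomic.AtomicReference

object AfterExecuteSketch {
  val lastError = new AtomicReference[Throwable](null)

  // Without overriding afterExecute, an exception thrown by a Runnable passed
  // to execute() can terminate the worker thread silently; the hook lets the
  // pool record or log the failure.
  val pool = new ThreadPoolExecutor(
      1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue[Runnable]()) {
    override def afterExecute(r: Runnable, t: Throwable): Unit = {
      super.afterExecute(r, t)
      if (t != null) lastError.set(t)
    }
  }

  def main(args: Array[String]): Unit = {
    pool.execute(() => throw new RuntimeException("boom"))
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    println(Option(lastError.get).map(_.getMessage)) // Some(boom)
  }
}
```

Note that `t` is non-null here only for tasks submitted via `execute`; tasks wrapped in a `FutureTask` via `submit` capture their exception in the future instead.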
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2794#issuecomment-59015171
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21722/consoleFull)
for PR 2794 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2762#discussion_r18819746
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala ---
@@ -135,26 +139,66 @@ private[hive] trait HiveInspectors {
GitHub user Shiti opened a pull request:
https://github.com/apache/spark/pull/2795
[SPARK-3944][Core] Using Option[String] where value can be null
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/Shiti/spark master
Alternatively
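A minimal sketch of the pattern the PR title describes, wrapping possibly-null values in `Option[String]`; the names below are illustrative, not taken from the actual patch:

```scala
object OptionSketch {
  // A Java-style API that may return null (hypothetical example)
  def lookupRaw(key: String): String =
    if (key == "known") "value" else null

  // Option(...) maps null to None, so callers never handle null directly
  def lookup(key: String): Option[String] = Option(lookupRaw(key))

  def main(args: Array[String]): Unit = {
    println(lookup("known"))   // Some(value)
    println(lookup("missing")) // None
  }
}
```

Callers can then use `getOrElse`, `map`, or pattern matching instead of null checks, which is also friendlier to the stricter null-handling expectations noted for Scala 2.11.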
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59020496
Can one of the admins verify this patch?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59021297
Jenkins, test this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59021625
Nice fix, LGTM. This is required by Scala 2.11 too. @pwendell, take a look?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2794#issuecomment-59022483
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2794#issuecomment-59022470
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21722/consoleFull)
for PR 2794 at commit
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18821013
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy val
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2796
[SPARK-3946] gitignore in /python includes wrong directory
Modified to ignore not the docs/ directory but only docs/_build/, which
is the output directory of the Sphinx build.
You can merge this
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18821425
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2796#issuecomment-59026516
Can one of the admins verify this patch?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/771#issuecomment-59026836
I just read your proposal; somehow I missed this update. Forgive the late
reply. Your proposal looks good; I will take a stab at it
soon and
GitHub user tsudukim opened a pull request:
https://github.com/apache/spark/pull/2797
[SPARK-3943] Some scripts bin\*.cmd pollutes environment variables in
Windows
Modified not to pollute environment variables.
Just moved the main logic into `XXX2.cmd` from `XXX.cmd`, and call
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2796#issuecomment-59027255
Yeah docs should not be in gitignore. LGTM. (running jenkins appears to be
wasteful)
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/2797#issuecomment-59027470
Please merge this *AFTER* #2796 is merged, because /python/docs/make2.bat
will be ignored by .gitignore in /python by mistake.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2797#issuecomment-59027525
Can one of the admins verify this patch?
GitHub user PraveenSeluka opened a pull request:
https://github.com/apache/spark/pull/2798
[Spark-3822] Ability to add/delete executors from Spark Context. Works in
both yarn-client and yarn-cluster mode
sc.addExecutors(count : Int)
sc.deleteExecutors(List[String])
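Based only on the two signatures quoted above, the proposed API could be modeled roughly as below. This is a self-contained hypothetical sketch of the surface, not the actual SparkContext implementation from the PR:

```scala
// Hypothetical stand-in for the proposed SparkContext methods; executor IDs
// and bookkeeping are simulated locally instead of talking to YARN.
class ExecutorControlSketch {
  private var executors = Set.empty[String]
  private var counter = 0

  // sc.addExecutors(count: Int): request `count` new executors
  def addExecutors(count: Int): Seq[String] = {
    val added = (1 to count).map { _ => counter += 1; s"executor-$counter" }
    executors ++= added
    added
  }

  // sc.deleteExecutors(ids: List[String]): release specific executors
  def deleteExecutors(ids: List[String]): Unit = {
    executors --= ids
  }

  def executorCount: Int = executors.size
}
```

In the real yarn-client and yarn-cluster modes these calls would go through the cluster manager asynchronously, so the count would change eventually rather than immediately as it does in this sketch.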
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2798#issuecomment-59028015
Can one of the admins verify this patch?
GitHub user prudhvije opened a pull request:
https://github.com/apache/spark/pull/2799
[Core] Upgrading ScalaStyle version to 0.5 and removing
SparkSpaceAfterCommentStartChecker.
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59028490
Can one of the admins verify this patch?
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59028558
Jenkins, test this please.
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18822164
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy val
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59028701
LGTM,
GitHub user luogankun opened a pull request:
https://github.com/apache/spark/pull/2800
[SPARK-3945] Write properties of hive-site.xml to HiveContext when initil...
Write properties of hive-site.xml to HiveContext when initializing session
state in SparkSQLEnv.scala
The method
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2800#issuecomment-59028969
Can one of the admins verify this patch?
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18822347
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/2801
SPARK-3803 [MLLIB] ArrayIndexOutOfBoundsException found in executing
computePrincipalComponents
Avoid overflow in computing n*(n+1)/2 as much as possible; throw explicit
error when Gramian
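The overflow the PR title refers to is easy to reproduce: the triangular number n*(n+1)/2 exceeds Int.MaxValue well before n does. A hedged sketch of the guard in Scala follows; this is not the actual MLlib code, just an illustration of computing in Long:

```scala
object TriangularSketch {
  // Number of entries in the upper triangle of an n x n Gramian matrix.
  // Promoting to Long before multiplying avoids silent Int wrap-around, and
  // the require replaces a wrap-around with an explicit error for bad input.
  def triangleSize(n: Int): Long = {
    require(n >= 0, "n must be non-negative")
    n.toLong * (n + 1L) / 2L
  }

  def main(args: Array[String]): Unit = {
    println(triangleSize(65536))     // 2147516416, just past Int.MaxValue
    println(65536 * (65536 + 1) / 2) // Int arithmetic wraps: prints 32768
  }
}
```

An ArrayIndexOutOfBoundsException is a typical downstream symptom: the wrapped Int is used to size or index a packed triangular array, so the index no longer matches the array length.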
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2801#issuecomment-59031928
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21723/consoleFull)
for PR 2801 at commit
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2520#discussion_r18824858
--- Diff: project/SparkBuild.scala ---
@@ -170,6 +178,24 @@ object SparkBuild extends PomBuild {
}
+object YARNCommon {
+ lazy val
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59037156
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21725/consoleFull)
for PR 2520 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59037613
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21725/consoleFull)
for PR 2520 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59037614
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user uncleGen commented on the pull request:
https://github.com/apache/spark/pull/2679#issuecomment-59037906
@ankurdave I see. And I think it is worthwhile to provide a memory-based
shuffle manager in some cases, such as sufficient memory resources, stringent
performance requirements,
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59038630
Ah, Jenkins runs sbt/sbt package, so the build failed...
@ScrapCodes, do you have any better idea than my original patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2520#issuecomment-59038830
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user viper-kun commented on the pull request:
https://github.com/apache/spark/pull/2471#issuecomment-59038886
In my opinion, Spark creates the event log data, so Spark should delete it.
In Hadoop, the event log is deleted by the JobHistoryServer, not by the file
system.
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2790#discussion_r18826596
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala
---
@@ -62,7 +62,7 @@ private[hive] class
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2790#issuecomment-59039692
ok to test
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/2790#discussion_r18826823
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLDriver.scala
---
@@ -62,7 +62,7 @@ private[hive] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2801#issuecomment-59041272
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21723/consoleFull)
for PR 2801 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2801#issuecomment-59041284
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2790#issuecomment-59041761
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/379/consoleFull)
for PR 2790 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2570#issuecomment-59049695
@marmbrus any more comments on this?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2790#issuecomment-59050958
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/379/consoleFull)
for PR 2790 at commit
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/2087#discussion_r18831556
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -121,6 +125,31 @@ class SparkHadoopUtil extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/789#issuecomment-59053860
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21726/consoleFull)
for PR 789 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2799#issuecomment-59055480
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2795#issuecomment-59055586
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2782#issuecomment-59055788
@scrapcodes We can't make it a proper interface because of binary
compatibility (this is a pretty widely used API, so I'd prefer not to break it).
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/2087#discussion_r18832502
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -121,6 +125,31 @@ class SparkHadoopUtil extends Logging {
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/2762#issuecomment-59055965
Thanks, I've rebased onto the latest master and solved the issues you guys
raised.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2782#discussion_r18832542
--- Diff: core/src/main/java/org/apache/spark/TaskContext.java ---
@@ -116,33 +55,19 @@ public static TaskContext get() {
}
/** ::
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2782#discussion_r18832643
--- Diff: core/src/main/java/org/apache/spark/TaskContext.java ---
@@ -37,68 +37,7 @@
* Contextual information about a task which can be read or mutated