Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1414#issuecomment-48993578
test this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1398#issuecomment-48993635
Thanks for fixing this - I tested this locally and it worked (though I did
have to do a clean build first).
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1414#issuecomment-48993731
QA tests have started for PR 1414. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16663/consoleFull
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1398
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/886#issuecomment-48994002
@manishamde - can you add `[MLlib]` to the title of this pull request?
Otherwise it doesn't get filtered properly by our filters.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1413#issuecomment-48994041
LGTM - I'll merge this.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1413
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/931#issuecomment-48994210
Jenkins, test this please. @xiajunluan actually I think the main issue now
is that this isn't merging cleanly.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1415
SPARK-2469: Use Snappy (instead of LZF) for default shuffle compression
codec
This reduces shuffle compression memory usage by 3x.
You can merge this pull request into a Git repository by running:
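The default-codec change above is a configuration switch rather than new machinery. As a hedged, hypothetical sketch (not Spark's actual implementation), shuffle compression can be pictured as a registry of codecs keyed by short name, with "snappy" as the new default; here `zlib` stands in for the real LZF/Snappy bindings, which are not in the Python standard library:

```python
import zlib

# Hypothetical sketch: a codec registry keyed by short name, with "snappy"
# as the new default for shuffle blocks. zlib stands in for the actual
# LZF/Snappy implementations.
CODECS = {
    "lzf":    (lambda data: zlib.compress(data, 1), zlib.decompress),
    "snappy": (lambda data: zlib.compress(data, 1), zlib.decompress),
}
DEFAULT_SHUFFLE_CODEC = "snappy"

def compress_shuffle_block(data: bytes, codec: str = DEFAULT_SHUFFLE_CODEC) -> bytes:
    """Compress one shuffle block with the configured codec."""
    compress, _ = CODECS[codec]
    return compress(data)

def decompress_shuffle_block(blob: bytes, codec: str = DEFAULT_SHUFFLE_CODEC) -> bytes:
    """Decompress a shuffle block written by the same codec."""
    _, decompress = CODECS[codec]
    return decompress(blob)
```

The point of the registry shape is that changing the default touches one constant, which is why the discussion below focuses on compatibility rather than code churn.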
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48994982
Do we want to change the default for everything or only for shuffle? (Only
shuffle won't impact anything outside of Spark.)
What would be the impact on user data if we change
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-48995170
@andrewor14 I have created a JIRA issue, SPARK-2302. Yes, it is for
reducing the Master's memory. Thank you.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1412#issuecomment-48995408
QA results for PR 1412:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48995490
This is actually only used in shuffle.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48995567
Actually I lied. Somebody else added some code to use the compression codec
to compress event data ...
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48995664
cc @andrewor14 I guess you added the event code ...
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1380
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1416
[SPARK-2399] Add support for LZ4 compression.
Based on Greg Bowyer's patch from JIRA
https://issues.apache.org/jira/browse/SPARK-2399
You can merge this pull request into a Git repository by
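Adding LZ4 alongside the existing codecs suggests a pluggable codec interface. The sketch below is a hypothetical illustration (not Spark's `CompressionCodec` trait itself) of how a codec is looked up from the `spark.io.compression.codec` setting; `bz2` and `zlib` stand in for LZ4 and Snappy, which have no standard-library bindings:

```python
import bz2
import zlib

class CompressionCodec:
    """Minimal codec interface; the real codecs wrap LZF/Snappy/LZ4 streams."""
    def compress(self, data: bytes) -> bytes:
        raise NotImplementedError
    def decompress(self, blob: bytes) -> bytes:
        raise NotImplementedError

class SnappyLikeCodec(CompressionCodec):
    def compress(self, data): return zlib.compress(data, 1)
    def decompress(self, blob): return zlib.decompress(blob)

class Lz4LikeCodec(CompressionCodec):
    def compress(self, data): return bz2.compress(data)
    def decompress(self, blob): return bz2.decompress(blob)

# Supporting a new codec is one more registry entry; callers are unchanged.
REGISTRY = {"snappy": SnappyLikeCodec, "lz4": Lz4LikeCodec}

def create_codec(conf: dict) -> CompressionCodec:
    """Pick a codec from configuration, defaulting to the Snappy stand-in."""
    name = conf.get("spark.io.compression.codec", "snappy")
    return REGISTRY[name]()
```

This shape is why the patch is small: the new codec only has to satisfy the interface, and everything downstream selects it by name.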
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1336#discussion_r14919256
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/ui/HistoryNotFoundPage.scala
---
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-48995876
cc @davies
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-48995956
QA tests have started for PR 1416. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/1/consoleFull
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48996149
I looked into the event logger code and it appears that codec change should
be fine. It figures out the codec for old data automatically anyway.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-48996263
QA tests have started for PR 1416. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16667/consoleFull
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48996256
Yes, we log the codec used in a separate file so we don't lock ourselves
out of our old event logs. This change seems fine.
---
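The safeguard described above — recording which codec wrote a log, separately from the log itself — can be sketched as follows. This is an illustrative stand-in, not Spark's actual `EventLoggingListener` code; `zlib` substitutes for the real codecs and the file names here are hypothetical:

```python
import zlib
from pathlib import Path

# Illustrative sketch: the writer records its codec in a metadata file, so a
# later change to the default codec never locks readers out of old logs.
def write_event_log(log_dir: Path, events: list, codec_name: str = "zlib") -> None:
    log_dir.mkdir(parents=True, exist_ok=True)
    (log_dir / "COMPRESSION_CODEC").write_text(codec_name)
    (log_dir / "events").write_bytes(zlib.compress("\n".join(events).encode()))

def read_event_log(log_dir: Path) -> list:
    # The reader consults the metadata file instead of assuming today's default.
    codec_name = (log_dir / "COMPRESSION_CODEC").read_text()
    if codec_name != "zlib":
        raise ValueError(f"unsupported codec in this sketch: {codec_name}")
    return zlib.decompress((log_dir / "events").read_bytes()).decode().split("\n")
```

Because each log names its own codec, old logs written with LZF remain readable after the default moves to Snappy, which is the compatibility argument made in this thread.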
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1326#issuecomment-48996710
Sure - I guess we can do this. It seems strange to open a filesystem and
never close it (what if someone creates a large number of FileLogger
instances... after all
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-48996763
@andrewor14 do we also log the block size, etc. of the codec used?
If yes, then at least for event data we should be fine.
IIRC we use the codec to compress
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1326
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1337#discussion_r14919602
--- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
@@ -258,7 +258,7 @@ private[spark] class PartitionCoalescer(maxPartitions:
Int,
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1337#issuecomment-48996905
LGTM pending one small question
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-48997521
QA results for PR 1114:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/1337#discussion_r14919798
--- Diff: core/src/main/scala/org/apache/spark/rdd/CoalescedRDD.scala ---
@@ -258,7 +258,7 @@ private[spark] class PartitionCoalescer(maxPartitions:
Int,
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/1412#issuecomment-48997884
LGTM, merging into master and branch-1.0.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1412
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/1405#issuecomment-48998138
@pwendell The reporters of this issue have confirmed that this PR fixes the
problem. Ideally it can go into 1.0.2.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1413#issuecomment-48998126
@willb @aarondav my bad guys, I thought all outstanding issues were
addressed here but I realize that's not the case. Feel free to submit another
patch to clean up the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/886#issuecomment-48998279
QA results for PR 886:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1114#issuecomment-48998515
Thanks. Merging this in master.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1114
---
Github user manishamde commented on the pull request:
https://github.com/apache/spark/pull/886#issuecomment-48998668
@pwendell I modified the title.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1412#issuecomment-48998885
QA results for PR 1412:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1321#issuecomment-48999051
@rxin is there a case where you think local execution will yield a relevant
performance improvement? I don't see why shipping a task for a few milliseconds
is a bit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1321#issuecomment-48999161
When the cluster is busy and backlogged ...
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1393#issuecomment-48999523
I am trying to figure out why it happened; this might not be my conclusion,
but at the moment I feel that since this class has a private[mllib]
constructor, there is
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1393#issuecomment-48999631
And to my surprise it also has
`org.apache.spark.mllib.recommendation.MatrixFactorizationModel.predict`; not
sure why it has that.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1351#issuecomment-49000138
QA tests have started for PR 1351. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16668/consoleFull
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1393#issuecomment-49000211
Ahh, understood (please ignore my previous theory). It happened because
we have a function which is `@DeveloperApi` in the same class with the same
name. So this was
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-49001592
QA results for PR 1415:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/1321#issuecomment-49002113
I think it makes more sense if you can't run a command at all than if certain
commands happen to be runnable while there are no cluster resources. This sort
of execution puts
Github user liancheng closed the pull request at:
https://github.com/apache/spark/pull/829
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-49003237
QA results for PR 1416:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental): class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-49003425
QA results for PR 1416:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental): class
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1416#issuecomment-49005453
Ok merging this in master.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1416
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-49005728
Weird, those test failures are unrelated to this change.
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-49005818
Ah yes, block size is only used at compression time and inferred from the
stream during decompression.
Then only the class name should be sufficient.
---
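The point made above can be demonstrated with a standard-library codec: parameters chosen by the compressor (level here; block size for Snappy/LZ4) are encoded in the stream itself, so the decompressor only needs to know which codec wrote the data, not the settings it was written with:

```python
import zlib

# Two streams written with different compressor settings...
data = b"the quick brown fox jumps over the lazy dog" * 100
fast = zlib.compress(data, 1)    # speed-oriented setting
small = zlib.compress(data, 9)   # size-oriented setting

# ...but one decompressor, given no settings at all, reads both:
# the stream carries everything the decoder needs.
assert zlib.decompress(fast) == data
assert zlib.decompress(small) == data
```

This is why logging only the codec class name alongside old event logs is enough to read them back later.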
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-49005883
Yeah, the test failure isn't related.
If there is no objection, I'm going to merge this tomorrow. I will file a
JIRA ticket so we can prepend the compression codec
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1212#issuecomment-49006034
Hi @lirui-intel, looks good to me!
Will merge when I get my laptop working again; unfortunate state of
affairs :-)
In the meantime, if @pwendell or someone else
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1415#issuecomment-49006312
Can't comment on Tachyon since we don't use it and have no experience with it,
unfortunately.
I am fine with this change for the rest.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1417
Added LZ4 to compression codec in configuration page.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rxin/spark lz4
Alternatively you can review
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1351#issuecomment-49007980
QA results for PR 1351:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1410#issuecomment-49008039
LGTM. Thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1410
---
Github user avulanov commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49009673
@mengxr I've addressed your comments. Thanks for pointing me to the Scala
issue
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49009683
QA tests have started for PR 1155. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16670/consoleFull
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49011680
@avulanov I made some minor updates and sent you a PR at
https://github.com/avulanov/spark/pull/1 . If it looks good to you, please
merge that PR and the changes should
Github user avulanov commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49012039
@mengxr done!
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49012327
QA tests have started for PR 1155. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16671/consoleFull
---
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1407#discussion_r14925960
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LocalKMeans.scala ---
@@ -59,6 +59,11 @@ private[mllib] object LocalKMeans extends Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1393#discussion_r14926032
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala
---
@@ -53,7 +53,7 @@ class MatrixFactorizationModel
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1407#discussion_r14926024
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala ---
@@ -61,6 +61,30 @@ class KMeansSuite extends FunSuite with
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1407#discussion_r14926012
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/clustering/KMeansSuite.scala ---
@@ -61,6 +61,30 @@ class KMeansSuite extends FunSuite with
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1407#issuecomment-49012803
@jkbradley The fix looks good to me except some minor style issues. Thanks
for fixing it! Btw, please add `[MLLIB]` to the title so this is easy to find.
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1393#issuecomment-49013972
Yes you could also tell callers to track their own user-ID mapping and
maintain it consistently everywhere. Callers have to share that state then
somehow. Hashing is
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1155#issuecomment-49020152
QA results for PR 1155:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental): class
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/1418
[SPARK-2490] Change recursive visiting on RDD dependencies to iterative
approach
When performing some transformations on RDDs after many iterations, the
dependencies of RDDs could be very
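The recursion-to-iteration change described in this PR title is a standard transformation: walk the dependency graph with an explicit stack so that a very deep lineage cannot overflow the call stack. A hedged sketch (where `Node` is a stand-in for an RDD with a list of parent dependencies, not Spark's actual classes):

```python
# Sketch of iterative dependency visiting: an explicit stack replaces
# recursion, so lineage depth is bounded only by memory, not the call stack.
class Node:
    def __init__(self, deps=()):
        self.deps = list(deps)  # parent dependencies

def visit_iteratively(root):
    """Count all reachable nodes without recursion."""
    visited, stack = set(), [root]
    while stack:
        node = stack.pop()
        if id(node) in visited:
            continue
        visited.add(id(node))
        stack.extend(node.deps)
    return len(visited)

# A lineage 100,000 nodes deep would blow a typical recursion limit,
# but the iterative walk handles it.
chain = Node()
for _ in range(100_000):
    chain = Node([chain])
```

A recursive visitor on the same chain would raise a stack-overflow-style error long before reaching the root, which is exactly the failure mode the PR addresses.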
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1418#issuecomment-49022775
Can one of the admins verify this patch?
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1413#issuecomment-49023679
@aarondav @pwendell Yes, with this patch I'm able to enable the YourKit
features that were causing crashes before. I'll submit an update to fix the
bracket style and cc
GitHub user willb opened a pull request:
https://github.com/apache/spark/pull/1419
Reformat multi-line closure argument.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/willb/spark reformat-2486
Alternatively you can review and
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1419#issuecomment-49024982
(See discussion on #1413; cc @aarondav and @pwendell.)
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1419#issuecomment-49025080
QA tests have started for PR 1419. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16672/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1269#issuecomment-49029010
QA results for PR 1269:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental): class Document(val
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1269#issuecomment-49030237
QA tests have started for PR 1269. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16674/consoleFull
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/1420
[SPARK-2492][Streaming] kafkaReceiver minor changes to align with Kafka 0.8
Update the KafkaReceiver's behavior when auto.offset.reset is set to
smallest, which is aligned with Kafka 0.8
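In Kafka 0.8, `auto.offset.reset` controls where a consumer starts when it has no valid committed offset: `smallest` starts from the earliest retained message, `largest` from the tail. A minimal illustrative sketch of that policy (not the KafkaReceiver code itself):

```python
# Sketch of Kafka 0.8's auto.offset.reset semantics: pick a starting offset
# when no committed offset exists for the consumer group.
def starting_offset(policy: str, earliest: int, latest: int) -> int:
    if policy == "smallest":
        return earliest   # replay from the oldest retained message
    if policy == "largest":
        return latest     # start from the tail, skipping the backlog
    raise ValueError(f"unknown auto.offset.reset value: {policy!r}")
```

Aligning the receiver with these semantics means a `smallest` configuration actually replays the retained backlog instead of silently starting at the tail.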
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1420#issuecomment-49032122
QA tests have started for PR 1420. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16675/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1404#issuecomment-49032797
QA tests have started for PR 1404. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16676/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1369#issuecomment-49034870
QA tests have started for PR 1369. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16679/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1404#issuecomment-49033458
QA tests have started for PR 1404. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16677/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1404#issuecomment-49034147
QA tests have started for PR 1404. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16678/consoleFull
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1112#discussion_r14935222
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala
---
@@ -82,6 +84,9 @@ class ExecutorLauncher(args:
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-49036292
@witgo this looks good. Could you also add support for setting it in yarn
alpha mode? Sorry I missed that in earlier reviews.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1419#issuecomment-49037229
QA results for PR 1419:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1094#discussion_r14936355
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
---
@@ -132,4 +135,17 @@ object YarnSparkHadoopUtil {
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1094#issuecomment-49039069
This PR conflicts with pr1112. I would like to put that one in first and
then upmerge this.
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1094#discussion_r14936811
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala ---
@@ -172,6 +172,8 @@ class HistoryServer(
object HistoryServer {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-49044799
QA tests have started for PR 1112. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16680/consoleFull
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1112#issuecomment-49045150
@tgravescs The code has been submitted. Because I don't have a Hadoop
0.23.x cluster, the code has not been strictly tested.
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/634#discussion_r14940726
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -416,19 +407,8 @@ object ApplicationMaster extends Logging
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/634#discussion_r14940751
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -370,7 +359,6 @@ object ApplicationMaster extends Logging
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1196#discussion_r14941021
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -169,18 +192,43 @@ private[spark] class SecurityManager(sparkConf:
SparkConf)
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1196#discussion_r14941058
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -169,18 +192,43 @@ private[spark] class SecurityManager(sparkConf:
SparkConf)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1420#issuecomment-49046694
QA results for PR 1420:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.

For more information see test