Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1010#discussion_r13635058
--- Diff: docs/openstack-integration.md ---
@@ -0,0 +1,110 @@
+layout: global
+title: Accessing Openstack Swift storage from Spark
+---
+
+#
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1010#discussion_r13635099
--- Diff: core/pom.xml ---
@@ -35,7 +35,11 @@
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
</dependency>
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1034#issuecomment-45705688
Looks like only python tests are failing.
Merged into 1.0 and master. Thanks!
---
If your project is set up for it, you can reply to this email and have your
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1034
---
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/990#discussion_r13635448
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantFoldingSuite.scala ---
@@ -173,4 +173,63 @@ class ConstantFoldingSuite
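For readers following along, the optimization this suite exercises, constant folding, can be sketched outside of Catalyst. This is a hypothetical standalone illustration over Python arithmetic expressions, not Spark's implementation; the helper names are invented:

```python
import ast
import operator

# Map AST operator nodes to their evaluators.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def fold(node):
    """Recursively evaluate subtrees made entirely of literals."""
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        left, right = fold(node.left), fold(node.right)
        # If both sides folded down to numbers, evaluate now.
        if isinstance(left, (int, float)) and isinstance(right, (int, float)):
            return OPS[type(node.op)](left, right)
        return node  # leave partially non-constant trees alone
    if isinstance(node, ast.Constant):
        return node.value
    return node

def fold_constants(expr):
    """Fold a fully constant arithmetic expression string, e.g. '1 + 2 * 3' -> 7."""
    return fold(ast.parse(expr, mode="eval").body)

print(fold_constants("1 + 2 * 3"))  # 7
```

The idea is the same as in the suite: expressions whose operands are all literals are evaluated once at optimization time rather than once per row.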
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/990#issuecomment-45706927
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/990#issuecomment-45706938
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1046#issuecomment-45708230
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15666/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1035#issuecomment-45708229
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15667/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1046#issuecomment-45708228
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1035#issuecomment-45708227
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1038#issuecomment-45708594
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15668/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1021#issuecomment-45708590
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1047#issuecomment-45708589
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1021#issuecomment-45708592
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15669/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1038#issuecomment-45708588
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1047#issuecomment-45708593
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15670/
---
GitHub user sameeragarwal opened a pull request:
https://github.com/apache/spark/pull/1048
[SPARK-2042] Prevent unnecessary shuffle triggered by take()
This PR implements `take()` on a `SchemaRDD` by inserting a logical limit
that is followed by a `collect()`. This is also
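The take()-via-limit pattern described in SPARK-2042 can be sketched with a toy model. This is a hypothetical simplification, not Spark's actual `SchemaRDD` code; the `Plan` class is invented for illustration:

```python
import itertools

class Plan:
    """Stand-in for a lazily evaluated query plan over rows."""
    def __init__(self, rows):
        self.rows = rows  # any iterable, possibly unbounded

    def limit(self, n):
        # Logical limit: wrap the plan so at most n rows are ever pulled.
        return Plan(itertools.islice(self.rows, n))

    def collect(self):
        # Materialize whatever the (limited) plan produces.
        return list(self.rows)

    def take(self, n):
        # take() = insert a logical limit, then collect.
        # Only n rows are materialized; nothing forces the full result.
        return self.limit(n).collect()

plan = Plan(itertools.count(1))  # conceptually unbounded input
print(plan.take(3))  # [1, 2, 3]
```

The point of the PR is that placing the limit in the logical plan lets the optimizer see it, so no shuffle over the full result is triggered just to return the first n rows.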
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45708761
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45708748
Merged build triggered.
---
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1038#issuecomment-45708788
Only failing python tests.
Merged into master and 1.0. Thanks!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45708823
Merged build finished.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1038
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45708825
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15672/
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636450
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CombiningLimitsSuite.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636504
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
@@ -374,6 +374,9 @@ class SchemaRDD(
override def collect(): Array[Row] =
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636540
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CombiningLimitsSuite.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636591
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CombiningLimitsSuite.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636646
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/CombiningLimitsSuite.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/1049
Clean left semi join hash
Some improvements for PR #837: add another case to the whitelist and use
`filter` to build the result iterator.
You can merge this pull request into a Git repository by
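For context, a hash-based left semi join built with `filter`, in the spirit of the PR description, can be sketched as follows. This is a hypothetical standalone illustration, not Spark's actual operator; the function and variable names are invented:

```python
def left_semi_join(left_rows, right_rows, key):
    """Keep each left row whose key has at least one match on the right."""
    # Build side: hash only the keys of the right rows.
    right_keys = {key(r) for r in right_rows}
    # Stream side: a filter over the left rows yields the result iterator,
    # emitting each matching left row at most once.
    return filter(lambda row: key(row) in right_keys, left_rows)

left = [(1, "a"), (2, "b"), (3, "c")]
right = [(2, "x"), (2, "y"), (4, "z")]
print(list(left_semi_join(left, right, key=lambda r: r[0])))  # [(2, 'b')]
```

Note that, unlike an inner join, duplicate matches on the right side do not duplicate left rows, which is why a plain filter over the build-side hash set suffices.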
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1049#issuecomment-45709748
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1049#issuecomment-45709739
Merged build triggered.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1035
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1035#issuecomment-45709770
merging this in master
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636724
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
@@ -374,6 +374,9 @@ class SchemaRDD(
override def collect():
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45710125
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45710129
Merged build started.
---
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636933
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
@@ -374,6 +374,9 @@ class SchemaRDD(
override def collect():
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13636976
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
@@ -374,6 +374,9 @@ class SchemaRDD(
override def collect():
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1046#issuecomment-45710897
Looks like only python is failing.
Merged into 1.0 and master. Thanks!
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1014#issuecomment-45712524
Jenkins, retest this please.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1014#issuecomment-45712698
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1014#issuecomment-45712709
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1049#issuecomment-45716000
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15673/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1049#issuecomment-45715998
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45716130
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15674/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45716129
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1014#issuecomment-45716313
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15675/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1014#issuecomment-45716311
Merged build finished. All automated tests passed.
---
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/1050
[SPARK-2109] Setting SPARK_MEM for bin/pyspark does not work.
Trivial fix.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark-1
Github user ahirreddy commented on a diff in the pull request:
https://github.com/apache/spark/pull/1023#discussion_r13642763
--- Diff: python/pyspark/sql.py ---
@@ -346,7 +347,7 @@ def _toPython(self):
# TODO: This is inefficient, we should construct the Python Row
Github user ahirreddy commented on the pull request:
https://github.com/apache/spark/pull/1041#issuecomment-45726121
I think you have to rebase against master, it's including extra commits
that you probably don't want in jenkins:
[SPARK-2010] Support for nested data in
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1050#issuecomment-45727899
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1050#issuecomment-45727900
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15676/
---
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/1051
[SPARK-2014] Make PySpark store RDDs in MEMORY_ONLY_SER with compression by
default
You can merge this pull request into a Git repository by running:
$ git pull
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1051#issuecomment-45731095
Merged build triggered.
---
Github user coderh commented on the pull request:
https://github.com/apache/spark/pull/597#issuecomment-45731942
I have tried different lambda values and numbers of features, but nothing has
changed. To be clear, initially the MovieLens dataset is divided into a training
set (80%) and test
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/892#issuecomment-45732793
Just wanted to drop a quick note (since I might not be able to get to this
until late next week).
I think the proposal should work, though I might be missing
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1051#issuecomment-45734418
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1051#issuecomment-45734421
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15677/
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/130#issuecomment-45736798
I don't have permission to close it, can you please. Thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/561
---
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/561#issuecomment-45737297
I committed this, thanks Sandy!
---
GitHub user watermen opened a pull request:
https://github.com/apache/spark/pull/1052
'killFuture' is never used
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/watermen/spark bug-fix1
Alternatively you can review and apply
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1052#issuecomment-45739098
Can one of the admins verify this patch?
---
Github user dianacarroll closed the pull request at:
https://github.com/apache/spark/pull/130
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-45744296
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-45744967
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-45744973
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-45744995
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-45744985
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-45744310
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-45750264
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-45750701
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15679/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-45750697
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-45750266
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15678/
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1002#discussion_r13653078
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -297,7 +297,7 @@ class SparkContext(config: SparkConf) extends Logging {
Github user kanzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/1023#discussion_r13655604
--- Diff: python/pyspark/sql.py ---
@@ -346,7 +347,7 @@ def _toPython(self):
# TODO: This is inefficient, we should construct the Python Row
Github user falaki commented on a diff in the pull request:
https://github.com/apache/spark/pull/1025#discussion_r13659499
--- Diff: core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala ---
@@ -156,6 +161,182 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
}
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/1048#discussion_r13661368
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SchemaRDD.scala ---
@@ -374,6 +374,9 @@ class SchemaRDD(
override def collect():
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/718#discussion_r13661671
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45772154
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45772172
Merged build started.
---
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/360#discussion_r13661910
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTypes.scala ---
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/360#discussion_r13662320
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTypes.scala ---
@@ -0,0 +1,409 @@
+/*
+ * Licensed to the Apache Software
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/1052#issuecomment-45775015
LGTM.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1021
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1047#issuecomment-45775581
Thanks Prashant - I'm merging this.
---
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/1053
HOTFIX: A few pyspark tests were not actually run
This is a hot fix for the hot fix in
fb499be1ac935b6f91046ec8ff23ac1267c82342. The changes in that commit did not
actually cause the `doctest`
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/900#issuecomment-45776939
@kayousterhout @mridulm Looking briefly at pr #892 it seems that is
handling locality when executors are added later and I assume some of the
locality wait configs come
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1052#issuecomment-45777220
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1053#issuecomment-45777214
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1053#issuecomment-45777229
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1052#issuecomment-45777232
Merged build started.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1041#discussion_r13664158
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -296,16 +296,25 @@ class SQLContext(@transient val sparkContext:
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1048#issuecomment-45779021
test this please
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/900#issuecomment-45779752
This one slipped off my radar, my apologies.
@tgravescs In #892, if there is even a single executor which is process
local with any partition, then we start waiting
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/718#discussion_r13665410
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/900#issuecomment-45780405
Hit submit by mistake, to continue ...
The side effects of not having sufficient executors are different from #892.
For example,
a) the default parallelism in yarn
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1053#issuecomment-45780449
Merged build triggered.
---