Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1778#issuecomment-51153091
QA tests have started for PR 1778. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17929/consoleFull
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1507#discussion_r15796125
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -98,19 +105,22 @@ class TaskMetrics extends Serializable {
*/
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1694#issuecomment-51153366
Well, I have merged this patch already, in an attempt to squeeze it into the 1.1
release. If you open another patch to make the change, I can try squeezing that
too. Thanks for
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1507#discussion_r15796149
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -131,7 +122,9 @@ object BlockFetcherIterator {
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1779#issuecomment-51153508
LGTM pending tests - thanks Andrew. I'm guessing these were simply unused.
---
If your project is set up for it, you can reply to this email and have your
reply appear
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1648#discussion_r15796222
--- Diff:
sql/hive/compatibility/src/test/scala/org/apache/spark/sql/hive/execution/HiveCompatibilitySuite.scala
---
@@ -38,6 +39,7 @@ class
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/1775#discussion_r15796202
--- Diff: python/pyspark/mllib/classification.py ---
@@ -73,11 +73,36 @@ def predict(self, x):
class LogisticRegressionWithSGD(object):
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1780#issuecomment-51153633
LGTM
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1781
[SPARK-2503] Lower shuffle output buffer (spark.shuffle.file.buffer.kb) to
32KB.
This can substantially reduce memory usage during shuffle.
You can merge this pull request into a Git repository
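The memory arithmetic behind a change like this is easy to sketch. The sketch below uses illustrative assumptions (8 concurrent tasks, 2000 reduce partitions, a previous 100 KB buffer), not figures from the PR; hash-based shuffle keeps one buffered open file per reduce partition per running task:

```python
# Rough estimate of shuffle-write buffer memory per executor.
# With hash-based shuffle, each concurrently running task keeps one
# buffered open file per reduce partition, so buffer memory scales as
#   cores * reduce_partitions * buffer_size.
def shuffle_buffer_memory_mb(cores, reduce_partitions, buffer_kb):
    return cores * reduce_partitions * buffer_kb / 1024.0

# Illustrative numbers, not from the thread:
old = shuffle_buffer_memory_mb(8, 2000, 100)  # assumed old 100 KB buffer
new = shuffle_buffer_memory_mb(8, 2000, 32)   # proposed 32 KB buffer
print(old, new)  # the 32 KB setting needs roughly a third of the memory
```

With these (hypothetical) numbers the buffers drop from about 1.5 GB to 500 MB per executor, which is why a smaller default can matter for jobs with many reduce partitions.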
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/1779#issuecomment-51153773
+1 I've always used SPARK_MASTER_WEBUI_PORT and SPARK_WORKER_WEBUI_PORT in
spark-env.sh, I'd imagine everyone else has been also
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796327
--- Diff: core/src/main/scala/org/apache/spark/deploy/Client.scala ---
@@ -146,6 +146,7 @@ object Client {
}
val conf = new
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796365
--- Diff: core/src/main/scala/org/apache/spark/deploy/Client.scala ---
@@ -146,6 +146,7 @@ object Client {
}
val conf = new
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796377
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -30,8 +30,9 @@ object DriverWrapper {
args.toList match {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1781#issuecomment-51154536
QA tests have started for PR 1781. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17930/consoleFull
---
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796416
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -84,7 +84,8 @@ private[spark] class Executor(
// Initialize Spark
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796591
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1331,4 +1331,49 @@ private[spark] object Utils extends Logging {
.map { case
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796608
--- Diff: docs/spark-standalone.md ---
@@ -311,76 +311,103 @@ configure those ports.
<!-- Web UIs -->
<tr>
<td>Browser</td>
-
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-51155585
QA results for PR 1309:
- This patch PASSES unit tests.
For more information see test
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796709
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1331,4 +1331,49 @@ private[spark] object Utils extends Logging {
.map {
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1773#issuecomment-51155708
Alright, merged it. Thanks!
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1777#issuecomment-51155814
Hey Andrew - overall this looks good. I think ultimately we'll need to just
lock down a cluster and test this by opening up ports one by one, but I think
this is worth
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796780
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -84,7 +84,8 @@ private[spark] class Executor(
// Initialize Spark
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1777#discussion_r15796941
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1331,4 +1331,49 @@ private[spark] object Utils extends Logging {
.map {
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1707#issuecomment-51156435
Jenkins actually passed this (see
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17919/consoleFull)
but a glitch in the reporting script made it not
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1707#issuecomment-51156443
Thanks for the review.
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1780#discussion_r15797096
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -47,7 +47,9 @@ class KryoSerializer(conf: SparkConf)
with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1779#issuecomment-51156680
QA results for PR 1779:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/714#issuecomment-51156760
Jenkins, test this please.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1775#issuecomment-51156861
QA results for PR 1775:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/714#issuecomment-51156997
QA tests have started for PR 714. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17932/consoleFull
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1780#issuecomment-51157549
QA results for PR 1780:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1778#issuecomment-51157759
QA results for PR 1778:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1780#discussion_r15797592
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -47,7 +47,9 @@ class KryoSerializer(conf: SparkConf)
with Logging
Github user rezazadeh commented on the pull request:
https://github.com/apache/spark/pull/1778#issuecomment-51158224
The binary backwards compatibility check doesn't like adding a new method
to the trait MultivariateStatisticalSummary. Suggestions on binary
compatibility welcome,
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/1775#discussion_r15797754
--- Diff: python/pyspark/mllib/classification.py ---
@@ -73,11 +73,36 @@ def predict(self, x):
class LogisticRegressionWithSGD(object):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1773
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1707
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1481#issuecomment-51158962
Okay here's the deal - this patch is causing some type of non-deterministic
failure which seems related to the shuffle write path. It looks like the test
is hanging,
Github user miccagiann commented on a diff in the pull request:
https://github.com/apache/spark/pull/1775#discussion_r15797995
--- Diff: python/pyspark/mllib/classification.py ---
@@ -73,11 +73,36 @@ def predict(self, x):
class LogisticRegressionWithSGD(object):
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-51159364
Jenkins, retest this please.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-51160214
QA tests have started for PR 1309. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17934/consoleFull
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1779#issuecomment-51160947
I'm assuming this failure is totally unrelated. I can merge this.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/714#issuecomment-51161840
Thanks - I'm going to merge this.
---
GitHub user gchen opened a pull request:
https://github.com/apache/spark/pull/1782
[SPARK-2859] update url of Kryo project in related docs
JIRA Issue: https://issues.apache.org/jira/browse/SPARK-2859
The Kryo project has been migrated from Google Code to GitHub, hence we need to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1782#issuecomment-51163460
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-51163619
QA results for PR 1309:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
class
Github user chutium commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15799720
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -89,6 +88,44 @@ class SQLContext(@transient val sparkContext:
SparkContext)
GitHub user nrchandan opened a pull request:
https://github.com/apache/spark/pull/1783
[SPARK-1170] Add histogram method to Python's RDD API
Tested and ready to merge.
You can merge this pull request into a Git repository by running:
$ git pull
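For context, the evenly-spaced-bucket logic that an `RDD.histogram(numBuckets)`-style API computes can be sketched in plain Python. The bucket conventions below (equal widths between min and max, last bucket closed on the right) are illustrative assumptions, not the PR's exact implementation:

```python
# Minimal sketch of even-bucket histogram logic, without Spark.
def histogram(values, num_buckets):
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets
    counts = [0] * num_buckets
    for v in values:
        # Clamp so the maximum value falls into the last bucket,
        # which is closed on the right like typical histogram APIs.
        idx = min(int((v - lo) / width), num_buckets - 1) if width else 0
        counts[idx] += 1
    buckets = [lo + i * width for i in range(num_buckets + 1)]
    return buckets, counts

buckets, counts = histogram([1, 2, 3, 4, 5, 6], 3)
print(counts)  # each third of the range [1, 6] holds two values
```

In the real RDD API the counting step would run per partition and the per-partition counts would be summed with a reduce, but the bucketing rule is the same.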
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1783#issuecomment-51164282
Can one of the admins verify this patch?
---
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/1783#issuecomment-51164357
Jenkins, test this please.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/714
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1779
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1744#issuecomment-51164855
Jenkins, retest this please.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1780#issuecomment-51164898
Thanks. Merging in master. @andrewor14 if you feel strongly about it, I can
push a commit to add a one-line comment.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-51164897
QA results for PR 1309:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
class
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1782#issuecomment-51165308
Jenkins, test this please.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1782#issuecomment-51165704
QA tests have started for PR 1782. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17936/consoleFull
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1780#issuecomment-51168402
IIRC if Kryo can't host the entire serialized object in the buffer, it throws
up: we saw issues with it being as high as 256 kb for some of our jobs: though
we were using a
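The failure mode described above can be illustrated with a toy fixed-capacity buffer: no matter how many small objects fit, a single object whose serialized form exceeds the capacity fails outright. The class and sizes here are hypothetical, not Kryo's actual API:

```python
# Toy model of a serializer with a fixed-size output buffer.
class FixedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = bytearray()

    def write(self, payload: bytes):
        # The whole serialized object must fit in one buffer.
        if len(payload) > self.capacity:
            raise OverflowError(
                f"object of {len(payload)} bytes exceeds "
                f"buffer of {self.capacity} bytes")
        self.data = bytearray(payload)  # buffer is reused per object

buf = FixedBuffer(32 * 1024)      # a hypothetical 32 KB buffer
buf.write(b"x" * 1024)            # small object: fits
try:
    buf.write(b"x" * (256 * 1024))  # large object: overflows
    ok = False
except OverflowError:
    ok = True
```

This is why the right buffer size depends on the largest object a job serializes, not on the total volume of data.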
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/1481#issuecomment-51168899
Looking into it. I ran the test that it was hanging on and things
completed fine. I also combed the code and didn't see anywhere where this
patch had changed how things
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/1781#issuecomment-51169641
We are running this with 8k or so :-)
---
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/1783#discussion_r15801451
--- Diff: python/pyspark/rdd.py ---
@@ -901,6 +902,97 @@ def sampleVariance(self):
1.0
return
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/1783#discussion_r15801518
--- Diff: python/pyspark/rdd.py ---
@@ -901,6 +902,97 @@ def sampleVariance(self):
1.0
return
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1648#issuecomment-51171274
Jenkins, retest this please.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15801894
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -89,6 +88,44 @@ class SQLContext(@transient val sparkContext:
SparkContext)
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1784
Set Spark SQL Hive compatibility test shuffle partitions to 2.
This should improve test runtime because the majority of the test runtime is
scheduling and task overheads.
You can merge this pull request
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1648#issuecomment-51171966
QA tests have started for PR 1648. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17938/consoleFull
---
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/1785
[SPARK-2860][SQL] Fix coercion of CASE WHEN.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/marmbrus/spark caseNull
Alternatively you can
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1785#issuecomment-51172893
QA tests have started for PR 1785. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17939/consoleFull
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1780
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1782#issuecomment-51175003
QA results for PR 1782:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1761#issuecomment-51177555
@JoshRosen As in all things you probably have to weigh benefits vs. costs?
A hypothetical merge conflict might be an important or trivial worry depending
on where it is.
Github user ueshin commented on the pull request:
https://github.com/apache/spark/pull/1586#issuecomment-51179894
@javadba, @marmbrus
I have seen the SOF case sometimes, though not with @javadba's sequence.
I can't identify the exact reason now, but I guess this is not
Github user vjovanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/1759#discussion_r15804941
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/TypedSql.scala ---
@@ -0,0 +1,202 @@
+package org.apache.spark.sql
+
+import
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/1778#issuecomment-51180541
As a meta-question, what's the theory about what implementations should go
into Spark, and which should be external? Not everything needs to be in a
core library like
GitHub user nrchandan opened a pull request:
https://github.com/apache/spark/pull/1786
[SPARK-2861] Fix Doc comment of histogram method
Tested and ready to merge.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nrchandan/spark
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1786#issuecomment-51181136
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1785#issuecomment-51181256
QA results for PR 1785:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
trait TypeWidening
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1648#issuecomment-51182851
QA results for PR 1648:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds no public classes
For more information see test
GitHub user nrchandan opened a pull request:
https://github.com/apache/spark/pull/1787
[SPARK-2862] Use shorthand range notation to avoid Scala bug
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nrchandan/spark spark-2862
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1787#issuecomment-51189878
Can one of the admins verify this patch?
---
Github user YanTangZhai closed the pull request at:
https://github.com/apache/spark/pull/1392
---
Github user YanTangZhai commented on the pull request:
https://github.com/apache/spark/pull/1392#issuecomment-51190110
@pwendell Sorry, I'm late. Please disregard this PR since #1734 has been
closed.
---
Github user YanTangZhai closed the pull request at:
https://github.com/apache/spark/pull/1244
---
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1744#issuecomment-51202476
Jenkins! Wake up and retest this please.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1744#issuecomment-51202770
QA tests have started for PR 1744. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17940/consoleFull
---
Github user rjurney commented on the pull request:
https://github.com/apache/spark/pull/455#issuecomment-51202880
@ericgarcia @srowen @MLnick
Unfortunately when I follow those directions, I still get errors. It looks
like I'll have to wait to get this functionality until its
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1744#issuecomment-51203116
See, Reynold, Jenkins is smarter than you think.
He also seems to work on East Coast time. :P
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1788
[WIP][SPARK-2167]spark-submit should return exit code based on
failure/success.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1788#issuecomment-51210280
QA tests have started for PR 1788. This patch merges cleanly. View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17941/consoleFull
---
Github user rezazadeh commented on the pull request:
https://github.com/apache/spark/pull/1778#issuecomment-51214586
Having all-pairs similarity in Spark has been requested several times, e.g.
http://bit.ly/XAFGs8, and also by @freeman-lab. This algorithm is also a part
of
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1780#issuecomment-51219643
Nah that's fine.
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/1780#discussion_r15822692
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -47,7 +47,9 @@ class KryoSerializer(conf: SparkConf)
with
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1196#issuecomment-51220473
Yep, I was just waiting on a review. If you are good with it then I'll
commit.
---
Github user MLnick commented on the pull request:
https://github.com/apache/spark/pull/455#issuecomment-51220911
It will be in release 1.1.
You should be able to check out branch-1.1 and build from source and it
should work ok.
Otherwise 1.1
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1788#issuecomment-51221073
QA results for PR 1788:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
case class
GitHub user scwf opened a pull request:
https://github.com/apache/spark/pull/1789
[sql] rename module name of hive-thriftserver
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/scwf/spark patch-1
Alternatively you can review and
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/1684#issuecomment-51220306
At this point we have released it with env variables overriding configs. I
think it would be better to just update the comment (since it's just in the code
and not user
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/1218#discussion_r15823358
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1531,18 +1532,6 @@ object SparkContext extends Logging {
throw
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1783#issuecomment-51222479
I'll try to review this later today or tomorrow.
@davies, you might want to take a look at this, too?
---
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/1744#issuecomment-51222571
There's something wrong with [the
build](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17940/consoleFull).
[info] Build timed out (after 120