Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4806#issuecomment-76343045
[Test build #28050 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28050/consoleFull)
for PR 4806 at commit
Github user marsishandsome commented on the pull request:
https://github.com/apache/spark/pull/4525#issuecomment-76342809
Hi @andrewor14, is there anything I can do for this PR?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4802#issuecomment-76347508
LGTM
@pwendell Any final look?
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4525#discussion_r25490619
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -187,47 +200,74 @@ private[history] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76348368
[Test build #28052 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28052/consoleFull)
for PR 4807 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4785#issuecomment-76348989
I see, it was failing because we stopped the event logger before stopping
the listener bus, which means we left out certain events like application end.
Thanks I'm
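The shutdown-ordering bug described above generalizes: if a sink is closed before the queue feeding it is drained, the tail of the stream (here, the application-end event) is silently dropped. A minimal sketch of that ordering hazard, with hypothetical `ListenerBus`/`EventLogger` stand-ins rather than Spark's actual classes:

```python
from queue import Queue

class EventLogger:
    """Hypothetical stand-in for an event-log sink."""
    def __init__(self):
        self.logged = []
        self.closed = False
    def on_event(self, event):
        if not self.closed:  # events arriving after close are silently dropped
            self.logged.append(event)
    def close(self):
        self.closed = True

class ListenerBus:
    """Buffers posted events and replays them to its listener on stop()."""
    def __init__(self, listener):
        self.queue = Queue()
        self.listener = listener
    def post(self, event):
        self.queue.put(event)
    def stop(self):
        while not self.queue.empty():  # drain remaining events to the listener
            self.listener.on_event(self.queue.get())

# Wrong order: close the logger first, then drain the bus -> tail events lost.
logger = EventLogger()
bus = ListenerBus(logger)
bus.post("job end")
bus.post("application end")
logger.close()
bus.stop()
assert "application end" not in logger.logged

# Right order: drain the bus before closing the logger.
logger2 = EventLogger()
bus2 = ListenerBus(logger2)
bus2.post("job end")
bus2.post("application end")
bus2.stop()
logger2.close()
assert logger2.logged == ["job end", "application end"]
```

Stopping the bus first guarantees every buffered event reaches the logger before it closes.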
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4806#issuecomment-76349422
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4805#issuecomment-76349481
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4805#issuecomment-76349478
[Test build #28049 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28049/consoleFull)
for PR 4805 at commit
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4785#issuecomment-76350121
Thanks @srowen @andrewor14
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4808#discussion_r25491455
--- Diff: python/pyspark/sql/context.py ---
@@ -620,93 +619,6 @@ def _get_hive_ctx(self):
return self._jvm.HiveContext(self._jsc.sc())
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4809#issuecomment-76351371
[Test build #28054 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28054/consoleFull)
for PR 4809 at commit
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/4811
[SPARK-5991][MLLIB] support save/load in PySpark's ALS
A simple Python wrapper to save/load `MatrixFactorizationModel`. @jkbradley
You can merge this pull request into a Git repository
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4802#issuecomment-76352327
LGTM
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4778#issuecomment-76335376
[Test build #28047 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28047/consoleFull)
for PR 4778 at commit
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/4805
[SPARK-6051][Streaming] Add ZooKeeper offset posting for
DirectKafkaInputDStream
Currently in DirectKafkaInputDStream, offset is managed by Spark Streaming
itself without ZK or Kafka involved,
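The idea can be sketched outside Spark: after each completed batch, mirror the batch's ending offsets to ZooKeeper's conventional consumer path so ZK-based monitoring tools can follow the direct stream's progress. `FakeZkClient` and `post_offsets_to_zk` below are hypothetical stand-ins (a real implementation would use a ZooKeeper client such as kazoo), not the PR's code:

```python
class FakeZkClient:
    """Hypothetical in-memory stand-in for a ZooKeeper client."""
    def __init__(self):
        self.nodes = {}
    def set(self, path, value):
        self.nodes[path] = value

def post_offsets_to_zk(zk, group, offset_ranges):
    # Mirror each partition's ending offset to the conventional
    # /consumers/<group>/offsets/<topic>/<partition> path.
    for topic, partition, until_offset in offset_ranges:
        path = f"/consumers/{group}/offsets/{topic}/{partition}"
        zk.set(path, str(until_offset))

zk = FakeZkClient()
# (topic, partition, untilOffset) tuples reported after a completed batch
post_offsets_to_zk(zk, "my-group", [("events", 0, 42), ("events", 1, 17)])
assert zk.nodes["/consumers/my-group/offsets/events/0"] == "42"
```

Posting happens after the batch completes, so the mirrored offsets only ever trail what Spark has actually processed.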
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25488980
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -256,6 +256,12 @@ private[spark] class ApplicationMaster(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4778#issuecomment-76346385
[Test build #28047 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28047/consoleFull)
for PR 4778 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76346339
[Test build #28051 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28051/consoleFull)
for PR 4807 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4778#issuecomment-76346390
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4778#issuecomment-76346651
Ok LGTM, merging into master and 1.3, thanks @elyast
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4288
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4800
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4525#discussion_r25490359
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -187,47 +200,74 @@ private[history] class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4525#discussion_r25490585
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -187,47 +200,74 @@ private[history] class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4803#discussion_r25490767
--- Diff: docs/spark-standalone.md ---
@@ -222,8 +222,7 @@ SPARK_WORKER_OPTS supports the following system
properties:
<td>false</td>
<td>
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4803#issuecomment-76348551
LGTM merging into master and 1.3, thanks.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4803
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4808#issuecomment-76350548
will create another PR for 1.2 and 1.1
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/4808
[SPARK-6055] [PySpark] fix incorrect __eq__ of DataType
The `__eq__` of DataType is not correct: the class cache is not used correctly
(a created class can not be found by its dataType), so it will create lots of
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/4809
[SPARK-6055] [PySpark] fix incorrect DataType.__eq__
The `__eq__` of DataType is not correct: the class cache is not used correctly
(a created class can not be found by its dataType), so it will create lots of
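The general technique behind the fix (this sketch is an illustration, not PySpark's actual code) is to give `DataType` a structural `__eq__`/`__hash__`, so a cache keyed by datatype instances gets hits for structurally equal types instead of minting a new class per lookup:

```python
class DataType:
    """Minimal sketch, not PySpark's actual type hierarchy."""
    def __eq__(self, other):
        # Compare by type and attributes, not identity, so two
        # structurally identical instances compare equal.
        return type(self) is type(other) and self.__dict__ == other.__dict__
    def __hash__(self):
        return hash((type(self), tuple(sorted(self.__dict__.items()))))

class IntegerType(DataType):
    pass

class ArrayType(DataType):
    def __init__(self, element_type, contains_null=True):
        self.element_type = element_type
        self.contains_null = contains_null

_class_cache = {}

def class_for(datatype):
    # With a structural __eq__/__hash__, equal datatypes hit the same
    # cache entry instead of creating a fresh class each time.
    if datatype not in _class_cache:
        _class_cache[datatype] = type("Row_%d" % len(_class_cache), (object,), {})
    return _class_cache[datatype]

a = class_for(ArrayType(IntegerType()))
b = class_for(ArrayType(IntegerType()))
assert a is b  # cache hit: no duplicate class created
```

With identity-based equality (Python's default), the second lookup would miss the cache and a new class would be created for every structurally identical datatype.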
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4811#issuecomment-76352594
[Test build #28056 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28056/consoleFull)
for PR 4811 at commit
Github user advancedxy commented on the pull request:
https://github.com/apache/spark/pull/4783#issuecomment-76352556
@shivaram I updated the gist, you can look at the result.
[gist](https://gist.github.com/advancedxy/2ae7c9cc7629f3aeb679)
Also you can just download the shell
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4801#discussion_r25488493
--- Diff: docs/mllib-linear-methods.md ---
@@ -370,6 +336,59 @@ print("Training Error = " + str(trainErr))
</div>
</div>
+### Logistic regression
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4785#issuecomment-76338610
[Test build #28046 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28046/consoleFull)
for PR 4785 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4785#issuecomment-76338633
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25488895
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -105,6 +107,14 @@ private[spark] class
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25489048
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +93,102 @@ class YarnSparkHadoopUtil extends
Github user marsishandsome commented on the pull request:
https://github.com/apache/spark/pull/4567#issuecomment-76342480
Thanks @jerryshao
GitHub user EntilZha opened a pull request:
https://github.com/apache/spark/pull/4807
[SPARK-5556][MLLib][WIP] Gibbs LDA, Refactor LDA for multiple LDA
algorithms (EM+Gibbs)
JIRA: https://issues.apache.org/jira/browse/SPARK-5556
As discussed in that issue, it would be
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76346072
add to whitelist
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76346087
ok to test
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4778
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4797#issuecomment-76346780
Merging into master, thanks.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4797
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3629#discussion_r25490289
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -469,10 +485,32 @@ private[spark] class MemoryStore(blockManager:
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/4764#issuecomment-76347215
please retest
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4525#discussion_r25490429
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -187,47 +200,74 @@ private[history] class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4525#discussion_r25490648
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -187,47 +200,74 @@ private[history] class
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4525#issuecomment-76348298
Hey @marsishandsome I think the latest changes look reasonable. Can you
rebase to master and address the latest set of comments? I would like to get
this merged soon.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4785
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4806#issuecomment-76349418
[Test build #28050 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28050/consoleFull)
for PR 4806 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4808#issuecomment-76350623
[Test build #28053 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28053/consoleFull)
for PR 4808 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4810#issuecomment-76352228
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28055/consoleFull)
for PR 4810 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-76339652
[Test build #28048 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28048/consoleFull)
for PR 4087 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-76339708
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-76339707
[Test build #28048 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28048/consoleFull)
for PR 4087 at commit
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/4806
[SPARK-6052][SQL]In JSON schema inference, we should always set
containsNull of an ArrayType to true
Always set `containsNull = true` when inferring the schema of JSON datasets. If
we set `containsNull`
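The reasoning behind the conservative rule can be illustrated outside Spark: schema inference only sees a sample of the data, so concluding `containsNull = false` from the absence of nulls in the sample can be invalidated by later records. A hypothetical sketch (not Spark's inference code):

```python
def infer_contains_null_naive(arrays):
    # Naive inference: flag containsNull only if a null was actually seen.
    return any(x is None for arr in arrays for x in arr)

# Two slices of the same JSON dataset:
sample = [[1, 2], [3]]
later_data = [[4, None]]

# Inferring from the sample alone yields containsNull=False...
assert infer_contains_null_naive(sample) is False
# ...but the full dataset does contain nulls, so reads of later records
# under that too-strict schema could mishandle them.
assert infer_contains_null_naive(sample + later_data) is True

def infer_contains_null(_arrays):
    # Safe rule when the schema is inferred from a subset of the data:
    # always report containsNull=True.
    return True
```

Because JSON carries no declared schema, there is no way to rule out future nulls, which is why the conservative choice is always safe.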
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4805#issuecomment-76342336
[Test build #28049 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28049/consoleFull)
for PR 4805 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4762#issuecomment-76342273
I think this is a minor one. Does anyone know if the change is correct?
Thanks!
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25489233
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +93,102 @@ class YarnSparkHadoopUtil extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76343507
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76346397
[Test build #28051 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28051/consoleFull)
for PR 4807 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4807#issuecomment-76346401
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3629#discussion_r25490244
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -381,6 +395,8 @@ private[spark] class MemoryStore(blockManager:
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/4810
[SPARK-6055] [PySpark] fix incorrect DataType.__eq__
The `__eq__` of DataType is not correct: the class cache is not used correctly
(a created class can not be found by its dataType), so it will create lots of
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76289777
Flaky test this time... @tdas, can you have this retested please?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76289981
Jenkins, retest this please
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4311#issuecomment-76289825
Ok, just ping us on your new patch.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4778#discussion_r25472682
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3916#issuecomment-76293224
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4288#issuecomment-76295038
Yeah, I will, thanks a lot :).
Github user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-76295682
Again, since the keytab is always sent only via HDFS API, we are ok, since
that is encrypted on a secure HDFS cluster. Only the delegation tokens are sent
via
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4729#issuecomment-76295694
@liancheng Unlike the issue of `ParquetConversions`, I think the array
insertion issue may not be just a Hive specific one. The problem is when we
create Parquet table
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4794#issuecomment-76297594
OK, this looks like a no-risk small fix. I'll adjust it and merge.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25475827
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +93,102 @@ class YarnSparkHadoopUtil extends SparkHadoopUtil
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76303572
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4798#issuecomment-76307030
[Test build #28028 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28028/consoleFull)
for PR 4798 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4786#issuecomment-76308026
@liancheng #4768 just explained why you need to do merging. The problem is,
before the reading task is launched, the different schemas are already merged
in
Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25479691
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +93,102 @@ class YarnSparkHadoopUtil extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4729#discussion_r25480542
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -254,4 +254,13 @@ private[hive] trait HiveStrategies {
case
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4789#issuecomment-76172235
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-76178118
[Test build #28002 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28002/consoleFull)
for PR 4588 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-76178126
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4791#issuecomment-76207169
[Test build #28005 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28005/consoleFull)
for PR 4791 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4791#issuecomment-76207186
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-76218835
Thanks for taking a look, Tom!
To get new tgts, we'd still need the keytab, right? I am wondering how to
get around having the keytab being shipped to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4775#issuecomment-76215068
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4775#issuecomment-76215053
[Test build #28006 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28006/consoleFull)
for PR 4775 at commit
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76215606
This passed locally. What the...
On Feb 26, 2015 8:39 AM, UCB AMPLab notificati...@github.com wrote:
Test FAILed.
Refer to this link for build results
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76204252
I think it would be good to try to give those the same treatment while
we're at it, yes. I think you're welcome to add that.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76212311
[Test build #28007 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28007/consoleFull)
for PR 4754 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4754#issuecomment-76212327
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4780#issuecomment-76214905
(PS good news, I see that `parquet-column`'s shading is set to only include
classes that are used from `fastutil`. That's great; there are only tens of
classes added.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76217929
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user jkleckner commented on a diff in the pull request:
https://github.com/apache/spark/pull/4780#discussion_r25440596
--- Diff: pom.xml ---
@@ -471,13 +471,6 @@
groupIdcom.clearspring.analytics/groupId
artifactIdstream/artifactId
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76209607
[Test build #28009 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28009/consoleFull)
for PR 4771 at commit
Github user piaozhexiu commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76209601
Done. Thanks again!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4771#issuecomment-76200389
[Test build #28008 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28008/consoleFull)
for PR 4771 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4780#issuecomment-76214312
Since the PR is the implementation of an issue resolution, if the
discussion is about the implementation it can happen here on the PR.
Shading isn't the issue in