Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3796#issuecomment-68092701
[Test build #24812 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24812/consoleFull)
for PR 3796 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3796#issuecomment-68092702
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/3797#issuecomment-68092844
@JoshRosen Sorry, I forgot to describe this patch. I have created a JIRA
for it; can you take a look?
---
If your project is set up for it, you can reply to this
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-68093197
Hi @koeninger, several simple questions:
1. How are RDD partitions mapped to Kafka partitions? Is each Kafka
partition an RDD partition?
2. How to do
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3797#issuecomment-68093199
I can take a look at this later this week. It would probably be a good
idea for someone more familiar with the YARN code to take a look, too, since
they might also be
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/3799
[SPARK-1507][YARN] Specify number of cores for AM
I added the configurations below:
spark.yarn.am.cores/SPARK_MASTER_CORES/SPARK_DRIVER_CORES for yarn-client
mode;
spark.driver.cores for
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/3686#issuecomment-68093593
Hi all, I accidentally deleted my repository, so I created a new patch,
#3799, for it.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3799#issuecomment-68093631
Can one of the admins verify this patch?
---
GitHub user chenghao-intel opened a pull request:
https://github.com/apache/spark/pull/3800
[SPARK-4967] [SQL] File name with comma will cause exception for
SQLContext.parquetFile
This is a workaround to support `,` in Parquet file names; however, we
need to update
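The likely root of the exception is that several path-handling entry points treat a string as a comma-separated list of paths, so a literal `,` inside a single file name gets split in two. A quick self-contained Python illustration of that failure mode (hypothetical paths, not Spark's actual parsing code):

```python
def split_path_list(paths: str):
    """Naive comma-splitting, as a comma-separated path-list
    parser might do. Illustrative only."""
    return paths.split(",")

# One real file whose name contains a comma...
result = split_path_list("/data/part-0,1.parquet")
# ...is misread as two non-existent paths.
print(result)  # ['/data/part-0', '1.parquet']
```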
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3800#issuecomment-68094002
[Test build #24814 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24814/consoleFull)
for PR 3800 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3687#issuecomment-68094079
Just realized that my last comment was a bit confusing, since SPARK-1600 is
not related to the FileInputStream ManualClock fix. I'll file a new
improvement JIRA to
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3687#discussion_r22269847
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
@@ -281,34 +278,45 @@ class CheckpointSuite extends TestSuiteBase {
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68095076
+1 LGTM. I remember this came up at least once, so good to guard against it
directly.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68095313
[Test build #24813 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24813/consoleFull)
for PR 3795 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68095317
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3800#issuecomment-68095357
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3800#issuecomment-68095355
[Test build #24814 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24814/consoleFull)
for PR 3800 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68095691
[Test build #24815 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24815/consoleFull)
for PR 3784 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68095891
[Test build #24816 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24816/consoleFull)
for PR 3784 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68096064
[Test build #24818 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24818/consoleFull)
for PR 3467 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68096063
[Test build #24817 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24817/consoleFull)
for PR 3784 at commit
Github user tsudukim commented on a diff in the pull request:
https://github.com/apache/spark/pull/3467#discussion_r22270614
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -180,14 +176,15 @@ private[history] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2405#issuecomment-68096991
[Test build #24819 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24819/consoleFull)
for PR 2405 at commit
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/3801
[SPARK-1600] Refactor FileInputStream tests to remove Thread.sleep() calls
and SystemClock usage
This PR refactors Spark Streaming's FileInputStream tests to remove uses of
Thread.sleep() and
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3801#issuecomment-68097807
These changes are split off from #3687, a larger PR of mine which tried to
remove all uses of Thread.sleep() in the streaming tests.
It may look like there are
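The pattern being adopted here, replacing wall-clock sleeps with a manually advanced clock, can be sketched in miniature. This is a hypothetical Python toy, not Spark's actual Scala ManualClock:

```python
class ManualClock:
    """A clock that only moves when the test tells it to."""
    def __init__(self):
        self.time_ms = 0

    def advance(self, ms):
        self.time_ms += ms

def batches_due(clock, batch_interval_ms):
    """How many batch boundaries have passed on this clock."""
    return clock.time_ms // batch_interval_ms

# Instead of Thread.sleep(3000) and hoping three batches ran,
# the test advances the clock deterministically:
clock = ManualClock()
clock.advance(3000)
assert batches_due(clock, 1000) == 3
```

The test now controls time itself, so it cannot be flaky on a slow machine and does not waste real seconds sleeping.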
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3801#discussion_r22271205
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
@@ -281,102 +279,130 @@ class CheckpointSuite extends TestSuiteBase
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3801#issuecomment-68097905
[Test build #24820 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24820/consoleFull)
for PR 3801 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3801#discussion_r22271345
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
@@ -281,102 +279,130 @@ class CheckpointSuite extends TestSuiteBase
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3801#discussion_r22271355
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala ---
@@ -281,102 +279,130 @@ class CheckpointSuite extends TestSuiteBase
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098294
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098292
[Test build #24815 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24815/consoleFull)
for PR 3784 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68098394
[Test build #24818 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24818/consoleFull)
for PR 3467 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68098398
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098518
[Test build #24816 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24816/consoleFull)
for PR 3784 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098521
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098625
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68098622
[Test build #24817 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24817/consoleFull)
for PR 3784 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2405#issuecomment-68099780
[Test build #24819 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24819/consoleFull)
for PR 2405 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2405#issuecomment-68099783
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3801#issuecomment-68100834
[Test build #24820 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24820/consoleFull)
for PR 3801 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3801#issuecomment-68100836
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3661#discussion_r22272234
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/ContextWaiter.scala ---
@@ -17,30 +17,63 @@
package org.apache.spark.streaming
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/3464#issuecomment-68101164
ping @tdas
---
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/3797#issuecomment-68103677
@XuTingjun yes, I agree with you. We should run parseArgs before using the
amMemory and executorMemory configs, because parseArgs can change these
values from args.
---
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68105414
Hi @liancheng, I admit my PR is more complicated, but this one only covers
three cases. I think we'd better add a separate rule to optimize And/Or in
SQL for as many as
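A standalone And/Or simplification rule of the kind proposed here rewrites boolean expression trees bottom-up. A toy Python sketch (not Catalyst's API; tuples stand in for expression nodes and Python booleans for TrueLiteral/FalseLiteral):

```python
def simplify(expr):
    """Simplify a nested ('and'|'or', left, right) tuple tree.
    Leaves are either Python booleans or opaque predicate strings."""
    if not isinstance(expr, tuple):
        return expr
    op, l, r = expr
    l, r = simplify(l), simplify(r)  # recurse bottom-up
    if op == "and":
        if l is True:
            return r                 # true AND x  -> x
        if r is True:
            return l
        if l is False or r is False:
            return False             # false AND x -> false
    if op == "or":
        if l is False:
            return r                 # false OR x -> x
        if r is False:
            return l
        if l is True or r is True:
            return True              # true OR x -> true
    return (op, l, r)

print(simplify(("and", True, ("or", "a > 1", False))))  # prints: a > 1
```

Catalyst's real rule would pattern-match on And/Or case classes inside a transform, but the shape of the rewrite is the same.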
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68105479
To note, my suggested solution would look more like this in HashPartitioner:
```scala
def getPartition(key: Any): Int = key match {
  case null => 0
```
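The snippet above is cut off; the idea (route null keys to partition 0, hash everything else into a non-negative partition index) can be sketched self-contained in Python. Hypothetical names, not Spark's implementation:

```python
def get_partition(key, num_partitions):
    """Null-safe hash partitioning: None keys go to partition 0,
    all other keys are hashed into [0, num_partitions)."""
    if key is None:
        return 0
    # Python's % already yields a non-negative result for a positive
    # divisor, so no separate nonNegativeMod helper is needed here.
    return hash(key) % num_partitions

print(get_partition(None, 8))  # 0
```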
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3779#issuecomment-68105490
@davies can you add a unit test that fails in the old code but works with
your code? It would be helpful to more clearly document the exact bug.
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3787#issuecomment-68105645
This looks good - thanks @sarutak and @srowen!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3787
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3786#issuecomment-68106101
I agree with Mark on this. We should always try to identify root causes as
a first step.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68106662
[Test build #24821 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24821/consoleFull)
for PR 3784 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/3778#discussion_r22274369
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -293,6 +295,380 @@ object OptimizeIn extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68108140
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68108135
[Test build #24821 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24821/consoleFull)
for PR 3784 at commit
GitHub user dennyglee opened a pull request:
https://github.com/apache/spark/pull/3802
Update README.md
Corrected link to the Building Spark with Maven page from its original
(http://spark.apache.org/docs/latest/building-with-maven.html) to the current
page
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3802#issuecomment-68108716
Can one of the admins verify this patch?
---
GitHub user freeman-lab opened a pull request:
https://github.com/apache/spark/pull/3803
[SPARK-4969] [STREAMING] [PYTHON] Add binaryRecords to streaming
In Spark 1.2 we added a `binaryRecords` input method for loading flat
binary data. This format is useful for numerical array
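The flat binary record format that `binaryRecords` targets is simply a file split into equal-length records. A self-contained Python sketch of reading such data with plain `struct` (no Spark involved), to show the format itself:

```python
import struct

def read_binary_records(data: bytes, record_length: int):
    """Split a flat byte string into fixed-length records,
    mirroring what a binaryRecords-style reader does per split."""
    if len(data) % record_length != 0:
        raise ValueError("data is not a whole number of records")
    return [data[i:i + record_length]
            for i in range(0, len(data), record_length)]

# Three little-endian float64 values, one per 8-byte record.
data = struct.pack("<3d", 1.0, 2.0, 3.0)
records = read_binary_records(data, 8)
values = [struct.unpack("<d", r)[0] for r in records]
print(values)  # [1.0, 2.0, 3.0]
```

Because every record has the same length, splits can start at any record boundary without scanning for delimiters, which is what makes the format attractive for numerical arrays.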
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68111500
[Test build #24822 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24822/consoleFull)
for PR 3803 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68111519
[Test build #24822 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24822/consoleFull)
for PR 3803 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68111521
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68111695
[Test build #24823 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24823/consoleFull)
for PR 3803 at commit
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68111992
cc @shivaram
---
GitHub user nchammas opened a pull request:
https://github.com/apache/spark/pull/3804
[EC2] Update mesos/spark-ec2 branch to branch-1.3
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nchammas/spark patch-2
Alternatively you
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68112020
[Test build #24824 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24824/consoleFull)
for PR 3804 at commit
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68113162
LGTM
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68113297
[Test build #24823 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24823/consoleFull)
for PR 3803 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-68113298
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68113654
[Test build #24824 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24824/consoleFull)
for PR 3804 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68113656
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3802#issuecomment-68114038
Good catch; I'm going to merge this into `master` (1.3.0) and `branch-1.2`
(1.2.1). Thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3802
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3793#discussion_r22275672
--- Diff: ec2/spark_ec2.py ---
@@ -255,6 +255,7 @@ def get_spark_shark_version(opts):
"1.0.1": "1.0.1",
"1.0.2": "1.0.2",
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3793#issuecomment-68114155
LGTM, so I'll merge this into `master` (1.3.0).
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3793
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3804#issuecomment-68114214
LGTM, too, so I'm going to merge this into `master` (1.3.0). Thanks!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3804
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3651#issuecomment-68114368
The only issue with using append = true is that multiple test invocations
will just keep appending to the file, potentially making debugging a little
more confusing.
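The concern (output accumulating across test runs when the file is opened with append = true) reproduces in a few lines of Python; the file name is hypothetical, not the Spark test in question:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test-output.log")

def run_test_invocation(path):
    # "a" (append) keeps output from prior invocations around,
    # so repeated runs accumulate lines instead of starting fresh.
    with open(path, "a") as f:
        f.write("result line\n")

run_test_invocation(path)
run_test_invocation(path)
with open(path) as f:
    lines = f.readlines()
print(len(lines))  # 2: the second run sees the first run's output too
```

Truncating ("w") on each invocation, or writing into a fresh temporary directory per run, avoids the stale-output ambiguity when debugging.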
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3466#discussion_r22275938
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingSource.scala ---
@@ -35,6 +35,15 @@ private[streaming] class StreamingSource(ssc:
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3466#discussion_r22275955
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingSource.scala ---
@@ -55,19 +64,31 @@ private[streaming] class StreamingSource(ssc:
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3466#discussion_r22275969
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingSource.scala ---
@@ -55,19 +64,31 @@ private[streaming] class StreamingSource(ssc:
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3466#discussion_r22275976
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingSource.scala ---
@@ -55,19 +64,31 @@ private[streaming] class StreamingSource(ssc:
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3466#issuecomment-68115893
Just a couple more comments about making the names more consistent with
existing ones. Otherwise I approve of how `registerGauge` works now.
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3464#discussion_r22275984
--- Diff: docs/streaming-programming-guide.md ---
@@ -66,7 +66,6 @@ main entry point for all streaming functionality. We
create a local StreamingCon
{%
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3464#discussion_r22276020
--- Diff:
streaming/src/test/scala/org/apache/spark/streamingtest/ImplicitSuite.scala ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3464#issuecomment-68116003
LGTM, except one comment.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2994#issuecomment-68116052
@harishreedharan Since this feature involves a public API, it requires a
design doc and some discussion. Could you make one, so that a few of us can
take a look and discuss
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3726#discussion_r22276050
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/util/WriteAheadLogSuite.scala
---
@@ -182,16 +182,34 @@ class WriteAheadLogSuite extends FunSuite
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3726#discussion_r22276054
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/ReceivedBlockHandler.scala
---
@@ -178,7 +178,7 @@ private[streaming] class
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3726#issuecomment-68116279
Looking good, except for one comment (and one optional one) in the test suite.
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68116658
@aarondav Good idea, I'll make that change.
Note that we can't do a similar fix for arrays: many PairRDDFunctions
methods rely on being able to use keys to
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68117017
[Test build #24825 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24825/consoleFull)
for PR 3795 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68117045
Alright, I've updated this to support Enums as @aarondav has described and
have strengthened the array error-checking to prohibit most uses of arrays as
keys in
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3795#issuecomment-68117310
I just realized that some of this error-checking for arrays might not work
for Java API users due to type erasure / fake class manifests. If that's the
case, we might
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/3805
[SPARK-4970] Fix an implicit bug in SparkSubmitSuite
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/maropu/spark SparkSubmitBugFix
Alternatively
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68117570
The test 'includes jars passed in through --jars' in SparkSubmitSuite fails
when spark.executor.memory is set to more than 512 MiB in
conf/spark-defaults.conf.
An
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3805#issuecomment-68117596
Can one of the admins verify this patch?
---
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3784#issuecomment-68117927
Hi @liancheng, my PR was originally also not limited to Filter; I used
`transformExpressionsDown` from my first version. The title of my first
version is not accurate :)
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/3778#discussion_r22276984
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -293,6 +295,380 @@ object OptimizeIn extends
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/3782#issuecomment-68118211
Understood. I went back to the old Pregel API.
I'll also check #1217 later :)
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3464#discussion_r22277047
--- Diff:
streaming/src/test/scala/org/apache/spark/streamingtest/ImplicitSuite.scala ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software