Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19419
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82742/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19419
Merged build finished. Test FAILed.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user jomach commented on a diff in the pull request:
https://github.com/apache/spark/pull/7842#discussion_r144641913
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/pmml/export/PMMLTreeModelUtils.scala
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Softw
Github user jomach commented on a diff in the pull request:
https://github.com/apache/spark/pull/7842#discussion_r144642103
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/pmml/export/PMMLTreeModelUtils.scala
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Softw
Github user jomach commented on a diff in the pull request:
https://github.com/apache/spark/pull/7842#discussion_r144642031
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/pmml/export/PMMLTreeModelUtils.scala
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Softw
Github user jomach commented on a diff in the pull request:
https://github.com/apache/spark/pull/7842#discussion_r144642055
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/pmml/export/PMMLTreeModelUtils.scala
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Softw
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19419
**[Test build #82742 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82742/testReport)**
for PR 19419 at commit
[`5c76b91`](https://github.com/apache/spark/commit/5
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19451
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19451
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82740/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19451
**[Test build #82740 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82740/testReport)**
for PR 19451 at commit
[`5facb93`](https://github.com/apache/spark/commit/5
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18747
**[Test build #82746 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82746/testReport)**
for PR 18747 at commit
[`750b230`](https://github.com/apache/spark/commit/75
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/19487
> If it does use it, it'll handle an invalid entry in setupJob/setupTask by
throwing an exception there.
This should currently happen and `hasValidPath` does not prevent it.
That is, if
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/19487
The more I see of the committer internals, the less confident I am about
understanding any of it.
If your committer isn't writing stuff out, it doesn't need to have any
value of mapred.out
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/19487#discussion_r144633605
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -48,6 +49,16 @@ class HadoopMapReduceCommitProtocol(j
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18979
**[Test build #82745 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82745/testReport)**
for PR 18979 at commit
[`c0e81a1`](https://github.com/apache/spark/commit/c0
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/18979
done. Not writing 0-byte files will offer a significant speedup against
object stores, where a single call to getFileStatus() can take hundreds of
milliseconds. I look forward to it.
---
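The idea above can be sketched in a few lines: defer opening the output until the first row arrives, so an empty partition never creates a file and never triggers a store round-trip. The names here are illustrative, not Spark's actual writer API.

```scala
import java.io.Writer

// Only open the output once there is at least one row; an empty partition
// therefore produces no 0-byte file and makes no store calls at all.
// `open` stands in for whatever creates the output stream (hypothetical).
def writePartition(rows: Iterator[String], open: () => Writer): Unit = {
  if (rows.hasNext) {
    val out = open()
    try rows.foreach(r => out.write(r + "\n"))
    finally out.close()
  }
}
```

The point of the guard is that the expensive store interaction is tied to `open()`, which is simply never invoked for empty partitions.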
Github user jomach commented on the issue:
https://github.com/apache/spark/pull/19485
@HyukjinKwon I came up with this. What do you think? What I don't like about
it is that I did not find any way to read the Javadocs into the markdown, so that
we don't have duplicates. Any idea, or should we
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/19487
I will change from `test:` to `::invalid::` to explicitly indicate an
invalid path (I picked the first path which gave me a parse error :) ).
On the question of whether `path` constructor pa
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19385
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19385
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82744/
Test PASSed.
---
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/19487#discussion_r144631350
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -60,15 +71,6 @@ class HadoopMapReduceCommitProtocol(j
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19385
**[Test build #82744 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82744/testReport)**
for PR 19385 at commit
[`28f511e`](https://github.com/apache/spark/commit/2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19385
**[Test build #82744 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82744/testReport)**
for PR 19385 at commit
[`28f511e`](https://github.com/apache/spark/commit/28
Github user eyalfa closed the pull request at:
https://github.com/apache/spark/pull/19481
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19385
ok to test
---
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19494#discussion_r144622642
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -104,7 +104,8 @@ case class InMemoryTableScan
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/19448
> But, if I were working on a Spark distribution at a vendor, this is
something I would definitely include because it's such a useful feature.
I concur :)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19452
**[Test build #82743 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82743/testReport)**
for PR 19452 at commit
[`94dfa85`](https://github.com/apache/spark/commit/94
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19494#discussion_r144621674
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -104,7 +104,8 @@ case class InMemoryTableScanEx
Github user maryannxue commented on the issue:
https://github.com/apache/spark/pull/19488
@cloud-fan Please see CheckAnalysis.scala:170. It checks the input
expression of each aggregate expression to make sure that they are not another
aggregate function and are deterministic.
---
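As a toy illustration of that rule (hypothetical expression classes, not Catalyst's actual `Expression` hierarchy): the check walks each aggregate's input and rejects nested aggregate calls and nondeterministic expressions.

```scala
// Minimal stand-ins for Catalyst expressions (illustrative only).
sealed trait Expr {
  def children: Seq[Expr]
  def deterministic: Boolean = true
}
case class Col(name: String) extends Expr { val children = Nil }
case class Rand() extends Expr {
  val children = Nil
  override val deterministic = false
}
case class Agg(fn: String, children: Seq[Expr]) extends Expr

// True if the expression tree contains an aggregate call anywhere.
def containsAgg(e: Expr): Boolean =
  e.isInstanceOf[Agg] || e.children.exists(containsAgg)

// True if the expression and all descendants are deterministic.
def isDeterministic(e: Expr): Boolean =
  e.deterministic && e.children.forall(isDeterministic)

// The analog of the CheckAnalysis rule described above: inputs to an
// aggregate must not themselves be aggregates and must be deterministic.
def checkAggregateInput(agg: Agg): Option[String] =
  if (agg.children.exists(containsAgg)) Some("nested aggregate function")
  else if (!agg.children.forall(isDeterministic)) Some("nondeterministic input")
  else None
```

So `sum(a)` passes, while `sum(max(a))` and `sum(rand())` are each rejected with a distinct error.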
Github user joseph-torres commented on a diff in the pull request:
https://github.com/apache/spark/pull/19452#discussion_r144620005
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingSymmetricHashJoinHelperSuite.scala
---
@@ -0,0 +1,118 @@
+/*
+ * Li
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
```
$ ldd linux/amd64/libzstd-jni.so
ldd: warning: you do not have execution permission for
`linux/amd64/libzstd-jni.so'
linux/amd64/libzstd-jni.so: /lib64/libc.so.6: version `GLIBC_2.14'
```
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144618995
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r144618498
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,37 @@ import org.apache.spark.sql
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/19451
If we have to do this all over again i'd put all rules in their own files.
Replace isn't really a great high level category because all rules at some
level replace something.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
Good news is that I can reproduce it on the amplab machine, so I'll try to
play around with the zstd-jni code a bit.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19493
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19493
Thanks! Merged to master.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19464#discussion_r144617310
--- Diff: core/src/test/scala/org/apache/spark/FileSuite.scala ---
@@ -510,4 +510,87 @@ class FileSuite extends SparkFunSuite with
LocalSparkContext {
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19464#discussion_r144617011
--- Diff: core/src/test/scala/org/apache/spark/FileSuite.scala ---
@@ -510,4 +510,87 @@ class FileSuite extends SparkFunSuite with
LocalSparkContext {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19493
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19493
LGTM pending Jenkins
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19493
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82739/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19493
**[Test build #82739 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82739/testReport)**
for PR 19493 at commit
[`03cd40a`](https://github.com/apache/spark/commit/0
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
Yeah but that would also cause it to fail locally if it were the cause, and
it passes for me. I can't really figure out from the rest of the logs if
something obvious is wrong, so I guess the best be
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19354
That's a reasonable suggestion, though the K8S integration is intended to
come back into Spark soon. Hence doing nothing here is also about the right
thing in the near term, even if it's not consiste
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/18805
This seems to be caused by an issue in the `zstd-jni` library. It probably
uses the wrong `ClassLoader` to load the native library, and as a result it
cannot find the library and load it.
---
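A hedged sketch of the failure mode described above: a native-library loader that pins one fixed `ClassLoader` fails whenever that loader cannot see the jar carrying the `.so`. Names and resource paths here are illustrative, not `zstd-jni`'s actual loading code.

```scala
// Resource lookup is classloader-relative: the same name can resolve through
// one loader and not another, which is the symptom described in the thread.
def findNativeLib(resource: String, loader: ClassLoader): Option[java.net.URL] =
  Option(loader.getResource(resource))

// A defensive loader would try several candidate loaders rather than pin one.
def locate(resource: String): Option[java.net.URL] = {
  val candidates = Seq(
    Thread.currentThread().getContextClassLoader, // may be app/task-specific
    getClass.getClassLoader                       // the loader of this code
  ).filter(_ != null)
  candidates.flatMap(l => findNativeLib(resource, l)).headOption
}
```

Falling back across loaders is a common fix for libraries that extract and load bundled native binaries.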
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
Thank you :)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19419
**[Test build #82742 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82742/testReport)**
for PR 19419 at commit
[`5c76b91`](https://github.com/apache/spark/commit/5c
Github user krishna-pandey commented on the issue:
https://github.com/apache/spark/pull/19419
@jerryshao removed whitespace at the end of line 440 in package.scala. ok to
test.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19269
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19269
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82738/
Test PASSed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
Turns out that's caused by SparkContext failing to clean up after itself
when the `UnsatisfiedLinkError` happens, so those errors are red herrings...
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19269
**[Test build #82738 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82738/testReport)**
for PR 19269 at commit
[`ac3de3c`](https://github.com/apache/spark/commit/a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19419
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82741/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19419
**[Test build #82741 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82741/testReport)**
for PR 19419 at commit
[`1e61484`](https://github.com/apache/spark/commit/1
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19419
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19419
**[Test build #82741 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82741/testReport)**
for PR 19419 at commit
[`1e61484`](https://github.com/apache/spark/commit/1e
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19448
Sure, I will, and I will note it in advance next time. I made a mistake while
trying to think of reasons for this backport.
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r144609180
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,37 @@ import org.apache.spark.sql.cat
Github user sathiyapk commented on the issue:
https://github.com/apache/spark/pull/19451
@rxin I think it would be better to keep all the rules of the "Replace
Operators" batch in a single file. So if you prefer to keep the rule in a new
file, we can move all the replace operator rule
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
I haven't been able to reproduce the issue locally, but looking at the
jenkins logs I see a bunch of exceptions like these:
```
17/10/13 06:53:26.609 dispatcher-event-loop-15 ERROR Worker
```
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18805
(I'll file a bug and send a PR for it separately, btw.)
---
Github user sathiyapk commented on the issue:
https://github.com/apache/spark/pull/19451
@gatorsmile
> Could you please add an end-to-end testsuite except.sql of
SQLQueryTestSuite.scala?
Please verify that the `except.sql` and `except.sql.out` files are enough for
the end-to-en
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19451
**[Test build #82740 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82740/testReport)**
for PR 19451 at commit
[`5facb93`](https://github.com/apache/spark/commit/5f
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
I am not really worried about this particular change. It's already merged,
and it seems a small and safe change. I am not planning to revert it.
But, in general, let's avoid merging changes
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19464#discussion_r144604878
--- Diff: core/src/test/scala/org/apache/spark/FileSuite.scala ---
@@ -510,4 +510,87 @@ class FileSuite extends SparkFunSuite with
LocalSparkContext {
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r144603628
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,37 @@ import org.apache.spark.sql
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18460
Thank you, @cloud-fan , @gatorsmile , and @viirya !!!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r144602775
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ColumnarBatchScan.scala
---
@@ -84,25 +84,45 @@ private[sql] trait ColumnarBatchScan ext
Github user barnardb commented on the issue:
https://github.com/apache/spark/pull/19354
I totally understand the reluctance to have non-ASF projects in a list
headed by "The system currently supports…". Looking at the [Powered
By](https://spark.apache.org/powered-by.html) page, it d
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18460
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144601470
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18460
LGTM, merging to master!
---
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/19448
I have a lot of sympathy for the argument that infrastructure software
shouldn't have too many backports and that those should be generally bug fixes.
But, if I were working on a Spark distribution a
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144600676
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/19448
Okay. I am sorry for this trouble. Should we revert this if you guys feel
strongly about it?
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144599111
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144598051
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144597485
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19494
@srowen do you mean replacing `contains` with `exists`? If so, could you
please explain why `exists` is a better option? Thanks.
---
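For background on the suggestion, a generic Scala illustration (not the PR's actual code): `contains` tests membership of a single value, while `exists` takes a predicate, which is more flexible and also sidesteps `contains`'s type-widening pitfall on `Seq`.

```scala
val types: Seq[Any] = Seq("IntegerType", "LongType")

// `contains` tests membership of a single value:
types.contains("IntegerType")                    // true

// `exists` evaluates a predicate per element, so richer conditions fit:
types.exists(t => t.toString.endsWith("Type"))   // true

// Pitfall: `Seq[A].contains` accepts any supertype of A, so a mismatched
// type still compiles and silently returns false:
Seq(1, 2, 3).contains("1")                       // compiles, always false
```

With `exists` the comparison is written out explicitly (e.g. `xs.exists(_ == v)`), so a type mismatch in the predicate is visible at the call site.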
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18979
Could you resolve the conflicts again?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18979#discussion_r144595826
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/BasicWriteStatsTracker.scala
---
@@ -44,20 +47,32 @@ case class BasicWri
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
@HyukjinKwon branch-2.2 is a maintenance branch; I am not sure it is
appropriate to merge this change to branch-2.2 since it is not really a bug
fix. If the doc is not accurate, we should fix the d
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r144594930
--- Diff: python/pyspark/sql/session.py ---
@@ -510,9 +511,43 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19494
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19480
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82735/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19480
Merged build finished. Test FAILed.
---
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/19494
[SPARK-22249][SQL] isin with empty list throws exception on cached DataFrame
## What changes were proposed in this pull request?
As pointed out in the JIRA, there is a bug which causes an
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19448
@steveloughran Thanks for your inputs. Totally agree on your opinions.
Spark is an infrastructure software. We have to be very careful when
backporting the PRs.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19480
**[Test build #82735 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82735/testReport)**
for PR 19480 at commit
[`61cc445`](https://github.com/apache/spark/commit/6
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19470
BTW, @cloud-fan . Could you review #18460 , too? I think we need your final
approval. :)
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19470
Thank you so much, @cloud-fan , @gatorsmile , and @viirya !
---
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19480#discussion_r144588780
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -2103,4 +2103,35 @@ class DataFrameSuite extends QueryTest with
SharedS
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18979
**[Test build #82731 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82731/testReport)**
for PR 18979 at commit
[`649f8da`](https://github.com/apache/spark/commit/6
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19488
**[Test build #82733 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82733/testReport)**
for PR 19488 at commit
[`1cca72b`](https://github.com/apache/spark/commit/1
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18979
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18979
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82731/
Test PASSed.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/19476
@jerryshao
Thanks a lot for the ping. I left comments based on my understanding. Not sure
if it's helpful :)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19488
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82733/
Test PASSed.
---