Github user zhzhan commented on the issue:
https://github.com/apache/spark/pull/18694
Closing the PR; I will work on adding a close interface for the iterator used
in SparkSQL to remove the extra overhead.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user zhzhan closed the pull request at:
https://github.com/apache/spark/pull/18694
---
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18679#discussion_r129751794
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -138,15 +147,20 @@ private UnsafeExternalSorter(
Github user zhzhan commented on the issue:
https://github.com/apache/spark/pull/17180
Will fix the unit test.
---
Github user caneGuy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18739#discussion_r129753610
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -665,10 +667,15 @@ private[spark] class TaskSetManager(
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18664
jenkins retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18664
**[Test build #79987 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79987/testReport)**
for PR 18664 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #79988 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79988/testReport)**
for PR 18746 at commit
GitHub user ajaysaini725 opened a pull request:
https://github.com/apache/spark/pull/18746
Implemented UnaryTransformer in Python
## What changes were proposed in this pull request?
Implemented UnaryTransformer in Python
Github user ajaysaini725 commented on the issue:
https://github.com/apache/spark/pull/18746
@jkbradley @thunterdb @MrBago Could you please review this?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17180
**[Test build #79989 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79989/testReport)**
for PR 17180 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18746
**[Test build #79988 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79988/testReport)**
for PR 18746 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18746
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79988/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18744
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18744
**[Test build #79986 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79986/testReport)**
for PR 18744 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18744
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79986/
Test PASSed.
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/18305
Merged to master. Thanks @sethah, and thanks all for reviews.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18305
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79962/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18305
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18731
**[Test build #79963 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79963/testReport)**
for PR 18731 at commit
Github user facaiy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18554#discussion_r129562237
--- Diff: python/pyspark/ml/tests.py ---
@@ -1255,6 +1255,24 @@ def test_output_columns(self):
output = model.transform(df)
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18610#discussion_r129574777
--- Diff: mllib/src/main/scala/org/apache/spark/ml/util/ReadWrite.scala ---
@@ -309,6 +313,23 @@ private[ml] object DefaultParamsWriter {
val
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79955/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18503
**[Test build #79956 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79956/testReport)**
for PR 18503 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18709
**[Test build #79959 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79959/testReport)**
for PR 18709 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18655
**[Test build #79957 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79957/testReport)**
for PR 18655 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79957/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18709
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18738
Can one of the admins verify this patch?
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18695#discussion_r129516461
--- Diff: python/pyspark/context.py ---
@@ -195,7 +195,7 @@ def _do_init(self, master, appName, sparkHome, pyFiles,
environment, batchSize,
#
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18659#discussion_r129522956
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala
---
@@ -132,6 +135,61 @@ private[sql] object ArrowConverters {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18554
**[Test build #79964 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79964/testReport)**
for PR 18554 at commit
Github user facaiy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18554#discussion_r129562189
--- Diff: python/pyspark/ml/classification.py ---
@@ -1517,20 +1517,22 @@ class OneVsRest(Estimator, OneVsRestParams,
MLReadable, MLWritable):
Github user jreback commented on the issue:
https://github.com/apache/spark/pull/18664
I cannot repro this; can you show what ``item['timezone']`` is?
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/17180
Is it better to fix this test instead of removing it?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18305
**[Test build #79962 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79962/testReport)**
for PR 18305 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18305
---
GitHub user caneGuy opened a pull request:
https://github.com/apache/spark/pull/18739
[WIP][SPARK-21539][CORE] Job should not be aborted when dynamic allocation
is enabled or spark.executor.instances is larger than the number currently
allocated by YARN
## What changes were
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18739
Can one of the admins verify this patch?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18737
cc @felixcheung and @shivaram, I was initially worried about this behaviour
change, but I think this was rather a workaround for a known bug that should be
removed, and we have warned properly so
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18709
**[Test build #79959 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79959/testReport)**
for PR 18709 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18652
@gatorsmile When the flag is enabled, we don't follow Hive on
non-deterministic join conditions.
The differences are:
* Hive allows non-deterministic expressions in equi join keys
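To illustrate why non-deterministic expressions in equi join keys are problematic, here is a hypothetical sketch in plain Python (not Spark or Hive code): if the key expression is re-evaluated, the "same" row can produce different key values, so an equi join on it has no unique, repeatable result.

```python
import random

def nondet_key(x, rng):
    # hypothetical non-deterministic key expression, e.g. x + rand()
    return x + rng.randint(0, 1)

rng = random.Random(0)
# evaluating the same key expression twice for the same input row
# can yield two different values
k1 = nondet_key(1, rng)
k2 = nondet_key(1, rng)
# with a deterministic key, k1 == k2 always holds; here it may not,
# so a row may fail to match even an identical row on the other side
```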
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18737
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18503
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79956/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18709
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79959/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18503
Merged build finished. Test FAILed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18737
Hm.. this is a bigger change than I thought. I mean, the change itself
here should be correct since we support dots in columns on the Scala side, but
it looks like there are a few bugs related to dots in
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18725
@viirya @baibaichen
thank you for reviewing it.
I made a comparison test:
```
select k,k,sum(id) from (select d004 as id, floor(c010 * 1) as k,
ceil(c010) as cceila from
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18725
@heary-cao, do you get the better performance with your fix? e.g. changing RDG's
deterministic property from false to true?
```
override def deterministic: Boolean = true
```
---
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18725
@heary-cao your fix is wrong.
---
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18655#discussion_r129487716
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowWriter.scala
---
@@ -0,0 +1,383 @@
+/*
+ * Licensed to the Apache
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18652
It is a good question. Based on previous discussion, I think the Join operator
has no unique result in the non-deterministic case. The migration issue from
Hive is because this kind of query can't run
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18725
@baibaichen I agree. Looks correct.
---
Github user gczsjdy commented on the issue:
https://github.com/apache/spark/pull/18632
@cloud-fan You are right, thanks. I will close this PR.
---
GitHub user nahoj opened a pull request:
https://github.com/apache/spark/pull/18738
Typo in comment
-
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/nahoj/spark patch-1
Alternatively you can review and apply these changes as
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18725
@baibaichen
Okay, I'll try to handle this particular scenario by splitting it into two Projects.
thanks.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18738
Can you have a look for similar typos, or others in this file? We encourage
people to submit more than just one minor typo fix in a PR if possible.
---
Github user nahoj commented on the issue:
https://github.com/apache/spark/pull/18738
Sorry, I don't have time to proof-read the docs, I just saw this one typo
as it is in the summary of this much-used class.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18728
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18655
**[Test build #79960 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79960/testReport)**
for PR 18655 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79960/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18305
**[Test build #79962 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79962/testReport)**
for PR 18305 at commit
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18554#discussion_r129532746
--- Diff: python/pyspark/ml/classification.py ---
@@ -1517,20 +1517,22 @@ class OneVsRest(Estimator, OneVsRestParams,
MLReadable, MLWritable):
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/18554#discussion_r129533677
--- Diff: python/pyspark/ml/tests.py ---
@@ -1255,6 +1255,24 @@ def test_output_columns(self):
output = model.transform(df)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18736
Can one of the admins verify this patch?
---
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/18737
[SPARK-21536][R] Remove the workaround to allow dots in field names in R's
createDataFrame
## What changes were proposed in this pull request?
This PR removes the workaround for dots in
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18554
ping @holdenk @yanboliang
---
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18731
I am debugging, thanks.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18737
it's very likely we need to make sure a column name with `.` is specified
with backticks, esp. when referenced in a SQL expression...
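A minimal sketch of the backtick quoting referred to here, as a hypothetical helper in plain Python (Spark SQL's parser handles this escaping itself; the helper name and example column are illustrative only):

```python
def quote_identifier(name):
    # wrap the name in backticks, doubling any embedded backtick,
    # so a column literally named "Sepal.Length" is read as one
    # identifier rather than a field access on "Sepal"
    return "`" + name.replace("`", "``") + "`"

print(quote_identifier("Sepal.Length"))  # prints `Sepal.Length`
```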
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18513
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79961/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18513
**[Test build #79961 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79961/testReport)**
for PR 18513 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/18305
Jenkins retest this please
---
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18655#discussion_r129488362
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/arrow/ArrowConvertersSuite.scala
---
@@ -857,6 +857,449 @@ class ArrowConvertersSuite
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18725
The `HiveTableScans` strategy needs a `CatalogRelation`, but it's a
`LogicalRelation` in my case. Actually, the Hive table is an external table in
my test; I guess that's the reason.
I believe
Github user gczsjdy closed the pull request at:
https://github.com/apache/spark/pull/18632
---
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18725
@baibaichen
Yes, in my test environment:
`Time taken: 557.276 seconds, Fetched 1 row(s)`
VS
`Time taken: 5997.238 seconds, Fetched 1 row(s)`
But I'm not sure about the
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18728
merged to master
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18737
**[Test build #79958 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79958/testReport)**
for PR 18737 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18737
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/79958/
Test FAILed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18725
I think it's a `HiveTableScan`, rather than `FileSourceScanExec`?
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18737
It's a breaking change, but IMO one we need since we have quite a bit of
feedback on this.
re: test failure
```
java.lang.IllegalArgumentException: Field "Sepal_Length" does not
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18731
The reason for the UT failure is that in these two UTs we are passing
invalid JSON (note the extra closing curly brace):
-
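The kind of failure described (an extra closing curly brace making the JSON invalid) can be reproduced in a few lines of plain Python, independent of Spark:

```python
import json

valid = '{"a": 1}'
invalid = '{"a": 1}}'  # extra closing curly brace

json.loads(valid)  # parses fine
try:
    json.loads(invalid)
    parsed = True
except json.JSONDecodeError:
    # the parser rejects the trailing brace as extra data
    parsed = False
```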
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/18555
@gatorsmile @cloud-fan
I added the new test cases again, except:
```
DYN_ALLOCATION_MIN_EXECUTORS
DYN_ALLOCATION_INITIAL_EXECUTORS
DYN_ALLOCATION_MAX_EXECUTORS
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18655
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18737
**[Test build #79958 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79958/testReport)**
for PR 18737 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18652
Then, will this PR resolve the migration issue from Hive workloads?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18655
**[Test build #79955 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79955/testReport)**
for PR 18655 at commit
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/18737
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18655
**[Test build #79957 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79957/testReport)**
for PR 18655 at commit
GitHub user facaiy opened a pull request:
https://github.com/apache/spark/pull/18736
[SPARK-21481][ML] Add indexOf method for ml.feature.HashingTF
## What changes were proposed in this pull request?
Add indexOf method for ml.feature.HashingTF.
The PR is a hotfix by
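The idea behind an indexOf method for HashingTF is the hashing trick: a term's column index is its hash modulo the number of features. A rough sketch in plain Python (Spark's HashingTF uses MurmurHash3; `zlib.crc32` here is just a stable stand-in, and the function name is hypothetical):

```python
import zlib

def index_of(term, num_features=1 << 18):
    # hashing trick: map a term to a fixed column index
    return zlib.crc32(term.encode("utf-8")) % num_features

# the same term always maps to the same index, within bounds
assert index_of("spark") == index_of("spark")
assert 0 <= index_of("spark") < (1 << 18)
```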
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18731
Could you fix the bug?
---
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18725
It's another issue with non-determinism. When generating a SparkPlan in
`FileSourceStrategy`, `PhysicalOperation` is used to extract projects and
filters on top of the relation. But with
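A plain-Python sketch of why extracting a filter past a non-deterministic projection is unsafe (hypothetical, assuming the projected expression involves randomness): filtering the once-computed projection differs from a pushed-down plan that re-evaluates the expression per row.

```python
import random

rows = list(range(6))

def project(x, rng):
    # non-deterministic projection, e.g. floor(c * rand())
    return x + rng.randint(0, 1)

# plan A: project once, then filter the projected values
rng = random.Random(1)
plan_a = [v for v in (project(x, rng) for x in rows) if v % 2 == 0]

# plan B: "push" the filter down so the non-deterministic expression
# is evaluated again for the output -- the two plans can disagree
rng = random.Random(1)
plan_b = [project(x, rng) for x in rows if project(x, rng) % 2 == 0]

# plan A's output always satisfies the filter; plan B's need not
```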
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18655
Jenkins, retest this please.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18737
Yea. The test failure above itself is legitimate, but while manually running
and debugging a few more tests with some more fixes, it printed:
```
Failed
Github user daniellaah commented on the issue:
https://github.com/apache/spark/pull/18337
I also tested SVDPlusPlus on the movielens-100k dataset. The algorithm just
diverged, and the MSE on the dataset reaches 2.14748364347152E9.
I tested @lxmly's code as well; it works, but I
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18702
I guess it will probably take about a week more for my Apache account
creation (according to the doc).
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18513
**[Test build #79961 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/79961/testReport)**
for PR 18513 at commit