Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
There is a bug in HiveClientImpl about reusing cliSessionState, see
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
How big is the dependency that's getting pulled in? If we are adding more
compression codecs, maybe we should retire some old ones, or move them into a
separate package so downstream apps can
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18809
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80147/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18809
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18809
**[Test build #80147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80147/testReport)**
for PR 18809 at commit
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130788660
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18808
https://issues.apache.org/jira/browse/SPARK-21605 is added
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18555
Thanks! @heary-cao
cc @jiangxb1987 Could you take a look to ensure no behavior change will be
caused by this PR?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #80148 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80148/testReport)**
for PR 18805 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18811
Can one of the admins verify this patch?
---
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787262
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -50,13 +51,14 @@ private[spark] object CompressionCodec {
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787287
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787269
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787205
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
GitHub user zuotingbing opened a pull request:
https://github.com/apache/spark/pull/18811
[SPARK-21604][SQL] Fix the error class name used for logging; if the object
extends Logging, I suggest removing the var LOG, which is unused.
## What changes were proposed in this pull request?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130787079
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
For
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18778
Thank you, @gatorsmile and @srowen !
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18778
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18808
This may need a JIRA to track it.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18778
Thanks! Merging to master.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18810
Can one of the admins verify this patch?
---
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/18810
[SPARK-21603][SQL] Whole-stage codegen can be much slower than with
whole-stage codegen disabled when the generated function is too long
## What changes were proposed in this pull request?
Close the
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130785462
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18809
cc @felixcheung, could you take a look when you have some time?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18780
After we leave polite messages asking to close their PRs, I think we should
still keep them open for at least one more week. Although it is trivial for
authors to reopen them themselves, the feelings are different.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18809
**[Test build #80147 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80147/testReport)**
for PR 18809 at commit
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18808
cc @gslowikowski , @srowen
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18808
Can one of the admins verify this patch?
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130785255
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18806
@srowen Could you help review the code? Thanks.
---
GitHub user baibaichen opened a pull request:
https://github.com/apache/spark/pull/18808
[HOT-FIX][BUILD] Let IntelliJ IDEA correctly detect Language level and
Target byte code version
With SPARK-21592, removing source and target properties from
maven-compiler-plugin lets IntelliJ
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/18809
[SPARK-21602][R] Add map_keys and map_values functions to R
## What changes were proposed in this pull request?
This PR adds `map_values` and `map_keys` to R API.
```r
>
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130784995
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18780
Yes, I just took it out.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130784093
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130784019
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18780
Please take [SPARK-21287] out.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783643
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783264
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783022
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783003
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18779
@10110346 Can't we also do the same on order by ordinal?
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18804
LGTM
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130780780
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -117,6 +117,26 @@ class StatisticsSuite extends
Github user highfei2011 commented on the issue:
https://github.com/apache/spark/pull/18807
OK, thanks, @markhamstra.
---
Github user highfei2011 closed the pull request at:
https://github.com/apache/spark/pull/18807
---
Github user KevinZwx commented on the issue:
https://github.com/apache/spark/pull/16970
I'm a little confused by the behavior of dropDuplicates with watermark.
According to my understanding of the guide documentation, if I have the
following code, I expect it to still deduplicate
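The semantics in question can be sketched without Spark. The following is a rough, stdlib-only illustration of how watermarked deduplication is described in the guide (an assumption about the documented behavior, not Spark's implementation): a duplicate is dropped while its key is still within the watermark window, and state older than the watermark is evicted, so a very late re-arrival can be emitted again.

```python
# Rough sketch of streaming dropDuplicates-with-watermark semantics
# (illustration only, not Spark's state-store implementation).

class DedupWithWatermark:
    def __init__(self, delay):
        self.delay = delay          # watermark delay, in event-time units
        self.max_event_time = 0     # highest event time seen so far
        self.seen = {}              # key -> event time kept in state

    def process(self, key, event_time):
        """Return True if the record is emitted, False if dropped as a duplicate."""
        self.max_event_time = max(self.max_event_time, event_time)
        watermark = self.max_event_time - self.delay
        # Evict state older than the watermark.
        self.seen = {k: t for k, t in self.seen.items() if t >= watermark}
        if key in self.seen:
            return False
        self.seen[key] = event_time
        return True
```

With `delay=10`, a second `("a", 105)` after `("a", 100)` is dropped, but once the maximum event time advances past 110 the state for `"a"` is evicted, so a later `("a", 121)` is emitted again.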
Github user markhamstra commented on the issue:
https://github.com/apache/spark/pull/18807
These are maven-compiler-plugin configurations. We don't use
maven-compiler-plugin to compile Java code:
https://github.com/apache/spark/commit/74cda94c5e496e29f42f1044aab90cab7dbe9d38
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80146/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80146 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80146/testReport)**
for PR 18742 at commit
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/18779
@viirya Applying it only to `group-by ordinal` is, I think, a good idea,
but this will also result in inconsistent handling between `order-by
ordinal` and `group-by ordinal`,
and I feel
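For readers unfamiliar with ordinal references, here is a minimal illustration using stdlib sqlite3 as an analogy (not Spark SQL, whose analyzer resolves these under its own settings): an integer in `GROUP BY` or `ORDER BY` refers to a column of the select list by position.

```python
# Ordinal (positional) references in plain SQL, shown with sqlite3 as a
# stand-in: "1" means the first select-list column, here `k`.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT, v INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("b", 2), ("a", 3)])

rows = conn.execute("SELECT k, SUM(v) FROM t GROUP BY 1 ORDER BY 1").fetchall()
# rows == [("a", 4), ("b", 2)]
```

The inconsistency discussed above is about supporting such ordinals in one clause but not the other.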
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/18734
If we can reach agreement on this I'll see about trying to get our local
workarounds upstreamed into cloudpickle.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18734
BTW, I also checked that it passes tests with Python 3.6 locally.
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/16774
I'm confused by your suggestions here and in #18733.
I don't think it's appropriate to just "include" similar work that originated
in another PR and then suggest that the other PR be suspended.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80143/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18804
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80143 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80143/testReport)**
for PR 18804 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80146 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80146/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18807
Can one of the admins verify this patch?
---
GitHub user highfei2011 opened a pull request:
https://github.com/apache/spark/pull/18807
[SPARK-21601][BUILD] Modify the pom.xml file to add the Maven compiler JDK
attribute
## What changes were proposed in this pull request?
When using Maven to compile Spark, I want
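The message is truncated above. As a hedged illustration only (not the actual PR diff), Maven compiler JDK settings of the kind the title refers to are typically declared like this:

```xml
<!-- Hypothetical sketch, not the actual patch from this PR: -->
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```

As @markhamstra notes elsewhere in this thread, Spark does not use maven-compiler-plugin to compile Java code, which is why the change was questioned.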
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130777347
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
For
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80145/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80145 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80145/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18803
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80139/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18803
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18803
**[Test build #80139 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80139/testReport)**
for PR 18803 at commit
Github user ajaysaini725 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r130775557
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,44 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80145 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80145/testReport)**
for PR 18742 at commit
Github user ajaysaini725 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18746#discussion_r130775517
--- Diff: python/pyspark/ml/base.py ---
@@ -116,3 +121,44 @@ class Model(Transformer):
"""
__metaclass__ = ABCMeta
+
+
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18799
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18799
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80138/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18799
**[Test build #80138 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80138/testReport)**
for PR 18799 at commit
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18734#discussion_r130774005
--- Diff: python/pyspark/cloudpickle.py ---
@@ -220,12 +322,7 @@ def save_function(self, obj, name=None):
if name is None:
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18734#discussion_r130773575
--- Diff: python/pyspark/cloudpickle.py ---
@@ -397,42 +625,7 @@ def save_global(self, obj, name=None,
pack=struct.pack):
typ =
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18805
Re: the build failure: you can reproduce it locally by running
`./dev/test-dependencies.sh`. It's failing because a new dependency was
introduced; you need to add it to `dev/deps/spark-deps-hadoop-XXX`.
---
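The manifest check described above can be pictured with a small sketch (an assumption about what `./dev/test-dependencies.sh` effectively verifies, not its actual implementation; the jar names below are hypothetical):

```python
# Hypothetical sketch: compare the resolved dependency list against the
# checked-in manifest (dev/deps/spark-deps-hadoop-XXX in the real repo).
expected = {"dep-a-1.0.jar", "dep-b-2.0.jar"}                         # checked-in manifest
resolved = {"dep-a-1.0.jar", "dep-b-2.0.jar", "zstd-jni-1.3.2.jar"}   # after adding a new dep

missing_from_manifest = resolved - expected
if missing_from_manifest:
    print("manifest needs updating:", sorted(missing_from_manifest))
```

Any jar in the resolved set but not in the manifest fails the check, which is why adding a dependency requires updating the `dev/deps` files.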
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18806
Can one of the admins verify this patch?
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769482
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -50,13 +51,14 @@ private[spark] object CompressionCodec {
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18805
In the `Benchmark` section, the values for `Lz4` are all zeros, which is
confusing to read: at first I thought they were absolute values,
but they are supposed to be relative
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769858
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/18806
[SPARK-21600] The description of "this requires
spark.shuffle.service.enabled to be set" for the
spark.dynamicAllocation.enabled configuration item is not clear
## What changes were
Github user aosagie commented on the issue:
https://github.com/apache/spark/pull/18499
Hey @ajbozarth. Any chance you could provide a review or some guidance on
anything I can do to make this PR more amenable?
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769646
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769548
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18786
@aray, it looks like the AppVeyor tests failed due to the 1.5-hour time
limit. Would you mind closing and reopening this one to retrigger the tests?
@felixcheung, it sounds like now we are
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
Any idea what the build failure is about?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18805
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #80144 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80144/testReport)**
for PR 18805 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18805
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80144/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #80144 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80144/testReport)**
for PR 18805 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18664
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18664
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80136/
Test FAILed.
---
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
jenkins retest this please.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18664
**[Test build #80136 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80136/testReport)**
for PR 18664 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80143 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80143/testReport)**
for PR 18804 at commit
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
Please note a few minor improvements I have made compared to the old
PR, #17303:
1. Use zstd compression level 1 instead of 3, which is significantly faster.
2. Wrap the zstd
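The level trade-off in point 1 can be illustrated with stdlib zlib as a stand-in (the PR itself concerns zstd, which is not in the Python stdlib; this only shows the general speed-versus-ratio pattern the comment relies on):

```python
# Stand-in illustration with zlib: a low compression level favors speed,
# a high level favors ratio; both must round-trip losslessly.
import zlib

data = b"spark shuffle block " * 10_000

fast = zlib.compress(data, level=1)   # analogous to zstd level 1
small = zlib.compress(data, level=9)  # analogous to a higher zstd level

assert len(small) <= len(fast)        # higher level compresses at least as well
assert zlib.decompress(fast) == data == zlib.decompress(small)
```

Picking the lowest level trades some compression ratio for substantially lower CPU cost, which is the rationale given for zstd level 1 over 3.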
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
@rxin - Updated with benchmark data on our production workload.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18805
cc @dongjinleekr too.
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/18733
Features should be merged when they are reasonable and ready, not held
waiting on uncertain changes, especially when there are no conflicts. Spark is
already way too slow.
---