Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r130534640
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
---
@@ -283,4 +283,17 @@ class CliSuite extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80110/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18668
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
The test failure above is due to `extended` being `null`. I also allowed
this case in f76af78415085cff1f1d6dd31cb97d464b4fa52b.
---
Github user bOOm-X commented on the issue:
https://github.com/apache/spark/pull/18253
retest this please
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18787#discussion_r130553883
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnarBatch.java
---
@@ -65,15 +65,42 @@
final Row row;
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18769
Throwing an exception from `SetCommand.scala` seems too aggressive, and
throwing it from `InsertIntoHiveTable` seems too late, so I log a warning in
`SetCommand.scala` instead.
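The trade-off described here (warn at set-time rather than fail later) can be sketched in isolation. This is a hypothetical illustration, not Spark's actual `SetCommand` code; the class and config-key names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of "warn early instead of throwing": a setting in an
// unsupported namespace is recorded as a warning and skipped, rather than
// raising an exception at set-time or failing much later during execution.
public class SetCommandSketch {
    private final List<String> warnings = new ArrayList<>();

    public void set(String key, String value) {
        if (key.startsWith("spark.sql.hive.")) { // hypothetical "cannot modify at runtime" namespace
            warnings.add("Cannot modify " + key + " at runtime; ignoring.");
            return; // warn and skip instead of throwing
        }
        // ... apply the setting ...
    }

    public List<String> warnings() { return warnings; }

    public static void main(String[] args) {
        SetCommandSketch cmd = new SetCommandSketch();
        cmd.set("spark.sql.hive.metastore.version", "2.1"); // warned, not thrown
        cmd.set("spark.sql.shuffle.partitions", "200");     // accepted silently
        System.out.println(cmd.warnings().size()); // prints 1
    }
}
```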
---
Github user dmvieira commented on a diff in the pull request:
https://github.com/apache/spark/pull/18765#discussion_r130572412
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2571,6 +2572,23 @@ private[spark] object Utils extends Logging {
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
For source compatibility, in the commit,
https://github.com/apache/spark/pull/18749/commits/389fc6ef788bf971f846ca36f49cea6a1c98b0d0,
I tried to move back to `extended` to show it passes the
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18528#discussion_r130539101
--- Diff: docs/running-on-mesos.md ---
@@ -153,6 +153,8 @@ can find the results of the driver from the Mesos Web
UI.
To use cluster mode, you must
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18528#discussion_r130539277
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/deploy/mesos/ui/MesosClusterPage.scala
---
@@ -76,6 +77,17 @@ private[mesos] class
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18528#discussion_r130539189
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/deploy/mesos/ui/MesosClusterPage.scala
---
@@ -76,6 +77,17 @@ private[mesos] class
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18684
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/18106
I'll fix it
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18787
**[Test build #80108 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80108/testReport)**
for PR 18787 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18787
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80108/
Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18668
Since the original PR was proposed by @vanzin, could you please review this
PR?
Actually, the prefix `spark.hadoop.` is not documented
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18668
**[Test build #80110 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80110/testReport)**
for PR 18668 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18337
This seems to have a lot of superfluous change. Why is the mean gradient
better? It is different just by a scale factor. Also disagree about not
clipping predictions but that's separate
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18323
**[Test build #80112 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80112/testReport)**
for PR 18323 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80111/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Merged build finished. Test FAILed.
---
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18748
Thanks.
This is my test setting:
3 workers, each with 40 cores, 196G memory, and 1 executor.
Data size: 480,000 users, 17,000 items
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80115 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80115/testReport)**
for PR 18749 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18789
@ash211 I still don't think this should be described as a CVE fix, because
it doesn't appear to affect Spark. It is, however, going to be necessary for
Scala 2.12. To me that's the most real
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
I will revert 389fc6e if the test passes.
---
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/18528
@srowen Could you pls have a look and merge?
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/18748
I don't get similar results to you (granted I have just tested locally).
```
scala> spark.time { userRecsAll.foreach(_ => Unit) }
Time taken: 122422 ms
scala> spark.time {
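For reference, `spark.time` above simply evaluates a block, prints the elapsed wall-clock time, and returns the block's result. A standalone equivalent (not Spark's implementation, just the same shape) looks like:

```java
import java.util.function.Supplier;
import java.util.stream.IntStream;

public class TimeSketch {
    // Mirror of SparkSession.time's behavior: evaluate f, print elapsed
    // milliseconds, and hand back f's result so timing can wrap any expression.
    static <T> T time(Supplier<T> f) {
        long start = System.nanoTime();
        T result = f.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Time taken: " + elapsedMs + " ms");
        return result;
    }

    public static void main(String[] args) {
        // Time a small computation; the result passes through unchanged.
        int sum = time(() -> IntStream.rangeClosed(1, 1000).sum());
        System.out.println(sum); // prints 500500
    }
}
```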
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18323
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80112/
Test PASSed.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18750
@gslowikowski I'll merge this if you'll make a JIRA and link it here via
the title
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18323
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18323
**[Test build #80112 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80112/testReport)**
for PR 18323 at commit
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18787#discussion_r130559563
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnarBatch.java
---
@@ -65,15 +65,42 @@
final Row row;
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18684
Merged to master
---
Github user gslowikowski commented on the issue:
https://github.com/apache/spark/pull/18750
Created https://issues.apache.org/jira/browse/SPARK-21592
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18787
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r130536572
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala
---
@@ -50,6 +50,7 @@ private[hive]
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80113 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80113/testReport)**
for PR 18749 at commit
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/18664
I don't think the Scala/Java Timestamp encoder has the same issue, because
`java.sql.Timestamp` always holds its value as an offset from `1970-01-01
00:00:00.0 UTC` regardless of timezone, the same as Spark
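The point about `java.sql.Timestamp` can be checked in isolation: the value it stores is a millisecond offset from the Unix epoch, and the default timezone only affects how it is rendered, not the stored instant. A minimal demo:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class TimestampEpochDemo {
    public static void main(String[] args) {
        // A Timestamp built from a millisecond offset keeps that exact offset
        // no matter which default timezone is in effect; only its rendering
        // (toString) is timezone-dependent.
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
        Timestamp inLa = new Timestamp(1_500_000_000_000L);

        TimeZone.setDefault(TimeZone.getTimeZone("Asia/Tokyo"));
        Timestamp inTokyo = new Timestamp(1_500_000_000_000L);

        System.out.println(inLa.getTime() == inTokyo.getTime()); // true: same instant
        System.out.println(inLa.getTime()); // 1500000000000, regardless of zone
    }
}
```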
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80111 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80111/testReport)**
for PR 18749 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80113/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80113 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80113/testReport)**
for PR 18749 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
What is the compatibility concern?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18749
Here, both comments by Sean,
https://github.com/apache/spark/pull/18749#discussion_r129929153 and
https://github.com/apache/spark/pull/18749#discussion_r129927520.
It looks like we are okay
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18664
I merged your changes @ueshin , but having timezone as an Option this way
makes me a little nervous. It will be easy for people to omit it and in doing
so won't cause an immediate failure, but
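The concern here (an optional timezone that callers can silently omit) can be illustrated with a small hypothetical API, using Java's `Optional` as a stand-in for Scala's `Option`. The function and fallback are invented for illustration, not the PR's actual code:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.util.Optional;

public class TimezoneOptionSketch {
    // Hypothetical formatter: when no zone is supplied it silently falls back
    // to UTC instead of failing, which is exactly the "easy to omit, no
    // immediate failure" hazard being described.
    static String render(long epochMillis, Optional<String> zoneId) {
        String zone = zoneId.orElse("UTC"); // silent fallback, no error
        return Instant.ofEpochMilli(epochMillis)
            .atZone(ZoneId.of(zone))
            .toLocalDateTime()
            .toString();
    }

    public static void main(String[] args) {
        // Both calls succeed; the omitted-zone call quietly changes behavior.
        System.out.println(render(0L, Optional.of("Asia/Tokyo"))); // 1970-01-01T09:00
        System.out.println(render(0L, Optional.empty()));          // 1970-01-01T00:00
    }
}
```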
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18647
cc @holdenk too ...
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18664
**[Test build #80136 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80136/testReport)**
for PR 18664 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18668
`spark.hadoop.` has existed for ages, I'm kinda surprised it's not properly
documented. My change didn't add it, it just centralized its use.
As for this particular PR, I'm not so sure about
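The `spark.hadoop.` mechanism referred to here copies any Spark property with that prefix, prefix stripped, into the Hadoop `Configuration`. A plain-Java sketch of the idea (this mirrors the behavior, not Spark's actual implementation, which writes into a real `org.apache.hadoop.conf.Configuration`):

```java
import java.util.HashMap;
import java.util.Map;

public class HadoopPrefixSketch {
    static final String PREFIX = "spark.hadoop.";

    // Copy every "spark.hadoop.*" entry into a Hadoop-style config map,
    // stripping the prefix; non-matching Spark properties are left alone.
    static Map<String, String> extractHadoopConf(Map<String, String> sparkConf) {
        Map<String, String> hadoopConf = new HashMap<>();
        for (Map.Entry<String, String> e : sparkConf.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                hadoopConf.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return hadoopConf;
    }

    public static void main(String[] args) {
        Map<String, String> spark = new HashMap<>();
        spark.put("spark.hadoop.fs.defaultFS", "hdfs://nn:8020");
        spark.put("spark.executor.memory", "4g");
        System.out.println(extractHadoopConf(spark)); // prints {fs.defaultFS=hdfs://nn:8020}
    }
}
```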
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18803
**[Test build #80139 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80139/testReport)**
for PR 18803 at commit
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/16774
I'm confused by your suggestions here and in #18733.
I don't think it's appropriate to just "include" similar work that originated
from another PR and then suggest that the other PR be suspended.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18734
BTW, I also checked it passes tests with Python 3.6 in my local.
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/18734
If we can reach agreement on this I'll see about trying to get our local
workarounds upstreamed into cloudpickle.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130780780
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -117,6 +117,26 @@ class StatisticsSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130787079
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
For
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787262
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -50,13 +51,14 @@ private[spark] object CompressionCodec {
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787287
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18811
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18805
**[Test build #80148 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80148/testReport)**
for PR 18805 at commit
Github user yaooqinn commented on the issue:
https://github.com/apache/spark/pull/18668
There is a bug in HiveClientImpl about reusing cliSessionState, see
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18778
Thank you, @gatorsmile and @srowen !
---
GitHub user zuotingbing opened a pull request:
https://github.com/apache/spark/pull/18811
[SPARK-21604][SQL] Wrong class name used for the log; also, if the object
extends Logging, I suggest removing the unused var LOG.
## What changes were proposed in this pull request?
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787269
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130787205
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/18806#discussion_r130788660
--- Diff: docs/configuration.md ---
@@ -1638,7 +1638,7 @@ Apart from these, the following properties are also
available, and may be useful
Github user baibaichen commented on the issue:
https://github.com/apache/spark/pull/18808
https://issues.apache.org/jira/browse/SPARK-21605 is added
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18809
**[Test build #80147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80147/testReport)**
for PR 18809 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18809
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80147/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18809
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18810
Can one of the admins verify this patch?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18778
Thanks! Merging to master.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18809
cc @felixcheung, could you take a look when you have some time?
---
GitHub user eatoncys opened a pull request:
https://github.com/apache/spark/pull/18810
[SPARK-21603][SQL] Whole-stage codegen can be much slower than with
whole-stage codegen disabled when the generated function is too long
## What changes were proposed in this pull request?
Close the
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130785462
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18780
After we leave polite messages to close their PRs, I think we should still
keep them open at least one more week. Although it is trivial for the authors
to reopen them themselves, the feelings are different.
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18808
This may need a JIRA to track it.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18778
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18555
Thanks! @heary-cao
cc @jiangxb1987 Could you take a look to ensure no behavior change will be
caused by this PR?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
How big is the dependency that's getting pulled in? If we are adding more
compression codecs maybe we should retire some old ones, or move them into a
separate package so downstream apps can
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/18313
Oh, you're right; I overlooked that it only holds all of the models for a
single split. In that case, I agree it could be problematic to keep all in
memory by default. How does this sound then:
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18630#discussion_r130754099
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -1315,6 +1294,80 @@ private[spark] object SparkSubmitUtils {
}
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18630#discussion_r130754644
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -66,4 +70,16 @@ object DriverWrapper {
System.exit(-1)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18799
**[Test build #80138 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80138/testReport)**
for PR 18799 at commit
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/18805#discussion_r130769858
--- Diff: core/src/main/scala/org/apache/spark/io/CompressionCodec.scala ---
@@ -216,3 +218,30 @@ private final class SnappyOutputStreamWrapper(os:
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/18806
[SPARK-21600] The description of "this requires
spark.shuffle.service.enabled to be set" for the
spark.dynamicAllocation.enabled configuration item is not clear
## What changes were
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18799
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18799
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80138/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18803
**[Test build #80139 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80139/testReport)**
for PR 18803 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18807
Can one of the admins verify this patch?
---
GitHub user highfei2011 opened a pull request:
https://github.com/apache/spark/pull/18807
[SPARK-21601][BUILD] Modify the pom.xml file to raise the Maven compiler
JDK attribute
## What changes were proposed in this pull request?
When using maven to compile spark, I want
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783264
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18804
cc @wzhfy @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18804
**[Test build #80135 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80135/testReport)**
for PR 18804 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18253#discussion_r130759258
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2379,8 +2382,13 @@ class SparkContext(config: SparkConf) extends
Logging {
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18805
Any benchmark data?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18664
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80134/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18664
Merged build finished. Test PASSed.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18805
cc @dongjinleekr too.
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/18733
Features should be merged when they are reasonable and ready, rather than
waiting on uncertain changes, especially when there are no conflicts. Spark is
already way too slow.
---
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18805
Please note a few minor improvements I have made compared to the old
PR #17303:
1. Use zstd compression level 1 instead of 3, which is significantly faster.
2. Wrap the zstd
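The "wrap the codec stream" point in item 2 is the standard buffering pattern Spark uses around its codec streams: small per-record writes hit an in-memory buffer instead of the native codec. A self-contained sketch using `java.util.zip` as a stand-in (with zstd-jni, a zstd stream at level 1 would slot in where `Deflater` appears; this is not the PR's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class CodecWrapSketch {
    // Compress with an explicit (fast) level and an explicit buffer size,
    // so many small writes are batched before reaching the codec.
    static byte[] compress(byte[] data, int level) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DeflaterOutputStream out =
            new DeflaterOutputStream(bos, new Deflater(level), 32 * 1024);
        out.write(data);
        out.close(); // flushes and finishes the compressed stream
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] data) throws IOException {
        InflaterInputStream in =
            new InflaterInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) bos.write(buf, 0, n);
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[100_000]; // highly compressible zeros
        byte[] fast = compress(payload, 1); // level 1: faster, usually larger
        byte[] best = compress(payload, 9); // level 9: slower, usually smaller
        System.out.println(Arrays.equals(decompress(fast), payload)); // true
        System.out.println(fast.length + " vs " + best.length);
    }
}
```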