Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
Thank you for your comments; I will close this PR.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/21036
No, it's a radical change that affects many integrations. I wouldn't
enable it by default for now. This is a non-critical path; it's fine to loop
twice if it's more readable.
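
A hypothetical sketch of the single-pass versus two-pass filtering being debated here (illustrative only, not the code from this PR; `splits` is a stand-in collection):

```scala
// Illustrative stand-in data; the PR operates on Hadoop input splits, not strings.
val splits: Seq[String] = Seq("part-00000", "", "part-00001", "")

// Single pass: filter out empty entries directly.
val singlePass = splits.filter(_.nonEmpty)

// Two passes: first check whether any empty entry exists, then filter.
// Marginally more work, but the intent can read more explicitly.
val twoPass = if (splits.exists(_.isEmpty)) splits.filter(_.nonEmpty) else splits
```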
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
1. There is no need to loop twice, filtering and then checking whether the length
is greater than 0.
2. This feature is meant to improve performance, so the switch should be enabled by default.
---
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/21036
@guoxiaolongzte Have you tried the config `spark.hadoopRDD.ignoreEmptySplits`?
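
For context, a minimal sketch of enabling that existing setting (assuming a SparkSession-based application; the app name and input path are placeholders, and the same config can be passed via `--conf` on spark-submit):

```scala
import org.apache.spark.sql.SparkSession

// spark.hadoopRDD.ignoreEmptySplits defaults to false; enabling it makes
// HadoopRDD-based reads skip empty input splits.
val spark = SparkSession.builder()
  .appName("ignore-empty-splits-example")                 // placeholder name
  .config("spark.hadoopRDD.ignoreEmptySplits", "true")
  .getOrCreate()

val lines = spark.sparkContext.textFile("/path/to/input") // placeholder path
println(lines.count())
```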
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/21036
Yes, this is already supported in Spark, so it seems this PR is invalid.
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
Thanks, I will try to add test cases. @felixcheung
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21036
Can one of the admins verify this patch?
---