beliefer opened a new pull request, #36405: URL: https://github.com/apache/spark/pull/36405
### What changes were proposed in this pull request?
Currently, Spark DS V2 supports pushing the `Limit` operator down to the data source, but this behavior is controlled only by the `pushDownLimit` option. If the limit is very large, the executor will pull the entire result set from the data source, which can cause memory issues.

### Why are the changes needed?
Improve DS V2 Limit push-down.

### Does this PR introduce _any_ user-facing change?
'Yes'. DS V2 Limit push-down is now also controlled by the `maxPushDownLimit` option.

### How was this patch tested?
New tests.
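A minimal sketch of the kind of decision described above, assuming a `maxPushDownLimit` cap alongside the existing enable/disable option. The object and method names (`LimitPushDown`, `shouldPushDownLimit`) are illustrative, not Spark's actual internal API:

```scala
// Hypothetical sketch: push a LIMIT down to the data source only when
// push-down is enabled AND the limit does not exceed the configured cap.
// A limit above the cap is kept on the Spark side, so the executor never
// asks the source for an unboundedly large result set.
object LimitPushDown {
  def shouldPushDownLimit(limit: Int,
                          pushDownLimitEnabled: Boolean,
                          maxPushDownLimit: Int): Boolean =
    pushDownLimitEnabled && limit <= maxPushDownLimit
}
```

With this rule, `LIMIT 10` under a cap of 100 is pushed down, while `LIMIT 1000000` is evaluated by Spark itself.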
