huaxingao commented on a change in pull request #34465:
URL: https://github.com/apache/spark/pull/34465#discussion_r741646263
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
##########
@@ -189,6 +189,7 @@ class JDBCOptions(
val pushDownPredicate = parameters.getOrElse(JDBC_PUSHDOWN_PREDICATE,
"true").toBoolean
// An option to allow/disallow pushing down aggregate into JDBC data source
+ // This only applies to Data Source V2 JDBC
val pushDownAggregate = parameters.getOrElse(JDBC_PUSHDOWN_AGGREGATE,
"false").toBoolean
// An option to allow/disallow pushing down LIMIT into JDBC data source
Review comment:
`pushDownLimit` is also DS v2 only. I will fix that doc in the
`pushDownSample` PR I am currently working on. The reason I want a separate PR
to fix the JDBC aggregate push down doc is that the fix needs to land in both 3.2
and 3.3, while all the other push downs (`pushDownLimit`, `pushDownSample`, and
aggregate push down for Parquet and ORC) target 3.3 only.
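
For context (not part of the diff), here is a minimal sketch of how `pushDownAggregate`
reaches the DS v2 JDBC path through a `JDBCTableCatalog`. The catalog name `h2`, the
in-memory H2 URL, and the `test.employee` table are placeholders for illustration only,
and the H2 driver is assumed to be on the classpath:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: register a DS v2 JDBC catalog so that catalog options (including
// pushDownAggregate) flow into JDBCOptions. The option has no effect on the
// DS v1 spark.read.format("jdbc") path.
val spark = SparkSession.builder()
  .appName("jdbc-aggregate-pushdown-sketch")
  .master("local[*]")
  .config("spark.sql.catalog.h2",
    "org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog")
  .config("spark.sql.catalog.h2.url", "jdbc:h2:mem:testdb")        // placeholder URL
  .config("spark.sql.catalog.h2.driver", "org.h2.Driver")          // assumes H2 on classpath
  .config("spark.sql.catalog.h2.pushDownAggregate", "true")        // the option documented above
  .getOrCreate()

// With the option enabled, aggregates such as MAX can be pushed down to the
// JDBC source instead of being computed in Spark (placeholder table/query).
spark.sql("SELECT dept, MAX(salary) FROM h2.test.employee GROUP BY dept").show()
```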