MaxGekk commented on a change in pull request #29396:
URL: https://github.com/apache/spark/pull/29396#discussion_r468016481
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCTableCatalogSuite.scala
##########
@@ -107,6 +111,41 @@ class JDBCTableCatalogSuite extends QueryTest with SharedSparkSession {
}
}
+ test("simple scan") {
+ checkAnswer(sql("SELECT name, id FROM h2.test.people"), Seq(Row("fred", 1), Row("mary", 2)))
+ }
+
+ test("scan with filter push-down") {
+ val df = spark.table("h2.test.people").filter("id > 1")
+ val filters = df.queryExecution.optimizedPlan.collect {
+ case f: Filter => f
+ }
+ assert(filters.isEmpty)
+ checkAnswer(df, Row("mary", 2))
+ }
+
+ test("scan with column pruning") {
+ val df = spark.table("h2.test.people").select("id")
+ val scan = df.queryExecution.optimizedPlan.collectFirst {
+ case s: DataSourceV2ScanRelation => s
+ }.get
+ assert(scan.schema.names.sameElements(Seq("ID")))
+ checkAnswer(df, Seq(Row(1), Row(2)))
+ }
+
+ test("scan with filter push-down and column pruning") {
+ val df = spark.table("h2.test.people").filter("id > 1").select("name")
+ val filters = df.queryExecution.optimizedPlan.collect {
+ case f: Filter => f
+ }
+ assert(filters.isEmpty)
Review comment:
Could you explain, please, why you expect `filters` to be empty? In which
cases should it fail?
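For reference, a minimal sketch of how the expectation behind this assert
could be made explicit in the test; the assert message is illustrative:

    // The optimizer's V2 pushdown rule (V2ScanRelationPushDown) hands 'id > 1'
    // to the JDBC source; a Filter node only survives in the optimized plan if
    // the source cannot handle the predicate, so an empty collection here
    // means the pushdown succeeded.
    assert(filters.isEmpty, "'id > 1' should be pushed down to the JDBC source")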
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCTableCatalogSuite.scala
##########
@@ -107,6 +111,41 @@ class JDBCTableCatalogSuite extends QueryTest with SharedSparkSession {
}
}
+ test("simple scan") {
+ checkAnswer(sql("SELECT name, id FROM h2.test.people"), Seq(Row("fred", 1), Row("mary", 2)))
+ }
+
+ test("scan with filter push-down") {
+ val df = spark.table("h2.test.people").filter("id > 1")
+ val filters = df.queryExecution.optimizedPlan.collect {
+ case f: Filter => f
+ }
+ assert(filters.isEmpty)
Review comment:
The same question as below: why do you expect `filters` to be empty? Please
add a comment.
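A possible wording for the requested comment, assuming full pushdown of the
predicate is the intent:

    // 'id > 1' is expected to be pushed down into the JDBC scan, so no Filter
    // node should remain in the optimized plan. If the source ever declines
    // the predicate, Spark keeps a post-scan Filter and this assert fails.
    assert(filters.isEmpty)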
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCTableCatalogSuite.scala
##########
@@ -107,6 +111,41 @@ class JDBCTableCatalogSuite extends QueryTest with SharedSparkSession {
}
}
+ test("simple scan") {
+ checkAnswer(sql("SELECT name, id FROM h2.test.people"), Seq(Row("fred", 1), Row("mary", 2)))
Review comment:
I would check corner cases like an empty table, a `*` projection, and so on.
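A sketch of those corner cases; the `empty_table` fixture is hypothetical and
would have to be created in the H2 setup, and the `*` test assumes the column
order of `h2.test.people` is (name, id):

    test("scan an empty table") {
      // hypothetical table, created alongside h2.test.people in the suite setup
      checkAnswer(sql("SELECT name, id FROM h2.test.empty_table"), Seq.empty)
    }

    test("scan with * projection") {
      // assumes the H2 table schema is (name, id)
      checkAnswer(sql("SELECT * FROM h2.test.people"),
        Seq(Row("fred", 1), Row("mary", 2)))
    }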
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/v2/jdbc/JDBCTableCatalogSuite.scala
##########
@@ -107,6 +111,41 @@ class JDBCTableCatalogSuite extends QueryTest with SharedSparkSession {
}
}
+ test("simple scan") {
+ checkAnswer(sql("SELECT name, id FROM h2.test.people"), Seq(Row("fred", 1), Row("mary", 2)))
+ }
+
+ test("scan with filter push-down") {
Review comment:
Just in case: filter pushdown to JDBC isn't gated by a config. There is a
filter-pushdown config for every other built-in datasource:
- spark.sql.parquet.filterPushdown
- spark.sql.orc.filterPushdown
- spark.sql.csv.filterPushdown.enabled
- spark.sql.json.filterPushdown.enabled
- spark.sql.avro.filterPushdown.enabled
Should JDBC be the exception?
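If a JDBC analogue were added, the tests could exercise both sides of it; the
config key below is hypothetical, mirroring the naming of the existing ones:

    // Hypothetical key; with pushdown disabled, the catalyst Filter node
    // should stay in the optimized plan instead of reaching the source.
    withSQLConf("spark.sql.jdbc.filterPushdown.enabled" -> "false") {
      val df = spark.table("h2.test.people").filter("id > 1")
      val filters = df.queryExecution.optimizedPlan.collect { case f: Filter => f }
      assert(filters.nonEmpty)
    }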