huaxingao commented on a change in pull request #34164:
URL: https://github.com/apache/spark/pull/34164#discussion_r725082482



##########
File path: external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/v2/V2JDBCTest.scala
##########
@@ -180,5 +180,18 @@ private[v2] trait V2JDBCTest extends SharedSparkSession with DockerIntegrationFu
       testCreateTableWithProperty(s"$catalogName.new_table")
     }
   }
+
+  def supportsIndex: Boolean = false
+  def testIndex(tbl: String): Unit = {}
+
+  test("SPARK-36913: Test INDEX") {
+    if (supportsIndex) {
+      withTable(s"$catalogName.new_table") {
+        sql(s"CREATE TABLE $catalogName.new_table(col1 INT, col2 INT, col3 
INT," +
+          s" col4 INT, col5 INT)")
+        testIndex(s"$catalogName.new_table")

Review comment:
       Actually, we can put more common code, such as `createIndex` with empty properties, `dropIndex`, and `indexExists`, into this parent class. But since I will need to change the tests anyway in my next PR, is it OK to leave the tests as is for now and refine them in the next PR?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


