[GitHub] [spark] maropu commented on a change in pull request #30101: [SPARK-33193][SQL][TEST] Hive ThriftServer JDBC Database MetaData API Behavior Auditing
maropu commented on a change in pull request #30101:
URL: https://github.com/apache/spark/pull/30101#discussion_r509031472

File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkMetadataOperationSuite.scala

```diff
@@ -396,4 +400,187 @@ class SparkMetadataOperationSuite extends HiveThriftJdbcTest {
       }
     }
   }
+
+  test("Hive ThriftServer JDBC Database MetaData API Auditing") {
+    withJdbcStatement() { statement =>
+      val metaData = statement.getConnection.getMetaData
+      Seq(
+        () => metaData.allProceduresAreCallable(),
+        () => metaData.getURL,
+        () => metaData.getUserName,
+        () => metaData.isReadOnly,
+        () => metaData.nullsAreSortedHigh,
+        () => metaData.nullsAreSortedLow,
+        () => metaData.nullsAreSortedAtStart(),
+        () => metaData.nullsAreSortedAtEnd(),
+        () => metaData.usesLocalFiles(),
+        () => metaData.usesLocalFilePerTable(),
+        () => metaData.supportsMixedCaseIdentifiers(),
+        () => metaData.supportsMixedCaseQuotedIdentifiers(),
+        () => metaData.storesUpperCaseIdentifiers(),
+        () => metaData.storesUpperCaseQuotedIdentifiers(),
+        () => metaData.storesLowerCaseIdentifiers(),
+        () => metaData.storesLowerCaseQuotedIdentifiers(),
+        () => metaData.storesMixedCaseIdentifiers(),
+        () => metaData.storesMixedCaseQuotedIdentifiers(),
+        () => metaData.getSQLKeywords,
+        () => metaData.nullPlusNonNullIsNull,
+        () => metaData.supportsConvert,
+        () => metaData.supportsTableCorrelationNames,
+        () => metaData.supportsDifferentTableCorrelationNames,
+        () => metaData.supportsExpressionsInOrderBy(),
+        () => metaData.supportsOrderByUnrelated,
+        () => metaData.supportsGroupByUnrelated,
+        () => metaData.supportsGroupByBeyondSelect,
+        () => metaData.supportsLikeEscapeClause,
+        () => metaData.supportsMultipleTransactions,
+        () => metaData.supportsMinimumSQLGrammar,
+        () => metaData.supportsCoreSQLGrammar,
+        () => metaData.supportsExtendedSQLGrammar,
+        () => metaData.supportsANSI92EntryLevelSQL,
+        () => metaData.supportsANSI92IntermediateSQL,
+        () => metaData.supportsANSI92FullSQL,
+        () => metaData.supportsIntegrityEnhancementFacility,
+        () => metaData.isCatalogAtStart,
+        () => metaData.supportsSubqueriesInComparisons,
+        () => metaData.supportsSubqueriesInExists,
+        () => metaData.supportsSubqueriesInIns,
+        () => metaData.supportsSubqueriesInQuantifieds,
+        // Spark support this, see https://issues.apache.org/jira/browse/SPARK-18455
```

Review comment: If so, could you write it in the comment like that?

-

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
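The audit test under review wraps each zero-argument `DatabaseMetaData` call in a thunk so every probe can be exercised and its outcome recorded individually. Outside a live ThriftServer, the same method surface can be enumerated reflectively; below is a minimal, self-contained sketch of that idea (the `MetaDataSurface` object is an illustration of mine, not part of the PR):

```scala
import java.sql.DatabaseMetaData

// Hypothetical helper, not part of the PR: lists the zero-argument
// java.sql.DatabaseMetaData methods a JDBC driver is expected to answer —
// the same surface the audit test walks with its Seq of thunks.
object MetaDataSurface {
  def probes: Seq[String] =
    classOf[DatabaseMetaData].getMethods.toSeq
      .filter(_.getParameterCount == 0)
      .map(_.getName)
      .distinct
      .sorted

  def main(args: Array[String]): Unit = {
    // Every probe named in the quoted diff is part of this surface.
    assert(probes.contains("supportsANSI92EntryLevelSQL"))
    assert(probes.contains("nullsAreSortedHigh"))
    println(s"${probes.size} zero-arg metadata probes")
  }
}
```

This enumerates the interface only; the test in the PR goes further by invoking each probe against a real `HiveThriftServer2` connection.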
maropu commented on a change in pull request #30101:
URL: https://github.com/apache/spark/pull/30101#discussion_r509030759

File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkMetadataOperationSuite.scala

```diff
+        // Spark support this, see https://issues.apache.org/jira/browse/SPARK-18455
```

Review comment: Ah, I see...
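The idiom the diff relies on — defer each `metaData` call in a thunk, then observe which probes the driver refuses — can be demonstrated without a live connection by stubbing `DatabaseMetaData` with a dynamic proxy. The `AuditPattern` object and its stub behavior below are assumptions made for the sketch, not Spark's code:

```scala
import java.lang.reflect.{InvocationHandler, Method, Proxy}
import java.sql.{DatabaseMetaData, SQLFeatureNotSupportedException}
import scala.util.Try

object AuditPattern {
  // Stand-in for a live connection's metadata: it answers isReadOnly and
  // rejects every other probe with SQLFeatureNotSupportedException.
  val metaData: DatabaseMetaData = Proxy.newProxyInstance(
    classOf[DatabaseMetaData].getClassLoader,
    Array[Class[_]](classOf[DatabaseMetaData]),
    new InvocationHandler {
      override def invoke(proxy: AnyRef, m: Method, args: Array[AnyRef]): AnyRef =
        m.getName match {
          case "isReadOnly" => java.lang.Boolean.FALSE
          case other       => throw new SQLFeatureNotSupportedException(other)
        }
    }).asInstanceOf[DatabaseMetaData]

  // The audit idiom from the diff: thunk each call, record which ones raise.
  val probes: Seq[(String, () => Any)] = Seq(
    "isReadOnly" -> (() => metaData.isReadOnly),
    "allProceduresAreCallable" -> (() => metaData.allProceduresAreCallable()))

  def unsupported: Seq[String] =
    probes.collect { case (name, call) if Try(call()).isFailure => name }

  def main(args: Array[String]): Unit = {
    assert(unsupported == Seq("allProceduresAreCallable"))
    println(unsupported.mkString(", "))
  }
}
```

Splitting the probe list by expected outcome, as maropu suggests, would let each half assert a uniform result instead of mixing supported and unsupported calls in one `Seq`.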
maropu commented on a change in pull request #30101:
URL: https://github.com/apache/spark/pull/30101#discussion_r508915667

File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkMetadataOperationSuite.scala

```diff
+  test("Hive ThriftServer JDBC Database MetaData API Auditing") {
```

Review comment: How about splitting this test into two parts?
```
test("Hive ThriftServer JDBC Database MetaData API Auditing - supported") {
test("Hive ThriftServer JDBC Database MetaData API Auditing - not supported") {
```

File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkMetadataOperationSuite.scala

```diff
+        () => metaData.supportsMixedCaseIdentifiers(),
```

Review comment: nit: we don't need `()` here: https://github.com/databricks/scala-style-guide#parentheses

File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/SparkMetadataOperationSuite.scala

```diff
+        // Spark support this, see https://issues.apache.org/jira/browse/SPARK-18455
```

Review comment: This comment looks a bit confusing. BTW, could we fix this in a follow-up? If we can, could you file it in JIRA?
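The parentheses nit refers to the Scala convention in the linked style guide: a method with no side effects is declared and invoked without `()`, while a side-effecting method keeps its parentheses. A small illustration (all names here are made up for the example, not from the PR):

```scala
object ParenStyle {
  final class FakeMetaData {
    // Pure accessor: declared without (), so call sites omit them too.
    def supportsMixedCaseIdentifiers: Boolean = false

    // Side-effecting method: keeps its () at declaration and call sites.
    private var closed = false
    def close(): Unit = { closed = true }
    def isClosed: Boolean = closed
  }

  def main(args: Array[String]): Unit = {
    val md = new FakeMetaData
    assert(!md.supportsMixedCaseIdentifiers) // no parentheses at the call site
    md.close()                               // parentheses signal the side effect
    assert(md.isClosed)
  }
}
```

By this rule the thunks in the diff would read `() => metaData.supportsMixedCaseIdentifiers` and so on, matching how the parenless JDBC getters (`getURL`, `isReadOnly`, ...) are already called there.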