PetarVasiljevic-DB commented on code in PR #48625:
URL: https://github.com/apache/spark/pull/48625#discussion_r1822571707


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala:
##########
@@ -382,18 +382,45 @@ private case class PostgresDialect()
       case Types.ARRAY =>
         val tableName = rsmd.getTableName(columnIdx)
         val columnName = rsmd.getColumnName(columnIdx)
-        val query =
-          s"""
-             |SELECT pg_attribute.attndims
-             |FROM pg_attribute
-             |  JOIN pg_class ON pg_attribute.attrelid = pg_class.oid
-             |  JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid
-             |WHERE pg_class.relname = '$tableName' and pg_attribute.attname = '$columnName'
-             |""".stripMargin
+
+        /*
+         Spark does not support different dimensionality per row, therefore we retrieve the
+         dimensionality of a single row from Postgres. This might fail later on, as Postgres
+         allows different dimensions per row for arrays.
+         */
+        val query = s"SELECT array_ndims($columnName) FROM $tableName LIMIT 1"
+        var arrayDimensionalityResolveNeedsFallback = true
+
         try {
           Using.resource(conn.createStatement()) { stmt =>
             Using.resource(stmt.executeQuery(query)) { rs =>
-              if (rs.next()) metadata.putLong("arrayDimension", rs.getLong(1))
+              if (rs.next()) {

Review Comment:
   Well, rs.next() returns false if the result set is empty, so we can just reuse it. I have updated the PR.
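
   For reference, here is a minimal, self-contained sketch of the pattern being discussed, in the spirit of the diff above. The helper name resolveArrayDimension and the fallback default of 1 are assumptions for illustration, not the PR's actual API; the real code lives in PostgresDialect and records the value with metadata.putLong("arrayDimension", ...).

   ```scala
   import java.sql.Connection
   import scala.util.Using

   // Hypothetical helper (not the PR's actual method): probe one row for the
   // array dimensionality of a column. rs.next() returns false on an empty
   // result set, so the same call that advances the cursor also serves as the
   // emptiness check that triggers the fallback.
   def resolveArrayDimension(
       conn: Connection,
       tableName: String,
       columnName: String,
       fallbackDimension: Long = 1L): Long = {
     // Identifiers are interpolated unquoted here, mirroring the diff;
     // production code would validate or quote them.
     val query = s"SELECT array_ndims($columnName) FROM $tableName LIMIT 1"
     Using.resource(conn.createStatement()) { stmt =>
       Using.resource(stmt.executeQuery(query)) { rs =>
         if (rs.next()) rs.getLong(1) else fallbackDimension
       }
     }
   }
   ```

   One caveat: if the probed array is SQL NULL, array_ndims returns NULL and getLong yields 0, so a caller would also need rs.wasNull() handling to treat that case as unresolved.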


