rdblue commented on a change in pull request #26929: [SPARK-30289][SQL] DSv2's partitioning should not accept nested columns
URL: https://github.com/apache/spark/pull/26929#discussion_r367180460
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/connector/catalog/CatalogV2Implicits.scala
 ##########
 @@ -48,23 +48,27 @@ private[sql] object CatalogV2Implicits {
   }
 
   implicit class TransformHelper(transforms: Seq[Transform]) {
-    def asPartitionColumns: Seq[String] = {
+    def validatePartitionColumns(): Unit = {
      val (idTransforms, nonIdTransforms) = transforms.partition(_.isInstanceOf[IdentityTransform])
 
       if (nonIdTransforms.nonEmpty) {
        throw new AnalysisException("Transforms cannot be converted to partition columns: " +
-            nonIdTransforms.map(_.describe).mkString(", "))
+          nonIdTransforms.map(_.describe).mkString(", "))
 
 Review comment:
   That table property is used to make the test table implementation accept configuration that it doesn't support when writing. It's used to test that the table was passed the right `Transform`, even though the `InMemoryTable` only supports identity transforms.
   
   `ResolveSessionCatalog` should convert bucket `Transform`s to and from `BucketSpec`.
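
   A minimal sketch of that conversion direction, using simplified stand-in types (`SimpleBucketTransform`, `SimpleBucketSpec`, and `BucketConversions` are hypothetical names for illustration, not Spark's actual classes or the logic in `ResolveSessionCatalog`):

   ```scala
   // Simplified stand-ins for a bucket Transform and a BucketSpec.
   case class SimpleBucketTransform(numBuckets: Int, columns: Seq[String])
   case class SimpleBucketSpec(
       numBuckets: Int,
       bucketColumnNames: Seq[String],
       sortColumnNames: Seq[String])

   object BucketConversions {
     // Transform -> BucketSpec: only top-level (non-nested) column references
     // are accepted here, since BucketSpec column names are plain strings.
     def toBucketSpec(t: SimpleBucketTransform): SimpleBucketSpec = {
       require(t.columns.forall(c => !c.contains(".")),
         s"Nested columns are not allowed: ${t.columns.mkString(", ")}")
       SimpleBucketSpec(t.numBuckets, t.columns, sortColumnNames = Nil)
     }

     // BucketSpec -> Transform, for the v2 path.
     def toTransform(spec: SimpleBucketSpec): SimpleBucketTransform =
       SimpleBucketTransform(spec.numBuckets, spec.bucketColumnNames)
   }

   // Round trip: a simple bucket definition survives both conversions.
   val spec = SimpleBucketSpec(8, Seq("id"), Nil)
   assert(BucketConversions.toBucketSpec(BucketConversions.toTransform(spec)) == spec)
   ```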
