RussellSpitzer commented on issue #6290:
URL: https://github.com/apache/iceberg/issues/6290#issuecomment-1330830688

   Via SQL you can see the current partitioning with `DESCRIBE TABLE`. Again, this is really just a description of how new files will be added to the table; the old partitioning still works even when the current spec changes (see the sketch after the DESCRIBE output below).
   
   ```scala
   scala> spark.sql("describe table parttruncate").show
   +--------------+--------------+-------+
   |      col_name|     data_type|comment|
   +--------------+--------------+-------+
   |             x|           int|       |
   |             y|           int|       |
   |              |              |       |
   |# Partitioning|              |       |
   |        Part 0|truncate(x, 5)|       |
   +--------------+--------------+-------+
   ```
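   
   As a minimal sketch of that point (assuming the Iceberg Spark SQL extensions are enabled; the `bucket(16, y)` replacement field is just a hypothetical choice), evolving the spec is a metadata-only change, so existing files keep their `truncate(x, 5)` partitioning while new writes use the new field:
   
   ```scala
   // Replace the current partition field; no data files are rewritten
   spark.sql("ALTER TABLE parttruncate REPLACE PARTITION FIELD x_trunc WITH bucket(16, y)")
   
   // DESCRIBE now reports the new spec, which only governs how new files are written
   spark.sql("describe table parttruncate").show
   ```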
   
   But if I were doing something programmatically I would probably use
   ```scala
   scala> import org.apache.iceberg.spark.Spark3Util
   import org.apache.iceberg.spark.Spark3Util
   
   scala> Spark3Util.loadIcebergTable(spark, "parttruncate").spec
   res17: org.apache.iceberg.PartitionSpec =
   [
     1000: x_trunc: truncate[5](1)
   ]
   ```
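   
   If you also need the older specs after the table has been evolved, the loaded `Table` exposes them as well; a small sketch using the same table name:
   
   ```scala
   scala> val table = Spark3Util.loadIcebergTable(spark, "parttruncate")
   
   // Current spec only
   scala> table.spec().fields()
   
   // Every spec the table has ever had, keyed by spec ID (old specs stay readable)
   scala> table.specs()
   ```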
