wypoon commented on a change in pull request #4255:
URL: https://github.com/apache/iceberg/pull/4255#discussion_r828390997



##########
File path: docs/spark/spark-queries.md
##########
@@ -168,7 +168,9 @@ To inspect a table's history, snapshots, and other metadata, Iceberg supports me
 Metadata tables are identified by adding the metadata table name after the original table name. For example, history for `db.table` is read using `db.table.history`.
 
 {{< hint info >}}
-As of Spark 3.0, the format of the table name for inspection (`catalog.database.table.metadata`) doesn't work with Spark's default catalog (`spark_catalog`). If you've replaced the default catalog, you may want to use `DataFrameReader` API to inspect the table.
+For Spark 2.4, use the `DataFrameReader` API to [inspect tables](#inspecting-with-dataframes).
+
+For Spark 3, prior to 3.2, the Spark [session catalog](../spark-configuration#replacing-the-session-catalog) does not support table names with multipart identifiers such as `catalog.database.table.metadata`. As a workaround, configure a catalog that uses `org.apache.iceberg.spark.SparkCatalog`, or use the Spark `DataFrameReader` API.

Review comment:
       OK, I changed it to simply "configure an `org.apache.iceberg.spark.SparkCatalog`".
   
       About the link: earlier in the same file there is a link "(../spark-configuration#using-catalogs)", and I followed the same syntax. The links don't work on GitHub, but they should be correct on the Apache Iceberg website.
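
For context, a sketch of what "configure an `org.apache.iceberg.spark.SparkCatalog`" can look like; the catalog name `my_catalog`, the `hadoop` catalog type, and the warehouse path below are illustrative assumptions, not part of this PR:

```scala
import org.apache.spark.sql.SparkSession

// Register a named Iceberg catalog ("my_catalog" is a placeholder) backed by
// org.apache.iceberg.spark.SparkCatalog, so metadata tables can be addressed
// with multipart identifiers such as my_catalog.db.table.history.
val spark = SparkSession.builder()
  .appName("iceberg-metadata-tables")
  .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.my_catalog.type", "hadoop")                      // or "hive"
  .config("spark.sql.catalog.my_catalog.warehouse", "/tmp/iceberg/warehouse") // hadoop catalogs need a warehouse path
  .getOrCreate()

// With the catalog registered, the history metadata table is queryable by name:
spark.sql("SELECT * FROM my_catalog.db.table.history").show(false)
```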



