szehon-ho commented on code in PR #5978:
URL: https://github.com/apache/iceberg/pull/5978#discussion_r995161655
##########
docs/spark-queries.md:
##########
@@ -78,10 +78,10 @@ val df = spark.table("prod.db.table")
 Iceberg 0.11.0 adds multi-catalog support to `DataFrameReader` in both Spark 3.x and 2.4.
 Paths and table names can be loaded with Spark's `DataFrameReader` interface. How tables are loaded depends on how
-the identifier is specified. When using `spark.read.format("iceberg").path(table)` or `spark.table(table)` the `table`
+the identifier is specified. When using `spark.read.format("iceberg").load(table)` or `spark.table(table)` the `table`
 variable can take a number of forms as listed below:
-* `file:/path/to/table`: loads a HadoopTable at given path
+* `file:///path/to/table`: loads a HadoopTable at given path
Review Comment:
Also changed this part: the Spark parser would not accept `file:/` (at least in my setup), so I added the two extra slashes to make it `file:///`. Let me know if `file:/` works in other setups.
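
Not part of the PR itself, but for context: `file:/path` and `file:///path` are two spellings of the same file URI (the latter just makes the empty authority explicit), so both resolve to the same local path under generic URI parsing. The failure described above therefore comes from Spark's identifier parsing, not from the URI being malformed. A quick sketch with Python's `urllib.parse` (illustrative paths, not taken from the repo):

```python
from urllib.parse import urlparse

# The one-slash form: no authority component at all.
short = urlparse("file:/path/to/table")
# The three-slash form: an explicit empty authority ("//" + "").
full = urlparse("file:///path/to/table")

# Both have the same scheme and the same path.
assert short.scheme == full.scheme == "file"
assert short.path == full.path == "/path/to/table"
```

This is why the doc change is purely about what Spark's parser accepts; either form names the same HadoopTable location.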
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]