rdblue commented on a change in pull request #2832:
URL: https://github.com/apache/iceberg/pull/2832#discussion_r672659611



##########
File path: site/docs/index.md
##########
@@ -20,7 +20,7 @@
 # ![Iceberg](img/Iceberg-logo.png)
 
 
-**Apache Iceberg is an open table format for huge analytic datasets.** Iceberg adds tables to Trino and Spark that use a high-performance format that works just like a SQL table.
+**Apache Iceberg is an open table format for huge analytic datasets.** Iceberg adds tables to compute engines including Spark, Trino, PrestoDB, Flink and Hive using a high-performance format that works just like a SQL table.

Review comment:
       I think that some version of this update would be good. Mentioning support in Flink, Hive, and PrestoDB in addition to Spark is a good idea and doesn't affect the Trino documentation.

##########
File path: site/mkdocs.yml
##########
@@ -72,7 +72,9 @@ nav:
     - Maintenance Procedures: spark-procedures.md
     - Structured Streaming: spark-structured-streaming.md
     - Time Travel: spark-queries/#time-travel
-  - Trino: https://trino.io/docs/current/connector/iceberg.html
+  - Presto:
+    - Trino (PrestoSQL): https://trino.io/docs/current/connector/iceberg.html
+    - PrestoDB: https://prestodb.io/docs/current/connector/iceberg.html

Review comment:
       Hm, I'm wondering how we want to handle this. There should be more and more engines with Iceberg support and we don't want to keep adding tabs for each one. Maybe there's room for PrestoDB? Or maybe we should fix the problem now and have a page for documentation that is hosted elsewhere.
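       One possible shape for that "page for docs hosted elsewhere" idea (a sketch only — the page name and grouping here are hypothetical, nothing agreed in this PR):

```yaml
nav:
  # ... existing Spark entries ...
  # One nav entry pointing at a single page that lists externally hosted
  # engine docs, instead of adding a tab per engine. The page name
  # "external-engines.md" is made up for illustration.
  - Other Engines: external-engines.md
```

       The `external-engines.md` page could then carry the Trino and PrestoDB connector links (and future engines) as a plain list, so adding an engine is a one-line doc edit rather than a nav change.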




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


