rdblue commented on a change in pull request #1095:
URL: https://github.com/apache/iceberg/pull/1095#discussion_r438326656



##########
File path: site/docs/api-quickstart.md
##########
@@ -48,6 +48,36 @@ logsDF.write
 
 The logs [schema](#create-a-schema) and [partition spec](#create-a-partition-spec) are created below.
 
+### Using a Hadoop catalog
+
+The Hadoop catalog doesn't need to connect to a Hive MetaStore. To create a Hadoop catalog:
+
+```scala
+import org.apache.hadoop.conf.Configuration
+import org.apache.iceberg.hadoop.HadoopCatalog
+
+val conf = new Configuration()
+val warehousePath = "hdfs://warehouse_path"
+val catalog = new HadoopCatalog(conf, warehousePath)
+```
+
+Like the Hive catalog, the Hadoop catalog implements the `Catalog` interface, so it also provides methods for working with tables, such as `createTable`, `loadTable`, `renameTable`, and `dropTable`, as the examples below show.
+
+This example creates a table with the Hadoop catalog:
+
+```scala
+import org.apache.iceberg.catalog.TableIdentifier
+
+val name = TableIdentifier.of("logging", "logs")
+val table = catalog.createTable(name, schema, spec)
+
+// write into the new logs table with Spark 2.4
+logsDF.write
+    .format("iceberg")
+    .mode("append")
+    .save("hdfs://warehouse_path/logging/logs")
+```

Review comment:
       This is strange because it actually loads the table using `HadoopTables` 
and not the `HadoopCatalog`. The URI passed here must match the one created by 
the catalog.
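
For example, the write could target the location the catalog assigned (a rough sketch; `Table.location()` returns the table's base URI, so the path-based load through `HadoopTables` resolves to the same table):

```scala
val name = TableIdentifier.of("logging", "logs")
val table = catalog.createTable(name, schema, spec)

// save() with a path loads the table through HadoopTables; using the
// location chosen by the HadoopCatalog keeps the two consistent
logsDF.write
    .format("iceberg")
    .mode("append")
    .save(table.location())
```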



