rdblue commented on a change in pull request #1095:
URL: https://github.com/apache/iceberg/pull/1095#discussion_r441721593



##########
File path: site/docs/api-quickstart.md
##########
@@ -48,6 +48,36 @@ logsDF.write
 
 The logs [schema](#create-a-schema) and [partition spec](#create-a-partition-spec) are created below.
 
+### Using a Hadoop catalog
+
+A Hadoop catalog doesn't need to connect to a Hive MetaStore, but it can only be used with HDFS or similar file systems that support atomic rename. To create a Hadoop catalog:
+
+```scala
+import org.apache.hadoop.conf.Configuration
+import org.apache.iceberg.hadoop.HadoopCatalog
+
+val conf = new Configuration()
+val warehousePath = "hdfs://host:8020/warehouse_path"
+val catalog = new HadoopCatalog(conf, warehousePath)
+```
+
+Like the Hive catalog, the Hadoop catalog implements the `Catalog` interface, so it also provides methods for working with tables, such as `createTable`, `loadTable`, and `dropTable`.
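+
+For instance, here is a minimal sketch of loading and then dropping a table through the same interface (the identifier matches the `logging.logs` table created in the example below; the variable names and the `purge` flag are illustrative):
+
+```scala
+import org.apache.iceberg.catalog.TableIdentifier
+
+// load a table that already exists in the catalog
+val id = TableIdentifier.of("logging", "logs")
+val existing = catalog.loadTable(id)
+
+// drop the table; purge = true also deletes the table's data files
+catalog.dropTable(id, true)
+```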
+
+This example creates a table with a Hadoop catalog:
+
+```scala
+import org.apache.iceberg.catalog.TableIdentifier
+
+val name = TableIdentifier.of("logging", "logs")
+val table = catalog.createTable(name, schema, spec)
+
+// write into the new logs table with Spark 2.4
+logsDF.write
+    .format("iceberg")
+    .mode("append")
+    .save("hdfs://warehouse_path/logging/logs")
+```

Review comment:
       This location also needs to be fixed to avoid using `warehouse_path` as the authority section of the URI.
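
For illustration, a sketch of the kind of fix being requested (reusing the `hdfs://host:8020/warehouse_path` location from the catalog example above; not the committed change). In `hdfs://warehouse_path/logging/logs`, `warehouse_path` occupies the host/authority slot of the URI, so the corrected call would look like:

```scala
logsDF.write
    .format("iceberg")
    .mode("append")
    // host:8020 fills the URI authority; warehouse_path is part of the path
    .save("hdfs://host:8020/warehouse_path/logging/logs")
```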




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


