rdblue commented on a change in pull request #2292:
URL: https://github.com/apache/iceberg/pull/2292#discussion_r586934288



##########
File path: site/docs/nessie.md
##########
@@ -0,0 +1,120 @@
+<!--
+ - Licensed to the Apache Software Foundation (ASF) under one or more
+ - contributor license agreements.  See the NOTICE file distributed with
+ - this work for additional information regarding copyright ownership.
+ - The ASF licenses this file to You under the Apache License, Version 2.0
+ - (the "License"); you may not use this file except in compliance with
+ - the License.  You may obtain a copy of the License at
+ -
+ -   http://www.apache.org/licenses/LICENSE-2.0
+ -
+ - Unless required by applicable law or agreed to in writing, software
+ - distributed under the License is distributed on an "AS IS" BASIS,
+ - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ - See the License for the specific language governing permissions and
+ - limitations under the License.
+ -->
+
+# Iceberg Nessie Integration
+
+Iceberg provides integration with Nessie through the `iceberg-nessie` module.
+This section describes how to use Iceberg with Nessie. Nessie provides several
+key features on top of Iceberg:
+
+* multi-table transactions
+* git-like operations (e.g. branches, tags, commits)
+* Hive-like metastore capabilities
+
+See [Project Nessie](https://projectnessie.org) for more information on Nessie.
+Nessie requires a server to run; see
+[Getting Started](https://projectnessie.org/try/) to start a Nessie server.
+
+## Enabling Nessie Catalog
+
+The `iceberg-nessie` module is bundled with the Spark and Flink runtimes for all
+versions from `0.11.0` onwards. To get started with Nessie and Iceberg, simply add
+the Iceberg runtime to your process, e.g.:
+`spark-sql --packages org.apache.iceberg:iceberg-spark3-runtime:0.11.0`.
+
+## Nessie Catalog
+
+One major feature introduced in release `0.11.0` is the ability to easily interact
+with a [Custom Catalog](../custom-catalog) from Spark and Flink. See
+[Spark Configuration](../spark-configuration#catalog-configuration) and
+[Flink Configuration](../flink#custom-catalog) for instructions on adding a custom
+catalog to Iceberg.
+
+To use the Nessie Catalog, the following properties are required:
+
+* `warehouse`: like most other catalogs, the warehouse property is a file path to
+  where this catalog should store tables.
+* `uri`: the Nessie server base URI, e.g. `http://localhost:19120/api/v1`.
+* `ref` (optional): the Nessie branch or tag you want to work in.
+
+To run directly in Java this looks like:
+
+``` java
+Map<String, String> options = new HashMap<>();
+options.put("warehouse", "/path/to/warehouse");
+options.put("ref", "main");
+options.put("uri", "https://localhost:19120/api/v1";);
+Catalog nessieCatalog = 
CatalogUtil.loadCatalog("org.apache.iceberg.nessie.NessieCatalog", "nessie", 
hadoopConfig, options);
+```
+
+and in Spark:
+
+``` java
+conf.set("spark.sql.catalog.nessie.warehouse", "/path/to/warehouse");
+conf.set("spark.sql.catalog.nessie.uri", "http://localhost:19120/api/v1";)
+conf.set("spark.sql.catalog.nessie.ref", "main")
+conf.set("spark.sql.catalog.nessie.catalog-impl", 
"org.apache.iceberg.nessie.NessieCatalog")
+conf.set("spark.sql.catalog.nessie", "org.apache.iceberg.spark.SparkCatalog")
+```
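+
+With the catalog configured as above, tables are addressed through the `nessie`
+catalog name in Spark SQL. A minimal sketch (the database and table names here are
+illustrative, not part of the Nessie setup):
+
+``` sql
+-- create and query an Iceberg table tracked by the Nessie catalog
+CREATE TABLE nessie.db.events (id bigint, ts timestamp) USING iceberg;
+INSERT INTO nessie.db.events VALUES (1, current_timestamp());
+SELECT * FROM nessie.db.events;
+```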
+
+Once you have a Nessie Catalog you have access to your entire Nessie repository.
+You can then create, delete, and merge branches, and perform commits on branches.
+Each Iceberg table in a Nessie Catalog is identified by an arbitrary-length
+namespace and table name (e.g. `data.base.name.table`). These namespaces are
+implicit and don't need to be created separately. Any transaction on a
+Nessie-enabled Iceberg table is a single commit in Nessie. Nessie commits can
+encompass an arbitrary number of actions on an arbitrary number of tables; in
+Iceberg, however, this is limited to the set of single-table transactions
+currently available.
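+
+Because namespaces are implicit, a table can be created under a multi-level
+namespace without creating the namespace first. A sketch, assuming the Spark
+catalog configuration shown earlier (the names are illustrative):
+
+``` sql
+-- the namespace `data.base.name` is created implicitly along with the table
+CREATE TABLE nessie.data.base.name.table (id bigint) USING iceberg;
+```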
+
+Further operations, such as merges and viewing the commit log or diffs, are
+performed by interacting directly with the `NessieClient` in Java, or by using the
+Python client or CLI. See [Nessie CLI](https://projectnessie.org/tools/cli/) for
+more details on the CLI and the [Spark Guide](https://projectnessie.org/tools/spark/)
+for a more complete description of Nessie functionality.
+
+## Nessie and Iceberg
+
+In most cases Nessie acts just like any other Iceberg catalog: it provides a
+logical organization of a set of tables and atomicity for transactions. However,
+using Nessie opens up other interesting possibilities.
+
+### Loosely coupled multi-table transactions
+
+By creating a branch and performing a set of operations on that branch you can
+approximate a multi-table transaction. A sequence of commits can be performed on
+the newly created branch and then merged back into the main branch atomically.
+This gives the appearance of a series of connected changes being exposed on the
+main branch atomically. Downstream consumers will see this as a multi-table
+transaction on the database. Unlike a traditional database transaction, this
+branch-based transaction doesn't have a single owner. For example, a series of
+globally distributed Spark applications can simultaneously update the branch, and
+the merge back to the main branch will expose all of these changes atomically.

Review comment:
       I think this section should be a bit more clear about what is supported by
Nessie. Claiming support for multi-table transactions seems misleading to me,
because it's really the equivalent of a "fast-forward" operation and not what I
would think of normally as a transaction. It would help to have a section that
makes that clear in terms people familiar with `git` would understand.
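
The "fast-forward" behaviour described here can be sketched with plain `git`:
merging a branch whose history is strictly ahead of `main` simply advances `main`
to the branch tip, with no merge commit (repository and branch names below are
illustrative, not Nessie commands):

``` shell
# Sketch of fast-forward semantics using plain git (illustrative only).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git checkout -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
# work happens on a side branch, analogous to a Nessie branch
git checkout -q -b etl
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "update table one"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "update table two"
# merging back is a fast-forward: main moves to the etl tip, no merge commit
git checkout -q main
git merge -q --ff-only etl
```

The `--ff-only` flag makes the analogy explicit: the merge succeeds only if `main`
has not moved since the branch was created, i.e. the branch's changes are applied
as one atomic pointer update rather than reconciled as a true merge.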




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


