This is an automated email from the ASF dual-hosted git repository.

blue pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/iceberg.git


The following commit(s) were added to refs/heads/master by this push:
     new ac70b37  Docs: Fix spelling errors in hive.md (#1990)
ac70b37 is described below

commit ac70b370c7bb1a2626bf4792693f101b32bd7993
Author: RickyMa <[email protected]>
AuthorDate: Tue Dec 29 02:38:45 2020 +0800

    Docs: Fix spelling errors in hive.md (#1990)
---
 site/docs/hive.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/site/docs/hive.md b/site/docs/hive.md
index b7b335d..655b11a 100644
--- a/site/docs/hive.md
+++ b/site/docs/hive.md
@@ -28,10 +28,10 @@ Regardless of the table type, the `HiveIcebergStorageHandler` and supporting cla
 ```sql
 add jar /path/to/iceberg-hive-runtime.jar;
 ```
-There are many others ways to achieve this including adding the jar file to Hive's auxillary classpath (so it is available by default) - please refer to Hive's documentation for more information.
+There are many others ways to achieve this including adding the jar file to Hive's auxiliary classpath (so it is available by default) - please refer to Hive's documentation for more information.
 
 #### Using Hadoop Tables
-Iceberg tables created using `HadoopTables` are stored entirely in a directory in a filesytem like HDFS. 
+Iceberg tables created using `HadoopTables` are stored entirely in a directory in a filesystem like HDFS.
 
 ##### Create an Iceberg table
 The first step is to create an Iceberg table using the Spark/Java/Python API and `HadoopTables`. For the purposes of this documentation we will assume that the table is called `table_a` and that the table location is `hdfs://some_path/table_a`.
@@ -85,7 +85,7 @@ SELECT * from table_b;
 ```
 
 #### Using Hadoop Catalog
-Iceberg tables created using `HadoopCatalog` are stored entirely in a directory in a filesytem like HDFS. 
+Iceberg tables created using `HadoopCatalog` are stored entirely in a directory in a filesystem like HDFS.
 
 ##### Create an Iceberg table
 The first step is to create an Iceberg table using the Spark/Java/Python API and `HadoopCatalog`. For the purposes of this documentation we will assume that the fully qualified table identifier is `database_a.table_c` and that the Hadoop Catalog warehouse location is `hdfs://some_bucket/path_to_hadoop_warehouse`. Iceberg will therefore create the table at the location `hdfs://some_bucket/path_to_hadoop_warehouse/database_a/table_c`.
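
For context, the hive.md section this commit touches describes exposing an existing Iceberg `HadoopTables` table to Hive. A minimal sketch of those steps, using the `table_a` name and `hdfs://some_path/table_a` location assumed in the doc text (the jar path is a placeholder from the doc, not a real artifact location):

```sql
-- Make the Iceberg Hive runtime available in the session
add jar /path/to/iceberg-hive-runtime.jar;

-- Overlay a Hive table on the existing Iceberg table directory,
-- using Iceberg's Hive storage handler class
CREATE EXTERNAL TABLE table_a
STORED BY 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
LOCATION 'hdfs://some_path/table_a';
```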
