bowenli86 commented on a change in pull request #9308: 
[FLINK-13517][docs][hive] Restructure Hive Catalog documentation
URL: https://github.com/apache/flink/pull/9308#discussion_r311787151
 
 

 ##########
 File path: docs/dev/table/catalog.md
 ##########
 @@ -23,344 +23,151 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Catalogs provide metadata, such as names, schemas, statistics of tables, and 
information about how to access data stored in a database or other external 
systems. Once a catalog is registered within a `TableEnvironment`, all its 
meta-objects are accessible from the Table API and SQL queries.
+Catalogs provide metadata, such as names, schemas, table statistics, and 
information needed to access data stored in a database or other external 
systems.
 
+One of the most crucial aspects of data processing is managing metadata. It 
may be transient metadata, like temporary tables or UDFs registered against 
the table environment, or permanent metadata, like that stored in a Hive 
Metastore. Catalogs provide a unified API for managing metadata and making it 
accessible from the Table API and SQL queries. 
 
 * This will be replaced by the TOC
 {:toc}
 
+## Catalog Types
 
-Catalog Interface
------------------
+### GenericInMemoryCatalog
 
-APIs are defined in the `Catalog` interface. The interface defines a set of 
APIs to read and write catalog meta-objects such as databases, tables, 
partitions, views, and functions.
+Flink sessions always have a built-in `GenericInMemoryCatalog` named 
`default_catalog`, which has a built-in default database named 
`default_database`. All temporary metadata, such as tables defined using 
`TableEnvironment#registerTable`, is registered to this catalog. 
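+
+For example, a minimal sketch of registering a temporary table with
+`TableEnvironment#registerTable`; the source table `existingTable` and the
+name `myTempTable` below are hypothetical:
+
+```java
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableEnvironment;
+
+TableEnvironment tableEnv = TableEnvironment.create(
+    EnvironmentSettings.newInstance().build());
+
+// "existingTable" is assumed to be registered already.
+Table table = tableEnv.sqlQuery("SELECT * FROM existingTable");
+
+// The table is now accessible as
+// default_catalog.default_database.myTempTable.
+tableEnv.registerTable("myTempTable", table);
+```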
 
+### HiveCatalog
 
-Catalog Meta-Objects Naming Structure
--------------------------------------
+The `HiveCatalog` serves two purposes: as persistent storage for pure Flink 
metadata, and as an interface for reading and writing Hive tables. Flink's 
[Hive documentation]({{ site.baseurl }}/dev/table/hive/index.html) provides 
full details on setting up the catalog and interfacing with an existing Hive 
installation.
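+
+A minimal sketch of creating a `HiveCatalog` in the Table API; the catalog
+name, default database, and conf directory below are placeholder values:
+
+```java
+import org.apache.flink.table.catalog.hive.HiveCatalog;
+
+HiveCatalog hiveCatalog = new HiveCatalog(
+    "myhive",          // catalog name
+    "default",         // default database
+    "/opt/hive-conf",  // directory containing hive-site.xml (assumed path)
+    "2.3.4");          // one of the supported Hive version strings
+```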
 
-Flink's catalogs use a strict two-level structure, that is, catalogs contain 
databases, and databases contain meta-objects. Thus, the full name of a 
meta-object is always structured as `catalogName`.`databaseName`.`objectName`.
+### User-Defined Catalog
 
-Each `TableEnvironment` has a `CatalogManager` to manage all registered 
catalogs. To ease access to meta-objects, `CatalogManager` has the concepts of 
a current catalog and a current database. By setting the current catalog and 
current database, users can use just the meta-object's name in their queries, 
which greatly simplifies the user experience.
+Catalogs are pluggable and users can develop custom catalogs by implementing 
the `Catalog` interface, which defines a set of APIs for reading and writing 
catalog meta-objects such as databases, tables, partitions, views, and 
functions.
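+
+As a minimal, purely illustrative sketch, a custom catalog could extend the
+built-in `GenericInMemoryCatalog` (itself a `Catalog` implementation) rather
+than implementing every method of `Catalog` from scratch; the class name and
+overrides below are hypothetical:
+
+```java
+import org.apache.flink.table.catalog.GenericInMemoryCatalog;
+
+public class MyCustomCatalog extends GenericInMemoryCatalog {
+
+    public MyCustomCatalog(String name, String defaultDatabase) {
+        super(name, defaultDatabase);
+    }
+
+    @Override
+    public void open() {
+        // Connect to the external metadata store here.
+        super.open();
+    }
+
+    @Override
+    public void close() {
+        // Release any open connections here.
+        super.close();
+    }
+}
+```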
 
-For example, a previous query as
+## Catalog API
 
-```sql
-select * from mycatalog.mydb.myTable;
-```
+### Registering a Catalog
 
-can be shortened to
+Users can register additional catalogs into an existing Flink session.
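+
+A sketch in the Table API, reusing the hypothetical `hiveCatalog` instance and
+`tableEnv` from the examples above:
+
+```java
+tableEnv.registerCatalog("myhive", hiveCatalog);
+
+// Optionally make it the current catalog and database, so that
+// meta-objects can be referenced by object name alone.
+tableEnv.useCatalog("myhive");
+tableEnv.useDatabase("default");
+```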
 
-```sql
-select * from myTable;
-```
 
-To query tables in a different database under the current catalog, users 
don't need to specify the catalog name. In our example, it would be
-
-```sql
-select * from mydb2.myTable2;
-```
-
-`CatalogManager` always has a built-in `GenericInMemoryCatalog` named 
`default_catalog`, which has a built-in default database named 
`default_database`. If no other catalog and database are explicitly set, they 
will be the current catalog and current database by default. All temporary 
meta-objects, such as those defined by `TableEnvironment#registerTable`, are 
registered to this catalog. 
-
-Users can set the current catalog and database via 
`TableEnvironment.useCatalog(...)` and 
-`TableEnvironment.useDatabase(...)` in the Table API, or `USE CATALOG ...` 
and `USE ...` in the Flink SQL Client.
-
-
-Catalog Types
--------------
-
-## GenericInMemoryCatalog
-
-The default catalog; all meta-objects in this catalog are stored in memory 
and will be lost once the session shuts down.
-
-Its config entry value in the SQL CLI yaml file is "generic_in_memory".
-
-## HiveCatalog
-
-Flink's `HiveCatalog` can read and write both Flink and Hive meta-objects 
using Hive Metastore as persistent storage.
-
-Its config entry value in the SQL CLI yaml file is "hive".
-
-### Persist Flink meta-objects
-
-Historically, Flink meta-objects were only stored in memory on a per-session 
basis. That means users had to recreate all of their meta-objects every time 
they started a new session.
-
-To maintain meta-objects across sessions, users can choose to use 
`HiveCatalog` to persist all of their Flink streaming (unbounded-stream) and 
batch (bounded-stream) meta-objects. Because Hive Metastore is only used for 
storage, Hive itself may not understand Flink's meta-objects stored in the 
metastore.
-
-### Integrate Flink with Hive metadata
-
-The ultimate goals of integrating Flink with Hive metadata are:
-
-1. Existing meta-objects, like tables, views, and functions, created by Hive 
or other Hive-compatible applications can be used by Flink.
-
-2. Meta-objects created by `HiveCatalog` can be written back to the Hive 
metastore so that Hive and other Hive-compatible applications can consume 
them.
-
-### Supported Hive Versions
-
-Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
-
-The Hive version is explicitly specified as a string, either by passing it 
to the constructor when creating `HiveCatalog` instances directly in the Table 
API, or by specifying it in the yaml config file in the SQL CLI. The supported 
Hive version strings are `2.3.4` and `1.2.1`.
-
-### Case Insensitive to Meta-Object Names
-
-Note that Hive Metastore stores meta-object names in lower case. Thus, unlike 
`GenericInMemoryCatalog`, `HiveCatalog` is case-insensitive to meta-object 
names, and users need to be cautious about that.
-
-### Dependencies
-
-To use `HiveCatalog`, users need to include the following dependency jars.
-
-For Hive 2.3.4, users need:
-
-```
-// Hive dependencies
-
-- hive-exec-2.3.4.jar // contains hive-metastore-2.3.4
-
-
-// Hadoop dependencies
-- flink-shaded-hadoop-2-uber-2.7.5-1.8.0.jar
-- flink-hadoop-compatibility-{{site.version}}.jar
-
-```
-
-For Hive 1.2.1, users need:
-
-```
-// Hive dependencies
-
-- hive-metastore-1.2.1.jar
-- hive-exec-1.2.1.jar
-- libfb303-0.9.3.jar
-
-
-// Hadoop dependencies
-- flink-shaded-hadoop-2-uber-2.6.5-1.8.0.jar
-- flink-hadoop-compatibility-{{site.version}}.jar
-
-```
-
-If you don't have the Hive dependencies at hand, they can be found at 
[mvnrepository.com](https://mvnrepository.com):
-
-- [hive-exec](https://mvnrepository.com/artifact/org.apache.hive/hive-exec)
-- 
[hive-metastore](https://mvnrepository.com/artifact/org.apache.hive/hive-metastore)
-
-Note that users need to ensure compatibility between their Hive version and 
Hadoop version. Otherwise, there may be potential problems; for example, Hive 
2.3.4 is compiled against Hadoop 2.7.2, so you may run into problems when 
using Hive 2.3.4 with Hadoop 2.4.
-
-
-### Data Type Mapping
-
-For both Flink and Hive tables, `HiveCatalog` stores table schemas by mapping 
them to Hive table schemas with Hive data types. Types are dynamically mapped 
back on read.
-
-Currently `HiveCatalog` supports most Flink data types with the following 
mapping:
-
-|  Flink Data Type  |  Hive Data Type  |
-|---|---|
-| CHAR(p)       |  CHAR(p)* |
-| VARCHAR(p)    |  VARCHAR(p)** |
-| STRING        |  STRING |
-| BOOLEAN       |  BOOLEAN |
-| TINYINT       |  TINYINT |
-| SMALLINT      |  SMALLINT |
-| INT           |  INT |
-| BIGINT        |  BIGINT |
-| FLOAT         |  FLOAT |
-| DOUBLE        |  DOUBLE |
-| DECIMAL(p, s) |  DECIMAL(p, s) |
-| DATE          |  DATE |
-| TIMESTAMP_WITHOUT_TIME_ZONE |  TIMESTAMP |
-| TIMESTAMP_WITH_TIME_ZONE |  N/A |
-| TIMESTAMP_WITH_LOCAL_TIME_ZONE |  N/A |
-| INTERVAL      |   N/A*** |
-| BINARY        |   N/A |
-| VARBINARY(p)  |   N/A |
-| BYTES         |   BINARY |
-| ARRAY\<E>     |  ARRAY\<E> |
-| MAP<K, V>     |  MAP<K, V> ****|
-| ROW           |  STRUCT |
-| MULTISET      |  N/A |
-
-
-Note that we only cover the most commonly used data types for now.
-
-The following limitations in Hive's data types impact the mapping between 
Flink and Hive:
-
-\* maximum length is 255
-
-\** maximum length is 65535
-
-\*** The `INTERVAL` type cannot be mapped to the Hive `INTERVAL` type for now.
-
-\**** Hive's map key type only allows primitive types, while Flink's map key 
can be any data type.
-
-## User-configured Catalog
-
-Catalogs are pluggable. Users can develop custom catalogs by implementing 
the `Catalog` interface, which defines a set of APIs for reading and writing 
catalog meta-objects such as databases, tables, partitions, views, and 
functions.
-
-Catalog Registration
---------------------
-
-## Register Catalog in Table API
-
-To register a catalog in the Table API, users can create a catalog instance 
and register it through `TableEnvironment.registerCatalog(name, catalog)`.
-
-## Register Catalog in SQL CLI
 
 Review comment:
  we should keep this section somewhere; its last two paragraphs are very 
important information

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
