xushiyan commented on a change in pull request #4503:
URL: https://github.com/apache/hudi/pull/4503#discussion_r841202661



##########
File path: rfc/rfc-34/rfc-34.md
##########
@@ -0,0 +1,165 @@
+# Hudi BigQuery Integration
+
+## Abstract
+
+BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time. BigQuery currently [doesn't support](https://cloud.google.com/bigquery/external-data-cloud-storage) the Apache Hudi file format, but it does support the Parquet file format. The proposal is to implement a BigQuerySync, similar to HiveSync, to sync the Hudi table as a BigQuery external Parquet table, so that users can query Hudi tables using BigQuery. Uber is already syncing some of its Hudi tables to a BigQuery data mart; this feature will help them write, sync, and query.
+
+## Background
+
+Hudi table types define how data is indexed and laid out on the DFS, and how the write primitives and timeline activities are implemented on top of that organization (i.e. how data is written). In turn, query types define how the underlying data is exposed to queries (i.e. how data is read).
+
+Hudi supports the following table types:
+
+* [Copy On Write](https://hudi.apache.org/docs/table_types#copy-on-write-table): Stores data exclusively in columnar file formats (e.g. parquet). Updates simply version & rewrite the files by performing a synchronous merge during write.
+* [Merge On Read](https://hudi.apache.org/docs/table_types#merge-on-read-table): Stores data using a combination of columnar (e.g. parquet) and row-based (e.g. avro) file formats. Updates are logged to delta files & later compacted to produce new versions of columnar files synchronously or asynchronously.
+
+Hudi maintains multiple versions of the Parquet files and tracks the latest version using Hudi metadata (CoW). Since BigQuery doesn't support Hudi yet, when you sync Hudi's parquet files to BigQuery and query them without Hudi's metadata layer, the query reads all versions of the parquet files, which can produce duplicate rows.
+
+To avoid the above scenario, this proposal is to implement a BigQuery sync tool that uses the Hudi metadata to identify the latest files and exposes only the latest version of the parquet files to the BigQuery external table, so that users can query Hudi tables without any duplicate records.
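
To make the selection rule concrete, here is a minimal Python sketch (not Hudi's actual Java API; the commit-tuple layout and function name are illustrative) of how a sync tool could pick the single latest parquet file per file group from commit metadata:

```python
def latest_base_files(commits):
    """Pick the newest parquet file per file group.

    `commits` is a list of (commit_time, file_group_id, file_path) tuples,
    an illustrative stand-in for Hudi commit metadata. Only the file written
    by the most recent commit of each file group survives into the manifest.
    """
    latest = {}  # file_group_id -> (commit_time, file_path)
    for commit_time, group_id, path in commits:
        if group_id not in latest or commit_time > latest[group_id][0]:
            latest[group_id] = (commit_time, path)
    return sorted(path for _, path in latest.values())
```

For example, if a file group has base files from two commits, only the file written by the newer commit makes it into the manifest, so BigQuery never sees the superseded version.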
+
+## Implementation
+
+This new feature will implement the [AbstractSyncTool](https://github.com/apache/hudi/blob/master/hudi-sync/hudi-sync-common/src/main/java/org/apache/hudi/sync/common/AbstractSyncTool.java), similar to the [HiveSyncTool](https://github.com/apache/hudi/blob/master/hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncTool.java), in a class named BigQuerySyncTool with sync methods for CoW tables. The sync implementation will identify the latest parquet files for each .commit file and keep this file list synced with the BigQuery manifest table. The Spark datasource & DeltaStreamer can already take a list of such sync classes to keep the manifests synced.
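
The sync flow above can be sketched as follows. This is a hedged Python sketch, not Hudi's real implementation (the real sync tools are Java classes extending `AbstractSyncTool`); `storage`, `bq_client`, and all method names on them are hypothetical stand-in interfaces:

```python
class BigQuerySyncToolSketch:
    """Illustrative sync loop: find commits that are newer than the last
    synced one, recompute the latest-base-file manifest, and push it to a
    BigQuery-side manifest table. All collaborator APIs are stand-ins."""

    def __init__(self, storage, bq_client, table_path):
        self.storage = storage          # stand-in for timeline/file listing
        self.bq = bq_client             # stand-in for a BigQuery client
        self.table_path = table_path
        self.last_synced_commit = None

    def sync_hoodie_table(self):
        # 1. Read completed commits from the timeline (oldest to newest).
        commits = self.storage.list_commits(self.table_path)
        pending = [c for c in commits
                   if self.last_synced_commit is None or c > self.last_synced_commit]
        if not pending:
            return False  # nothing new to sync
        # 2. Resolve the latest base file per file group from commit metadata.
        manifest = self.storage.latest_base_files(self.table_path)
        # 3. Rewrite the manifest table so BigQuery only sees the latest files.
        self.bq.replace_manifest(self.table_path, manifest)
        self.last_synced_commit = commits[-1]
        return True
```

Running the tool after each write (the same hook point Spark datasource / DeltaStreamer use for HiveSyncTool) keeps the manifest table in lockstep with the Hudi timeline.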
+
+### Architecture
+
+![alt_text](big-query-arch.png "Big Query integration architecture.")
+
+To avoid duplicate records on the Hudi CoW table, we need to generate the list of the latest snapshot files and create a BQ table for it, then use that table to filter the duplicate records out of the history table.
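
The dedup step can be illustrated with a small sketch: given the manifest of latest file names, keep only history rows whose source file appears in the manifest. In BigQuery such a join could key on the external table's `_FILE_NAME` pseudo-column; the function below is an illustration of the idea, not the actual implementation:

```python
def filter_to_latest(history_rows, manifest_files):
    """Keep only rows that came from files listed in the manifest.

    `history_rows` are (record, source_file) pairs, mimicking a query over
    the external parquet table where each row carries its source file name.
    Rows from superseded file versions are dropped.
    """
    manifest = set(manifest_files)
    return [rec for rec, src in history_rows if src in manifest]
```

With two versions of the same record living in two parquet files, only the row from the file in the manifest survives, which is exactly the duplicate-free view users should query.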
+
+### Steps to create Hudi table on BigQuery
+
+1. Let's say you have Hudi table data on Google Cloud Storage (GCS).
+
+ ```
+CREATE TABLE dwh.bq_demo_partitioned_cow (
+  id bigint, 
+  name string,
+  price double,
+  ts bigint,
+  dt string
+) 
+using hudi 
+partitioned by (dt)
+options (
+  type = 'cow',
+  primaryKey = 'id',
+  preCombineField = 'ts',
+  hoodie.datasource.write.drop.partition.columns = 'true'
+)
+location 'gs://hudi_datasets/bq_demo_partitioned_cow/';
+```
+
+BigQuery doesn't accept the partition column in the parquet schema, hence we need to drop the partition columns from the

Review comment:
       This actually prevents deltastreamer from using bq sync, as the drop column config does not work with delta streamer, does it? @vingov



