JingsongLi commented on a change in pull request #10590:
[FLINK-15205][hive][document] Update hive partition and orc for read_write_hive
URL: https://github.com/apache/flink/pull/10590#discussion_r358597410
##
File path: docs/dev/table/hive/read_write_hive.md
##
@@ -111,16 +111,44 @@ __ __
## Writing To Hive
-Similarly, data can be written into hive using an `INSERT INTO` clause.
+Similarly, data can be written into Hive using an `INSERT` clause. Consider a table named mytable with two columns: name and age.
{% highlight bash %}
-Flink SQL> INSERT INTO mytable (name, value) VALUES ('Tom', 4.72);
+# -- Insert with append mode --
+Flink SQL> INSERT INTO mytable SELECT 'Tom', 25;
+
+# -- Insert with overwrite mode --
+Flink SQL> INSERT OVERWRITE mytable SELECT 'Tom', 25;
+{% endhighlight %}
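The difference between the two modes can be sketched with a toy model (plain Python, not the Flink API; `mytable` here is just an in-memory list used for illustration):

```python
# Toy model of Hive write semantics, for illustration only.
# INSERT INTO appends rows; INSERT OVERWRITE replaces existing data.
mytable = [("Alice", 30)]

def insert_into(table, rows):
    """Append mode: existing rows are kept."""
    return table + rows

def insert_overwrite(table, rows):
    """Overwrite mode: existing rows are replaced."""
    return list(rows)

print(insert_into(mytable, [("Tom", 25)]))       # [('Alice', 30), ('Tom', 25)]
print(insert_overwrite(mytable, [("Tom", 25)]))  # [('Tom', 25)]
```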
+
+Partitioned tables are supported as well. Consider a table named myparttable with four columns: name, age, my_type and my_date, where my_type and my_date are the partition columns.
+
+{% highlight bash %}
+# -- Insert with static partition --
+Flink SQL> INSERT OVERWRITE myparttable PARTITION (my_type='type_1', my_date='2019-08-08') SELECT 'Tom', 25;
+
+# -- Insert with dynamic partition --
+Flink SQL> INSERT OVERWRITE myparttable SELECT 'Tom', 25, 'type_1', '2019-08-08';
+
+# -- Insert with static(my_type) and dynamic(my_date) partition --
+Flink SQL> INSERT OVERWRITE myparttable PARTITION (my_type='type_1') SELECT 'Tom', 25, '2019-08-08';
{% endhighlight %}
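How the target partition is resolved in the mixed static/dynamic case can be sketched as follows (plain Python, not the Flink API; the idea is that static values come from the `PARTITION` clause, while the remaining partition columns are filled, in order, from the trailing columns of the `SELECT`):

```python
# Toy model of static/dynamic partition resolution, for illustration only.
def resolve_partition(partition_columns, static_spec, trailing_values):
    """Combine static PARTITION(...) values with dynamic values taken,
    in order, from the trailing columns of the SELECT."""
    values = iter(trailing_values)
    return {col: static_spec[col] if col in static_spec else next(values)
            for col in partition_columns}

# INSERT OVERWRITE myparttable PARTITION (my_type='type_1')
#     SELECT 'Tom', 25, '2019-08-08';
part = resolve_partition(
    ["my_type", "my_date"],        # declared partition columns
    {"my_type": "type_1"},         # static part of the PARTITION clause
    ["2019-08-08"],                # trailing SELECT columns
)
print(part)  # {'my_type': 'type_1', 'my_date': '2019-08-08'}
```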
## Formats
We have tested on the following table storage formats: text, csv, SequenceFile, ORC, and Parquet.
+# -- ORC Vectorized Optimization --
Review comment:
Yes
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
With regards,
Apache Git Services