This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/amoro-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2720be0 deploy: 838ad91cb1277831c740b12c49a4e837ae64e6f7
2720be0 is described below
commit 2720be095718f93a9fee51698db11fbaf87a4237
Author: zhoujinsong <[email protected]>
AuthorDate: Mon Apr 21 02:32:57 2025 +0000
deploy: 838ad91cb1277831c740b12c49a4e837ae64e6f7
---
output/docs/latest/deployment/index.html | 3 +--
output/docs/latest/flink-datastream/index.html | 3 +--
output/docs/latest/flink-dml/index.html | 6 ++----
output/docs/latest/flink-using-logstore/index.html | 3 +--
output/docs/latest/index.html | 2 +-
output/docs/latest/spark-ddl/index.html | 18 ++++++------------
output/docs/latest/spark-getting-started/index.html | 12 ++++--------
output/docs/latest/spark-writes/index.html | 9 +++------
8 files changed, 19 insertions(+), 37 deletions(-)
diff --git a/output/docs/latest/deployment/index.html
b/output/docs/latest/deployment/index.html
index 1438ff5..cc5b306 100644
--- a/output/docs/latest/deployment/index.html
+++ b/output/docs/latest/deployment/index.html
@@ -650,8 +650,7 @@ Unzip it to create the amoro-x.y.z directory in the same
directory, and then go
<p>You can also configure a relational database as the backend storage if needed.</p>
<blockquote>
<p>If you would like to use MySQL as the system database, you need to manually
download the <a
href="https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.1.0/mysql-connector-j-8.1.0.jar">MySQL
JDBC Connector</a>
-and move it into the <code>${AMORO_HOME}/lib/</code> directory.</p>
-</blockquote>
+and move it into the <code>${AMORO_HOME}/lib/</code>
directory.</p></blockquote>
<p>You need to create an empty database in the RDBMS before starting the
server; AMS will then automatically create tables in the database when it
first starts.</p>
<p>You also need to add the following configuration under
<code>config.yaml</code> of AMS:</p>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span
style="color:#f92672">ams</span>:
diff --git a/output/docs/latest/flink-datastream/index.html
b/output/docs/latest/flink-datastream/index.html
index 1b0d1fc..a93f6f0 100644
--- a/output/docs/latest/flink-datastream/index.html
+++ b/output/docs/latest/flink-datastream/index.html
@@ -889,8 +889,7 @@
<blockquote>
<p><strong>TIPS</strong></p>
<p>When mixed-format.emit.mode contains log, you need to configure
log-store.enabled = true. See <a href="../flink-dml/">Enable Log
Configuration</a>.</p>
-<p>When mixed-format.emit.mode contains file, primary key tables will only be
written to the ChangeStore, while non-primary key tables will be written
directly to the BaseStore.</p>
-</blockquote>
+<p>When mixed-format.emit.mode contains file, primary key tables will only be
written to the ChangeStore, while non-primary key tables will be written
directly to the BaseStore.</p></blockquote>
</div>
diff --git a/output/docs/latest/flink-dml/index.html
b/output/docs/latest/flink-dml/index.html
index e5be5ad..2b6eb4b 100644
--- a/output/docs/latest/flink-dml/index.html
+++ b/output/docs/latest/flink-dml/index.html
@@ -622,8 +622,7 @@
<p>Use batch mode to read full and incremental data from FileStore.</p>
<blockquote>
<p><strong>TIPS</strong></p>
-<p>LogStore does not support bounded reading.</p>
-</blockquote>
+<p>LogStore does not support bounded reading.</p></blockquote>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#75715e">-- Run Flink tasks in batch mode in the current session
</span></span></span><span style="display:flex;"><span><span
style="color:#75715e"></span><span style="color:#66d9ef">SET</span>
execution.runtime<span style="color:#f92672">-</span><span
style="color:#66d9ef">mode</span> <span style="color:#f92672">=</span> batch;
</span></span><span style="display:flex;"><span>
@@ -773,8 +772,7 @@
<ul>
<li>When log-store.type = pulsar, the parallelism of the Flink task cannot be
less than the number of partitions in the Pulsar topic, otherwise some
partition data cannot be read.</li>
<li>When the number of topic partitions in log-store is less than the
parallelism of the Flink task, some Flink subtasks will be idle. At this time,
if the task has a watermark, the parameter table.exec.source.idle-timeout must
be configured, otherwise the watermark will not advance. See <a
href="https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/config/#table-exec-source-idle-timeout">official
documentation</a> for details.</li>
-</ul>
-</blockquote>
+</ul></blockquote>
<h3 id="streaming-mode-filestore-non-primary-key-table">Streaming mode
(FileStore non-primary key table)</h3>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#75715e">-- Run Flink tasks in streaming mode in the current
session
</span></span></span><span style="display:flex;"><span><span
style="color:#75715e"></span><span style="color:#66d9ef">SET</span>
execution.runtime<span style="color:#f92672">-</span><span
style="color:#66d9ef">mode</span> <span style="color:#f92672">=</span>
streaming;
diff --git a/output/docs/latest/flink-using-logstore/index.html
b/output/docs/latest/flink-using-logstore/index.html
index 013a475..0ccea42 100644
--- a/output/docs/latest/flink-using-logstore/index.html
+++ b/output/docs/latest/flink-using-logstore/index.html
@@ -678,8 +678,7 @@
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#66d9ef">INSERT</span> <span style="color:#66d9ef">INTO</span>
db.log_table <span style="color:#75715e">/*+
OPTIONS('mixed-format.emit.mode'='log') */</span>
</span></span><span style="display:flex;"><span><span
style="color:#66d9ef">SELECT</span> id, name, ts <span
style="color:#66d9ef">from</span> sourceTable;
</span></span></code></pre></div><blockquote>
-<p>Currently, only the Apache Flink engine implements the dual-write LogStore
and FileStore.</p>
-</blockquote>
+<p>Currently, only the Apache Flink engine implements the dual-write LogStore
and FileStore.</p></blockquote>
</div>
diff --git a/output/docs/latest/index.html b/output/docs/latest/index.html
index 6d8bc94..c6e170f 100644
--- a/output/docs/latest/index.html
+++ b/output/docs/latest/index.html
@@ -32,7 +32,7 @@
<!DOCTYPE html>
<html>
<head>
- <meta name="generator" content="Hugo 0.141.0">
+ <meta name="generator" content="Hugo 0.146.6">
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
diff --git a/output/docs/latest/spark-ddl/index.html
b/output/docs/latest/spark-ddl/index.html
index 9fdbb51..98ab6f9 100644
--- a/output/docs/latest/spark-ddl/index.html
+++ b/output/docs/latest/spark-ddl/index.html
@@ -649,21 +649,18 @@ Integers and longs truncate to bins: truncate(10, i)
produces partitions 0, 10,
</li>
</ul>
<blockquote>
-<p>Mixed-Hive format doesn’t support transforms.</p>
-</blockquote>
+<p>Mixed-Hive format doesn’t support transforms.</p></blockquote>
<h2 id="create-table--as-select">CREATE TABLE … AS SELECT</h2>
<pre tabindex="0"><code>CREATE TABLE mixed_catalog.db.sample
USING mixed_iceberg
AS SELECT ...
</code></pre><blockquote>
<p>The <code>CREATE TABLE ... AS SELECT</code> syntax is used to create a
table and write the query results to the table. Primary
-keys, partitions, and properties are not inherited from the source table and
need to be configured separately.</p>
-</blockquote>
+keys, partitions, and properties are not inherited from the source table and
need to be configured separately.</p></blockquote>
<blockquote>
<p>You can enable a uniqueness check for the primary key in the source table
by setting <code>spark.sql.mixed-format.check-source-data-uniqueness.enabled =
true</code> in Spark SQL. If there are duplicate primary keys, an
-error will be raised during the write operation.</p>
-</blockquote>
+error will be raised during the write operation.</p></blockquote>
<p>You can use the following syntax to create a table with primary keys,
partitions, and properties:</p>
<pre tabindex="0"><code>CREATE TABLE mixed_catalog.db.sample
PRIMARY KEY(id) USING mixed_iceberg
@@ -683,21 +680,18 @@ USING mixed_iceberg
TBLPROPERTIES ('owner'='xxxx');
</code></pre><blockquote>
<p>Since <code>PRIMARY KEY</code> is not standard Spark syntax, if the
source table is a MixedFormat table with primary keys, the
-new table can copy the schema information including the primary keys.
Otherwise, only the schema can be copied.</p>
-</blockquote>
+new table can copy the schema information including the primary keys.
Otherwise, only the schema can be copied.</p></blockquote>
<div class="info">
<code>Create Table Like</code> only supports the binary form of
<code>db.table</code> and in the same catalog
</div>
<h2 id="replace-table--as-select">REPLACE TABLE … AS SELECT</h2>
<blockquote>
-<p>The <code>REPLACE TABLE ... AS SELECT</code> syntax only supports tables
without primary keys in the current version.</p>
-</blockquote>
+<p>The <code>REPLACE TABLE ... AS SELECT</code> syntax only supports tables
without primary keys in the current version.</p></blockquote>
<pre tabindex="0"><code>REPLACE TABLE mixed_catalog.db.sample
USING mixed_iceberg
AS SELECT ...
</code></pre><blockquote>
-<p>In the current version, <code>REPLACE TABLE ... AS SELECT</code> does not
provide atomicity guarantees.</p>
-</blockquote>
+<p>In the current version, <code>REPLACE TABLE ... AS SELECT</code> does not
provide atomicity guarantees.</p></blockquote>
<h2 id="drop-table">DROP TABLE</h2>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#66d9ef">DROP</span> <span style="color:#66d9ef">TABLE</span>
mixed_catalog.db.sample;
</span></span></code></pre></div><h2 id="truncate-table">TRUNCATE TABLE</h2>
diff --git a/output/docs/latest/spark-getting-started/index.html
b/output/docs/latest/spark-getting-started/index.html
index 177eced..12db254 100644
--- a/output/docs/latest/spark-getting-started/index.html
+++ b/output/docs/latest/spark-getting-started/index.html
@@ -603,8 +603,7 @@ for more information.</p>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-bash" data-lang="bash"><span
style="display:flex;"><span>spark-shell --packages
org.apache.amoro:amoro-mixed-spark-3.3-runtime:0.7.0
</span></span></code></pre></div><blockquote>
<p>If you want to include the connector in your Spark installation, add the
<code>amoro-mixed-spark-3.3-runtime</code> Jar to
-Spark’s <code>jars</code> folder.</p>
-</blockquote>
+Spark’s <code>jars</code> folder.</p></blockquote>
<h2 id="adding-catalogs">Adding catalogs</h2>
<pre tabindex="0"><code>${SPARK_HOME}/bin/spark-sql \
--conf
spark.sql.extensions=org.apache.amoro.spark.MixedFormatSparkExtensions \
@@ -615,12 +614,10 @@ Spark’s <code>jars</code> folder.</p>
in the following format:
<code>thrift://${AMS_HOST}:${AMS_PORT}/${AMS_CATALOG_NAME}</code>,
The mixed-format-spark-connector will automatically download the Hadoop site
configuration file through
-the thrift protocol for accessing the HDFS cluster.</p>
-</blockquote>
+the thrift protocol for accessing the HDFS cluster.</p></blockquote>
<blockquote>
<p>The AMS_PORT is the port number of the AMS service’s thrift API
interface, with a default value of 1260.
-The AMS_CATALOG_NAME is the name of the Catalog you want to access on AMS.</p>
-</blockquote>
+The AMS_CATALOG_NAME is the name of the Catalog you want to access on
AMS.</p></blockquote>
<p>Regarding detailed configurations for Spark, please refer to <a
href="../spark-configuration/">Spark Configurations</a></p>
<h2 id="creating-a-table">Creating a table</h2>
<p>In the Spark SQL command line, you can create a table using the
<code>CREATE TABLE</code> statement.</p>
@@ -656,8 +653,7 @@ insert overwrite test3 values
( 2, "bbb", timestamp('2022-1-2 00:00:00')),
( 3, "bbb", timestamp('2022-1-3 00:00:00'));
</code></pre><blockquote>
-<p>If you are using Static Overwrite, you cannot define transforms on
partition fields.</p>
-</blockquote>
+<p>If you are using Static Overwrite, you cannot define transforms on
partition fields.</p></blockquote>
<p>Alternatively, you can use the DataFrame API to write data to an Amoro
table within a JAR job.</p>
<pre tabindex="0"><code>val df = spark.read().load("/path-to-table")
df.writeTo("test_db.table1").overwritePartitions()
diff --git a/output/docs/latest/spark-writes/index.html
b/output/docs/latest/spark-writes/index.html
index fbeaf37..457f0c9 100644
--- a/output/docs/latest/spark-writes/index.html
+++ b/output/docs/latest/spark-writes/index.html
@@ -614,13 +614,11 @@ the table. If the PARTITION clause is omitted, all
partitions will be replaced.<
</span></span><span style="display:flex;"><span>partition( dt <span
style="color:#f92672">=</span> <span
style="color:#e6db74">'2021-1-1'</span>) <span
style="color:#66d9ef">values</span>
</span></span><span style="display:flex;"><span>(<span
style="color:#ae81ff">1</span>, <span
style="color:#e6db74">'aaa'</span>), (<span
style="color:#ae81ff">2</span>, <span
style="color:#e6db74">'bbb'</span>), (<span
style="color:#ae81ff">3</span>, <span
style="color:#e6db74">'ccc'</span>)
</span></span></code></pre></div><blockquote>
-<p>Static mode does not support defining transforms on partitioning
columns.</p>
-</blockquote>
+<p>Static mode does not support defining transforms on partitioning
columns.</p></blockquote>
<blockquote>
<p>You can enable a uniqueness check of the primary key on the source table by
setting <code>spark.sql.mixed-format.check-source-data-uniqueness.enabled =
true</code> in Spark SQL. If there are duplicate primary keys,
-an error will be thrown during the write operation.</p>
-</blockquote>
+an error will be thrown during the write operation.</p></blockquote>
<h3 id="insert-into">INSERT INTO</h3>
<p>To append new data to a table, use <code>INSERT INTO</code>.</p>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#66d9ef">INSERT</span> <span style="color:#66d9ef">INTO</span>
mixed_catalog.db.sample <span style="color:#66d9ef">VALUES</span> (<span
style="color:#ae81ff">1</span>, <span
style="color:#e6db74">'a'</span>), (<span
style="color:#ae81ff">2</span>, <span styl [...]
@@ -645,8 +643,7 @@ even if there are rows with the same primary key in the
table.</p>
</span></span></code></pre></div><blockquote>
<p>You can enable a uniqueness check of the primary key on the source table by
setting <code>spark.sql.mixed-format.check-source-data-uniqueness.enabled =
true</code> in Spark SQL. If there are duplicate primary keys,
-an error will be thrown during the write operation.</p>
-</blockquote>
+an error will be thrown during the write operation.</p></blockquote>
<h3 id="delete-from">DELETE FROM</h3>
<p>The <code>DELETE FROM</code> statement deletes rows from a table.</p>
<div class="highlight"><pre tabindex="0"
style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code
class="language-sql" data-lang="sql"><span style="display:flex;"><span><span
style="color:#66d9ef">DELETE</span> <span style="color:#66d9ef">FROM</span>
mixed_catalog.db.sample