This is an automated email from the ASF dual-hosted git repository.
dnskr pushed a commit to branch branch-1.11
in repository https://gitbox.apache.org/repos/asf/kyuubi.git
The following commit(s) were added to refs/heads/branch-1.11 by this push:
new 1daa7281ee [KYUUBI #7339] [DOC] Fix non-consecutive header level increase
1daa7281ee is described below
commit 1daa7281eeb1301e1835cbcab5e9272efca84b11
Author: dnskr <[email protected]>
AuthorDate: Mon Mar 2 10:32:40 2026 +0800
[KYUUBI #7339] [DOC] Fix non-consecutive header level increase
### Why are the changes needed?
The changes fix the following warnings printed during the documentation build:
```
./kyuubi/docs/connector/spark/kudu.md.rst:42: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:46: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:50: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:56: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:82: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:103: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/connector/spark/kudu.md.rst:121: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/monitor/logging.md.rst:190: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/monitor/logging.md.rst:194: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/monitor/logging.md.rst:210: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
./kyuubi/docs/monitor/logging.md.rst:214: WARNING: Non-consecutive header level increase; H2 to H4 [myst.header]
```
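The warnings come from jumping straight from an `##` (H2) section to `####` (H4) subsections; MyST expects each heading level to increase by at most one. A minimal before/after sketch of the fix, mirroring the patch below:

```markdown
<!-- Before: H2 followed directly by H4 triggers myst.header -->
## Kudu Integration with Kyuubi
#### Install Kudu Spark Dependency

<!-- After: the subsection is demoted to H3, one level below its parent -->
## Kudu Integration with Kyuubi
### Install Kudu Spark Dependency
```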
### How was this patch tested?
Verified that no warnings are printed anymore during the documentation build and that the rendered pages look the same.
```
make html
open _build/html/monitor/logging.html
open _build/html/connector/spark/kudu.html
```
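To catch such regressions automatically, `sphinx-build`'s `-W` option turns warnings into build failures. A hedged sketch, assuming the standard Sphinx Makefile in `docs/` forwards `SPHINXOPTS`:

```shell
# Fail the docs build on any warning (e.g. myst.header),
# while --keep-going still reports all warnings instead of
# stopping at the first one.
make html SPHINXOPTS="-W --keep-going"
```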
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #7339 from dnskr/doc-fix-non-consecutive-header-level-increase.
Closes #7339
18fc95b3e [dnskr] [DOC] Fix non-consecutive header level increase
Authored-by: dnskr <[email protected]>
Signed-off-by: Cheng Pan <[email protected]>
(cherry picked from commit ba7e0571cf3992446689ac7abf8180e817beca1b)
Signed-off-by: dnskr <[email protected]>
---
docs/connector/spark/kudu.md | 14 +++++++-------
docs/monitor/logging.md      |  8 ++++----
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/docs/connector/spark/kudu.md b/docs/connector/spark/kudu.md
index c38e7a8d38..a47b1f4880 100644
--- a/docs/connector/spark/kudu.md
+++ b/docs/connector/spark/kudu.md
@@ -39,21 +39,21 @@ Before integrating Kyuubi with Kudu, we strongly suggest that you integrate and
## Kudu Integration with Kyuubi
-#### Install Kudu Spark Dependency
+### Install Kudu Spark Dependency
Confirm your Kudu cluster version and download the corresponding kudu spark dependency library, such as [org.apache.kudu:kudu-spark3_2.12-1.14.0](https://repo1.maven.org/maven2/org/apache/kudu/kudu-spark3_2.12/1.14.0/kudu-spark3_2.12-1.14.0.jar) to `$SPARK_HOME`/jars.
-#### Start Kyuubi
+### Start Kyuubi
Now, you can start Kyuubi server with this kudu embedded Spark distribution.
-#### Start Beeline Or Other Client You Prefer
+### Start Beeline Or Other Client You Prefer
```shell
bin/kyuubi-beeline -u 'jdbc:kyuubi://<host>:<port>/;principal=<if kerberized>;#spark.yarn.queue=kyuubi_test'
```
-#### Register Kudu table as Spark Temporary view
+### Register Kudu table as Spark Temporary view
```sql
CREATE TEMPORARY VIEW kudutest
@@ -79,7 +79,7 @@ options (
2 rows selected (0.29 seconds)
```
-#### Query Kudu Table
+### Query Kudu Table
```sql
0: jdbc:kyuubi://spark5.jd.163.org:10009/> select * from kudutest;
@@ -100,7 +100,7 @@ options (
5 rows selected (1.083 seconds)
```
-#### Join Kudu table with Hive table
+### Join Kudu table with Hive table
```sql
0: jdbc:kyuubi://spark5.jd.163.org:10009/> select t1.*, t2.* from hive_tbl t1 join kudutest t2 on t1.userid=t2.userid+1;
@@ -118,7 +118,7 @@ options (
3 rows selected (1.63 seconds)
```
-#### Insert to Kudu table
+### Insert to Kudu table
You should notice that only `INSERT INTO` is supported by Kudu, `OVERWRITE` data is not supported
diff --git a/docs/monitor/logging.md b/docs/monitor/logging.md
index 989e24b117..6dda2e945c 100644
--- a/docs/monitor/logging.md
+++ b/docs/monitor/logging.md
@@ -187,11 +187,11 @@ Meanwhile, it also includes how all the services of an engine start/stop, how it
In general, when an exception occurs, we are able to find more information and clues in the engine's logs.
-#### Configuring Engine Logging
+### Configuring Engine Logging
Please refer to Apache Spark online documentation [Configuring Logging](https://spark.apache.org/docs/latest/configuration.html#configuring-logging) for instructions.
-#### Where to Find the Engine Log
+### Where to Find the Engine Log
The engine logs are located differently based on the deploy mode and the cluster manager.
When using local backend or `client` deploy mode for other cluster managers, such as YARN, you can find the whole engine log in `$KYUUBI_WORK_DIR_ROOT/${session username}/kyuubi-spark-sql-engine.log.${num}`.
@@ -207,11 +207,11 @@ Meanwhile, it also includes how all the services of an engine start/stop, how do
In general, when an exception occurs, we are able to find more information and clues in the engine's logs.
-#### Configuring Engine Logging
+### Configuring Engine Logging
Please refer to Apache Flink online documentation [Configuring Logging](https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/advanced/logging) for instructions.
-#### Where to Find the Engine Log
+### Where to Find the Engine Log
The engine logs are located differently based on the deploy mode and the cluster manager.
When using local backend or `client` deploy mode for other cluster managers, such as YARN, you can find the whole engine log in `$KYUUBI_WORK_DIR_ROOT/${session username}/kyuubi-flink-sql-engine.log.${num}`.