This is an automated email from the ASF dual-hosted git repository.

zykkk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 85c57ec380a [faq](lakehouse) add jdbc be java oom faq (#3014)
85c57ec380a is described below

commit 85c57ec380ab33a63f9505e93502d09726382f83
Author: zy-kkk <[email protected]>
AuthorDate: Thu Oct 30 11:26:18 2025 +0800

    [faq](lakehouse) add jdbc be java oom faq (#3014)
---
 docs/faq/lakehouse-faq.md                                        | 9 +++++++++
 .../docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md  | 9 +++++++++
 .../version-1.2/faq/lakehouse-faq.md                             | 9 +++++++++
 .../version-2.0/faq/lakehouse-faq.md                             | 9 +++++++++
 .../version-2.1/faq/lakehouse-faq.md                             | 9 +++++++++
 .../version-3.x/faq/lakehouse-faq.md                             | 9 +++++++++
 versioned_docs/version-1.2/faq/lakehouse-faq.md                  | 9 +++++++++
 versioned_docs/version-2.0/faq/lakehouse-faq.md                  | 9 +++++++++
 versioned_docs/version-2.1/faq/lakehouse-faq.md                  | 9 +++++++++
 versioned_docs/version-3.x/faq/lakehouse-faq.md                  | 9 +++++++++
 10 files changed, 90 insertions(+)

diff --git a/docs/faq/lakehouse-faq.md b/docs/faq/lakehouse-faq.md
index 36da4c65204..0f23b873347 100644
--- a/docs/faq/lakehouse-faq.md
+++ b/docs/faq/lakehouse-faq.md
@@ -105,6 +105,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS`. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Accessing Iceberg or Hive table through Hive Catalog reports an error: `failed to get schema` or `Storage schema reading not supported`
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
index 4147a3f93a4..95f15f41892 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/lakehouse-faq.md
@@ -106,6 +106,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
    Please add `useSSL=true` in `jdbc_url`
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
+
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS` to adjust the maximum JVM heap size. For example: `-Xmx8g`.
 
 ## Hive Catalog
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/faq/lakehouse-faq.md
index d81b04aafea..034b7b54001 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/faq/lakehouse-faq.md
@@ -107,6 +107,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS` to adjust the maximum JVM heap size. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Error accessing Iceberg table via Hive Metastore: `failed to get schema` or `Storage schema reading not supported`
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/faq/lakehouse-faq.md
index d81b04aafea..034b7b54001 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/faq/lakehouse-faq.md
@@ -107,6 +107,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS` to adjust the maximum JVM heap size. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Error accessing Iceberg table via Hive Metastore: `failed to get schema` or `Storage schema reading not supported`
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/faq/lakehouse-faq.md
index ac1fa0ad9dd..9890a381979 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/faq/lakehouse-faq.md
@@ -107,6 +107,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS` to adjust the maximum JVM heap size. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Accessing Iceberg or Hive table through Hive Catalog reports an error: `failed to get schema` or `Storage schema reading not supported`
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/faq/lakehouse-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/faq/lakehouse-faq.md
index ac1fa0ad9dd..9890a381979 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/faq/lakehouse-faq.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/faq/lakehouse-faq.md
@@ -107,6 +107,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS` to adjust the maximum JVM heap size. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Accessing Iceberg or Hive table through Hive Catalog reports an error: `failed to get schema` or `Storage schema reading not supported`
diff --git a/versioned_docs/version-1.2/faq/lakehouse-faq.md b/versioned_docs/version-1.2/faq/lakehouse-faq.md
index b52e4a31154..96e8e0c5304 100644
--- a/versioned_docs/version-1.2/faq/lakehouse-faq.md
+++ b/versioned_docs/version-1.2/faq/lakehouse-faq.md
@@ -105,6 +105,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS`. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Error accessing Iceberg table via Hive Metastore: `failed to get schema` or `Storage schema reading not supported`
diff --git a/versioned_docs/version-2.0/faq/lakehouse-faq.md b/versioned_docs/version-2.0/faq/lakehouse-faq.md
index b52e4a31154..96e8e0c5304 100644
--- a/versioned_docs/version-2.0/faq/lakehouse-faq.md
+++ b/versioned_docs/version-2.0/faq/lakehouse-faq.md
@@ -105,6 +105,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS`. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Error accessing Iceberg table via Hive Metastore: `failed to get schema` or `Storage schema reading not supported`
diff --git a/versioned_docs/version-2.1/faq/lakehouse-faq.md b/versioned_docs/version-2.1/faq/lakehouse-faq.md
index ce44cb67e6a..2885fab5ba7 100644
--- a/versioned_docs/version-2.1/faq/lakehouse-faq.md
+++ b/versioned_docs/version-2.1/faq/lakehouse-faq.md
@@ -105,6 +105,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS`. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Accessing Iceberg or Hive table through Hive Catalog reports an error: `failed to get schema` or `Storage schema reading not supported`
diff --git a/versioned_docs/version-3.x/faq/lakehouse-faq.md b/versioned_docs/version-3.x/faq/lakehouse-faq.md
index ce44cb67e6a..2885fab5ba7 100644
--- a/versioned_docs/version-3.x/faq/lakehouse-faq.md
+++ b/versioned_docs/version-3.x/faq/lakehouse-faq.md
@@ -105,6 +105,15 @@ ln -s /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt /etc/ssl/certs/ca-
 
 4. When synchronizing MySQL data to Doris using JDBC Catalog, a date data synchronization error occurs. Verify that the MySQL version matches the MySQL driver package; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
 
+5. When a single field is too large, a Java memory OOM occurs on the BE side during a query.
+
+   When Jdbc Scanner reads data through JDBC, the session variable `batch_size` determines the number of rows processed in the JVM per batch. If a single field is too large, the product `field_size * batch_size` (an approximate value, given JVM static memory and data copy overhead) may exceed the JVM memory limit, resulting in an OOM.
+
+   Solutions:
+
+   - Reduce the `batch_size` value by executing `set batch_size = 512;`. The default value is 4064.
+   - Increase the BE JVM memory by modifying the `-Xmx` parameter in `JAVA_OPTS`. For example: `-Xmx8g`.
+
 ## Hive Catalog
 
 1. Accessing Iceberg or Hive table through Hive Catalog reports an error: `failed to get schema` or `Storage schema reading not supported`

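As a quick illustration of the first workaround described in the FAQ entry added above, here is a minimal sketch of a Doris client session over the MySQL protocol. The `set batch_size = 512;` statement and the 4064 default come from the entry itself; the `SHOW VARIABLES` check is an assumption based on Doris's MySQL-compatible syntax.

    -- Inspect the current per-batch row count (default 4064); variable name taken from the FAQ entry.
    SHOW VARIABLES LIKE 'batch_size';

    -- Shrink the batch so that field_size * batch_size stays within the BE-side JVM heap,
    -- then re-run the JDBC Catalog query in the same session.
    SET batch_size = 512;

If the query still hits an OOM, the second workaround applies: raise `-Xmx` in the BE's `JAVA_OPTS` (for example `-Xmx8g`); the new heap limit only takes effect once the BE process is restarted.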

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
