This is an automated email from the ASF dual-hosted git repository.
chengpan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi.git
The following commit(s) were added to refs/heads/master by this push:
new 17df2428e [KYUUBI #3640] Change the default event logger description of hive and trino to json instead of spark
17df2428e is described below
commit 17df2428e6f2e119cf9a9e0aa62c3fe8a46d2271
Author: jiaoqingbo <[email protected]>
AuthorDate: Wed Oct 19 17:32:00 2022 +0800
[KYUUBI #3640] Change the default event logger description of hive and trino to json instead of spark
### _Why are the changes needed?_
fix #3640
### _How was this patch tested?_
- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request
Closes #3641 from jiaoqingbo/kyuubi3640.
Closes #3640
c1b2d4a6 [jiaoqingbo] code review
a65bc132 [jiaoqingbo] code review
ab04f135 [jiaoqingbo] [KYUUBI #3640] Change the default event logger description of hive and trino to json instead of spark
Authored-by: jiaoqingbo <[email protected]>
Signed-off-by: Cheng Pan <[email protected]>
---
docs/deployment/settings.md | 8 ++++----
.../src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala | 8 ++++----
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/docs/deployment/settings.md b/docs/deployment/settings.md
index 5905f8689..8c019d2b1 100644
--- a/docs/deployment/settings.md
+++ b/docs/deployment/settings.md
@@ -222,11 +222,11 @@ kyuubi.engine.deregister.exception.messages||A comma separated list of exception
kyuubi.engine.deregister.exception.ttl|PT30M|Time to live(TTL) for exceptions pattern specified in kyuubi.engine.deregister.exception.classes and kyuubi.engine.deregister.exception.messages to deregister engines. Once the total error count hits the kyuubi.engine.deregister.job.max.failures within the TTL, an engine will deregister itself and wait for self-terminated. Otherwise, we suppose that the engine has recovered from temporary failures.|duration|1.2.0
kyuubi.engine.deregister.job.max.failures|4|Number of failures of job before deregistering the engine.|int|1.2.0
kyuubi.engine.event.json.log.path|file:///tmp/kyuubi/events|The location of all the engine events go for the builtin JSON logger.<ul><li>Local Path: start with 'file://'</li><li>HDFS Path: start with 'hdfs://'</li></ul>|string|1.3.0
-kyuubi.engine.event.loggers|SPARK|A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.3.0
+kyuubi.engine.event.loggers|SPARK|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.3.0
kyuubi.engine.flink.extra.classpath|<undefined>|The extra classpath for the flink sql engine, for configuring location of hadoop client jars, etc|string|1.6.0
kyuubi.engine.flink.java.options|<undefined>|The extra java options for the flink sql engine|string|1.6.0
kyuubi.engine.flink.memory|1g|The heap memory for the flink sql engine|string|1.6.0
-kyuubi.engine.hive.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.hive.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
kyuubi.engine.hive.extra.classpath|<undefined>|The extra classpath for the hive query engine, for configuring location of hadoop client jars, etc|string|1.6.0
kyuubi.engine.hive.java.options|<undefined>|The extra java options for the hive query engine|string|1.6.0
kyuubi.engine.hive.memory|1g|The heap memory for the hive query engine|string|1.6.0
@@ -251,8 +251,8 @@ kyuubi.engine.share.level|USER|Engines will be shared in different levels, avail
kyuubi.engine.share.level.sub.domain|<undefined>|(deprecated) - Using kyuubi.engine.share.level.subdomain instead|string|1.2.0
kyuubi.engine.share.level.subdomain|<undefined>|Allow end-users to create a subdomain for the share level of an engine. A subdomain is a case-insensitive string values that must be a valid zookeeper sub path. For example, for `USER` share level, an end-user can share a certain engine within a subdomain, not for all of its clients. End-users are free to create multiple engines in the `USER` share level. When disable engine pool, use 'default' if absent.|string|1.4.0
kyuubi.engine.single.spark.session|false|When set to true, this engine is running in a single session mode. All the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.|boolean|1.3.0
-kyuubi.engine.spark.event.loggers|SPARK|A comma separated list of engine loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
-kyuubi.engine.trino.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go. We use spark logger by default.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.spark.event.loggers|SPARK|A comma separated list of engine loggers, where engine/session/operation etc events go.<ul> <li>SPARK: the events will be written to the spark listener bus.</li> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
+kyuubi.engine.trino.event.loggers|JSON|A comma separated list of engine history loggers, where engine/session/operation etc events go.<ul> <li>JSON: the events will be written to the location of kyuubi.engine.event.json.log.path</li> <li>JDBC: to be done</li> <li>CUSTOM: to be done.</li></ul>|seq|1.7.0
kyuubi.engine.trino.extra.classpath|<undefined>|The extra classpath for the trino query engine, for configuring other libs which may need by the trino engine |string|1.6.0
kyuubi.engine.trino.java.options|<undefined>|The extra java options for the trino query engine|string|1.6.0
kyuubi.engine.trino.memory|1g|The heap memory for the trino query engine|string|1.6.0
diff --git a/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala b/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
index f83866c4a..3779cbe84 100644
--- a/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
+++ b/kyuubi-common/src/main/scala/org/apache/kyuubi/config/KyuubiConf.scala
@@ -1531,7 +1531,7 @@ object KyuubiConf {
val ENGINE_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
buildConf("kyuubi.engine.event.loggers")
.doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
- " events go. We use spark logger by default.<ul>" +
+ " events go.<ul>" +
" <li>SPARK: the events will be written to the spark listener bus.</li>" +
" <li>JSON: the events will be written to the location of" +
s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
@@ -2080,7 +2080,7 @@ object KyuubiConf {
val ENGINE_SPARK_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
buildConf("kyuubi.engine.spark.event.loggers")
.doc("A comma separated list of engine loggers, where engine/session/operation etc" +
- " events go. We use spark logger by default.<ul>" +
+ " events go.<ul>" +
" <li>SPARK: the events will be written to the spark listener bus.</li>" +
" <li>JSON: the events will be written to the location of" +
s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
@@ -2092,7 +2092,7 @@ object KyuubiConf {
val ENGINE_HIVE_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
buildConf("kyuubi.engine.hive.event.loggers")
.doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
- " events go. We use spark logger by default.<ul>" +
+ " events go.<ul>" +
" <li>JSON: the events will be written to the location of" +
s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
" <li>JDBC: to be done</li>" +
@@ -2109,7 +2109,7 @@ object KyuubiConf {
val ENGINE_TRINO_EVENT_LOGGERS: ConfigEntry[Seq[String]] =
buildConf("kyuubi.engine.trino.event.loggers")
.doc("A comma separated list of engine history loggers, where engine/session/operation etc" +
- " events go. We use spark logger by default.<ul>" +
+ " events go.<ul>" +
" <li>JSON: the events will be written to the location of" +
s" ${ENGINE_EVENT_JSON_LOG_PATH.key}</li>" +
" <li>JDBC: to be done</li>" +