panbingkun commented on code in PR #45834:
URL: https://github.com/apache/spark/pull/45834#discussion_r1550609240
##########
common/utils/src/main/scala/org/apache/spark/internal/LogKey.scala:
##########
@@ -21,17 +21,56 @@ package org.apache.spark.internal
* All structured logging keys should be defined here for standardization.
*/
object LogKey extends Enumeration {
- val APPLICATION_ID = Value
+ val EXECUTOR_ID = Value
Review Comment:
I originally planned to group by `category` first, and then sort in `alphabetical order` at the second level.
Let me give an example:
```
APPLICATION_ID
K8S_ID
MEMORY_ID
MAX_SIZE
MIN_SIZE
```
If we only sort `alphabetically`, we will get:
```
APPLICATION_ID
K8S_ID
MAX_SIZE
MEMORY_ID
MIN_SIZE
```
It's a bit weird for me to see `MEMORY_ID` between `MAX_SIZE` and `MIN_SIZE`.
If we first group by `category` at the first level and then sort
`alphabetically` at the second level, we obtain:
```
# ID
APPLICATION_ID
K8S_ID
MEMORY_ID
# SHUFFLE
MAX_SIZE
MIN_SIZE
```
Just like:
https://github.com/apache/spark/blob/7dec5eb14644aee6c0562bad1d14421d9fa07f17/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala#L602-L610
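To make it concrete, here is a minimal sketch of how the grouped object might look. Only `APPLICATION_ID` and `EXECUTOR_ID` appear in this PR's diff; the remaining keys are hypothetical placeholders carried over from the example above:
```scala
object LogKey extends Enumeration {
  // ID: identifier-style keys, kept alphabetical within the group.
  // K8S_ID and MEMORY_ID are hypothetical placeholders, not real keys.
  val APPLICATION_ID = Value
  val EXECUTOR_ID = Value
  val K8S_ID = Value
  val MEMORY_ID = Value

  // SHUFFLE: size-related keys, kept alphabetical within the group.
  // MAX_SIZE and MIN_SIZE are hypothetical placeholders as well.
  val MAX_SIZE = Value
  val MIN_SIZE = Value
}
```
With this layout, each comment header marks a category, so a new key has an unambiguous home: find its category first, then insert it alphabetically.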
I think that as our log migration work progresses, this object will grow larger
and larger. If we only sort `alphabetically`, it is not certain that developers,
and ultimately anyone searching the logs, can quickly find the `LogKey` they want.