Re: Proper way to modify log4j config file for kubernetes-session

2024-05-14 Thread Vararu, Vadim
Yes, the dynamic log level modification worked great for me.

Thanks a lot,
Vadim
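For context, the dynamic part typically comes from Log4j2's automatic
reconfiguration: Flink's default log4j-console.properties sets
monitorInterval=30, so edits to the mounted config are picked up at runtime
without a restart. A minimal sketch, assuming the native integration's
flink-config-<cluster-id> ConfigMap naming (the cluster id and logger below
are placeholders):

# Edit the shipped log4j config of a running session cluster; Log4j2
# re-reads the file within monitorInterval, no pod restart needed.
kubectl edit configmap flink-config-my-session-cluster
# ...then adjust or add a logger entry, e.g.:
#   logger.flink.name = org.apache.flink
#   logger.flink.level = DEBUG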

From: Biao Geng 
Date: Tuesday, 14 May 2024 at 10:07
To: Vararu, Vadim 
Cc: user@flink.apache.org 
Subject: Re: Proper way to modify log4j config file for kubernetes-session
Hi Vararu,

Does this document meet your requirements?
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#logging
Best,
Biao Geng


Vararu, Vadim <vadim.var...@adswizz.com> wrote on Tue, May 14, 2024 at 01:39:
Hi,

Trying to configure loggers in the log4j-console.properties file (that is 
mounted from the host where the kubernetes-session.sh is invoked and referenced 
by the TM processes via -Dlog4j.configurationFile).

Is there a proper (documented) way to do that, meaning to append/modify the 
log4j config file?

Thanks,
Vadim.
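In short, the approach from the logging section linked above: the conf
directory on the client is shipped to the session cluster, so the file is
edited locally before start-up. A minimal sketch (the logger and cluster id
are placeholders, assuming defaults otherwise):

# Append a logger to the local config that kubernetes-session.sh ships.
cat >> "$FLINK_HOME/conf/log4j-console.properties" <<'EOF'
logger.netty.name = org.apache.flink.shaded.netty4.io.netty
logger.netty.level = WARN
EOF
"$FLINK_HOME/bin/kubernetes-session.sh" -Dkubernetes.cluster-id=my-session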


Re: Proper way to modify log4j config file for kubernetes-session

2024-05-14 Thread Biao Geng
Hi Vararu,

Does this document meet your requirements?
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#logging
Best,
Biao Geng


Vararu, Vadim wrote on Tue, May 14, 2024 at 01:39:

> Hi,
>
>
>
> Trying to configure loggers in the *log4j-console.properties* file (that
> is mounted from the host where the kubernetes-session.sh is invoked and
> referenced by the TM processes via *-Dlog4j.configurationFile*).
>
>
>
> Is there a proper (documented) way to do that, meaning to append/modify
> the log4j config file?
>
>
>
> Thanks,
>
> Vadim.
>


Proper way to modify log4j config file for kubernetes-session

2024-05-13 Thread Vararu, Vadim
Hi,

Trying to configure loggers in the log4j-console.properties file (that is 
mounted from the host where the kubernetes-session.sh is invoked and referenced 
by the TM processes via -Dlog4j.configurationFile).

Is there a proper (documented) way to do that, meaning to append/modify the 
log4j config file?

Thanks,
Vadim.


Re: Default Log4j properties in Native Kubernetes

2023-06-21 Thread Yang Wang
I assume you are using "*bin/flink run-application*" to submit a Flink
application to the K8s cluster. Then you could simply
update your local log4j-console.properties; it will be shipped and mounted
to the JobManager/TaskManager pods via a ConfigMap.
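A minimal sketch of that flow (cluster id, image, and jar path are
placeholders):

# The local conf/log4j-console.properties ends up in the cluster's ConfigMap.
vi "$FLINK_HOME/conf/log4j-console.properties"   # adjust loggers first
"$FLINK_HOME/bin/flink" run-application \
    -t kubernetes-application \
    -Dkubernetes.cluster-id=my-app \
    -Dkubernetes.container.image=my-repo/my-flink-image:latest \
    local:///opt/flink/usrlib/flink-job.jar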

Best,
Yang

Vladislav Keda wrote on Tue, Jun 20, 2023 at 22:15:

> Hi all again!
>
> Please tell me if you can answer my question, thanks.
>
> ---
>
> Best Regards,
> Vladislav Keda
>
> On Fri, Jun 16, 2023 at 16:12, Vladislav Keda <vladislav.k...@glowbyteconsulting.com> wrote:
>
>> Hi all!
>>
>> Is it possible to change Flink *log4j-console.properties* in Native
>> Kubernetes (for example in Kubernetes Application mode) without rebuilding
>> the application docker image?
>>
>> I was trying to inject a .sh script call (in the attachment) before
>> /docker-entrypoint.sh, but this workaround did not work (k8s gives me an
>> exception that the log4j* files are write-locked because there is a
>> configmap over them).
>>
>> Is there another way to change log4j* files?
>>
>> Thank you very much in advance!
>>
>> Best Regards,
>> Vladislav Keda
>>
>


Re: Default Log4j properties in Native Kubernetes

2023-06-20 Thread Vladislav Keda
Hi all again!

Please tell me if you can answer my question, thanks.

---

Best Regards,
Vladislav Keda

On Fri, Jun 16, 2023 at 16:12, Vladislav Keda <vladislav.k...@glowbyteconsulting.com> wrote:

> Hi all!
>
> Is it possible to change Flink *log4j-console.properties* in Native
> Kubernetes (for example in Kubernetes Application mode) without rebuilding
> the application docker image?
>
> I was trying to inject a .sh script call (in the attachment) before
> /docker-entrypoint.sh, but this workaround did not work (k8s gives me an
> exception that the log4j* files are write-locked because there is a
> configmap over them).
>
> Is there another way to change log4j* files?
>
> Thank you very much in advance!
>
> Best Regards,
> Vladislav Keda
>


Default Log4j properties in Native Kubernetes

2023-06-16 Thread Vladislav Keda
Hi all!

Is it possible to change Flink *log4j-console.properties* in Native
Kubernetes (for example in Kubernetes Application mode) without rebuilding
the application docker image?

I was trying to inject a .sh script call (in the attachment) before
/docker-entrypoint.sh, but this workaround did not work (k8s gives me an
exception that the log4j* files are write-locked because there is a
configmap over them).

Is there another way to change log4j* files?

Thank you very much in advance!

Best Regards,
Vladislav Keda
#!/usr/bin/env bash

# shellcheck disable=SC2034
LOG4J_CLI_PROPERTIES_PATH="${FLINK_HOME}/conf/log4j-cli.properties"
# shellcheck disable=SC2034
LOG4J_CONSOLE_PROPERTIES_PATH="${FLINK_HOME}/conf/log4j-console.properties"
# shellcheck disable=SC2034
LOG4J_SESSION_PROPERTIES_PATH="${FLINK_HOME}/conf/log4j-session.properties"
# shellcheck disable=SC2034
LOG4J_PROPERTIES_PATH="${FLINK_HOME}/conf/log4j.properties"

override_properties() {
  local properties_var=$1
  local properties_path_var="${properties_var}_PATH"

  # Indirect expansion: resolve the variables whose names are stored in
  # properties_var and properties_path_var.
  local content="${!properties_var}"
  local path="${!properties_path_var}"

  if [ -n "${content}" ]; then
    echo "$0: ${properties_var} env variable is set. Overwriting ${path}"
    echo "${content}" > "${path}"
  else
    echo "$0: ${properties_var} env variable is not set. Using Flink's ${path}"
  fi
}

override_properties "LOG4J_CLI_PROPERTIES"
override_properties "LOG4J_CONSOLE_PROPERTIES"
override_properties "LOG4J_SESSION_PROPERTIES"
override_properties "LOG4J_PROPERTIES"


Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-24 Thread Yang Wang
>
> I checked the image prior to cluster creation; all log files are there.
> Once the cluster is deployed, they are missing (bug?).


I do not think it is a bug since we have already shipped all the config
files (log4j properties, flink-conf.yaml) via the ConfigMap.
Then it is directly mounted to an existing path (/opt/flink/conf), which
makes all the existing files hidden.

Of course, we could use the subPath mount to avoid this issue. But the
volume mount will not receive any updates [1].


[1]. https://kubernetes.io/docs/concepts/storage/volumes/#configmap
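For illustration, a subPath mount of a single file looks roughly like this
(a sketch; the names are placeholders, and as noted above the file then
stops receiving ConfigMap updates):

volumeMounts:
  - name: flink-config-volume
    mountPath: /opt/flink/conf/log4j-console.properties
    subPath: log4j-console.properties
volumes:
  - name: flink-config-volume
    configMap:
      name: flink-config-my-cluster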


Best,
Yang


Tamir Sagi wrote on Sat, Jan 22, 2022 at 23:18:

> Hey Yang,
>
> I've created the ticket,
> https://issues.apache.org/jira/browse/FLINK-25762
>
> In addition,
>
> The /opt/flink/conf is cleaned up because we are mounting the conf files
> from K8s ConfigMap.
>
> I checked the image prior to cluster creation; all log files are there.
> Once the cluster is deployed, they are missing (bug?).
>
> Best,
> Tamir.
> --
> *From:* Tamir Sagi 
> *Sent:* Friday, January 21, 2022 7:19 PM
> *To:* Yang Wang 
> *Cc:* user@flink.apache.org 
> *Subject:* Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored
> and falls back to default /opt/flink/conf/log4j-console.properties
>
> Yes,
>
> Thank you!
> I will handle that.
>
> Best,
> Tamir
> --
> *From:* Yang Wang 
> *Sent:* Friday, January 21, 2022 5:11 AM
> *To:* Tamir Sagi 
> *Cc:* user@flink.apache.org 
> *Subject:* Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored
> and falls back to default /opt/flink/conf/log4j-console.properties
>
> Changing the order of the exec command makes sense to me. Would you please
> create a ticket for this?
>
> The /opt/flink/conf is cleaned up because we are mounting the conf files
> from K8s ConfigMap.
>
>
>
> Best,
> Yang
>
Tamir Sagi wrote on Tue, Jan 18, 2022 at 17:48:
>
> Hey Yang,
>
> Thank you for confirming it.
>
> IMO, a better approach is to change the order of "log_setting", "ARGS" and
> "FLINK_ENV_JAVA_OPTS" in the exec command.
> That way we prioritize user-defined properties.
>
> From:
>
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}"
> -classpath "`manglePathList
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN}
> "${ARGS[@]}"
>
> To
>
> exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN}
> "${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"
>
> Unless there are system configurations which are not supposed to be overridden
> by the user (and then having dedicated env variables is better), does it make
> sense to you?
>
>
> In addition, any idea why /opt/flink/conf gets cleaned (only
> flink-conf.yaml is there)?
>
>
> Best,
> Tamir
>
>
> --
> *From:* Yang Wang 
> *Sent:* Tuesday, January 18, 2022 6:02 AM
> *To:* Tamir Sagi 
> *Cc:* user@flink.apache.org 
> *Subject:* Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored
> and falls back to default /opt/flink/conf/log4j-console.properties
>
> I think you are right. Before 1.13.0, if the log configuration file does
> not exist, the logging properties would not be added to the start command.
> That is why it could work in 1.12.2.
>
> However, from 1.13.0, we are not using
> "kubernetes.container-start-command-template" to generate the JM/TM start
> command, but the jobmanager.sh/taskmanager.sh. We do not
> have the same logic in the "flink-console.sh".
>
> Maybe we could introduce an environment variable for the log configuration
> file name in the "flink-console.sh". The default value could be
> "log4j-console.properties" and it could be configured by users.
> If this makes sense to you, could you please create a ticket?
>
>
> Best,
> Yang
>
Tamir Sagi wrote on Mon, Jan 17, 2022 at 22:53:
>
> Hey Yang,
>
> thanks for answering,
>
> TL;DR
>
> Assuming I have not missed anything, the way TM and JM are created is
> different between these two versions,
> but it does look like flink-console.sh gets called eventually with the
> same exec command.
>
> in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j
> returns false then logging args are not added to startCommand.
>
>
>    1. Why does the config dir get cleaned once the cluster starts? Even
>    when I pushed log4j-console.properties to the expected location
>    (/opt/flink/conf), the directory includes only flink-conf.yaml.

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-22 Thread Tamir Sagi
Hey Yang,

I've created the ticket,
https://issues.apache.org/jira/browse/FLINK-25762

In addition,
The /opt/flink/conf is cleaned up because we are mounting the conf files from 
K8s ConfigMap.
I checked the image prior to cluster creation; all log files are there. Once the
cluster is deployed, they are missing (bug?).

Best,
Tamir.


From: Tamir Sagi 
Sent: Friday, January 21, 2022 7:19 PM
To: Yang Wang 
Cc: user@flink.apache.org 
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties

Yes,

Thank you!
I will handle that.

Best,
Tamir


From: Yang Wang 
Sent: Friday, January 21, 2022 5:11 AM
To: Tamir Sagi 
Cc: user@flink.apache.org 
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


Changing the order of the exec command makes sense to me. Would you please create a
ticket for this?

The /opt/flink/conf is cleaned up because we are mounting the conf files from 
K8s ConfigMap.



Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Tue, Jan 18, 2022 at 17:48:
Hey Yang,

Thank you for confirming it.

IMO, a better approach is to change the order of "log_setting", "ARGS" and
"FLINK_ENV_JAVA_OPTS" in the exec command.
That way we prioritize user-defined properties.

From:

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

To

exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList 
"$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
"${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"

Unless there are system configurations which are not supposed to be overridden by
the user (and then having dedicated env variables is better), does it make sense to
you?


In addition, any idea why /opt/flink/conf gets cleaned (only flink-conf.yaml is
there)?


Best,
Tamir




From: Yang Wang <danrtsey...@gmail.com>
Sent: Tuesday, January 18, 2022 6:02 AM
To: Tamir Sagi <tamir.s...@niceactimize.com>
Cc: user@flink.apache.org
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


I think you are right. Before 1.13.0, if the log configuration file does not 
exist, the logging properties would not be added to the start command. That is 
why it could work in 1.12.2.

However, from 1.13.0, we are not using 
"kubernetes.container-start-command-template" to generate the JM/TM start 
command, but the 
jobmanager.sh/taskmanager.sh. We do not
have the same logic in the "flink-console.sh".

Maybe we could introduce an environment variable for the log configuration file
name in the "flink-console.sh". The default value could be "log4j-console.properties" and
it could be configured by users.
If this makes sense to you, could you please create a ticket?


Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Mon, Jan 17, 2022 at 22:53:
Hey Yang,

thanks for answering,

TL;DR

Assuming I have not missed anything, the way TM and JM are created is
different between these two versions,
but it does look like flink-console.sh gets called eventually with the same 
exec command.

in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j returns 
false then logging args are not added to startCommand.


  1.  Why does the config dir get cleaned once the cluster starts? Even when I
pushed log4j-console.properties to the expected location (/opt/flink/conf),
the directory includes only flink-conf.yaml.
  2.  I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
"${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
IMO, it should first look for properties in java.opts provided by the user in
flink-conf and fall back to the default in case it's not present.

Talking about Native Kubernetes mode

I checked the bash script in the flink-dist module; it looks like flink-console.sh
is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are more cases for
the input argument).

logging variable is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89

Exec command is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-21 Thread Tamir Sagi
Yes,

Thank you!
I will handle that.

Best,
Tamir


From: Yang Wang 
Sent: Friday, January 21, 2022 5:11 AM
To: Tamir Sagi 
Cc: user@flink.apache.org 
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


Changing the order of the exec command makes sense to me. Would you please create a
ticket for this?

The /opt/flink/conf is cleaned up because we are mounting the conf files from 
K8s ConfigMap.



Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Tue, Jan 18, 2022 at 17:48:
Hey Yang,

Thank you for confirming it.

IMO, a better approach is to change the order of "log_setting", "ARGS" and
"FLINK_ENV_JAVA_OPTS" in the exec command.
That way we prioritize user-defined properties.

From:

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

To

exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList 
"$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
"${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"

Unless there are system configurations which are not supposed to be overridden by
the user (and then having dedicated env variables is better), does it make sense to
you?


In addition, any idea why /opt/flink/conf gets cleaned (only flink-conf.yaml is
there)?


Best,
Tamir




From: Yang Wang <danrtsey...@gmail.com>
Sent: Tuesday, January 18, 2022 6:02 AM
To: Tamir Sagi <tamir.s...@niceactimize.com>
Cc: user@flink.apache.org
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


I think you are right. Before 1.13.0, if the log configuration file does not 
exist, the logging properties would not be added to the start command. That is 
why it could work in 1.12.2.

However, from 1.13.0, we are not using 
"kubernetes.container-start-command-template" to generate the JM/TM start 
command, but the 
jobmanager.sh/taskmanager.sh. We do not
have the same logic in the "flink-console.sh".

Maybe we could introduce an environment variable for the log configuration file
name in the "flink-console.sh". The default value could be "log4j-console.properties" and
it could be configured by users.
If this makes sense to you, could you please create a ticket?


Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Mon, Jan 17, 2022 at 22:53:
Hey Yang,

thanks for answering,

TL;DR

Assuming I have not missed anything, the way TM and JM are created is
different between these two versions,
but it does look like flink-console.sh gets called eventually with the same 
exec command.

in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j returns 
false then logging args are not added to startCommand.


  1.  Why does the config dir get cleaned once the cluster starts? Even when I
pushed log4j-console.properties to the expected location (/opt/flink/conf),
the directory includes only flink-conf.yaml.
  2.  I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
"${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
IMO, it should first look for properties in java.opts provided by the user in
flink-conf and fall back to the default in case it's not present.

Talking about Native Kubernetes mode

I checked the bash script in the flink-dist module; it looks like flink-console.sh
is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are more cases for
the input argument).

logging variable is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89

Exec command is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99

As for creating the TM/JM, in 1.14.2 two bash scripts are used:

  *   kubernetes-jobmanager.sh
  *   kubernetes-taskmanager.sh

They get called while decorating the pod, referenced in startCommand.

for instance, JobManager.
https://github.com/apache/flink/blob/release-1.14.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/CmdJobManagerDecorator.java#L58-L59

kubernetes-jobmanager.sh gets called once the container starts, which calls
flink-console.sh internally and passes the deploymentName (kubernetes-application
in our case) and args.

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-20 Thread Yang Wang
Changing the order of the exec command makes sense to me. Would you please
create a ticket for this?

The /opt/flink/conf is cleaned up because we are mounting the conf files
from K8s ConfigMap.



Best,
Yang

Tamir Sagi wrote on Tue, Jan 18, 2022 at 17:48:

> Hey Yang,
>
> Thank you for confirming it.
>
> IMO, a better approach is to change the order of "log_setting", "ARGS" and
> "FLINK_ENV_JAVA_OPTS" in the exec command.
> That way we prioritize user-defined properties.
>
> From:
>
> exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}"
> -classpath "`manglePathList
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN}
> "${ARGS[@]}"
>
> To
>
> exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList
> "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN}
> "${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"
>
> Unless there are system configurations which are not supposed to be overridden
> by the user (and then having dedicated env variables is better), does it make
> sense to you?
>
>
> In addition, any idea why /opt/flink/conf gets cleaned (only
> flink-conf.yaml is there)?
>
>
> Best,
> Tamir
>
>
> ------
> *From:* Yang Wang 
> *Sent:* Tuesday, January 18, 2022 6:02 AM
> *To:* Tamir Sagi 
> *Cc:* user@flink.apache.org 
> *Subject:* Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored
> and falls back to default /opt/flink/conf/log4j-console.properties
>
> I think you are right. Before 1.13.0, if the log configuration file does
> not exist, the logging properties would not be added to the start command.
> That is why it could work in 1.12.2.
>
> However, from 1.13.0, we are not using
> "kubernetes.container-start-command-template" to generate the JM/TM start
> command, but the jobmanager.sh/taskmanager.sh. We do not
> have the same logic in the "flink-console.sh".
>
> Maybe we could introduce an environment variable for the log configuration
> file name in the "flink-console.sh". The default value could be
> "log4j-console.properties" and it could be configured by users.
> If this makes sense to you, could you please create a ticket?
>
>
> Best,
> Yang
>
Tamir Sagi wrote on Mon, Jan 17, 2022 at 22:53:
>
> Hey Yang,
>
> thanks for answering,
>
> TL;DR
>
> Assuming I have not missed anything, the way TM and JM are created is
> different between these two versions,
> but it does look like flink-console.sh gets called eventually with the
> same exec command.
>
> in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j
> returns false then logging args are not added to startCommand.
>
>
>    1. Why does the config dir get cleaned once the cluster starts? Even
>    when I pushed log4j-console.properties to the expected location
>    (/opt/flink/conf), the directory includes only flink-conf.yaml.
>    2. I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
>    "${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
>    IMO, it should first look for properties in java.opts provided by the
>    user in flink-conf and fall back to the default in case it's not present.
>
>
> Talking about Native Kubernetes mode
>
> I checked the bash script in the flink-dist module; it looks like
> flink-console.sh is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are
> more cases for the input argument).
>
> logging variable is the same
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89
>
> Exec command is the same
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99
>
> As for creating the TM/JM, in *1.14.2* two bash scripts are used:
>
>- kubernetes-jobmanager.sh
>- kubernetes-taskmanager.sh
>
> They get called while decorating the pod, referenced in startCommand.
>
> for instance, JobManager.
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/CmdJobManagerDecorator.java#L58-L59
>
> kubernetes-jobmanager.sh gets called once the container starts, which calls
> flink-console.sh internally and passes the
> deploymentName (kubernetes-application in our case) and args.

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-18 Thread Tamir Sagi
Hey Yang,

Thank you for confirming it.

IMO, a better approach is to change the order of "log_setting", "ARGS" and
"FLINK_ENV_JAVA_OPTS" in the exec command.
That way we prioritize user-defined properties.

From:

exec "$JAVA_RUN" $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" 
-classpath "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}"

To

exec "$JAVA_RUN" $JVM_ARGS "${log_setting[@]}" -classpath "`manglePathList 
"$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" ${CLASS_TO_RUN} 
"${ARGS[@]}" "${FLINK_ENV_JAVA_OPTS}"

Unless there are system configurations which are not supposed to be overridden by
the user (and then having dedicated env variables is better), does it make sense to
you?


In addition, any idea why /opt/flink/conf gets cleaned (only flink-conf.yaml is
there)?


Best,
Tamir




From: Yang Wang 
Sent: Tuesday, January 18, 2022 6:02 AM
To: Tamir Sagi 
Cc: user@flink.apache.org 
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


I think you are right. Before 1.13.0, if the log configuration file does not 
exist, the logging properties would not be added to the start command. That is 
why it could work in 1.12.2.

However, from 1.13.0, we are not using 
"kubernetes.container-start-command-template" to generate the JM/TM start 
command, but the 
jobmanager.sh/taskmanager.sh<http://jobmanager.sh/taskmanager.sh>. We do not
have the same logic in the "flink-console.sh".

Maybe we could introduce an environment variable for the log configuration file
name in the "flink-console.sh". The default value could be "log4j-console.properties" and
it could be configured by users.
If this makes sense to you, could you please create a ticket?


Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Mon, Jan 17, 2022 at 22:53:
Hey Yang,

thanks for answering,

TL;DR

Assuming I have not missed anything, the way TM and JM are created is
different between these two versions,
but it does look like flink-console.sh gets called eventually with the same 
exec command.

in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j returns 
false then logging args are not added to startCommand.


  1.  Why does the config dir get cleaned once the cluster starts? Even when I
pushed log4j-console.properties to the expected location (/opt/flink/conf),
the directory includes only flink-conf.yaml.
  2.  I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
"${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
IMO, it should first look for properties in java.opts provided by the user in
flink-conf and fall back to the default in case it's not present.

Talking about Native Kubernetes mode

I checked the bash script in the flink-dist module; it looks like flink-console.sh
is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are more cases for
the input argument).

logging variable is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89

Exec command is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99

As for creating the TM/JM, in 1.14.2 two bash scripts are used:

  *   kubernetes-jobmanager.sh
  *   kubernetes-taskmanager.sh

They get called while decorating the pod, referenced in startCommand.

for instance, JobManager.
https://github.com/apache/flink/blob/release-1.14.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/CmdJobManagerDecorator.java#L58-L59

kubernetes-jobmanager.sh gets called once the container starts, which calls
flink-console.sh internally and passes the deploymentName (kubernetes-application
in our case) and args.

In 1.12.2 the decorator sets /docker-entrypoint.sh
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactory.java#L67

and sets the start command
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java#L224

https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L333


with an additional logging parameter
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L421-L425

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-17 Thread Yang Wang
I think you are right. Before 1.13.0, if the log configuration file does
not exist, the logging properties would not be added to the start command.
That is why it could work in 1.12.2.

However, from 1.13.0, we are not using
"kubernetes.container-start-command-template" to generate the JM/TM start
command, but the jobmanager.sh/taskmanager.sh. We do not
have the same logic in the "flink-console.sh".

Maybe we could introduce an environment variable for the log configuration file
name in the "flink-console.sh". The default value could be
"log4j-console.properties" and it could be configured by users.
If this makes sense to you, could you please create a ticket?
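A sketch of what that could look like in flink-console.sh (the variable name
FLINK_LOG4J_CONSOLE_PROPERTIES is hypothetical, not an existing option):

# Hypothetical env variable, defaulting to the current hard-coded name.
log4j_config_name="${FLINK_LOG4J_CONSOLE_PROPERTIES:-log4j-console.properties}"
log_setting=("-Dlog4j.configurationFile=file:${FLINK_CONF_DIR}/${log4j_config_name}")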


Best,
Yang

Tamir Sagi wrote on Mon, Jan 17, 2022 at 22:53:

> Hey Yang,
>
> thanks for answering,
>
> TL;DR
>
> Assuming I have not missed anything, the way TM and JM are created is
> different between these two versions,
> but it does look like flink-console.sh gets called eventually with the
> same exec command.
>
> in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j
> returns false then logging args are not added to startCommand.
>
>
>    1. Why does the config dir get cleaned once the cluster starts? Even
>    when I pushed log4j-console.properties to the expected location
>    (/opt/flink/conf), the directory includes only flink-conf.yaml.
>    2. I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
>    "${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
>    IMO, it should first look for properties in java.opts provided by the
>    user in flink-conf and fall back to the default in case it's not present.
>
>
> Talking about Native Kubernetes mode
>
> I checked the bash script in the flink-dist module; it looks like
> flink-console.sh is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are
> more cases for the input argument).
>
> logging variable is the same
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89
>
> Exec command is the same
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99
>
> As for creating the TM/JM, in *1.14.2* two bash scripts are used:
>
>- kubernetes-jobmanager.sh
>- kubernetes-taskmanager.sh
>
> They get called while decorating the pod, referenced in startCommand.
>
> for instance, JobManager.
>
> https://github.com/apache/flink/blob/release-1.14.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/CmdJobManagerDecorator.java#L58-L59
>
> kubernetes-jobmanager.sh gets called once the container starts, which calls
> flink-console.sh internally and passes the
> deploymentName (kubernetes-application in our case) and args.
>
> In *1.12.2* the decorator sets /docker-entrypoint.sh
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactory.java#L67
>
> and sets the start command
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java#L224
>
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L333
>
>
> with additional logging parameter
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L421-L425
>
> hasLog4j
>
> https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/parameters/AbstractKubernetesParameters.java#L151-L155
> it checks if the file exists in conf dir.
>
> If hasLog4j is false, then the logging properties are not added to the start
> command (might be the case, which explains why it works in 1.12.2).
>
> It then passes 'jobmanager' as the component.
> Looking into /docker-entrypoint.sh, it calls jobmanager.sh, which calls
> flink-console.sh internally.
>
> Have I missed anything?
>
>
> Best,
> Tamir
>
>
> --
> *From:* Yang Wang 
> *Sent:* Monday, January 17, 2022 1:05 PM
> *To:* Tamir Sagi 
> *Cc:* user@flink.apache.org 
>

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-17 Thread Tamir Sagi
Hey Yang,

thanks for answering,

TL;DR

Assuming I have not missed anything, the way TM and JM are created is
different between these two versions,
but it does look like flink-console.sh gets called eventually with the same 
exec command.

in 1.12.2 if org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j returns 
false then logging args are not added to startCommand.


  1.  Why does the config dir get cleaned once the cluster starts? Even when I
pushed log4j-console.properties to the expected location (/opt/flink/conf),
the directory includes only flink-conf.yaml.
  2.  I think by running the exec command "...${FLINK_ENV_JAVA_OPTS}
"${log_setting[@]}" "${ARGS[@]}"" some properties might be ignored.
IMO, it should first look for properties in java.opts provided by the user in
flink-conf and fall back to the default in case it's not present.

Talking about Native Kubernetes mode

I checked the bash script in the flink-dist module; it looks like flink-console.sh
is similar in both 1.14.2 and 1.12.2 (in 1.14.2 there are more cases for
the input argument).

logging variable is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L101
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L89

Exec command is the same
https://github.com/apache/flink/blob/release-1.14.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L114
https://github.com/apache/flink/blob/release-1.12.2/flink-dist/src/main/flink-bin/bin/flink-console.sh#L99

As for creating the TM/JM, in 1.14.2 two bash scripts are used:

  *   kubernetes-jobmanager.sh
  *   kubernetes-taskmanager.sh

They get called while decorating the pod, referenced in startCommand.

for instance, JobManager.
https://github.com/apache/flink/blob/release-1.14.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/CmdJobManagerDecorator.java#L58-L59

kubernetes-jobmanager.sh gets called once the container starts, which calls
flink-console.sh internally and passes the deploymentName (kubernetes-application
in our case) and args.

In 1.12.2 the decorator sets /docker-entrypoint.sh
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactory.java#L67

and sets the start command
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesConfigOptions.java#L224

https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L333


with an additional logging parameter
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/utils/KubernetesUtils.java#L421-L425

hasLog4j
https://github.com/apache/flink/blob/release-1.12.2/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/parameters/AbstractKubernetesParameters.java#L151-L155
it checks if the file exists in conf dir.

If hasLog4j is false, then the logging properties are not added to the start
command (might be the case, which explains why it works in 1.12.2).

It then passes 'jobmanager' as the component.
Looking into /docker-entrypoint.sh, it calls jobmanager.sh, which calls
flink-console.sh internally.

Have I missed anything?


Best,
Tamir




From: Yang Wang 
Sent: Monday, January 17, 2022 1:05 PM
To: Tamir Sagi 
Cc: user@flink.apache.org 
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and 
falls back to default /opt/flink/conf/log4j-console.properties


I think the root cause is that we are using "flink-console.sh" to start the 
JobManager/TaskManager process for native K8s integration after FLINK-21128[1].
So it forces the log4j configuration name to be "log4j-console.properties".


[1]. https://issues.apache.org/jira/browse/FLINK-21128


Best,
Yang

Tamir Sagi <tamir.s...@niceactimize.com> wrote on Thu, Jan 13, 2022 at 20:30:
Hey All

I'm running Flink 1.14.2; it seems like it ignores the system property
-Dlog4j.configurationFile and
falls back to /opt/flink/conf/log4j-console.properties.

I enabled debug logging for log4j2 (-Dlog4j2.debug).

DEBUG StatusLogger Catching
 java.io.FileNotFoundException: file:/opt/flink/conf/log4j-console.properties 
(No such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(Unknown Source)
at java.base/java.io.FileInputStream.<init>(Unknown Source)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory.getInputFromString(ConfigurationFactory.java:370)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:513)

Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-17 Thread Yang Wang
I think the root cause is that we are using "flink-console.sh" to start the
JobManager/TaskManager process for native K8s integration after
FLINK-21128[1].
So it forces the log4j configuration name to be "log4j-console.properties".
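For context, the log settings in flink-console.sh look roughly like this
(paraphrased from the script, not copied verbatim); and since
${FLINK_ENV_JAVA_OPTS} comes before "${log_setting[@]}" in the exec line,
these forced flags override any -Dlog4j.configurationFile set via
env.java.opts:

# Paraphrased from flink-dist's bin/flink-console.sh:
log_setting=(
  "-Dlog4j.configuration=file:${FLINK_CONF_DIR}/log4j-console.properties"
  "-Dlog4j.configurationFile=file:${FLINK_CONF_DIR}/log4j-console.properties"
  "-Dlogback.configurationFile=file:${FLINK_CONF_DIR}/logback-console.xml"
)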


[1]. https://issues.apache.org/jira/browse/FLINK-21128


Best,
Yang

Tamir Sagi wrote on Thu, Jan 13, 2022 at 20:30:

> Hey All
>
> I'm running Flink 1.14.2; it seems like it ignores the system
> property -Dlog4j.configurationFile and
> falls back to /opt/flink/conf/log4j-console.properties.
>
> I enabled debug logging for log4j2 (-Dlog4j2.debug).
>
> DEBUG StatusLogger Catching
>  java.io.FileNotFoundException:
> file:/opt/flink/conf/log4j-console.properties (No such file or directory)
> at java.base/java.io.FileInputStream.open0(Native Method)
> at java.base/java.io.FileInputStream.open(Unknown Source)
> at java.base/java.io.FileInputStream.<init>(Unknown Source)
> at
> org.apache.logging.log4j.core.config.ConfigurationFactory.getInputFromString(ConfigurationFactory.java:370)
> at
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:513)
> at
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:499)
> at
> org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:422)
> at
> org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:322)
> at
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:695)
> at
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:716)
> at
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:270)
> at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:155)
> at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:47)
> at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196)
> at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:137)
> at
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:55)
> at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:47)
> at
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:33)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> at
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.<clinit>(AkkaRpcServiceUtils.java:55)
> at
> org.apache.flink.runtime.rpc.akka.AkkaRpcSystem.remoteServiceBuilder(AkkaRpcSystem.java:42)
> at
> org.apache.flink.runtime.rpc.akka.CleanupOnCloseRpcSystem.remoteServiceBuilder(CleanupOnCloseRpcSystem.java:77)
> at
> org.apache.flink.runtime.rpc.RpcUtils.createRemoteRpcService(RpcUtils.java:184)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:300)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193)
> at
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617)
>
> This is where I see the property being loaded while deploying the cluster:
>
> source:{
> class:org.apache.flink.configuration.GlobalConfiguration
> method:loadYAMLResource
> file:GlobalConfiguration.java
> line:213
> }
> message:Loading configuration property: env.java.opts,
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps
> -Dlog4j.configurationFile=/opt/log4j2/log4j2.xml -Dlog4j2.debug=true
>
> In addition, following the documentation [1], it seems like Flink comes
> with default log4j properties files located in /opt/flink/conf.
>
> Looking into that dir once the cluster is deployed, only flink-conf.yaml
> is there.
>
>
>
> Docker file content
>
> FROM flink:1.14.2-scala_2.12-java11
> ARG JAR_FILE
> COPY target/${JAR_FILE} $FLINK_HOME/usrlib/flink-job.jar
> ADD log4j2.xml /opt/log4j2/log4j2.xml
>
>
>
> *It works perfectly in 1.12.2 with the same log4j2.xml file and system
> property.*
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/advanced/logging/#configuring-log4j-2
>
>
> Best,
> Tamir

Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and falls back to default /opt/flink/conf/log4j-console.properties

2022-01-13 Thread Tamir Sagi
Hey All

I'm running Flink 1.14.2; it seems like it ignores the system property
-Dlog4j.configurationFile and
falls back to /opt/flink/conf/log4j-console.properties.

I enabled debug logging for log4j2 (-Dlog4j2.debug).

DEBUG StatusLogger Catching
 java.io.FileNotFoundException: file:/opt/flink/conf/log4j-console.properties 
(No such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(Unknown Source)
at java.base/java.io.FileInputStream.<init>(Unknown Source)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory.getInputFromString(ConfigurationFactory.java:370)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:513)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:499)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:422)
at 
org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:322)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:695)
at 
org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:716)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:270)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:155)
at 
org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:47)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196)
at 
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:137)
at 
org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:55)
at 
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:47)
at 
org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:33)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils.<clinit>(AkkaRpcServiceUtils.java:55)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcSystem.remoteServiceBuilder(AkkaRpcSystem.java:42)
at 
org.apache.flink.runtime.rpc.akka.CleanupOnCloseRpcSystem.remoteServiceBuilder(CleanupOnCloseRpcSystem.java:77)
at 
org.apache.flink.runtime.rpc.RpcUtils.createRemoteRpcService(RpcUtils.java:184)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:300)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:243)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:193)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:617)

This is where I see the property being loaded while deploying the cluster:

source:{
class:org.apache.flink.configuration.GlobalConfiguration
method:loadYAMLResource
file:GlobalConfiguration.java
line:213
}
message:Loading configuration property: env.java.opts, 
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps
-Dlog4j.configurationFile=/opt/log4j2/log4j2.xml -Dlog4j2.debug=true
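That is, flink-conf.yaml contains something like the following (reconstructed
only from the log line above, shown to make the reproduction concrete):

env.java.opts: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps -Dlog4j.configurationFile=/opt/log4j2/log4j2.xml -Dlog4j2.debug=true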

In addition, following the documentation [1], it seems like Flink comes with
default log4j properties files located in /opt/flink/conf.

Looking into that dir once the cluster is deployed, only flink-conf.yaml is
there.

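One way to check what the ConfigMap actually provides (a sketch; the
ConfigMap and pod names follow the native integration's
flink-config-<cluster-id> and <cluster-id>-taskmanager-N-M patterns and are
placeholders here):

kubectl get configmap flink-config-my-cluster -o yaml     # shipped conf files
kubectl exec -it my-cluster-taskmanager-1-1 -- ls /opt/flink/conf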

Docker file content

FROM flink:1.14.2-scala_2.12-java11
ARG JAR_FILE
COPY target/${JAR_FILE} $FLINK_HOME/usrlib/flink-job.jar
ADD log4j2.xml /opt/log4j2/log4j2.xml


It works perfectly in 1.12.2 with the same log4j2.xml file and system property.

[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/advanced/logging/#configuring-log4j-2


Best,
Tamir






Re: question on jar compatibility - log4j related

2021-12-19 Thread David Morávek
Hi Eddie,

the APIs should be binary compatible across patch releases, so there is no
need to re-compile your artifacts.

Best,
D.

On Sun 19. 12. 2021 at 16:42, Colletta, Edward 
wrote:

> If I have jar files built using Flink version 1.11.2 in dependencies, and I
> upgrade my cluster to 1.11.6, is it safe to run the existing jars on the
> upgraded cluster or should I rebuild all jobs against 1.11.6?
>
> Thanks,
>
> Eddie Colletta
>


question on jar compatibility - log4j related

2021-12-19 Thread Colletta, Edward
If I have jar files built using Flink version 1.11.2 in dependencies, and I upgrade
my cluster to 1.11.6, is it safe to run the existing jars on the upgraded cluster
or should I rebuild all jobs against 1.11.6?

Thanks,
Eddie Colletta


Re: How do I determine which hardware device and software has log4j zero-day security vulnerability?

2021-12-19 Thread Turritopsis Dohrnii Teo En Ming
I realised there is an Apache Log4j mailing list.

Regards,

Mr. Turritopsis Dohrnii Teo En Ming
Targeted Individual in Singapore
19 Dec 2021 Sunday


On Fri, 17 Dec 2021 at 00:29, Arvid Heise  wrote:
>
> I think this is meant for the Apache log4j mailing list [1].
>
> [1] https://logging.apache.org/log4j/2.x/mail-lists.html
>
> On Thu, Dec 16, 2021 at 4:07 PM David Morávek  wrote:
>>
>> Hi Turritopsis,
>>
>> I fail to see any relation to Apache Flink. Can you please elaborate on how 
>> Flink fits into it?
>>
>> Best,
>> D.
>>
>> On Thu, Dec 16, 2021 at 3:52 PM Turritopsis Dohrnii Teo En Ming 
>>  wrote:
>>>
>>> Subject: How do I determine which hardware device and software has
>>> log4j zero-day security vulnerability?
>>>
>>> Good day from Singapore,
>>>
>>> I am working for a Systems Integrator (SI) in Singapore. We have
>>> several clients writing in, requesting us to identify log4j zero-day
>>> security vulnerability in their corporate infrastructure.
>>>
>>> It seems to be pretty difficult to determine which hardware device and
>>> which software has the vulnerability. There seems to be no lists of
>>> hardware devices and software affected by the flaw any where on the
>>> internet.
>>>
>>> Could you refer me to definitive documentation/guides on how to
>>> identify log4j security flaw in hardware devices and software?
>>>
>>> Thank you very much for your kind assistance.
>>>
>>> Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 16 Dec 2021,
>>> is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
>>> with a Systems Integrator (SI)/computer firm in Singapore. He is an IT
>>> enthusiast.


Re: How do I determine which hardware device and software has log4j zero-day security vulnerability?

2021-12-19 Thread Turritopsis Dohrnii Teo En Ming
Hi,

Please refer to this link.

Article: Log4j zero-day flaw: What you need to know and how to protect yourself
Link: 
https://www.zdnet.com/article/log4j-zero-day-flaw-what-you-need-to-know-and-how-to-protect-yourself/

The article says:

[QUOTE]

WHAT DEVICES AND APPLICATIONS ARE AT RISK?

Basically any device that's exposed to the internet is at risk if it's
running Apache Log4J, versions 2.0 to 2.14.1. NCSC notes that Log4j
version 2 (Log4j2), the affected version, is included in Apache
Struts2, Solr, Druid, Flink, and Swift frameworks.

Where is Log4j used?

The Log4j 2 library is used in enterprise Java software and according
to the UK's NCSC is included in Apache frameworks such as Apache
Struts2, Apache Solr, Apache Druid, Apache Flink, and Apache Swift.

[/QUOTE]

Regards,

Mr. Turritopsis Dohrnii Teo En Ming
Targeted Individual in Singapore
19 Dec 2021 Sunday

On Thu, 16 Dec 2021 at 23:07, David Morávek  wrote:
>
> Hi Turritopsis,
>
> I fail to see any relation to Apache Flink. Can you please elaborate on how 
> Flink fits into it?
>
> Best,
> D.
>
> On Thu, Dec 16, 2021 at 3:52 PM Turritopsis Dohrnii Teo En Ming 
>  wrote:
>>
>> Subject: How do I determine which hardware device and software has
>> log4j zero-day security vulnerability?
>>
>> Good day from Singapore,
>>
>> I am working for a Systems Integrator (SI) in Singapore. We have
>> several clients writing in, requesting us to identify log4j zero-day
>> security vulnerability in their corporate infrastructure.
>>
>> It seems to be pretty difficult to determine which hardware device and
>> which software has the vulnerability. There seems to be no lists of
>> hardware devices and software affected by the flaw any where on the
>> internet.
>>
>> Could you refer me to definitive documentation/guides on how to
>> identify log4j security flaw in hardware devices and software?
>>
>> Thank you very much for your kind assistance.
>>
>> Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 16 Dec 2021,
>> is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
>> with a Systems Integrator (SI)/computer firm in Singapore. He is an IT
>> enthusiast.


Re: How do I determine which hardware device and software has log4j zero-day security vulnerability?

2021-12-16 Thread Arvid Heise
I think this is meant for the Apache log4j mailing list [1].

[1] https://logging.apache.org/log4j/2.x/mail-lists.html

On Thu, Dec 16, 2021 at 4:07 PM David Morávek  wrote:

> Hi Turritopsis,
>
> I fail to see any relation to Apache Flink. Can you please elaborate on
> how Flink fits into it?
>
> Best,
> D.
>
> On Thu, Dec 16, 2021 at 3:52 PM Turritopsis Dohrnii Teo En Ming <
> ceo.teo.en.m...@gmail.com> wrote:
>
>> Subject: How do I determine which hardware device and software has
>> log4j zero-day security vulnerability?
>>
>> Good day from Singapore,
>>
>> I am working for a Systems Integrator (SI) in Singapore. We have
>> several clients writing in, requesting us to identify log4j zero-day
>> security vulnerability in their corporate infrastructure.
>>
>> It seems to be pretty difficult to determine which hardware device and
>> which software has the vulnerability. There seems to be no lists of
>> hardware devices and software affected by the flaw any where on the
>> internet.
>>
>> Could you refer me to definitive documentation/guides on how to
>> identify log4j security flaw in hardware devices and software?
>>
>> Thank you very much for your kind assistance.
>>
>> Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 16 Dec 2021,
>> is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
>> with a Systems Integrator (SI)/computer firm in Singapore. He is an IT
>> enthusiast.
>


Re: How do I determine which hardware device and software has log4j zero-day security vulnerability?

2021-12-16 Thread David Morávek
Hi Turritopsis,

I fail to see any relation to Apache Flink. Can you please elaborate on how
Flink fits into it?

Best,
D.

On Thu, Dec 16, 2021 at 3:52 PM Turritopsis Dohrnii Teo En Ming <
ceo.teo.en.m...@gmail.com> wrote:

> Subject: How do I determine which hardware device and software has
> log4j zero-day security vulnerability?
>
> Good day from Singapore,
>
> I am working for a Systems Integrator (SI) in Singapore. We have
> several clients writing in, requesting us to identify log4j zero-day
> security vulnerability in their corporate infrastructure.
>
> It seems to be pretty difficult to determine which hardware device and
> which software has the vulnerability. There seem to be no lists of
> hardware devices and software affected by the flaw anywhere on the
> internet.
>
> Could you refer me to definitive documentation/guides on how to
> identify log4j security flaw in hardware devices and software?
>
> Thank you very much for your kind assistance.
>
> Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 16 Dec 2021,
> is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
> with a Systems Integrator (SI)/computer firm in Singapore. He is an IT
> enthusiast.
>
>
>
>
>
> -BEGIN EMAIL SIGNATURE-
>
> The Gospel for all Targeted Individuals (TIs):
>
> [The New York Times] Microwave Weapons Are Prime Suspect in Ills of
> U.S. Embassy Workers
>
> Link:
> https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html
>
>
> 
>
> Singaporean Targeted Individual Mr. Turritopsis Dohrnii Teo En Ming's
> Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts
> at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan
> (5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):
>
> [1] https://tdtemcerts.wordpress.com/
>
> [2] https://tdtemcerts.blogspot.sg/
>
> [3] https://www.scribd.com/user/270125049/Teo-En-Ming
>
> -END EMAIL SIGNATURE-
>


How do I determine which hardware device and software has log4j zero-day security vulnerability?

2021-12-16 Thread Turritopsis Dohrnii Teo En Ming
Subject: How do I determine which hardware device and software has
log4j zero-day security vulnerability?

Good day from Singapore,

I am working for a Systems Integrator (SI) in Singapore. We have
several clients writing in, requesting us to identify log4j zero-day
security vulnerability in their corporate infrastructure.

It seems to be pretty difficult to determine which hardware device and
which software has the vulnerability. There seem to be no lists of
hardware devices and software affected by the flaw anywhere on the
internet.

Could you refer me to definitive documentation/guides on how to
identify log4j security flaw in hardware devices and software?

Thank you very much for your kind assistance.

Mr. Turritopsis Dohrnii Teo En Ming, 43 years old as of 16 Dec 2021,
is a TARGETED INDIVIDUAL living in Singapore. He is an IT Consultant
with a Systems Integrator (SI)/computer firm in Singapore. He is an IT
enthusiast.





-BEGIN EMAIL SIGNATURE-

The Gospel for all Targeted Individuals (TIs):

[The New York Times] Microwave Weapons Are Prime Suspect in Ills of
U.S. Embassy Workers

Link:
https://www.nytimes.com/2018/09/01/science/sonic-attack-cuba-microwave.html



Singaporean Targeted Individual Mr. Turritopsis Dohrnii Teo En Ming's
Academic Qualifications as at 14 Feb 2019 and refugee seeking attempts
at the United Nations Refugee Agency Bangkok (21 Mar 2017), in Taiwan
(5 Aug 2019) and Australia (25 Dec 2019 to 9 Jan 2020):

[1] https://tdtemcerts.wordpress.com/

[2] https://tdtemcerts.blogspot.sg/

[3] https://www.scribd.com/user/270125049/Teo-En-Ming

-END EMAIL SIGNATURE-


Advice on Apache Log4j Zero Day (CVE-2021-44228)

2021-12-10 Thread Konstantin Knauf
Dear Flink Community,

Yesterday, a new Zero Day for Apache Log4j was reported [1]. It is now
tracked under CVE-2021-44228 [2].

Apache Flink bundles a version of Log4j that is affected by this
vulnerability. We recommend users to follow the advisory [3] of the Apache
Log4j Community. For Apache Flink this currently translates to “setting
system property log4j2.formatMsgNoLookups to true” until Log4j has been
upgraded to 2.15.0 in Apache Flink.

This effort is tracked in FLINK-25240 [4]. It will be included in Flink
1.15.0, Flink 1.14.1 and Flink 1.13.3. We expect Flink 1.14.1 to be
released in the next 1-2 weeks. The other releases will follow in their
regular cadence.

This advice has also been published on the Apache Flink blog
https://flink.apache.org/2021/12/10/log4j-cve.html.

Best,

Konstantin

[1]
https://www.cyberkendra.com/2021/12/apache-log4j-vulnerability-details-and.html
[2] https://nvd.nist.gov/vuln/detail/CVE-2021-44228
[3] https://logging.apache.org/log4j/2.x/security.html
[4] https://issues.apache.org/jira/browse/FLINK-25240
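
For illustration, one way to apply that system property cluster-wide is via
env.java.opts in flink-conf.yaml (a sketch; it is picked up by the JobManager
and TaskManager JVMs on restart):

env.java.opts: -Dlog4j2.formatMsgNoLookups=true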

-- 

Konstantin Knauf

https://twitter.com/snntrable

https://github.com/knaufk


Re: flink : Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerContextShutdownAware

2021-09-14 Thread Ragini Manjaiah
hi David,
Yes, you are correct. That solved the issue.

On Tue, Sep 14, 2021 at 5:57 PM David Morávek  wrote:

> From the stacktrace you've shared in the previous email, it seems that
> you're running the code from the IDE, is that correct?
>
> This is the part that makes me assume that, because it's touching files
> from the local maven repository.
>
> SLF4J: Found binding in
> [jar:file:/Users/z004t01/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> Please note that IDE most likely doesn't use the resulting fat jar for
> running your program, but instead constructs the classpath from the
> dependency graph. This most likely comes as a transitive dependency from one
> of the hadoop deps, so you can try to exclude it there directly. You can
> use mvn dependency:tree to verify the exclusion.
>
> Best,
> D.
>
> On Tue, Sep 14, 2021 at 2:15 PM Ragini Manjaiah 
> wrote:
>
>> Hi David,
>> please find my pom.xml . where I have excluded the slf4j-log4j12
>> dependency . even after excluding encountering this issue
>>
>> 
>> http://maven.apache.org/POM/4.0.0;
>>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>>  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
>> http://maven.apache.org/xsd/maven-4.0.0.xsd;>
>> 4.0.0
>>
>> flinkTest
>> *
>> 1.0-SNAPSHOT
>>
>>
>> 
>> 1.11.3
>> 2.11
>> 
>> 
>>
>> 
>> org.apache.flink
>> flink-connector-elasticsearch7_2.11
>> 1.10.0
>> 
>>
>> 
>> org.apache.flink
>> flink-java
>> ${flink.version}
>> 
>> 
>> org.apache.flink
>> flink-java
>> ${flink.version}
>> 
>>
>> 
>> org.apache.flink
>> flink-streaming-java_2.11
>> ${flink.version}
>> 
>>
>> 
>> org.apache.flink
>> 
>> flink-statebackend-rocksdb_${scala.version}
>> ${flink.version}
>> 
>>
>> 
>> org.apache.flink
>> flink-clients_${scala.version}
>> ${flink.version}
>> 
>> 
>> org.apache.flink
>> flink-core
>> ${flink.version}
>> 
>> 
>> org.apache.flink
>> flink-avro
>> ${flink.version}
>> 
>> 
>> org.apache.flink
>> 
>> flink-connector-kafka-0.11_${scala.version}
>> ${flink.version}
>> 
>> 
>> org.apache.flink
>> flink-test-utils_${scala.version}
>> ${flink.version}
>> 
>> 
>> nl.basjes.parse.useragent
>> yauaa
>> 1.3
>> 
>>
>> 
>> com.googlecode.json-simple
>> json-simple
>> 1.1
>> 
>> 
>> de.javakaffee
>> kryo-serializers
>> 0.38
>> 
>> 
>> com.github.wnameless
>> json-flattener
>> 0.5.0
>> 
>> 
>> joda-time
>> joda-time
>> 2.9.1
>> 
>> 
>> com.google.code.gson
>> gson
>> 2.2.4
>> 
>> 
>> org.json
>> json
>> 20200518
>>
>> 
>> 
>> org.apache.hadoop
>> hadoop-common
>> 3.2.0
>> 
>> 
>> org.apache.hadoop
>> hadoop-mapreduce-client-core
>> 3.2.0
>> 
>>
>>
>> 
>> 
>> 
>> spring-repo
>> https://repo1.maven.org/maven2/
>> 
>> 
>>
>>
>> 
>> 
>>
>> 
>>     org.apache.maven.plugins
>> maven-compiler-plugin
>> 3.1
>> 
>> 1

Re: flink : Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerContextShutdownAware

2021-09-14 Thread David Morávek
From the stacktrace you've shared in the previous email, it seems that
you're running the code from the IDE, is that correct?

This is the part that makes me assume that, because it's touching files
from the local maven repository.

SLF4J: Found binding in
[jar:file:/Users/z004t01/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]

Please note that IDE most likely doesn't use the resulting fat jar for
running your program, but instead constructs the classpath from the
dependency graph. This most likely comes as a transitive dependency from one
of the hadoop deps, so you can try to exclude it there directly. You can
use mvn dependency:tree to verify the exclusion.
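
For illustration, such an exclusion would look roughly like this (a sketch;
hadoop-common is only a guess, dependency:tree will show the actual path):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.2.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>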

Best,
D.

On Tue, Sep 14, 2021 at 2:15 PM Ragini Manjaiah 
wrote:

> Hi David,
> please find my pom.xml . where I have excluded the slf4j-log4j12
> dependency . even after excluding encountering this issue
>
> 
> http://maven.apache.org/POM/4.0.0;
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
> http://maven.apache.org/xsd/maven-4.0.0.xsd;>
> 4.0.0
>
> flinkTest
> *
> 1.0-SNAPSHOT
>
>
> 
> 1.11.3
> 2.11
> 
> 
>
> 
> org.apache.flink
> flink-connector-elasticsearch7_2.11
> 1.10.0
> 
>
> 
> org.apache.flink
> flink-java
> ${flink.version}
> 
> 
> org.apache.flink
> flink-java
> ${flink.version}
> 
>
> 
> org.apache.flink
> flink-streaming-java_2.11
> ${flink.version}
> 
>
> 
> org.apache.flink
> 
> flink-statebackend-rocksdb_${scala.version}
> ${flink.version}
> 
>
> 
> org.apache.flink
> flink-clients_${scala.version}
> ${flink.version}
> 
> 
> org.apache.flink
> flink-core
> ${flink.version}
> 
> 
> org.apache.flink
> flink-avro
> ${flink.version}
> 
> 
> org.apache.flink
> 
> flink-connector-kafka-0.11_${scala.version}
> ${flink.version}
> 
> 
> org.apache.flink
> flink-test-utils_${scala.version}
> ${flink.version}
> 
> 
> nl.basjes.parse.useragent
> yauaa
> 1.3
> 
>
> 
> com.googlecode.json-simple
> json-simple
> 1.1
> 
> 
> de.javakaffee
> kryo-serializers
> 0.38
> 
> 
> com.github.wnameless
> json-flattener
> 0.5.0
> 
> 
> joda-time
> joda-time
> 2.9.1
> 
> 
> com.google.code.gson
> gson
> 2.2.4
> 
> 
> org.json
> json
> 20200518
>
> 
> 
> org.apache.hadoop
> hadoop-common
> 3.2.0
> 
> 
> org.apache.hadoop
> hadoop-mapreduce-client-core
> 3.2.0
> 
>
>
> 
> 
> 
> spring-repo
> https://repo1.maven.org/maven2/
> 
> 
>
>
> 
> 
>
> 
> org.apache.maven.plugins
> maven-compiler-plugin
> 3.1
> 
> 1.8
> 1.8
> 
> 
>
> 
>
> 
> 
> 
> org.apache.maven.plugins
> maven-shade-plugin
> 3.0.0
> 
>     
> 
> package
> 
> shade
> 
> 
> 
> 
> 
> org.apache.flink:force-shading
> 
> com.google.code.findbugs:jsr305
>  

Re: flink : Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerContextShutdownAware

2021-09-14 Thread Ragini Manjaiah
Hi David,
please find my pom.xml . where I have excluded the slf4j-log4j12 dependency
. even after excluding encountering this issue


http://maven.apache.org/POM/4.0.0;
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
4.0.0

flinkTest
*
1.0-SNAPSHOT



1.11.3
2.11




org.apache.flink
flink-connector-elasticsearch7_2.11
1.10.0



org.apache.flink
flink-java
${flink.version}


org.apache.flink
flink-java
${flink.version}



org.apache.flink
flink-streaming-java_2.11
${flink.version}



org.apache.flink
flink-statebackend-rocksdb_${scala.version}
${flink.version}



org.apache.flink
flink-clients_${scala.version}
${flink.version}


org.apache.flink
flink-core
${flink.version}


org.apache.flink
flink-avro
${flink.version}


org.apache.flink
flink-connector-kafka-0.11_${scala.version}
${flink.version}


org.apache.flink
flink-test-utils_${scala.version}
${flink.version}


nl.basjes.parse.useragent
yauaa
1.3



com.googlecode.json-simple
json-simple
1.1


de.javakaffee
kryo-serializers
0.38


com.github.wnameless
json-flattener
0.5.0


joda-time
joda-time
2.9.1


com.google.code.gson
gson
2.2.4


org.json
json
20200518



org.apache.hadoop
hadoop-common
3.2.0


org.apache.hadoop
hadoop-mapreduce-client-core
3.2.0






spring-repo
https://repo1.maven.org/maven2/








org.apache.maven.plugins
maven-compiler-plugin
3.1

1.8
1.8








org.apache.maven.plugins
maven-shade-plugin
3.0.0



package

shade





org.apache.flink:force-shading

com.google.code.findbugs:jsr305
org.slf4j:*
    log4j:*
org.slf4j:slf4j-api
org.slf4j:slf4j-log4j12
log4j:log4j






*:*

META-INF/*.SF
META-INF/*.DSA
META-INF/*.RSA






org.sapphire.watchtower.Application













maven-surefire-plugin
2.12.3


maven-failsafe-plugin
2.12.3




org.eclipse.m2e
lifecycle-mapping
1.0.0






org.apache.maven.plugins

maven-shade-plugin
[3.0.0,)


Re: flink : Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerContextShutdownAware

2021-09-14 Thread David Morávek
Hi Ragini,

I think you actually have the opposite problem: your classpath contains an
slf4j binding for log4j 1.2, which is no longer supported. Can you try
getting rid of the slf4j-log4j12 dependency?

Best,
D.

On Tue, Sep 14, 2021 at 1:51 PM Ragini Manjaiah 
wrote:

> When I try to run a Flink 1.13 application, I encounter the issue mentioned
> below. What dependency am I missing? Can you please help me?
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/Users/z004t01/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.12.1/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/Users/z004t01/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type
> [org.apache.logging.slf4j.Log4jLoggerFactory]
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.createContextSelector(Log4jContextFactory.java:106)
> at
> org.apache.logging.log4j.core.impl.Log4jContextFactory.<init>(Log4jContextFactory.java:59)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at org.apache.logging.log4j.LogManager.<clinit>(LogManager.java:94)
> at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:122)
> at
> org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
> at
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
> at
> org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> at
> org.apache.flink.configuration.Configuration.<clinit>(Configuration.java:67)
> at
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createLocalEnvironment(StreamExecutionEnvironment.java:1972)
> at
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createLocalEnvironment(StreamExecutionEnvironment.java:1958)
> at java.util.Optional.orElseGet(Optional.java:267)
> at
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1945)
> at org.sapphire.watchtower.Application.main(Application.java:63)
> Caused by: java.lang.ClassNotFoundException:
> org.apache.logging.log4j.spi.LoggerContextShutdownAware
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> ... 32 more
>


flink : Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/logging/log4j/spi/LoggerContextShutdownAware

2021-09-14 Thread Ragini Manjaiah
When I try to run a Flink 1.13 application, I encounter the issue mentioned
below. What dependency am I missing? Can you please help me?


SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/Users/z004t01/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.12.1/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/Users/z004t01/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type
[org.apache.logging.slf4j.Log4jLoggerFactory]
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.createContextSelector(Log4jContextFactory.java:106)
at
org.apache.logging.log4j.core.impl.Log4jContextFactory.<init>(Log4jContextFactory.java:59)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.apache.logging.log4j.LogManager.<clinit>(LogManager.java:94)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:122)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
at
org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:46)
at
org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
at
org.apache.flink.configuration.Configuration.<clinit>(Configuration.java:67)
at
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createLocalEnvironment(StreamExecutionEnvironment.java:1972)
at
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.createLocalEnvironment(StreamExecutionEnvironment.java:1958)
at java.util.Optional.orElseGet(Optional.java:267)
at
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(StreamExecutionEnvironment.java:1945)
at org.sapphire.watchtower.Application.main(Application.java:63)
Caused by: java.lang.ClassNotFoundException:
org.apache.logging.log4j.spi.LoggerContextShutdownAware
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 32 more


flink on yarn: log4j question (original subject garbled by encoding)

2021-07-22 Thread comsir
hi all
[The body of this message was garbled by a character-encoding problem; only
that it asks about log4j configuration for Flink on YARN is recoverable.]

Re: Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Yang Wang
For the details, you can look at the code of the YarnClusterDescriptor and YarnLogConfigUtil classes.
They contain the logic for discovering the log4j configuration file and for registering it as a
LocalResource so that YARN distributes the configuration.

Best,
Yang

Bobby <1010445...@qq.com> wrote on Mon, 18 Jan 2021 at 23:17:

> First of all, thanks for the solution; I'll go and try it.
>
>
> Regarding the statement that "when deploying on YARN, the resource is shipped based on the
> file name log4j.properties, so you cannot manually specify a different file": how should I
> understand that? Could you point me to related material so I can study the actual Flink
> on YARN deployment logic?
>
> thx.
>
>
> Yang Wang wrote
> > When deploying on YARN, the log4j.properties file name is what is used to ship the
> > resource, so you cannot manually specify a different file.
> >
> > But you can export a FLINK_CONF_DIR=/path/of/your/flink-conf environment variable
> > and put your own flink-conf.yaml and log4j.properties in that directory.
> >
> > Best,
> > Yang
> >
> > Bobby <
>
> > 1010445050@
>
> >> 于2021年1月18日周一 下午7:18写道:
> >
> >> With Flink on YARN, the logging configuration defaults to the log4j.properties in flink/conf.
> >> Is there a way to specify my own log4j.properties when submitting a Flink job?
> >> Thanks.
> >>
> >>
> >> Flink version: 1.9.1
> >> Deployment mode: Flink on YARN
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-flink.147419.n8.nabble.com/
> >>
>
>
>
>
>
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/


Re: Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Bobby
First of all, thanks for the solution; I'll go and try it.

Regarding the statement that "when deploying on YARN, the resource is shipped based on the
file name log4j.properties, so you cannot manually specify a different file": how should I
understand that? Could you point me to related material so I can study the actual Flink
on YARN deployment logic?

thx.


Yang Wang wrote
> When deploying on YARN, the log4j.properties file name is what is used to ship the
> resource, so you cannot manually specify a different file.
>
> But you can export a FLINK_CONF_DIR=/path/of/your/flink-conf environment variable
> and put your own flink-conf.yaml and log4j.properties in that directory.
> 
> Best,
> Yang
> 
> Bobby <

> 1010445050@

>> wrote on Mon, 18 Jan 2021 at 19:18:
> 
>> With Flink on YARN, the logging configuration defaults to the log4j.properties in flink/conf.
>> Is there a way to specify my own log4j.properties when submitting a Flink job?
>> Thanks.
>>
>>
>> Flink version: 1.9.1
>> Deployment mode: Flink on YARN
>>
>>
>>
>> --
>> Sent from: http://apache-flink.147419.n8.nabble.com/
>>





--
Sent from: http://apache-flink.147419.n8.nabble.com/

Re: Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Bobby
11



--
Sent from: http://apache-flink.147419.n8.nabble.com/


Re: Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Bobby
First of all, thanks for the solution; I'll go and try it.

Regarding the statement that "when deploying on YARN, the resource is shipped based on the
file name log4j.properties, so you cannot manually specify a different file": how should I
understand that? Could you point me to related material so I can study the actual Flink
on YARN deployment logic?

thx.



--
Sent from: http://apache-flink.147419.n8.nabble.com/

Re: Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Yang Wang
When deploying on YARN, the log4j.properties file name is what is used to ship the
resource, so you cannot manually specify a different file.

But you can export a FLINK_CONF_DIR=/path/of/your/flink-conf environment variable
and put your own flink-conf.yaml and log4j.properties in that directory.
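
For example (a sketch; the path is a placeholder):

export FLINK_CONF_DIR=/path/of/your/flink-conf
# put your own flink-conf.yaml and log4j.properties into that directory,
# then submit as usual; the client ships the log4j.properties it finds there
bin/flink run -m yarn-cluster ./my-job.jar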

Best,
Yang

Bobby <1010445...@qq.com> wrote on Mon, 18 Jan 2021 at 19:18:

> With Flink on YARN, the logging configuration defaults to the log4j.properties in flink/conf.
> Is there a way to specify my own log4j.properties when submitting a Flink job?
> Thanks.
>
>
> Flink version: 1.9.1
> Deployment mode: Flink on YARN
>
>
>
> --
> Sent from: http://apache-flink.147419.n8.nabble.com/
>


Flink on YARN deployment: how to specify a custom log4j configuration when submitting a Flink job

2021-01-18 Thread Bobby
With Flink on YARN, the logging configuration defaults to the log4j.properties in flink/conf.
Is there a way to specify my own log4j.properties when submitting a Flink job?
Thanks.


Flink version: 1.9.1
Deployment mode: Flink on YARN



--
Sent from: http://apache-flink.147419.n8.nabble.com/


Re: flink 1.11.1: how to make multiple log4j configuration files take effect

2021-01-13 Thread 赵一旦
My personal view: this should not be possible. The job you submit is ultimately packaged
and shipped to the TaskManagers for execution, and it uses the TaskManagers' logging
configuration, not your own. Your own configuration only takes effect when you start the
job locally for debugging.

nicygan wrote on Wed, 13 Jan 2021 at 09:55:

> dear all:
>  My Flink job is submitted to YARN to run.
>  By default, the logging configuration that takes effect is the log4j.properties in flink/conf.
>  But my application jar also contains a log4j2.xml, which configures a KafkaAppender to send logs to Kafka.
>  How should I set things up so that both configuration files take effect?
>  Does anyone have experience with this configuration?
>
>
>
> thanks
> by nicygan
>


flink 1.11.1: how to make multiple log4j configuration files take effect

2021-01-12 Thread nicygan
dear all:
 My Flink job is submitted to YARN to run.
 By default, the logging configuration that takes effect is the log4j.properties in flink/conf.
 But my application jar also contains a log4j2.xml, which configures a KafkaAppender to send logs to Kafka.
 How should I set things up so that both configuration files take effect?
 Does anyone have experience with this configuration?



thanks
by nicygan


Re: Tracking ID in log4j MDC

2020-12-02 Thread Till Rohrmann
Hi Anil,

Flink does not maintain the MDC context between threads. Hence, I don't
think that it is possible w/o changes to Flink.

One note: if operators are chained, then they are run by the same thread.
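
As a workaround sketch (assuming the tracking id has already been copied from
the Kafka headers into the event itself; MyEvent, getTrackingId and isValid are
illustrative names), each function in the chain can re-establish the MDC entry
around its own processing:

import org.apache.flink.api.common.functions.FilterFunction;
import org.slf4j.MDC;

public class TrackedFilter implements FilterFunction<MyEvent> {
    @Override
    public boolean filter(MyEvent event) {
        MDC.put("trackingId", event.getTrackingId()); // re-establish the id per record
        try {
            return event.isValid(); // any logging in here sees the MDC entry
        } finally {
            MDC.remove("trackingId"); // don't leak the id into unrelated log lines
        }
    }
}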

Cheers,
Till

On Wed, Dec 2, 2020 at 7:22 AM Anil K  wrote:

> Hi All,
>
> Is it possible to have a tracking id in MDC that will be shared across
> chained user-defined operations like Filter, KeySelector, FlatMap,
> ProcessFunction, and Producer?
>
> The tracking id will be read from the headers of the Kafka message; if possible,
> I plan to set it in the log4j MDC. Right now I am seeing that the tracking id is
> not getting propagated to the next function.
>
> I am using flink 1.9 running in k8.
>
> Thanks, Anil
>


Tracking ID in log4j MDC

2020-12-01 Thread Anil K
Hi All,

Is it possible to have a tracking id in MDC that will be shared across
chained user-defined operations like Filter, KeySelector, FlatMap,
ProcessFunction, and Producer?

The tracking id will be read from the headers of the Kafka message; if possible,
I plan to set it in the log4j MDC. Right now I am seeing that the tracking id is
not getting propagated to the next function.

I am using flink 1.9 running in k8.

Thanks, Anil


Re: Use logback instead of log4j

2019-08-25 Thread Vishwas Siravara
Any idea on how I can use logback instead?

On Fri, Aug 23, 2019 at 1:22 PM Vishwas Siravara 
wrote:

> Hi,
> From the flink doc, in order to use logback instead of log4j: "Users
> willing to use logback instead of log4j can just exclude log4j (or delete
> it from the lib/ folder)."
> https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html
>  .
>
> However, when I delete it from lib/ and start the cluster, there are no
> logs generated; instead I see a console message which says "Failed to
> instantiate SLF4J LoggerFactory"
>
> Reported exception:
> java.lang.NoClassDefFoundError: org/apache/log4j/Level
> at org.slf4j.LoggerFactory.bind(LoggerFactory.java:143)
> at 
> org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:122)
> at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:378)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:328)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.<clinit>(ClusterEntrypoint.java:98)
>
>
> How can I use logback instead ?
>
>
> Thanks,
> Vishwas
>
>


Use logback instead of log4j

2019-08-23 Thread Vishwas Siravara
Hi,
From the flink doc, in order to use logback instead of log4j: "Users
willing to use logback instead of log4j can just exclude log4j (or delete
it from the lib/ folder)."
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html
 .

However, when I delete it from lib/ and start the cluster, there are no
logs generated; instead I see a console message which says "Failed to
instantiate SLF4J LoggerFactory"

Reported exception:
java.lang.NoClassDefFoundError: org/apache/log4j/Level
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:143)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:122)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:378)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:328)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.<clinit>(ClusterEntrypoint.java:98)


How can I use logback instead ?


Thanks,
Vishwas


Re: Help Required for Log4J

2018-03-20 Thread Puneet Kinra
Hi

Fabian, thanks for the reply. I fixed the issue that I was facing.
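
For anyone hitting the same symptom: the usual log4j 1.x fix is to disable
additivity, so that events sent to the custom logger stop propagating to the
root logger's appender as well (a sketch against the configuration quoted
below):

log4j.logger.amssource=DEBUG, amssourceAppender
log4j.additivity.amssource=false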

On Tue, Mar 20, 2018 at 7:31 PM, Fabian Hueske <fhue...@gmail.com> wrote:

> Hi,
>
> TBH, I don't have much experience with logging, but you might want to
> consider using Side Outputs [1] to route invalid records into a separate
> stream.
> The stream can then separately handled, be written to files or Kafka or
> wherever.
>
> Best,
> Fabian
>
> [1] https://ci.apache.org/projects/flink/flink-docs-
> release-1.4/dev/stream/side_output.html
>
> 2018-03-20 10:36 GMT+01:00 Puneet Kinra <puneet.ki...@customercentria.com>
> :
>
>> Hi
>>
>> I have a use case in which I want to log bad records to a log file. I
>> have configured log4j and the log file is getting generated, but the
>> records are also going to the Flink logs. I want to detach them from the
>> Flink logs and write them only to the dedicated log file.
>>
>> .Here is configuration
>> *(Note :AMSSource is the custom written adaptor here)*
>>
>> # This affects logging for both user code and Flink
>> log4j.rootLogger=INFO, file
>> log4j.logger.amssource=DEBUG, amssourceAppender
>>
>> # Uncomment this if you want to _only_ change Flink's logging
>> #log4j.logger.org.apache.flink=INFO
>>
>> # The following lines keep the log level of common libraries/connectors on
>> # log level INFO. The root logger does not override this. You have to
>> manually
>> # change the log levels here.
>> log4j.logger.akka=INFO
>> log4j.logger.org.apache.kafka=INFO
>> log4j.logger.org.apache.hadoop=INFO
>> log4j.logger.org.apache.zookeeper=INFO
>>
>> # Log all infos in the given file
>> log4j.appender.file=org.apache.log4j.FileAppender
>> log4j.appender.file.file=D:\\logs\\flink-log
>> log4j.appender.file.append=false
>> log4j.appender.file.layout=org.apache.log4j.PatternLayout
>> log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS}
>> %-5p %-60c %x - %m%n
>>
>> # Suppress the irrelevant (wrong) warnings from the Netty channel handler
>> log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.ch
>> annel.DefaultChannelPipeline=ERROR, file
>>
>>
>> #BonusPointAppender
>> log4j.appender.bonuspointAppender=org.apache.log4j.RollingFileAppender
>> log4j.appender.bonuspointAppender.MaxFileSize=1024MB
>> log4j.appender.bonuspointAppender.MaxBackupIndex=10
>> log4j.appender.bonuspointAppender.Append=true
>> log4j.appender.bonuspointAppender.File=D:\\logs\\flink-bpuser-bonus.logs
>> #log4j.appender.bonuspointAppender.DatePattern='.'yyyy-MM-dd
>> log4j.appender.bonuspointAppender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.bonuspointAppender.layout.ConversionPattern=%d [%t] %-5p
>> (%C %M:%L) %x - %m%n
>>
>> #AMSSourceAppender
>> log4j.appender.amssourceAppender=org.apache.log4j.RollingFileAppender
>> log4j.appender.amssourceAppender.MaxFileSize=1024MB
>> log4j.appender.amssourceAppender.MaxBackupIndex=10
>> log4j.appender.amssourceAppender.Append=true
>> log4j.appender.amssourceAppender.File=D:\\logs\\flink-bpuser
>> -bonus-amssource.logs
>> #log4j.appender.amssourceAppender.DatePattern='.'yyyy-MM-dd
>> log4j.appender.amssourceAppender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.amssourceAppender.layout.ConversionPattern=%d [%t] %-5p
>> (%C %M:%L) %x - %m%n
>>
>>
>>
>>
>> --
>> *Cheers *
>>
>> *Puneet Kinra*
>>
>> *Mobile:+918800167808 <+91%2088001%2067808> | Skype :
>> puneet.ki...@customercentria.com <puneet.ki...@customercentria.com>*
>>
>> *e-mail :puneet.ki...@customercentria.com
>> <puneet.ki...@customercentria.com>*
>>
>>
>>
>


-- 
*Cheers *

*Puneet Kinra*

*Mobile:+918800167808 | Skype : puneet.ki...@customercentria.com
<puneet.ki...@customercentria.com>*

*e-mail :puneet.ki...@customercentria.com
<puneet.ki...@customercentria.com>*


Re: Help Required for Log4J

2018-03-20 Thread Fabian Hueske
Hi,

TBH, I don't have much experience with logging, but you might want to
consider using Side Outputs [1] to route invalid records into a separate
stream.
The stream can then be handled separately, e.g. written to files or Kafka or
wherever.

Best,
Fabian

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/side_output.html
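
A minimal sketch of the idea (assuming an input of type DataStream<String>
named input; parse failures go to the side output instead of being logged):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

final OutputTag<String> badRecords = new OutputTag<String>("bad-records") {};

SingleOutputStreamOperator<Integer> parsed = input
        .process(new ProcessFunction<String, Integer>() {
            @Override
            public void processElement(String value, Context ctx, Collector<Integer> out) {
                try {
                    out.collect(Integer.parseInt(value));
                } catch (NumberFormatException e) {
                    ctx.output(badRecords, value); // invalid record goes to the side stream
                }
            }
        });

DataStream<String> bad = parsed.getSideOutput(badRecords); // handle/write separately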

2018-03-20 10:36 GMT+01:00 Puneet Kinra <puneet.ki...@customercentria.com>:

> Hi
>
> I have a use case in which I want to log bad records to a log file. I
> have configured log4j and the log file is getting generated, but the
> records are also going to the Flink logs. I want to detach them from the
> Flink logs and write them only to the dedicated log file.
>
> .Here is configuration
> *(Note :AMSSource is the custom written adaptor here)*
>
> # This affects logging for both user code and Flink
> log4j.rootLogger=INFO, file
> log4j.logger.amssource=DEBUG, amssourceAppender
>
> # Uncomment this if you want to _only_ change Flink's logging
> #log4j.logger.org.apache.flink=INFO
>
> # The following lines keep the log level of common libraries/connectors on
> # log level INFO. The root logger does not override this. You have to
> manually
> # change the log levels here.
> log4j.logger.akka=INFO
> log4j.logger.org.apache.kafka=INFO
> log4j.logger.org.apache.hadoop=INFO
> log4j.logger.org.apache.zookeeper=INFO
>
> # Log all infos in the given file
> log4j.appender.file=org.apache.log4j.FileAppender
> log4j.appender.file.file=D:\\logs\\flink-log
> log4j.appender.file.append=false
> log4j.appender.file.layout=org.apache.log4j.PatternLayout
> log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS}
> %-5p %-60c %x - %m%n
>
> # Suppress the irrelevant (wrong) warnings from the Netty channel handler
> log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.ch
> annel.DefaultChannelPipeline=ERROR, file
>
>
> #BonusPointAppender
> log4j.appender.bonuspointAppender=org.apache.log4j.RollingFileAppender
> log4j.appender.bonuspointAppender.MaxFileSize=1024MB
> log4j.appender.bonuspointAppender.MaxBackupIndex=10
> log4j.appender.bonuspointAppender.Append=true
> log4j.appender.bonuspointAppender.File=D:\\logs\\flink-bpuser-bonus.logs
> #log4j.appender.bonuspointAppender.DatePattern='.'yyyy-MM-dd
> log4j.appender.bonuspointAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.bonuspointAppender.layout.ConversionPattern=%d [%t] %-5p
> (%C %M:%L) %x - %m%n
>
> #AMSSourceAppender
> log4j.appender.amssourceAppender=org.apache.log4j.RollingFileAppender
> log4j.appender.amssourceAppender.MaxFileSize=1024MB
> log4j.appender.amssourceAppender.MaxBackupIndex=10
> log4j.appender.amssourceAppender.Append=true
> log4j.appender.amssourceAppender.File=D:\\logs\\flink-
> bpuser-bonus-amssource.logs
> #log4j.appender.amssourceAppender.DatePattern='.'yyyy-MM-dd
> log4j.appender.amssourceAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.amssourceAppender.layout.ConversionPattern=%d [%t] %-5p
> (%C %M:%L) %x - %m%n
>
>
>
>
> --
> *Cheers *
>
> *Puneet Kinra*
>
> *Mobile:+918800167808 <+91%2088001%2067808> | Skype :
> puneet.ki...@customercentria.com <puneet.ki...@customercentria.com>*
>
> *e-mail :puneet.ki...@customercentria.com
> <puneet.ki...@customercentria.com>*
>
>
>


Help Required for Log4J

2018-03-20 Thread Puneet Kinra
Hi

I have a use case in which I want to log bad records to a log file. I
have configured log4j and the log file is getting generated, but the
records are also going to the Flink logs. I want to detach them from the
Flink logs and write them only to the dedicated log file.

.Here is configuration
*(Note :AMSSource is the custom written adaptor here)*

# This affects logging for both user code and Flink
log4j.rootLogger=INFO, file
log4j.logger.amssource=DEBUG, amssourceAppender

# Uncomment this if you want to _only_ change Flink's logging
#log4j.logger.org.apache.flink=INFO

# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to
manually
# change the log levels here.
log4j.logger.akka=INFO
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=INFO
log4j.logger.org.apache.zookeeper=INFO

# Log all infos in the given file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=D:\\logs\\flink-log
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS}
%-5p %-60c %x - %m%n

# Suppress the irrelevant (wrong) warnings from the Netty channel handler
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR,
file


#BonusPointAppender
log4j.appender.bonuspointAppender=org.apache.log4j.RollingFileAppender
log4j.appender.bonuspointAppender.MaxFileSize=1024MB
log4j.appender.bonuspointAppender.MaxBackupIndex=10
log4j.appender.bonuspointAppender.Append=true
log4j.appender.bonuspointAppender.File=D:\\logs\\flink-bpuser-bonus.logs
#log4j.appender.bonuspointAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.bonuspointAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.bonuspointAppender.layout.ConversionPattern=%d [%t] %-5p (%C
%M:%L) %x - %m%n

#AMSSourceAppender
log4j.appender.amssourceAppender=org.apache.log4j.RollingFileAppender
log4j.appender.amssourceAppender.MaxFileSize=1024MB
log4j.appender.amssourceAppender.MaxBackupIndex=10
log4j.appender.amssourceAppender.Append=true
log4j.appender.amssourceAppender.File=D:\\logs\\flink-bpuser-bonus-
amssource.logs
#log4j.appender.amssourceAppender.DatePattern='.'yyyy-MM-dd
log4j.appender.amssourceAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.amssourceAppender.layout.ConversionPattern=%d [%t] %-5p (%C
%M:%L) %x - %m%n




-- 
*Cheers *

*Puneet Kinra*

*Mobile:+918800167808 | Skype : puneet.ki...@customercentria.com
<puneet.ki...@customercentria.com>*

*e-mail :puneet.ki...@customercentria.com
<puneet.ki...@customercentria.com>*


Flink on AWS EMR - how to use flink-log4j configuration?

2018-02-01 Thread Ishwara Varnasi
I didn't find an example of the flink-log4j configuration while creating an EMR
cluster for running Flink. What should be passed to the "flink-log4j" config:
the actual log4j configuration, or a path to a file? Also, how do I see
application logs in EMR?
thanks
Ishwara Varnasi


Re: Log4J

2017-02-20 Thread Stephan Ewen
How about adding this to the "logging" docs: a section on how to run log4j2.

On Mon, Feb 20, 2017 at 8:50 AM, Robert Metzger <rmetz...@apache.org> wrote:

> Hi Chet,
>
> These are the files I have in my lib/ folder with the working log4j2
> integration:
>
> -rw-r--r--  1 robert robert 79966937 Oct 10 13:49 flink-dist_2.10-1.1.3.jar
> -rw-r--r--  1 robert robert90883 Dec  9 20:13
> flink-python_2.10-1.1.3.jar
> -rw-r--r--  1 robert robert60547 Dec  9 18:45 log4j-1.2-api-2.7.jar
> -rw-rw-r--  1 robert robert  1638598 Oct 22 16:08
> log4j2-gelf-1.3.1-shaded.jar
> -rw-rw-r--  1 robert robert 1056 Dec  9 20:12 log4j2.properties
> -rw-r--r--  1 robert robert   219001 Dec  9 18:45 log4j-api-2.7.jar
> -rw-r--r--  1 robert robert  1296865 Dec  9 18:45 log4j-core-2.7.jar
> -rw-r--r--  1 robert robert22918 Dec  9 18:46 log4j-slf4j-impl-2.7.jar
>
> You don't need the "log4j2-gelf-1.3.1-shaded.jar", that's a GELF appender
> for Graylog2.
>
> On Mon, Feb 20, 2017 at 5:41 AM, Chet Masterson <chet.master...@yandex.com
> > wrote:
>
>> I read through the link you provided, Stephan. However, I am still
>> confused. The instructions mention specific jar files for Logback, I am not
>> sure which of the log4j 2.x jars I need to put in the the flink /lib
>> directory. I tried various combinations of log4j-1.2-api-2.8.jar,
>> log4j-slf4j-impl-2.8.jar, log4j-to-slf4j-2.8.jar, and renamed the stock
>> log4j-1.2.17.jar and slf4j-log4j12-1.7.7.jar, but then the job manager
>> would not start, and threw a 'NoClassDefFoundError:
>> org/apache/logging/log4j/LogManager'. And this is without deploying my
>> job out there, so I don't think any of the "Use Logback when running Flink
>> out of the IDE / from a Java application" section instructions are relevant.
>>
>> Can someone be more specific how to do this? If I get it to work, I'll be
>> happy to formally document it in whatever format would help the project out
>> long term.
>>
>> Thanks!
>>
>>
>> 16.02.2017, 05:54, "Stephan Ewen" <se...@apache.org>:
>>
>> Hi!
>>
>> The bundled log4j version (1.x) does not support that.
>>
>> But you can replace the logging jars with those of a different framework
>> (like log4j 2.x), which supports changing the configuration without
>> stopping the application.
>> You don't need to rebuild flink, simply replace two jars in the "lib"
>> folder (and update the config file, because log4j 2.x has a different
>> config format).
>>
>> This guide shows how to swap log4j 1.x for logback, and you should be
>> able to swap in log4j 2.x in the exact same way.
>>
>> https://ci.apache.org/projects/flink/flink-docs-release-1.2/
>> monitoring/best_practices.html#use-logback-when-running-
>> flink-on-a-cluster
>>
>>
>> On Thu, Feb 16, 2017 at 5:20 AM, Chet Masterson <
>> chet.master...@yandex.com> wrote:
>>
>> Is there a way to reload a log4j.properties file without stopping and
>> starting the job server?
>>
>>
>


Re: Log4J

2017-02-16 Thread Robert Metzger
I've also (successfully) tried running Flink with log4j2 to connect it to
Graylog2. If I remember correctly, the biggest problem was "injecting" the
log4j2 properties file into the classpath (when running Flink on YARN).

Maybe you need to put the file into the lib/ folder, so that it is shipped
to all the nodes, and then loaded from the classpath (there is a special
name in the log4j2 documentation; if you use that name, it'll be loaded
from the classloader).

If you are running in standalone mode, you can just modify the scripts to
point the JVMs to the right config file.

On Thu, Feb 16, 2017 at 11:54 AM, Stephan Ewen <se...@apache.org> wrote:

> Hi!
>
> The bundled log4j version (1.x) does not support that.
>
> But you can replace the logging jars with those of a different framework
> (like log4j 2.x), which supports changing the configuration without
> stopping the application.
> You don't need to rebuild flink, simply replace two jars in the "lib"
> folder (and update the config file, because log4j 2.x has a different
> config format).
>
> This guide shows how to swap log4j 1.x for logback, and you should be able
> to swap in log4j 2.x in the exact same way.
>
> https://ci.apache.org/projects/flink/flink-docs-
> release-1.2/monitoring/best_practices.html#use-logback-
> when-running-flink-on-a-cluster
>
>
> On Thu, Feb 16, 2017 at 5:20 AM, Chet Masterson <chet.master...@yandex.com
> > wrote:
>
>> Is there a way to reload a log4j.properties file without stopping and
>> starting the job server?
>>
>
>


Re: Log4J

2017-02-16 Thread Stephan Ewen
Hi!

The bundled log4j version (1.x) does not support that.

But you can replace the logging jars with those of a different framework
(like log4j 2.x), which supports changing the configuration without
stopping the application.
You don't need to rebuild flink, simply replace two jars in the "lib"
folder (and update the config file, because log4j 2.x has a different
config format).

This guide shows how to swap log4j 1.x for logback, and you should be able
to swap in log4j 2.x in the exact same way.

https://ci.apache.org/projects/flink/flink-docs-release-1.2/monitoring/best_practices.html#use-logback-when-running-flink-on-a-cluster
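
For reference, a minimal log4j2.properties sketch in the 2.x format (assuming
the usual file-based setup; ${sys:log.file} is the placeholder the Flink
scripts pass in):

rootLogger.level = INFO
rootLogger.appenderRef.main.ref = MainAppender

appender.main.name = MainAppender
appender.main.type = File
appender.main.fileName = ${sys:log.file}
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n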


On Thu, Feb 16, 2017 at 5:20 AM, Chet Masterson <chet.master...@yandex.com>
wrote:

> Is there a way to reload a log4j.properties file without stopping and
> starting the job server?
>


Log4J

2017-02-15 Thread Chet Masterson
Is there a way to reload a log4j.properties file without stopping and starting the job server?


Re: Log4j configuration on YARN

2016-03-14 Thread Robert Metzger
Hi Nick,

the name of the "log4j-yarn-session.properties" file might be a bit
misleading. The file is just used for the YARN session client, running
locally.
The Job- and TaskManager are going to use the log4j.properties on the
cluster.
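
So for the DEBUG goal, extending conf/log4j.properties on the cluster should
be enough, e.g.:

log4j.logger.com.mycompany=DEBUG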

On Fri, Mar 11, 2016 at 7:20 PM, Ufuk Celebi <u...@apache.org> wrote:

> Hey Nick!
>
> I just checked and the conf/log4j.properties file is copied and is
> given as an argument to the JVM.
>
> You should see the following:
> - client logs that the conf/log4j.properties file is copied
> - JobManager logs show log4j.configuration being passed to the JVM.
>
> Can you confirm that these show up? If yes, but you still don't get
> the expected logging, I would check via -Dlog4j.debug what is
> configured (prints to stdout I think). Does this help?
>
> – Ufuk
>
>
> On Fri, Mar 11, 2016 at 6:02 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
> > Can anyone tell me where I must place my application-specific
> > log4j.properties to have them honored when running on a YARN cluster?
> > Putting them in my application jar doesn't work. Editing the log4j files
> > under flink/conf doesn't work either.
> >
> > My goal is to set the log level for 'com.mycompany' classes used in my
> flink
> > application to DEBUG.
> >
> > Thanks,
> > Nick
> >
>


Re: Log4j configuration on YARN

2016-03-11 Thread Ufuk Celebi
Hey Nick!

I just checked and the conf/log4j.properties file is copied and is
given as an argument to the JVM.

You should see the following:
- client logs that the conf/log4j.properties file is copied
- JobManager logs show log4j.configuration being passed to the JVM.

Can you confirm that these show up? If yes, but you still don't get
the expected logging, I would check via -Dlog4j.debug what is
configured (prints to stdout I think). Does this help?

– Ufuk


On Fri, Mar 11, 2016 at 6:02 PM, Nick Dimiduk <ndimi...@gmail.com> wrote:
> Can anyone tell me where I must place my application-specific
> log4j.properties to have them honored when running on a YARN cluster?
> Putting them in my application jar doesn't work. Editing the log4j files
> under flink/conf doesn't work either.
>
> My goal is to set the log level for 'com.mycompany' classes used in my flink
> application to DEBUG.
>
> Thanks,
> Nick
>


Log4j configuration on YARN

2016-03-11 Thread Nick Dimiduk
Can anyone tell me where I must place my application-specific
log4j.properties to have them honored when running on a YARN cluster?
Putting them in my application jar doesn't work. Editing the log4j files
under flink/conf doesn't work either.

My goal is to set the log level for 'com.mycompany' classes used in my
flink application to DEBUG.

Thanks,
Nick


Re: Configure log4j with XML files

2015-12-21 Thread Till Rohrmann
Hi Gwenhaël,

as far as I know, there is no direct way to do so. You can either adapt the
flink-daemon.sh script in line 68 to use a different configuration, or you
can test whether the dynamic property -Dlog4j.configurationFile=CONFIG_FILE
overrides the -Dlog4j.configuration property. You can set the dynamic
property using Flink’s env.java.opts configuration parameter.
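
In flink-conf.yaml that would look like this (a sketch; as said, whether it
really overrides the hardcoded setting needs testing):

env.java.opts: -Dlog4j.configurationFile=file:/path/to/log4j.xml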

Cheers,
Till

On Mon, Dec 21, 2015 at 3:34 PM, Gwenhael Pasquiers <
gwenhael.pasqui...@ericsson.com> wrote:

> Hi everybody,
>
>
>
> Could it be possible to have a way to configure log4j with xml files ?
>
>
>
> I’ve looked into the code and it looks like the properties files names are
> hardcoded. However we have the need to use xml :
>
> -  We log everything into ELK (Elasticsearch / Logstash / Kibana)
> using SocketAppender
>
> -  The socket appender is synchronous by default and slows the whole app
> if anything goes wrong with the ELK
>
> -  We usually add an AsyncAppender on top of the SocketAppender,
> but this sort of configuration is only possible using an XML config file…
>
>
>
> We’ve already ran into the issue. Everything was almost paused because the
> ELK was overloaded and extremely slow.
>
>
>
> B.R.
>
>
>
> Gwenhaël PASQUIERS
>


Re: Configure log4j with XML files

2015-12-21 Thread Robert Metzger
As an additional note: Flink is sending all files in the /lib folder to all
YARN containers. So you could place the XML file in "/lib" and override the
properties.

I think you need to delete the log4j properties from the conf/ directory;
then, at least on YARN, we'll not set the -Dlog4j.configuration property.
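
For completeness, a minimal log4j 1.x XML sketch of the
AsyncAppender-over-SocketAppender setup described below (host and port are
placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="socket" class="org.apache.log4j.net.SocketAppender">
    <param name="RemoteHost" value="logstash-host"/>
    <param name="Port" value="4560"/>
  </appender>
  <appender name="async" class="org.apache.log4j.AsyncAppender">
    <appender-ref ref="socket"/>
  </appender>
  <root>
    <priority value="INFO"/>
    <appender-ref ref="async"/>
  </root>
</log4j:configuration>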

On Mon, Dec 21, 2015 at 3:58 PM, Till Rohrmann <trohrm...@apache.org> wrote:

> Hi Gwenhaël,
>
> as far as I know, there is no direct way to do so. You can either adapt
> the flink-daemon.sh script in line 68 to use a different configuration or
> you can test whether the dynamic property
> -Dlog4j.configurationFile:CONFIG_FILE overrides the -Dlog4j.confguration
> property. You can set the dynamic property using Flink’s env.java.opts
> configuration parameter.
>
> Cheers,
> Till
>
>
> On Mon, Dec 21, 2015 at 3:34 PM, Gwenhael Pasquiers <
> gwenhael.pasqui...@ericsson.com> wrote:
>
>> Hi everybody,
>>
>>
>>
>> Could it be possible to have a way to configure log4j with xml files ?
>>
>>
>>
>> I’ve looked into the code and it looks like the properties files names
>> are hardcoded. However we have the need to use xml :
>>
>> -  We log everything into ELK (Elasticsearch / Logstash /
>> Kibana) using SocketAppender
>>
>> -  The socket appender is synchronous by default and slows the whole app
>> if anything goes wrong with the ELK
>>
>> -  We usually add an AsyncAppender on top of the SocketAppender,
>> but this sort of configuration is only possible using an XML config file…
>>
>>
>>
>> We’ve already ran into the issue. Everything was almost paused because
>> the ELK was overloaded and extremely slow.
>>
>>
>>
>> B.R.
>>
>>
>>
>> Gwenhaël PASQUIERS
>>
>
>