Yes, the dynamic log level modification worked great for me.
Thanks a lot,
Vadim
From: Biao Geng
Date: Tuesday, 14 May 2024 at 10:07
To: Vararu, Vadim
Cc: user@flink.apache.org
Subject: Re: Proper way to modify log4j config file for kubernetes-session
Hi Vararu,
Does this document meet your requirements?
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#logging
Best,
Biao Geng
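For reference, the dynamic log-level reload that the linked page relies on is Log4j 2's monitorInterval setting; a minimal properties sketch in the style of Flink's log4j-console.properties (an illustrative excerpt, not the exact shipped file):

```properties
# Log4j 2 re-reads this file every 30 seconds, so log level changes
# take effect without restarting the JobManager/TaskManager processes
monitorInterval = 30
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender
appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p %c - %m%n
```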
Vararu, Vadim wrote on Tue, 14 May 2024 at 01:39:
Hi,
Trying to configure loggers in the log4j-console.properties file (which is
mounted from the host where kubernetes-session.sh is invoked and referenced
by the TM processes via -Dlog4j.configurationFile).
Is there a proper (documented) way to do that, meaning to append to/modify the
log4j configuration?
I assume you are using "*bin/flink run-application*" to submit a Flink
application to a K8s cluster. In that case you can simply
update your local log4j-console.properties; it will be shipped and mounted
to the JobManager/TaskManager pods via a ConfigMap.
Best,
Yang
Vladislav Keda wrote on 20 Jun 2023:
Hi all again!
Please tell me if you can answer my question, thanks.
---
Best Regards,
Vladislav Keda
On Fri, 16 Jun 2023 at 16:12, Vladislav Keda <
vladislav.k...@glowbyteconsulting.com> wrote:
Hi all!
Is it possible to change Flink *log4j-console.properties* in Native
Kubernetes (for example in Kubernetes Application mode) without rebuilding
the application docker image?
I was trying to inject a .sh script call (in the attachment) before
/docker-entrypoint.sh, but this workaround did
>
> I checked the image prior to cluster creation; all log files are there.
> Once the cluster is deployed, they are missing. (bug?)
I do not think it is a bug, since we have already shipped all the config
files (log4j properties, flink-conf.yaml) via the ConfigMap.
Then it is directl
Best,
Tamir.
From: Tamir Sagi
Sent: Friday, January 21, 2022 7:19 PM
To: Yang Wang
Cc: user@flink.apache.org
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and
falls back to default /opt/flink/conf/log4j-console.properties
Changing the order of the exec command makes sense to me. Would you please
create a ticket for this?
The /opt/flink/conf directory is cleaned up because we mount the conf files
from a K8s ConfigMap.
Best,
Yang
Tamir Sagi <tamir.s...@niceactimize.com> wrote:
> is better). Does it make sense to you?
>
> In addition, any idea why /opt/flink/conf gets cleaned? (Only
> flink-conf.yaml is there.)
>
> Best,
> Tamir
From: Yang Wang
Sent: Tuesday, January 18, 2022 6:02 AM
To: Tamir Sagi
Cc: user@flink.apache.org
Subject: Re: Flink 1.14.2 - Log4j2 -Dlog4j.configurationFile is ignored and
falls back to default /opt/flink/conf/log4j-console.properties
…the JM/TM start
command, but in jobmanager.sh/taskmanager.sh. We do not
have the same logic in "flink-console.sh".
Maybe we could introduce an environment variable for the log configuration file
name in "flink-console.sh". The default value could be
"log4j-console.properties".
If org.apache.flink.kubernetes.kubeclient.parameters#hasLog4j returns
false, then the logging args are not added to the start command.
1. Why does the config dir get cleaned once the cluster starts? Even when I
pushed log4j-console.properties to the expected location (/opt/flink/conf),
the directory includes only flink-conf.yaml.
2. I think
I think the root cause is that we are using "flink-console.sh" to start the
JobManager/TaskManager process for native K8s integration after
FLINK-21128[1].
So it forces the log4j configuration name to be "log4j-console.properties".
[1]. https://issues.apache.org/jira/browse/FLINK-21128
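The effect described above can be sketched in shell (an assumption-laden simplification of the idea, not the actual flink-console.sh):

```shell
# Simplified sketch: the log4j config file name is hard-coded, so a
# user-supplied -Dlog4j.configurationFile is effectively overridden
# when the native K8s integration starts the JM/TM processes.
FLINK_CONF_DIR=${FLINK_CONF_DIR:-/opt/flink/conf}
log_setting="-Dlog4j.configurationFile=file:${FLINK_CONF_DIR}/log4j-console.properties"
echo "$log_setting"
```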
Hey all,
I'm running Flink 1.14.2. It seems like it ignores the system property
-Dlog4j.configurationFile and
falls back to /opt/flink/conf/log4j-console.properties.
I enabled debug logging for log4j2 (-Dlog4j2.debug):
DEBUG StatusLogger Catching
java.io.FileNotFoundException: file:/opt/flink/conf
Hi Eddie,
the APIs should be binary compatible across patch releases, so there is no
need to re-compile your artifacts.
Best,
D.
On Sun 19. 12. 2021 at 16:42, Colletta, Edward
wrote:
If I have jar files built against Flink 1.11.2 in my dependencies, and I upgrade
my cluster to 1.11.6, is it safe to run the existing jars on the upgraded
cluster, or should I rebuild all jobs against 1.11.6?
Thanks,
Eddie Colletta
I realised there is an Apache Log4j mailing list.
Regards,
Mr. Turritopsis Dohrnii Teo En Ming
Targeted Individual in Singapore
19 Dec 2021 Sunday
On Fri, 17 Dec 2021 at 00:29, Arvid Heise wrote:
Hi,
Please refer to this link.
Article: Log4j zero-day flaw: What you need to know and how to protect yourself
Link:
https://www.zdnet.com/article/log4j-zero-day-flaw-what-you-need-to-know-and-how-to-protect-yourself/
The article says:
[QUOTE]
WHAT DEVICES AND APPLICATIONS ARE AT RISK
I think this is meant for the Apache log4j mailing list [1].
[1] https://logging.apache.org/log4j/2.x/mail-lists.html
On Thu, Dec 16, 2021 at 4:07 PM David Morávek wrote:
> Hi Turritopsis,
>
> I fail to see any relation to Apache Flink. Can you please elaborate on
> how
Subject: How do I determine which hardware device and software has
log4j zero-day security vulnerability?
Good day from Singapore,
I am working for a Systems Integrator (SI) in Singapore. We have
several clients writing in, requesting us to identify the log4j zero-day
security vulnerability in their corporate infrastructure.
Dear Flink Community,
Yesterday, a new Zero Day for Apache Log4j was reported [1]. It is now
tracked under CVE-2021-44228 [2].
Apache Flink bundles a version of Log4j that is affected by this
vulnerability. We recommend that users follow the advisory [3] of the Apache
Log4j community. For Apache
> <dependency>
>     <groupId>org.apache.hadoop</groupId>
>     <artifactId>hadoop-mapreduce-client-core</artifactId>
>     <version>3.2.0</version>
> </dependency>
>
> <repository>
>     <id>spring-repo</id>
>     <url>https://repo1.maven.org/maven2/</url>
> </repository>
>
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-compiler-plugin</artifactId>
>     <version>3.1</version>
>     <configuration>
>         <source>1.8</source>
>         <target>1.8</target>
>     </configuration>
> </plugin>
>
> <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-shade-plugin</artifactId>
>     <version>3.0.0</version>
>     <executions>
>         <execution>
>             <phase>package</phase>
>             <goals><goal>shade</goal></goals>
>             <configuration>
>                 <artifactSet>
>                     <excludes>
>                         <exclude>org.apache.flink:force-shading</exclude>
>                         <exclude>com.google.code.findbugs:jsr305</exclude>
>                         <exclude>org.slf4j:*</exclude>
>                     </excludes>
>                 </artifactSet>
>             </configuration>
>         </execution>
>     </executions>
> </plugin>
Hi Ragini,
I think you actually have the opposite problem: your classpath contains an
slf4j binding for log4j 1.2, which is no longer supported. Can you try
getting rid of the slf4j-log4j12 dependency?
Best,
D.
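The suggested fix can be sketched as a Maven exclusion (the surrounding dependency coordinates are placeholders; apply it to whichever dependency pulls the binding in):

```xml
<!-- Exclude the log4j 1.2 slf4j binding from the offending dependency -->
<dependency>
    <groupId>some.group</groupId>
    <artifactId>some-artifact</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```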
On Tue, Sep 14, 2021 at 1:51 PM Ragini Manjaiah
wrote:
When I try to run a Flink 1.13 application I encounter the below-mentioned
issue. What dependency am I missing? Can you please help me?
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/Users/z004t01/.m2/repository/org/apache/logging/log4j/log4j-slf4j-impl
For details you can look at the code of the YarnClusterDescriptor and
YarnLogConfigUtil classes; they show how the log4j configuration file is
discovered and how the LocalResource is registered so that Yarn distributes
the configuration.
Best,
Yang
Bobby <1010445...@qq.com> wrote on Mon, 18 Jan 2021 at 23:17:
First of all, thanks for the solution; I'll go and try it.
Regarding the statement "when deploying on Yarn, shipping the resource depends
on the file name log4j.properties, so you cannot manually specify a different
file": how should I understand it? Could you share some related material? I'd
like to understand the Flink on Yarn deployment logic.
thx.
Yang Wang wrote:
> When deploying on Yarn, shipping the resource depends on the file name
> log4j.properties, so you cannot manually specify a different file.
>
> But you can export a FLINK_CONF_DIR=/path/of/your/flink-conf environment
> variable.
--
Sent from: http://apache-flink.147419.n8.nabble.com/
When deploying on Yarn, shipping the resource depends on the file name
log4j.properties, so you cannot manually specify a different file.
But you can export a FLINK_CONF_DIR=/path/of/your/flink-conf environment
variable and put your own flink-conf.yaml and log4j.properties in that
directory.
Best,
Yang
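The workaround above can be sketched as follows (the path is a placeholder; use your own):

```shell
# Point Flink at a custom conf directory: on Yarn, Flink ships
# flink-conf.yaml and log4j.properties from $FLINK_CONF_DIR.
export FLINK_CONF_DIR=/path/of/your/flink-conf
echo "Flink will read conf from: $FLINK_CONF_DIR"
```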
Bobby <1010445...@qq.com> wrote on Mon, 18 Jan 2021 at 19:18:
Flink on Yarn reads the log configuration file log4j.properties from
flink/conf by default.
Is there a way to specify my own log4j.properties when submitting a Flink job?
thx.
Flink version: 1.9.1
Deployment mode: Flink on Yarn
My personal take:
This shouldn't be possible. The job you submit is ultimately packaged and
executed by the TM, so the TM's log configuration is used, not your own.
Your own configuration only takes effect when launching locally for debugging.
nicygan wrote on Wed, 13 Jan 2021 at 09:55:
dear all:
My Flink job is submitted to Yarn.
By default, the effective log configuration is log4j.properties in flink/conf.
But my application jar also contains a log4j2.xml, which configures a
KafkaAppender to send logs to Kafka.
How should I set things up so that both configuration files take effect?
Has anyone configured something like this before?
thanks
by nicygan
Hi All,
Is it possible to have a tracking id in MDC that will be shared across
chained user-defined operations like Filter, KeySelector, FlatMap,
ProcessFunction, and Producer?
The tracking id will be read from the headers of the Kafka message, which, if
possible, I plan to set into the MDC in log4j. Right now I
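For context, Log4j pattern layouts can surface MDC values via the %X conversion; a minimal sketch (the appender name and the "trackingId" key are illustrative assumptions, not from the thread):

```properties
# Log4j 2 console appender printing a hypothetical MDC key "trackingId"
appender.console.type = Console
appender.console.name = ConsoleAppender
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss,SSS} %-5p [%X{trackingId}] %c - %m%n
```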
Any idea on how I can use log back instead ?
On Fri, Aug 23, 2019 at 1:22 PM Vishwas Siravara
wrote:
Hi,
From the Flink docs, in order to use logback instead of log4j: "Users
willing to use logback instead of log4j can just exclude log4j (or delete
it from the lib/ folder)."
https://ci.apache.org/projects/flink/flink-docs-stable/monitoring/logging.html
.
However when i delete it
1:00 Puneet Kinra <puneet.ki...@customercentria.com> wrote:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/side_output.html
2018-03-20 10:36 GMT+01:00 Puneet Kinra <puneet.ki...@customercentria.com>:
Hi,
I have a use case in which I want to log bad records to a log file. I
have configured log4j, and
the log file is generated as well, but the records also go to the Flink logs.
I want to detach
them from the Flink logs and write them only to my log file.
Here is the configuration:
*(Note: AMSSource
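A common way to keep a dedicated logger out of the main Flink logs is to disable additivity; a minimal log4j 1.x sketch (the logger and file names are illustrative assumptions):

```properties
# Dedicated appender for bad records (hypothetical names)
log4j.logger.badrecords=INFO, BADFILE
# additivity=false stops these events from also reaching the root logger,
# i.e. they no longer appear in the regular Flink logs
log4j.additivity.badrecords=false
log4j.appender.BADFILE=org.apache.log4j.FileAppender
log4j.appender.BADFILE.File=/var/log/flink/bad-records.log
log4j.appender.BADFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.BADFILE.layout.ConversionPattern=%d %-5p %m%n
```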
I didn't find an example of the flink-log4j configuration while creating an EMR
cluster for running Flink. What should be passed to the "flink-log4j" config:
the actual log4j config, or a path to a file? Also, how can I see application
logs in EMR?
thanks
Ishwara Varnasi
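For reference, EMR configuration classifications take key/value properties inline rather than a file path; a minimal sketch, assuming the flink-log4j classification accepts log4j keys (the property value here is an illustrative assumption):

```json
[
  {
    "Classification": "flink-log4j",
    "Properties": {
      "log4j.rootLogger": "INFO, file"
    }
  }
]
```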
> -rw-r--r-- 1 robert robert 79966937 Oct 10 13:49 flink-dist_2.10-1.1.3.jar
> -rw-r--r-- 1 robert robert    90883 Dec  9 20:13 flink-python_2.10-1.1.3.jar
> -rw-r--r-- 1 robert robert    60547 Dec  9 18:45 log4j-1.2-api-2.7.jar
> -rw-rw-r-- 1 robert robert  1638598 Oct 22 16:08 lo
On Thu, Feb 16, 2017 at 11:54 AM, Stephan Ewen <se...@apache.org> wrote:
Hi!
The bundled log4j version (1.x) does not support that.
But you can replace the logging jars with those of a different framework
(like log4j 2.x), which supports changing the configuration without
stopping the application.
You don't need to rebuild Flink, simply replace two jars in the "lib" folder.
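For reference, Log4j 2 can watch its own configuration file and apply changes at runtime; a minimal sketch (the appender layout is an illustrative assumption):

```xml
<!-- Log4j 2: monitorInterval re-reads this file every 30 s, so log
     levels can be changed without stopping the application -->
<Configuration monitorInterval="30">
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss,SSS} %-5p %c - %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="console"/>
        </Root>
    </Loggers>
</Configuration>
```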
Is there a way to reload a log4j.properties file without stopping and starting the job server?
Hi Nick,
the name of the "log4j-yarn-session.properties" file might be a bit
misleading. The file is just used for the YARN session client, running
locally.
The Job- and TaskManager are going to use the log4j.properties on the
cluster.
On Fri, Mar 11, 2016 at 7:20 PM, Ufuk
Can anyone tell me where I must place my application-specific
log4j.properties to have it honored when running on a YARN cluster? Putting it
in my application jar doesn't work. Putting it in the log4j files under
flink/conf doesn't work.
My goal is to set the log level for 'com.mycompany' classes used in my
flink application to DEBUG.
Thanks,
Nick
…you can pass the dynamic
property using Flink's env.java.opts configuration parameter.
Cheers,
Till
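As a sketch of that suggestion (the path and the log4j 1.x property name are assumptions; the bundled log4j 1.x reads -Dlog4j.configuration rather than -Dlog4j.configurationFile):

```yaml
# flink-conf.yaml: pass a JVM option selecting a custom log4j file
# (placeholder path)
env.java.opts: "-Dlog4j.configuration=file:///path/to/custom/log4j.properties"
```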
On Mon, Dec 21, 2015 at 3:34 PM, Gwenhael Pasquiers <
gwenhael.pasqui...@ericsson.com> wrote:
> Hi everybody,
>
>
>
> Could it be possible to have a way to configure log4j with xml files
As an additional note: Flink sends all files in the /lib folder to all
YARN containers. So you could place the XML file in "/lib" and override the
properties.
I think you need to delete the log4j properties from the conf/ directory;
then, at least on YARN, we'