[ 
https://issues.apache.org/jira/browse/FLINK-7990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983290#comment-16983290
 ] 

Akshay Iyangar edited comment on FLINK-7990 at 11/27/19 8:43 AM:
-----------------------------------------------------------------

Yup, it looks like that may be the issue. As a workaround, we used a `FileAppender` writing to 
`/dev/stdout` in our logback.xml:
{code:xml}
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>/dev/stdout</file>
        <append>true</append>
        <!-- immediateFlush=true flushes after every event; set it to false
             for much higher logging throughput (at the risk of losing the
             last events on a crash) -->
        <immediateFlush>true</immediateFlush>
        <!-- emit JSON log lines via the LogstashEncoder from
             logstash-logback-encoder -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <!-- This affects logging for both user code and Flink -->
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</configuration>
{code}
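For `net.logstash.logback.encoder.LogstashEncoder` to be on the classpath, the `logstash-logback-encoder` artifact must also be available to the job (alongside `logback-classic` and `logback-core`); a minimal Maven dependency sketch, where the version shown is only an example:
{code:xml}
<!-- provides net.logstash.logback.encoder.LogstashEncoder;
     pick a version compatible with your logback-classic version -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.2</version>
</dependency>
{code}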



> Strange behavior when configuring Logback for logging
> -----------------------------------------------------
>
>                 Key: FLINK-7990
>                 URL: https://issues.apache.org/jira/browse/FLINK-7990
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Configuration
>    Affects Versions: 1.3.2
>            Reporter: Fabian Hueske
>            Priority: Major
>
> The following issue was reported on the [user mailing 
> list|https://lists.apache.org/thread.html/c06a9f0b1189bf21d946d3d9728631295c88bfc57043cdbe18409d52@%3Cuser.flink.apache.org%3E]
> {quote}
> I have a single node Flink instance which has the required jars for logback 
> in the lib folder (logback-classic.jar, logback-core.jar, 
> log4j-over-slf4j.jar). I have removed the jars for log4j from the lib folder 
> (log4j-1.2.17.jar, slf4j-log4j12-1.7.7.jar). 'logback.xml' is also correctly 
> updated in 'conf' folder. I have also included 'logback.xml' in the 
> classpath, although this does not seem to be considered while the job is run. 
> Flink refers to logback.xml inside the conf folder only. I have updated 
> pom.xml as per Flink's documentation in order to exclude log4j. I have some 
> log entries set inside a few map and flatmap functions and some log entries 
> outside those functions (eg: "program execution started").
> When I run the job, Flink writes only those logs that are coded outside the 
> transformations. Those logs that are coded inside the transformations (map, 
> flatmap etc) are not getting written to the log file. If this was happening 
> always, I could have assumed that the Task Manager is not writing the logs. 
> But Flink displays a strange behavior regarding this. Whenever I update the 
> logback jars inside the lib folder (due to version changes), during the 
> next job run, all logs (even those inside map and flatmap) are written 
> correctly into the log file. But the logs don't get written in any of the 
> runs after that. This means that my 'logback.xml' file is correct and the 
> settings are also correct. But I don't understand why the same settings don't 
> work while the same job is run again.
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
