[ 
https://issues.apache.org/jira/browse/HDFS-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated HDFS-15924:
--------------------------
    Description: 
!image-2021-03-26-16-18-03-341.png|width=707,height=234!

!image-2021-03-26-16-19-42-165.png|width=824,height=198!

The handler threads are blocked when the audit log volume booms, as shown in the screenshots above.

As in [https://dzone.com/articles/log4j-thread-deadlock-case], this looks like the 
same problem under heavy load. Should we upgrade to Log4j2, or is there something 
else we can do to improve heavy audit logging?
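
One option short of migrating to Log4j2 would be to put the existing audit appenders behind a Log4j 1.x {{AsyncAppender}}, so handler threads only enqueue the event and a single background thread does the actual write. A rough sketch of the idea (the logger name, buffer size and non-blocking setting below are just assumptions for illustration; HDFS already exposes something similar via dfs.namenode.audit.log.async, if I remember correctly):
{code:java}
import java.util.Collections;
import java.util.Enumeration;

import org.apache.log4j.Appender;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;

public class AsyncAuditLogSetup {

  /**
   * Wrap the audit logger's current appenders in an AsyncAppender so that
   * handler threads only enqueue the event and return.
   */
  @SuppressWarnings("unchecked")
  public static void makeAuditLogAsync() {
    // Assumed logger name for illustration; the NameNode audit logger
    // lives under FSNamesystem.
    Logger audit = Logger.getLogger(
        "org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit");

    AsyncAppender async = new AsyncAppender();
    async.setBufferSize(1024);   // events buffered before the dispatcher thread drains them
    async.setBlocking(false);    // drop events instead of blocking handlers when the buffer is full

    // Move the existing synchronous appenders behind the async wrapper.
    for (Appender a : Collections.list((Enumeration<Appender>) audit.getAllAppenders())) {
      async.addAppender(a);
    }
    audit.removeAllAppenders();
    audit.addAppender(async);
  }
}
{code}
Note that with setBlocking(false) events can be dropped when the buffer fills, so this trades completeness of the audit log for handler latency.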

 
{code:java}
 /**
     Call the appenders in the hierarchy starting at
     <code>this</code>.  If no appenders could be found, emit a
     warning.

     <p>This method calls all the appenders inherited from the
     hierarchy circumventing any evaluation of whether to log or not
     to log the particular log request.

     @param event the event to log.  */
public void callAppenders(LoggingEvent event) {
    int writes = 0;

    for(Category c = this; c != null; c=c.parent) {
      // Protected against simultaneous call to addAppender, removeAppender,...
      synchronized(c) {
        if(c.aai != null) {
            writes += c.aai.appendLoopOnAppenders(event);
        }
        if(!c.additive) {
            break;
        }
      }
    }

    if(writes == 0) {
      repository.emitNoAppenderWarning(this);
    }
  }{code}
The Log4j code above holds the Category monitor (synchronized(c)) while the appenders write, 
so under a heavy audit log every handler thread contends on the same lock and gets blocked.
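
For what it's worth, the contention is easy to reproduce outside the NameNode. A small self-contained sketch (class name and the 5 ms appender delay are made up for illustration): with one slow synchronous appender, N logging threads finish in roughly the serial time, and a jstack taken meanwhile shows them BLOCKED in callAppenders just like the screenshots above.
{code:java}
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class AuditLogContentionRepro {

  /** Stand-in for a slow audit log sink (e.g. a loaded disk or remote syslog). */
  static class SlowAppender extends AppenderSkeleton {
    @Override
    protected void append(LoggingEvent event) {
      try {
        Thread.sleep(5); // simulate a slow synchronous write
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
    @Override
    public void close() { }
    @Override
    public boolean requiresLayout() { return false; }
  }

  public static void main(String[] args) throws InterruptedException {
    Logger audit = Logger.getLogger("test.audit");
    audit.setAdditivity(false);
    audit.addAppender(new SlowAppender());

    int threads = 16, eventsPerThread = 100;
    long start = System.nanoTime();
    Thread[] workers = new Thread[threads];
    for (int i = 0; i < threads; i++) {
      workers[i] = new Thread(() -> {
        for (int j = 0; j < eventsPerThread; j++) {
          // Every call goes through Category.callAppenders and takes the
          // same Category monitor, so the threads serialize here.
          audit.info("allowed=true cmd=getfileinfo src=/foo dst=null");
        }
      });
      workers[i].start();
    }
    for (Thread w : workers) {
      w.join();
    }
    // Expect roughly threads * eventsPerThread * 5 ms (~8 s), i.e. almost no
    // parallelism, because the appender write happens inside the lock.
    System.out.printf("elapsed %.1f s%n", (System.nanoTime() - start) / 1e9);
  }
}
{code}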

cc [~weichiu] [~hexiaoqiao] [~ayushtkn]  [~shv] [~ferhui]

  was:
!image-2021-03-26-16-18-03-341.png|width=707,height=234!

!image-2021-03-26-16-19-42-165.png|width=824,height=198!

The handler threads are blocked when the audit log volume booms, as shown in the screenshots above.

As in [https://dzone.com/articles/log4j-thread-deadlock-case], this looks like the 
same problem under heavy load. Should we upgrade to Log4j2, or is there something 
else we can do to improve heavy audit logging?

cc [~weichiu] [~hexiaoqiao] [~ayushtkn]  [~shv] [~ferhui]


> Log4j can cause Server handlers to be blocked when the audit log booms.
> ------------------------------------------------------------
>
>                 Key: HDFS-15924
>                 URL: https://issues.apache.org/jira/browse/HDFS-15924
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Qi Zhu
>            Priority: Major
>         Attachments: image-2021-03-26-16-18-03-341.png, 
> image-2021-03-26-16-19-42-165.png
>
>
> !image-2021-03-26-16-18-03-341.png|width=707,height=234!
> !image-2021-03-26-16-19-42-165.png|width=824,height=198!
> The handler threads are blocked when the audit log volume booms, as shown in the 
> screenshots above.
> As in [https://dzone.com/articles/log4j-thread-deadlock-case], this looks like 
> the same problem under heavy load. Should we upgrade to Log4j2, or is there 
> something else we can do to improve heavy audit logging?
>  
> {code:java}
>  /**
>      Call the appenders in the hierarchy starting at
>      <code>this</code>.  If no appenders could be found, emit a
>      warning.
>      <p>This method calls all the appenders inherited from the
>      hierarchy circumventing any evaluation of whether to log or not
>      to log the particular log request.
>      @param event the event to log.  */
> public void callAppenders(LoggingEvent event) {
>     int writes = 0;
>     for(Category c = this; c != null; c=c.parent) {
>       // Protected against simultaneous call to addAppender, 
> removeAppender,...
>       synchronized(c) {
>         if(c.aai != null) {
>             writes += c.aai.appendLoopOnAppenders(event);
>         }
>         if(!c.additive) {
>             break;
>         }
>       }
>     }
>     if(writes == 0) {
>       repository.emitNoAppenderWarning(this);
>     }
>   }{code}
> The Log4j code above holds the Category monitor (synchronized(c)) while the 
> appenders write, so under a heavy audit log every handler thread contends on 
> the same lock and gets blocked.
> cc [~weichiu] [~hexiaoqiao] [~ayushtkn]  [~shv] [~ferhui]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
