[ https://issues.apache.org/jira/browse/STORM-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129257#comment-15129257 ]

Boyang Jerry Peng commented on STORM-1519:
------------------------------------------

Changing the syslog appender setting in cluster.xml from immediateFlush="false" 
to immediateFlush="true" seems to fix the issue, though.

Description of the attribute from the log4j2 website:
immediateFlush - When set to true - the default, each write will be followed by 
a flush. This will guarantee the data is written to disk but could impact 
performance.
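
For reference, a minimal sketch of what the syslog appender entry in cluster.xml 
could look like with that change applied; everything other than 
immediateFlush="true" (the appender name, host, port, protocol, appName, mdcId, 
and facility shown here) is a placeholder and is not taken from the actual Storm 
config:

    <Syslog name="syslog" format="RFC5424" host="localhost" port="514"
            protocol="UDP" appName="nimbus" mdcId="mdc" facility="LOCAL5"
            immediateFlush="true"/>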


> Storm syslog logging not conforming to RFC5426 3.1
> --------------------------------------------------
>
>                 Key: STORM-1519
>                 URL: https://issues.apache.org/jira/browse/STORM-1519
>             Project: Apache Storm
>          Issue Type: Bug
>            Reporter: Boyang Jerry Peng
>
> As per RFC 5426 section 3.1, there should be only one message per datagram:
> 3.1. One Message Per Datagram
> Each syslog UDP datagram MUST contain only one syslog message, which
> MAY be complete or truncated. The message MUST be formatted and
> truncated according to RFC 5424 [2]. Additional data MUST NOT be
> present in the datagram payload.
> For example, one UDP packet containing two messages:
> UDP packet: <174>1 2016-02-02T22:44:05.558Z localhost [nimbus] - [nobody:S0] 
> [mdc@18060 ClassName="?"] Using custom scheduler: 
> backtype.storm.scheduler.bridge.MultitenantResourceAwareBridgeScheduler 
> <174>1 2016-02-02T22:44:05.591Z localhost [nimbus] - [nobody:S0] [mdc@18060 
> ClassName="?"] Creating new blob store based in /home/y/var/storm/blobs
> but the two log messages should each be in their own UDP packet.
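
For illustration only (this is not Storm or log4j2 source): a minimal Java 
sketch of what RFC 5426 section 3.1 compliant sending looks like, i.e. one 
syslog message per UDP datagram. The host, port, and message strings are 
placeholders.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    public class OneMessagePerDatagram {
        public static void main(String[] args) throws Exception {
            // Placeholder RFC 5424 formatted messages.
            String[] messages = {
                "<174>1 2016-02-02T22:44:05.558Z localhost nimbus - - - Using custom scheduler",
                "<174>1 2016-02-02T22:44:05.591Z localhost nimbus - - - Creating new blob store"
            };
            InetAddress syslogHost = InetAddress.getByName("localhost"); // placeholder
            int syslogPort = 514;                                        // placeholder
            try (DatagramSocket socket = new DatagramSocket()) {
                for (String msg : messages) {
                    // Each message goes into its own datagram; messages are
                    // never concatenated into a single payload.
                    byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(payload, payload.length,
                                                   syslogHost, syslogPort));
                }
            }
        }
    }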



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)