Hello,
I’m having difficulty getting useful logging information out of my
workers/topology on Storm 2.1.0. We’re running our Storm workers as Docker
services in Swarm mode and forward their STDOUT via a GELF log driver to our ELK stack.
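(For context, the services are created roughly like this; the image, service name, and GELF address below are placeholders:)

    docker service create \
      --name storm-supervisor \
      --log-driver gelf \
      --log-opt gelf-address=udp://logstash.example.com:12201 \
      storm:2.1.0 storm supervisor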
To keep things as simple as possible, my current log4j2 configurations look
as follows (in short: log everything to STDOUT at INFO, and log some
topology-specific loggers to STDOUT at DEBUG):
cluster.xml
<Configuration monitorInterval="60" shutdownHook="disable" packages="ch.qos.logback.core">
  <Properties>
    <!-- the '+' markers are only for bug demonstration purposes -->
    <Property name="logstash">+ %msg +%n</Property>
  </Properties>
  <Appenders>
    <Console name="CONSOLE">
      <PatternLayout>
        <Pattern>${logstash}</Pattern>
      </PatternLayout>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/>
    </Root>
  </Loggers>
</Configuration>
worker.xml
<Configuration monitorInterval="60" shutdownHook="disable" packages="ch.qos.logback.core">
  <Properties>
    <Property name="logstash">{"@timestamp":"%d{yyyy-MM-ddTHH:mm:ss.SSS}", "logger":"%logger", "message":"%msg", "thread_name":"%t", "level":"%level"}%n</Property>
  </Properties>
  <Appenders>
    <Console name="CONSOLE">
      <PatternLayout>
        <Pattern>${logstash}</Pattern>
      </PatternLayout>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="org.com.package" level="DEBUG" additivity="false">
      <AppenderRef ref="CONSOLE"/>
    </Logger>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/>
    </Root>
  </Loggers>
</Configuration>
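(Side note: a hand-rolled JSON pattern like this produces invalid JSON whenever a message itself contains quotes. Swapping the PatternLayout for Log4j2’s built-in JsonLayout would handle the escaping; a sketch, assuming jackson-databind is on the worker classpath, though it doesn’t change the behaviour described below:)

    <Console name="CONSOLE">
      <!-- compact + eventEol: one properly escaped JSON object per line -->
      <JsonLayout compact="true" eventEol="true"/>
    </Console>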
Given this configuration, I would’ve expected a log message formatted
similarly to the following (newlines added for clarity):
{
  "@timestamp": "2020-02-11 11:32:40,748",
  "logger": "org.com.package.aggregation.SomeAggregation",
  "message": "joinStateStream: values: [33333333-4e30-49a6-8e1c-f7817633bb34, 7c777777-a622-4ae4-a504-2490db47cafe, 2020-02-11]",
  "thread_name": "Thread-23-b-25-joinState-groupedQ-revQ-executor[22, 22]",
  "level": "DEBUG"
}
The resulting message that is delivered, however, looks completely different
and ‘wrong’:
+ Worker Process 273c05df-f087-43ca-a59a-e281bae98ab1:
{
  "@timestamp": "2020-02-11 11:32:40,748",
  "logger": "STDERR",
  "message": "{
    "@timestamp": "2020-02-11 11:32:40,748",
    "logger": "org.com.package.aggregation.SomeAggregation",
    "message": "joinStateStream: values: [33333333-4e30-49a6-8e1c-f7817633bb34, 7c777777-a622-4ae4-a504-2490db47cafe, 2020-02-11]",
    "thread_name": "Thread-23-b-25-joinState-groupedQ-revQ-executor[22, 22]",
    "level": "DEBUG"
  }",
  "thread_name": "Thread-2",
  "level": "INFO"
} +
Basically:
- Any worker logs appear to actually be logged by the supervisor (hence the
“Worker Process <id>” prefix as well as the enclosing “+” markers from the
cluster.xml above).
- The worker logs themselves are wrapped for some reason: the “message” field
of the outer message contains yet another, complete log message.
And so:
1. I took a look at the source of
“org.apache.storm.daemon.supervisor.BasicContainer#launch”, and this log
prefix (“Worker Process <id>:”) doesn’t seem to be configurable, which is
quite surprising. Is there a different way of deployment where this is
configurable?
2. What could be causing this “message” wrapping? As far as I can see, the
“wrappee” is my actual topology log message that I wish to parse, and the
“wrapper” is something else entirely (seemingly an error log? With level
INFO? ¯\_(ツ)_/¯).
Is there any way to simply get the worker/topology logs in a specified format?
As a last resort, I could parse the worker.log files, but I would rather
avoid that, as it is not as robust as I’d like (a worker could crash between
writing to the log file and our reading from it).
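One alternative I’m considering is to bypass STDOUT and the supervisor entirely by shipping worker logs straight to a GELF input via Log4j2’s Socket appender with the built-in GelfLayout. A sketch (host and port are placeholders, and I haven’t verified this against Storm’s worker classpath):

    <Appenders>
      <Socket name="GELF" host="logstash.example.com" port="12201" protocol="UDP">
        <!-- GelfLayout ships with log4j-core; 'host' becomes the GELF source field -->
        <GelfLayout host="${env:HOSTNAME}" compressionType="OFF"/>
      </Socket>
    </Appenders>

(referenced from the <Root>/<Logger> elements via <AppenderRef ref="GELF"/>)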