[
https://issues.apache.org/jira/browse/SAMZA-81?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13820709#comment-13820709
]
Jakob Homan commented on SAMZA-81:
----------------------------------
That says to me that we (by which I mean Kafka, in this case) are using
exception handling for non-exceptional things, which is a bad practice.
Overloading actual exceptional situations with routine control flow or state
notification means that one needs to decide what's worse: lots of false
positives in the logs or not having the information when the positives are
real. I understand the expediency of the first solution, but I don't think we
can realistically escape the second. A third alternative would be to identify
and partition the Kafka exceptions between routine and extraordinary ourselves,
verbosely logging or not as appropriate.
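As a rough illustration of that third alternative, we could keep a small allowlist of exception types we consider routine and log only the extraordinary ones verbosely. This is just a sketch, not Samza or Kafka code; the class and the exception names in the set are illustrative (chosen because they correspond to transient broker/metadata conditions), and a real implementation would need to be curated against the actual Kafka exception hierarchy.

```java
import java.util.Set;

// Hypothetical sketch: partition exceptions into routine vs extraordinary
// so the caller can choose the log level. Not actual Samza/Kafka code.
public class ExceptionClassifier {
    // Exception class names treated as routine (transient, expected during
    // normal broker churn); everything else is considered extraordinary.
    private static final Set<String> ROUTINE = Set.of(
        "LeaderNotAvailableException",
        "NotLeaderForPartitionException",
        "UnknownTopicOrPartitionException");

    public static boolean isRoutine(Throwable t) {
        return ROUTINE.contains(t.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        // An arbitrary exception is not in the allowlist, so it would be
        // logged verbosely rather than suppressed to debug.
        System.out.println(isRoutine(new RuntimeException("boom"))); // false
    }
}
```

The caller would then log routine exceptions at debug and everything else at warn or error, which trades the maintenance cost of the allowlist for cleaner logs.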
However, as part of SAMZA-84, I plan to add Hadoop-style dynamic log level
changing so that we can change the logging levels of specific classes on the
fly. This will be very useful in Samza's never-ending-task context, assuming
that the logging we're trying to capture isn't part of a problem that brings
the whole task down. In those cases, we'd still be left with no information
since the process will have gone away before we're able to adjust the logging.
Assuming we have SAMZA-84 and dynamic log level changing, I'm ok with keeping
this particular log at debug.
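For reference, the core mechanism behind Hadoop-style dynamic log level changing is just a runtime call to set a named logger's level; an HTTP endpoint (Hadoop exposes one at /logLevel) would invoke something like this from its handler. The sketch below uses java.util.logging purely so it is self-contained; Samza itself logs through SLF4J/log4j, and the logger name shown is illustrative.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Minimal sketch of changing one class's log level on the fly, the operation
// a Hadoop-style /logLevel endpoint performs. Illustrative only.
public class DynamicLogLevel {
    public static void setLevel(String loggerName, Level level) {
        Logger.getLogger(loggerName).setLevel(level);
    }

    public static void main(String[] args) {
        // Hypothetical logger name; hold a reference so the logger is not
        // garbage-collected (java.util.logging keeps loggers weakly).
        Logger consumerLog =
            Logger.getLogger("org.apache.samza.system.kafka.KafkaSystemConsumer");
        setLevel("org.apache.samza.system.kafka.KafkaSystemConsumer", Level.FINEST);
        System.out.println(consumerLog.getLevel()); // FINEST
    }
}
```

The caveat noted above still applies: this only helps when the process survives long enough for an operator to flip the level.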
> Exception logging in KafkaSystemConsumer should not be at debug
> ---------------------------------------------------------------
>
> Key: SAMZA-81
> URL: https://issues.apache.org/jira/browse/SAMZA-81
> Project: Samza
> Issue Type: Bug
> Reporter: Jakob Homan
> Assignee: Jakob Homan
> Attachments: SAMZA-81.patch
>
>
> When encountering an exception while fetching metadata, we warn that
> something was encountered, but don't share what it actually was.
--
This message was sent by Atlassian JIRA
(v6.1#6144)