Thanks Till,
I’ve raised a JIRA for this: https://issues.apache.org/jira/browse/FLINK-10988.
Let me know if there's anything else I can add to the JIRA to help.
Regards,
Scott
SCOTT SUE
CHIEF TECHNOLOGY OFFICER
Support Line : +44(0) 2031 371 603
Mobile : +852 9611 3969
9/F, 33 Lockhart Road,
I can see the benefit for other users as well. One could include it as part
of some development/debugging tools, for example. It would not strictly
need to go into Flink but it would have the benefit of better/increased
visibility I guess. In that sense, opening a JIRA issue and posting on dev
migh
Hi Till,
Yeah, I think that would work, especially knowing this isn't something that is out
of the box at the moment. Do you think it's worth raising this as a feature
request at all? One thing I've found in my experience with Flink is that
it's quite hard to debug what is going on when ther
Hi Scott,
I think you could write some wrappers for the different user function types
which could contain the logging logic. That way you would still need to
wrap your actual business logic, but you wouldn't have to duplicate the logging
logic over and over again.
If you also want to log the state, then you would
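Till's wrapper idea could look roughly like the sketch below. This is a minimal plain-Java illustration, not Flink's API: the `MapFunction` interface here is a simplified stand-in (a real job would implement `org.apache.flink.api.common.functions.MapFunction` and use SLF4J rather than stderr), and `LoggingMapFunction` is a hypothetical name.

```java
import java.util.function.Function;

// Simplified stand-in for Flink's MapFunction interface
// (org.apache.flink.api.common.functions.MapFunction in a real job).
interface MapFunction<IN, OUT> {
    OUT map(IN value) throws Exception;
}

// Hypothetical wrapper that logs the offending record before re-throwing,
// so the wrapped business logic stays free of logging boilerplate.
class LoggingMapFunction<IN, OUT> implements MapFunction<IN, OUT> {
    private final MapFunction<IN, OUT> wrapped;

    LoggingMapFunction(MapFunction<IN, OUT> wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public OUT map(IN value) throws Exception {
        try {
            return wrapped.map(value);
        } catch (Exception e) {
            // A real job would log via SLF4J so the record lands in the task logs.
            System.err.println("Failed to process record: " + value);
            throw e;
        }
    }
}
```

A job would then wrap each function once at graph-construction time, e.g. `stream.map(new LoggingMapFunction<>(myBusinessFunction))`, with analogous wrappers for `FlatMapFunction`, `FilterFunction`, and so on.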
Yeah, I think that would work for incorrectly consumed data, but what about the
case where deserialization succeeds, yet one of my custom functions after
deserialization throws an error?
Regards,
Scott
If so, then you can implement your own deserializer [1] with custom logic
and error handling.
1.
https://ci.apache.org/projects/flink/flink-docs-stable/api/java/org/apache/flink/streaming/util/serialization/KeyedDeserializationSchemaWrapper.html
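A minimal sketch of such a deserializer follows. The interface here is a simplified stand-in for Flink's `DeserializationSchema` (the real one lives in `org.apache.flink.api.common.serialization` and also declares type-information and end-of-stream methods), and `SafeLongDeserializer` is a hypothetical example; the null-return convention mirrors how Flink's Kafka consumer treats a `null` result from `deserialize()` as a record to skip.

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-in for Flink's DeserializationSchema interface.
interface DeserializationSchema<T> {
    T deserialize(byte[] message) throws Exception;
}

// Hypothetical deserializer that logs and skips malformed records instead
// of failing the job.
class SafeLongDeserializer implements DeserializationSchema<Long> {
    @Override
    public Long deserialize(byte[] message) {
        String raw = new String(message, StandardCharsets.UTF_8);
        try {
            return Long.parseLong(raw);
        } catch (NumberFormatException e) {
            // A real job would log via SLF4J; the bad payload is captured here.
            System.err.println("Dropping malformed record: " + raw);
            return null; // null tells the Kafka consumer to skip this record
        }
    }
}
```

This handles bad input at the source, but as noted above it does not cover exceptions thrown by downstream functions after deserialization has succeeded.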
On Thu, Nov 22, 2018 at 8:57 AM Scott Sue wrote
Hi Paul,
Yes correct, Flink shouldn’t make any assumptions on what is inside the user
function.
That is true, the exception may not be a direct result of unexpected data;
rather, the incoming data coupled with the state of the job is causing unexpected
behaviour. From my perspective, I wouldn'
Hi Scott,
IMHO, the exception is caused by the user code, so it should be handled by the
user function,
and Flink runtime shouldn’t make any assumption about what’s happening in the
user function.
The exception may or may not be caused by unexpected data, so logging the
current processing
rec
Hi Paul,
Thanks for the quick reply. OK, does that mean that, as a general practice, I
should be catching all exceptions for the purpose of logging in any of my
operators? This seems like something that could be handled by Flink itself, as
they are unexpected exceptions. Otherwise every single ope
Hi Scott,
I think you can do it by catching the exception in the user function and
logging the current message that the operator is processing before re-throwing
(or not) the exception to the Flink runtime.
Best,
Paul Lam
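Paul's catch-log-rethrow suggestion could look like the following in a single concrete function. As above, the `MapFunction` interface is a simplified stand-in for Flink's real one, and `PriceParser` is a hypothetical example function, not from the original thread.

```java
// Simplified stand-in for Flink's MapFunction interface.
interface MapFunction<IN, OUT> {
    OUT map(IN value) throws Exception;
}

// Hypothetical user function: parses a price, logging the offending
// record before handing the failure back to the runtime.
class PriceParser implements MapFunction<String, Double> {
    @Override
    public Double map(String record) throws Exception {
        try {
            return Double.parseDouble(record); // the actual business logic
        } catch (Exception e) {
            // Log the record that triggered the failure so it shows up
            // in the task logs alongside the stack trace.
            System.err.println("Error while processing record: " + record);
            throw e; // or swallow it, if the job should keep running
        }
    }
}
```

The drawback Scott raises later in the thread is that this try/catch has to be repeated in every operator, which is what motivates the wrapper approach discussed above.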
> On 22 Nov 2018, at 12:59, Scott Sue wrote:
>
> Hi all,
>
> When I'm running my jobs
Hi all,
When I'm running my jobs I am consuming data from Kafka to process in my
job. Unfortunately, my job receives unexpected data from time to time, and
I'm trying to find the root cause of the issue.
Ideally, I want to be able to have a way to know when the job has failed due
to an exception