That would be great, thanks!
On Tue, Sep 13, 2022 at 3:00 PM Steve Niemitz wrote:
Ah this is super useful context, thank you! I can submit a couple PRs to
get AvroIO.sink up to parity if that's the way forward.
On Tue, Sep 13, 2022 at 2:53 PM John Casey via user wrote:
Hi Steve,
I've asked around, and it looks like this confusing state is due to a
migration that isn't complete (and likely won't be until Beam 3.0).
Here is the doc that explains some of the history:
https://docs.google.com/document/d/1zcF4ZGtq8pxzLZxgD_JMWAouSszIf9LnFANWHKBsZlg/edit
And a PR
There are a few related log lines, but there isn't a full stack trace, as the
info originates from a logger statement[1] rather than a thrown exception.
The related log lines look like this:
org.apache.kafka.clients.NetworkClient [Producer clientId=producer-109]
Disconnecting from node 10 due to socket
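For context (not stated in the thread): disconnect and send-timeout behavior of the Java Kafka producer is governed by a few standard client settings. Below is a minimal sketch of the knobs typically involved, with illustrative values only; the actual configuration used in this pipeline is unknown.

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    // Illustrative values only; the thread does not state the real configuration.
    public static Properties build() {
        Properties props = new Properties();
        // Upper bound on how long a send() may take overall, retries included,
        // before the producer fails it with a TimeoutException.
        props.put("delivery.timeout.ms", "120000");
        // Per-request timeout before the client retries or fails the request.
        props.put("request.timeout.ms", "30000");
        // Idle connections are closed after this interval; disconnect/reconnect
        // log lines like the ones above can be related to this setting.
        props.put("connections.max.idle.ms", "540000");
        // Force a metadata refresh at least this often, even without errors.
        props.put("metadata.max.age.ms", "300000");
        return props;
    }

    public static void main(String[] args) {
        Properties p = build();
        System.out.println(p.getProperty("delivery.timeout.ms"));
    }
}
```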
Do you by any chance have the full stack trace of this error?
—
Alexey
> On 13 Sep 2022, at 18:05, Evan Galpin wrote:
Ya likewise, I'd expect this to be handled in the Kafka code without the
need for special handling by Beam. I'll reach out to the Kafka mailing list as
well and try to get a better understanding of the root issue. Thanks for
your time so far John! I'll ping this thread with any interesting findings.
In principle yes, but I don't see any Beam-level code to handle that. I'm a
bit surprised it isn't handled in the Kafka producer layer itself.
On Tue, Sep 13, 2022 at 11:15 AM Evan Galpin wrote:
I'm not certain based on the logs where the disconnect is starting. I have
seen TimeoutExceptions like those mentioned in the SO issue you linked, so
if we assume it's starting from the Kafka cluster side, my concern is that
the producers don't seem to be able to gracefully recover. Given that
Googling that error message returned
https://stackoverflow.com/questions/71077394/kafka-producer-resetting-the-last-seen-epoch-of-partition-resulting-in-timeout
and
https://github.com/a0x8o/kafka/blob/master/clients/src/main/java/org/apache/kafka/clients/Metadata.java#L402
which suggests that
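For readers following the Metadata.java link above: the "resetting the last seen epoch" log comes from the client's per-partition leader-epoch tracking. The gist, paraphrased as a sketch rather than the actual Kafka implementation, is that the client only accepts metadata whose leader epoch is at least as new as the last one it saw for that partition; the partition name and method below are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class LastSeenEpoch {
    // Illustrative model of per-partition leader-epoch tracking;
    // not the actual Kafka client code.
    private final Map<String, Integer> lastSeenEpoch = new HashMap<>();

    /** Returns true if the epoch was accepted (first seen, or not older than the last one). */
    public boolean updateIfNewer(String partition, int epoch) {
        Integer current = lastSeenEpoch.get(partition);
        if (current == null || epoch >= current) {
            lastSeenEpoch.put(partition, epoch);
            return true;
        }
        // Stale epoch: the metadata update for this partition is rejected. If the
        // cluster keeps reporting an older epoch, metadata can stay stale and
        // sends can eventually time out.
        return false;
    }
}
```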
Thanks for the quick reply John! I should also add that the root issue is
not so much the logging, rather that these log messages seem to be
correlated with periods where producers are not able to publish data to
Kafka. The issue of not being able to publish data does not seem to
resolve until
Hi Evan,
I haven't seen this before. Can you share your Kafka write configuration,
and any other stack traces that could be relevant?
John
On Tue, Sep 13, 2022 at 10:23 AM Evan Galpin wrote:
Hi all,
I've recently started using the KafkaIO connector as a sink, and am new to
Kafka in general. My Kafka clusters are hosted by Confluent Cloud. I'm
using Beam SDK 2.41.0. At least daily, the producers in my Beam pipeline
are getting stuck in a loop frantically logging this message:
Node