[
https://issues.apache.org/jira/browse/BEAM-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17482874#comment-17482874
]
YeT commented on BEAM-9082:
---------------------------
Thank you for looking into this. To provide more context, this happened with
Beam 2.34, Flink 1.13, and beam-sdk-py38. It is non-deterministic. When it
happens, it is usually on longer-running jobs with larger data that last
more than 20 minutes.
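For anyone filtering these at scale: a minimal sketch (not Beam code; the function name is hypothetical) that classifies a gRPC `debug_error_string`, like the one in the quoted log, as the spurious shutdown error, assuming the debug string is the JSON object gRPC emits:

```python
import json

def is_spurious_socket_closed(debug_error_string: str) -> bool:
    """Return True if a gRPC debug_error_string describes the benign
    'Socket closed' (StatusCode.UNAVAILABLE, grpc_status 14) error
    seen on job shutdown. Hypothetical helper, not part of Beam."""
    try:
        info = json.loads(debug_error_string)
    except ValueError:
        return False
    return (info.get("grpc_status") == 14
            and "Socket closed" in info.get("grpc_message", ""))
```

A log filter could drop (or downgrade to DEBUG) records matching this predicate once the pipeline has reported success, instead of surfacing them as ERROR.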
> "Socket closed" Spurious GRPC errors in Flink/Spark runner log output
> ---------------------------------------------------------------------
>
> Key: BEAM-9082
> URL: https://issues.apache.org/jira/browse/BEAM-9082
> Project: Beam
> Issue Type: Sub-task
> Components: runner-flink, runner-spark
> Reporter: Kyle Weaver
> Priority: P3
> Labels: portability-flink, portability-spark
>
> We often see "Socket closed" errors on job shutdown, even though the pipeline
> has finished successfully. They are misleading and especially annoying at
> scale.
> ERROR:root:Failed to read inputs in the data plane.
> ...
> grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that
> terminated with:
> status = StatusCode.UNAVAILABLE
> details = "Socket closed"
> debug_error_string = "{"created":"@1578597616.309419460","description":"Error received from peer ipv6:[::1]:37211","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}"
--
This message was sent by Atlassian Jira
(v8.20.1#820001)