See
<https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/710/display/redirect?page=changes>
Changes:
[bulat.safiullin] [Website] change share-your-story link, add text to ADD_CASE_STUDY.md,
[Robert Bradshaw] Add a portable runner that renders pipelines as a dot graph.
[Robert Bradshaw] Add basic tests for render runner.
[Robert Bradshaw] Add the ability to pass a pipeline proto directly.
[Robert Bradshaw] lint
[mr.malarg] pg_23865 fix selected example at list
[leha] Expand all categories that contain a selected example (#23865)
[Valentyn Tymofieiev] Fix typo.
[Valentyn Tymofieiev] Serve the graph when output file is not specified.
[Valentyn Tymofieiev] Serve the graph when output file is not specified.
[Valentyn Tymofieiev] Fix parsing of standalone protos.
[Valentyn Tymofieiev] Support reading from GCS.
[Valentyn Tymofieiev] Add text logging.
[Valentyn Tymofieiev] fix typo.
[Valentyn Tymofieiev] Some lint and yapf.
[Robert Bradshaw] Fix dot detection logic.
[Robert Bradshaw] fix error detected by lint
[Robert Bradshaw] Make gcs an optional dependency.
[Robert Bradshaw] return rather than sys.exit
[kn1kn1] Fix mvn command to refer the GCP_REGION variable
[Robert Bradshaw] lint
[noreply] Bump github.com/aws/aws-sdk-go-v2/feature/s3/manager in /sdks (#24280)
[noreply] Copy editing the machine learning pages (#24301)
[noreply] TensorRT Custom Inference Function Implementation (#24039)
[noreply] Teach Azure Filesystem to authenticate using DefaultAzureCredential in
[noreply] Apply suggestions from code review
[noreply] Add retry to test connections (#23757)
[Robert Bradshaw] More cleanup, mypy.
[noreply] fixed typo
[noreply] [#24266] Update release candidate script to use -PisRelease (#24269)
[noreply] Golang SpannerIO Implementation (#23285)
[noreply] Add rootCaCertificate option to SplunkIO (#24229)
[noreply] [Playground] Remove example bucket (#24198)
[noreply] Extract Go and Python Beam symbols for Playground (#23378)
[noreply] Dask runner tests action (#24324)
[Robert Bradshaw] lint
------------------------------------------
[...truncated 40.98 KB...]
urn: "beam:env:docker:v1"
payload: "\n6gcr.io/apache-beam-testing/beam-sdk/beam_go_sdk:latest"
capabilities: "beam:protocol:progress_reporting:v1"
capabilities: "beam:protocol:multi_core_bundle_processing:v1"
capabilities: "beam:transform:sdf_truncate_sized_restrictions:v1"
capabilities: "beam:protocol:****_status:v1"
capabilities: "beam:protocol:monitoring_info_short_ids:v1"
capabilities: "beam:version:sdk_base:go"
capabilities: "beam:coder:bytes:v1"
capabilities: "beam:coder:bool:v1"
capabilities: "beam:coder:varint:v1"
capabilities: "beam:coder:double:v1"
capabilities: "beam:coder:string_utf8:v1"
capabilities: "beam:coder:length_prefix:v1"
capabilities: "beam:coder:kv:v1"
capabilities: "beam:coder:iterable:v1"
capabilities: "beam:coder:state_backed_iterable:v1"
capabilities: "beam:coder:windowed_value:v1"
capabilities: "beam:coder:global_window:v1"
capabilities: "beam:coder:interval_window:v1"
capabilities: "beam:coder:row:v1"
capabilities: "beam:coder:nullable:v1"
dependencies: <
type_urn: "beam:artifact:type:file:v1"
role_urn: "beam:artifact:role:go_****_binary:v1"
>
>
>
>
root_transform_ids: "s1"
root_transform_ids: "e4"
root_transform_ids: "s2"
root_transform_ids: "e8"
root_transform_ids: "e9_cogbk"
root_transform_ids: "e10"
root_transform_ids: "e11"
requirements: "beam:requirement:pardo:splittable_dofn:v1"
2022/11/23 08:27:32 Using specified **** binary: 'linux_amd64/cogbk'
2022/11/23 08:27:32 Prepared job with id: load-tests-go-flink-batch-cogbk-2-1123065549_211f813f-e004-4af1-b6a4-5eef82cbb716 and staging token: load-tests-go-flink-batch-cogbk-2-1123065549_211f813f-e004-4af1-b6a4-5eef82cbb716
2022/11/23 08:27:41 Staged binary artifact with token:
2022/11/23 08:27:44 Submitted job: load0tests0go0flink0batch0cogbk0201123065549-root-1123082742-5fa4ffde_e854d714-618d-4ef6-89f1-2cf4d63496c8
2022/11/23 08:27:44 Job state: STOPPED
2022/11/23 08:27:44 Job state: STARTING
2022/11/23 08:27:44 Job state: RUNNING
2022/11/23 08:39:55 ():
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 0f6cfe473d225e6c2e96d493c24c97cf)
at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:130)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:403)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$26(RestClusterClient.java:708)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:403)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:128)
... 24 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
at jdk.internal.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
at org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at akka.actor.Actor.aroundReceive(Actor.scala:537)
at akka.actor.Actor.aroundReceive$(Actor.scala:535)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
at akka.actor.ActorCell.invoke(ActorCell.scala:548)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
at akka.dispatch.Mailbox.run(Mailbox.scala:231)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory. The direct
out-of-memory error has occurred. This can mean two things: either job(s)
require(s) a larger size of JVM direct memory or there is a direct memory leak.
The direct memory can be allocated by user code or some of its dependencies. In
this case 'taskmanager.memory.task.off-heap.size' configuration option should
be increased. Flink framework and its dependencies also consume the direct
memory, mostly for network communication. The most of network memory is managed
by Flink and should not result in out-of-memory error. In certain special
cases, in particular for jobs with high parallelism, the framework may require
more direct memory which is not managed by Flink. In this case
'taskmanager.memory.framework.off-heap.size' configuration option should be
increased. If the error persists then there is probably a direct memory leak in
user code or some of its dependencies which has to be investigated and fixed.
The task executor has to be shutdown...
at java.nio.Bits.reserveMemory(Bits.java:175)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:649)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:624)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:203)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:187)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocate(PoolArena.java:136)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocate(PoolArena.java:126)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
at org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:124)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:51)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeRaw(MessageFramer.java:285)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.access$400(MessageFramer.java:43)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer$OutputStreamAdapter.write(MessageFramer.java:375)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.write(CodedOutputStream.java:2984)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeLazy(CodedOutputStream.java:2992)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.ByteString$LiteralByteString.writeTo(ByteString.java:1470)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:461)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:461)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeBytesNoTag(CodedOutputStream.java:2780)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeBytes(CodedOutputStream.java:2754)
at org.apache.beam.model.fnexecution.v1.BeamFnApi$Elements$Data.writeTo(BeamFnApi.java:31639)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeMessageNoTag(CodedOutputStream.java:2834)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeMessage(CodedOutputStream.java:2803)
at org.apache.beam.model.fnexecution.v1.BeamFnApi$Elements.writeTo(BeamFnApi.java:33712)
at org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:83)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.protobuf.lite.ProtoInputStream.drainTo(ProtoInputStream.java:52)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeToOutputStream(MessageFramer.java:267)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeKnownLengthUncompressed(MessageFramer.java:229)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:168)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:141)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:65)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessageInternal(ServerCallImpl.java:171)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:153)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:380)
at org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:108)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.flushInternal(BeamFnDataOutboundAggregator.java:169)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.access$400(BeamFnDataOutboundAggregator.java:65)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator$Receiver.accept(BeamFnDataOutboundAggregator.java:349)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundObserver.accept(BeamFnDataOutboundObserver.java:85)
at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:716)
at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:363)
at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:268)
at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:113)
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:514)
at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:357)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:829)
Suppressed: java.lang.IllegalStateException: call is closed
at org.apache.beam.vendor.grpc.v1p48p1.com.google.common.base.Preconditions.checkState(Preconditions.java:502)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessageInternal(ServerCallImpl.java:161)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:153)
at org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:380)
at org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:108)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.sendOrCollectBufferedDataAndFinishOutboundStreams(BeamFnDataOutboundAggregator.java:219)
at org.apache.beam.sdk.fn.data.BeamFnDataOutboundObserver.close(BeamFnDataOutboundObserver.java:71)
at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.close(SdkHarnessClient.java:727)
at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:492)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:555)
at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.$closeResource(FlinkExecutableStageFunction.java:269)
at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:269)
... 8 more
Suppressed: java.lang.IllegalStateException: Processing bundle failed, TODO: [https://github.com/apache/beam/issues/18756] abort bundle.
at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:517)
... 11 more
2022/11/23 08:39:55 (): java.lang.OutOfMemoryError: Direct buffer memory. The
direct out-of-memory error has occurred. This can mean two things: either
job(s) require(s) a larger size of JVM direct memory or there is a direct
memory leak. The direct memory can be allocated by user code or some of its
dependencies. In this case 'taskmanager.memory.task.off-heap.size'
configuration option should be increased. Flink framework and its dependencies
also consume the direct memory, mostly for network communication. The most of
network memory is managed by Flink and should not result in out-of-memory
error. In certain special cases, in particular for jobs with high parallelism,
the framework may require more direct memory which is not managed by Flink. In
this case 'taskmanager.memory.framework.off-heap.size' configuration option
should be increased. If the error persists then there is probably a direct
memory leak in user code or some of its dependencies which has to be
investigated and fixed. The task executor has to be shutdown...
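As the Flink error text above suggests, the usual remediation for this failure is to raise the task managers' direct-memory budget. A minimal flink-conf.yaml sketch using the two options named in the error message (the sizes are illustrative assumptions, not values taken from this build):

```yaml
# Off-heap (direct) memory available to user code and its dependencies,
# e.g. the vendored gRPC/Netty buffers visible in the stack trace above.
taskmanager.memory.task.off-heap.size: 512m

# Direct memory reserved for Flink's own framework needs; the error text
# notes this may need to grow for jobs with high parallelism.
taskmanager.memory.framework.off-heap.size: 256m
```

If the OutOfMemoryError persists after raising these limits, the error message's remaining hypothesis (a direct-memory leak in user code or a dependency) would need to be investigated instead.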
2022/11/23 08:39:55 Job state: FAILED
2022/11/23 08:39:55 Failed to execute job: job load0tests0go0flink0batch0cogbk0201123065549-root-1123082742-5fa4ffde_e854d714-618d-4ef6-89f1-2cf4d63496c8 failed
panic: Failed to execute job: job load0tests0go0flink0batch0cogbk0201123065549-root-1123082742-5fa4ffde_e854d714-618d-4ef6-89f1-2cf4d63496c8 failed
goroutine 1 [running]:
github.com/apache/beam/sdks/v2/go/pkg/beam/log.Fatalf({0x17e9928, 0xc000120000}, {0x163eae1?, 0x22648a8?}, {0xc0005d5e60?, 0x0?, 0x3e8?})
    <https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/pkg/beam/log/log.go>:153 +0xa5
main.main()
    <https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/test/load/cogbk/cogbk.go>:103 +0x4bb
> Task :sdks:go:test:load:run FAILED
FAILURE: Build failed with an exception.
* Where:
Build file '<https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/test/load/build.gradle>' line: 31
* What went wrong:
Execution failed for task ':sdks:go:test:load:run'.
> Process 'command 'sh'' finished with non-zero exit value 2
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
BUILD FAILED in 12m 43s
12 actionable tasks: 6 executed, 4 from cache, 2 up-to-date
Publishing build scan...
https://gradle.com/s/rz4zcncvnlz5g
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]