See <https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/719/display/redirect?page=changes>

Changes:

[noreply] Bump cloud.google.com/go/bigquery from 1.43.0 to 1.44.0 in /sdks

[noreply] ML notebook formatting and text updates (#24437)

[noreply] lint fixes (#24455)

[noreply] Bump cloud.google.com/go/pubsub from 1.26.0 to 1.27.0 in /sdks (#24450)

[noreply] Install venv dependencies in local env setup (#24461)

[noreply] Don't set BigQuery services in schema transform configuration (#24316)

[noreply] Apache playground blog (#24431)

[noreply] Sort SchemaTransform configuration schema fields by name to establish

[noreply] Rename from default-pool to pool-1 (#24466)

[Kenneth Knowles] Moving to 2.45.0-SNAPSHOT on master branch.

[noreply] [Website] change svg to png (#24268)

[noreply] Fix error messages in cred rotation email (#24474)

[noreply] Add integer to NUMERIC and BIGNUMERIC conversion support (#24447)

[noreply] Update jackson dep. (#24445)

[noreply] Reduce calls to FileSystem.match and API calls in FileSystem._list

[noreply] Capture full response context to provide complete error information

[noreply] Pandas 1.5 support (#23973)


------------------------------------------
[...truncated 41.09 KB...]
      payload: "\n6gcr.io/apache-beam-testing/beam-sdk/beam_go_sdk:latest"
      capabilities: "beam:protocol:progress_reporting:v1"
      capabilities: "beam:protocol:multi_core_bundle_processing:v1"
      capabilities: "beam:transform:sdf_truncate_sized_restrictions:v1"
      capabilities: "beam:protocol:****_status:v1"
      capabilities: "beam:protocol:monitoring_info_short_ids:v1"
      capabilities: "beam:version:sdk_base:go"
      capabilities: "beam:coder:bytes:v1"
      capabilities: "beam:coder:bool:v1"
      capabilities: "beam:coder:varint:v1"
      capabilities: "beam:coder:double:v1"
      capabilities: "beam:coder:string_utf8:v1"
      capabilities: "beam:coder:length_prefix:v1"
      capabilities: "beam:coder:kv:v1"
      capabilities: "beam:coder:iterable:v1"
      capabilities: "beam:coder:state_backed_iterable:v1"
      capabilities: "beam:coder:windowed_value:v1"
      capabilities: "beam:coder:global_window:v1"
      capabilities: "beam:coder:interval_window:v1"
      capabilities: "beam:coder:row:v1"
      capabilities: "beam:coder:nullable:v1"
      dependencies: <
        type_urn: "beam:artifact:type:file:v1"
        role_urn: "beam:artifact:role:go_****_binary:v1"
      >
    >
  >
>
root_transform_ids: "s1"
root_transform_ids: "e4"
root_transform_ids: "s2"
root_transform_ids: "e8"
root_transform_ids: "e9_cogbk"
root_transform_ids: "e10"
root_transform_ids: "e11"
requirements: "beam:requirement:pardo:splittable_dofn:v1"
2022/12/02 08:28:20 Using specified worker binary: 'linux_amd64/cogbk'
2022/12/02 08:28:21 Prepared job with id: load-tests-go-flink-batch-cogbk-2-1202065536_fffcbef9-1ce0-451f-9fa5-9d37aec7fcab and staging token: load-tests-go-flink-batch-cogbk-2-1202065536_fffcbef9-1ce0-451f-9fa5-9d37aec7fcab
2022/12/02 08:28:27 Staged binary artifact with token: 
2022/12/02 08:28:29 Submitted job: load0tests0go0flink0batch0cogbk0201202065536-root-1202082827-fffd66df_add8465c-2e18-4e57-ad30-95cea49d3276
2022/12/02 08:28:29 Job state: STOPPED
2022/12/02 08:28:29 Job state: STARTING
2022/12/02 08:28:29 Job state: RUNNING
2022/12/02 08:40:27  (): org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 533eef7167e66c70c828f7982dde9fc1)
        at 
org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:130)
        at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
        at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at 
org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:403)
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$26(RestClusterClient.java:708)
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
        at 
org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:403)
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
        at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
        at 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
        at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
        at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
        at 
org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:128)
        ... 24 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
        at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
        at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
        at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:301)
        at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:291)
        at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:282)
        at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:739)
        at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:78)
        at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:443)
        at jdk.internal.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
        at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:566)
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:304)
        at 
org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:302)
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:217)
        at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:78)
        at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:163)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24)
        at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20)
        at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
        at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
        at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
        at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
        at akka.actor.Actor.aroundReceive(Actor.scala:537)
        at akka.actor.Actor.aroundReceive$(Actor.scala:535)
        at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
        at akka.actor.ActorCell.invoke(ActorCell.scala:548)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
        at akka.dispatch.Mailbox.run(Mailbox.scala:231)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at 
java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
        at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
        at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.OutOfMemoryError: Direct buffer memory. The direct 
out-of-memory error has occurred. This can mean two things: either job(s) 
require(s) a larger size of JVM direct memory or there is a direct memory leak. 
The direct memory can be allocated by user code or some of its dependencies. In 
this case 'taskmanager.memory.task.off-heap.size' configuration option should 
be increased. Flink framework and its dependencies also consume the direct 
memory, mostly for network communication. The most of network memory is managed 
by Flink and should not result in out-of-memory error. In certain special 
cases, in particular for jobs with high parallelism, the framework may require 
more direct memory which is not managed by Flink. In this case 
'taskmanager.memory.framework.off-heap.size' configuration option should be 
increased. If the error persists then there is probably a direct memory leak in 
user code or some of its dependencies which has to be investigated and fixed. 
The task executor has to be shutdown...
        at java.nio.Bits.reserveMemory(Bits.java:175)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:649)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:624)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:203)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:187)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocate(PoolArena.java:136)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PoolArena.allocate(PoolArena.java:126)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:396)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:124)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:51)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeRaw(MessageFramer.java:285)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.access$400(MessageFramer.java:43)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer$OutputStreamAdapter.write(MessageFramer.java:375)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.write(CodedOutputStream.java:2984)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeLazy(CodedOutputStream.java:2992)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.ByteString$LiteralByteString.writeTo(ByteString.java:1470)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:461)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:461)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.RopeByteString.writeTo(RopeByteString.java:460)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeBytesNoTag(CodedOutputStream.java:2780)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeBytes(CodedOutputStream.java:2754)
        at 
org.apache.beam.model.fnexecution.v1.BeamFnApi$Elements$Data.writeTo(BeamFnApi.java:31639)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeMessageNoTag(CodedOutputStream.java:2834)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.CodedOutputStream$OutputStreamEncoder.writeMessage(CodedOutputStream.java:2803)
        at 
org.apache.beam.model.fnexecution.v1.BeamFnApi$Elements.writeTo(BeamFnApi.java:33712)
        at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:83)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.protobuf.lite.ProtoInputStream.drainTo(ProtoInputStream.java:52)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeToOutputStream(MessageFramer.java:267)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeKnownLengthUncompressed(MessageFramer.java:229)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:168)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:141)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:65)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessageInternal(ServerCallImpl.java:171)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:153)
        at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:380)
        at 
org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:108)
        at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.flushInternal(BeamFnDataOutboundAggregator.java:169)
        at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.access$400(BeamFnDataOutboundAggregator.java:65)
        at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator$Receiver.accept(BeamFnDataOutboundAggregator.java:349)
        at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundObserver.accept(BeamFnDataOutboundObserver.java:85)
        at 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:716)
        at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:363)
        at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:268)
        at 
org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:113)
        at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:514)
        at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:357)
        at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
        at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
        at java.lang.Thread.run(Thread.java:829)
        Suppressed: java.lang.IllegalStateException: call is closed
                at 
org.apache.beam.vendor.grpc.v1p48p1.com.google.common.base.Preconditions.checkState(Preconditions.java:502)
                at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessageInternal(ServerCallImpl.java:161)
                at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.internal.ServerCallImpl.sendMessage(ServerCallImpl.java:153)
                at 
org.apache.beam.vendor.grpc.v1p48p1.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:380)
                at 
org.apache.beam.sdk.fn.stream.DirectStreamObserver.onNext(DirectStreamObserver.java:108)
                at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundAggregator.sendOrCollectBufferedDataAndFinishOutboundStreams(BeamFnDataOutboundAggregator.java:219)
                at 
org.apache.beam.sdk.fn.data.BeamFnDataOutboundObserver.close(BeamFnDataOutboundObserver.java:71)
                at 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.close(SdkHarnessClient.java:727)
                at 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:492)
                at 
org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:555)
                at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.$closeResource(FlinkExecutableStageFunction.java:269)
                at 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:269)
                ... 8 more
                Suppressed: java.lang.IllegalStateException: Processing bundle failed, TODO: [https://github.com/apache/beam/issues/18756] abort bundle.
                        at 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:517)
                        ... 11 more
2022/12/02 08:40:27  (): java.lang.OutOfMemoryError: Direct buffer memory. The 
direct out-of-memory error has occurred. This can mean two things: either 
job(s) require(s) a larger size of JVM direct memory or there is a direct 
memory leak. The direct memory can be allocated by user code or some of its 
dependencies. In this case 'taskmanager.memory.task.off-heap.size' 
configuration option should be increased. Flink framework and its dependencies 
also consume the direct memory, mostly for network communication. The most of 
network memory is managed by Flink and should not result in out-of-memory 
error. In certain special cases, in particular for jobs with high parallelism, 
the framework may require more direct memory which is not managed by Flink. In 
this case 'taskmanager.memory.framework.off-heap.size' configuration option 
should be increased. If the error persists then there is probably a direct 
memory leak in user code or some of its dependencies which has to be 
investigated and fixed. The task executor has to be shutdown...
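The two settings named in this error live in the Flink cluster configuration rather than in the Beam pipeline itself. A minimal sketch of raising them in flink-conf.yaml (the sizes below are illustrative placeholders, not values taken from this job's actual setup):

    taskmanager.memory.task.off-heap.size: 512m        # direct memory budget for user code / the SDK harness, per the error text
    taskmanager.memory.framework.off-heap.size: 256m   # direct memory reserved for Flink's own networking, per the error text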
2022/12/02 08:40:27 Job state: FAILED
2022/12/02 08:40:27 Failed to execute job: job load0tests0go0flink0batch0cogbk0201202065536-root-1202082827-fffd66df_add8465c-2e18-4e57-ad30-95cea49d3276 failed
panic: Failed to execute job: job load0tests0go0flink0batch0cogbk0201202065536-root-1202082827-fffd66df_add8465c-2e18-4e57-ad30-95cea49d3276 failed

goroutine 1 [running]:
github.com/apache/beam/sdks/v2/go/pkg/beam/log.Fatalf({0x17f5d28, 0xc000198000}, {0x164a227?, 0x227a8b8?}, {0xc000673e60?, 0x0?, 0x3e8?})
        <https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/pkg/beam/log/log.go>:153 +0xa5
main.main()
        <https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/test/load/cogbk/cogbk.go>:103 +0x4bb

> Task :sdks:go:test:load:run FAILED

FAILURE: Build failed with an exception.

* Where:
Build file '<https://ci-beam.apache.org/job/beam_LoadTests_Go_CoGBK_Flink_batch/ws/src/sdks/go/test/load/build.gradle>' line: 31

* What went wrong:
Execution failed for task ':sdks:go:test:load:run'.
> Process 'command 'sh'' finished with non-zero exit value 2

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.

* Get more help at https://help.gradle.org
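As a hedged illustration of those suggestions, the failing task could be re-run from a Beam checkout with the extra flags (the required load-test properties are omitted here, and the actual Jenkins invocation may differ):

    ./gradlew :sdks:go:test:load:run --stacktrace --info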

BUILD FAILED in 12m 47s
12 actionable tasks: 6 executed, 4 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/su3bqudfrfymo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
