[ https://issues.apache.org/jira/browse/BEAM-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17347184#comment-17347184 ]

Filip Popić commented on BEAM-12359:
------------------------------------

+ Added the table and it works now.
+ Something still seems wrong regarding the ManagedChannel log (I am basically 
running Main in IntelliJ to submit the Dataflow job).
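For context, the read that Main submits is roughly of the following shape (a minimal sketch only; the project, dataset, table, and field names below are placeholders, not the actual identifiers from this pipeline):

{code:java}
// Minimal sketch of the submitted pipeline; identifiers are placeholders.
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class Main {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);

    // DIRECT_READ routes the read through the BigQuery Storage Read API, which is
    // the code path that logs the ManagedChannel warning below and, per this issue,
    // throws a NullPointerException on the worker when the source table is missing.
    pipeline.apply(
        "ReadTableRowsFromFieldsThroughStorageAPI_MyClass",
        BigQueryIO.readTableRows()
            .from("my-project:my_dataset.my_table")
            .withSelectedFields(Arrays.asList("field1", "field2"))
            .withMethod(Method.DIRECT_READ));

    // Submission to Dataflow is driven by the options passed to Main, e.g.
    // --runner=DataflowRunner --project=... --region=... --tempLocation=...
    pipeline.run();
  }
}
{code}

The relevant part of the submission log: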
{code:java}
2021-05-18 23:39:57.985 CEST [main] INFO  o.a.b.r.d.DataflowPipelineTranslator - Adding ReadBQMyClass/ReadTableRowsFromFieldsThroughStorageAPI_MyClass/Read(BigQueryStorageTableSource) as step s1
2021-05-18 11:39:57 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=3, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
    Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
    at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    at io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:612)
    at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:261)
    at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:340)
    at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.access$1600(InstantiatingGrpcChannelProvider.java:73)
    at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$1.createSingleChannel(InstantiatingGrpcChannelProvider.java:214)
    at com.google.api.gax.grpc.ChannelPool.create(ChannelPool.java:72)
    at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:221)
    at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:204)
    at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:169)
    at com.google.cloud.bigquery.storage.v1beta2.stub.GrpcBigQueryWriteStub.create(GrpcBigQueryWriteStub.java:136)
    at com.google.cloud.bigquery.storage.v1beta2.stub.BigQueryWriteStubSettings.createStub(BigQueryWriteStubSettings.java:145)
    at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.<init>(BigQueryWriteClient.java:120)
    at com.google.cloud.bigquery.storage.v1beta2.BigQueryWriteClient.create(BigQueryWriteClient.java:101)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryWriteClient(BigQueryServicesImpl.java:1255)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$800(BigQueryServicesImpl.java:135)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:521)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:449)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:169)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:965)
    at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:661)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:575)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:579)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:579)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$500(TransformHierarchy.java:239)
    at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:213)
    at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:468)
    at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:597)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:321)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:308)
{code}
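
For reference, the cleanup that SEVERE message asks for is the generic gRPC shutdown pattern sketched below. This is for illustration only: here the channel is created internally by BigQueryWriteClient via Beam's BigQueryServicesImpl, so the missing shutdown would be on the Beam/client side, not in user pipeline code.

{code:java}
import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;

// Generic gRPC channel cleanup that the "was not shutdown properly" warning refers to.
public final class ChannelCleanup {
  private ChannelCleanup() {}

  public static void close(ManagedChannel channel) throws InterruptedException {
    channel.shutdown();                                    // stop accepting new calls
    if (!channel.awaitTermination(30, TimeUnit.SECONDS)) { // wait for in-flight calls
      channel.shutdownNow();                               // force-cancel what is left
      channel.awaitTermination(5, TimeUnit.SECONDS);
    }
  }
}
{code}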

> BigQuery Storage Read API source throws NullPointerException when a source 
> table is not found
> ---------------------------------------------------------------------------------------------
>
>                 Key: BEAM-12359
>                 URL: https://issues.apache.org/jira/browse/BEAM-12359
>             Project: Beam
>          Issue Type: Bug
>          Components: extensions-java-gcp
>    Affects Versions: 2.29.0
>            Reporter: Filip Popić
>            Assignee: Vachan Shetty
>            Priority: P2
>
> When reading from BigQueryIO using DIRECT_READ, I am getting
> {code:java}
> ~ Channel ManagedChannelImpl{logId=1, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~
> Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
> {code}
> {code:java}
> 2021-05-12 13:25:49.319 CEST Error message from worker:
> java.lang.NullPointerException
> org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageSourceBase.split(BigQueryStorageSourceBase.java:105)
> org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageTableSource.split(BigQueryStorageTableSource.java:40)
> org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:294)
> org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:216)
> org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:200)
> org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:179)
> org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:82)
> org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
> org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
> org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
> org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
> org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
> org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> {code}
> A minimal sbt/Scala example to reproduce: [https://github.com/fpopic/BEAM-12359] (if required, I can try to make it in Java).
> Relates to this user mailing list 
> [question|https://lists.apache.org/x/thread.html/r705ca8f0ea9517f148637be59e509ce80fa75b22c98ad63acd551065@%3Cuser.beam.apache.org%3E].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
