[ https://issues.apache.org/jira/browse/BEAM-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Filip Popić updated BEAM-12359:
-------------------------------
    Description: 
When reading from BigQuery with BigQueryIO using the DIRECT_READ method (the BigQuery Storage Read API), the following warning is logged:
{code:java}
*~*~*~ Channel ManagedChannelImpl{logId=1, target=bigquerystorage.googleapis.com:443} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
{code}

The pipeline then fails on the worker with a NullPointerException while splitting the BigQuery storage source:
{code:java}
2021-05-12 13:25:49.319 CEST Error message from worker:
java.lang.NullPointerException
org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageSourceBase.split(BigQueryStorageSourceBase.java:105)
org.apache.beam.sdk.io.gcp.bigquery.BigQueryStorageTableSource.split(BigQueryStorageTableSource.java:40)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:294)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:216)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:200)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:179)
org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:82)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:420)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:389)
org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:314)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
{code}
A minimal sbt/Scala example that reproduces the issue is available at [https://github.com/fpopic/BEAM-12359] (if required, I can try to port it to Java); a hedged Java sketch of the read is included below.
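For context, this is a minimal Java sketch of the kind of read that triggers the behaviour, assuming Beam 2.29.0 with the beam-sdks-java-io-google-cloud-platform module on the classpath. The class name, table spec, and pipeline options are illustrative placeholders and are not taken from the linked repro project.

{code:java}
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class DirectReadSketch {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline pipeline = Pipeline.create(options);

    pipeline
        // Read table rows through the BigQuery Storage Read API instead of an export job.
        .apply("ReadFromBigQuery",
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.my_table") // placeholder table spec
                .withMethod(Method.DIRECT_READ))
        // Trivial downstream step so the read output is consumed.
        .apply("FormatRows",
            MapElements.into(TypeDescriptors.strings())
                .via((TableRow row) -> row.toString()));

    pipeline.run().waitUntilFinish();
  }
}
{code}

Running this on Dataflow (e.g. with --runner=DataflowRunner plus the usual project/region/tempLocation options) should exercise the same BigQueryStorageSourceBase.split path shown in the stack trace above.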

Relates to:

- [https://lists.apache.org/x/thread.html/r705ca8f0ea9517f148637be59e509ce80fa75b22c98ad63acd551065@%3Cuser.beam.apache.org%3E]


> Throwing warning/error when reading by using BigQuery Storage Read API
> ----------------------------------------------------------------------
>
>                 Key: BEAM-12359
>                 URL: https://issues.apache.org/jira/browse/BEAM-12359
>             Project: Beam
>          Issue Type: Bug
>          Components: extensions-java-gcp
>    Affects Versions: 2.29.0
>            Reporter: Filip Popić
>            Assignee: Kenneth Jung
>            Priority: P2
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
