Abacn commented on code in PR #30186:
URL: https://github.com/apache/beam/pull/30186#discussion_r1474731788


##########
sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/providers/BigQueryStorageWriteApiSchemaTransformProvider.java:
##########
@@ -383,13 +383,6 @@ public PCollectionRowTuple expand(PCollectionRowTuple input) {
         Boolean autoSharding = configuration.getAutoSharding();
         int numStreams = configuration.getNumStreams() == null ? 0 : configuration.getNumStreams();
 
-        // TODO(https://github.com/apache/beam/issues/30058): remove once Dataflow supports multiple
-        // DoFn's per fused step.
-        if (numStreams < 1) {
-          throw new IllegalStateException(
-              "numStreams must be set to a positive integer when input data is unbounded.");
-        }

Review Comment:
   This check is too broad and breaks existing tests that have been passing on the direct runner and Dataflow (a narrower alternative is sketched below): https://github.com/apache/beam/runs/20839188440 https://github.com/apache/beam/runs/20815247023
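   For illustration only, a narrower check might gate on boundedness and the sharding settings together. The helper name and the exact condition below are hypothetical, not code from this PR:
   
   ```java
   import org.apache.beam.sdk.values.PCollection.IsBounded;
   
   // Hypothetical sketch of a narrower validation (names and condition are
   // assumptions): reject the configuration only when the input is unbounded
   // and neither autosharding nor an explicit positive numStreams is set,
   // rather than rejecting numStreams < 1 unconditionally.
   class ShardingValidation {
     static void validate(IsBounded boundedness, Boolean autoSharding, int numStreams) {
       boolean unbounded = boundedness == IsBounded.UNBOUNDED;
       boolean autoShardingEnabled = autoSharding != null && autoSharding;
       if (unbounded && !autoShardingEnabled && numStreams < 1) {
         throw new IllegalStateException(
             "For unbounded input, enable autoSharding or set numStreams to a positive integer.");
       }
     }
   }
   ```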
   
   On the other hand, a similar restriction added on the Dataflow side also broke users and has since been rolled back: b/322741233
   
   We have yet to fully understand the cause of the "crash loopback". Using GroupIntoBatches with autosharding alone does not cause the issue; a minimal sketch of that pattern follows.
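   For reference, a minimal, self-contained sketch of the GroupIntoBatches-with-autosharding pattern referred to above. The pipeline, key, and batch size here are illustrative assumptions, not code from this PR:
   
   ```java
   import org.apache.beam.sdk.Pipeline;
   import org.apache.beam.sdk.transforms.Create;
   import org.apache.beam.sdk.transforms.GroupIntoBatches;
   import org.apache.beam.sdk.transforms.WithKeys;
   
   public class GroupIntoBatchesAutoShardingSketch {
     public static void main(String[] args) {
       Pipeline p = Pipeline.create();
       p.apply(Create.of("a", "b", "c"))
           // Key every element identically; the runner redistributes work via
           // sharded keys below, so a single logical key is fine for a sketch.
           .apply(WithKeys.of("key"))
           // withShardedKey() lets the runner (e.g. Dataflow) pick the number
           // of shards dynamically, i.e. the autosharding referred to above.
           .apply(GroupIntoBatches.<String, String>ofSize(2).withShardedKey());
       p.run().waitUntilFinish();
     }
   }
   ```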


