robertwb commented on code in PR #25200:
URL: https://github.com/apache/beam/pull/25200#discussion_r1093494849


##########
sdks/python/apache_beam/pipeline.py:
##########
@@ -525,6 +525,31 @@ def run(self, test_runner_api='AUTO'):
     self.contains_external_transforms = (
         ExternalTransformFinder.contains_external_transforms(self))
 
+    # Check whether RunInference has side inputs enabled, and whether the
+    # side input's window is global with a non-default trigger.
+    run_inference_visitor = RunInferenceVisitor().visit_run_inference(self)
+    self._run_inference_contains_side_input = (
+        run_inference_visitor.contains_run_inference_side_inputs)
+
+    self.run_inference_global_window_non_default_trigger = (
+        run_inference_visitor.contains_global_windows_non_default_trigger)
+
+    if (self._run_inference_contains_side_input and
+        not self._options.view_as(StandardOptions).streaming):
+      raise RuntimeError(
+          "SideInputs to RunInference PTransform is only supported "

Review Comment:
   In batch, the "latest" version of the model is likely to be the only version 
of the model, but that doesn't mean it can't be computed lazily. This also 
avoids having to have two separate codepaths (known at construction time can be 
"upgraded" to Create + Side Input). 
   
   If you do have a streaming source, the pipeline should automatically run in 
streaming mode. (Also, IMHO, just because a feature is more useful in one mode 
than the other doesn't mean we should prohibit it in the other unless it's 
actually detrimental.)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
