damccorm commented on code in PR #35066:
URL: https://github.com/apache/beam/pull/35066#discussion_r2112584122


##########
examples/notebooks/beam-ml/custom_remote_inference.ipynb:
##########
@@ -137,7 +137,7 @@
       "source": [
         "!pip install --upgrade pip\n",
         "!pip install protobuf==3.19.4\n",
-        "!pip install apache-beam[interactive,gcp]>=2.40.0\n",
+        "!pip install apache-beam[interactive,gcp]==2.65.0\n",

Review Comment:
   ```suggestion
           "!pip install apache-beam[interactive,gcp]>=2.65.0\n",
   ```



##########
examples/notebooks/beam-ml/custom_remote_inference.ipynb:
##########
@@ -257,29 +255,35 @@
         "\n",
         "When you run remote inference, prepare to encounter, identify, and 
handle failure as gracefully as possible. We recommend using the following 
techniques:\n",
         "\n",
-        "* **Exponential backoff:** Retry failed remote calls with 
exponentially growing pauses between retries. Using exponential backoff ensures 
that failures don't lead to an overwhelming number of retries in quick 
succession.\n",
+        "* **Exponential backoff:** Retry failed remote calls with 
exponentially growing pauses between retries. Using exponential backoff ensures 
that failures don't lead to an overwhelming number of retries in quick 
succession. The `RemoteModelHandler` base class handles this logic, with the 
`retry_fn` argument determining which errors are retryable. For this example we 
will always retry. \n",
         "\n",
         "* **Dead-letter queues:** Route failed inferences to a separate 
`PCollection` without failing the whole transform. Continue execution without 
failing the job (batch jobs' default behavior) or retrying indefinitely 
(streaming jobs' default behavior).\n",
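
   For reference, the backoff pattern the updated text describes looks roughly like the sketch below. This is a standalone illustration, not Beam API: `call_with_backoff` and `retryable` are hypothetical names, and the actual retry logic lives inside `RemoteModelHandler`.

   ```python
   import random
   import time

   def call_with_backoff(fn, max_retries=5, base_delay_secs=1.0,
                         retryable=lambda e: True):
       """Retry fn() with exponentially growing, jittered pauses between attempts."""
       for attempt in range(max_retries):
           try:
               return fn()
           except Exception as e:
               # Give up on non-retryable errors or once the retry budget is spent.
               if not retryable(e) or attempt == max_retries - 1:
                   raise
               # Pauses grow as 1s, 2s, 4s, ...; jitter keeps retries from synchronizing.
               time.sleep(base_delay_secs * (2 ** attempt) + random.uniform(0, 1))
   ```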

Review Comment:
   May be scope creep, but I think it should be pretty small - DLQs are also now part of RunInference via with_exception_handling. Could we:
   
   a) Call this out with a link to https://beam.apache.org/releases/pydoc/current/apache_beam.ml.inference.base.html#apache_beam.ml.inference.base.RunInference.with_exception_handling
   b) Actually implement it below? We could still probably just format and print the failures (in theory there should be none, so it will no-op).
   
   That way the notebook is a full version of what we are recommending customers do, including backoff and DLQs.
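   
   A minimal sketch of what (b) could look like, following the pattern in the `with_exception_handling` pydoc linked above; `model_handler` and `image_urls` are placeholders for the objects defined earlier in the notebook:
   
   ```python
   import apache_beam as beam
   from apache_beam.ml.inference.base import RunInference

   with beam.Pipeline() as pipeline:
       images = pipeline | "Read" >> beam.Create(image_urls)  # placeholder input
       # with_exception_handling routes failing elements to a dead-letter output
       # instead of failing the whole transform.
       results, failures = (
           images
           | "RunInference" >> RunInference(model_handler).with_exception_handling())
       # failed_inferences holds (example, exception) tuples; just format and print them.
       failures.failed_inferences | "PrintFailures" >> beam.Map(print)
   ```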


