yeandy commented on code in PR #22795:
URL: https://github.com/apache/beam/pull/22795#discussion_r954208736


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -40,11 +41,32 @@
 def _load_model(
     model_class: torch.nn.Module, state_dict_path, device, **model_params):
   model = model_class(**model_params)
-  model.to(device)
+
+  if device == torch.device('cuda') and not torch.cuda.is_available():
+    logging.warning(
+        "Model handler specified a 'GPU' device, but GPUs are not available. " \
+        "Switching to CPU.")
+    device = torch.device('cpu')
+
   file = FileSystems.open(state_dict_path, 'rb')
-  model.load_state_dict(torch.load(file))
+  try:
+    logging.info(
+        "Loading state_dict_path %s onto a %s device", state_dict_path, device)
+    state_dict = torch.load(file, map_location=device)
+  except RuntimeError as e:
+    message = "Loading the model onto a GPU device failed due to an " \
+      f"exception:\n{e}\nAttempting to load onto a CPU device instead."
+    logging.warning(message)
+
+    device = torch.device('cpu')

Review Comment:
   Thanks for bringing up your concern; I think the scenario you posed is a possibility (I just can't think of one off the top of my head), so we can incorporate this change to be safe.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@beam.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
