gemini-code-assist[bot] commented on code in PR #37471:
URL: https://github.com/apache/beam/pull/37471#discussion_r2760685783


##########
sdks/java/ml/inference/remote/src/main/java/org/apache/beam/sdk/ml/inference/remote/RetryHandler.java:
##########
@@ -82,13 +77,12 @@ public <T> T execute(RetryableRequest<T> request) throws Exception {
         if (backoffMillis == BackOff.STOP) {
           LOG.error("Request failed after {} retry attempts.", attempt);
           throw new RuntimeException(
-            "Request failed after exhausting retries. " +
-              "Max retries: " + maxRetries + ", " ,
-            lastException);
+              "Request failed after exhausting retries. " + "Max retries: " + maxRetries + ", ",
+              lastException);
         }
 
         attempt++;
-        LOG.warn("Retry request attempt {} failed with: {}. Retrying in {} ms", attempt, e.getMessage(), backoffMillis);
+        LOG.warn("Retry request attempt {} failed. Retrying in {} ms", attempt, backoffMillis, e);

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   While logging the full exception is a great improvement for debugging, the 
log message itself has lost some useful context by removing the exception 
message. You can include both the message for quick scanning and the full stack 
trace for deep dives.
   
   ```suggestion
           LOG.warn("Retry request attempt {} failed with: {}. Retrying in {} ms", attempt, e.getMessage(), backoffMillis, e);
   ```
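   For readers skimming this thread: SLF4J treats a trailing `Throwable` argument specially, so the suggested call formats the `{}` placeholders from the preceding arguments and still attaches the full stack trace. The retry-and-rethrow pattern under discussion can be sketched in plain Java (a minimal sketch with hypothetical names, no Beam or SLF4J dependency, printing instead of logging):

   ```java
   public class RetrySketch {
     // Hypothetical stand-in for the handler's retryable operation.
     interface RetryableRequest<T> {
       T call() throws Exception;
     }

     static <T> T execute(RetryableRequest<T> request, int maxRetries, long initialBackoffMillis)
         throws Exception {
       Exception lastException = null;
       long backoffMillis = initialBackoffMillis;
       for (int attempt = 0; attempt <= maxRetries; attempt++) {
         try {
           return request.call();
         } catch (Exception e) {
           lastException = e;
           if (attempt == maxRetries) {
             // Chain the last failure as the cause so its stack trace survives the rethrow.
             throw new RuntimeException(
                 "Request failed after exhausting retries. Max retries: " + maxRetries + ", ",
                 lastException);
           }
           // Summarize the failure inline; a real logger would also get `e` as the last argument.
           System.out.printf(
               "Retry request attempt %d failed with: %s. Retrying in %d ms%n",
               attempt + 1, e.getMessage(), backoffMillis);
           Thread.sleep(backoffMillis);
           backoffMillis *= 2; // exponential backoff
         }
       }
       throw lastException; // unreachable when maxRetries >= 0
     }

     public static void main(String[] args) throws Exception {
       int[] calls = {0};
       // Fails twice, then succeeds on the third attempt.
       String result =
           execute(
               () -> {
                 if (++calls[0] < 3) {
                   throw new IllegalStateException("transient failure " + calls[0]);
                 }
                 return "ok";
               },
               5,
               10);
       System.out.println(result);
     }
   }
   ```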



##########
sdks/java/ml/inference/openai/src/test/java/org/apache/beam/sdk/ml/inference/openai/OpenAIModelHandlerIT.java:
##########
@@ -252,33 +292,40 @@ public void testWithDifferentModel() {
     // Test with a different model
     String input = "Explain quantum computing in one sentence.";
 
-    PCollection<OpenAIModelInput> inputs = pipeline
-      .apply("CreateInput", Create.of(input))
-      .apply("MapToInput", MapElements
-        .into(TypeDescriptor.of(OpenAIModelInput.class))
-        .via(OpenAIModelInput::create));
-
-    PCollection<Iterable<PredictionResult<OpenAIModelInput, OpenAIModelResponse>>> results = inputs
-      .apply("DifferentModelInference",
-        RemoteInference.<OpenAIModelInput, OpenAIModelResponse>invoke()
-          .handler(OpenAIModelHandler.class)
-          .withParameters(OpenAIModelParameters.builder()
-            .apiKey(apiKey)
-            .modelName("gpt-5")
-            .instructionPrompt("Respond concisely")
-            .build()));
-
-    PAssert.that(results).satisfies(batches -> {
-      for (Iterable<PredictionResult<OpenAIModelInput, OpenAIModelResponse>> batch : batches) {
-        for (PredictionResult<OpenAIModelInput, OpenAIModelResponse> result : batch) {
-          assertNotNull("Output should not be null",
-            result.getOutput().getModelResponse());
-          assertFalse("Output should not be empty",
-            result.getOutput().getModelResponse().trim().isEmpty());
-        }
-      }
-      return null;
-    });
+    PCollection<OpenAIModelInput> inputs =
+        pipeline
+            .apply("CreateInput", Create.of(input))
+            .apply(
+                "MapToInput",
+                MapElements.into(TypeDescriptor.of(OpenAIModelInput.class))
+                    .via(OpenAIModelInput::create));
+
+    PCollection<Iterable<PredictionResult<OpenAIModelInput, OpenAIModelResponse>>> results =
+        inputs.apply(
+            "DifferentModelInference",
+            RemoteInference.<OpenAIModelInput, OpenAIModelResponse>invoke()
+                .handler(OpenAIModelHandler.class)
+                .withParameters(
+                    OpenAIModelParameters.builder()
+                        .apiKey(apiKey)
+                        .modelName("gpt-5")

Review Comment:
   ![high](https://www.gstatic.com/codereviewagent/high-priority.svg)
   
   The `gpt-5` model does not exist, which will cause this test to fail with an 
API error. However, the test asserts a successful outcome. To fix this, please 
use a valid and available OpenAI model. For example, you could use 
`gpt-4-turbo` if you want to test a different model from the default 
`gpt-4o-mini`.
   
   ```suggestion
                           .modelName("gpt-4-turbo")
   ```
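   On keeping hard-coded model names in integration tests from going stale: one common pattern (a sketch of an alternative, not what this PR does; the property name `openai.it.model` is hypothetical) is to resolve the model from a system property with a known-good default:

   ```java
   public class ModelNameConfig {
     /**
      * Resolves the model name for integration tests from -Dopenai.it.model,
      * falling back to a default that is known to be available.
      */
     static String resolveModelName() {
       return System.getProperty("openai.it.model", "gpt-4o-mini");
     }

     public static void main(String[] args) {
       // With no -Dopenai.it.model set, the default is used.
       System.out.println(resolveModelName());
     }
   }
   ```

   Passing `-Dopenai.it.model=gpt-4-turbo` to the test JVM would then exercise an alternate model without editing the test source.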



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
