dannycjones commented on code in PR #5110:
URL: https://github.com/apache/hadoop/pull/5110#discussion_r1022901285


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/impl/LoggingAuditor.java:
##########
@@ -230,6 +240,26 @@ private class LoggingAuditSpan extends AbstractAuditSpanImpl {
 
     private final HttpReferrerAuditHeader referrer;
 
+    /**
+     * Attach the range of data for a GetObject request.
+     * @param request the given GetObject request
+     */
+    private void attachRangeFromRequest(AmazonWebServiceRequest request) {
+      if (request instanceof GetObjectRequest) {
+        long[] rangeValue = ((GetObjectRequest) request).getRange();
+        if (rangeValue == null || rangeValue.length == 0) {
+          return;
+        }
+        if (rangeValue.length != 2) {
+          WARN_INCORRECT_RANGE.warn("Expected range to contain 0 or 2 elements. Got "
+              + rangeValue.length + " elements. Ignoring");
+          return;
+        }
+        String combinedRangeValue = String.format("bytes=%d-%d", rangeValue[0], rangeValue[1]);

Review Comment:
   Yes, I think that makes sense.
   
   The only risk I see is if the object store uses a range specified in a unit other than bytes. In the V1 SDK the unit is hard-coded to `bytes`, so there's no issue there. In the V2 SDK, I believe you specify the header string itself, so we can warn once if we see anything without the `bytes=` prefix and ignore the header in that case; a rough sketch follows below. We are the ones specifying the header in the first place anyway, right? (FYI @ahmarsuhail @passaro)
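   
   For the V2 path, a minimal sketch of that guard, assuming the V2 `software.amazon.awssdk.services.s3.model.GetObjectRequest` (where `range()` returns the raw header string) and reusing `WARN_INCORRECT_RANGE` from this patch; `AuditConstants.PARAM_RANGE` is a placeholder name for the referrer key, not something defined in the snippet above:
   
   ```java
   /**
    * Sketch only: in the V2 SDK the range is the raw Range header
    * string, e.g. "bytes=0-100", so we validate the unit prefix ourselves.
    */
   private void attachRangeFromRequest(GetObjectRequest request) {
     String rangeHeader = request.range();
     if (rangeHeader == null || rangeHeader.isEmpty()) {
       return;
     }
     if (!rangeHeader.startsWith("bytes=")) {
       // warn once per process, then ignore non-byte ranges
       WARN_INCORRECT_RANGE.warn("Expected range header in bytes. Got {}. Ignoring",
           rangeHeader);
       return;
     }
     // strip the unit prefix so the referrer header stays compact
     referrer.set(AuditConstants.PARAM_RANGE,
         rangeHeader.substring("bytes=".length()));
   }
   ```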



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
