[ https://issues.apache.org/jira/browse/HADOOP-19645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18011766#comment-18011766 ]

ASF GitHub Bot commented on HADOOP-19645:
-----------------------------------------

anujmodi2021 commented on code in PR #7837:
URL: https://github.com/apache/hadoop/pull/7837#discussion_r2250353287


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java:
##########
@@ -326,8 +329,8 @@ fileSystemId, FSOperationType.CREATE_FILESYSTEM, tracingHeaderFormat, new Tracin
   }
 
  private void checkHeaderForRetryPolicyAbbreviation(String header, String expectedFailureReason, String expectedRetryPolicyAbbreviation) {
-    String[] headerContents = header.split(":");
-    String previousReqContext = headerContents[6];
+    String[] headerContents = header.split(":", SPLIT_NO_LIMIT);

Review Comment:
   Taken
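For context on the change taken above: Java's `String.split(String)` silently discards trailing empty strings, which shifts or drops later field indices when a colon-delimited header ends with empty slots. A minimal sketch, assuming `SPLIT_NO_LIMIT` is defined as `-1` (a negative limit tells `split` to keep trailing empties):

```java
public class SplitLimitDemo {
    // Assumed value; in the ABFS code this would come from a constants class.
    private static final int SPLIT_NO_LIMIT = -1;

    public static void main(String[] args) {
        String header = "id:v:p:op:::";
        // Default split discards trailing empty strings.
        System.out.println(header.split(":").length);                 // 4
        // A negative limit preserves them, so field indices stay stable.
        System.out.println(header.split(":", SPLIT_NO_LIMIT).length); // 7
    }
}
```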



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsInputStream.java:
##########
@@ -781,6 +794,132 @@ public void testDefaultReadaheadQueueDepth() throws Exception {
     in.close();
   }
 
+  @Test
+  public void testReadTypeInTracingContextHeader() throws Exception {
+    AzureBlobFileSystem spiedFs = Mockito.spy(getFileSystem());
+    AzureBlobFileSystemStore spiedStore = Mockito.spy(spiedFs.getAbfsStore());
+    AbfsConfiguration spiedConfig = Mockito.spy(spiedStore.getAbfsConfiguration());
+    AbfsClient spiedClient = Mockito.spy(spiedStore.getClient());
+    Mockito.doReturn(ONE_MB).when(spiedConfig).getReadBufferSize();
+    Mockito.doReturn(ONE_MB).when(spiedConfig).getReadAheadBlockSize();
+    Mockito.doReturn(spiedClient).when(spiedStore).getClient();
+    Mockito.doReturn(spiedStore).when(spiedFs).getAbfsStore();
+    Mockito.doReturn(spiedConfig).when(spiedStore).getAbfsConfiguration();
+    int numOfReadCalls = 0;
+    int fileSize = 0;
+
+    /*
+     * Test to verify Normal Read Type.
+     * Disabling read ahead ensures that read type is normal read.
+     */
+    fileSize = 3 * ONE_MB; // To make sure multiple blocks are read.
+    numOfReadCalls += 3; // 3 blocks of 1MB each.
+    doReturn(false).when(spiedConfig).isReadAheadV2Enabled();
+    doReturn(false).when(spiedConfig).isReadAheadEnabled();
+    testReadTypeInTracingContextHeaderInternal(spiedFs, fileSize, NORMAL_READ, numOfReadCalls);

Review Comment:
   Taken
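The test above sizes the file at 3 MB against a 1 MB read buffer so that exactly three read calls are expected. A small sketch of that arithmetic (the helper name is hypothetical, not part of the ABFS test code):

```java
public class ReadCallCount {
    static final int ONE_MB = 1024 * 1024;

    // Expected number of network read calls for a sequential read:
    // ceiling division of file size by the per-call buffer size.
    static int expectedReadCalls(long fileSize, int bufferSize) {
        return (int) ((fileSize + bufferSize - 1) / bufferSize);
    }

    public static void main(String[] args) {
        System.out.println(expectedReadCalls(3L * ONE_MB, ONE_MB));     // 3
        System.out.println(expectedReadCalls(3L * ONE_MB + 1, ONE_MB)); // 4
    }
}
```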





> ABFS: [ReadAheadV2] Improve Metrics for Read Calls to identify type of read 
> done.
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-19645
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19645
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.6, 3.4.1
>            Reporter: Anuj Modi
>            Assignee: Anuj Modi
>            Priority: Major
>              Labels: pull-request-available
>
> There are a number of ways in which the ABFS driver can trigger a network 
> call to read data. We need a way to identify what type of read call was made 
> from the client. The plan is to add an indication for this in the already 
> present ClientRequestId header.
> Following are the types of read we want to identify:
>  # Direct Read: Read from a given position in the remote file. This will be 
> a synchronous read.
>  # Normal Read: Read from the current seeked position where read ahead was 
> bypassed. This will be a synchronous read.
>  # Prefetch Read: Read triggered from background threads filling up the 
> in-memory cache. This will be an asynchronous read.
>  # Missed Cache Read: Read triggered after nothing was received from read 
> ahead. This will be a synchronous read.
>  # Footer Read: Read triggered as part of the footer read optimization. This 
> will be a synchronous read.
>  # Small File Read: Read triggered as part of the small file read 
> optimization. This will be a synchronous read.
> We will add another field in the Tracing Header (Client Request Id) for each 
> request. We can call this field the "Operation Specific Header", very similar 
> to how we have the "Retry Header" today. As part of this change we will use 
> it only for read operations, keeping it empty for all others. Moving ahead, 
> if we need to publish any other operation-specific info, the same header can 
> be used.
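A hypothetical sketch of what the operation-specific slot in a colon-delimited tracing header could look like. The enum values mirror the read types listed above, but the field position, names, and builder below are illustrative assumptions, not the actual ABFS implementation:

```java
import java.util.StringJoiner;

public class TracingHeaderSketch {
    // Read types from the issue description; names are assumed, the real
    // driver may use abbreviations.
    enum ReadType { DIRECT, NORMAL, PREFETCH, MISSED_CACHE, FOOTER, SMALL_FILE }

    // Colon-delimited header: client request id followed by an
    // operation-specific slot that stays empty for non-read operations.
    static String buildHeader(String clientRequestId, ReadType readType) {
        return new StringJoiner(":")
                .add(clientRequestId)
                .add(readType == null ? "" : readType.name())
                .toString();
    }

    public static void main(String[] args) {
        System.out.println(buildHeader("req-1", ReadType.PREFETCH)); // req-1:PREFETCH
        System.out.println(buildHeader("req-2", null));              // req-2:
    }
}
```

Keeping the slot present but empty for non-read operations means consumers can split on ":" with a no-limit split and always find the same number of fields.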



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
