[ 
https://issues.apache.org/jira/browse/HADOOP-18231?focusedWorklogId=772009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-772009
 ]

ASF GitHub Bot logged work on HADOOP-18231:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/May/22 16:17
            Start Date: 18/May/22 16:17
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on code in PR #4305:
URL: https://github.com/apache/hadoop/pull/4305#discussion_r876075971


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##########
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+    super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+
+    Configuration conf = getConfiguration();

Review Comment:
   comes in too late as the superclass will have created the fs already.
   
   override `createConfiguration()`, which is where the config to use is created.
   
   use `S3ATestUtils.removeBaseAndBucketOverrides()` to clear any per-bucket option before setting it.
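   
   a minimal sketch of that pattern, assuming the usual hadoop-aws test base conventions (which options to clear is up to the test):
   
```java
  @Override
  protected Configuration createConfiguration() {
    Configuration conf = super.createConfiguration();
    // clear any per-bucket override of the prefetch switch, then enable it
    // before the superclass creates the filesystem
    S3ATestUtils.removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
    conf.setBoolean(PREFETCH_ENABLED_KEY, true);
    return conf;
  }
```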



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##########
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+    super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;

Review Comment:
   superclass already creates an fs in setup, shuts it down safely in teardown, and uses it to clean up dest paths. no need to create a new one.
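   
   a short sketch of the implied change, under the assumption that the file under test lives on the same bucket as the base class filesystem; the private `fs` field goes away:
   
```java
  private void openFS() throws IOException {
    Configuration conf = getConfiguration();
    largeFile = new Path(DEFAULT_CSVTEST_FILE);
    blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
    // reuse the filesystem the superclass created in setup(); it is also
    // shut down and used for path cleanup in teardown
    FileStatus fileStatus = getFileSystem().getFileStatus(largeFile);
    largeFileSize = fileStatus.getLen();
  }
```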



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3PrefetchingInputStream.java:
##########
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.StoreStatisticNames;
+import org.apache.hadoop.fs.statistics.StreamStatisticNames;
+
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_DEFAULT_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.fs.s3a.Constants.PREFETCH_ENABLED_KEY;
+import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;
+
+/**
+ * Test the prefetching input stream, validates that the underlying S3CachingInputStream and
+ * S3InMemoryInputStream are working as expected.
+ */
+public class ITestS3PrefetchingInputStream extends AbstractS3ACostTest {
+
+  public ITestS3PrefetchingInputStream() {
+    super(true);
+  }
+
+  private static final int _1K = 1024;
+  // Path for file which should have length > block size so S3CachingInputStream is used
+  private Path largeFile;
+  private FileSystem fs;
+  private int numBlocks;
+  private int blockSize;
+  private long largeFileSize;
+  // Size should be < block size so S3InMemoryInputStream is used
+  private static final int smallFileSize = _1K * 16;
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+
+    Configuration conf = getConfiguration();
+    conf.setBoolean(PREFETCH_ENABLED_KEY, true);
+  }
+
+  private void openFS() throws IOException {
+    Configuration conf = getConfiguration();
+
+    largeFile = new Path(DEFAULT_CSVTEST_FILE);
+    blockSize = conf.getInt(PREFETCH_BLOCK_SIZE_KEY, PREFETCH_BLOCK_DEFAULT_SIZE);
+    fs = largeFile.getFileSystem(getConfiguration());
+    FileStatus fileStatus = fs.getFileStatus(largeFile);
+    largeFileSize = fileStatus.getLen();
+    numBlocks = (largeFileSize == 0) ?
+        0 :
+        ((int) (largeFileSize / blockSize)) + (largeFileSize % blockSize > 0 ? 1 : 0);
+  }
+
+  @Test
+  public void testReadLargeFileFully() throws Throwable {
+    describe("read a large file fully, uses S3CachingInputStream");
+    openFS();
+
+    try (FSDataInputStream in = fs.open(largeFile)) {
+      IOStatistics ioStats = in.getIOStatistics();
+
+      byte[] buffer = new byte[(int) largeFileSize];
+
+      in.read(buffer, 0, (int) largeFileSize);
+
+      verifyStatisticCounterValue(ioStats, StoreStatisticNames.ACTION_HTTP_GET_REQUEST, numBlocks);
+      verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_OPENED, numBlocks);
+    }
+  }
+
+  @Test
+  public void testRandomReadLargeFile() throws Throwable {
+    describe("random read on a large file, uses S3CachingInputStream");
+    openFS();
+
+    try (FSDataInputStream in = fs.open(largeFile)) {
+      IOStatistics ioStats = in.getIOStatistics();
+
+      byte[] buffer = new byte[blockSize];
+
+      // Don't read the block completely so it gets cached on seek
+      in.read(buffer, 0, blockSize - _1K * 10);
+      in.seek(blockSize + _1K * 10);
+      // Backwards seek, will use cached block
+      in.seek(_1K * 5);
+      in.read();
+
+      verifyStatisticCounterValue(ioStats, StoreStatisticNames.ACTION_HTTP_GET_REQUEST, 2);
+      verifyStatisticCounterValue(ioStats, StreamStatisticNames.STREAM_READ_OPENED, 2);
+    }
+  }
+
+  @Test
+  public void testRandomReadSmallFile() throws Throwable {
+    describe("random read on a small file, uses S3InMemoryInputStream");
+
+    byte[] data = ContractTestUtils.dataset(smallFileSize, 'a', 26);
+    Path smallFile = path("randomReadSmallFile");
+    ContractTestUtils.writeDataset(getFileSystem(), smallFile, data, data.length, 16, true);
+
+    try (FSDataInputStream in = getFileSystem().open(smallFile)) {

Review Comment:
   try using the openFile() call for consistency, as that's where we can add the option for switching to this on a per-stream basis, primarily for testing.
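   
   a hedged sketch of that call; `FutureIO.awaitFuture` is from `org.apache.hadoop.util.functional.FutureIO`, and since the per-stream prefetch option described above does not exist yet, none is passed:
   
```java
    // open via the builder API rather than the plain open() call
    try (FSDataInputStream in = FutureIO.awaitFuture(
        getFileSystem().openFile(smallFile).build())) {
      // same reads and statistics assertions as before; a per-stream
      // prefetch switch could later be supplied through .opt()
    }
```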



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AInputStreamPerformance.java:
##########
@@ -550,15 +551,15 @@ public void testRandomReadOverBuffer() throws Throwable {
     describe("read over a buffer, making sure that the requests" +
         " spans readahead ranges");
     int datasetLen = _32K;
-    S3AFileSystem fs = getFileSystem();
     Path dataFile = path("testReadOverBuffer.bin");
+    bindS3aFS(dataFile);

Review Comment:
   no, just set up filesystem config in createConfiguration()
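   
   same pattern as the earlier comment, sketched for this class; treating the readahead range as the option this test needs is an assumption:
   
```java
  @Override
  protected Configuration createConfiguration() {
    Configuration conf = super.createConfiguration();
    // clear per-bucket overrides, then set the readahead range the
    // buffer-spanning reads rely on
    S3ATestUtils.removeBaseAndBucketOverrides(conf, READAHEAD_RANGE);
    conf.setLong(READAHEAD_RANGE, 16 * 1024);
    return conf;
  }
```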





Issue Time Tracking
-------------------

    Worklog Id:     (was: 772009)
    Time Spent: 2h 10m  (was: 2h)

> tests in ITestS3AInputStreamPerformance are failing 
> ----------------------------------------------------
>
>                 Key: HADOOP-18231
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18231
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmar Suhail
>            Assignee: Ahmar Suhail
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The following tests are failing when prefetching is enabled:
> testRandomIORandomPolicy - expects the stream to be opened 4 times (once for
> every random read), but prefetching will only open it twice.
> testDecompressionSequential128K - expects the stream to be opened once, but
> prefetching will open it once per block. The landsat file used in the test
> is 42MB and the prefetching block size is 8MB, so the expected open count
> is 6.
> testReadWithNormalPolicy - same as above.
> testRandomIONormalPolicy - executes random IO, but with a normal policy.
> S3AInputStream will abort the stream and change the policy; prefetching
> handles random IO by caching blocks, so it does neither.
> testRandomReadOverBuffer - multiple assertions failing here; it also depends
> heavily on readahead values, which are not very relevant for prefetching.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
