[
https://issues.apache.org/jira/browse/HADOOP-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17926368#comment-17926368
]
ASF GitHub Bot commented on HADOOP-19354:
-----------------------------------------
ahmarsuhail commented on code in PR #7214:
URL: https://github.com/apache/hadoop/pull/7214#discussion_r1952602438
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1877,100 +1868,41 @@ private FSDataInputStream executeOpen(
fileInformation.applyOptions(readContext);
LOG.debug("Opening '{}'", readContext);
- if (this.prefetchEnabled) {
- Configuration configuration = getConf();
- initLocalDirAllocatorIfNotInitialized(configuration);
- return new FSDataInputStream(
- new S3APrefetchingInputStream(
- readContext.build(),
- createObjectAttributes(path, fileStatus),
- createInputStreamCallbacks(auditSpan),
- inputStreamStats,
- configuration,
- directoryAllocator));
- } else {
- return new FSDataInputStream(
- new S3AInputStream(
- readContext.build(),
- createObjectAttributes(path, fileStatus),
- createInputStreamCallbacks(auditSpan),
- inputStreamStats,
- new SemaphoredDelegatingExecutor(
- boundedThreadPool,
- vectoredActiveRangeReads,
- true,
- inputStreamStats)));
- }
- }
-
- /**
- * Override point: create the callbacks for S3AInputStream.
- * @return an implementation of the InputStreamCallbacks,
- */
- private S3AInputStream.InputStreamCallbacks createInputStreamCallbacks(
+ // what does the stream need
+ final StreamFactoryRequirements requirements =
+ getStore().factoryRequirements();
+
+ // calculate the permit count.
+ final int permitCount = requirements.streamThreads()
+ + requirements.vectoredIOContext().getVectoredActiveRangeReads();
+ // create an executor which is a subset of the
+ // bounded thread pool.
+ final SemaphoredDelegatingExecutor pool = new SemaphoredDelegatingExecutor(
+ boundedThreadPool,
+ permitCount,
+ true,
+ inputStreamStats);
+
+ // do not validate() the parameters as the store
+ // completes this.
+ ObjectReadParameters parameters = new ObjectReadParameters()
Review Comment:
@steveloughran just realised, in our internal integration, we used to do
`s3SeekableInputStreamFactory.createStream()` before the
`extractOrFetchSimpleFileStatus()` call in this `executeOpen()` method.
AAL has a metadata cache, so this ensured we didn't make repeated HEADs for
the same key. That matters (though I'm not sure of the perf impact) because
Spark opens the same file multiple times in a task: once to read the footer,
and again to read the column data. So S3A currently does at least 2 HEADs
per file by default.
Now that stream initialisation happens after
extractOrFetchSimpleFileStatus(), S3A issues the HEAD even though it's not
required, as the metadata is already in the AAL cache.
We should discuss what we can do here (maybe wire S3A up to AAL's metadata
cache regardless of which stream it's using?), and do it as a follow-up.
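The caching idea described above can be sketched as follows. This is illustrative only, with hypothetical names; it is not the AAL or S3A API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative only: a per-filesystem metadata cache so that repeated
// opens of the same key (e.g. Spark reading the footer, then the column
// data) issue a single HEAD. All names here are hypothetical.
class HeadCache {
  // key -> content length, standing in for a full file status
  private final Map<String, Long> statuses = new ConcurrentHashMap<>();

  // Return the cached status, invoking the HEAD request only on a miss.
  long status(String key, Function<String, Long> headRequest) {
    return statuses.computeIfAbsent(key, headRequest);
  }
}
```

A second open of the same key is then served from the cache, so the HEAD count for that key stays at one.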
> S3A: InputStreams to be created by factory under S3AStore
> ---------------------------------------------------------
>
> Key: HADOOP-19354
> URL: https://issues.apache.org/jira/browse/HADOOP-19354
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Migrate S3AInputStream creation into a factory pattern, push down into
> S3AStore.
> Proposed factories
> * default: whatever this release has as default
> * classic: current S3AInputStream
> * prefetch: prefetching
> * analytics: new analytics stream
> * other: reads a classname from another property and instantiates it.
> Also proposed
> * streams to implement a stream capability declaring what they are
> (classic, prefetch, analytics, other).
> h2. Implementation
> All callbacks used by the stream also to call directly onto S3AStore.
> S3AFileSystem must not be invoked at all (if it is still needed, the PR is
> not ready).
> Some interface from Instrumentation will be passed to the factory; this
> shall include a way to create new per-stream statistics.
> The factory shall implement org.apache.hadoop.service.Service; S3AStore shall
> do same and become a subclass of CompositeService. It shall attach the
> factory as a child, so they can follow the same lifecycle. We shall do the
> same for anything else that gets pushed down.
> Everything related to stream creation must move out of S3AFileSystem,
> including creation of the factory itself; this must be done in
> S3AStore.initialize().
> As usual, this will complicate mocking. But the streams themselves should
> not require changes, or at least not significant ones.
> Testing:
> * The huge file tests should be tuned so that each of them always uses a
> different stream.
> * Use -Dstream="factory name" to choose the factory, rather than the
> -Dprefetch option.
> * If not set, whatever is in auth-keys gets picked up.
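As a rough illustration of the factory proposal quoted above (stream type chosen by name, with a capability string declaring what it is), here is a minimal sketch. All names are hypothetical and do not match the actual Hadoop implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Locale;

// Minimal sketch of the proposed factory pattern; names are hypothetical.
interface StreamFactory {
  InputStream create(byte[] data);
  // The proposed stream capability: "classic", "prefetch", "analytics", ...
  String capability();
}

class ClassicFactory implements StreamFactory {
  public InputStream create(byte[] data) { return new ByteArrayInputStream(data); }
  public String capability() { return "classic"; }
}

// Stand-in for S3AStore: owns the factory and selects it by name,
// with "default" mapping to whatever the release ships as its default.
class Store {
  private final StreamFactory factory;

  Store(String streamType) {
    switch (streamType.toLowerCase(Locale.ROOT)) {
      case "classic":
      case "default":
        this.factory = new ClassicFactory();
        break;
      default:
        throw new IllegalArgumentException("unknown stream type: " + streamType);
    }
  }

  StreamFactory factory() { return factory; }
}
```

In the real design the factory would additionally implement org.apache.hadoop.service.Service and be attached as a child of the store's CompositeService, so both follow the same lifecycle.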
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]