[
https://issues.apache.org/jira/browse/HADOOP-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17916020#comment-17916020
]
ASF GitHub Bot commented on HADOOP-19354:
-----------------------------------------
ahmarsuhail commented on code in PR #7214:
URL: https://github.com/apache/hadoop/pull/7214#discussion_r1925156781
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1877,100 +1859,43 @@ private FSDataInputStream executeOpen(
fileInformation.applyOptions(readContext);
LOG.debug("Opening '{}'", readContext);
- if (this.prefetchEnabled) {
- Configuration configuration = getConf();
- initLocalDirAllocatorIfNotInitialized(configuration);
- return new FSDataInputStream(
- new S3APrefetchingInputStream(
- readContext.build(),
- createObjectAttributes(path, fileStatus),
- createInputStreamCallbacks(auditSpan),
- inputStreamStats,
- configuration,
- directoryAllocator));
- } else {
- return new FSDataInputStream(
- new S3AInputStream(
- readContext.build(),
- createObjectAttributes(path, fileStatus),
- createInputStreamCallbacks(auditSpan),
- inputStreamStats,
- new SemaphoredDelegatingExecutor(
- boundedThreadPool,
- vectoredActiveRangeReads,
- true,
- inputStreamStats)));
- }
- }
-
- /**
- * Override point: create the callbacks for S3AInputStream.
- * @return an implementation of the InputStreamCallbacks,
- */
- private S3AInputStream.InputStreamCallbacks createInputStreamCallbacks(
+ // what does the stream need
+ final StreamThreadOptions requirements =
+ getStore().threadRequirements();
+
+ // calculate the permit count.
+ final int permitCount = requirements.streamThreads() +
+ (requirements.vectorSupported()
+ ? vectoredActiveRangeReads
+ : 0);
+ // create an executor which is a subset of the
+ // bounded thread pool.
+ final SemaphoredDelegatingExecutor pool = new SemaphoredDelegatingExecutor(
Review Comment:
OK, I think I get it: this is basically a way to ensure a single stream
instance does not use up too many threads from the shared bounded pool.
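
A minimal sketch of what the quoted code achieves, based on the removed
call visible above (the diff is cut off mid-call, so the trailing
constructor arguments here are an assumption):

    // permits = the stream's own thread needs, plus the vectored-read
    // cap if, and only if, this stream type supports vectored IO.
    final int permitCount = requirements.streamThreads()
        + (requirements.vectorSupported() ? vectoredActiveRangeReads : 0);
    // A delegating view of the shared bounded pool: the stream may queue
    // work, but can never hold more than permitCount threads at once.
    final SemaphoredDelegatingExecutor pool = new SemaphoredDelegatingExecutor(
        boundedThreadPool, permitCount, true, inputStreamStats);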
> S3A: InputStreams to be created by factory under S3AStore
> ---------------------------------------------------------
>
> Key: HADOOP-19354
> URL: https://issues.apache.org/jira/browse/HADOOP-19354
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Migrate S3AInputStream creation to a factory pattern and push it down
> into S3AStore.
> Proposed factories:
> * default: whatever this release has as its default
> * classic: the current S3AInputStream
> * prefetch: the prefetching stream
> * analytics: the new analytics stream
> * other: reads a classname from another property and instantiates it.
> Also proposed:
> * streams to implement some stream capability to declare what they are
> (classic, prefetch, analytics, other); see the sketch below.
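> A minimal sketch of that selection logic (all property, class and method
> names here are illustrative assumptions, not a committed API):
>
>   // assumes org.apache.hadoop.conf.Configuration and
>   // org.apache.hadoop.util.ReflectionUtils
>   InputStreamFactory factoryFor(Configuration conf) {
>     String name = conf.getTrimmed("fs.s3a.input.stream.type", "default");
>     switch (name) {
>     case "classic":   return new ClassicInputStreamFactory();
>     case "prefetch":  return new PrefetchingInputStreamFactory();
>     case "analytics": return new AnalyticsInputStreamFactory();
>     case "other":
>       // reads a classname from another property and instantiates it
>       return ReflectionUtils.newInstance(
>           conf.getClass("fs.s3a.input.stream.factory.class", null,
>               InputStreamFactory.class), conf);
>     default:
>       return new ClassicInputStreamFactory(); // this release's default
>     }
>   }
>
> A stream created by such a factory could then answer
> hasCapability("fs.s3a.stream.classic") etc. via StreamCapabilities (the
> capability strings are equally hypothetical).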
> h2. Implementation
> All callbacks used by the stream are also to call directly onto S3AStore.
> S3AFileSystem must not be invoked at all (if it is needed, the PR is not
> yet ready).
> Some interface from Instrumentation will be passed to the factory; this
> shall include a way to create new per-stream statistics.
> The factory shall implement org.apache.hadoop.service.Service; S3AStore
> shall do the same and become a subclass of CompositeService. It shall
> attach the factory as a child, so they follow the same lifecycle. We
> shall do the same for anything else that gets pushed down.
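> A sketch of that lifecycle wiring, assuming the S3AStore implementation
> class extends org.apache.hadoop.service.CompositeService (the field and
> helper names are illustrative):
>
>   public class S3AStoreImpl extends CompositeService {
>     private ObjectInputStreamFactory factory;
>
>     public S3AStoreImpl() {
>       super("S3AStore");
>     }
>
>     @Override
>     protected void serviceInit(Configuration conf) throws Exception {
>       // create the factory from configuration, then attach it as a
>       // child so it follows this service's init/start/stop lifecycle
>       factory = createStreamFactory(conf);  // hypothetical helper
>       addService(factory);
>       super.serviceInit(conf);  // initializes all attached children
>     }
>   }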
> Everything related to stream creation must be removed from S3AFileSystem,
> including creation of the factory itself; that must be done in
> S3AStore.initialize().
> As usual, this will complicate mocking, but the streams themselves should
> not require changes, at least not significant ones.
> h2. Testing
> * The huge file tests should be tuned so that each of the different tests
> always uses a different stream.
> * Use -Dstream="factory name" to choose the factory, rather than the
> existing -Dprefetch option.
> * If not set, whatever is configured in auth-keys.xml gets picked up.
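> For example, to run the tests against the prefetching factory (reusing
> the usual hadoop-aws invocation; only -Dstream comes from this proposal):
>
>   mvn verify -Dparallel-tests -DtestsThreadCount=8 -Dstream=prefetch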
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]