[
https://issues.apache.org/jira/browse/HADOOP-19354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17913654#comment-17913654
]
ASF GitHub Bot commented on HADOOP-19354:
-----------------------------------------
steveloughran commented on PR #7214:
URL: https://github.com/apache/hadoop/pull/7214#issuecomment-2595139668
@rajdchak thanks for the comments, will address
I do want to pull up the vector IO support, integrating it with prefetching
and caching.
For a prefetching/caching stream we'd ask for the requested ranges to be split
up into:
1. ranges wholly in memory: satisfy immediately in the current thread
(or a copier thread?)
2. ranges with an active prefetch which will wholly satisfy the request:
wire up the prefetching so that as soon as the data arrives, the range gets it.
3. other ranges (not cached, not prefetched, or only partially in cache):
coalesce as needed, then retrieve, and notify the stream that these ranges are
being fetched, so there is no need to prefetch them
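A minimal sketch of that three-way split (all class and method names here are
illustrative stand-ins, not the actual S3A implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: classify requested ranges against the block cache
// and the set of in-flight prefetches. Names are illustrative only.
class RangeClassifier {

  /** A requested byte range [offset, offset + length). */
  record Range(long offset, long length) {
    boolean within(Range other) {
      return offset >= other.offset
          && offset + length <= other.offset + other.length;
    }
  }

  /** The three buckets the requested ranges are split into. */
  record Classification(
      List<Range> cached,      // wholly in memory: satisfy immediately
      List<Range> inFlight,    // an active prefetch will satisfy them
      List<Range> toFetch) {}  // coalesce as needed, then GET

  static Classification classify(
      List<Range> requested, List<Range> cache, List<Range> prefetching) {
    List<Range> cached = new ArrayList<>();
    List<Range> inFlight = new ArrayList<>();
    List<Range> toFetch = new ArrayList<>();
    for (Range r : requested) {
      if (cache.stream().anyMatch(r::within)) {
        cached.add(r);
      } else if (prefetching.stream().anyMatch(r::within)) {
        inFlight.add(r);
      } else {
        toFetch.add(r);
      }
    }
    return new Classification(cached, inFlight, toFetch);
  }
}
```

The real implementation would also have to handle ranges that only partially
overlap a cached or in-flight block; this sketch just drops those into the
fetch bucket.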
It'd be good to collect stats on cache hits/misses here, to assess the
integration of vector reads with cached/prefetched ranges. When a list of
ranges comes down, there is less need to infer the next range and prefetch it,
and I'm not actually sure how important caching becomes. This is why setting
Parquet up to use vector IO already appears to give speedups comparable to the
published analytics stream benchmarks.
What I want is the best of both worlds: prefetching of row groups from stream
inference, and, when vector reads come in, satisfying those by returning
current/active prefetches or retrieving new ranges through ranged GET requests.
#7105 is where that will go; I've halted that until this is in. And I'll only
worry about integration with prefetched/cached blocks for the analytics
stream.
> S3A: InputStreams to be created by factory under S3AStore
> ---------------------------------------------------------
>
> Key: HADOOP-19354
> URL: https://issues.apache.org/jira/browse/HADOOP-19354
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> Migrate S3AInputStream creation into a factory pattern, push down into
> S3AStore.
> Proposed factories
> * default: whatever this release has as default
> * classic: current S3AInputStream
> * prefetch: prefetching
> * analytics: new analytics stream
> * other: reads a classname from another property and instantiates it.
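A hedged sketch of how such a name-to-factory mapping could look; class,
option, and method names here are illustrative, not the actual S3A code:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Map;

// Illustrative registry of named stream factories. The real S3A factory
// classes, option names, and the choice of default may all differ.
class StreamFactoryRegistry {

  /** Hypothetical factory: creates an input stream for an object read. */
  interface StreamFactory {
    InputStream create();
  }

  // Streams here just echo their type name; real factories would create
  // the classic, prefetching, or analytics stream implementations.
  private static final Map<String, StreamFactory> FACTORIES =
      Map.<String, StreamFactory>of(
          "classic", () -> new ByteArrayInputStream("classic".getBytes()),
          "prefetch", () -> new ByteArrayInputStream("prefetch".getBytes()),
          "analytics", () -> new ByteArrayInputStream("analytics".getBytes()));

  static StreamFactory lookup(String name) {
    // Assumption for this sketch: "default" aliases the classic stream;
    // the actual default is whatever the release ships with.
    String key = "default".equals(name) ? "classic" : name;
    StreamFactory factory = FACTORIES.get(key);
    if (factory == null) {
      throw new IllegalArgumentException("Unknown stream factory: " + name);
    }
    return factory;
  }

  /** "other": read a classname from another property and instantiate it. */
  static StreamFactory forClassname(String classname)
      throws ReflectiveOperationException {
    return (StreamFactory) Class.forName(classname)
        .getDeclaredConstructor().newInstance();
  }
}
```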
> Also proposed:
> * streams to implement a stream capability to declare what they are
> (classic, prefetch, analytics, other).
> h2. Implementation
> All callbacks used by the stream also to call directly onto S3AStore.
> S3AFileSystem must not be invoked at all (if it is needed, the PR is not yet
> ready).
> Some interface from Instrumentation will be passed to the factory; this shall
> include a way to create new per-stream statistics.
> The factory shall implement org.apache.hadoop.service.Service; S3AStore shall
> do same and become a subclass of CompositeService. It shall attach the
> factory as a child, so they can follow the same lifecycle. We shall do the
> same for anything else that gets pushed down.
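As a rough sketch of that parent/child lifecycle, using simplified stand-ins
that only mimic the shape of org.apache.hadoop.service.Service and
CompositeService (this is not the real API):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for the Hadoop service classes, illustrating only
// how a composite parent propagates lifecycle events to attached children.
class ServiceLifecycleSketch {

  interface Service {
    void init();
    void start();
    void stop();
  }

  /** Parent service that forwards lifecycle transitions to its children. */
  static class CompositeService implements Service {
    private final List<Service> children = new ArrayList<>();

    void addService(Service child) {
      children.add(child);
    }

    public void init()  { children.forEach(Service::init); }
    public void start() { children.forEach(Service::start); }
    public void stop()  { children.forEach(Service::stop); }
  }

  /** Hypothetical stream factory, run as a child service. */
  static class StreamFactoryService implements Service {
    String state = "new";
    public void init()  { state = "inited"; }
    public void start() { state = "started"; }
    public void stop()  { state = "stopped"; }
  }

  /** Store as a composite: attaches the factory so lifecycles track. */
  static class Store extends CompositeService {
    final StreamFactoryService factory = new StreamFactoryService();
    Store() {
      addService(factory);
    }
  }
}
```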
> Everything related to stream creation must move out of S3AFileSystem,
> including creation of the factory itself; this must be done in
> S3AStore.initialize().
> As usual, this will complicate mocking. But the streams themselves should not
> require changes, at least significant ones.
> h2. Testing
> * The huge file tests should be tuned so each of the different ones uses a
> different stream, always.
> * use -Dstream="factory name" to choose the factory, rather than -Dprefetch
> * if not set, whatever is in auth-keys gets picked up.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]