steveloughran commented on PR #50765:
URL: https://github.com/apache/spark/pull/50765#issuecomment-3896714528
s3a & friends can skip a HEAD request on file open, and just do ranged GET
calls on the reads.
* s3a: any `? extends FileStatus`, or the length option. It validates the filename *but not the path*, so wrapped filestatuses are fine.
* abfs: `org.apache.hadoop.fs.azurebfs.services.VersionedFileStatus` only (it wants the etag and encryption info).
I've not tracked the others. If you already have a status from a listing, passing it in looks like the sketch below.
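A minimal sketch (assuming `fs` is an existing `FileSystem` and `stat` is a `FileStatus` from a listing):
```
import org.apache.hadoop.fs.FSDataInputStream;
import static org.apache.hadoop.util.functional.FutureIO.awaitFuture;

// reuse the FileStatus from the listing so the store can skip its own
// HEAD/getFileStatus call when opening the file
FSDataInputStream in = awaitFuture(
    fs.openFile(stat.getPath())
        .withFileStatus(stat)
        .build());
```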
For s3a you don't even need to create/pass a filestatus, just do:
```
in = fs.openFile(path)
    .optLong("fs.option.openfile.length", length)
    .build()
    .get(); // it's an async HEAD if you don't pass in length/status
```
There are other options for the split start/end and the read policy/format; see `org.apache.hadoop.fs.Options.OpenFileOptions`.
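A sketch of those, using the constant names from that class (the `splitStart`/`splitEnd` values and the `"random"` policy are just placeholders here):
```
import static org.apache.hadoop.fs.Options.OpenFileOptions.FS_OPTION_OPENFILE_READ_POLICY;
import static org.apache.hadoop.fs.Options.OpenFileOptions.FS_OPTION_OPENFILE_SPLIT_END;
import static org.apache.hadoop.fs.Options.OpenFileOptions.FS_OPTION_OPENFILE_SPLIT_START;

// declare the split range and a read policy when opening the file;
// opt() settings are hints, so stores which don't recognise them ignore them
in = fs.openFile(path)
    .optLong(FS_OPTION_OPENFILE_SPLIT_START, splitStart)
    .optLong(FS_OPTION_OPENFILE_SPLIT_END, splitEnd)
    .opt(FS_OPTION_OPENFILE_READ_POLICY, "random")
    .build()
    .get();
```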
The Parquet library uses this now, as it already calls `getFileStatus()` to get the length and we know it reads in the "parquet" format :)
```
final CompletableFuture<FSDataInputStream> future = fs.openFile(stat.getPath())
    .withFileStatus(stat)
    .opt(OPENFILE_READ_POLICY_KEY, PARQUET_READ_POLICY)
    .build();
stream = awaitFuture(future);
```
Can save 100+ millis, one HEAD request, and a small amount of money when opening every file.
No benefit for HDFS, I'm afraid; you'll have to ask the HDFS devs to do some work there.