sunchao commented on issue #2205: URL: https://github.com/apache/arrow-datafusion/issues/2205#issuecomment-1099675328
FWIW, within each Spark task, row groups are currently processed sequentially: for each row group, the reader reads all of the projected column chunks (with pages filtered via the column index), buffers them in memory, and then starts decompressing + decoding. For interacting with S3/HDFS/etc., it relies on Hadoop's [FileSystem](https://hadoop.apache.org/docs/r3.1.0/api/org/apache/hadoop/fs/FileSystem.html) API. @steveloughran is the expert here on the S3 client implementation.
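For illustration, here is a minimal sketch of that per-task read pattern, written against the parquet-mr API that Spark builds on. This is not Spark's actual reader code; the class name and file path are hypothetical, and only the sequential row-group loop and Hadoop `FileSystem`-backed I/O are meant to mirror the description above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.page.PageReadStore;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class SequentialRowGroupRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up fs.s3a.* / HDFS settings
    Path path = new Path(args[0]);                 // hypothetical, e.g. s3a://bucket/data.parquet

    // HadoopInputFile routes all reads through Hadoop's FileSystem API,
    // which is how Spark reaches S3/HDFS/etc.
    try (ParquetFileReader reader =
             ParquetFileReader.open(HadoopInputFile.fromPath(path, conf))) {
      PageReadStore rowGroup;
      // Row groups are consumed one at a time, in file order. With
      // readNextFilteredRowGroup() (parquet-mr 1.11+), pages are additionally
      // filtered via the column index before being buffered.
      while ((rowGroup = reader.readNextRowGroup()) != null) {
        // At this point the projected column chunks for this row group have
        // been fetched and buffered in memory; decompression + decoding
        // happen as the pages are consumed downstream.
        System.out.println("row group with " + rowGroup.getRowCount() + " rows");
      }
    }
  }
}
```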