Akshay-A-Kulkarni commented on issue #36765: URL: https://github.com/apache/arrow/issues/36765#issuecomment-1642551901
> `fragment_readahead` and `batch_readahead` control how many files/row-groups to read at a time. `pre_buffer` controls how an individual row group is read. So these are separate properties. `pre_buffer` is probably always a good thing when reading from S3. However, when reading from local disk I think `pre_buffer` can sometimes lead to greater memory consumption. Is `pre_buffer=True` the default for `read_table`?

@westonpace With the limited understanding that I have: if memory consumption on the local filesystem is an issue, could we check the filesystem type in the `dataset()` call and enable `pre_buffer` for Parquet datasets only when it's S3? A sketch of what that might look like is below.
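To illustrate, here is a minimal user-side sketch using pyarrow's public dataset and filesystem APIs. The helper name `open_parquet_dataset` is hypothetical and this is not a proposed change to `dataset()` itself, just the conditional behaviour expressed in user code:

```python
import pyarrow.dataset as ds
import pyarrow.fs as fs


def open_parquet_dataset(uri: str) -> ds.Dataset:
    # Hypothetical helper: resolve the filesystem from the URI, then
    # enable pre_buffer only when the data lives on S3, where coalesced
    # range requests help; on local disk it may just raise memory use.
    filesystem, path = fs.FileSystem.from_uri(uri)

    scan_options = ds.ParquetFragmentScanOptions(
        pre_buffer=isinstance(filesystem, fs.S3FileSystem)
    )
    parquet_format = ds.ParquetFileFormat(
        default_fragment_scan_options=scan_options
    )
    return ds.dataset(path, filesystem=filesystem, format=parquet_format)


# Usage: readahead knobs stay independent of pre_buffer, as noted above.
# dataset = open_parquet_dataset("s3://my-bucket/my-dataset")
# table = dataset.scanner(fragment_readahead=4, batch_readahead=16).to_table()
```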
