drin commented on issue #13827: URL: https://github.com/apache/arrow/issues/13827#issuecomment-1209792098
> maybe not. there will be 1000 files, and we may have 1M such files. it brings more disk IOs, file open requests, overheads on each column, and complexity to maintain the data.

Just for clarity, I meant you should group columns into files according to how you access them. In this case that would be 10 columns per file, which also lets you fit more rows per batch in the same footprint, improving your useful throughput.

But the point of that was just that any other layout is going to have some inefficiencies related to "partial reads", as Weston mentioned, or some form of having to access extents that contain data for other columns. Since I don't know the exact use case, I agree this may not actually improve performance across various use cases.
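
To illustrate what I mean by grouping columns by access pattern, here is a minimal sketch with pyarrow. The table, the column names, the `hot_columns` list, and the output file names are all hypothetical placeholders, not part of your actual schema:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical example: a 100-column table where only 10 columns
# are ever read together.
table = pa.table({f"col{i}": range(1000) for i in range(100)})
hot_columns = [f"col{i}" for i in range(10)]

# Write the frequently co-accessed columns to their own file, so reading
# those 10 columns touches only this file (no extents holding other columns).
pq.write_table(table.select(hot_columns), "hot_columns.parquet")

# The remaining columns go to a second file.
cold_columns = [name for name in table.column_names if name not in hot_columns]
pq.write_table(table.select(cold_columns), "cold_columns.parquet")
```

Whether splitting like this is a win depends entirely on how stable the access pattern is; if queries regularly span both groups, you reintroduce the extra file opens you're trying to avoid.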
