westonpace commented on PR #34281:
URL: https://github.com/apache/arrow/pull/34281#issuecomment-1439075520

   > Also FYI I think [DuckDB defaults to 100,000](https://github.com/Mytherin/duckdb/commit/3b8ad037bff978b263fd06ec9d0635fcb049e92a#diff-a95d5e017c81184e18f0f04c5df3b72061fd80555d581a4ce163af5deca3dac0R394) (unless I read their source wrong earlier).
   
   100k is probably workable, but I don't think it's ideal. 100k rows in an int32 column means, at most (i.e. no encodings or compression), roughly 400 KB (~390 KiB) per column. If you are reading scattershot from a file (e.g. 12 out of 100 columns), that starts to degrade performance on HDD (and probably SSD as well) and on S3, though it would be fine on NVMe or when the file is already hot in memory.
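
   To make that arithmetic concrete, here is a rough sketch with pyarrow (the file name and sizes are illustrative, and `row_group_size` is simply the knob pyarrow's writer exposes, not a recommendation for what the default should be):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Back-of-envelope sizing for a 100k-row row group of int32 values,
# assuming no encoding or compression.
rows = 100_000
per_column_bytes = rows * 4              # 400,000 bytes, ~390 KiB per column
touched_bytes = 12 * per_column_bytes    # ~4.8 MB for a 12-of-100 column scan
print(per_column_bytes, touched_bytes)

# The row group size can be overridden per write; pq.write_table accepts
# a row_group_size argument (the file name here is illustrative).
table = pa.table({"x": pa.array(range(rows), type=pa.int32())})
pq.write_table(table, "example.parquet", row_group_size=1_000_000)
```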
   
   That being said, no single default is going to work for all cases (100k is better for single-row reads, for example). I personally think it would be more useful to make scanners more robust against large row groups (e.g. pyarrow's scanner could be improved) by supporting reads at the page level instead of the row group level (though I'm still not 100% convinced that's possible). So at the moment I'm still leaning towards 1Mi, but I could be convinced otherwise.
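
   For context on the current granularity: pyarrow's ParquetFile API reads at the row group level via read_row_group, so a scanner that only needs a slice of rows still materializes a whole row group's worth of the selected columns. A small sketch (the file and column names carry over from the example above):

```python
import pyarrow.parquet as pq

# Inspect the row group layout, then read one full row group of the
# selected columns; this row-group-level granularity is why very large
# row groups hurt selective scans.
pf = pq.ParquetFile("example.parquet")
print(pf.metadata.num_row_groups, pf.metadata.row_group(0).num_rows)
tbl = pf.read_row_group(0, columns=["x"])
print(tbl.num_rows)
```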

