[
https://issues.apache.org/jira/browse/ARROW-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17429924#comment-17429924
]
Antoine Pitrou commented on ARROW-14354:
----------------------------------------
{quote}Right now, the parquet format is doing its reads on the CPU thread pool
(a potentially separate problem)
{quote}
Hmm, ok, so reducing the IO thread pool size wouldn't fix this particular issue
(of Parquet performance), right?
{quote}we probably don't need very many threads for a local filesystem.
{quote}
That's also my intuition. Might be worth checking the policy used by Postgres,
MariaDB and other well-tuned database engines.
As a semi-separate thought, for local filesystem access we may want to first
try a non-blocking read on the current thread before deferring to the IO
thread. That would avoid some thread-synchronization latency when the data is
already available, but might add a bit of overhead when the non-blocking read fails.
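A minimal sketch of that idea (hypothetical names; Arrow's actual APIs differ): attempt the read inline on the calling thread, and only hand off to the IO executor when the inline attempt would block. On Linux the non-blocking attempt could be implemented with {{preadv2}} and {{RWF_NOWAIT}}; here it is abstracted behind a callback.

```cpp
#include <functional>
#include <future>
#include <optional>
#include <string>
#include <utility>

// TryRead returns the data if it was available without blocking
// (e.g. the page cache already holds it), or nullopt otherwise.
using TryRead = std::function<std::optional<std::string>()>;

// SubmitToIoPool schedules a (possibly blocking) read on the IO executor.
using SubmitToIoPool = std::function<std::future<std::string>()>;

std::future<std::string> ReadPreferInline(TryRead try_read,
                                          SubmitToIoPool submit) {
  if (auto data = try_read()) {
    // Fast path: data was already available; satisfy the future
    // immediately with no cross-thread handoff.
    std::promise<std::string> p;
    p.set_value(std::move(*data));
    return p.get_future();
  }
  // Slow path: defer to the IO thread pool.
  return submit();
}
```

The trade-off mentioned above shows up as the wasted {{try_read}} call on the slow path.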
{quote}For datasets, the IOContext (and thus the IO executor) is currently
passed in via scan options. Should this be obtained from the filesystem instead?
{quote}
Hmm, ideally the user should be able to override the IO context _but_ the
default IO context (if not overridden) should be filesystem-decided. Perhaps we
need to pass {{nullptr}} to mean "use the default" (is that already the case?).
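In other words (a toy sketch with made-up names, not Arrow's real {{IOContext}}/{{FileSystem}} types): a null override resolves to whatever default the filesystem considers sensible.

```cpp
// Illustrative stand-ins only; Arrow's actual classes differ.
struct IOContext {
  int io_pool_size;
};

struct FileSystem {
  // e.g. a local filesystem might default to a small pool,
  // an S3 filesystem to a much larger one.
  virtual IOContext default_io_context() const = 0;
  virtual ~FileSystem() = default;
};

struct LocalFileSystem : FileSystem {
  IOContext default_io_context() const override { return IOContext{8}; }
};

// The proposed convention: nullptr means "use the filesystem's default".
IOContext ResolveIOContext(const FileSystem& fs,
                           const IOContext* user_override) {
  return user_override != nullptr ? *user_override
                                  : fs.default_io_context();
}
```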
> [C++] Investigate reducing I/O thread pool size to avoid CPU wastage.
> ---------------------------------------------------------------------
>
> Key: ARROW-14354
> URL: https://issues.apache.org/jira/browse/ARROW-14354
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++
> Reporter: Weston Pace
> Priority: Major
>
> If we are reading over HTTP (e.g. S3) we generally want high parallelism in
> the I/O thread pool.
> If we are reading from disk then high parallelism is usually harmless but
> ineffective. Most of the I/O threads will spend their time in a waiting
> state and the cores can be used for other work.
> However, it appears that when we are reading locally, and the data is cached
> in memory, then having too much parallelism will be harmful, but some
> parallelism is beneficial. Once the DRAM <-> CPU bandwidth limit is hit then
> all reading threads will experience high DRAM latency. Unlike an I/O
> bottleneck, a RAM bottleneck will waste cycles on the physical core.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)