[
https://issues.apache.org/jira/browse/IMPALA-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17954824#comment-17954824
]
ASF subversion and git services commented on IMPALA-13478:
----------------------------------------------------------
Commit 2a680b302e1c581deb4f7312323188d718c23bcb in impala's branch
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=2a680b302 ]
IMPALA-13478: Sync tuple cache files to disk asynchronously
When a tuple cache entry is first written, we want to
sync its contents to disk. Currently, that happens on the
fast path and delays the query results, sometimes significantly.
This moves the Sync() call off the fast path by passing
the work to a thread pool. The threads in the pool open
the file, sync it to disk, then close the file. If anything
goes wrong, the cache entry is evicted.
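As a rough illustration of the handoff, here is a minimal
sketch in plain C++ (not Impala's actual ThreadPool or cache
code; SyncPool and the eviction callback are assumptions):

    #include <fcntl.h>
    #include <unistd.h>

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    class SyncPool {
     public:
      SyncPool(int num_threads,
               std::function<void(const std::string&)> evict_cb)
          : evict_cb_(std::move(evict_cb)) {
        for (int i = 0; i < num_threads; ++i) {
          workers_.emplace_back([this] { WorkerLoop(); });
        }
      }

      ~SyncPool() {
        {
          std::lock_guard<std::mutex> l(mu_);
          done_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
      }

      // Called on the fast path: only enqueues, never touches the disk.
      void Submit(std::string path) {
        {
          std::lock_guard<std::mutex> l(mu_);
          queue_.push(std::move(path));
        }
        cv_.notify_one();
      }

     private:
      void WorkerLoop() {
        while (true) {
          std::string path;
          {
            std::unique_lock<std::mutex> l(mu_);
            cv_.wait(l, [this] { return done_ || !queue_.empty(); });
            if (done_ && queue_.empty()) return;
            path = std::move(queue_.front());
            queue_.pop();
          }
          // Open the file, sync it to disk, then close it.
          // Any failure evicts the cache entry.
          int fd = open(path.c_str(), O_WRONLY);
          bool ok = fd >= 0 && fsync(fd) == 0;
          if (fd >= 0) close(fd);
          if (!ok) evict_cb_(path);
        }
      }

      std::function<void(const std::string&)> evict_cb_;
      std::vector<std::thread> workers_;
      std::queue<std::string> queue_;
      std::mutex mu_;
      std::condition_variable cv_;
      bool done_ = false;
    };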
The tuple cache can generate writes very quickly, so this needs
a backpressure mechanism to avoid overwhelming the disk. In
particular, it needs to avoid accumulating dirty buffers to
the point that the OS throttles new writes, delaying the query
fast path. This implements a limit on outstanding writes (i.e.
writes that have not been flushed to disk). To enforce it,
writers now call UpdateWriteSize() to reserve space before
writing. UpdateWriteSize() can fail if it hits the limit on
outstanding writes or if this particular cache entry has hit
the maximum size. When it fails, the writer should abort writing
the cache entry.
Since UpdateWriteSize() updates the entry's charge in the
cache, outstanding writes are counted against the capacity
and can trigger evictions. This improves the tuple cache's
adherence to its capacity limit.
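A hedged sketch of the reservation logic (WriteBudget and the
exact UpdateWriteSize() signature are illustrative assumptions,
not the real TupleCacheMgr interface):

    #include <atomic>
    #include <cstdint>

    class WriteBudget {
     public:
      WriteBudget(int64_t outstanding_limit, int64_t max_entry_size)
          : outstanding_limit_(outstanding_limit),
            max_entry_size_(max_entry_size) {}

      // Reserve `bytes` more for an entry that is already `entry_size`
      // bytes large. Returns false if either the per-entry maximum or
      // the global outstanding-write limit would be exceeded, in which
      // case the writer should abort the cache entry.
      bool UpdateWriteSize(int64_t entry_size, int64_t bytes) {
        if (entry_size + bytes > max_entry_size_) return false;
        int64_t cur = outstanding_.load(std::memory_order_relaxed);
        do {
          if (cur + bytes > outstanding_limit_) return false;  // backpressure
        } while (!outstanding_.compare_exchange_weak(cur, cur + bytes));
        return true;
      }

      // Called once the background sync completes (or the entry is
      // evicted): the bytes are no longer outstanding.
      void ReleaseOutstanding(int64_t bytes) {
        outstanding_.fetch_sub(bytes, std::memory_order_relaxed);
      }

     private:
      const int64_t outstanding_limit_;
      const int64_t max_entry_size_;
      std::atomic<int64_t> outstanding_{0};
    };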
The outstanding write limit is configured via the
tuple_cache_outstanding_write_limit startup flag, which
accepts either a specific size string (e.g. 1GB) or a
percentage of the process memory limit. To avoid updating
the cache charge too frequently, updates happen in chunks
of the size specified by
tuple_cache_outstanding_write_chunk_bytes.
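For illustration, parsing a size-or-percentage flag value
might look like the sketch below (ParseLimit is a hypothetical
helper; a real size-string parser handles more suffixes):

    #include <cstdint>
    #include <string>

    // Returns the limit in bytes, interpreting a trailing '%' as a
    // percentage of the process memory limit.
    int64_t ParseLimit(const std::string& v, int64_t process_mem_limit) {
      if (!v.empty() && v.back() == '%') {
        double pct = std::stod(v.substr(0, v.size() - 1));
        return static_cast<int64_t>(process_mem_limit * pct / 100.0);
      }
      if (v.size() > 2 && v.compare(v.size() - 2, 2, "GB") == 0) {
        return std::stoll(v.substr(0, v.size() - 2)) * (1LL << 30);
      }
      if (v.size() > 2 && v.compare(v.size() - 2, 2, "MB") == 0) {
        return std::stoll(v.substr(0, v.size() - 2)) * (1LL << 20);
      }
      return std::stoll(v);  // bare byte count
    }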
This adds counters at the daemon level:
- outstanding write bytes
- number of writes halted due to backpressure
- number of sync calls that fail (due to IO errors)
- number of sync calls dropped due to queue backpressure
The runtime profile adds a NumTupleCacheBackpressureHalted
counter that is set when a write hits the outstanding write
limit.
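A minimal sketch of these counters as plain atomics (Impala's
actual metrics framework differs; the struct and field names
here are illustrative only):

    #include <atomic>
    #include <cstdint>

    struct TupleCacheSyncMetrics {
      std::atomic<int64_t> outstanding_write_bytes{0};
      std::atomic<int64_t> writes_halted_on_backpressure{0};
      std::atomic<int64_t> sync_failures{0};           // IO errors in fsync
      std::atomic<int64_t> syncs_dropped_on_queue{0};  // queue backpressure
    };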
This also adds a startup option that injects randomness into
the tuple cache keys, making it easy to test a scenario with
no cache hits, as sketched below.
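A sketch under the assumption that the option simply salts
each key (the helper name and flag wiring are hypothetical):

    #include <random>
    #include <string>

    // Appends random bytes to the key when the debug option is on,
    // so every lookup misses and the write path is exercised.
    std::string MaybeRandomizeKey(std::string key, bool randomize) {
      if (randomize) {
        static thread_local std::mt19937_64 rng{std::random_device{}()};
        key += "." + std::to_string(rng());
      }
      return key;
    }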
Testing:
- Added unit tests to tuple-cache-mgr-test
- Testing with TPC-DS on a cluster with fast NVMe SSDs showed
a significant improvement in first-run times due to the
asynchronous syncs.
- Testing with TPC-H on a system with a slow disk and zero
cache hits showed improved behavior thanks to the
backpressure mechanism.
Change-Id: I646bb56300656d8b8ac613cb8fe2f85180b386d3
Reviewed-on: http://gerrit.cloudera.org:8080/22215
Reviewed-by: Joe McDonnell <[email protected]>
Reviewed-by: Michael Smith <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>
> Don't sync tuple cache file contents to disk immediately
> --------------------------------------------------------
>
> Key: IMPALA-13478
> URL: https://issues.apache.org/jira/browse/IMPALA-13478
> Project: IMPALA
> Issue Type: Task
> Components: Backend
> Affects Versions: Impala 4.5.0
> Reporter: Joe McDonnell
> Assignee: Joe McDonnell
> Priority: Major
>
> Currently, the tuple cache file writer syncs the file contents to disk before
> closing the file. This slows down the write path considerably, especially if
> the disks are slow. This should be moved off the fast path and done
> asynchronously. As a first step, we can remove the sync call and close the
> file without syncing. Other readers are still able to access it, and the
> kernel will flush the buffers as needed.