[ https://issues.apache.org/jira/browse/IMPALA-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17894380#comment-17894380 ]

ASF subversion and git services commented on IMPALA-13497:
----------------------------------------------------------

Commit 1267fde57b16755956e6a710bbc0543a61249d92 in impala's branch 
refs/heads/master from Joe McDonnell
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=1267fde57 ]

IMPALA-13497: Add TupleCacheBytesWritten/Read to the profile

This adds counters for the number of bytes written to and read
from the tuple cache. They give visibility into whether certain
locations have enormous result sizes, and will be used to tune
the placement of tuple cache nodes.
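
As a rough illustration of what such a counter pair tracks (a sketch only,
not Impala's actual C++ backend code; everything except the counter names
TupleCacheBytesWritten/Read is hypothetical):

```python
# Hypothetical model of per-node tuple cache byte counters.
# Only the counter names come from the commit; the class and
# method names are illustrative.
class TupleCacheCounters:
    def __init__(self):
        self.bytes_written = 0  # TupleCacheBytesWritten
        self.bytes_read = 0     # TupleCacheBytesRead

    def record_write(self, num_bytes):
        """Accumulate bytes written when a cache entry is stored."""
        self.bytes_written += num_bytes

    def record_read(self, num_bytes):
        """Accumulate bytes read on a cache hit."""
        self.bytes_read += num_bytes

    def profile_lines(self):
        # Render the counters roughly the way they might appear
        # in a query profile.
        return [
            f"TupleCacheBytesWritten: {self.bytes_written}",
            f"TupleCacheBytesRead: {self.bytes_read}",
        ]
```

Cumulative counters like these make it cheap to spot plan locations whose
cached results are unexpectedly large.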

Tests:
 - Added checks of the TupleCacheBytesWritten/Read counters
   to existing tests in test_tuple_cache.py

Change-Id: Ib5c9249049d8d46116a65929896832d02c2d9f1f
Reviewed-on: http://gerrit.cloudera.org:8080/21991
Reviewed-by: Yida Wu <[email protected]>
Reviewed-by: Michael Smith <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>


> Add profile counters for bytes written / read from the tuple cache
> ------------------------------------------------------------------
>
>                 Key: IMPALA-13497
>                 URL: https://issues.apache.org/jira/browse/IMPALA-13497
>             Project: IMPALA
>          Issue Type: Task
>          Components: Backend
>    Affects Versions: Impala 4.5.0
>            Reporter: Joe McDonnell
>            Assignee: Joe McDonnell
>            Priority: Major
>             Fix For: Impala 4.5.0
>
>
> The size of the tuple cache entries written and read is useful information 
> for understanding the performance of the cache. Having this information in 
> the profile will help us tune the placement policy for the tuple cache nodes.
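
A profile check along the lines of the tests mentioned above might look like
this (a hypothetical sketch; `get_counter` and the profile text format are
assumptions, not the actual helpers in test_tuple_cache.py):

```python
import re

def get_counter(profile_text, name):
    """Extract an integer counter value (e.g. TupleCacheBytesWritten)
    from query profile text. Returns None if the counter is absent.
    The 'Name: value' layout is an assumed, simplified profile format."""
    match = re.search(rf"{name}:\s*(\d+)", profile_text)
    return int(match.group(1)) if match else None

# Example: on a cache write with no hits, bytes written should be
# positive and bytes read zero.
profile = "TupleCacheBytesWritten: 4096\nTupleCacheBytesRead: 0"
assert get_counter(profile, "TupleCacheBytesWritten") == 4096
assert get_counter(profile, "TupleCacheBytesRead") == 0
```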



