[ https://issues.apache.org/jira/browse/KUDU-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17565190#comment-17565190 ]
ASF subversion and git services commented on KUDU-2671:
-------------------------------------------------------
Commit 8c8f393a4dc772a3dae2c14e59952ab1569884ec in kudu's branch
refs/heads/master from Alexey Serbin
[ https://gitbox.apache.org/repos/asf?p=kudu.git;h=8c8f393a4 ]
KUDU-2671 fix double-free in new scenario of scan_token-test
Changelist b746978c7 introduced a bug in the updated CountRowsSeq()
function -- the mistake was calling delete twice on KuduScanToken
pointers. That led to reading garbage data, so the newly introduced
test scenario ScanTokensWithCustomHashSchemasPerRange was failing if
enabled. I didn't pay enough attention to the updated test code when
reviewing it since the new test scenario was disabled in the original
patch b746978c7. This patch fixes the mistake.
Change-Id: I227f1dadf9d5b4d3b209570716cde7bda74c6b25
Reviewed-on: http://gerrit.cloudera.org:8080/18714
Tested-by: Alexey Serbin <[email protected]>
Reviewed-by: Khazar Mammadli <[email protected]>
Reviewed-by: Mahesh Reddy <[email protected]>
Reviewed-by: Abhishek Chennaka <[email protected]>
Reviewed-by: Attila Bukor <[email protected]>
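The bug class the commit describes is a helper that deletes raw pointers it does not own while the caller deletes them again. A minimal, self-contained sketch of that pattern and the ownership fix (all names here are illustrative stand-ins, not Kudu's actual test code; the real change is in scan_token-test, see the Gerrit review above):
{code:cpp}
// Self-contained sketch of the double-free pattern described above.
// 'Token' stands in for kudu::client::KuduScanToken; names are
// illustrative, not Kudu's actual test code.
#include <memory>
#include <vector>

struct Token {
  int CountRows() const { return 1; }  // stand-in for per-token row counting
};

// Buggy shape: the helper deletes raw pointers it does not own, while the
// caller (which allocated the tokens) deletes them again -- a double free.
int CountRowsSeqBuggy(const std::vector<Token*>& tokens) {
  int total = 0;
  for (Token* t : tokens) {
    total += t->CountRows();
    delete t;  // first delete; the caller's cleanup is the second
  }
  return total;
}

// Fixed shape: ownership is explicit, so each token is destroyed exactly
// once, when the owning vector goes out of scope.
int CountRowsSeqFixed(const std::vector<std::unique_ptr<Token>>& tokens) {
  int total = 0;
  for (const auto& t : tokens) {
    total += t->CountRows();
  }
  return total;
}

int main() {
  std::vector<std::unique_ptr<Token>> tokens;
  tokens.emplace_back(new Token);
  tokens.emplace_back(new Token);
  return CountRowsSeqFixed(tokens) == 2 ? 0 : 1;
}
{code}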
> Change hash number for range partitioning
> -----------------------------------------
>
> Key: KUDU-2671
> URL: https://issues.apache.org/jira/browse/KUDU-2671
> Project: Kudu
> Issue Type: Improvement
> Components: client, java, master, server
> Affects Versions: 1.8.0
> Reporter: yangz
> Assignee: Mahesh Reddy
> Priority: Major
> Labels: feature, roadmap-candidate, scalability
> Attachments: 屏幕快照 2019-01-24 下午12.03.41.png
>
>
> For our use case, the Kudu schema design isn't flexible enough.
> We create our tables with day-based range partitions, such as dt='20181112',
> much like Hive tables. But our data size varies a lot from day to day: one
> day may bring 50 GB, another 500 GB. This makes it hard to pick a fixed
> number of hash buckets. If the number is too large, it is wasteful on most
> days; if it is too small, performance suffers on days with a large amount
> of data.
>
> So we suggest allowing the number of hash buckets to be set per range
> partition, based on a table's historical data sizes. For example:
> # Create the schema with an initial estimated bucket count.
> # Collect the data size for each day range.
> # Create each new day-range partition with a bucket count derived from the
> collected sizes.
> We have used this feature for half a year, and it works well. We hope it
> will be useful to the community. The solution may not be complete, so
> please help us make it better.
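The request above maps to the per-range custom hash schema support developed under KUDU-2671. A hedged sketch with the C++ client, assuming the KuduRangePartition::add_hash_partitions() and KuduTableCreator::add_custom_range_partition() APIs from the KUDU-2671 patches (exact signatures may differ across versions; the table name, master address, column names, and bucket counts are illustrative assumptions):
{code:cpp}
// Hedged sketch: a table whose day-based range partitions each carry
// their own hash-bucket count, using the per-range hash schema API
// added under KUDU-2671 (exact signatures may differ by version).
#include <iostream>
#include <memory>
#include <string>
#include <vector>

#include "kudu/client/client.h"

using kudu::KuduPartialRow;
using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduRangePartition;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

// Local status-check helper so the sketch stays self-contained.
#define CHECK_OK(s) do { \
    const kudu::Status& _s = (s); \
    if (!_s.ok()) { std::cerr << _s.ToString() << std::endl; return 1; } \
  } while (0)

int main() {
  kudu::client::sp::shared_ptr<KuduClient> client;
  CHECK_OK(KuduClientBuilder()
               .add_master_server_addr("master-host:7051")  // assumed address
               .Build(&client));

  KuduSchemaBuilder b;
  b.AddColumn("dt")->Type(KuduColumnSchema::STRING)->NotNull();
  b.AddColumn("id")->Type(KuduColumnSchema::INT64)->NotNull();
  b.SetPrimaryKey({"dt", "id"});
  KuduSchema schema;
  CHECK_OK(b.Build(&schema));

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  creator->table_name("events")  // assumed table name
      .schema(&schema)
      .set_range_partition_columns({"dt"});

  // Small day (~50 GB): few hash buckets. The partition takes ownership
  // of its bound rows; the creator takes ownership of the partition.
  KuduPartialRow* lo1 = schema.NewRow();
  KuduPartialRow* hi1 = schema.NewRow();
  CHECK_OK(lo1->SetString("dt", "20181112"));
  CHECK_OK(hi1->SetString("dt", "20181113"));
  auto* small_day = new KuduRangePartition(lo1, hi1);
  CHECK_OK(small_day->add_hash_partitions({"id"}, /*num_buckets=*/4));
  creator->add_custom_range_partition(small_day);

  // Big day (~500 GB): more buckets under the same range schema.
  KuduPartialRow* lo2 = schema.NewRow();
  KuduPartialRow* hi2 = schema.NewRow();
  CHECK_OK(lo2->SetString("dt", "20181113"));
  CHECK_OK(hi2->SetString("dt", "20181114"));
  auto* big_day = new KuduRangePartition(lo2, hi2);
  CHECK_OK(big_day->add_hash_partitions({"id"}, /*num_buckets=*/32));
  creator->add_custom_range_partition(big_day);

  CHECK_OK(creator->Create());
  return 0;
}
{code}
The scan-token test scenario fixed by the commit above (ScanTokensWithCustomHashSchemasPerRange) exercises exactly this kind of table, where different ranges hash into different numbers of buckets.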