GitHub user xuchuanyin opened a pull request:
https://github.com/apache/carbondata/pull/2574
[HotFix][CARBONDATA-2788][BloomDataMap] Fix bugs in incorrect query result
with bloom datamap
This PR solves three problems that affect the correctness of queries
using the bloom datamap.
1. Revert the blockletId optimization in datamap rebuild
After reviewing the code, we found that the modification made in PR2539
is not needed, so we revert that PR.
2. Fix overflow bug in the blocklet count
CarbonData stores the blocklet count for each block in a byte data
type; when a block contains more than 127 blocklets (the maximum value
of a signed byte), the count overflows.
Here we change the data type to short.
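The overflow described above can be reproduced with a minimal sketch
(not CarbonData code; the variable names are illustrative only),
showing why a byte cannot hold a count above 127 while a short can:

```java
public class ByteOverflowDemo {
    public static void main(String[] args) {
        // A block with more blocklets than a signed byte can represent.
        int blockletCount = 130;

        // Narrowing to byte wraps around: 130 becomes -126.
        byte countAsByte = (byte) blockletCount;

        // A short (max 32767) preserves the count.
        short countAsShort = (short) blockletCount;

        System.out.println(countAsByte);  // corrupted value
        System.out.println(countAsShort); // correct value
    }
}
```

A negative blocklet count then silently corrupts any downstream logic
that iterates blocklets per block, which is why widening the stored
type is the fix.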
3. Fix bug in querying with bloom datamap when block cache level is enabled
In the block cache level scenario, the main BlockDataMap previously
returned the whole block as a pruned blocklet with blockletId=-1,
while other index datamaps such as BloomDataMap returned actual
blocklets with correct blockletIds. Due to the behaviour of Blocklet's
hashcode, some blocklets were incorrectly marked as duplicates and
dropped, causing incorrect query results.
To fix this, we now return all blocklets with their correct
blockletIds for the block instead of returning a fake blocklet with
blockletId=-1.
This does not affect the subsequent procedure.
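The deduplication problem above can be sketched as follows. This is a
simplified illustration, not CarbonData's actual Blocklet class: it
assumes equality and hashcode are driven by the file path alone, so a
fake blocklet (id=-1) collides with the real blocklets of the same
block when results are merged through a hash-based set:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for a pruned blocklet whose equals/hashCode
// ignore the blocklet id, mimicking the behaviour described in the PR.
class FakeBlocklet {
    final String filePath;
    final int blockletId;

    FakeBlocklet(String filePath, int blockletId) {
        this.filePath = filePath;
        this.blockletId = blockletId;
    }

    @Override
    public boolean equals(Object o) {
        // Equality by file path only: ids -1, 0, 1 all compare equal.
        return o instanceof FakeBlocklet
                && ((FakeBlocklet) o).filePath.equals(filePath);
    }

    @Override
    public int hashCode() {
        return filePath.hashCode();
    }
}

public class DedupDemo {
    public static void main(String[] args) {
        Set<FakeBlocklet> pruned = new HashSet<>();
        // Main BlockDataMap: whole block as a fake blocklet, id=-1.
        pruned.add(new FakeBlocklet("part-0.carbondata", -1));
        // BloomDataMap: actual blocklets with real ids. They are
        // treated as duplicates of the fake entry and dropped.
        pruned.add(new FakeBlocklet("part-0.carbondata", 0));
        pruned.add(new FakeBlocklet("part-0.carbondata", 1));

        System.out.println(pruned.size()); // only one entry survives
    }
}
```

Returning real blocklets with correct ids from the main datamap, as
this PR does, makes the merged entries genuinely distinct, so no
blocklet is lost.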
Be sure to do all of the following checklist to help us incorporate
your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
Please provide details on
- Whether new unit test cases have been added or why no new tests
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance
test report.
- Any additional information to help reviewers in testing this
change.
- [ ] For large changes, please consider breaking it into sub-tasks under
an umbrella JIRA.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xuchuanyin/carbondata
0728_fix_bug_query_bloom_opt
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/2574.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #2574
----
commit cba17975affad343d555f49cb255043b556310f7
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-07-27T13:13:51Z
Fix bugs in overflow for blocklet count
CarbonData stores the blocklet count for each block in a byte data
type; when a block contains more than 127 blocklets (the maximum value
of a signed byte), the count overflows.
Here we change the data type to short.
commit 7e41d9533dae86ab26e102a0342c128cbab03d1c
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-07-28T07:37:38Z
Revert optimize blockletId in rebuilding datamap
We found that querying huge data with a rebuilt bloom datamap gave
incorrect results. The root cause is that the blockletId in the
ResultCollector was wrong (this was introduced in PR2539).
We revert the previous modification. It has been checked and now
works fine.
commit e418018e05cb0a15c996c5bb58debb0486252f84
Author: xuchuanyin <xuchuanyin@...>
Date: 2018-07-28T09:08:52Z
Fix bug in querying with bloom datamap with block cache level enabled
In the block cache level scenario, the main BlockDataMap previously
returned the whole block as a pruned blocklet with blockletId=-1,
while other index datamaps such as BloomDataMap returned actual
blocklets with correct blockletIds. Due to the behaviour of Blocklet's
hashcode, some blocklets were incorrectly marked as duplicates and
dropped, causing incorrect query results.
To fix this, we now return all blocklets with their correct
blockletIds for the block instead of returning a fake blocklet with
blockletId=-1.
This does not affect the subsequent procedure.
----
---