abstractdog commented on code in PR #5195:
URL: https://github.com/apache/hive/pull/5195#discussion_r1570103070
##########
ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java:
##########
@@ -1727,14 +1727,15 @@ private void populateAndCacheStripeDetails() throws IOException {
   private long computeProjectionSize(List<OrcProto.Type> fileTypes,
       List<OrcProto.ColumnStatistics> stats, boolean[] fileIncluded) throws FileFormatException {
     List<Integer> internalColIds = Lists.newArrayList();
+    int rootColumn = 0;
     if (fileIncluded == null) {
       // Add all.
-      for (int i = 0; i < fileTypes.size(); i++) {
+      for (int i = rootColumn + 1; i < fileTypes.size(); i++) {
         internalColIds.add(i);
       }
     } else {
-      for (int i = 0; i < fileIncluded.length; i++) {
-        if (fileIncluded[i]) {
+      for (int i = rootColumn + 1; i < fileIncluded.length; i++) {
+        if (fileIncluded[i] && (isOriginal || i != OrcRecordUpdater.ROW + 1)) {
Review Comment:
   This condition is vague and hard to follow for an average code reader; please add a comment explaining each part:
   - isOriginal
   - OrcRecordUpdater.ROW + 1

   Please also add a unit test for computeProjectionSize, even if that means making it @VisibleForTesting package-private.
   Unfortunately, the current getProjectedColumnsUncompressedSize unit test assertions tell us nothing about the different scenarios (they read more like noisy q.outs :) )
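To illustrate the kind of comments the reviewer is asking for, here is a minimal, self-contained sketch of the selection loop under review. It is not the Hive implementation: the ROW constant stands in for OrcRecordUpdater.ROW (assumed to be the index of the "row" field in the ACID event struct), the reason given for skipping ROW + 1 is an assumption based on the diff, and selectInternalColumns is a hypothetical name.

```java
import java.util.ArrayList;
import java.util.List;

public class ProjectionSizeSketch {

  // Assumption: in Hive's ACID event struct
  // <operation, originalTransaction, bucket, rowId, currentTransaction, row>
  // the "row" field sits at index 5, i.e. OrcRecordUpdater.ROW.
  static final int ROW = 5;

  static List<Integer> selectInternalColumns(boolean[] fileIncluded, boolean isOriginal) {
    List<Integer> internalColIds = new ArrayList<>();
    // Internal column id 0 is the ORC root struct; it carries no data of
    // its own, so iteration starts at rootColumn + 1.
    int rootColumn = 0;
    for (int i = rootColumn + 1; i < fileIncluded.length; i++) {
      // isOriginal: the file predates ACID, so there is no event wrapper
      //   struct and every included column is a plain data column.
      // ROW + 1: in an ACID file the internal ids are shifted by one
      //   (the root struct is id 0), so the "row" wrapper struct sits at
      //   ROW + 1; it is skipped here (presumably because its size is
      //   already accounted for by its child columns).
      if (fileIncluded[i] && (isOriginal || i != ROW + 1)) {
        internalColIds.add(i);
      }
    }
    return internalColIds;
  }

  public static void main(String[] args) {
    // Ids 1..6 included; id 6 (ROW + 1) is the hypothetical "row" wrapper.
    boolean[] included = {true, true, true, true, true, true, true};
    System.out.println(selectInternalColumns(included, false)); // [1, 2, 3, 4, 5]
    System.out.println(selectInternalColumns(included, true));  // [1, 2, 3, 4, 5, 6]
  }
}
```

A unit test along these lines, exercising both the isOriginal and the ACID branch, is roughly what the review asks to add for computeProjectionSize.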
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]