[ https://issues.apache.org/jira/browse/HIVE-5009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benjamin Jakobus updated HIVE-5009:
-----------------------------------

    Description: 
I have found some minor optimization issues in the codebase, which I would like 
to rectify and contribute. Specifically, the optimizations that could be applied 
to Hive's code base are as follows:

1. Use StringBuilder when appending strings - In 184 instances, the 
concatenation operator (+=) was used when appending strings. This is inherently 
inefficient; Java's StringBuilder (or, where thread safety is required, 
StringBuffer) class should be used instead. 12 instances of this optimization 
can be applied to the GenMRSkewJoinProcessor class and another three to the 
optimizer. CliDriver uses the + operator inside a loop, as do the column 
projection utilities class (ColumnProjectionUtils) and the aforementioned 
skew-join processor. Tests showed that appending strings with a StringBuilder 
is 57% faster than using the + operator (the StringBuilder run took 122 
milliseconds whilst the + operator took 284 milliseconds). The reason 
StringBuilder is preferred over the + operator is that

String third = first + second;

gets compiled to:

StringBuilder builder = new StringBuilder( first );
builder.append( second );
third = builder.toString();

Each use of + therefore instantiates a new StringBuilder and a new String. 
Building complex strings that, for example, involve loops thus requires many 
instantiations (and, as discussed below, creating new objects inside loops is 
inefficient).
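A minimal sketch of the two patterns described above; the class and method names are illustrative, not taken from the Hive codebase:

```java
public class AppendDemo {

    // Each += compiles to a fresh StringBuilder plus a fresh String per iteration.
    static String concatWithPlus(String[] parts) {
        String result = "";
        for (String p : parts) {
            result += p;
        }
        return result;
    }

    // A single StringBuilder reused across all iterations.
    static String concatWithBuilder(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"a", "b", "c"};
        // Both produce the same string; only the allocation behaviour differs.
        System.out.println(concatWithPlus(parts).equals(concatWithBuilder(parts))); // true
    }
}
```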


2. Use arrays instead of List - The asList method of Java's java.util.Arrays 
class is more efficient at creating lists from arrays than using loops to 
manually iterate over the elements (asList is computationally very cheap, 
O(1), as it merely creates a wrapper object around the array; looping through 
the array, however, has a complexity of O(n), since a new list is created and 
every element of the array is added to it). As confirmed by the experiment 
detailed in Appendix D, the Java compiler does not automatically optimize and 
replace tight-loop copying with asList: loop-copying 1,000,000 items took 15 
milliseconds whilst using asList was near-instant. 

Four instances of this optimization can be applied to Hive's codebase (two of 
these should be applied to the Map-Join container - MapJoinRowContainer) - 
lines 92 to 98:

for (obj = other.first(); obj != null; obj = other.next()) {
  ArrayList<Object> ele = new ArrayList(obj.length);
  for (int i = 0; i < obj.length; i++) {
    ele.add(obj[i]);
  }
  list.add((Row) ele);
}
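A sketch of the replacement, under illustrative names (not the actual Hive types). Note that Arrays.asList returns a fixed-size view backed by the array; if a mutable list is needed, the view can be copied in bulk rather than element by element:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AsListDemo {

    // O(n) element-by-element copy, as in the loop above.
    static List<Object> manualCopy(Object[] obj) {
        List<Object> list = new ArrayList<>(obj.length);
        for (int i = 0; i < obj.length; i++) {
            list.add(obj[i]);
        }
        return list;
    }

    public static void main(String[] args) {
        Object[] row = {"k", "v1", "v2"};

        // O(1): a fixed-size view backed by the array, no copying at all.
        List<Object> view = Arrays.asList(row);

        // If mutability is required, copy the view in one bulk operation.
        List<Object> mutable = new ArrayList<>(view);

        System.out.println(manualCopy(row).equals(mutable)); // true
    }
}
```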


3. Unnecessary wrapper object creation - In 31 cases, wrapper object creation 
could be avoided by simply using the provided static conversion methods. As 
noted in the PMD documentation, "using these avoids the cost of creating 
objects that also need to be garbage-collected later."

For example, line 587 of the SemanticAnalyzer class could be replaced by the 
more efficient parseDouble method call:

// Inefficient: boxes a Double only to unbox it again:
Double percent = Double.valueOf(value).doubleValue();
// To be replaced by (note the primitive type, which avoids re-boxing the result):
double percent = Double.parseDouble(value);


Our test case in Appendix D confirms this: converting 10,000 strings into 
integers via the wrapper route (Integer.valueOf(gen.nextSessionId()).intValue(), 
i.e. creating an unnecessary wrapper object) took 119 milliseconds on average; 
using Integer.parseInt(gen.nextSessionId()) took only 38 milliseconds. 
Therefore creating even just one unnecessary wrapper object per conversion can 
make your code up to 68% slower.
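The same contrast in miniature (illustrative names, not from the Hive codebase):

```java
public class ParseDemo {

    // Boxes the result into an Integer, then immediately unboxes it:
    // the wrapper object exists only to be garbage-collected.
    static int viaWrapper(String s) {
        return Integer.valueOf(s).intValue();
    }

    // Parses straight to the primitive; no intermediate object.
    static int viaParse(String s) {
        return Integer.parseInt(s);
    }

    public static void main(String[] args) {
        System.out.println(viaWrapper("42") == viaParse("42")); // true
    }
}
```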

4. Converting literals to strings using + "" - Converting literals to strings 
by appending an empty string (+ "") is quite inefficient (see Appendix D); the 
toString() method should be called instead: converting 1,000,000 integers to 
strings using + "" took, on average, 1340 milliseconds whilst the toString() 
method required only 1183 milliseconds (hence appending empty strings takes 
nearly 12% more time). 

89 instances of + "" being used to convert literals were found in Hive's 
codebase - one of them in JoinUtil.
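A minimal sketch of the two forms (names are illustrative):

```java
public class ToStringDemo {

    // Hidden StringBuilder plus an extra empty-string append.
    static String viaConcat(int n) {
        return n + "";
    }

    // Direct conversion, no intermediate builder.
    static String viaToString(int n) {
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        System.out.println(viaConcat(1183).equals(viaToString(1183))); // true
    }
}
```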

5. Avoid manual copying of arrays - Instead of copying arrays as is done in 
GroupByOperator on line 1040 (see below), the more efficient System.arraycopy 
can be used (arraycopy is a native method, meaning the entire memory block is 
copied using memcpy or memmove).

// Line 1040 of GroupByOperator
for (int i = 0; i < keys.length; i++) {
  forwardCache[i] = keys[i];
}

Using System.arraycopy on an array of 10,000 strings was (close to) instant 
whilst the manual copy took 6 milliseconds.
11 instances of this optimization should be applied to the Hive codebase.
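The replacement, sketched with illustrative names rather than the actual GroupByOperator fields:

```java
import java.util.Arrays;

public class CopyDemo {

    // One native bulk copy replaces the element-by-element loop above.
    static Object[] bulkCopy(Object[] keys) {
        Object[] forwardCache = new Object[keys.length];
        System.arraycopy(keys, 0, forwardCache, 0, keys.length);
        return forwardCache;
    }

    public static void main(String[] args) {
        Object[] keys = {"a", "b", "c"};
        System.out.println(Arrays.equals(keys, bulkCopy(keys))); // true
    }
}
```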

6. Avoiding instantiation inside loops - As noted in the PMD documentation, 
"new objects created within loops should be checked to see if they can be 
created outside them and reused." 

Instantiating an object inside a loop (i from 0 to 10,000) took 300 
milliseconds, whilst creating it once outside the loop and reusing it took 
only 88 milliseconds. This can be explained by the fact that an object created 
outside the loop is allocated once and its reference reused on every 
iteration, whereas an object created inside the loop is allocated anew on each 
pass: in our case, 10,000 objects will have been created by the time the loop 
finishes, meaning a lot of work in terms of memory allocation and garbage 
collection. 1623 instances of this optimization can be applied.
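A sketch of hoisting an allocation out of a loop (illustrative names; note this is only safe when the object can be fully reset between iterations):

```java
public class HoistDemo {

    // A fresh StringBuilder is allocated on every iteration.
    static String insideLoop(String[] rows) {
        String last = "";
        for (String row : rows) {
            StringBuilder sb = new StringBuilder();
            sb.append(row);
            last = sb.toString();
        }
        return last;
    }

    // One allocation; the builder is cleared and reused on each pass.
    static String outsideLoop(String[] rows) {
        StringBuilder sb = new StringBuilder();
        String last = "";
        for (String row : rows) {
            sb.setLength(0);  // reset previous contents before reuse
            sb.append(row);
            last = sb.toString();
        }
        return last;
    }

    public static void main(String[] args) {
        String[] rows = {"r1", "r2"};
        System.out.println(insideLoop(rows).equals(outsideLoop(rows))); // true
    }
}
```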

To summarize, I propose to modify the code to address issues 1 and 6 (the 
remaining issues, 2-5, will be addressed later). Details are specified as 
sub-tasks.





  was:
To summarize, I propose to modify the code to address issue 1 and issue 6 
(remaining issues (2 - 5) will be addressed later). The following classes would 
be modified:

Avoiding object instantiation in loops (issue 6):
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Adjacency.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Graph.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Operator.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Query.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/QueryPlan.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Stage.java
gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/Task.java
java/org/apache/hadoop/hive/ql/Context.java
java/org/apache/hadoop/hive/ql/Driver.java
java/org/apache/hadoop/hive/ql/QueryPlan.java
java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java
java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DefaultBucketMatcher.java
java/org/apache/hadoop/hive/ql/exec/DemuxOperator.java
java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
java/org/apache/hadoop/hive/ql/exec/MapOperator.java
java/org/apache/hadoop/hive/ql/exec/MoveTask.java
java/org/apache/hadoop/hive/ql/exec/MuxOperator.java
java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
java/org/apache/hadoop/hive/ql/exec/PTFPersistence.java
java/org/apache/hadoop/hive/ql/exec/PartitionKeySampler.java
java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
java/org/apache/hadoop/hive/ql/exec/SkewJoinHandler.java
java/org/apache/hadoop/hive/ql/exec/StatsTask.java
java/org/apache/hadoop/hive/ql/exec/TaskFactory.java
java/org/apache/hadoop/hive/ql/exec/UDFArgumentException.java
java/org/apache/hadoop/hive/ql/exec/UnionOperator.java
java/org/apache/hadoop/hive/ql/exec/Utilities.java
java/org/apache/hadoop/hive/ql/exec/errors/RegexErrorHeuristic.java
java/org/apache/hadoop/hive/ql/exec/errors/ScriptErrorHeuristic.java
java/org/apache/hadoop/hive/ql/exec/errors/TaskLogProcessor.java
java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
java/org/apache/hadoop/hive/ql/exec/mr/ExecReducer.java
java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
java/org/apache/hadoop/hive/ql/exec/mr/JobDebugger.java
java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java
java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java
java/org/apache/hadoop/hive/ql/exec/mr/Throttle.java
java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinObjectValue.java
java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinRowContainer.java
java/org/apache/hadoop/hive/ql/history/HiveHistory.java
java/org/apache/hadoop/hive/ql/index/HiveIndexResult.java
java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java
java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java
java/org/apache/hadoop/hive/ql/index/TableBasedIndexHandler.java
java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java
java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java
java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java
java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
java/org/apache/hadoop/hive/ql/io/NonSyncDataInputBuffer.java
java/org/apache/hadoop/hive/ql/io/RCFile.java
java/org/apache/hadoop/hive/ql/io/RCFileInputFormat.java
java/org/apache/hadoop/hive/ql/io/SequenceFileInputFormatChecker.java
java/org/apache/hadoop/hive/ql/io/SymbolicInputFormat.java
java/org/apache/hadoop/hive/ql/io/SymlinkTextInputFormat.java
java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
java/org/apache/hadoop/hive/ql/io/orc/DynamicIntArray.java
java/org/apache/hadoop/hive/ql/io/orc/FileDump.java
java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/merge/RCFileMergeMapper.java
java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/truncate/ColumnTruncateMapper.java
java/org/apache/hadoop/hive/ql/lockmgr/EmbeddedLockManager.java
java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
java/org/apache/hadoop/hive/ql/metadata/Hive.java
java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java
java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java
java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java
java/org/apache/hadoop/hive/ql/optimizer/AbstractBucketJoinProc.java
java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java
java/org/apache/hadoop/hive/ql/optimizer/BucketingSortingReduceSinkOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java
java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/SkewJoinOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/correlation/CorrelationOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/correlation/QueryPlanTreeTransformation.java
java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java
java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java
java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
java/org/apache/hadoop/hive/ql/optimizer/lineage/OpProcFactory.java
java/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPruner.java
java/org/apache/hadoop/hive/ql/optimizer/physical/AbstractJoinTaskDispatcher.java
java/org/apache/hadoop/hive/ql/optimizer/physical/BucketingSortingInferenceOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/physical/BucketingSortingOpProcFactory.java
java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/LocalMapJoinProcFactory.java
java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
java/org/apache/hadoop/hive/ql/optimizer/unionproc/UnionProcFactory.java
java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/EximUtil.java
java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/IndexUpdater.java
java/org/apache/hadoop/hive/ql/parse/MacroSemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/MapReduceCompiler.java
java/org/apache/hadoop/hive/ql/parse/PTFInvocationSpec.java
java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
java/org/apache/hadoop/hive/ql/parse/WindowingComponentizer.java
java/org/apache/hadoop/hive/ql/parse/WindowingSpec.java
java/org/apache/hadoop/hive/ql/plan/BucketMapJoinContext.java
java/org/apache/hadoop/hive/ql/plan/ConditionalResolverCommonJoin.java
java/org/apache/hadoop/hive/ql/plan/ConditionalResolverSkewJoin.java
java/org/apache/hadoop/hive/ql/plan/FetchWork.java
java/org/apache/hadoop/hive/ql/plan/HashTableSinkDesc.java
java/org/apache/hadoop/hive/ql/plan/JoinDesc.java
java/org/apache/hadoop/hive/ql/plan/ListBucketingCtx.java
java/org/apache/hadoop/hive/ql/plan/MapJoinDesc.java
java/org/apache/hadoop/hive/ql/plan/MsckDesc.java
java/org/apache/hadoop/hive/ql/plan/PTFDesc.java
java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
java/org/apache/hadoop/hive/ql/plan/ReduceSinkDesc.java
java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
java/org/apache/hadoop/hive/ql/ppd/PredicateTransitivePropagate.java
java/org/apache/hadoop/hive/ql/session/CreateTableAutomaticGrant.java
java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java
java/org/apache/hadoop/hive/ql/udf/UDFJson.java
java/org/apache/hadoop/hive/ql/udf/generic/AbstractGenericUDFEWAHBitmapBop.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFContextNGrams.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCumeDist.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFHistogramNumeric.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFNTile.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFPercentRank.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFPercentileApprox.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFnGrams.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapEmpty.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSentences.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSplit.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFInline.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFParseUrlTuple.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFStack.java
java/org/apache/hadoop/hive/ql/udf/generic/NGramEstimator.java
java/org/apache/hadoop/hive/ql/udf/generic/NumDistinctValueEstimator.java
java/org/apache/hadoop/hive/ql/udf/generic/NumericHistogram.java
java/org/apache/hadoop/hive/ql/udf/ptf/NPath.java
java/org/apache/hadoop/hive/ql/udf/ptf/WindowingTableFunction.java
java/org/apache/hadoop/hive/ql/udf/xml/GenericUDFXPath.java


Issue 1 (use of StringBuffer over +=)
java/org/apache/hadoop/hive/ql/Driver.java
java/org/apache/hadoop/hive/ql/Driver.java
java/org/apache/hadoop/hive/ql/QueryPlan.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/DDLTask.java
java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java
java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java
java/org/apache/hadoop/hive/ql/exec/persistence/RowContainer.java
java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/truncate/ColumnTruncateTask.java
java/org/apache/hadoop/hive/ql/io/rcfile/truncate/ColumnTruncateTask.java
java/org/apache/hadoop/hive/ql/lib/RuleExactMatch.java
java/org/apache/hadoop/hive/ql/lib/RuleRegExp.java
java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java
java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java
java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java
java/org/apache/hadoop/hive/ql/metadata/Partition.java
java/org/apache/hadoop/hive/ql/metadata/Table.java
java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java
java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java
java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java
java/org/apache/hadoop/hive/ql/udf/UDFLike.java
java/org/apache/hadoop/hive/ql/udf/UDFLike.java
java/org/apache/hadoop/hive/ql/udf/UDFLike.java
java/org/apache/hadoop/hive/ql/udf/UDFLike.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSentences.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSentences.java
java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSentences.java
java/org/apache/hadoop/hive/ql/udf/generic/NumDistinctValueEstimator.java
java/org/apache/hadoop/hive/ql/udf/generic/NumDistinctValueEstimator.java
java/org/apache/hadoop/hive/ql/udf/generic/NumDistinctValueEstimator.java
java/org/apache/hadoop/hive/ql/udf/generic/NumDistinctValueEstimator.java
java/org/apache/hadoop/hive/ql/udf/ptf/NPath.java
java/org/apache/hadoop/hive/ql/udf/ptf/NPath.java
java/org/apache/hadoop/hive/ql/udf/ptf/NPath.java
java/org/apache/hadoop/hive/ql/udf/ptf/NPath.java





    
> Fix minor optimization issues
> -----------------------------
>
>                 Key: HIVE-5009
>                 URL: https://issues.apache.org/jira/browse/HIVE-5009
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Benjamin Jakobus
>            Assignee: Benjamin Jakobus
>            Priority: Minor
>             Fix For: 0.12.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
