[
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606649#comment-16606649
]
Hive QA commented on HIVE-20484:
--------------------------------
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12938718/HIVE-20484.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14928 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] (batchId=56)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13634/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13634/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13634/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12938718 - PreCommit-HIVE-Build
> Disable Block Cache By Default With HBase SerDe
> -----------------------------------------------
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
> Issue Type: Improvement
> Components: HBase Handler
> Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be
> false.
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
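> A minimal sketch of that recommendation, assuming a plain HBase client {{Scan}} built as MapReduce input (the helper class below is illustrative, not part of Hive or HBase):
> {code:java}
> import org.apache.hadoop.hbase.client.Scan;
>
> public final class MapReduceScanFactory {
>
>   // Builds a Scan intended to feed a MapReduce job over an HBase table.
>   // Per the HBase performance guide, such full scans should not populate
>   // the RegionServer block cache.
>   public static Scan newInputScan() {
>     Scan scan = new Scan();
>     scan.setCacheBlocks(false);
>     return scan;
>   }
> }
> {code}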
> However, the Hive code shows that this is not the case.
> {code:java}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks =
>     tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> As the code shows, if {{hbase.scan.cacheblock}} is not specified in the
> {{SERDEPROPERTIES}}, then {{setCacheBlocks}} is never called and the default
> behavior of the HBase {{Scan}} class applies.
> {code:java|title=Scan.java}
> /**
>  * Set whether blocks should be cached for this Scan.
>  * <p>
>  * This is true by default. When true, default settings of the table and
>  * family are used (this will never override caching blocks if the block
>  * cache is disabled for that family or entirely).
>  *
>  * @param cacheBlocks if false, default settings are overridden and blocks
>  *          will not be cached
>  */
> public Scan setCacheBlocks(boolean cacheBlocks) {
>   this.cacheBlocks = cacheBlocks;
>   return this;
> }
> {code}
> Hive performs full table scans via MapReduce/Spark, so according to the HBase
> documentation blocks should not be cached for these scans. Hive should
> therefore set this value to "false" by default unless the table's
> {{SERDEPROPERTIES}} override it.
> {code:sql}
> -- Commands for HBase shell:
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
>   "hbase.columns.mapping" = "t:,:key",
>   "hbase.scan.cacheblock" = "false"
> );
> {code}
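> A possible shape of that default, sketched as a standalone helper rather than the actual patch (the class and method names are illustrative; only {{hbase.scan.cacheblock}} comes from Hive): fall back to {{"false"}} whenever the table properties do not set the key.
> {code:java}
> import java.util.Map;
> import java.util.Properties;
>
> public final class ScanCacheBlocksDefaulter {
>
>   static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
>
>   // Copies the table-level setting into the job properties, defaulting to
>   // "false" (no block caching) when SERDEPROPERTIES does not specify one.
>   public static void apply(Properties tableProperties,
>       Map<String, String> jobProperties) {
>     String scanCacheBlocks =
>         tableProperties.getProperty(HBASE_SCAN_CACHEBLOCKS, "false");
>     jobProperties.put(HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
>   }
> }
> {code}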