[
https://issues.apache.org/jira/browse/HIVE-12058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14953239#comment-14953239
]
Jimmy Xiang commented on HIVE-12058:
------------------------------------
I think we should not remove "2>/dev/null". Otherwise, the script will sometimes
break when mapredcp outputs more than we need.
In the failure scenario you have, does mapredcp output more or less than the
normal output?
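For illustration only (not from the patch; variable names and log paths here are hypothetical): the breakage would come from letting stderr leak into the command substitution, while sending stderr to a log instead of /dev/null records the error without polluting the captured classpath.

# Hypothetical sketch: if stderr is merged into the substitution, any warning
# hbase prints ends up inside the classpath value and corrupts it.
HBASE_CP=$("$HBASE_BIN" mapredcp 2>&1)

# Keeping stderr out of the substitution (to a log file instead of /dev/null)
# records the error while leaving the classpath value clean.
HBASE_CP=$("$HBASE_BIN" mapredcp 2>>/tmp/hive_hbase_mapredcp.err)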
> Change hive script to record errors when calling hbase fails
> ------------------------------------------------------------
>
> Key: HIVE-12058
> URL: https://issues.apache.org/jira/browse/HIVE-12058
> Project: Hive
> Issue Type: Bug
> Components: Hive, HiveServer2
> Affects Versions: 0.14.0, 1.1.0, 2.0.0
> Reporter: Yongzhi Chen
> Assignee: Yongzhi Chen
> Attachments: HIVE-12058.1.patch
>
>
> By default hive will try to find out which jars need to be added to the
> classpath in order to run MR jobs against an HBase cluster; however, if hbase
> can't be found or if hbase mapredcp fails, the hive script fails silently and
> omits some of the jars that should be included in the classpath. That makes it
> very difficult to analyze the real problem.
> The hive script should record the error, not just silently redirect the stderr
> of the two hbase calls:
> HBASE_BIN=${HBASE_BIN:-"$(which hbase 2>/dev/null)"}
> $HBASE_BIN mapredcp 2>/dev/null
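A minimal sketch of one way the script could record these failures while keeping
stdout clean (the log path and variable names are illustrative, not taken from
HIVE-12058.1.patch):

HBASE_BIN=${HBASE_BIN:-"$(which hbase 2>/dev/null)"}
if [ -z "$HBASE_BIN" ]; then
  echo "hive: hbase not found on PATH; HBase jars will not be added to the classpath" >&2
else
  # Send stderr to a log file instead of /dev/null so failures can be diagnosed.
  HBASE_CP=$("$HBASE_BIN" mapredcp 2>/tmp/hive_hbase_mapredcp.err)
  if [ $? -ne 0 ] || [ -z "$HBASE_CP" ]; then
    echo "hive: 'hbase mapredcp' failed, see /tmp/hive_hbase_mapredcp.err" >&2
  fi
fi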