[ https://issues.apache.org/jira/browse/HIVE-10073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381424#comment-14381424 ]
Chengxiang Li commented on HIVE-10073:
--------------------------------------

Hi, [~jxiang], I saw you only call checkOutputSpecs for ReduceWork, but a map-only job may contain a FileSinkOperator as well, so we may also need to call checkOutputSpecs for MapWork. Besides, checkOutputSpecs is invoked in SparkRecordHandler::init, which executes for every task; SparkPlanGenerator::generate(BaseWork work) may be a better place to do this. We can call checkOutputSpecs between cloning the jobconf and serializing it, so the check runs only once, on the RSC side.

> Runtime exception when querying HBase with Spark [Spark Branch]
> ---------------------------------------------------------------
>
>                 Key: HIVE-10073
>                 URL: https://issues.apache.org/jira/browse/HIVE-10073
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Jimmy Xiang
>            Assignee: Jimmy Xiang
>             Fix For: spark-branch
>
>         Attachments: HIVE-10073.1-spark.patch
>
>
> When querying HBase with Spark, we got
> {noformat}
> Caused by: java.lang.IllegalArgumentException: Must specify table name
>         at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:188)
>         at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>         at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>         at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:276)
>         at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:266)
>         at org.apache.hadoop.hive.ql.exec.FileSinkOperator.initializeOp(FileSinkOperator.java:331)
> {noformat}
> But it works fine for MapReduce.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
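The move suggested in the comment can be sketched as below. This is a minimal, self-contained illustration, not Hive code: the class and method names (CheckOnceSketch, generate, taskInit) are hypothetical stand-ins for SparkPlanGenerator::generate and SparkRecordHandler::init, and the counter just makes the "checked once per work unit, not once per task" behavior observable.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CheckOnceSketch {
    static final AtomicInteger checkCount = new AtomicInteger();

    // Stand-in for OutputFormat.checkOutputSpecs: validation that should run
    // once per work unit (MapWork or ReduceWork), not once per task.
    static void checkOutputSpecs(String workName) {
        checkCount.incrementAndGet();
        if (workName == null) {
            throw new IllegalArgumentException("Must specify table name");
        }
    }

    // Stand-in for SparkPlanGenerator.generate(BaseWork): clone the jobconf,
    // validate output specs once on the RSC side, then serialize the conf.
    static String generate(String workName) {
        String clonedConf = "conf-for-" + workName; // clone jobconf
        checkOutputSpecs(workName);                 // check once, between clone and serialize
        return clonedConf;                          // serialized conf shipped to tasks
    }

    // Stand-in for SparkRecordHandler.init: runs per task, with no
    // per-task checkOutputSpecs call anymore.
    static void taskInit(String serializedConf) {
        // each task only deserializes the conf; specs were already checked
    }

    public static void main(String[] args) {
        String conf = generate("MapWork");  // map-only work needs the check too
        for (int i = 0; i < 100; i++) {
            taskInit(conf);                 // 100 tasks, still a single check
        }
        System.out.println(checkCount.get()); // prints 1
    }
}
```

With the check in the per-task init instead, the counter would read 100, which is the redundancy the comment is pointing out.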