[ https://issues.apache.org/jira/browse/HIVE-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser updated HIVE-7950:
-----------------------------
    Summary: StorageHandler resources aren't added to Tez Session if Session is already Open  (was: AccumuloStorageHandler doesn't work with Hive on Tez)

> StorageHandler resources aren't added to Tez Session if Session is already Open
> --------------------------------------------------------------------------------
>
>                 Key: HIVE-7950
>                 URL: https://issues.apache.org/jira/browse/HIVE-7950
>             Project: Hive
>          Issue Type: Bug
>          Components: StorageHandler, Tez
>            Reporter: Josh Elser
>            Assignee: Josh Elser
>             Fix For: 0.14.0
>
>
> Was trying to run some queries using the AccumuloStorageHandler with the Tez execution engine. Some things I've noticed so far (probably more to come once I get past the ones already found):
> * Jars added to the classpath via tmpjars (which is done by the copied HBase Utils class) aren't available in the Tez Map task -- need to compare against HBaseStorageHandler and see if there is something magical happening there (see the tmpjars sketch after this list).
> * Configuration generated by the AccumuloStorageHandler doesn't make it all the way to the Configuration passed to the AccumuloOutputFormat (probably AccumuloInputFormat, too); the resulting failure is in the stack trace below.
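> For reference on the first item, a StorageHandler typically ships its dependency jars by appending them to the {{tmpjars}} property in {{configureJobConf}}; MapReduce localizes everything listed there, but an already-open Tez session apparently never picks up the update. A minimal sketch of that mechanism (the handler class and jar path are placeholders, not the actual Accumulo code):
> {code:java}
> import org.apache.hadoop.hive.ql.metadata.HiveStorageHandler;
> import org.apache.hadoop.hive.ql.plan.TableDesc;
> import org.apache.hadoop.mapred.JobConf;
>
> // Fragment of a hypothetical StorageHandler showing how dependency jars
> // usually reach the task classpath via the tmpjars property.
> public abstract class ExampleStorageHandler implements HiveStorageHandler {
>   @Override
>   public void configureJobConf(TableDesc tableDesc, JobConf jobConf) {
>     // Placeholder path; real handlers resolve their jars from the classpath.
>     String handlerJar = "file:///tmp/accumulo-core.jar";
>     String existing = jobConf.get("tmpjars", "");
>     // MapReduce localizes every jar listed in tmpjars; per this issue, an
>     // already-open Tez session does not.
>     jobConf.set("tmpjars", existing.isEmpty() ? handlerJar : existing + "," + handlerJar);
>   }
> }
> {code}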
> {noformat}
> 2014-09-03 01:28:45,357 ERROR [TezChild] org.apache.hadoop.hive.ql.exec.tez.TezProcessor: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"row":"a","col":"d"}
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:195)
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:161)
>       at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:165)
>       at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:309)
>       at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
>       at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>       at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
>       at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
>       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"row":"a","col":"d"}
>       at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:550)
>       at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:183)
>       ... 15 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.IllegalStateException: Instance has not been configured for AccumuloOutputFormat
>       at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:459)
>       at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:540)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
>       at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
>       at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
>       at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:540)
>       ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.IllegalStateException: Instance has not been configured for AccumuloOutputFormat
>       at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:286)
>       at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:496)
>       at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:448)
>       ... 23 more
> Caused by: java.io.IOException: java.lang.IllegalStateException: Instance has not been configured for AccumuloOutputFormat
>       at org.apache.accumulo.core.client.mapred.AccumuloOutputFormat.getRecordWriter(AccumuloOutputFormat.java:553)
>       at org.apache.hadoop.hive.ql.io.HivePassThroughOutputFormat.getHiveRecordWriter(HivePassThroughOutputFormat.java:113)
>       at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:296)
>       at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:283)
>       ... 25 more
> Caused by: java.lang.IllegalStateException: Instance has not been configured for AccumuloOutputFormat
>       at org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.getInstance(ConfiguratorBase.java:335)
>       at org.apache.accumulo.core.client.mapred.AccumuloOutputFormat.getInstance(AccumuloOutputFormat.java:229)
>       at org.apache.accumulo.core.client.mapred.AccumuloOutputFormat$AccumuloRecordWriter.<init>(AccumuloOutputFormat.java:402)
>       at org.apache.accumulo.core.client.mapred.AccumuloOutputFormat.getRecordWriter(AccumuloOutputFormat.java:551)
>       ... 28 more
> {noformat}
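> For the second item: {{AccumuloOutputFormat.getRecordWriter}} can only build a writer after the output format's static configuration methods have populated the job's Configuration, which is exactly what {{ConfiguratorBase.getInstance}} complains about above. A minimal sketch of the kind of calls whose effect apparently isn't reaching the Configuration handed to the Tez task (instance name, ZooKeeper quorum, credentials, and table name are placeholders):
> {code:java}
> import org.apache.accumulo.core.client.mapred.AccumuloOutputFormat;
> import org.apache.accumulo.core.client.security.tokens.PasswordToken;
> import org.apache.hadoop.mapred.JobConf;
>
> public class AccumuloOutputConfSketch {
>   public static void main(String[] args) throws Exception {
>     JobConf conf = new JobConf();
>
>     // Credentials used to write to Accumulo (placeholders).
>     AccumuloOutputFormat.setConnectorInfo(conf, "root", new PasswordToken("secret"));
>
>     // Instance lookup; without this, ConfiguratorBase.getInstance() throws
>     // "Instance has not been configured for AccumuloOutputFormat".
>     AccumuloOutputFormat.setZooKeeperInstance(conf, "accumulo", "zk1:2181");
>
>     // Table receiving Mutations that don't name a table explicitly (placeholder).
>     AccumuloOutputFormat.setDefaultTableName(conf, "hive_accumulo_table");
>   }
> }
> {code}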



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
