[ https://issues.apache.org/jira/browse/HIVE-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Supriya Sahay updated HIVE-8254:
--------------------------------
    Summary: Transaction throwing java.lang.NullPointerException at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.heartbeat(DbTxnManager.java:244)  (was: Compaction throwing java.lang.NullPointerException at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.heartbeat(DbTxnManager.java:244))

> Transaction throwing java.lang.NullPointerException at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.heartbeat(DbTxnManager.java:244)
> --------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-8254
>                 URL: https://issues.apache.org/jira/browse/HIVE-8254
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.13.0
>            Reporter: Supriya Sahay
>
> While trying to INSERT OVERWRITE into a bucketed table using transactions, I get the following error:
> java.lang.NullPointerException
>         at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.heartbeat(DbTxnManager.java:244)
>         at org.apache.hadoop.hive.ql.exec.Heartbeater.heartbeat(Heartbeater.java:79)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:242)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
>         at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1508)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1275)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1093)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
>         at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
>         at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
>         at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
>         at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Ended Job = job_1411574868628_0015 with exception 'java.lang.NullPointerException(null)'
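The trace suggests heartbeat() dereferences transaction/lock state that was never initialized for this query. A hypothetical, simplified sketch of that failure mode and a defensive guard (the field name openTxnId is illustrative only, not Hive's actual code):

```java
// NOT Hive's real implementation: a minimal stand-in showing how a
// heartbeat that unconditionally dereferences per-transaction state can
// throw an NPE when the query ran without an open transaction.
public class HeartbeatNpeSketch {
    // Stand-in for whatever txn/lock state the real manager tracks (assumed name).
    static Long openTxnId = null;

    // Buggy shape: dereferences state unconditionally, like the NPE above.
    static void heartbeatUnsafe() {
        System.out.println("heartbeating txn " + openTxnId.longValue()); // NPE if null
    }

    // Defensive shape: a no-op when there is nothing to heartbeat.
    static boolean heartbeatSafe() {
        if (openTxnId == null) {
            return false; // nothing open; skip instead of crashing
        }
        System.out.println("heartbeating txn " + openTxnId.longValue());
        return true;
    }

    public static void main(String[] args) {
        try {
            heartbeatUnsafe(); // throws, mirroring the stack trace above
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the report");
        }
        System.out.println(heartbeatSafe()); // false: no transaction open
        openTxnId = 42L;
        System.out.println(heartbeatSafe()); // true: heartbeat proceeds
    }
}
```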
> This is what I was doing:
> hive> CREATE EXTERNAL TABLE BUCKET_EMP (ID INT, NAME STRING, VAR STRING)
>     > PARTITIONED BY (COUNTRY STRING)
>     > CLUSTERED BY(VAR) INTO 3 BUCKETS
>     > ROW FORMAT DELIMITED
>     > FIELDS TERMINATED BY '\t'
>     > LINES TERMINATED BY '\n'
>     > STORED AS ORC
>     > LOCATION '/tmp/bucket_emp';
> hive> SELECT * FROM BUCKET_EMP;
> OK
> 7       G       x       AUS
> 3       C       1       AUS
> 8       H       y       IND
> 10      J       y       UK
> 2       B       y       UK
> 6       F       2       UK
> 4       D       2       UK
> 9       I       x       US
> 1       A       x       US
> 5       E       1       US
> hive> SET hive.exec.dynamic.partition = true;
> hive> SET hive.exec.dynamic.partition.mode = nonstrict;
> hive> SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> hive> SET hive.compactor.initiator.on = true;
> hive> SET hive.compactor.worker.threads = 3;
> hive> SET hive.compactor.check.interval = 300;
> hive> SET hive.compactor.delta.num.threshold = 1;
> hive> INSERT OVERWRITE TABLE BUCKET_EMP
>     > PARTITION(COUNTRY)
>     > SELECT ID, NAME,
>     > CASE WHEN VAR = '1' THEN 'X' WHEN VAR = '2' THEN 'Y' END AS VAR, COUNTRY
>     > FROM EMP;
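For comparison, the session above does not set hive.support.concurrency, which the Hive Transactions documentation lists as required when using DbTxnManager. A sketch of the commonly documented settings for ACID on Hive 0.13 (worth double-checking against the wiki for your exact version):

```sql
-- Client side: documented prerequisites for transactions in Hive 0.13
SET hive.support.concurrency = true;
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- Metastore/server side: compaction
SET hive.compactor.initiator.on = true;
SET hive.compactor.worker.threads = 1;
```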



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
