Takahiko Saito created HIVE-12596:
-------------------------------------

             Summary: Delete timestamp row throws java.lang.IllegalArgumentException: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]
                 Key: HIVE-12596
                 URL: https://issues.apache.org/jira/browse/HIVE-12596
             Project: Hive
          Issue Type: Bug
    Affects Versions: 2.0.0
            Reporter: Takahiko Saito


Run the statements below:
{noformat}
create table test_acid( i int, ts timestamp)
                      clustered by (i) into 2 buckets
                      stored as orc
                      tblproperties ('transactional'='true');
insert into table test_acid values (1, '2014-09-14 12:34:30');
delete from test_acid where ts = '2014-15-16 17:18:19.20';
{noformat}

The following error is thrown:
{noformat}
15/12/04 19:55:49 INFO SessionState: Map 1: -/- Reducer 2: 0/2
Status: Failed
15/12/04 19:55:49 ERROR SessionState: Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1447960616881_0022_2_00, diagnostics=[Vertex vertex_1447960616881_0022_2_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: test_acid initializer failed, vertex=vertex_1447960616881_0022_2_00 [Map 1], java.lang.IllegalArgumentException: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]
        at java.sql.Timestamp.valueOf(Timestamp.java:237)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.boxLiteral(ConvertAstToSearchArg.java:160)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.findLiteral(ConvertAstToSearchArg.java:191)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.createLeaf(ConvertAstToSearchArg.java:268)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.createLeaf(ConvertAstToSearchArg.java:326)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.parse(ConvertAstToSearchArg.java:377)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.<init>(ConvertAstToSearchArg.java:68)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.create(ConvertAstToSearchArg.java:417)
        at org.apache.hadoop.hive.ql.io.sarg.ConvertAstToSearchArg.createFromConf(ConvertAstToSearchArg.java:436)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$Context.<init>(OrcInputFormat.java:484)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1121)
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1207)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:369)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:481)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:160)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:246)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:240)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:240)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:227)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
{noformat}
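
The root cause is visible in the trace: ConvertAstToSearchArg.boxLiteral passes the raw string literal from the WHERE clause to java.sql.Timestamp.valueOf while building the ORC search argument, and valueOf rejects the out-of-range month. A minimal standalone reproduction in plain Java (no Hive needed; the class name is made up for illustration):

{noformat}
// Standalone illustration (class name is hypothetical): this is the same
// conversion that ConvertAstToSearchArg.boxLiteral applies to the literal.
public class TimestampLiteralRepro {
    public static void main(String[] args) {
        // Month "15" is out of range, so this throws
        // java.lang.IllegalArgumentException: Timestamp format must be
        // yyyy-mm-dd hh:mm:ss[.fffffffff]
        java.sql.Timestamp.valueOf("2014-15-16 17:18:19.20");
    }
}
{noformat}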

Not sure whether this change is intended, as the issue is not seen with version 1.2.
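
If the 1.2 behavior was the intended one, a likely explanation is that the literal was converted there with SQL cast semantics, where an unparseable string becomes NULL and simply matches no rows. A hypothetical sketch of that tolerant conversion, for illustration only (not actual Hive code):

{noformat}
// Hypothetical helper, not Hive code: SQL-style cast semantics map an invalid
// timestamp string to NULL instead of failing the query at split generation.
static java.sql.Timestamp toTimestampOrNull(String s) {
    try {
        return java.sql.Timestamp.valueOf(s);
    } catch (IllegalArgumentException e) {
        return null; // NULL predicate matches no rows; no vertex failure
    }
}
{noformat}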


