[jira] [Commented] (HIVE-6412) SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys

2014-02-25 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911568#comment-13911568
 ] 

Remus Rusanu commented on HIVE-6412:


I concur; this no longer seems to reproduce on current trunk.

 SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys
 -------------------------------------------------------------------------

 Key: HIVE-6412
 URL: https://issues.apache.org/jira/browse/HIVE-6412
 Project: Hive
  Issue Type: Bug
Reporter: Remus Rusanu
Assignee: Xuefu Zhang
Priority: Critical

 {code}
 Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to org.apache.hadoop.hive.common.type.HiveDecimal
     at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:49)
     at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaHiveDecimalObjectInspector.getPrimitiveWritableObject(JavaHiveDecimalObjectInspector.java:27)
     at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:281)
     at org.apache.hadoop.hive.ql.exec.JoinUtil.computeKeys(JoinUtil.java:143)
     at org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.next(SMBMapJoinOperator.java:809)
     at org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.nextHive(SMBMapJoinOperator.java:771)
     at org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator$MergeQueue.setupContext(SMBMapJoinOperator.java:710)
     at org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.setUpFetchContexts(SMBMapJoinOperator.java:538)
     at org.apache.hadoop.hive.ql.exec.SMBMapJoinOperator.processOp(SMBMapJoinOperator.java:248)
     at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
     at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:92)
     at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:790)
     at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:524)
 {code}
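The failing cast can be illustrated with stand-in classes. These are NOT the real Hive classes, just a minimal sketch of the shape of the bug: a "java" object inspector assumes it is handed the plain value type (HiveDecimal) and casts before wrapping, but the SMB join key path hands it the already-wrapped Writable, so the cast blows up.

```java
import java.math.BigDecimal;

public class CastMismatchSketch {
    // Stand-in for org.apache.hadoop.hive.common.type.HiveDecimal (plain value type).
    static class HiveDecimal {
        final BigDecimal v;
        HiveDecimal(BigDecimal v) { this.v = v; }
    }

    // Stand-in for org.apache.hadoop.hive.serde2.io.HiveDecimalWritable (Writable wrapper).
    static class HiveDecimalWritable {
        final HiveDecimal d;
        HiveDecimalWritable(HiveDecimal d) { this.d = d; }
    }

    // Stand-in for JavaHiveDecimalObjectInspector.getPrimitiveWritableObject:
    // it casts the incoming Object to the plain value type before wrapping it.
    static HiveDecimalWritable getPrimitiveWritableObject(Object o) {
        // Throws ClassCastException when o is already a HiveDecimalWritable.
        return new HiveDecimalWritable((HiveDecimal) o);
    }

    public static void main(String[] args) {
        Object alreadyWritable =
            new HiveDecimalWritable(new HiveDecimal(new BigDecimal("1.0")));
        try {
            getPrimitiveWritableObject(alreadyWritable);
            System.out.println("no exception");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the bug report");
        }
    }
}
```

In other words, the symptom points at a mismatch between the object inspector chosen for the join key and the actual runtime representation of the key, not at the decimal values themselves.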
 Repro:
 {code}
 create table vsmb_bucket_1(key decimal(9,0), value decimal(38,10))
   CLUSTERED BY (key)
   SORTED BY (key) INTO 1 BUCKETS
   STORED AS ORC;
 create table vsmb_bucket_2(key decimal(19,3), value decimal(28,0))
   CLUSTERED BY (key)
   SORTED BY (key) INTO 1 BUCKETS
   STORED AS ORC;

 insert into table vsmb_bucket_1
   select cast(cint as decimal(9,0)) as key,
          cast(cfloat as decimal(38,10)) as value
   from alltypesorc limit 2;
 insert into table vsmb_bucket_2
   select cast(cint as decimal(19,3)) as key,
          cast(cfloat as decimal(28,0)) as value
   from alltypesorc limit 2;

 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.auto.convert.sortmerge.join.noconditionaltask = true;
 set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

 explain
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = b.key;
 select /*+MAPJOIN(a)*/ * from vsmb_bucket_1 a join vsmb_bucket_2 b on a.key = b.key;
 {code}
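Note that the repro joins keys declared with different precision and scale (decimal(9,0) vs decimal(19,3)), so the join must compare values across representations. As a tangential illustration using plain java.math.BigDecimal (the JDK class HiveDecimal is built on), equality is scale-sensitive while numeric comparison is not, which is why join key handling cannot simply rely on equals() across differently-scaled decimals. This is a sketch of the JDK behavior, not Hive's actual key-comparison code:

```java
import java.math.BigDecimal;

public class DecimalKeyCompare {
    public static void main(String[] args) {
        BigDecimal k1 = new BigDecimal("42");     // as a decimal(9,0)-style key
        BigDecimal k2 = new BigDecimal("42.000"); // as a decimal(19,3)-style key

        // equals() compares scale as well as value, so these differ.
        System.out.println(k1.equals(k2));         // prints false

        // compareTo() compares numeric value only.
        System.out.println(k1.compareTo(k2) == 0); // prints true
    }
}
```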



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HIVE-6412) SMB join on Decimal columns causes cast exception in JoinUtil.computeKeys

2014-02-24 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13910779#comment-13910779
 ] 

Xuefu Zhang commented on HIVE-6412:
-----------------------------------

I tried the queries in the latest trunk and wasn't able to reproduce the 
problem. Presumably the problem is fixed. [~rusanu] Could you please verify?
