[ https://issues.apache.org/jira/browse/SPARK-26437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16732701#comment-16732701 ]
zengxl commented on SPARK-26437:
--------------------------------

Thanks [~dongjoon]

> Decimal data becomes bigint on query, unable to query
> -----------------------------------------------------
>
>                 Key: SPARK-26437
>                 URL: https://issues.apache.org/jira/browse/SPARK-26437
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.3, 2.0.2, 2.1.3, 2.2.2, 2.3.1
>            Reporter: zengxl
>            Priority: Major
>             Fix For: 3.0.0
>
> This is my SQL:
>
> create table tmp.tmp_test_6387_1224_spark stored as ORCFile as select 0.00 as a
>
> select a from tmp.tmp_test_6387_1224_spark
>
> The resulting table definition is:
>
> CREATE TABLE `tmp.tmp_test_6387_1224_spark`(
>   `a` decimal(2,2))
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
>
> When I query this table (whether with Hive or Spark SQL, the exception is the same), it throws the following exception:
>
> Caused by: java.io.EOFException: Reading BigInteger past EOF from compressed stream Stream for column 1 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0
>         at org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readBigInteger(SerializationUtils.java:176)
>         at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$DecimalTreeReader.next(TreeReaderFactory.java:1264)
>         at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.next(TreeReaderFactory.java:2004)
>         at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1039)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
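For context on the `decimal(2,2)` column in the DDL above: the literal 0.00 has two significant digits, both after the decimal point, so a SQL engine that infers decimal types from literals will give it precision 2 and scale 2 (no integer digits at all). The sketch below illustrates that inference rule using Python's `decimal` module purely as an analogue; it is not Spark's actual implementation, and the variable names are illustrative only.

```python
# Illustrative sketch (not Spark's code): how a literal like 0.00
# maps to a SQL decimal(precision, scale) type.
from decimal import Decimal

d = Decimal("0.00")
_sign, digits, exponent = d.as_tuple()

scale = -exponent                     # digits after the decimal point -> 2
precision = max(len(digits), scale)   # total significant digits, at least the scale -> 2

print(f"decimal({precision},{scale})")  # -> decimal(2,2)
```

Since a `decimal(2,2)` column leaves no room for integer digits, one way to sidestep the inferred type in the CTAS would be an explicit cast on the literal, e.g. `select cast(0.00 as decimal(10,2)) as a` (a workaround suggestion, not part of the original report).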