[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Blue updated HIVE-7787:
----------------------------
    Resolution: Not a Problem
        Status: Resolved  (was: Patch Available)

I'm closing this per discussion with [~amalakar].

Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
-------------------------------------------------------------------------
                 Key: HIVE-7787
                 URL: https://issues.apache.org/jira/browse/HIVE-7787
             Project: Hive
          Issue Type: Bug
          Components: Database/Schema, Thrift API
    Affects Versions: 0.12.0, 0.13.0, 0.12.1, 0.14.0, 0.13.1
         Environment: Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0
            Reporter: Raymond Lau
            Assignee: Arup Malakar
            Priority: Minor
         Attachments: HIVE-7787.trunk.1.patch

When reading a Parquet file whose original Thrift schema contains a struct with an enum, the read fails with the following error (full stack trace below):

{code}
java.lang.NoSuchFieldError: DECIMAL
{code}

Example Thrift schema:

{code}
enum MyEnumType {
  EnumOne,
  EnumTwo,
  EnumThree
}

struct MyStruct {
  1: optional MyEnumType myEnumType;
  2: optional string field2;
  3: optional string field3;
}

struct outerStruct {
  1: optional list<MyStruct> myStructs
}
{code}

Hive table:

{code}
CREATE EXTERNAL TABLE mytable (
  mystructs array<struct<myenumtype: string, field2: string, field3: string>>
)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
  OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat';
{code}

Error stack trace (Hive 0.12):

{code}
Caused by: java.lang.NoSuchFieldError: DECIMAL
	at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
	at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
	at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
	at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
	at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
	at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
	at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
	... 16 more
{code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gunther Hagleitner updated HIVE-7787:
-------------------------------------
    Fix Version/s:     (was: 0.14.0)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arup Malakar updated HIVE-7787:
-------------------------------
    Attachment: HIVE-7787.trunk.1.patch

Looks like {{ArrayWritableGroupConverter}} enforces that the struct should have either 1 or 2 elements. I am not sure of the rationale behind this, since a struct may have more than two elements. I did a quick patch to omit the check and handle any number of fields. I have tested it, and it seems to work for the schema in the description. Given there were explicit checks for the field count to be either 1 or 2, I am not sure whether this is the right approach. Please take a look.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
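The patch file itself is not reproduced in this thread. As a hypothetical sketch of the change described in the comment above (the class and method names here are illustrative, not taken from the actual HIVE-7787 patch), the fix amounts to replacing a hard "exactly 1 or 2 fields" check with acceptance of any positive field count:

```java
// Illustrative sketch only: models the field-count check discussed in the
// comment above. It does not reproduce the real ArrayWritableGroupConverter
// code or the actual HIVE-7787 patch.
public class FieldCountCheck {

    // Old behavior as described: structs with anything other than
    // 1 or 2 fields are rejected.
    static boolean oldCheck(int fieldCount) {
        return fieldCount == 1 || fieldCount == 2;
    }

    // Patched behavior as described: any struct with at least one
    // field is accepted.
    static boolean patchedCheck(int fieldCount) {
        return fieldCount >= 1;
    }

    public static void main(String[] args) {
        // MyStruct from the issue description has 3 fields
        // (an enum and two strings).
        System.out.println(oldCheck(3));     // prints false: old check rejects it
        System.out.println(patchedCheck(3)); // prints true: relaxed check accepts it
    }
}
```

This matches the reporter's schema: {{MyStruct}} carries three fields, so the original two-field ceiling fails before the enum is even converted.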
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arup Malakar updated HIVE-7787:
-------------------------------
    Fix Version/s: 0.14.0
         Assignee: Arup Malakar
           Status: Patch Available  (was: Open)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Summary: Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError  (was: Reading Parquet file with enum in Thrift Encoding)

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Description: (revised to add the example Thrift schema and wrap the error and stack trace in {code} blocks; the full text duplicates the issue description above)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Description: (revised to add {{outerStruct}} to the example Thrift schema; the full text duplicates the issue description above)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Environment: Hive 0.12 CDH 5.1.0, Hadoop 0.23  (was: Hive 0.12 CDH 5.1.0)

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Description:

When reading a Parquet file whose original Thrift schema contains a struct with an enum, the following error is thrown (full stack trace below):

{code}
java.lang.NoSuchFieldError: DECIMAL
{code}

Example Thrift schema:

{code}
enum MyEnumType {
  EnumOne,
  EnumTwo,
  EnumThree
}

struct MyStruct {
  1: optional MyEnumType myEnumType;
  2: optional string field2;
  3: optional string field3;
}

struct outerStruct {
  1: optional list<MyStruct> myStructs
}
{code}

Error stack trace:

{code}
Java stack trace for Hive 0.12:
Caused by: java.lang.NoSuchFieldError: DECIMAL
	at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31)
	at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.<init>(ArrayWritableGroupConverter.java:45)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:47)
	at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:64)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.<init>(DataWritableGroupConverter.java:40)
	at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.<init>(DataWritableRecordConverter.java:32)
	at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128)
	at parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142)
	at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118)
	at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:92)
	at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51)
	at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
	... 16 more
{code}
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Affects Version/s:     (was: 0.14.0)
                           (was: 0.13.0)

-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Affects Version/s:     0.12.1
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Environment:     Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0  (was: Hive 0.12 CDH 5.1.0, Hadoop 0.23)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Description: added the Hive table definition; the Thrift schema and stack trace are unchanged from the earlier update.

Hive table:

{code}
CREATE EXTERNAL TABLE mytable (
  mystructs array<struct<myenumtype: string, field2: string, field3: string>>
)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat'
  OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat';
{code}
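The table above is declared against the deprecated {{parquet.hive.*}} classes bundled with parquet-hive. As a sketch only (an assumption, not a verified fix for this error), the same table could instead be declared against Hive's built-in Parquet classes, which ship with Hive 0.13 and later and whose package appears in the stack trace:

{code}
-- Sketch: uses Hive's built-in Parquet support (assumes Hive >= 0.13);
-- not confirmed to avoid the NoSuchFieldError reported in this issue.
CREATE EXTERNAL TABLE mytable (
  mystructs array<struct<myenumtype: string, field2: string, field3: string>>
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat';
{code}

On Hive 0.13+ the shorthand {{STORED AS PARQUET}} expands to these same built-in classes, so it may serve as an equivalent, terser declaration.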
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raymond Lau updated HIVE-7787:
------------------------------
    Affects Version/s:     0.14.0
                           0.13.0
                           0.13.1