[
https://issues.apache.org/jira/browse/HIVE-26955?focusedWorklogId=839914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839914
]
ASF GitHub Bot logged work on HIVE-26955:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 18/Jan/23 11:03
Start Date: 18/Jan/23 11:03
Worklog Time Spent: 10m
Work Description: kasakrisz commented on code in PR #3964:
URL: https://github.com/apache/hive/pull/3964#discussion_r1073368036
##########
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java:
##########
@@ -676,6 +676,39 @@ private int getScale(PrimitiveType type) {
return logicalType.getScale();
}
};
+      case serdeConstants.VARCHAR_TYPE_NAME:
+        return new BinaryConverter<HiveVarcharWritable>(type, parent, index, hiveTypeInfo) {
+          @Override
+          protected HiveVarcharWritable convert(Binary binary) {
+            DecimalLogicalTypeAnnotation logicalType = (DecimalLogicalTypeAnnotation) type.getLogicalTypeAnnotation();
+            HiveDecimalWritable decimalWritable = HiveDecimalUtils.enforcePrecisionScale(
+                new HiveDecimalWritable(binary.getBytes(), logicalType.getScale()),
+                new DecimalTypeInfo(logicalType.getPrecision(), logicalType.getScale()));
Review Comment:
These 4 lines are the same in all the new cases. Could you please extract them?
Example:
```
abstract class BinaryConverterToCharacterType<T extends Writable> extends BinaryConverter<T> {
  protected byte[] convertToBytes(Binary binary) {
    DecimalLogicalTypeAnnotation logicalType = (DecimalLogicalTypeAnnotation) type.getLogicalTypeAnnotation();
    return HiveDecimalUtils.enforcePrecisionScale(
        new HiveDecimalWritable(binary.getBytes(), logicalType.getScale()),
        new DecimalTypeInfo(logicalType.getPrecision(), logicalType.getScale())).toString().getBytes();
  }

  protected abstract T convert(Binary binary);
}
```
and extend it, calling `convertToBytes` from each `convert` implementation.
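For the varchar case shown in the diff, the subclass could be used like this (a hypothetical sketch only, not the actual patch; it assumes the `BinaryConverterToCharacterType` constructor mirrors `BinaryConverter`'s, and that `hiveTypeInfo` is in scope and is a `VarcharTypeInfo`):
```
case serdeConstants.VARCHAR_TYPE_NAME:
  return new BinaryConverterToCharacterType<HiveVarcharWritable>(type, parent, index, hiveTypeInfo) {
    @Override
    protected HiveVarcharWritable convert(Binary binary) {
      // Reuse the shared decimal-to-bytes logic from the base class, then
      // wrap the result in a HiveVarchar bounded by the declared length.
      String value = new String(convertToBytes(binary), StandardCharsets.UTF_8);
      int maxLength = ((VarcharTypeInfo) hiveTypeInfo).getLength();
      return new HiveVarcharWritable(new HiveVarchar(value, maxLength));
    }
  };
```
The char and string cases would follow the same pattern, differing only in how the shared bytes are wrapped into the target Writable.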
Issue Time Tracking
-------------------
Worklog Id: (was: 839914)
Time Spent: 0.5h (was: 20m)
> Select query fails when decimal column data type is changed to
> string/char/varchar in Parquet
> ---------------------------------------------------------------------------------------------
>
> Key: HIVE-26955
> URL: https://issues.apache.org/jira/browse/HIVE-26955
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Reporter: Taraka Rama Rao Lethavadla
> Assignee: Sourabh Badhya
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Steps to reproduce
> {noformat}
> create table test_parquet (id decimal) stored as parquet;
> insert into test_parquet values(238);
> alter table test_parquet change id id string;
> select * from test_parquet;
> Error: java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 1 in block 0 in file hdfs:/namenode:8020/warehouse/tablespace/managed/hive/test_parquet/delta_0000001_0000001_0000/000000_0 (state=,code=0)
>     at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:624)
>     at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:531)
>     at org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:194)
>     ... 55 more
> Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 1 in block 0 in file file:/home/centos/Apache-Hive-Tarak/itests/qtest/target/localfs/warehouse/test_parquet/000000_0
>     at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:255)
>     at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
>     at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:87)
>     at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:89)
>     at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:771)
>     at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:335)
>     at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:562)
>     ... 57 more
> Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo cannot be cast to org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo
>     at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$5.convert(ETypeConverter.java:669)
>     at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$8$5.convert(ETypeConverter.java:664)
>     at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$BinaryConverter.addBinary(ETypeConverter.java:977)
>     at org.apache.parquet.column.impl.ColumnReaderBase$2$6.writeValue(ColumnReaderBase.java:360)
>     at org.apache.parquet.column.impl.ColumnReaderBase.writeCurrentValueToConverter(ColumnReaderBase.java:410)
>     at org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:30)
>     at org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
>     at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:230)
>     ... 63 more{noformat}
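> The ClassCastException points at the likely failure mode: the decimal-to-string conversion path in ETypeConverter derives precision/scale from the Hive-side type info, which after the ALTER TABLE is a plain string/char/varchar PrimitiveTypeInfo. A simplified sketch of the failing pattern (illustrative only, not the exact Hive source):
> {noformat}
> // hiveTypeInfo now describes a string/char/varchar column, so this cast throws:
> DecimalTypeInfo decimalInfo = (DecimalTypeInfo) hiveTypeInfo;  // ClassCastException
> {noformat}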
> However, the same scenario works as expected with an ORC table:
> {noformat}
> create table test_orc (id decimal) stored as orc;
> insert into test_orc values(238);
> alter table test_orc change id id string;
> select * from test_orc;
> +--------------+
> | test_orc.id |
> +--------------+
> | 238 |
> +--------------+{noformat}
> The same also works with a text table:
> {noformat}
> create table test_text (id decimal) stored as textfile;
> insert into test_text values(238);
> alter table test_text change id id string;
> select * from test_text;
> +---------------+
> | test_text.id |
> +---------------+
> | 238 |
> +---------------+{noformat}
> A similar exception is thrown when the column is altered to the varchar or char
> data type.