[
https://issues.apache.org/jira/browse/DRILL-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16845573#comment-16845573
]
Arina Ielchiieva commented on DRILL-7275:
-----------------------------------------
[~khfaraaz] it would be nice if you could include the dataset in the Jira. This is helpful
both when reproducing the issue and when verifying the fix.
> CTAS + CTE query fails with IllegalStateException: Read batch count [%d] should be greater than zero [0]
> --------------------------------------------------------------------------------------------------------
>
> Key: DRILL-7275
> URL: https://issues.apache.org/jira/browse/DRILL-7275
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Parquet
> Affects Versions: 1.15.0
> Reporter: Khurram Faraaz
> Assignee: salim achouche
> Priority: Major
>
> A CTAS + CTE query fails with IllegalStateException: Read batch count [%d] should be greater than zero [0].
> The precondition check fails on line 47 of VarLenFixedEntryReader.java:
> {noformat}
> 44 final int expectedDataLen = columnPrecInfo.precision;
> 45 final int entrySz = 4 + columnPrecInfo.precision;
> 46 final int readBatch = getFixedLengthMaxRecordsToRead(valuesToRead, entrySz);
> 47 Preconditions.checkState(readBatch > 0, "Read batch count [%d] should be greater than zero", readBatch);
> {noformat}
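> For the check on line 47 to fail with [0], getFixedLengthMaxRecordsToRead must have returned 0 for this column. One plausible (purely illustrative, not verified against the Drill sources) way an integer batch count can come out as zero is when the per-entry size, 4 + precision, exceeds the space available for the bulk read, so the integer division truncates to 0. The names in the sketch below are hypothetical and are not Drill's actual fields:
> {noformat}
> // Hypothetical arithmetic sketch only; variable names are illustrative, not Drill internals.
> public class ReadBatchArithmetic {
>   public static void main(String[] args) {
>     int availableBytes = 4096;            // space assumed available for the bulk read
>     int columnPrecision = 8192;           // assumed inferred precision for the column
>     int entrySz = 4 + columnPrecision;    // 4-byte length prefix + value bytes (line 45)
>     int readBatch = availableBytes / entrySz; // integer division: 4096 / 8196 == 0
>     System.out.println(readBatch);        // prints 0, which would trip the line-47 precondition
>   }
> }
> {noformat}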
> Stack trace from drillbit.log, which also includes the failing query:
> {noformat}
> 2019-05-13 14:40:14,090 [23268c40-ef3a-6349-5901-5762f6888971:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id 23268c40-ef3a-6349-5901-5762f6888971 issued by scoop_stc: CREATE TABLE TEST_TEMPLATE_SCHEMA_creid.tbl_c_EquityProxyDailyReturn AS
> WITH
> che AS (
> SELECT * FROM TEST_TEMPLATE_SCHEMA_creid.tbl_c_CompositeHierarchyEntry_TimeVarying
> WHERE CompositeHierarchyName = 'AxiomaRegion/AxiomaSector/VectorUniverse'
> AND state = 'DupesRemoved'
> AND CompositeLevel = 'AxiomaRegion_1/AxiomaSector_1/VectorUniverse_0'
> ),
> ef AS (SELECT * FROM TEST_TEMPLATE_SCHEMA_creid.tbl_c_EquityDailyReturn_FXAdjusted WHERE Status = 'PresentInRawData'),
> d AS (SELECT * FROM TEST_TEMPLATE_SCHEMA_creid.tbl_r_BusinessDate WHERE IsWeekday),
> x AS
> (
> SELECT
> che.CompositeHierarchyName,
> che.State,
> che.CompositeNodeName,
> d.`Date` AS RecordDate,
> COUNT(che.CompositeNodeName) AS countDistinctConstituents,
> COUNT(ef.VectorListingId) AS countDataPoints,
> AVG(ef.DailyReturn) AS AvgReturn,
> AVG(ef.DailyReturnUSD) AS AvgReturnUSD,
> AVG(ef.NotionalReturnUSD) AS AvgNotionalReturnUSD
> FROM d
> INNER JOIN che ON d.`Date` BETWEEN che.CompositeUltimateChildStartDate AND che.CompositeUltimateChildEndDate
> LEFT OUTER JOIN ef ON d.`Date` = ef.RecordDate AND 'VectorListingId_' || CAST(ef.VectorListingId AS VARCHAR(100)) = che.UltimateChild
> GROUP BY che.CompositeHierarchyName, che.State, che.CompositeNodeName, d.`Date`, d.IsWeekday, d.IsHoliday
> )
> SELECT * FROM x
> 2019-05-13 14:40:16,971 [23268c40-ef3a-6349-5901-5762f6888971:foreman] INFO o.a.d.e.p.s.h.CreateTableHandler - Creating persistent table [tbl_c_EquityProxyDailyReturn].
> ...
> ...
> 2019-05-13 14:40:20,036 [23268c40-ef3a-6349-5901-5762f6888971:frag:6:10] INFO o.a.d.exec.physical.impl.ScanBatch - User Error Occurred: Error in parquet record reader.
> Message:
> Hadoop path: /DEV/tbl_c_EquityDailyReturn_FXAdjusted/1_32_32.parquet
> Total records read: 0
> Row group index: 0
> Records in row group: 3243
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message root {
> optional int64 VectorListingId;
> optional int32 RecordDate (DATE);
> required binary Status (UTF8);
> required binary CurrencyISO (UTF8);
> optional double DailyReturn;
> optional double DailyReturnUSD;
> optional double NotionalReturnUSD;
> }
> , metadata: {drill-writer.version=2, drill.version=1.15.0.0-mapr}}, blocks: [BlockMetaData{3243, 204762 [ColumnMetaData{UNCOMPRESSED [VectorListingId] optional int64 VectorListingId [RLE, BIT_PACKED, PLAIN], 4}, ColumnMetaData{UNCOMPRESSED [RecordDate] optional int32 RecordDate (DATE) [RLE, BIT_PACKED, PLAIN], 26021}, ColumnMetaData{UNCOMPRESSED [Status] required binary Status (UTF8) [BIT_PACKED, PLAIN], 39050}, ColumnMetaData{UNCOMPRESSED [CurrencyISO] required binary CurrencyISO (UTF8) [BIT_PACKED, PLAIN], 103968}, ColumnMetaData{UNCOMPRESSED [DailyReturn] optional double DailyReturn [RLE, BIT_PACKED, PLAIN], 126715}, ColumnMetaData{UNCOMPRESSED [DailyReturnUSD] optional double DailyReturnUSD [RLE, BIT_PACKED, PLAIN], 152732}, ColumnMetaData{UNCOMPRESSED [NotionalReturnUSD] optional double NotionalReturnUSD [RLE, BIT_PACKED, PLAIN], 178749}]}]} (Error in parquet record reader.
> ...
> ...
> Hadoop path: /DEV/tbl_c_EquityDailyReturn_FXAdjusted/1_32_32.parquet
> Total records read: 0
> Row group index: 0
> Records in row group: 3243
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message root {
> optional int64 VectorListingId;
> optional int32 RecordDate (DATE);
> required binary Status (UTF8);
> required binary CurrencyISO (UTF8);
> optional double DailyReturn;
> optional double DailyReturnUSD;
> optional double NotionalReturnUSD;
> }
> , metadata: {drill-writer.version=2, drill.version=1.15.0.0-mapr}}, blocks: [BlockMetaData{3243, 204762 [ColumnMetaData{UNCOMPRESSED [VectorListingId] optional int64 VectorListingId [RLE, BIT_PACKED, PLAIN], 4}, ColumnMetaData{UNCOMPRESSED [RecordDate] optional int32 RecordDate (DATE) [RLE, BIT_PACKED, PLAIN], 26021}, ColumnMetaData{UNCOMPRESSED [Status] required binary Status (UTF8) [BIT_PACKED, PLAIN], 39050}, ColumnMetaData{UNCOMPRESSED [CurrencyISO] required binary CurrencyISO (UTF8) [BIT_PACKED, PLAIN], 103968}, ColumnMetaData{UNCOMPRESSED [DailyReturn] optional double DailyReturn [RLE, BIT_PACKED, PLAIN], 126715}, ColumnMetaData{UNCOMPRESSED [DailyReturnUSD] optional double DailyReturnUSD [RLE, BIT_PACKED, PLAIN], 152732}, ColumnMetaData{UNCOMPRESSED [NotionalReturnUSD] optional double NotionalReturnUSD [RLE, BIT_PACKED, PLAIN], 178749}]}]})
> org.apache.drill.common.exceptions.UserException: INTERNAL_ERROR ERROR: Error in parquet record reader.
> Message:
> Hadoop path: /DEV/tbl_c_EquityDailyReturn_FXAdjusted/1_32_32.parquet
> Total records read: 0
> Row group index: 0
> Records in row group: 3243
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message root {
> optional int64 VectorListingId;
> optional int32 RecordDate (DATE);
> required binary Status (UTF8);
> required binary CurrencyISO (UTF8);
> optional double DailyReturn;
> optional double DailyReturnUSD;
> optional double NotionalReturnUSD;
> }
> ...
> ...
> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Error in parquet record reader.
> Message:
> Hadoop path: /DEV/tbl_c_EquityDailyReturn_FXAdjusted/1_32_32.parquet
> Total records read: 0
> Row group index: 0
> Records in row group: 3243
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message root {
> optional int64 VectorListingId;
> optional int32 RecordDate (DATE);
> required binary Status (UTF8);
> required binary CurrencyISO (UTF8);
> optional double DailyReturn;
> optional double DailyReturnUSD;
> optional double NotionalReturnUSD;
> }
> , metadata: {drill-writer.version=2, drill.version=1.15.0.0-mapr}}, blocks: [BlockMetaData{3243, 204762 [ColumnMetaData{UNCOMPRESSED [VectorListingId] optional int64 VectorListingId [RLE, BIT_PACKED, PLAIN], 4}, ColumnMetaData{UNCOMPRESSED [RecordDate] optional int32 RecordDate (DATE) [RLE, BIT_PACKED, PLAIN], 26021}, ColumnMetaData{UNCOMPRESSED [Status] required binary Status (UTF8) [BIT_PACKED, PLAIN], 39050}, ColumnMetaData{UNCOMPRESSED [CurrencyISO] required binary CurrencyISO (UTF8) [BIT_PACKED, PLAIN], 103968}, ColumnMetaData{UNCOMPRESSED [DailyReturn] optional double DailyReturn [RLE, BIT_PACKED, PLAIN], 126715}, ColumnMetaData{UNCOMPRESSED [DailyReturnUSD] optional double DailyReturnUSD [RLE, BIT_PACKED, PLAIN], 152732}, ColumnMetaData{UNCOMPRESSED [NotionalReturnUSD] optional double NotionalReturnUSD [RLE, BIT_PACKED, PLAIN], 178749}]}]}
> at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleException(ParquetRecordReader.java:271) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:290) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.physical.impl.ScanBatch.internalNext(ScanBatch.java:223) [drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:271) [drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> ... 44 common frames omitted
> Caused by: java.lang.IllegalStateException: Read batch count [%d] should be greater than zero [0]
> at org.apache.drill.shaded.guava.com.google.common.base.Preconditions.checkState(Preconditions.java:509) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenFixedEntryReader.getEntry(VarLenFixedEntryReader.java:47) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenBulkPageReader.getFixedEntry(VarLenBulkPageReader.java:169) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenBulkPageReader.getEntry(VarLenBulkPageReader.java:132) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenColumnBulkInput.next(VarLenColumnBulkInput.java:154) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenColumnBulkInput.next(VarLenColumnBulkInput.java:38) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.vector.VarCharVector$Mutator.setSafe(VarCharVector.java:624) ~[vector-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.vector.VarCharVector$Mutator.setSafe(VarCharVector.java:608) ~[vector-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLengthColumnReaders$VarCharColumn.setSafe(VarLengthColumnReaders.java:168) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLengthValuesColumn.readRecordsInBulk(VarLengthValuesColumn.java:98) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenBinaryReader.readRecordsInBulk(VarLenBinaryReader.java:114) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.VarLenBinaryReader.readFields(VarLenBinaryReader.java:92) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.BatchReader$VariableWidthReader.readRecords(BatchReader.java:156) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.BatchReader.readBatch(BatchReader.java:43) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:288) ~[drill-java-exec-1.15.0.0-mapr.jar:1.15.0.0-mapr]
> ... 46 common frames omitted
> {noformat}
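> A note on the message text itself: Guava's Preconditions only substitutes %s placeholders, so the %d in the template on line 47 stays literal and the unmatched argument (the actual readBatch value, 0) is appended in brackets at the end, which is why the log reads "Read batch count [%d] should be greater than zero [0]". A minimal standalone sketch of that behaviour (plain Guava, outside Drill; the class name is made up for illustration):
> {noformat}
> import com.google.common.base.Preconditions;
>
> public class PreconditionMessageDemo {
>   public static void main(String[] args) {
>     int readBatch = 0;
>     // Guava substitutes only %s placeholders; %d is kept as-is and the
>     // leftover argument is appended in square brackets, reproducing the
>     // exact wording seen in drillbit.log.
>     Preconditions.checkState(readBatch > 0,
>         "Read batch count [%d] should be greater than zero", readBatch);
>   }
> }
> {noformat}
> Switching the template to %s (or formatting the message before the call) would make the value appear inline instead of trailing the message.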
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)