[
https://issues.apache.org/jira/browse/HIVE-17423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971872#comment-16971872
]
Tak-Lon (Stephen) Wu edited comment on HIVE-17423 at 11/12/19 1:19 AM:
-----------------------------------------------------------------------
Is LLAP Parquet caching working with current trunk? With Hive 3.1.2 LLAP and
Parquet, we're facing an issue when running LLAP on a TPC-DS query (e.g.
query12) against a partitioned table. Please see the error below; it happens
on both HDFS and S3.
{noformat}
, errorMessage=Cannot recover from this error:java.lang.AssertionError: Lower bound for offset 15352480 is [6963872, 38018333)
	at org.apache.hadoop.hive.llap.LlapCacheAwareFs$CacheAwareInputStream.getAndValidateMissingChunks(LlapCacheAwareFs.java:384)
	at org.apache.hadoop.hive.llap.LlapCacheAwareFs$CacheAwareInputStream.read(LlapCacheAwareFs.java:259)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:102)
	at org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
	at org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
	at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
	at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:427)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:405)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:357)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:93)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
	at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
	at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
	at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:419)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
{noformat}
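For context, the assertion message suggests the cache's bookkeeping of missing byte ranges expects a read to start exactly at a tracked range's lower bound, and fails when the offset lands strictly inside the range. A minimal, hypothetical sketch of that kind of invariant check (this is NOT Hive's actual LlapCacheAwareFs code; class and method names are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of a "missing chunk" range check that fails the way
// the stack trace above does; not Hive's actual implementation.
public class MissingChunkIndex {
    // Missing ranges keyed by start offset (inclusive); value is end (exclusive).
    private final TreeMap<Long, Long> missing = new TreeMap<>();

    public void addMissingChunk(long start, long end) {
        missing.put(start, end);
    }

    // A read is expected to begin exactly at a tracked range's lower bound;
    // an offset strictly inside a range indicates inconsistent bookkeeping.
    public void validateReadStart(long offset) {
        Map.Entry<Long, Long> range = missing.floorEntry(offset);
        if (range == null) {
            throw new AssertionError("No range tracked at or below offset " + offset);
        }
        if (range.getKey() != offset) {
            // Mirrors the reported message: the offset falls inside
            // [start, end) instead of at its lower bound.
            throw new AssertionError("Lower bound for offset " + offset
                + " is [" + range.getKey() + ", " + range.getValue() + ")");
        }
    }
}
```

Under that reading, the reported failure (offset 15352480 inside [6963872, 38018333) but not equal to 6963872) points at the cache and the Parquet reader disagreeing about chunk boundaries, not at a corrupt file.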
> LLAP Parquet caching - support file ID in splits
> ------------------------------------------------
>
> Key: HIVE-17423
> URL: https://issues.apache.org/jira/browse/HIVE-17423
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Priority: Major
>
> To get LLAP cache data, one needs a file ID, which is either an HDFS inode ID
> or a composite of path, modification time, and size. These can be embedded
> into splits for ORC because, in particular for the former, it's possible to
> get the IDs as part of the normal file enumeration that split generation
> performs anyway.
> If they are missing, the IDs have to be obtained for every file on the
> fragment side.
> We should explore adding file IDs to Parquet splits when the cache is enabled.
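The composite file ID described above can be modeled as the triple (path, modification time, size). A rough sketch, assuming that shape (the class and field names below are hypothetical, not Hive's actual implementation):

```java
import java.util.Objects;

// Illustrative sketch only (names are hypothetical): a composite file ID
// built from path, modification time, and size, usable as a cache key when
// no HDFS inode ID is available.
public final class CompositeFileId {
    private final String path;
    private final long modTime;
    private final long length;

    public CompositeFileId(String path, long modTime, long length) {
        this.path = path;
        this.modTime = modTime;
        this.length = length;
    }

    // Two IDs are equal only if all three components match, so rewriting a
    // file (new mtime and/or size) naturally invalidates stale cache entries.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompositeFileId)) return false;
        CompositeFileId that = (CompositeFileId) o;
        return modTime == that.modTime && length == that.length
            && path.equals(that.path);
    }

    @Override
    public int hashCode() {
        return Objects.hash(path, modTime, length);
    }
}
```

Embedding such an ID in the split at generation time would spare each fragment a per-file metadata lookup, which is the cost the description calls out.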
--
This message was sent by Atlassian Jira
(v8.3.4#803005)