Kwangwon (Trey) Yi created HIVE-27647:
-----------------------------------------

             Summary: NullPointerException from LowLevelCacheImpl
                 Key: HIVE-27647
                 URL: https://issues.apache.org/jira/browse/HIVE-27647
             Project: Hive
          Issue Type: Bug
          Components: Tez
    Affects Versions: 3.1.3
         Environment: Hive: 3.1.3

Tez: 0.9.2
            Reporter: Kwangwon (Trey) Yi


Hi all,

I ran a Hive query on the Tez engine and got the NullPointerException (NPE) below.

The NPE is thrown from the LLAP `LowLevelCacheImpl` class when `putFileData` is called.


What are some possible causes of this error, and what would you suggest to mitigate it?
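For context, here is a minimal, hypothetical sketch of the failure pattern I suspect. This is not Hive's actual API — `FileDataCache` and its members are invented for illustration — but it shows how a null file key reaching a `putFileData`-style call on a map that rejects nulls would produce exactly this kind of NPE:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, NOT Hive's real LowLevelCacheImpl: a cache keyed
// by a per-file identifier. ConcurrentHashMap rejects null keys with a
// NullPointerException, so a put with an unresolved (null) file key fails
// in the same way the putFileData frame in the trace does.
class FileDataCache {
    private final ConcurrentHashMap<Object, byte[]> cache = new ConcurrentHashMap<>();

    void putFileData(Object fileKey, byte[] data) {
        // Throws NullPointerException if fileKey (or data) is null.
        cache.put(fileKey, data);
    }
}
```

If something like this is happening, the question becomes why the file key (or some per-file metadata) is null by the time the LLAP cache put runs.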

Any advice would be much appreciated.

Thank you.


```

2023-08-24 10:22:31,122 [INFO] [Dispatcher thread {Central}] |HistoryEventHandler.criticalEvents|: [HISTORY][DAG:dag_1691565142260_0112_1][Event:TASK_ATTEMPT_FINISHED]: vertexName=Map 4, taskAttemptId=attempt_1691565142260_0112_1_00_000010_2, creationTime=1691962230725, allocationTime=1691962230727, startTime=1691962230728, finishTime=1691962230991, timeTaken=263, status=FAILED, taskFailureType=NON_FATAL, errorEnum=FRAMEWORK_ERROR, diagnostics=Error: Error while running task ( failure ) : attempt_1691565142260_0112_1_00_000010_2:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:303)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:254)
	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
	at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:272)
	... 15 more
Caused by: java.io.IOException: java.lang.NullPointerException
	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
	at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:376)
	at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:82)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:119)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:59)
	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
	at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
	at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
	... 17 more
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.hive.llap.cache.LowLevelCacheImpl.putFileData(LowLevelCacheImpl.java:300)
	at org.apache.hadoop.hive.llap.io.api.impl.LlapIoImpl$GenericDataCache.putFileData(LlapIoImpl.java:303)
	at org.apache.hadoop.hive.llap.LlapCacheAwareFs$CacheAwareInputStream.read(LlapCacheAwareFs.java:324)
	at org.apache.commons.io.IOUtils.read(IOUtils.java:1542)
	at org.apache.commons.io.IOUtils.readFully(IOUtils.java:1658)
	at org.apache.hadoop.util.ByteBufferIOUtils.readFullyHeapBuffer(ByteBufferIOUtils.java:89)
	at org.apache.hadoop.util.ByteBufferIOUtils.readFully(ByteBufferIOUtils.java:53)
	at org.apache.hadoop.fs.DefaultMultiByteBufferReader.readFullyIntoBuffers(DefaultMultiByteBufferReader.java:36)
	at org.apache.hadoop.fs.FSDataInputStream.readFullyIntoBuffers(FSDataInputStream.java:264)
	at org.apache.parquet.hadoop.util.H1SeekableInputStream.readFullyIntoBuffers(H1SeekableInputStream.java:64)
	at org.apache.parquet.hadoop.ParquetFileReader$ConsecutivePartList.readAll(ParquetFileReader.java:1741)
	at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:952)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:428)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:406)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:358)
	at org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:93)
	at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:371)
	... 23 more

```



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
