[ 
https://issues.apache.org/jira/browse/CARBONDATA-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Babulal updated CARBONDATA-2991:
--------------------------------
    Description: 
During query execution, a NegativeArraySizeException is sometimes thrown in some 
tasks, and sometimes the executor is lost (JVM crash).

 

java.lang.NegativeArraySizeException at 
org.apache.carbondata.core.datastore.chunk.store.impl.unsafe.UnsafeVariableLengthDimesionDataChunkStore.getRow(UnsafeVariableLengthDimesionDataChunkStore.java:157)
 at 
org.apache.carbondata.core.datastore.chunk.impl.AbstractDimensionDataChunk.getChunkData(AbstractDimensionDataChunk.java:46)
 at 
org.apache.carbondata.core.scan.result.AbstractScannedResult.getNoDictionaryKeyArray(AbstractScannedResult.java:470)
 at 
org.apache.carbondata.core.scan.result.impl.NonFilterQueryScannedResult.getNoDictionaryKeyArray(NonFilterQueryScannedResult.java:102)
 at 
org.apache.carbondata.core.scan.collector.impl.DictionaryBasedResultCollector.collectData(DictionaryBasedResultCollector.java:101)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:51)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
 at 
org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)

Issue Analysis :- 

Possible Root Cause :- An existing memory block is freed while it is still 
in use. This happens because duplicate task ids are generated. Sometimes the 
freed memory addresses are assigned to another task, which re-initializes the 
memory block to 0 and causes the NegativeArraySizeException; at other times the 
freed memory is not reused by any task of the executor process, but the running 
task still tries to access it, and because that address is no longer part of 
the process, the JVM crashes.
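The failure mode above can be sketched with a toy model (hypothetical class and method names, not CarbonData's actual UnsafeMemoryManager): a manager keyed by task id frees every block under that id when a task finishes, so if two tasks share a duplicate id, the surviving task's block is freed and zeroed underneath it, and a row length computed from the zeroed block goes negative:

```java
import java.util.*;

// Hypothetical sketch of the duplicate-task-id bug; long[] stands in for an
// off-heap block and zero-filling stands in for freeing the memory.
public class DuplicateTaskIdSketch {

  // taskId -> blocks owned by that task
  static final Map<Long, List<long[]>> blocksByTask = new HashMap<>();

  static long[] allocate(long taskId, int words) {
    long[] block = new long[words];
    blocksByTask.computeIfAbsent(taskId, k -> new ArrayList<>()).add(block);
    return block;
  }

  // Called when a task finishes: frees *every* block under that id, including
  // blocks belonging to another task that was given the same duplicate id.
  static void freeAllForTask(long taskId) {
    List<long[]> blocks = blocksByTask.remove(taskId);
    if (blocks != null) {
      for (long[] b : blocks) Arrays.fill(b, 0L); // stand-in for freeing/zeroing
    }
  }

  public static void main(String[] args) {
    long duplicateId = 42L;            // both tasks were assigned id 42
    long[] taskA = allocate(duplicateId, 4);
    long[] taskB = allocate(duplicateId, 4);
    taskB[1] = 100L;                   // task B stores a row offset of 100

    freeAllForTask(duplicateId);       // task A completes; B's block is freed too

    // Task B now computes a row length from its zeroed block: the previous
    // offset (100) was cached before the free, the next offset reads back 0,
    // so length = 0 - 100 = -100 and new byte[-100] throws
    // NegativeArraySizeException, matching the stack trace above.
    int previousOffset = 100;
    int currentOffset = (int) taskB[1]; // reads 0 from the freed block
    int length = currentOffset - previousOffset;
    try {
      byte[] row = new byte[length];
    } catch (NegativeArraySizeException e) {
      System.out.println("NegativeArraySizeException, length = " + length);
    }
  }
}
```

When the freed address is instead handed back to the OS rather than reused inside the process, the equivalent off-heap read is an access to an unmapped address, which is the JVM-crash variant of the same root cause.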

  was:
During query execution, a NegativeArraySizeException is sometimes thrown in some 
tasks, and sometimes the executor is lost (JVM crash).

 

java.lang.NegativeArraySizeException at 
org.apache.carbondata.core.datastore.chunk.store.impl.unsafe.UnsafeVariableLengthDimesionDataChunkStore.getRow(UnsafeVariableLengthDimesionDataChunkStore.java:157)
 at 
org.apache.carbondata.core.datastore.chunk.impl.AbstractDimensionDataChunk.getChunkData(AbstractDimensionDataChunk.java:46)
 at 
org.apache.carbondata.core.scan.result.AbstractScannedResult.getNoDictionaryKeyArray(AbstractScannedResult.java:470)
 at 
org.apache.carbondata.core.scan.result.impl.NonFilterQueryScannedResult.getNoDictionaryKeyArray(NonFilterQueryScannedResult.java:102)
 at 
org.apache.carbondata.core.scan.collector.impl.DictionaryBasedResultCollector.collectData(DictionaryBasedResultCollector.java:101)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:51)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
 at 
org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)

Issue Analysis :- 

Possible Root Cause :- An existing memory block is freed while it is still 
in use. This happens because duplicate task ids are generated. Sometimes the 
freed memory addresses are assigned to another task, which re-initializes the 
memory block to 0 and causes the NegativeArraySizeException; at other times the 
freed memory is not reused by any task of the executor process, but the running 
task still tries to access it, and because that address is no longer part of 
the process, the JVM crashes.

 

Method used to find the cause :- 

 

!Root_Cause_Find_Step.JPG!

 

!Log_Message.JPG!

 


> NegativeArraySizeException during query execution 
> --------------------------------------------------
>
>                 Key: CARBONDATA-2991
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2991
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.4.0, 1.3.1
>            Reporter: Babulal
>            Priority: Major
>
> During query execution, a NegativeArraySizeException is sometimes thrown in 
> some tasks, and sometimes the executor is lost (JVM crash)
>  
> java.lang.NegativeArraySizeException at 
> org.apache.carbondata.core.datastore.chunk.store.impl.unsafe.UnsafeVariableLengthDimesionDataChunkStore.getRow(UnsafeVariableLengthDimesionDataChunkStore.java:157)
>  at 
> org.apache.carbondata.core.datastore.chunk.impl.AbstractDimensionDataChunk.getChunkData(AbstractDimensionDataChunk.java:46)
>  at 
> org.apache.carbondata.core.scan.result.AbstractScannedResult.getNoDictionaryKeyArray(AbstractScannedResult.java:470)
>  at 
> org.apache.carbondata.core.scan.result.impl.NonFilterQueryScannedResult.getNoDictionaryKeyArray(NonFilterQueryScannedResult.java:102)
>  at 
> org.apache.carbondata.core.scan.collector.impl.DictionaryBasedResultCollector.collectData(DictionaryBasedResultCollector.java:101)
>  at 
> org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:51)
>  at 
> org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
>  at 
> org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)
> 
> Issue Analysis :- 
> Possible Root Cause :- An existing memory block is freed while it is still 
> in use. This happens because duplicate task ids are generated. Sometimes the 
> freed memory addresses are assigned to another task, which re-initializes the 
> memory block to 0 and causes the NegativeArraySizeException; at other times 
> the freed memory is not reused by any task of the executor process, but the 
> running task still tries to access it, and because that address is no longer 
> part of the process, the JVM crashes.
> 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
