[jira] [Updated] (CARBONDATA-2991) NegativeArraySizeException during query execution

2018-10-03 Thread Babulal (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Babulal updated CARBONDATA-2991:

Description: 
During query execution, a NegativeArraySizeException is sometimes thrown in some 
tasks, and sometimes the executor is lost (JVM crash).

 

java.lang.NegativeArraySizeException at 
org.apache.carbondata.core.datastore.chunk.store.impl.unsafe.UnsafeVariableLengthDimesionDataChunkStore.getRow(UnsafeVariableLengthDimesionDataChunkStore.java:157)
 at 
org.apache.carbondata.core.datastore.chunk.impl.AbstractDimensionDataChunk.getChunkData(AbstractDimensionDataChunk.java:46)
 at 
org.apache.carbondata.core.scan.result.AbstractScannedResult.getNoDictionaryKeyArray(AbstractScannedResult.java:470)
 at 
org.apache.carbondata.core.scan.result.impl.NonFilterQueryScannedResult.getNoDictionaryKeyArray(NonFilterQueryScannedResult.java:102)
 at 
org.apache.carbondata.core.scan.collector.impl.DictionaryBasedResultCollector.collectData(DictionaryBasedResultCollector.java:101)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:51)
 at 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
 at 
org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)

 

 

 

Issue Analysis:

Possible root cause: an existing memory block is freed while it is still in use, 
because a duplicate task id is generated. Sometimes the freed memory addresses 
are reassigned to another task, which initializes the memory block to 0; a 
length read from that zeroed memory then causes the NegativeArraySizeException. 
At other times the freed memory is not reused by any task of the executor 
process, but the running task still tries to access it, and since that address 
is no longer part of the process, the JVM crashes.
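
The failure mode above can be sketched with a minimal model (the class and 
method names here are illustrative, not CarbonData's actual memory manager): 
when blocks are tracked per task id, a duplicate id lets one task's cleanup 
discard a block another task is still using.

```java
import java.util.HashMap;
import java.util.Map;

public class DuplicateTaskIdDemo {
    // Memory blocks tracked per task id, as a memory manager might do.
    static Map<Long, long[]> blocksByTaskId = new HashMap<>();

    static long[] allocate(long taskId, int size) {
        long[] block = new long[size];
        // A duplicate task id silently overwrites the first task's entry.
        blocksByTaskId.put(taskId, block);
        return block;
    }

    static void freeAll(long taskId) {
        // One task finishing frees everything registered under the shared id.
        blocksByTaskId.remove(taskId);
    }

    public static void main(String[] args) {
        long[] taskA = allocate(42L, 8); // task A registers id 42
        long[] taskB = allocate(42L, 8); // task B reuses the SAME id (the bug)
        freeAll(42L);                    // task B finishes and frees id 42
        // Task A's block is no longer tracked. With off-heap (unsafe) memory the
        // freed address may be reused and zeroed, so a stored length reads back
        // wrong, or the access lands outside the process and crashes the JVM.
        System.out.println(blocksByTaskId.containsKey(42L));
    }
}
```

In the on-heap model above the lost block is merely untracked; with unsafe 
off-heap memory the same pattern dereferences a freed address, which matches 
both observed symptoms.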

 

*Steps to find cause* 

*Add code that builds a list of task ids and appends each task id in 
setCarbonTaskInfo(); if the id is already present (a duplicate), log a WARN 
message.*

Please check the attachment. 
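
A hedged sketch of that diagnostic (the method and message wording are 
illustrative; the real change would hook into CarbonData's setCarbonTaskInfo() 
and its logger): remember every task id seen and warn when one repeats.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TaskIdDuplicateDetector {
    // Thread-safe set of all task ids seen so far in this executor.
    private static final Set<Long> SEEN_TASK_IDS = ConcurrentHashMap.newKeySet();

    public static void setCarbonTaskInfo(long taskId) {
        // add() returns false if the id was already present, i.e. a duplicate.
        if (!SEEN_TASK_IDS.add(taskId)) {
            // The real change would log through the Carbon logging service.
            System.out.println("WARN Already This Task is Present " + taskId);
        }
        // ... existing task-info setup would follow here
    }

    public static void main(String[] args) {
        setCarbonTaskInfo(29971946637094373L);
        setCarbonTaskInfo(29971946637094373L); // duplicate triggers the warning
    }
}
```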

 

Run the query multiple times; the warning message appears in the executor logs:

2018-09-29 14:48:41,840 | INFO | [[Executor task launch worker for task 
435242][partitionID:1;queryID:29971946625611231]] | [Executor task launch 
worker for task 435242][partitionID:1;queryID:29971946625611231] Total memory 
used after task 29971946381679677 is 0 Current tasks running now are : [] | 
org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
2018-09-29 14:48:41,840 | INFO | [[Executor task launch worker for task 
435242][partitionID:1;queryID:29971946625611231]] | Finished task 17091.0 in 
stage 22.0 (TID 435242). 1412 bytes result sent to driver | 
org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
*2018-09-29 14:48:41,842 | WARN | [Executor task launch worker 
for task 435393] | Executor task launch worker for task 435393 Already This 
Task is is Present29971946637094373 | 
org.apache.carbondata.common.logging.impl.StandardLogService.logWarnMessage(StandardLogService.java:168)*
2018-09-29 14:48:41,842 | INFO | [dispatcher-event-loop-13] | Got assigned task 
435395 | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)

Babulal updated CARBONDATA-2991:

Attachment: Root_Cause_Find_Step.JPG

> NegativeArraySizeException during query execution 
> --
>
> Key: CARBONDATA-2991
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2991
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.1
>Reporter: Babulal
>Assignee: Babulal
>Priority: Major
> Attachments: Root_Cause_Find_Step.JPG
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

