[ https://issues.apache.org/jira/browse/CARBONDATA-241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15499568#comment-15499568 ]

ASF GitHub Bot commented on CARBONDATA-241:
-------------------------------------------

Github user gvramana commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/158#discussion_r79290374
  
    --- Diff: integration/spark/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala ---
    @@ -102,7 +115,7 @@ class CarbonScanRDD[V: ClassTag](
         val splits = carbonInputFormat.getSplits(job)
         if (!splits.isEmpty) {
           val carbonInputSplits = splits.asScala.map(_.asInstanceOf[CarbonInputSplit])
    -
    +      queryModel.setInvalidSegmentIds(validAndInvalidSegments.getInvalidSegments)
    --- End diff ---
    
    Move this to the common getSplits; otherwise validAndInvalidSegments can be null if a parallel deletion happens.
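
    For illustration only, a minimal Scala sketch of the suggestion above. The types and names here are hypothetical stand-ins, not the actual CarbonData API: the idea is to resolve the segment lists inside the shared getSplits path and null-guard the result before touching the query model, so a concurrent segment deletion cannot surface as a null reference.

        // Hedged sketch only: SegmentLists and QueryModel below are hypothetical
        // stand-ins for the CarbonData classes discussed in the diff.
        object GetSplitsSketch {
          final case class SegmentLists(validSegments: List[String],
                                        invalidSegments: List[String])

          final class QueryModel {
            private var invalidSegmentIds: List[String] = Nil
            def setInvalidSegmentIds(ids: List[String]): Unit =
              invalidSegmentIds = ids
          }

          // Common getSplits path: the segment lists are resolved and applied
          // to the query model in one place, with a null guard so a parallel
          // segment deletion cannot leave the model reading a null reference.
          def updateModelFromSplits(resolveSegments: () => SegmentLists,
                                    queryModel: QueryModel): Unit = {
            val segments = resolveSegments()
            if (segments != null) {
              queryModel.setInvalidSegmentIds(segments.invalidSegments)
            }
          }
        }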


> OOM error during query execution in long run
> --------------------------------------------
>
>                 Key: CARBONDATA-241
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-241
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: kumar vishal
>            Assignee: kumar vishal
>
> **Problem:** During a long run, query execution takes progressively more 
> time and eventually throws an out-of-memory error.
> **Reason:** Compaction merges segments, and each segment's metadata is 
> loaded in memory. After compaction the compacted segments become invalid, 
> but their metadata is not removed from memory. This duplicate metadata 
> piles up, consuming more and more memory, until after a few days query 
> execution throws an OOM.
> **Solution:** Remove the invalid segments' blocks from memory.
>  
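
As a rough illustration of the proposed fix (the cache below is a hypothetical stand-in, not CarbonData's real block-metadata store): after compaction marks segments invalid, their cached metadata is evicted from an in-memory map so stale duplicates stop accumulating on the heap.

    import scala.collection.concurrent.TrieMap

    // Hedged sketch only: a hypothetical in-memory metadata cache keyed by
    // segment id, not CarbonData's actual block-metadata structures.
    object SegmentMetadataCache {
      private val metadataBySegment = TrieMap.empty[String, Array[Byte]]

      def put(segmentId: String, metadata: Array[Byte]): Unit =
        metadataBySegment.put(segmentId, metadata)

      // After compaction marks segments invalid, drop their cached metadata
      // so stale entries stop piling up and eventually causing OOM.
      def evictInvalidSegments(invalidSegmentIds: Seq[String]): Unit =
        invalidSegmentIds.foreach(metadataBySegment.remove)
    }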



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
