[ 
https://issues.apache.org/jira/browse/CASSANDRA-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nicolas ginder updated CASSANDRA-12707:
---------------------------------------
    Description: 
We have an extra-large partition of 40 million cells where most of the cells
have been deleted. When querying this partition with a slice query, Cassandra
runs out of memory as tombstones fill up the JVM heap. After debugging one of
the large SSTables, we found that this part of the code loads all of the
tombstones.
In org.apache.cassandra.db.filter.QueryFilter:
...
public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF,
                                              final Iterator<? extends OnDiskAtom> iter)
{
...
    while (iter.hasNext())
    {
        OnDiskAtom atom = iter.next();

        if (atom instanceof Cell)
        {
            next = (Cell) atom;
            break;
        }
        else
        {
            returnCF.addAtom(atom);
        }
    }
...
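
The problem pattern can be illustrated with a minimal, self-contained sketch (hypothetical classes, not Cassandra's actual API): while scanning for the next live cell, every non-cell atom encountered is buffered in memory, so a partition that is mostly tombstones grows the buffer without bound before a single cell is returned.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Minimal sketch of the accumulation pattern in gatherTombstones.
// The classes here are stand-ins, not Cassandra's real types.
public class TombstoneGatherSketch {
    interface Atom {}
    static class Cell implements Atom {}
    static class Tombstone implements Atom {}

    // Mirrors the loop above: skip forward to the next Cell, buffering
    // every non-Cell atom (tombstone) into 'buffered' along the way.
    static Cell nextCell(Iterator<Atom> iter, List<Atom> buffered) {
        while (iter.hasNext()) {
            Atom atom = iter.next();
            if (atom instanceof Cell) {
                return (Cell) atom;   // first live cell found
            }
            buffered.add(atom);       // tombstone retained on the heap
        }
        return null;                  // no live cell left
    }

    public static void main(String[] args) {
        // One live cell preceded by a million tombstones: all of the
        // tombstones end up buffered before the cell is returned.
        List<Atom> atoms = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) atoms.add(new Tombstone());
        atoms.add(new Cell());

        List<Atom> buffered = new ArrayList<>();
        Cell cell = nextCell(atoms.iterator(), buffered);
        System.out.println(cell != null);     // true
        System.out.println(buffered.size());  // 1000000
    }
}
```

With 40 million mostly-deleted cells, the same pattern buffers tens of millions of tombstone objects per query, which is what exhausts the heap.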


  was:
We have an extra-large partition of 40 million cells where most of the cells
have been deleted. When querying this partition, Cassandra runs out of memory
as tombstones fill up the JVM heap. After debugging one of the large SSTables,
we found that this part of the code loads all of the tombstones.
In org.apache.cassandra.db.filter.QueryFilter:
...
public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF,
                                              final Iterator<? extends OnDiskAtom> iter)
{
...
    while (iter.hasNext())
    {
        OnDiskAtom atom = iter.next();

        if (atom instanceof Cell)
        {
            next = (Cell) atom;
            break;
        }
        else
        {
            returnCF.addAtom(atom);
        }
    }
...



> JVM out of memory when querying an extra-large partition with lots of 
> tombstones
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12707
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12707
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: nicolas ginder
>             Fix For: 2.1.x, 2.2.x
>
>
> We have an extra-large partition of 40 million cells where most of the cells 
> have been deleted. When querying this partition with a slice query, Cassandra 
> runs out of memory as tombstones fill up the JVM heap. After debugging one of 
> the large SSTables, we found that this part of the code loads all of the 
> tombstones.
> In org.apache.cassandra.db.filter.QueryFilter:
> ...
> public static Iterator<Cell> gatherTombstones(final ColumnFamily returnCF,
>                                               final Iterator<? extends OnDiskAtom> iter)
> {
> ...
>     while (iter.hasNext())
>     {
>         OnDiskAtom atom = iter.next();
>         if (atom instanceof Cell)
>         {
>             next = (Cell) atom;
>             break;
>         }
>         else
>         {
>             returnCF.addAtom(atom);
>         }
>     }
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)