Makes sense to me, although I don't see it making a material difference
whether a memtable holds 1000 mutations or 1001.
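
For what it's worth, a minimal self-contained sketch of the two orderings
(the ToyMemtable class and its fields below are made up for illustration,
not the real Cassandra classes) shows that the only behavioral difference
is that the check-before version requests the flush exactly one write later
than the check-after version:

public class ThresholdOrdering
{
    static class ToyMemtable
    {
        long currentSizeBytes = 0;
        final long thresholdBytes;

        ToyMemtable(long thresholdBytes) { this.thresholdBytes = thresholdBytes; }

        void put(long mutationSizeBytes) { currentSizeBytes += mutationSizeBytes; }

        boolean isThresholdViolated() { return currentSizeBytes >= thresholdBytes; }
    }

    // original ordering: test the threshold, then apply the write
    static boolean applyCheckBefore(ToyMemtable mt, long mutationSize)
    {
        boolean flushRequested = mt.isThresholdViolated();
        mt.put(mutationSize);
        return flushRequested;
    }

    // proposed ordering: apply the write, then test the threshold
    static boolean applyCheckAfter(ToyMemtable mt, long mutationSize)
    {
        mt.put(mutationSize);
        return mt.isThresholdViolated();
    }

    public static void main(String[] args)
    {
        ToyMemtable before = new ToyMemtable(1000);
        ToyMemtable after = new ToyMemtable(1000);

        // with 1-byte writes the two orderings differ by a single mutation
        // (the one-mutation difference noted above); with large mutations
        // the check-before ordering overshoots the threshold by one whole
        // mutation before a flush is requested
        for (int i = 1; i <= 1001; i++)
        {
            boolean b = applyCheckBefore(before, 1);
            boolean a = applyCheckAfter(after, 1);
            if (a || b)
                System.out.println("write " + i + ": checkBefore=" + b + ", checkAfter=" + a);
        }
    }
}

Running it, the check-after ordering requests a flush at write 1000, while
the check-before ordering only requests it at write 1001.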

On Sat, Feb 25, 2012 at 11:23 AM, Thomas Richter <t...@tricnet.de> wrote:
> Hi,
>
> while hunting down some memory consumption issues in 0.7.10, I realized
> that the MemtableThroughput condition is tested before the new data is
> written. Since this causes memtables to grow larger than expected, I changed
>
> Memtable apply(DecoratedKey key, ColumnFamily columnFamily)
>    {
>        long start = System.nanoTime();
>
>        // note: the threshold is tested *before* this write is applied
>        boolean flushRequested = memtable.isThresholdViolated();
>        memtable.put(key, columnFamily);
>        ColumnFamily cachedRow = getRawCachedRow(key);
>        if (cachedRow != null)
>            cachedRow.addAll(columnFamily);
>        writeStats.addNano(System.nanoTime() - start);
>
>        return flushRequested ? memtable : null;
>    }
>
> to
>
> Memtable apply(DecoratedKey key, ColumnFamily columnFamily)
>    {
>        long start = System.nanoTime();
>
>        memtable.put(key, columnFamily);
>        ColumnFamily cachedRow = getRawCachedRow(key);
>        if (cachedRow != null)
>            cachedRow.addAll(columnFamily);
>        writeStats.addNano(System.nanoTime() - start);
>        // the threshold is now tested *after* the write has been applied
>        boolean flushRequested = memtable.isThresholdViolated();
>        return flushRequested ? memtable : null;
>    }
>
> Are there any objections to this change? So far it works for me.
>
> Best,
>
> Thomas



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
