To: cassandra-user@incubator.apache.org
Subject: Re: Testing row cache feature in trunk: write should put record in cache
On Sat, Feb 20, 2010 at 12:20 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
We don't use native Java serialization for anything but the on-disk
BitSets in our bloom filters (because those are deserialized once at
startup, so the overhead doesn't matter), btw.

Right, tangential use is pretty ...
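To see why a one-time deserialization cost is tolerable: native serialization adds a fixed stream header and class descriptor on top of the payload, which is negligible next to a large BitSet read once at startup. A standalone illustration using only the standard library (plain java.util.BitSet here, not Cassandra's own bloom filter classes):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.util.BitSet;

public class BitSetSerializationCost {
    // Size in bytes of a BitSet written with native Java serialization.
    static int serializedSize(BitSet bits) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(bits);
            }
            return buf.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        BitSet bits = new BitSet(1 << 20);   // ~1M bits, roughly a bloom filter's backing store
        bits.set(0, 1 << 20);
        int raw = (1 << 20) / 8;             // the payload actually being stored
        System.out.println("raw payload:   " + raw + " bytes");
        System.out.println("serialized as: " + serializedSize(bits) + " bytes");
    }
}
```

The serialized form is the raw long[] words plus a roughly constant header, so the relative overhead shrinks as the filter grows.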
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Thursday, February 18, 2010 12:04 PM
To: cassandra-user@incubator.apache.org
Subject: Re: Testing row cache feature in trunk: write should put record in cache

Did you force a GC from jconsole to make sure you weren't just
measuring uncollected garbage?
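The check being suggested can also be done programmatically; a minimal sketch using only the standard library (note System.gc()/Runtime.gc() is only a hint to the JVM — jconsole's "Perform GC" button issues the same kind of full-collection request):

```java
public class HeapAfterGc {
    // Returns used heap in bytes after requesting a full GC.
    static long usedHeapAfterGc() {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {   // a few rounds help settle finalizable garbage
            rt.gc();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        // Allocate ~1 MB, drop the references, then measure.
        byte[][] garbage = new byte[1024][];
        for (int i = 0; i < garbage.length; i++) garbage[i] = new byte[1024];
        garbage = null;
        System.out.println("used heap after GC: " + usedHeapAfterGc() + " bytes");
    }
}
```

Comparing used-heap before and after such a forced collection separates live cache data from garbage that simply hasn't been collected yet.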
On Wed, Feb 17, 2010 at 2:51 PM, Weijun Li <weiju...@gmail.com> wrote:
OK I'll work on the change later, because there's another problem to solve:
the cache overhead is so big that 1.4 million records (1 KB each) consumed all
of the 6 GB of JVM memory (I guess 4 GB are consumed by the row cache). I'm
thinking that ConcurrentHashMap is not a good choice for LRU, and the row ...

-Weijun
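The arithmetic behind the concern: 1.4 million rows at 1 KB each is only ~1.4 GB of payload, so ~4 GB of cache residency implies well over 1 KB of per-entry overhead (object headers, key copies, map-entry objects, internal structures). And a ConcurrentHashMap, while thread-safe, keeps no access order, so it cannot evict least-recently-used rows. A bounded LRU can be sketched with the standard library's LinkedHashMap — a sketch of the idea under discussion, not Cassandra's actual cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal bounded LRU map: LinkedHashMap's access-order mode moves each
// accessed entry to the tail, and removeEldestEntry evicts the head
// (least recently used) once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

Note LinkedHashMap is not thread-safe, so a production cache would need external synchronization or a purpose-built concurrent LRU — which is exactly the tension with ConcurrentHashMap being raised here.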
Great!

On Wed, Feb 17, 2010 at 1:51 PM, Weijun Li <weiju...@gmail.com> wrote:
OK I'll work on the change later, because there's another problem to solve: ...
Just started to play with the row cache feature in trunk: it seems to be
working fine so far, except that for the RowsCached parameter you need to
specify a number of rows rather than a percentage (e.g., "20%" doesn't work).
Thanks for this great feature that improves read latency dramatically, so that
disk ...
On Tue, Feb 16, 2010 at 7:17 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
On Tue, Feb 16, 2010 at 7:11 PM, Weijun Li <weiju...@gmail.com> wrote:
Just started to play with the row cache feature in trunk: ...

Just tried to make a quick change to enable it, but it didn't work out :-(

    ColumnFamily cachedRow = cfs.getRawCachedRow(mutation.key());
    // What I modified: cache the row on write if it isn't cached yet
    if (cachedRow == null) {
        cfs.cacheRow(mutation.key());
    }
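The behavior the patch is aiming for — writes populating the cache so later reads hit memory — is a write-through policy. A generic, self-contained sketch with hypothetical names (Cassandra's ColumnFamilyStore internals differ):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write-through cache sketch: every write updates both the backing store
// and the cache, so subsequent reads hit memory. Hypothetical names only.
public class WriteThroughStore<K, V> {
    private final Map<K, V> backing = new ConcurrentHashMap<>();  // stand-in for the durable store
    private final Map<K, V> rowCache = new ConcurrentHashMap<>();

    public void write(K key, V value) {
        backing.put(key, value);   // durable path
        rowCache.put(key, value);  // write-through: cache on write, not just on read
    }

    public V read(K key) {
        V cached = rowCache.get(key);
        if (cached != null) return cached;      // cache hit
        V loaded = backing.get(key);            // miss: fall back to the store
        if (loaded != null) rowCache.put(key, loaded);
        return loaded;
    }
}
```

The snippet in the email only caches the key on a miss; a full write-through would also have to put the mutation's data into the cached row, which is the "actual cache part" offered below.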
... tell you what, if you write the option-processing part in
DatabaseDescriptor I will do the actual cache part. :)

On Tue, Feb 16, 2010 at 11:07 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
... https://issues.apache.org/jira/secure/CreateIssue!default.jspa, but
this is pretty low priority for me.
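The option-processing being offered here — letting RowsCached accept either an absolute row count or a percentage — might look roughly like this. All names are hypothetical (this is not the actual DatabaseDescriptor code, and estimatedKeys would have to come from somewhere else):

```java
// Sketch: parse a RowsCached-style option that is either an absolute
// count ("10000") or a percentage of the key count ("20%").
public class RowCacheOption {
    static int rowsToCache(String raw, int estimatedKeys) {
        String s = raw.trim();
        if (s.endsWith("%")) {
            double pct = Double.parseDouble(s.substring(0, s.length() - 1));
            return (int) (estimatedKeys * pct / 100.0);   // percentage of estimated keys
        }
        return Integer.parseInt(s);                        // plain row count
    }

    public static void main(String[] args) {
        System.out.println(rowsToCache("10000", 1_000_000));
        System.out.println(rowsToCache("20%", 1_000_000));
    }
}
```

Splitting the work this way makes sense: the parsing is self-contained, while sizing and populating the cache itself touches the storage engine.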