On Apr 7, 2011, at 23:43, Jonathan Ellis wrote:
> The history is that, way back in the early days, we used to max it out
> the other way (MTT=128), but the observed behavior is that objects that
> survive 1 new gen collection are very likely to survive "forever."
Just a quick note: my own tests seem
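To make the tuning concrete: the behavior Jonathan describes is the
rationale for a low tenuring threshold. A sketch of the relevant JVM
options, cassandra-env.sh style (illustrative settings, not quoted from
anyone's config -- check them against your own heap profile):

    # Tenure after one survivor round instead of letting objects age
    # (the old MTT=128 approach), and print the tenuring distribution
    # so you can verify that survivors really do live "forever".
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
    JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"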
On Thu, Apr 7, 2011 at 2:27 PM, Erik Onnen wrote:
> 1) Does this seem like a sane amount of garbage (512MB) to generate
> when flushing a 64MB table to disk?
Sort of -- that's just about exactly the amount of space you'd expect
64MB of serialized data to take in memory. (Not very efficient, I
know.)
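As rough arithmetic (mine, not from the thread): each column in a
memtable is a full Java object graph -- object headers, references,
boxed values -- so live size commonly runs near an order of magnitude
over the serialized bytes. At ~8x that's:

    64MB serialized x ~8 in-memory overhead ≈ 512MB on the heap

which is right in line with the garbage Erik is seeing per flush.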
I'll capture what we're seeing here for anyone else who may look
into this in more detail later.
Our steady-state heap growth is ~300K between collections, with regular
ParNew collections happening about every 4 seconds on average. All
very healthy.
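For reference, a steady-state cycle like that looks something like the
following in the GC log with -XX:+PrintGCDetails (sizes and timings
below are invented for illustration):

    1234.567: [GC 1234.567: [ParNew: 341024K->3264K(368640K), 0.0210210 secs]
    1202560K->864800K(4055040K), 0.0212340 secs]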
The memtable flush (where we see almost all our
No, 2252 is not suitable for backporting to 0.7.
On Thu, Apr 7, 2011 at 7:33 AM, ruslan usifov wrote:
>
>
> 2011/4/7 Jonathan Ellis
>>
>> Hypothesis: it's probably the flush causing the CMS, not the snapshot
>> linking.
>>
>> Confirmation possibility #1: Add a logger.warn to
>> CLibrary.createHardLinkWithExec -- with JNA enabled it shouldn't be
>> called, but let's rule it out.
2011/4/7 Jonathan Ellis
> Hypothesis: it's probably the flush causing the CMS, not the snapshot
> linking.
>
> Confirmation possibility #1: Add a logger.warn to
> CLibrary.createHardLinkWithExec -- with JNA enabled it shouldn't be
> called, but let's rule it out.
>
> Confirmation possibility #2: Force some flushes w/o snapshot.
Hypothesis: it's probably the flush causing the CMS, not the snapshot linking.
Confirmation possibility #1: Add a logger.warn to
CLibrary.createHardLinkWithExec -- with JNA enabled it shouldn't be
called, but let's rule it out.
Confirmation possibility #2: Force some flushes w/o snapshot.
Either
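For anyone else running these experiments: below is a minimal sketch of
what confirmation #1 amounts to. The signature and exec fallback are
assumptions modeled loosely on the 0.7-era CLibrary, not a quote of the
real code; the only line that matters is the added warn.

    import java.io.File;
    import java.io.IOException;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CLibrarySketch
    {
        private static final Logger logger = LoggerFactory.getLogger(CLibrarySketch.class);

        // Exec-based fallback, used only when the JNA link() path is unavailable.
        static void createHardLinkWithExec(File sourceFile, File destinationFile) throws IOException
        {
            // The experiment: with JNA enabled this should never fire.
            logger.warn("Creating hard link via exec: {} -> {}",
                        sourceFile.getAbsolutePath(), destinationFile.getAbsolutePath());
            Process p = Runtime.getRuntime().exec(new String[] {
                    "ln", sourceFile.getAbsolutePath(), destinationFile.getAbsolutePath() });
            try
            {
                if (p.waitFor() != 0)
                    throw new IOException("ln exited with code " + p.exitValue());
            }
            catch (InterruptedException e)
            {
                throw new IOException("interrupted waiting for ln: " + e);
            }
        }
    }

Confirmation #2 needs no code at all: nodetool -h <host> flush <keyspace>
forces a flush without taking a snapshot, so if plain flushes trigger the
same CMS pauses, the snapshot hard-linking is ruled out.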
Hello,
We're running a six-node 0.7.4 ring in EC2 on m1.xlarge instances with 4GB heap
(15GB total memory, 4 cores, dataset fits in RAM, storage on ephemeral disk).
We've noticed a brief flurry of query failures during the night, coinciding
with our backup schedule. More specifically, our log