On Thu, Dec 3, 2009 at 3:59 PM, Freeman, Tim <[email protected]> wrote:
> I stopped the client at 11:28.  There were 2306 files in data/Keyspace1.  
> It's now 12:44, and there are 1826 files in data/Keyspace1.  As I wrote this 
> email, the number increased to 1903, then to 1938 and 2015, even though the 
> server has no clients.  I used jconsole to invoke a few explicit garbage 
> collections and the number went down to 811.

Sounds normal.

> jconsole reports that the compaction pool has 1670 pending tasks.  As I wrote 
> this email, the number gradually increased to 1673.  The server has no 
> clients, so this is odd.  The number of completed tasks in the compaction 
> pool has consistently been going up while the number of pending tasks stays 
> the same.  The number of completed tasks increased from 130 to 136.

This is because whenever compaction finishes, it adds another
compaction task to see if the newly compacted table is itself large
enough to compact with others.  In a system where compaction has kept
up with demand, these are quickly cleaned out of the queue, but in
your case they are stuck behind all the compactions that are merging
sstables.
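
In toy form, the queueing behaves something like the sketch below.
This is purely illustrative Java (a self-contained demo, not the
actual Cassandra code, and the names are made up): slow "merge" tasks
each submit a cheap follow-up check onto the same single-threaded
pool, so the checks pile up behind the remaining merges.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Toy model only: each finished "compaction" queues a cheap re-check
    // of its output, and those checks wait behind the remaining merges.
    public class CompactionQueueDemo {
        static final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        static final Runnable RECHECK = new Runnable() {
            public void run() {
                // cheap check: would the freshly written sstable's bucket
                // now be big enough to compact again?  (no-op in the toy)
            }
        };

        static final Runnable MERGE = new Runnable() {
            public void run() {
                sleep(200);             // pretend to merge a bucket of sstables
                pool.submit(RECHECK);   // queue a check on the merged result
            }
        };

        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 5; i++)
                pool.submit(MERGE);
            for (int i = 0; i < 8; i++) {
                System.out.println("pending=" + pool.getQueue().size()
                        + " completed=" + pool.getCompletedTaskCount());
                Thread.sleep(200);
            }
            pool.shutdown();
        }

        static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { }
        }
    }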

So this is working as designed, but the design is poor because it
causes confusion.  If you can open a ticket for this, that would be
great.

> log.2009-12-02-19: WARN [Timer-0] 2009-12-02 19:55:23,305 
> LoadDisseminator.java (line 44) Exception was generated at : 12/02/2009 
> 19:55:22 on thread Timer-0

These have been fixed and are unrelated to compaction.

So it sounds like things are working.  If you leave it alone for a
while it will finish compacting everything, the queue of compaction
jobs will clear out, and reads should be fast(er) again.
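
If you want to watch the queue drain without keeping jconsole open,
a small JMX poller like the one below should do it.  I'm writing the
JMX port, the MBean name, and the attribute names from memory, so
verify all three against what jconsole shows on your node first.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Prints the compaction pool's pending/completed counts every 10s.
    // Port 8080, the MBean name, and the attribute names are assumptions
    // from memory -- check them in jconsole before trusting the output.
    public class CompactionWatcher {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName pool = new ObjectName(
                    "org.apache.cassandra.concurrent:type=COMPACTION-POOL");
            while (true) {
                System.out.println("pending="
                        + mbs.getAttribute(pool, "PendingTasks")
                        + " completed="
                        + mbs.getAttribute(pool, "CompletedTasks"));
                Thread.sleep(10000);
            }
        }
    }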

Like I said originally, increasing memtable size / object count will
reduce the number of compactions required.  That's about all you can do
in 0.5...  Can you tell if the system is i/o or cpu bound during
compaction?
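
For reference, the memtable knobs I mean are the ones in
storage-conf.xml.  I'm quoting the element names and example values
from memory, so double-check them against the copy that shipped with
your build:

    <!-- Larger memtables flush less often, so fewer sstables get
         written and fewer compactions are needed.  Names/values from
         memory -- verify against your storage-conf.xml. -->
    <MemtableSizeInMB>128</MemtableSizeInMB>
    <MemtableObjectCountInMillions>0.3</MemtableObjectCountInMillions>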

-Jonathan
