On Mon, Oct 31, 2011 at 11:35 AM, Mick Semb Wever <m...@apache.org> wrote:
> On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
>> >> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)
>> >
>> >
>> > I see now this was a bad choice.
>> > The read pattern of these rows is always in bulk, so the chunk_length
>> > could have been much higher so as to reduce memory usage (my largest
>> > sstable is 61G).
>> >
>> > After changing the chunk_length, is there any way to rebuild just some
>> > sstables rather than having to do a full nodetool scrub?
>>
>> Provided you're using SizeTieredCompaction (i.e., the default), you can
>> trigger a "user defined compaction" through JMX on each of the sstables
>> you want to rebuild. Not necessarily a fun process though. Also note that
>> you can scrub just an individual column family, if that was the question.
>
> Actually, this won't work, I think.
>
> I presume that scrub or any "user defined compaction" will still need to
> call SSTableReader.openDataReader(..) and so will still OOM no matter what...
>
> How the hell am I supposed to re-chunk_length an sstable? :-(
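
The "user defined compaction" I mentioned is the forceUserDefinedCompaction
operation on the CompactionManager MBean
(org.apache.cassandra.db:type=CompactionManager). Something along these
lines should do it from a small JMX client -- an untested sketch, where the
keyspace and sstable names are placeholders, and the exact operation
signature may differ between versions, so check CompactionManagerMBean on
your node first:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ForceUserDefinedCompaction
{
    public static void main(String[] args) throws Exception
    {
        // Connect to the node's JMX port (7199 by default).
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName compactionManager =
                new ObjectName("org.apache.cassandra.db:type=CompactionManager");

            // Placeholders: your keyspace name and a comma-separated list of
            // the -Data.db files you want rewritten.
            mbs.invoke(compactionManager,
                       "forceUserDefinedCompaction",
                       new Object[]{ "MyKeyspace", "MyCF-hc-1234-Data.db" },
                       new String[]{ "java.lang.String", "java.lang.String" });
        }
        finally
        {
            connector.close();
        }
    }
}

As for the OOM concern: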

You could start the node without joining the ring (to make sure it doesn't
get any work), i.e., with -Dcassandra.join_ring=false, and give the JVM
the maximum heap the machine allows. Hopefully that is enough
to recompact the sstables without OOMing.
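
Roughly, something like this in conf/cassandra-env.sh (the sizes below are
placeholders, adjust them to the machine):

MAX_HEAP_SIZE="28G"   # placeholder: as much as the box can spare
HEAP_NEWSIZE="800M"   # placeholder
JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"

Once the rebuild is done, restart the node without the flag so it rejoins
the ring.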

>
> ~mck
>
> --
> "We all may have come on different ships, but we’re in the same boat
> now." Martin Luther King. Jr.
>
> | http://semb.wever.org | http://sesat.no |
> | http://tech.finn.no   | Java XSS Filter |
>
>
