We are importing lots and lots of data, 130 TB worth.  If we set the
compaction limit to, say, 128, and the blocking limit to, say, 200, I
know we can expect longer read times unless we use a bloom filter;
however, are there any other detrimental performance issues to be
expected?  Our flush size limit is set to 100 MB.  I notice that the
REST server is sometimes unable to talk to regions if there are too
many store files; is that expected behavior?  During massive imports
we seem to run into problems with REST server freezes (it responds on
TCP, but not to puts or gets), and CPU usage spikes and stays high
until we restart the REST server.  Any ideas?
(I've attached link to jstack trace in my previous email).
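For reference, here is a sketch of what I believe the settings above map to in hbase-site.xml, assuming the standard HBase property names for the compaction threshold, the blocking store-file limit, and the memstore flush size (values taken from the numbers above; 100 MB expressed in bytes):

```xml
<!-- hbase-site.xml fragment; a sketch, not our exact production config -->
<configuration>
  <!-- number of store files before a compaction is considered -->
  <property>
    <name>hbase.hstore.compactionThreshold</name>
    <value>128</value>
  </property>
  <!-- store-file count at which updates to a region are blocked -->
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>200</value>
  </property>
  <!-- memstore flush size: 100 MB in bytes -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>104857600</value>
  </property>
</configuration>
```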

-Jack
