In my little case, just 2 clients to 2 tservers. It's possible I could've seen the throughput I expected, but I was running Continuous Ingest, so modifying the number of writers doesn't really help much for my tiny case.

On 7/23/14, 6:14 PM, Jonathan Park wrote:
+1 as well.

In our 1.5.x deployments, we typically increase the tserver.mutation.queue.max 
value after an installation to achieve acceptable ingest performance. 
Increasing to 1M to be consistent with 1.6.0 sounds like a good idea.
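For reference, this is roughly the stanza we drop into accumulo-site.xml before
starting the tservers (the property name is the same in 1.5.x and 1.6.0; the 1M
value here is just the proposed default, tune as needed):

  <property>
    <name>tserver.mutation.queue.max</name>
    <value>1M</value>
  </property>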

Out of curiosity, how many concurrent clients were actively writing to your 
instance, and how many tservers? What effect does varying the # of concurrent 
writers have? (Trying to see how the benefit from WAL group commit varies.)
Jonathan Park
Senior Software Engineer | Sqrrl
-----------------------------------
130 Prospect Street | Cambridge, MA 02139
703.501.0449 | www.sqrrl.com
-----------------------------------




On Jul 23, 2014, at 5:43 PM, Bill Havanki <[email protected]> wrote:

+1


On Wed, Jul 23, 2014 at 4:36 PM, Josh Elser <[email protected]> wrote:

I started running some tests against 1.5.2 today and was baffled for a few
hours with horrid ingest performance (~30% of what I expected).

After staring at configs for a while, I finally realized it was because I
hadn't increased tserver.mutation.queue.max from the default of 256K. Sure
enough, bumping it up got me the speeds I expected.
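
(For anyone else who hits this, something like the following in the shell should
show what your instance is actually using and let you change it, assuming the
config command's -f/-s flags haven't changed; I ended up setting it in
accumulo-site.xml and restarting the tservers rather than relying on a live change:

  root@instance> config -f tserver.mutation.queue.max
  root@instance> config -s tserver.mutation.queue.max=1M
)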

I see that in 1.6.0, the default value for this was increased from 256K to
1M (in ACCUMULO-1905). I know we have it written down in the release notes,
but I think the likelihood of causing terrible performance for users is
much greater than the risk of causing OOMEs (or similar increased memory
footprint problems). I would like to change the default for 1.5.2 to 1M to
match 1.6.0.
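
As far as the memory footprint goes, my understanding is that the worst case is
roughly the queue size multiplied by the number of concurrently writing sessions,
so the ballpark difference looks something like (100 writers is just an
illustration):

  256K x 100 concurrent writers ~= 25MB
  1M   x 100 concurrent writers ~= 100MB

which seems like a reasonable trade for not silently crippling ingest.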

Thoughts/worries/complaints?




--
// Bill Havanki
// Solutions Architect, Cloudera Govt Solutions
// 443.686.9283

