Hi Keith,

I have 4 tablet servers + 1 master. I also did a pre-split before ingesting and 
it increased the speed a lot.
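For anyone following the thread: a pre-split just means adding split points before ingest so the table starts out as many tablets. From the Accumulo shell it looks like the session below (the table name and split points are made-up examples; seven split points yield eight tablets — programmatically, the equivalent is `TableOperations.addSplits(...)` in the Java client):

```
user@instance> createtable mytable
user@instance mytable> addsplits 1 2 3 4 5 6 7 -t mytable
user@instance mytable> getsplits -t mytable
```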


And you're right: when I created too many ingest threads, many of them sat in 
the thread pool's queue and the hold time increased. During one intense ingest, 
a tablet server was even killed by the master because its hold time exceeded 
5 min. While this was happening, all tablets were stuck; only after that server 
died did ingest resume at a comparable speed. But the entries held in the dead 
server's memory were all gone and lost from the table.

I have found no way to prevent this other than regulating the number of ingest 
threads and the ingest speed so that the load stays friendly to Accumulo itself.


Another mystery to me: I pre-split the table into, e.g., 8 tablets, but as the 
ingest proceeds the tablet count keeps increasing (e.g., to 10, 14, or more). 
Any idea?

Hai
________________________________
From: Keith Turner <[email protected]>
Sent: Friday, July 31, 2015 8:39 AM
To: [email protected]
Subject: Re: How to control Minor Compaction by programming

How many tablets do you have?  Entire tablets are minor compacted at once.  If 
you have 1 tablet per tablet server, then minor compactions will have a lot of 
work to do at once.  While this work is being done, the tablet servers memory 
may fill up, leading to writes being held.

If you have 10 tablets per tablet server, then tablets can be compacted in 
parallel w/ less work to do at any given point in time. This can avoid 
memory filling up and writes being held.

In short, it's possible that adding good split points to the table (and 
therefore creating more tablets) may help w/ this issue.
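Keith's explanation can be sketched with a toy simulation (plain Python, not Accumulo code; every constant is an arbitrary made-up unit). The key assumptions, per the above: a tablet is minor compacted in its entirety, its memory is freed only when the flush completes, and a tablet cannot start a second flush while one is already running:

```python
import math

def held_steps(num_tablets, steps=300, ingest=10, flush_rate=12, limit=100):
    """Toy model of one tablet server: count steps on which writes are held."""
    active = [0.0] * num_tablets   # per-tablet in-memory data
    in_flight = []                 # (steps_left, size, tablet) per flush
    frozen = 0.0                   # memory pinned by in-progress flushes
    held = 0
    for _ in range(steps):
        # A minor compaction frees its tablet's memory only on completion.
        in_flight = [(t - 1, s, i) for t, s, i in in_flight]
        frozen -= sum(s for t, s, _ in in_flight if t <= 0)
        in_flight = [(t, s, i) for t, s, i in in_flight if t > 0]
        # Accept writes if memory allows, otherwise hold them.
        if sum(active) + frozen + ingest > limit:
            held += 1
        else:
            for i in range(num_tablets):
                active[i] += ingest / num_tablets
        # Memory high: flush the largest tablet not already being flushed.
        if sum(active) + frozen >= 0.7 * limit:
            busy = {i for _, _, i in in_flight}
            idle = [i for i in range(num_tablets)
                    if i not in busy and active[i] > 0]
            if idle:
                i = max(idle, key=lambda j: active[j])
                size = active[i]
                in_flight.append((math.ceil(size / flush_rate), size, i))
                frozen += size
                active[i] = 0.0
    return held

print(held_steps(1), held_steps(10))
```

With one tablet, the entire in-memory map flushes as a single long-running unit, so memory fills while the flush runs and writes get held; with ten tablets the same load flushes in small pieces that complete quickly, freeing memory incrementally.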

Also, are you seeing hold times?

On Thu, Jul 30, 2015 at 11:24 PM, Hai Pham 
<[email protected]> wrote:
Hey William, Josh and David,

Thanks for explaining; I might not have been clear: I used the web interface 
on port 50095 to monitor the real-time charts (ingest, scan, load average, 
minor compaction, major compaction, ...).

Nonetheless, as I observed, when I ingested about 100k entries -> minor 
compaction happened -> ingest was stuck -> the level of minor compaction on the 
charts was only about 1.0, 2.0, at most 3.0, while >20k entries were forced 
out of memory (I knew this by watching the number of entries in memory for 
the table being ingested into) -> then when the minor compaction ended, ingest 
resumed, somewhat faster.

Thus I presume the levels 1.0, 2.0, 3.0 are not representative of the number 
of files being minor-compacted from memory?

Hai
________________________________________
From: Josh Elser <[email protected]>
Sent: Thursday, July 30, 2015 7:12 PM
To: [email protected]
Subject: Re: How to control Minor Compaction by programming

>
> Also, can you please explain the number 0, 1.0, 2.0, ... in charts (web
> monitoring) denoting the level of Minor Compaction and Major Compaction?

On the monitor, the number of compactions is shown in the form:

active (queued)

e.g. 4 (2) would mean that 4 are running and 2 are queued.
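A tiny, hypothetical helper to make that format concrete (the exact cell text on real monitor pages may vary by version, so treat the pattern as an assumption):

```python
import re

def parse_compaction_cell(cell):
    """Split a monitor cell like '4 (2)' into (active, queued).

    A bare number such as '3.0' means nothing is queued.
    """
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)(?:\s*\((\d+)\))?\s*", cell)
    if m is None:
        raise ValueError(f"unexpected cell: {cell!r}")
    active = float(m.group(1))
    queued = int(m.group(2)) if m.group(2) else 0
    return active, queued

print(parse_compaction_cell("4 (2)"))  # (4.0, 2)
```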

>
>
> Thank you!
>
> Hai Pham
