I just tested the ingestion with 10 tablets per server, and now the system 
achieves up to 24 concurrent major compactions. 

I have another issue, this time with the tserver.memory.maps.max property. 
I am playing with the size of the native map to track how its size affects 
ingestion performance. I set table.compaction.minor.logs.threshold and 
tserver.walog.max.size large enough that I can increase the map size up to 
40GB without problems. At first, changing the size of the map (using the 
shell) did not bring any benefit to the ingestion process; monitoring the 
servers, I noticed that their RAM consumption stayed constant. Eventually 
I tried restarting the tablet servers after changing 
tserver.memory.maps.max, and the RAM usage on each node increased as I 
expected. From the documentation I understood that any change to 
tserver.memory.maps.max should take effect without restarting the tablet 
servers; is that always true? (tserver.memory.maps.native.enabled has 
always been true, and from the logs I can see that the shared library was 
loaded correctly.)
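For context, the change and the check were done roughly like this from the 
Accumulo shell (a sketch; the 40G value is the one from the test above):

```shell
# Inside the Accumulo shell: set the system-wide native map size...
config -s tserver.memory.maps.max=40G

# ...then confirm the override is registered:
config -f tserver.memory.maps.max
```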
Thanks!

Best Regards,
Max



From:   Michael Wall <mjw...@gmail.com>
To:     user@accumulo.apache.org
Date:   23/03/2017 17:49
Subject:        Re: tserver.compaction.*.concurrent.max behavior in 
Accumulo 1.8.1



Yes, until you hit another constraint like Marc and Dave were asking 
about.

Mike

On Thu, Mar 23, 2017 at 11:34 AM Massimilian Mattetti <massi...@il.ibm.com> wrote:
I wasn't aware of such a constraint. So I can just increase the number of 
tablets per server and it will perform more major compactions.
Thanks,
Max




From:        Michael Wall <mjw...@gmail.com>
To:        user@accumulo.apache.org
Date:        23/03/2017 17:09
Subject:        Re: tserver.compaction.*.concurrent.max behavior in 
Accumulo 1.8.1



Max,

So your max major compactions will be 3 per tablet server.  Accumulo will 
not run 2 majors on the same tablet concurrently.
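A back-of-envelope sketch of that bound (plain shell arithmetic, not 
Accumulo code): the effective cap per tablet server is the smaller of the 
configured maximum and the number of tablets hosted, since at most one 
major compaction runs per tablet at a time.

```shell
tablets_per_server=3
configured_max=8   # tserver.compaction.major.concurrent.max

# Effective cap: min(tablets hosted, configured maximum).
effective=$(( tablets_per_server < configured_max ? tablets_per_server : configured_max ))
echo "$effective"   # capped by the tablet count, not the setting

# After re-splitting to 10 tablets per server, the setting becomes the
# limit, giving a cluster-wide bound across 3 tablet servers:
cluster=$(( (10 < configured_max ? 10 : configured_max) * 3 ))
echo "$cluster"
```

With 10 tablets per server this works out to 8 majors per server, i.e. 24 
cluster-wide, matching the number Max reported after re-splitting.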

Mike

On Thu, Mar 23, 2017 at 10:37 AM Massimilian Mattetti <massi...@il.ibm.com> wrote:
One table, 9 pre-split tablets, 3 tablets per server, and the data is 
uniformly distributed among the tablets.
Max




From:        Michael Wall <mjw...@gmail.com>
To:        user@accumulo.apache.org
Date:        23/03/2017 16:28
Subject:        Re: tserver.compaction.*.concurrent.max behavior in 
Accumulo 1.8.1



Max,

On your 3-node cluster, how many tables are you ingesting into? How many 
tablets are in each table? Are the tablets spread equally among the 3 
tablet servers?

Mike

On Thu, Mar 23, 2017 at 10:13 AM Massimilian Mattetti <massi...@il.ibm.com> wrote:
With the configuration I presented before, the concurrent major 
compactions never exceed 3 per tablet server, and the minors stay under 4 
per node. Could one of the other configuration settings be the cause of 
this behavior?

Regards,
Max



From:        Dave Marion <dlmar...@comcast.net>
To:        user@accumulo.apache.org, Massimilian Mattetti/Haifa/IBM@IBMIL
Date:        23/03/2017 14:55
Subject:        Re: tserver.compaction.*.concurrent.max behavior in 
Accumulo 1.8.1



Can you explain more what you mean by "My problem is that both the minor 
and major compactions do not overcome their default max values"? I have 
done some testing with 1.8.1, specifically modifying 
tserver.compaction.major.concurrent.max to a higher number, and I have 
seen it take effect.

On March 23, 2017 at 7:54 AM Massimilian Mattetti <massi...@il.ibm.com> wrote:

Hi All,

I am running a heavy ingestion process on a 3-node cluster of Accumulo 
1.8.1, using the following configuration:

table.compaction.minor.logs.threshold=10
table.durability=flush
table.file.max=30

tserver.wal.blocksize=2G
tserver.walog.max.size=4G
tserver.mutation.queue.max=2M
tserver.memory.maps.max=4G
tserver.compaction.minor.concurrent.max=50
tserver.compaction.major.concurrent.max=8
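For reference, properties like the above can be applied from the Accumulo 
shell along these lines (a sketch; `mytable` is a placeholder table name):

```shell
# Per-table properties, on a placeholder table:
config -t mytable -s table.compaction.minor.logs.threshold=10
config -t mytable -s table.durability=flush
config -t mytable -s table.file.max=30

# System-wide tablet-server properties:
config -s tserver.walog.max.size=4G
config -s tserver.compaction.minor.concurrent.max=50
config -s tserver.compaction.major.concurrent.max=8
```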

My problem is that both the minor and major compactions do not overcome 
their default max values. I checked the config from the shell and it looks 
fine to me:

default   | tserver.compaction.major.concurrent.max ............... | 3
system    |    @override .......................................... | 8

default   | tserver.compaction.minor.concurrent.max ............... | 4
system    |    @override .......................................... | 50

Has something changed since 1.8.0? I haven't seen this behavior with the 
previous version. 
Thanks.

Regards,
Max










