[ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200343#comment-14200343 ]
Eric Newton commented on ACCUMULO-3303:
---------------------------------------

When using a WAL of 1G, a block size of 1.1G is allocated, which ensures that the file can be written without additional NN requests. With a WAL of 8G, the block size must be < 2^31 because block lengths are signed 32-bit integers, so additional NN requests are required as the WAL grows to create new blocks. There is therefore a known difference between large (>2G) WALs and the default size (1G). I don't see how that accounts for the performance you are seeing, though.

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large write-ahead log. I ran some continuous ingest tests varying tserver.walog.max.size in {512M, 1G, 2G, 4G, 8G} and got some results that I have yet to understand. I was expecting to see the effects of walog metadata management as described in ACCUMULO-2889, but I also found an additional behavior of ingest slowing down for long periods when using a large walog size.
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}
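To make the arithmetic in the comment concrete, here is a minimal sketch of the block-size behavior it describes: pad the WAL size by 10% so a default-sized log fits in a single block, and cap the block size below 2^31 because block lengths are signed 32-bit integers. This is not the actual DfsLogger code; the class and method names are hypothetical, and only the 1.1x padding and the 2^31 limit come from the comment above.

{code}
// Hypothetical sketch, not Accumulo source: block-size arithmetic
// described in the comment above.
public class WalBlockSizeSketch {

  // Block lengths are signed 32-bit integers, so a block must be < 2^31 bytes.
  static final long MAX_BLOCK_SIZE = Integer.MAX_VALUE; // 2^31 - 1

  // Pad the WAL size by 10% (e.g. 1G WAL -> 1.1G block), capped below 2^31.
  static long blockSizeFor(long walogMaxSize) {
    long padded = (long) (walogMaxSize * 1.1);
    return Math.min(padded, MAX_BLOCK_SIZE);
  }

  // Number of blocks (hence NameNode allocations) a full WAL needs.
  static long blocksFor(long walogMaxSize) {
    long blockSize = blockSizeFor(walogMaxSize);
    return (walogMaxSize + blockSize - 1) / blockSize; // ceiling division
  }

  public static void main(String[] args) {
    long GB = 1L << 30;
    // 1G WAL: fits in its single 1.1G block, no extra NN requests.
    System.out.println("1G WAL -> blocks: " + blocksFor(GB));      // 1
    // 8G WAL: block size is capped below 2^31, so the log spans 5 blocks.
    System.out.println("8G WAL -> blocks: " + blocksFor(8 * GB));  // 5
  }
}
{code}

Under that cap, an 8G WAL spans five blocks, meaning four extra trips to the NameNode over the life of one log, whereas a 1G WAL never leaves its single padded block.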