Github user keith-turner commented on a diff in the pull request:

    https://github.com/apache/accumulo/pull/176#discussion_r86040011
  
    --- Diff: _posts/blog/2016-10-28-durability-performance.md ---
    @@ -0,0 +1,136 @@
    +---
    +title: "Durability Performance Implications"
    +date: 2016-10-28 17:00:00 +0000
    +author: Keith Turner
    +---
    +
    +## Overview
    +
    +Accumulo stores recently written data in a sorted in memory map.  Before 
data is
    +added to this map, it's written to an unsorted WAL (write ahead log).  In 
the
    +case when a Tablet Server dies, the recently written data is recovered 
from the
    +WAL.
    +
    +When data is written to Accumulo the following happens :
    +
    + * Client sends a batch of mutations to a tablet server
    + * Tablet server does the following :
    +   * Writes mutation to Tablet Servers WAL
    +   * Sync or flush WAL
    +   * Adds mutations to sorted in memory maps
    +   * Reports success back to client.
    +
    +The sync/flush step above moves data written to the WAL from memory to 
disk.
    +Write ahead logs are stored in HDFS. HDFS supports two ways of forcing 
data to
    +disk for an open file : `hsync` and `hflush`.  
    +
    +## HDFS Sync/Flush Details
    +
    +When `hflush` is called on a WAL, it does not ensure data is on disk.  It 
only
    +ensure that data is in OS buffers on each datanode and on its way to disk. 
 As a
    +result calls to `hflush` are very fast.  If a WAL is replicated to 3 data 
nodes
    +then data may be lost if all three machines reboot.  If the datanode 
process
    +dies, thats ok because it flushed to OS.  The machines have to reboot for 
data
    +loss to occur.
    +
    +In order to avoid data loss in the event of reboot, `hsync` can be called. 
 This
    +will ensure data is written to disk on all datanodes before returning.  
When
    +using `hsync` for the WAL, if Accumulo reports success to a user it means 
the
    +data is on disk.  However `hsync` is much slower than `hflush` and the way 
it's
    +implemented exacerbates the problem.  For example `hflush` make take 1ms 
and
    +`hsync` may take 50ms.  This difference will impact writes to Accumulo and 
can
    +be mitigated in some situations with larger buffers in Accumulo.
    +
    +HDFS keeps checksum data internally by default.  Datanodes store checksum 
data
    +in a separate file in the local filesystem.  This means when `hsync` is 
called
    +on a WAL, two files must be synced on each datanode.  Syncing two files 
doubles
    +the time. To make matters even worse, when the two files are synced the 
local
    +filesystem metadata is also synced.  Depending on the local filesystem and 
its
    +configuration, syncing the metadata may or may not take time.  In the worst
    +case, we need to wait for four sync operations at the local filesystem 
level on
    +each datanode. One thing I am not sure about, is if these sync operations 
occur
    +in parallel on the replicas on different datanodes.  Lets hope they occur 
in
    +parallel.  The following pointers show where sync occurs in the datanode 
code.
    +
    + * [BlockReceiver.flushOrSync()][fos] calls 
[ReplicaOutputStreams.syncDataOut()][ros1] and 
[ReplicaOutputStreams.syncChecksumOut()][ros2] when `isSync` is true.
    + * The methods in ReplicaOutputStreams call [FileChannel.force(true)][fcf] 
which
    +   synchronously flushes data and filesystem metadata.
    +
    +If files were preallocated (this would avoid syncing local filesystem 
metadata)
    +and checksums were stored in-line, then 1 sync could be done instead of 4. 
 
    +
    +## Configuring WAL flush/sync in Accumulo 1.6
    +
    +Accumulo 1.6.0 only supported `hsync` and this caused [performance
    +problems][160_RN_WAL].  In order to offer better performance, the option to
    +configure `hflush` was [added in 1.6.1][161_RN_WAL].  The
    +[tserver.wal.sync.method][16_UM_SM] configuration option was added to 
support
    +this feature.  This was a tablet server wide option that applied to 
everything
    +written to any table.   
    +
    +## Group Commit
    +
    +Each Accumulo Tablet Server has a single WAL.  When multiple clients send
    +mutations to a tablet server at around the same time, the tablet sever may 
group
    +all of this into a single WAL operation.  It will do this instead of 
writing and
    +syncing or flushing each client's mutations to the WAL separately.  Doing 
this
    +increase throughput and lowers average latency for clients.
    +
    +## Configuring WAL flush/sync in Accumulo 1.7+
    +
    +Accumulo 1.7.0 introduced [table.durability][17_UM_TD], a new per table 
property
    +for configuring durability.  It also stopped using the 
`tserver.wal.sync.method`
    +property.  The `table.durability` property has the following four legal 
values.
    +
    + * **none** : Do not write to WAL            
    + * **log**  : Write to WAL, but do not sync  
    + * **flush** : Write to WAL and call `hflush` 
    + * **sync** : Write to WAL and call `hsync`  
    +
    +If multiple writes arrive at around the same time with different durability
    +settings, then the group commit code will choose the most conservative
    +durability.  This can cause one tables settings to slow down writes to 
another
    +table.  
    +
    +In Accumulo 1.6, it was easy to make all writes use `hflush` because there 
was
    +only one tserver setting.  Getting everything to use `flush` in 1.7 and 
later
    +can be a little tricky because by default the Accumulo metadata table is 
set to
    +use `sync`.  Executing the following command in the Accumulo shell will
    +accomplish this (assuming no tables or namespaces have been specifically 
set to
    +`sync`).  The first command sets a system wide table default for `flush`.  
The
    +second two commands override metadata table specific settings of `sync`.
    +
    +```
    +config -s table.durability=flush
    +config -t accumulo.metadata -s table.durability=flush
    +config -t accumulo.root -s table.durability=flush
    --- End diff --
    
    I made further changes to this section in ebf9bca in an attempt to more 
clearly communicate why 1.7+ is trickier to configure.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

Reply via email to