checksums should be closer to data generation and consumption
-------------------------------------------------------------

                 Key: HADOOP-1450
                 URL: https://issues.apache.org/jira/browse/HADOOP-1450
             Project: Hadoop
          Issue Type: Improvement
          Components: fs
            Reporter: Doug Cutting
             Fix For: 0.14.0


ChecksumFileSystem checksums data by inserting a filter between two buffers.  The outermost buffer should be as small as possible, so that, when writing, checksums are computed before the data has spent much time in memory, and, when reading, checksums are validated as close to their time of use as possible.  Currently the outer buffer is the larger one, using the bufferSize specified by the user, while the inner buffer is kept small so that most reads and writes bypass it, as an optimization.  Instead, the outer buffer should be bytesPerChecksum bytes, and the inner buffer should use the user-specified buffer size.
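
Below is a rough sketch, in plain java.io terms, of the proposed arrangement for the write path.  This is not Hadoop's actual ChecksumFileSystem code; the class name, constructor parameters, and the separate checksum stream are illustrative only.  The point is that the small per-chunk (bytesPerChecksum) buffer sits on the outside, so each chunk is checksummed as soon as it is written, while the user-sized buffer sits on the inside, next to the underlying stream.

{code:java}
import java.io.*;
import java.util.zip.CRC32;

// Hypothetical sketch, not the real ChecksumFileSystem: checksums each
// bytesPerChecksum-sized chunk as it is written, then hands the chunk to a
// large BufferedOutputStream sized by the user-specified buffer size.
class ChecksummingOutputStream extends FilterOutputStream {
  private final byte[] chunk;           // outer buffer: bytesPerChecksum bytes
  private int count = 0;
  private final CRC32 crc = new CRC32();
  private final DataOutputStream sums;  // where per-chunk checksums are written

  ChecksummingOutputStream(OutputStream data, OutputStream sums,
                           int bytesPerChecksum, int userBufferSize) {
    super(new BufferedOutputStream(data, userBufferSize)); // inner buffer: user size
    this.chunk = new byte[bytesPerChecksum];
    this.sums = new DataOutputStream(new BufferedOutputStream(sums));
  }

  @Override public void write(int b) throws IOException {
    chunk[count++] = (byte) b;
    if (count == chunk.length) flushChunk();
  }

  // Checksum the chunk immediately, before it lingers in the big inner buffer.
  private void flushChunk() throws IOException {
    crc.reset();
    crc.update(chunk, 0, count);
    sums.writeInt((int) crc.getValue());
    out.write(chunk, 0, count);          // now hand the chunk to the inner buffer
    count = 0;
  }

  @Override public void close() throws IOException {
    if (count > 0) flushChunk();         // checksum any trailing partial chunk
    super.close();
    sums.close();
  }
}
{code}

On the read path the arrangement would be mirrored: a small outer buffer of bytesPerChecksum bytes whose contents are verified against the stored checksum just before being handed to the caller, backed by an inner buffer of the user-specified size.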

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
