I see. Makes sense then.

Thanks,

JM

2015-08-05 12:52 GMT-04:00 Colin McCabe <cmcc...@alumni.cmu.edu>:

> Hi Jean-Marc,
>
> Short-circuit covers reads, but this performance improvement covers writes.
>
> best,
> Colin
>
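
For illustration only: short-circuit is a read-path feature, so these client-side
settings do not change how the datanode checksums incoming writes. A minimal Java
sketch using the standard short-circuit read keys; the socket path is just an
example.

import org.apache.hadoop.conf.Configuration;

public class ShortCircuitReadConfig {
    public static Configuration withShortCircuitReads() {
        Configuration conf = new Configuration();
        // Standard short-circuit *read* settings: the DFS client reads block
        // files directly over a domain socket shared with the local datanode.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        conf.set("dfs.domain.socket.path", "/var/lib/hadoop-hdfs/dn_socket"); // example path
        // Writes (e.g. WAL appends followed by hflush) still go through the
        // datanode write pipeline and its CRC handling, which is what the
        // HDFS-8722 change quoted below optimizes.
        return conf;
    }
}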
> On Wed, Aug 5, 2015 at 7:17 AM, Jean-Marc Spaggiari
> <jean-m...@spaggiari.org> wrote:
> > Hi Nick,
> >
> > If we are doing short-circuit reads, we skip the Hadoop CRC, right? So this
> > should impact us only if we are not doing short-circuit? Or does the WAL not
> > bypass it?
> >
> > JM
> >
> > 2015-08-03 19:04 GMT-04:00 Nick Dimiduk <ndimi...@apache.org>:
> >
> >> FYI, this looks like it would impact small WAL writes.
> >>
> >> On Tue, Jul 7, 2015 at 10:44 AM, Kihwal Lee (JIRA) <j...@apache.org>
> >> wrote:
> >>
> >> > Kihwal Lee created HDFS-8722:
> >> > --------------------------------
> >> >
> >> >              Summary: Optimize datanode writes for small writes and flushes
> >> >                  Key: HDFS-8722
> >> >                  URL: https://issues.apache.org/jira/browse/HDFS-8722
> >> >              Project: Hadoop HDFS
> >> >           Issue Type: Improvement
> >> >             Reporter: Kihwal Lee
> >> >             Priority: Critical
> >> >
> >> >
> >> > After the data corruption fix in HDFS-4660, the CRC recalculation for a
> >> > partial chunk is executed more frequently if the client repeatedly writes
> >> > a few bytes and calls hflush/hsync. This is because the generic logic
> >> > forces a CRC recalculation whenever the on-disk data is not CRC
> >> > chunk-aligned. Prior to HDFS-4660, the datanode blindly accepted whatever
> >> > CRC the client provided if the incoming data was chunk-aligned. This was
> >> > the source of the corruption.
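
For illustration, not part of the JIRA text: the write pattern described above,
many small writes each followed by hflush, looks roughly like this (the path and
sizes are made up).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SmallFlushWriter {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-like-file"))) {
            byte[] record = new byte[100]; // much smaller than a 512-byte CRC chunk
            for (int i = 0; i < 1000; i++) {
                out.write(record); // the file almost always ends mid-chunk,
                out.hflush();      // so after HDFS-4660 (and before HDFS-8722) the
                                   // datanode re-reads the on-disk partial chunk and
                                   // recomputes its CRC on the next small write
            }
        }
    }
}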
> >> >
> >> > We can still optimize for the most common case, where a client repeatedly
> >> > writes a small number of bytes followed by hflush/hsync with no pipeline
> >> > recovery or append, by allowing the previous behavior for this specific
> >> > case. If the incoming data has a duplicate portion and that portion begins
> >> > at the last chunk boundary before the partial chunk on disk, the datanode
> >> > can use the checksum supplied by the client without redoing the checksum
> >> > on its own. This reduces disk reads as well as CPU load for the checksum
> >> > calculation.
> >> >
> >> > If the incoming packet data goes back further than the last on-disk chunk
> >> > boundary, the datanode will still do a recalculation, but this occurs
> >> > rarely, during pipeline recoveries. Thus the optimization for this
> >> > specific case should be sufficient to speed up the vast majority of cases.
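
Illustrative sketch only; the class, method, and constant names below are made
up, not the actual datanode code. It just shows the chunk-boundary arithmetic
behind the two cases described above: reuse the client-supplied checksum when
the resent data begins exactly at the last chunk boundary before the on-disk
partial chunk, and fall back to recalculation when it reaches back further, as
in a pipeline recovery.

public class ChecksumReuseSketch {
    // Typical dfs.bytes-per-checksum value; one CRC covers each 512-byte chunk.
    static final int BYTES_PER_CHUNK = 512;

    /**
     * onDiskLen    - bytes of this block already on disk at the datanode
     * packetOffset - offset in the block where the incoming packet's data starts
     * packetLen    - number of data bytes carried by the incoming packet
     *
     * Returns true when the packet's duplicate portion begins exactly at the
     * last chunk boundary before the on-disk partial chunk, so the datanode
     * could take the client-supplied CRC for that chunk as-is instead of
     * re-reading the on-disk bytes and recomputing it.
     */
    static boolean canReuseClientChecksum(long onDiskLen, long packetOffset, long packetLen) {
        long lastChunkBoundary = (onDiskLen / BYTES_PER_CHUNK) * BYTES_PER_CHUNK;
        boolean partialChunkOnDisk = onDiskLen > lastChunkBoundary;
        boolean startsAtBoundary = packetOffset == lastChunkBoundary;
        boolean coversOnDiskTail = packetOffset + packetLen >= onDiskLen;
        return partialChunkOnDisk && startsAtBoundary && coversOnDiskTail;
    }

    public static void main(String[] args) {
        // Small write + hflush, then another small write: the packet resends
        // from the chunk boundary (offset 0 here), so the client CRC is reusable.
        System.out.println(canReuseClientChecksum(10, 0, 20));   // true
        // Pipeline recovery resending from before the last on-disk chunk
        // boundary: the datanode still recomputes the checksum itself.
        System.out.println(canReuseClientChecksum(600, 0, 700)); // false
    }
}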
> >> >
> >> >
> >> >
> >> > --
> >> > This message was sent by Atlassian JIRA
> >> > (v6.3.4#6332)
> >> >
> >>
>
