Branch: refs/heads/master
  Home:   https://github.com/conformal/btcd
  Commit: 5ec951f6a701ff2e1753bf9c10cf2208cef56e65
      
https://github.com/conformal/btcd/commit/5ec951f6a701ff2e1753bf9c10cf2208cef56e65
  Author: Dave Collins <[email protected]>
  Date:   2014-02-01 (Sat, 01 Feb 2014)

  Changed paths:
    M blockmanager.go
    M peer.go

  Log Message:
  -----------
  Rework and improve headers-first mode.

This commit improves how the headers-first mode works in several ways.

The previous headers-first code was an initial implementation that did not
have all of the bells and whistles and had a few less than ideal
characteristics.  This commit improves the headers-first code to resolve
the issues discussed next.

- The previous code only used headers-first mode when starting out from
  block height 0 rather than allowing it to work starting at any height
  before the final checkpoint.  This meant that if you stopped the chain
  download at any point before the final checkpoint and restarted, it
  would not resume and you therefore would not have the benefit of the
  faster processing offered by headers-first mode.
- Previously all headers (even those after the final checkpoint) were
  downloaded and only the final checkpoint was verified.  This resulted in
  the following issues:
  - As the block chain grew, increasingly large numbers of headers were
    downloaded and kept in memory
  - If the node serving up the headers was serving an invalid
    chain, it wouldn't be detected until downloading a large number of
    headers
  - When an invalid checkpoint was detected, no action was taken to
    recover which meant the chain download would essentially be stalled
- The headers were kept in memory even though they didn't need to be as
  merely keeping track of the hashes and heights is enough to prove they
  properly link together and checkpoints match (see the sketch after this
  list)
- There was no logging when headers were being downloaded so it could
  appear like nothing was happening
- Duplicate requests for the same headers weren't being filtered which
  meant it was possible to inadvertently download the same headers twice
  only to throw them away.
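
As a rough illustration of the point about hashes and heights, the
following standalone sketch (hypothetical types and names, not the actual
btcd code) verifies that a batch of received headers links together and
that the header at a checkpoint height hashes to the expected value, while
retaining nothing more than a height and a hash per header:

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // headerNode tracks only what is needed per received header.
    type headerNode struct {
        height int64
        hash   [32]byte
    }

    // blockHeader is a stand-in for a wire block header; only the fields
    // this sketch needs are included.
    type blockHeader struct {
        prevBlock [32]byte
        payload   []byte // the remaining serialized header fields
    }

    // blockHash returns a stand-in header hash (double SHA-256 over the
    // serialized header).
    func (h *blockHeader) blockHash() [32]byte {
        first := sha256.Sum256(append(h.prevBlock[:], h.payload...))
        return sha256.Sum256(first[:])
    }

    // checkHeaders walks a batch of received headers, verifying each one
    // links to the previously tracked node and that the header at
    // checkpointHeight hashes to checkpointHash.  Only a hash and height
    // are retained per header.
    func checkHeaders(last headerNode, headers []*blockHeader,
        checkpointHeight int64, checkpointHash [32]byte) error {

        for _, hdr := range headers {
            if !bytes.Equal(hdr.prevBlock[:], last.hash[:]) {
                return fmt.Errorf("header at height %d does not link to "+
                    "the previous header", last.height+1)
            }
            last = headerNode{height: last.height + 1, hash: hdr.blockHash()}
            if last.height == checkpointHeight &&
                !bytes.Equal(last.hash[:], checkpointHash[:]) {
                return fmt.Errorf("checkpoint mismatch at height %d",
                    last.height)
            }
        }
        return nil
    }

    func main() {
        // Illustrative only: verify an empty batch from a known node.
        err := checkHeaders(headerNode{height: 0}, nil, 1, [32]byte{})
        fmt.Println("verification result:", err)
    }

Since each header commits to its parent via the previous block hash,
checking that link and comparing the hash at the checkpoint height needs
none of the earlier headers to remain in memory.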

This commit resolves these issues with the following changes:

- The current height is now examined at startup and prior to each sync peer
  selection to allow headers-first mode to resume from the known height to
  the next checkpoint (see the sketch after this list)
- All checkpoints are now verified and the headers are only downloaded
  from the current known block height up to the next checkpoint.  This has
  several desirable properties:
  - The amount of memory required is bounded by the maximum distance
    between two checkpoints rather than the entire length of the chain
  - A node serving up an invalid chain is detected very quickly and with
    little work
  - When an invalid checkpoint is detected, the headers are simply
    discarded and the peer is disconnected for serving an invalid chain
  - When the sync peer disconnects, all current headers are thrown away
    and, due to the aforementioned resume code, when a new sync peer
    is selected, headers-first mode will continue from the last known good
    block
- In addition to reduced memory usage from only keeping information about
  headers between two checkpoints, the only information now kept in memory
  about the headers is the hash and height rather than the entire header
- There is now logging information about what is happening with headers
- Duplicate header requests are now filtered
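
The resume logic, per-checkpoint header download, and duplicate request
filtering described above can be pictured with the rough sketch below.
The names (syncManager, findNextCheckpoint, sendGetHeaders,
requestedHeaders) are hypothetical stand-ins for the real logic in
blockmanager.go and peer.go, not the actual API:

    package main

    import "fmt"

    // checkpoint pairs a hard-coded height with its known block hash.
    type checkpoint struct {
        height int64
        hash   [32]byte
    }

    // syncManager holds just enough state for the sketch.
    type syncManager struct {
        checkpoints      []checkpoint // sorted ascending by height
        bestHeight       int64        // current known chain height
        nextCheckpoint   *checkpoint  // target of the current headers batch
        headersFirstMode bool
        requestedHeaders map[[32]byte]struct{} // filters duplicate requests
    }

    // findNextCheckpoint returns the first checkpoint above height, or nil
    // when the chain is already past the final checkpoint.
    func (m *syncManager) findNextCheckpoint(height int64) *checkpoint {
        for i := range m.checkpoints {
            if m.checkpoints[i].height > height {
                return &m.checkpoints[i]
            }
        }
        return nil
    }

    // startSync runs at startup and whenever a new sync peer is chosen, so
    // headers-first mode resumes from the last known good block rather
    // than only from height 0.
    func (m *syncManager) startSync(locator [32]byte) {
        m.nextCheckpoint = m.findNextCheckpoint(m.bestHeight)
        if m.nextCheckpoint == nil {
            // Past the final checkpoint: fetch full blocks directly.
            m.headersFirstMode = false
            return
        }
        m.headersFirstMode = true
        m.sendGetHeaders(locator, m.nextCheckpoint.hash)
    }

    // sendGetHeaders issues a headers request unless an identical one is
    // already in flight, which avoids downloading the same headers twice
    // only to throw them away.
    func (m *syncManager) sendGetHeaders(locator, stopHash [32]byte) {
        if _, exists := m.requestedHeaders[stopHash]; exists {
            return
        }
        m.requestedHeaders[stopHash] = struct{}{}
        // The real code would push a getheaders message to the sync peer.
        fmt.Printf("requesting headers up to checkpoint height %d\n",
            m.nextCheckpoint.height)
    }

    func main() {
        m := &syncManager{
            checkpoints:      []checkpoint{{height: 11111}, {height: 33333}},
            bestHeight:       20000, // resume mid-chain rather than from 0
            requestedHeaders: make(map[[32]byte]struct{}),
        }
        m.startSync([32]byte{}) // requests headers up to the next checkpoint
        m.startSync([32]byte{}) // identical request is filtered out
    }

Bounding each request at the next checkpoint is what keeps memory usage
proportional to the maximum distance between two checkpoints and lets an
invalid chain be rejected after at most one checkpoint interval of headers.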

