On Mon, May 23, 2016 at 12:30 PM, Andreas Schildbach <andr...@schildbach.de>
wrote:

> I'm replying to https://github.com/bitcoinj/bitcoinj/issues/1259
>
> > I present some ideas for action but more thought is needed. Testnet
> > forks can be very long, to the degree that they run beyond what
> > SPVBlockStore can handle, so we need to think about this. A solution
> > for a very low-spec (memory/disk) wallet could maybe be a two-pass
> > download from peers. (I am unsure, but does bitcoinj make use of
> > getheaders during normal catch-up?)
> > Peers do not recognize hashes in the getblocks message
>
> Do you know *how* long exactly forks on testnet can be? Since mainnet
> forks are not that long (proof of work prevents them from being), I'd
> be OK with a trivial fix like simply increasing the size of the ring
> buffer (for testnet only!).
>
>
At this point I think they are essentially unbounded. I have seen chain
forks that are thousands of blocks long, and I think I've seen a few
that are over 10,000 blocks long. This tends to happen in conjunction
with what I call "block storms", which occur when the testnet difficulty
gets reset back to 1 on a difficulty change, due to a quirk in the
testnet mining rules that allows difficulty-1 blocks to be mined under
certain conditions.
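
The testnet rule I mean is roughly the following (my own sketch, not
bitcoinj code; the method name is made up, only the rule itself is the
testnet3 consensus rule as I understand it):

    // Testnet3 allows a block to be mined at the minimum difficulty
    // (difficulty 1) if more than 2 * 10 minutes have passed since the
    // previous block's timestamp.
    static boolean allowMinDifficultyBlock(long prevBlockTimeSecs,
                                           long newBlockTimeSecs) {
        final long TARGET_SPACING_SECS = 10 * 60;
        return newBlockTimeSecs > prevBlockTimeSecs + 2 * TARGET_SPACING_SECS;
    }

As far as I understand, when such a difficulty-1 block ends up in the
wrong place relative to a retarget boundary, the retarget carries the
low difficulty into the whole next period, which is what produces the
storm of fast blocks and the long forks that come with them.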


> > The hashes sent in the getblocks message should help a peer decide
> > which blocks the client needs in order to catch up with the chain. I
> > noticed that the peer's response to getblocks was to send the first
> > 500 blocks of the block chain. These are of course not of much use:
> > they will not change the chain head in the wallet's store, so the
> > wallet is stuck.
> >
> > Analysis: bitcoinj seems to fill in 100 hashes starting from its
> > chain head in a linear fashion, and if all of them are on a fork
> > that was discarded, the peer cannot find any common block except the
> > genesis block, so it starts there in the reply inv message.
>
> Exactly; also see comment and implementation at
> org.bitcoinj.core.Peer.blockChainDownloadLocked(Sha256Hash).
>
> > Action: Use a better set of hashes from the known blocks in the
> > store (5000 for SPVBlockStore). A better selection is proposed on
> > the bitcoin wiki: "dense to start, but then sparse". This helps, but
> > I ran into the next problem:
> > getdata for 500 blocks does not trigger a re-organize despite the
> > head being on a dead fork
>
> Should be easy to do. The only problem is that scanning the block
> store backwards is a bit expensive, but let's see.
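
Something like the locator construction below is what I had in mind (an
untested sketch; buildLocator and getBlockAtDepth are placeholder names,
not existing bitcoinj methods):

    // "Dense to start, but then sparse": the last ~10 block hashes one
    // by one, then doubling the step each time, always ending with the
    // genesis hash so the peer can always find a common block.
    List<Sha256Hash> buildLocator(StoredBlock head) {
        List<Sha256Hash> locator = new ArrayList<>();
        int step = 1;
        int depth = 0;
        while (true) {
            StoredBlock b = getBlockAtDepth(head, depth); // head.height - depth
            if (b == null)
                break;                        // walked past what the store holds
            locator.add(b.getHeader().getHash());
            if (locator.size() >= 10)
                step *= 2;                    // go sparse after the first 10
            depth += step;
        }
        locator.add(params.getGenesisBlock().getHash());
        return locator;
    }

With the 5000 blocks in SPVBlockStore this comes out to a couple of
dozen hashes while still reaching all the way back through a long fork.
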
>
> > Requesting blocks with better hashes can still leave the wallet
> > store's chain head unchanged. This results in the same request for
> > blocks again, and the store head is effectively stuck.
>
> How did you test this? Did you already fix the above problem?
>
> > Analysis: This happens when the downloaded blocks, despite belonging
> > to the correct chain, do not trigger a re-organize even though the
> > head is on a dead fork. Why? It seems the special difficulty jumps on
> > testnet can make a branch of blocks have less total work despite
> > being very much longer, e.g. several hundred blocks longer than the
> > head. For some reason the network selected this longer chain for many
> > blocks.
>
> The reorg logic is unfortunately tricky, and because reorgs are
> difficult to test I fear they are not tested well and are prone to
> regressions.
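
To put rough (purely illustrative) numbers on the total-work point
above: the work a block contributes is proportional to its difficulty,
so 1,000 extra blocks mined at difficulty 1 add about 1,000 units of
work, while just 20 blocks at difficulty 500 on the other branch add
about 10,000. The branch that is almost a thousand blocks shorter can
therefore still carry more total work, which is exactly the case where
chain length and the total-work rule point in opposite directions.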
>
> > Action: Not sure here. One way is to follow the current rules: in
> > these cases we need to download many blocks (thousands) to trigger a
> > re-org. To make that happen, the store needs to track not only the
> > chain head but also what, at this point, looks like a fork (less
> > total work), so it can send a different getblocks and getdata to
> > fetch blocks it has not downloaded yet. Note that getblocks can
> > easily be used to ask for multiple branches in one request. But
> > without extending SPVBlockStore this solution does not work, as we
> > run out of space (to manage the reorg, bitcoinj currently seems to
> > need all blocks in both branches back to the split point).
> > Another way might be to discard blocks back to the split point and
> > restart with getblocks to peers (but this would trust that peer more
> > than those that produced the current head). Does the transaction
> > confidence model allow this two-step operation: first lowering the
> > head and total work, and then following another branch that
> > eventually reaches higher total work much later? Currently I assume
> > total work can only increase. In general, tx confidence changes for
> > these deep reorg events seem very hard to handle in applications
> > anyway. Any ideas?
>
> My gut feeling is that total work should only increase. I'd try
> increasing block store capacity first.
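
Capacity does look cheap to increase, for what it's worth. If I read
SPVBlockStore correctly, each record is a 32-byte hash plus a 96-byte
compact StoredBlock, i.e. 128 bytes, so going from 5,000 to 50,000
headers would only grow the memory-mapped file from roughly 640 KB to
about 6.4 MB (plus a small fixed header region). The open question is
whether the linear scans over the ring buffer stay fast enough at that
size.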
