Hi Dave, that's great! Apart from what was written on this thread (and
the linked issue), I'm not aware of any ongoing efforts to improve the
testnet situation.


On 08/25/2016 03:09 PM, David Wilson wrote:
> I am writing just to let you know that I spent the last two nights
> looking at this issue. Most of that time was spent learning how the
> chain is updated and writing some utility code.
> 
> Please let me know if any of you are also actively working on this.
> 
> Thanks!
> Dave
> 
> On Tuesday, May 24, 2016 at 3:54:04 AM UTC-5, Jarl Fransson wrote:
> 
> 
>     On Monday, May 23, 2016 at 18:30:19 UTC+2, Andreas Schildbach wrote:
> 
>         I'm replying to https://github.com/bitcoinj/bitcoinj/issues/1259
> 
>         > I present some ideas for action, but more thoughts are
>         > needed. Testnet forks can be very long, to the degree that
>         > they run beyond what SPVBlockStore can handle, so we need to
>         > think about this. A solution for a very low-spec
>         > (memory/disk) wallet could maybe be a two-pass download from
>         > peers. (I am unsure: does bitcoinj make use of getheaders
>         > during normal catch-up?)
>         > Peers do not recognize the hashes in the getblocks message.
> 
>         Do you know exactly *how* long forks on testnet can be? Since
>         mainnet forks are not that long (proof of work prevents them
>         from being), I'd be OK with a trivial fix like simply
>         increasing the size of the ring buffer (for testnet only!).
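For reference on the getheaders question: bitcoinj does ship an
org.bitcoinj.core.GetHeadersMessage. Below is a minimal sketch of what a
headers-only first pass could look like, assuming a locator of known
hashes has already been built; requestHeaders is an illustrative helper,
not existing API, and this is not the library's actual download path.

    import java.util.List;
    import org.bitcoinj.core.GetHeadersMessage;
    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.core.Peer;
    import org.bitcoinj.core.Sha256Hash;

    public class HeadersFirst {
        // Sketch: ask a peer for headers only, starting from blocks we
        // know. Headers are 80 bytes each, so scanning even a
        // 10000-block testnet fork this way is cheap compared to
        // fetching full blocks.
        static void requestHeaders(Peer peer, NetworkParameters params,
                List<Sha256Hash> locator) {
            // ZERO_HASH as the stop hash means "send as many as you
            // can"; the protocol caps a single headers reply at 2000.
            peer.sendMessage(new GetHeadersMessage(params, locator,
                    Sha256Hash.ZERO_HASH));
        }
    }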
> 
>     I agree with Jameson's observations: forks can be 10000 blocks
>     long. I think this possibility follows directly from testnet's
>     difficulty rules combined with the most-total-work rule for
>     selecting a branch. If some part of the network misses the latest
>     high-difficulty (high-work) block, whether by ignorance or by
>     tactic, and gets to mine at the low difficulty, it will just crank
>     out thousands of blocks very quickly. Any nodes/SPV clients that
>     saw the last high-difficulty block will stay on it as long as its
>     total work is greatest.
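To put rough numbers on that (purely illustrative, not measured from the
network): with testnet's 20-minute rule anyone can mine
minimum-difficulty blocks, so a single block mined at the real
difficulty can outweigh an enormous number of them in total work. The
difficulty value below is an assumption made up for the example.

    public class TestnetWork {
        public static void main(String[] args) {
            long realDifficulty = 1_000_000L; // assumed, for illustration
            long minDifficulty = 1L;          // the 20-minute-rule floor
            // Min-difficulty blocks matched in total work by ONE real
            // block:
            System.out.println(realDifficulty / minDifficulty); // 1000000
            // So a branch thousands of blocks longer can still carry
            // less total work than a branch containing a single
            // high-difficulty block.
        }
    }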
> 
>     Because of these long forks, I am not sure increasing the
>     SPVBlockStore ring buffer is a very good solution. We could make
>     the buffer size an optional argument.
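For concreteness, a minimal sketch of what that optional argument could
look like. Only the two-argument constructor exists today; the
three-argument call in the comment is the proposed extension, not
existing API.

    import java.io.File;
    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.params.TestNet3Params;
    import org.bitcoinj.store.BlockStoreException;
    import org.bitcoinj.store.SPVBlockStore;

    public class StoreSetup {
        static SPVBlockStore openTestnetStore(File file)
                throws BlockStoreException {
            NetworkParameters params = TestNet3Params.get();
            // Today the ring buffer capacity is fixed at 5000 headers
            // for every network:
            return new SPVBlockStore(params, file);
            // Proposed (hypothetical, not existing API): let the caller
            // size the buffer, so testnet wallets can keep a far deeper
            // history, e.g.:
            //     new SPVBlockStore(params, file, 20000);
        }
    }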
> 
> 
>         > The hashes sent in the getblocks message should help a peer
>         > decide which blocks the client needs to catch up with the
>         > chain. I noticed that the peer's response to getblocks was
>         > to send the first 500 blocks of the block chain. These are
>         > not of much use, of course. They will not change the chain
>         > head in the wallet's store, so the wallet will be stuck.
>         >
>         > Analysis: bitcoinj seems to fill in 100 hashes starting from
>         > its chain head in a linear fashion, and if all of them are
>         > on a fork that was discarded, the peer cannot find any
>         > common block except for the genesis block, so it starts
>         > there in the inv reply.
> 
>         Exactly; see also the comment and implementation at
>         org.bitcoinj.core.Peer.blockChainDownloadLocked(Sha256Hash).
> 
>         > Action: Use a better set of hashes from the known blocks in
>         > the store (5000 for SPVBlockStore). A better selection is
>         > proposed on the bitcoin wiki: "dense to start, but then
>         > sparse". This helps, but I ran into the next problem:
>         > Fetching 500 blocks via getdata does not trigger a
>         > re-organize despite the head being on a dead fork.
> 
>         Should be easy to do. The only problem is that scanning the
>         block store backwards is a bit expensive, but let's see.
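For illustration, the "dense to start, but then sparse" selection is
straightforward to build against a BlockStore. A rough sketch, not the
actual bitcoinj locator code; the point where the step starts doubling
is a tunable detail.

    import java.util.ArrayList;
    import java.util.List;
    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.core.Sha256Hash;
    import org.bitcoinj.core.StoredBlock;
    import org.bitcoinj.store.BlockStore;
    import org.bitcoinj.store.BlockStoreException;

    public class Locators {
        // Walk back from the head: one block at a time for the first
        // ten hashes, then doubling the step, so a 5000-header store
        // yields only ~25 hashes while still reaching far below a deep
        // fork point.
        static List<Sha256Hash> buildLocator(BlockStore store,
                StoredBlock head, NetworkParameters params)
                throws BlockStoreException {
            List<Sha256Hash> locator = new ArrayList<>();
            StoredBlock cursor = head;
            int step = 1;
            while (cursor != null) {
                locator.add(cursor.getHeader().getHash());
                if (locator.size() >= 10)
                    step *= 2; // go sparse after the first ten entries
                for (int i = 0; i < step && cursor != null; i++)
                    cursor = cursor.getPrev(store); // null off the end
            }
            // Always anchor the locator at genesis so the peer has at
            // least one guaranteed common block.
            locator.add(params.getGenesisBlock().getHash());
            return locator;
        }
    }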
> 
>         > Requesting blocks with better hashes can still leave the
>         > wallet store's chain head unchanged. This results in the
>         > same request for blocks again, and the store head is
>         > effectively stuck.
> 
>         How did you test this? Did you already fix the above problem?
> 
> 
>     Yes, I just tried it to see if I could unstick a particular
>     wallet. Ignoring any performance issues, I simply traversed back
>     through the whole store when selecting hashes.  I hoped it would
>     solve most of the problems, but it did not, so I wonder if it is
>     worth it.  I think we can say that, for mainnet, having the top
>     100 blocks in SPVBlockStore on an orphaned fork is very, very
>     unlikely.
> 
> 
>         > Analysis: This happens when the downloaded blocks, despite
>         > belonging to the correct chain, do not trigger a re-organize
>         > even though the head is on a dead chain. Why? It seems the
>         > special difficulty jumps on testnet can make a branch of
>         > blocks have less total work despite being very much longer,
>         > e.g. several hundred blocks longer than the head. For some
>         > reason the network selected this longer chain for many
>         > blocks.
> 
>         The reorg logic is unfortunately tricky, and because reorgs
>         are difficult to test, I fear they are not well tested and are
>         prone to regressions.
> 
> 
>     At least I am quite sure about what happens here. I think it is
>     not a new bug but a consequence of the design.  The blocks
>     received from peers in one batch (500 blocks) do not carry enough
>     total work to trigger a re-org.
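Restated as code: bitcoinj only moves the head when the candidate's
cumulative work strictly exceeds the current head's (StoredBlock carries
this as its chain work). A simplified sketch of that comparison, not the
actual reorg code path.

    import java.math.BigInteger;
    import org.bitcoinj.core.StoredBlock;

    public class ReorgCheck {
        // On testnet, one block mined at the real difficulty can carry
        // more work than thousands of min-difficulty blocks, so this
        // stays false until enough of the longer branch is downloaded.
        static boolean wouldReorg(StoredBlock candidate,
                StoredBlock head) {
            BigInteger candidateWork = candidate.getChainWork();
            return candidateWork.compareTo(head.getChainWork()) > 0;
        }
    }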
> 
> 
>         > Action: Not sure here; one way is to follow the current
>         > rules. In these cases we need to download many blocks
>         > (thousands) to trigger a re-org. To make that happen, the
>         > store needs to track not only the chain head but also what,
>         > at this point, looks like a fork (less total work), so it
>         > can send a different getblocks and getdata to fetch blocks
>         > it has not downloaded yet. Note that getblocks can easily be
>         > used to ask for multiple branches in one request. But
>         > without extending SPVBlockStore this solution does not work,
>         > as we run out of space (to manage the reorg, bitcoinj
>         > currently seems to need all blocks in both branches back to
>         > the split point).
>         > Another way is maybe to discard blocks back to the split
>         > point and restart with getblocks to peers (but this would
>         > trust this peer more than those that produced the current
>         > head). Does the transaction confidence model allow this
>         > two-step operation: first lowering the head and total work,
>         > then following another branch that eventually reaches higher
>         > total work much later? Currently I assume total work can
>         > only increase. In general, tx confidence changes for these
>         > deep reorg events seem very hard to handle in applications
>         > anyway. Any ideas?
> 
>         My gut feeling is that total work should only increase. I'd try
>         increasing block store capacity first.
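On the "multiple branches in one request" point: a getblocks locator is
just a list of hashes, so hashes from the current head's branch and
from a tracked side branch can be concatenated. A sketch under two
assumptions: buildLocator is the dense-to-sparse helper sketched
earlier, and trackedFork is hypothetical bookkeeping that the store
does not do today.

    import java.util.ArrayList;
    import java.util.List;
    import org.bitcoinj.core.GetBlocksMessage;
    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.core.Peer;
    import org.bitcoinj.core.Sha256Hash;
    import org.bitcoinj.core.StoredBlock;
    import org.bitcoinj.store.BlockStore;
    import org.bitcoinj.store.BlockStoreException;

    public class ForkAwareRequest {
        static void requestBothBranches(Peer peer, BlockStore store,
                StoredBlock chainHead, StoredBlock trackedFork,
                NetworkParameters params) throws BlockStoreException {
            List<Sha256Hash> locator = new ArrayList<>();
            locator.addAll(Locators.buildLocator(store, chainHead,
                    params));                      // our current branch
            locator.addAll(Locators.buildLocator(store, trackedFork,
                    params));                      // tracked side branch
            // A peer scans the locator for the first hash it knows, so
            // mixing two branches works whichever branch the peer is on.
            peer.sendMessage(new GetBlocksMessage(params, locator,
                    Sha256Hash.ZERO_HASH)); // ZERO_HASH = no stop hash
        }
    }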
> 
>     I am not sure that helps very much for testnet unless the capacity
>     gets very big. And then there are algorithmic scaling problems, as
>     SPVBlockStore is not really made for fast traversal (e.g. problems
>     with deep forks and many evaluations of
>     AbstractBlockChain.findSplit).  I think there are better ways that
>     would still keep it lightweight, but that requires hard work.
> 
>     At least I am fairly sure these particular cases cannot (or are
>     very unlikely to) happen on mainnet, and we now know a bit more
>     about the problems.
> 

