The INV scheme used by Bitcoin is not very efficient at all. Once you take into 
account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV 
takes 193 bytes, according to wireshark: 127 bytes for the INV message and 66 
bytes for the ACK. All of this is for 32 bytes of payload, for an "efficiency" 
of 16.6% (i.e. 83.4% overhead). For a 400-byte transaction with 20 peers, this 
can result in 3860 bytes sent in INVs for only 400 bytes of actual data.
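As a back-of-the-envelope check, the overhead figures above fall out of the
packet sizes directly (all byte counts are the ones from the wireshark capture
described in this post):

```python
# Per-INV overhead for a single 32-byte tx hash, using the sizes above.
INV_PACKET = 127      # INV message incl. Bitcoin/TCP/IP/ethernet headers
ACK_PACKET = 66       # the bare TCP ACK coming back
PAYLOAD = 32          # one transaction hash

total = INV_PACKET + ACK_PACKET          # bytes on the wire per INV
efficiency = PAYLOAD / total             # fraction that is actual payload

peers = 20
inv_bytes = peers * total                # INV traffic to announce one tx

print(total, round(efficiency * 100, 1), inv_bytes)
# -> 193 16.6 3860
```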

An improvement that I've been thinking about implementing (after Blocktorrent) 
is an option for batched INVs. Including the hashes for two txes per IP packet 
instead of one would increase the INV size to 229 bytes for 64 bytes of payload 
-- that is, each additional 32-byte hash adds only 36 bytes to the packet. 
This is a marginal efficiency of 88.9% for each hash after the first. This is 
*much* better.
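The marginal numbers follow from comparing the one-hash and two-hash packet
sizes quoted above:

```python
# Marginal cost of the second hash in a batched INV, using the
# 193-byte single-hash and 229-byte two-hash figures above.
single_hash_inv = 193
double_hash_inv = 229

marginal_bytes = double_hash_inv - single_hash_inv   # wire bytes per extra hash
marginal_eff = 32 / marginal_bytes                   # payload fraction of those bytes

print(marginal_bytes, round(marginal_eff * 100, 1))
# -> 36 88.9
```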

Waiting a short period of time to accumulate several hashes together and send 
them as a batched INV could easily reduce the traffic of running bitcoin nodes 
by a factor of 2, and possibly even more than that. However, if too many people 
used it, such a technique would slightly slow down the propagation of 
transactions across the bitcoin network, which might make some people unhappy. 
The ill effects could likely be mitigated by choosing a different batch size 
for each peer based on that peer's preferences. For example, each node could 
choose one or two peers to which it sends INVs in batches of one or two, four 
more peers to which it sends batches of two to four, and send to the rest in 
batches of four to eight.
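The tiered batch-size idea could be sketched roughly as follows. This is a
hypothetical illustration, not Bitcoin Core code; the function name,
peer representation, and the exact tier boundaries (2 fast peers, 4 medium,
rest slow) are my own assumptions:

```python
import random

# Hypothetical sketch: assign each connected peer a target INV batch size,
# so a couple of peers still get near-immediate announcements while the
# rest get larger, more wire-efficient batches.
def assign_batch_sizes(peers):
    shuffled = random.sample(peers, len(peers))  # randomize tier membership
    sizes = {}
    for i, peer in enumerate(shuffled):
        if i < 2:                        # one or two low-latency peers
            sizes[peer] = random.choice([1, 2])
        elif i < 6:                      # four medium peers
            sizes[peer] = random.randint(2, 4)
        else:                            # everyone else: batches of 4-8
            sizes[peer] = random.randint(4, 8)
    return sizes

sizes = assign_batch_sizes(["peer%d" % i for i in range(20)])
```

A node would then flush its pending INV queue for a peer whenever the queue
reaches that peer's batch size (or a timeout fires), rather than announcing
every transaction individually.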

(This is a continuation of a conversation started on 
https://bitcointalk.org/index.php?topic=1377345.)

Jonathan


_______________________________________________
bitcoin-dev mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev