Hi all,
Most light wallets will want to download the minimum amount of data required to 
operate, which means they would ideally download the smallest possible filters 
containing the subset of elements they need.
What if, instead of trying to decide up front which subset of elements will be 
most useful to include in the filters, and what the size tradeoff should be, we 
let the full node decide which subsets of elements it serves filters for?

For instance, a full node would advertise that it could serve filters for the 
subsets 110 (txid+script+outpoint), 100 (txid only), 011 (script+outpoint), 
etc. A light client could then choose to download the minimal filter type 
covering its needs.
The obvious benefit of this would be minimal bandwidth usage for the light 
client, but there are also some less obvious ones. We wouldn’t have to decide 
up front what each filter type should contain, only the possible elements a 
filter can contain (more can be added later without breaking existing clients). 
This, I think, would let the most useful filter types grow organically, with 
full-node implementations shipping sane defaults for the served filter types 
(maybe even all possible types, as long as the number of elements is small) 
and letting their operators add/remove types at will.
The main disadvantage of this, as I see it, is that the number of possible 
filter types grows exponentially in the number of element types. 
However, this would let us start out small with only the elements we need, and 
in the worst case node operators just choose to serve the subsets 
corresponding to what are now called “regular” + “extended” filters anyway, 
requiring no more resources.
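The blowup and the client's selection rule can be sketched as follows. The 
popcount-based notion of "smallest covering filter type" is an assumption for 
illustration (a real client would presumably compare advertised filter sizes):

```python
def num_filter_types(n_elements):
    # Every non-empty subset of element types is a possible filter type,
    # so each new element type doubles the number of possible types.
    return 2 ** n_elements - 1

def pick_minimal(advertised, needed):
    # Among the filter types a node advertises, pick one that covers
    # the client's needed elements, preferring the fewest element types
    # as a rough proxy for "smallest filter to download".
    candidates = [t for t in advertised if t & needed == needed]
    return min(candidates, key=lambda t: bin(t).count("1"), default=None)
```

With three element types there are 7 possible filter types; a client needing 
only 100 from a node advertising {111, 110, 100} would pick 100, and a client 
needing 011 from a node serving only 100 would get no match.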
This would also give us some data on which filter types are the most widely 
used, which could be useful in deciding what should be part of the filters we 
eventually commit to in blocks.
- Johan

On Sat, May 19, 2018 at 5:12, Olaoluwa Osuntokun via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
On Thu, May 17, 2018 at 2:44 PM Jim Posen via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Monitoring inputs by scriptPubkey vs input-txid also has a massive
>> advantage for parallel filtering: You can usually know your pubkeys
>> well in advance, but if you have to change what you're watching block
>> N+1 for based on the txids that paid you in N you can't filter them
>> in parallel.
>
> Yes, I'll grant that this is a benefit of your suggestion.
Yeah, parallel filtering would be pretty nice. We've implemented serial 
filtering for btcwallet [1] for the use-case of rescanning after a seed phrase 
import. Parallel filtering would help here, but we also don't yet take 
advantage of batch querying for the filters themselves. This would speed up 
the scanning by quite a bit.
I really like the filtering model, though: it really simplifies the code, and 
we can leverage identical logic for btcd (which has RPCs to fetch the filters) 
as well.
[1]: https://github.com/Roasbeef/btcwallet/blob/master/chain/neutrino.go#L180 
_______________________________________________ bitcoin-dev mailing list 
bitcoin-dev@lists.linuxfoundation.org 
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
