Re: [bitcoin-dev] PubRef - Script OP Code For Public Data References

2019-07-27 Thread Mike Brooks via bitcoin-dev
Hey ZmnSCPxj,

As to your first point.  I wasn't aware there was so much volatility at the
tip; also, 100 blocks is quite the difference!  I agree no one could
reference a transaction in a newly formed block, but I'm curious how this
number was chosen. Do you have any documentation or code that you can share
related to how re-orgs are handled? Do we have a kind of 'consensus
checkpoint' when a re-org is no longer possible? This is a very interesting
topic.

 > * It strongly encourages pubkey reuse, reducing privacy.
Privacy-aware users are free to have single-use p2sh transactions, and they
are free to use the same SCRIPT opcodes we have now.  Adding an extra
opcode helps the utility of SCRIPT by compressing the smallest SegWit
transactions by a further ~36% (from 233 bytes to 148 bytes).  Cost
savings are a great utility, and they need not undermine anyone's privacy.
The resulting p2sh SCRIPT could end up using public key material that could
be compressed with a PubRef - everyone wins.

 > * There is a design-decision wherein a SCRIPT can only access data in
the transaction that triggers its execution.
In order for a compression algorithm like LZ78 to be written in a
stack-based language like SCRIPT, there needs to be pointer arithmetic to
refer back to the dictionary or a larger application state.  If Bitcoin's
entire stack was made available to the SCRIPT language as an application
state, then LZ78-like compression could be accomplished using PubRef. If a
Script can reuse a PUSHDATA, then transactions will be less repetitious...
and this isn't free.  There is a cost in supporting this opcode.
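To make the analogy concrete, here is a toy LZ78 coder in Python (not tied to any Bitcoin implementation). Each output pair points back into a shared, append-only dictionary instead of repeating data, which is the same role a PubRef index would play for repeated PUSHDATAs:

```python
# Toy LZ78 compressor: emits (dictionary_index, next_symbol) pairs.
# Index 0 means "no prefix". Each pair is a back-reference into a
# shared, append-only dictionary, analogous to a PubRef pointing at
# an earlier PUSHDATA.
def lz78_compress(data):
    dictionary = {}          # phrase -> index (1-based)
    output = []
    phrase = ""
    for symbol in data:
        candidate = phrase + symbol
        if candidate in dictionary:
            phrase = candidate
        else:
            output.append((dictionary.get(phrase, 0), symbol))
            dictionary[candidate] = len(dictionary) + 1
            phrase = ""
    if phrase:
        output.append((dictionary[phrase], ""))
    return output

def lz78_decompress(pairs):
    phrases = [""]           # index 0 is the empty phrase
    out = []
    for index, symbol in pairs:
        phrase = phrases[index] + symbol
        out.append(phrase)
        phrases.append(phrase)
    return "".join(out)
```

The dictionary grows append-only and is never mutated, mirroring the immutability property that makes references into confirmed block data safe.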

Giving the SCRIPT language access to more data opens the door for
interesting algorithms, not just LZ78.  It would be interesting to discuss
how this application state could be brought to the language.  It strikes me
that your concerns (ZmnSCPxj's), as well as the topic of pruning brought up
by others (including Pieter Wuille), could be addressed by the creation of a
side-chain of indexes.  A validator would not need the hash table, which is
only required for O(1) PubRef creation; these nodes need not be burdened
with that added index.  A validator only needs an array of PUSHDATA
elements and can then validate any given SCRIPT in O(1).
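A minimal sketch of the two index structures described above (class and method names are illustrative, not from any implementation): an append-only array gives validators O(1) resolution, while the hash table for O(1) creation is only needed by wallets constructing new PubRef scripts.

```python
class PubRefIndex:
    def __init__(self):
        self.values = []       # index -> pushdata bytes (validator side)
        self.lookup = {}       # pushdata bytes -> index (creator side only)

    def append(self, pushdata: bytes) -> int:
        """Record a confirmed PUSHDATA; returns its PubRef index."""
        if pushdata in self.lookup:
            return self.lookup[pushdata]
        self.lookup[pushdata] = len(self.values)
        self.values.append(pushdata)
        return self.lookup[pushdata]

    def resolve(self, index: int) -> bytes:
        """O(1) array lookup performed during script validation."""
        return self.values[index]
```

A pure validator could drop `self.lookup` entirely and keep only the array, which is the point made above about not burdening those nodes.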

Just a thought.

Best Regards,
Mike


On Fri, Jul 19, 2019 at 11:08 AM ZmnSCPxj  wrote:

> Good morning Mike,
>
> > PubRef is not susceptible to malleability attacks because the blockchain
> is immutable.
>
> This is not quite accurate.
> While very old blocks are indeed immutable-in-practice, chain tips are
> not, and are often replaced.
> At least specify that such data can only be referred to if buried under
> 100 blocks.
>
> --
>
> There are a number of other issues:
>
> * It strongly encourages pubkey reuse, reducing privacy.
> * There is a design-decision wherein a SCRIPT can only access data in the
> transaction that triggers its execution.
>   In particular, it cannot access data in the block the transaction is in,
> or in past blocks.
>   For example, `OP_CHECKLOCKTIMEVERIFY` does not check the blockheight of
> the block that the transaction is confirmed in, but instead checks only
> `nLockTime`, a field in the transaction.
>   * This lets us run SCRIPT in isolation on a transaction, exactly one
> time, when the transaction is about to be put into our mempool.
> When a new block arrives, transactions in our mempool that are in the
> block do not need to have their SCRIPTs re-executed or re-validated.
>
> > In order for a client to make use of the PUBREF operations they’ll need
> access to a database that look up public-keys and resolve their PUBREF
> index.  A value can be resolved to an index with a hash-table lookup in
> O(1) constant time. Additionally, all instances of PUSHDATA can be indexed
> as an ordered list, resolution of a PUBREF index to the intended value
> would be an O(1) array lookup.  Although the data needed to build and
> resolve public references is already included with every full node,
> additional computational effort is needed to build and maintain these
> indices - a tradeoff which provides smaller transaction sizes and relieving
> the need to store repetitive data on the blockchain.
>
> This is not only necessary at the creator of the transaction --- it is
> also necessary at every validator.
>
> In particular, consider existing pruning nodes, which cannot refer to
> previous block data.
>
> We would need to have another new database containing every `PUSHDATA` in
> existence.
> And existing pruning nodes would need to restart from genesis, as this
> database would not exist yet.
>
> Regards,
> ZmnSCPxj
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Improving JoinMarket's resistance to sybil attacks using fidelity bonds

2019-07-27 Thread David A. Harding via bitcoin-dev
On Thu, Jul 25, 2019 at 12:47:54PM +0100, Chris Belcher via bitcoin-dev wrote:
> A way to create a fidelity bond is to burn an amount of bitcoins by
> sending to an OP_RETURN output. Another kind is time-locked addresses
> created using OP_CHECKLOCKTIMEVERIFY where the valuable thing being
> sacrificed is time rather than money, but the two are related because of
> the time-value-of-money.

Timelocking bitcoins, especially for long periods, carries some special
risks in Bitcoin:

1. Inability to sell fork coins, also creating an inability to influence
the price signals that help determine the outcome of chainsplits.

2. Possible inability to transition to new security mechanisms if
a major weakness is discovered in ECC or a hash function.

An alternative to timelocks might be coin age---the value of a UTXO
multiplied by the time since that UTXO was confirmed.  Coin age may be
even harder for an attacker to acquire given that it is a measure of
past patience rather than future sacrifice.  It also doesn't require
using any particular script and so is flexible no matter what policy the
coin owner wants to use (especially if proof-of-funds signatures are
generated using something like BIP322).
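The coin-age quantity described above can be written down directly. A hedged sketch, using the confirmation count that `bitcoin-cli gettxout` reports (the 600-second block interval is the usual approximation):

```python
def coin_age(utxo_value_btc: float, confirmations: int,
             seconds_per_block: int = 600) -> float:
    """Coin age in BTC-days: UTXO value multiplied by days since
    that UTXO was confirmed (confirmations x ~10 minutes per block)."""
    days = confirmations * seconds_per_block / 86400
    return utxo_value_btc * days
```

So 1 BTC confirmed 144 blocks (one day) ago has accumulated 1 BTC-day of coin age; spending the UTXO resets it, which is what makes it a measure of past patience.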

Any full node (archival or pruned) can verify coin age using the UTXO
set.[1]  Unlike script-based timelocks (CLTV or CSV), there is currently no
SPV-level secure way to prove to lite clients that an output is still
unspent; however, such verification may be possible within each lite
client's own security model related to transaction withholding attacks:

- Electrum-style clients can poll their server to see if a particular
  UTXO is unspent.

- BIP158 users who have saved their past filters to disk can use them to
  determine which blocks subsequent to the one including the UTXO may
  contain a spend from it.  However, since a UTXO can be spent in the
  same block, they'd always need to download the block containing the
  UTXO (alternatively, the script could contain a 1-block CSV delay
  ensuring any spend occurred in a later block).  If BIP158 filters
  become committed at some point, this mechanism is upgraded to SPV-level
  security.
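The BIP158 scan above can be sketched as follows. This is a toy model: real BIP158 filters are Golomb-coded sets with false positives, so each "filter" here is simplified to a plain set of scriptPubKeys, but the control flow (always fetch the UTXO's own block, then fetch later blocks whose filter matches) is the same:

```python
def blocks_to_download(filters, watched_script, utxo_height):
    """filters: {height: set_of_scripts} saved to disk by the client.
    Returns the block heights that must be downloaded to detect a spend."""
    heights = set()
    for height, block_filter in filters.items():
        if height < utxo_height:
            continue                     # a spend cannot precede the UTXO
        if height == utxo_height:
            heights.add(height)          # same-block spends are possible,
            continue                     # so always fetch this block
        if watched_script in block_filter:
            heights.add(height)          # filter match (or false positive)
    return sorted(heights)
```

With the 1-block CSV delay mentioned above, the unconditional fetch of the UTXO's own block could be dropped.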

> Note that a long-term holder (or hodler) of bitcoins can buy time-locked
> fidelity bonds essentially for free, assuming they never intended to
> transact with their coins much anyway.

This is the thing I most like about the proposal.  I suspect most
honest makers are likely to have only a small portion of their funds
under JoinMarket control, with the rest sitting idle in a cold wallet.
Giving makers a way to communicate that they fit that user template
would indeed seem to provide significant sybil resistance.

-Dave

[1] See: bitcoin-cli help gettxout


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Luke Dashjr via bitcoin-dev
On Tuesday 23 July 2019 14:47:18 Andreas Schildbach via bitcoin-dev wrote:
> 3) Afaik, it enforces/encourages address re-use. This stems from the
> fact that the server decides on the filter and in particular on the
> false positive rate. On wallets with many addresses, a hardcoded filter
> will be too blurry and thus each block will be matched. So wallets that
> follow the "one address per incoming payment" pattern (e.g. HD wallets)
> at some point will be forced to wrap their key chains back to the
> beginning. If I'm wrong on this one please let me know.

BTW, you are indeed wrong on this. You don't need to match every single 
address the wallet has ever used, only outstanding addresses that haven't 
been paid. ;)
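Luke's point can be sketched with a toy Bloom filter: the wallet loads only addresses still awaiting payment and rebuilds the filter as they get paid. (Real BIP37 uses murmur3 hashing; sha256 is used here only to keep the sketch self-contained.)

```python
import hashlib

class ToyBloom:
    def __init__(self, size_bits=1024, n_hashes=3):
        self.bits = bytearray(size_bits // 8)
        self.size_bits = size_bits
        self.n_hashes = n_hashes

    def _positions(self, item: bytes):
        # Derive n_hashes bit positions from salted sha256 digests.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.size_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def filter_for(outstanding_addresses):
    """Build a filter covering only addresses that haven't been paid yet."""
    bloom = ToyBloom()
    for addr in outstanding_addresses:
        bloom.add(addr.encode())
    return bloom
```

Because the filter only ever needs to cover the outstanding set, its size (and false-positive rate) stays bounded regardless of how many addresses the wallet has used historically.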

Luke


Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Matt Corallo via bitcoin-dev
This conversation went off the rails somewhat. I don't think there's any 
immediate risk of NODE_BLOOM peers being unavailable. This is a defaults 
change, not a removal of the code to serve BIP 37 peers (nor would I suggest 
removing said code while people still want to use them - the maintenance burden 
isn't much). Looking at historical upgrade cycles, ignoring any other factors, 
there will be a large number of nodes serving NODE_BLOOM for many years.

Even more importantly, if you need them, run a node or two. As long as no one 
is exploiting the issues with them, such a node isn't *too* expensive. Or 
don't; I guarantee you Chainalysis or some competitor of theirs will very 
happily serve bloom-filtered clients as long as such clients want to 
deanonymize themselves. We already see that a plurality of nodes on the 
network are clearly not run-of-the-mill Core nodes, many of which are likely 
deanonymization efforts.

In some cases BIP 157 is a replacement; in some cases, indeed, it is not. I 
agree at a protocol level we shouldn't be passing judgement about how users 
wish to interact with the Bitcoin system (aside from not putting our own, 
personal, effort into building such things) but that isn't what's happening 
here. This is an important DoS fix for the average node, and I don't really 
understand the argument that this is going to break existing BIP 37 wallets, 
but if it makes you feel any better I can run some beefy BIP 37 nodes.

Matt

> On Jul 26, 2019, at 06:04, Jonas Schnelli via bitcoin-dev 
>  wrote:
> 
> [...]

Re: [bitcoin-dev] Improving JoinMarket's resistance to sybil attacks using fidelity bonds

2019-07-27 Thread Dmitry Petukhov via bitcoin-dev
On Fri, 26 Jul 2019 10:10:15 +0200
Tamas Blummer via bitcoin-dev 
wrote:

> Imposing opportunity costs however requires larger time locked
> amounts than burning and the user might not have sufficient funds to
> do so. This is however not a restriction but an opportunity that can
> give rise to an additional market of locking UTXOs in exchange of a
> payment.
> 
> This would give rise to a transparent interest rate market for
> Bitcoin, an additional huge benefit.

Wouldn't that 'locked utxo rent' market just drive the cost of attack
down to manageable levels for the attacker?

The owner of the locked utxo can derive potential profit from it by
being a maker, and then the profit will be reduced by operational
expenses of running a maker operation.

The owner of utxo can just 'outsource' that task to someone, and pay
some fee for the convenience.

In effect, the owner would be renting out that utxo for the price of

 -  - 

If the attacker is the entity who provides this 'maker outsourcing',
and it captures significant portion of that maker-outsourcing/utxo-rent
market, it can even receive some profit from the convenience fee, while
deanonymizing the joins.

And with pseudonymous entities, you cannot be sure how much of that
market the attacker controls.




Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Chris via bitcoin-dev

On 7/23/19 10:47 AM, Andreas Schildbach via bitcoin-dev wrote:


> 3) Afaik, it enforces/encourages address re-use. This stems from the
> fact that the server decides on the filter and in particular on the
> false positive rate. On wallets with many addresses, a hardcoded filter
> will be too blurry and thus each block will be matched. So wallets that
> follow the "one address per incoming payment" pattern (e.g. HD wallets)
> at some point will be forced to wrap their key chains back to the
> beginning. If I'm wrong on this one please let me know.


Maybe someone who knows better can confirm, but I thought I read that the 
FP rate on the filter was chosen assuming a wallet would have a certain 
average number of addresses (i.e., they assumed use of an HD wallet). Your 
criticism might be valid for wallets that well exceed that average.




Re: [bitcoin-dev] Bitcoin Core to disable Bloom-based Filtering by default

2019-07-27 Thread Jonas Schnelli via bitcoin-dev

> 1) It causes way too much traffic for mobile users, and likely even too
> much traffic for fixed lines in not so developed parts of the world.

Yes. It causes more traffic than BIP37.
Basic block filters for the last ~7 days (1008 blocks) are about 19MB (just 
the filters).
On top, you will probably fetch a handful of irrelevant blocks due to the FPs 
and due to truly relevant txns.
A rule-of-thumb estimate: ~25MB per week of catch-up.
If you were offline for a month: ~108MB

That’s certainly more than BIP37 BF (I measured 1.6MB total traffic with the 
Android Schildbach wallet restoring a blockchain over 8 weeks [7 weeks of 
headers, 1 week of merkleblocks]).
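The figures above roughly add up; the split between filter bytes and block downloads below is my own assumption to reach the ~25MB number, not a measurement:

```python
# Back-of-the-envelope check of the catch-up traffic estimate.
filters_per_week_mb = 19              # ~1008 block filters (stated above)
block_overhead_mb = 6                 # assumed: FP blocks + relevant blocks
weekly_mb = filters_per_week_mb + block_overhead_mb
monthly_mb = weekly_mb * 52 / 12      # average weeks per month
# weekly_mb is 25; monthly_mb rounds to 108, matching the mail.
```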

But let’s look at it like this: for an additional, say, 25MB per week (maybe 
a bit more), you get the ability to filter blocks without depending on 
serving peers who may compromise your financial privacy.
Also, if you keep the filters, further rescans consume the same bandwidth 
as BIP37 BF, or less.
In other words: you have the chance to potentially increase privacy by 
consuming bandwidth in the range of a single audio podcast per week.

I would say the job of protocol developers is to protect users’ privacy 
where it’s possible (as a default).
It’s probably debatable whether 25MB per week of traffic is worth a 
potential increase in privacy, though I absolutely think 25MB/week is an 
acceptable tradeoff.
Saving traffic is possible by using BIP37 or stratum/electrum… but developers 
should make sure users are __warned about the consequences__!

Additionally, it looks like peer operators are not endlessly willing to 
serve – for free – a CPU/disk-intense service with no benefits for the 
network. I would question whether a decentralised form of BIP37 is 
sustainable in the long run (if SPV wallet providers bootstrapped a range of 
NODE_BLOOM peers to make it more reliable, that would be snake oil for the 
network).


> 
> 2) It filters blocks only. It doesn't address unconfirmed transactions.

Well, unconfirmed transactions are uncertain for various reasons.

BIP158 won’t allow you to filter the mempool.
But as soon as you are connected to the network, you may fetch txs with 
inv/getdata and pick out the relevant ones (which also causes traffic).
It is unclear, and probably impossible with the current BIP158 specs, to 
fetch transactions that are not in active relay and are not in a block 
(mempool txns; at least this is true with the currently observed relay 
tactics).


> 3) Afaik, it enforces/encourages address re-use. This stems from the
> fact that the server decides on the filter and in particular on the
> false positive rate. On wallets with many addresses, a hardcoded filter
> will be too blurry and thus each block will be matched. So wallets that
> follow the "one address per incoming payment" pattern (e.g. HD wallets)
> at some point will be forced to wrap their key chains back to the
> beginning. If I'm wrong on this one please let me know.

I’m probably the wrong guy to ask (haven’t made the numbers) but last time I 
rescanned a Core wallet (in my dev branch) with block filters (and a Core 
wallet has >2000 addresses by default) it fetched a low and acceptable amount 
of false positive blocks.
(Maybe someone who made the numbers step in here.)

Though, large wallets – AFAIK – also operate badly with BIP37.

> 
> 4) The filters are not yet committed to the blockchain. Until that
> happens we'd have to trust a server to provide correct filters.

I wouldn’t say so. It’s on a similar level to BIP37.
BIP37 is not – and cannot be – committed to the blockchain.
You fully trust the peer that it won’t…
a) create fake unconfirmed transactions (it would be the same if a BIP158 
wallet showed you unconfirmed transactions)
b) lie by omission (you will miss relevant transactions, and eventually 
sweep your wallet and lose coins)

IMO, point b) is true for BIP37 and BIP158 (as long as not committed).
In both cases, you can reduce the trust by comparing between peers / 
filter-providers.
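The cross-checking idea above can be sketched as a simple majority vote over the filter hashes returned by several peers (the peer-querying interface is hypothetical; only the acceptance rule is shown):

```python
from collections import Counter

def accept_filter_hash(peer_responses, min_agreement=2):
    """peer_responses: list of filter hashes for the same block, one per
    queried peer. Accept only a strict-majority value; otherwise treat
    the filter as untrusted and query more peers or fall back."""
    if not peer_responses:
        return None
    value, count = Counter(peer_responses).most_common(1)[0]
    if count >= min_agreement and count > len(peer_responses) // 2:
        return value
    return None   # disagreement: a lying peer may be omitting data
```

This only reduces trust probabilistically; a commitment (the soft fork mentioned below for b)) would remove the need to compare peers at all.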

b) is conceptually solvable with a soft fork (commitment) for BIP158 (not 
for BIP37).

Additionally, block filters will very likely be useful for other features 
(loading/rescanning an [old] wallet on a pruned peer that has the filters 
constructed).



There is probably no clear answer like "X is better than Y".

Personally, I would like to see developers being more honest/transparent 
with users about the implications of the filtering used... and giving them 
choices.
Imagine a user could choose between "Electrum / BIP37 / BIP158" depending on 
their needs for privacy and availability of bandwidth, perhaps also taking 
the future usage of this wallet (will they load old private keys, will they 
receive money daily, etc.) into account.

Plus, full-node hardware appliances that run at home (or in a trusted 
environment) solve many of these issues and add a bunch of great 
features – if done right.

/jonas

