Re: [bitcoin-dev] Responsible disclosure of bugs

2017-09-13 Thread Anthony Towns via bitcoin-dev
On Tue, Sep 12, 2017 at 09:10:18AM -0700, Simon Liu wrote:
> It would be a good starting point if the current policy could be
> clarified, so everyone is on the same page, and there is no confusion.

Collecting various commentary from here and reddit, I think the current
de facto policy is something like:

 * Vulnerabilities should be reported via secur...@bitcoincore.org [0]

 * A critical issue (one that can be exploited immediately, or is already
   being exploited and causing large harm) will be dealt with by:
    * a released patch ASAP
    * wide notification of the need to upgrade (or to disable affected
      systems)
    * minimal disclosure of the actual problem, to delay attacks [1] [2]

 * A non-critical vulnerability (one that is difficult or expensive to
   exploit) will be dealt with by:
    * patch and review undertaken in the ordinary flow of development
    * backport of a fix or workaround from master to the current
      released version [2]

 * Devs will attempt to ensure that publication of the fix does not
   reveal the nature of the vulnerability by providing the proposed fix
   to experienced devs who have not been informed of the vulnerability,
   telling them that it fixes a vulnerability, and asking them to identify
   the vulnerability. [2]

 * Devs may recommend other bitcoin implementations adopt vulnerability
   fixes prior to the fix being released and widely deployed, if they
   can do so without revealing the vulnerability; eg, if the fix has
   significant performance benefits that would justify its inclusion. [3]

 * Prior to a vulnerability becoming public, devs will generally recommend
   to friendly altcoin devs that they should catch up with fixes. But this
   is only after the fixes are widely deployed in the bitcoin network. [4]

 * Devs will generally not notify altcoin developers who have behaved
   in a hostile manner (eg, using vulnerabilities to attack others, or
   who violate embargoes). [5]

 * Bitcoin devs won't disclose vulnerability details until >80% of bitcoin
   nodes have deployed the fixes. Vulnerability discoverers are encouraged
   and requested to follow the same policy. [1] [6]

Those seem like pretty good policies to me, for what it's worth.

I haven't seen anything that indicates bitcoin devs will *ever* encourage
public disclosure of vulnerabilities (as opposed to tolerating other
people publishing them [6]). So I'm guessing current de facto policy is
more along the lines of:

 * Where possible, Bitcoin devs will never disclose vulnerabilities
   publicly while affected code may still be in use (including by
   altcoins).

rather than something like:

 * Bitcoin devs will disclose vulnerabilities publicly after 99% of the
   bitcoin network has upgraded [7], and fixes have been released for
   at least 12 months.


Instinctively, I'd say documenting this policy (or whatever it actually
is) would be good, and having all vulnerabilities eventually be publicly
disclosed would also be good; that's certainly the more "open source"
approach. But arguing the other side:

 - documenting security policy gives attackers a better handle on where
   to find weak points; this may do more harm than the benefit of
   improving legitimate users' understanding of, and confidence in, the
   development process

 - the main benefit of public vulnerability disclosure is a better
   working relationship with security researchers, and perhaps a better
   understanding of what sorts of bugs happen in practice; but if most
   of your security research is effectively in-house [6], maybe those
   benefits aren't as great as the harm done by revealing even old
   vulnerabilities to attackers

If the first of those arguments holds, well, hopefully this message has
egregious errors that no one will correct, or it will quickly get lost
in this list's archives...

Cheers,
aj

[0] http://bitcoincore.org/en/contact
referenced from .github/ISSUE_TEMPLATE.md in git

[1] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014986.html

[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014990.html

[3] 
https://www.reddit.com/r/btc/comments/6zf1qo/peter_todd_nicely_pulled_away_attention_from_jjs/dmxcw70/

[4] 
https://www.reddit.com/r/btc/comments/6z827o/chris_jeffrey_jj_discloses_bitcoin_attack_vector/dmxdg83/

[5] 
https://www.reddit.com/r/btc/comments/6zb3lp/maxwell_admits_core_sat_on_vulnerability/dmv4y7g/

[6] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-September/014991.html
 

[7] Per http://luke.dashjr.org/programs/bitcoin/files/charts/branches.html
it seems like 1.7% of the network is running known-vulnerable versions
0.8 and 0.9; but only 0.37% are running 0.10 or 0.11, so that might argue
revealing any vulnerabilities fixed since 0.12.0 would be fine...
(bitnodes.21.co doesn't seem to break down anything earlier than 0.12)


[bitcoin-dev] SigOps limit.

2017-09-13 Thread Russell O'Connor via bitcoin-dev
On Tue, Sep 12, 2017 at 3:57 PM, Mark Friedenbach via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> 4MB of secp256k1 signatures takes 10s to validate on my 5 year old
> laptop (125,000 signatures, ignoring public keys and other things that
> would consume space). That's much less than bad blocks that can be
> constructed using other vulnerabilities.


If there were no sigops limits, I believe the worst case block could have
closer to 1,000,000 CHECKSIG operations.  Signature checks are cached, so
while repeating the sequence "2DUP CHECKSIGVERIFY" does create a lot of
checksig operations, the cached values prevent a lot of work from being done.

To defeat the cache one can repeat the sequence "2DUP CHECKSIG DROP
CODESEPARATOR": each OP_CODESEPARATOR changes the scriptCode, and hence the
signature hash, so this creates a unique signature validation request every
4 bytes.
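
To put numbers on that, here is a quick back-of-the-envelope in Python,
taking Mark's 4MB figure as the attacker's byte budget (the opcode byte
values below are the standard script encodings):

    # Back-of-the-envelope count of forced signature validations when a
    # block's byte budget is filled with the cache-defeating sequence.
    OP_2DUP, OP_CHECKSIG, OP_DROP, OP_CODESEPARATOR = 0x6e, 0xac, 0x75, 0xab

    sequence = bytes([OP_2DUP, OP_CHECKSIG, OP_DROP, OP_CODESEPARATOR])

    byte_budget = 4_000_000                    # assumed attacker space
    repetitions = byte_budget // len(sequence)

    # Every OP_CODESEPARATOR changes the scriptCode, so each CHECKSIG hashes
    # a different message and the signature cache never hits.
    print(f"unique signature validations: {repetitions:,}")   # 1,000,000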


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-13 Thread Peter Todd via bitcoin-dev
On Wed, Sep 13, 2017 at 08:27:36AM +0900, Karl Johan Alm via bitcoin-dev wrote:
> On Wed, Sep 13, 2017 at 4:57 AM, Mark Friedenbach via bitcoin-dev
>  wrote:
> >> Without the limit I think we would be DoS-ed to death
> >
> > 4MB of secp256k1 signatures takes 10s to validate on my 5 year old
> > laptop (125,000 signatures, ignoring public keys and other things that
> > would consume space). That's much less than bad blocks that can be
> > constructed using other vulnerabilities.
> 
> Sidenote-ish, but I also believe it would be fairly trivial to keep a
> per-UTXO tally and demand additional fees when trying to respend a
> UTXO which was previously "spent" with an invalid op count. I.e. if
> you sign off on an input for a tx that you know is bad, the UTXO in
> question will be penalized proportionately to the wasted ops when
> included in another transaction later. That would probably kill that
> DoS attack as the attacker would effectively lose bitcoin every time,
> even if it was postponed until they spent the UTXO. The only thing
> clients would need to do is to add a fee rate penalty ivar and a
> mapping of outpoint to penalty value, probably stored as a separate
> .dat file. I think.

Ethereum does something quite like this; it's a very bad idea for a few
reasons:

1) If you bailed out of verifying a script due to wasted ops, how did you
know the transaction trying to spend that txout did in fact come from the
owner of it?

2) How do you verify that transactions were penalized correctly without *all*
nodes re-running the DoS script?

3) If the DoS is significant enough to matter on a per-node level, you're going
to have serious problems anyway, quite possibly so serious that the attacker
manages to cause consensus to fail. They can then spend the txouts in a block
that does *not* penalize their outputs, negating the deterrent.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




[bitcoin-dev] Minutia in CT for Bitcoin. Was: SF proposal: prohibit unspendable outputs with amount=0

2017-09-13 Thread Gregory Maxwell via bitcoin-dev
On Wed, Sep 13, 2017 at 9:24 AM, Peter Todd via bitcoin-dev
 wrote:
> 2) Spending CT-shielded outputs to unshielded outputs
>
> Here one or more CT-shielded outputs will be spent. Since their value is zero,
> we make up the difference by spending one or more outputs from the CT pool,
> with the change - if any - assigned to a CT-pool output.

Can we solve the problem that pool inputs are gratuitously non-reorg
safe, without creating something like a maturity limit for shielded to
unshielded?

So far the best I have is this:  Support unshielded coins in shielded
space too, so the only time you transition out of the pool is when paying
to a legacy wallet.  If support were phased in (e.g. addresses that
say you can pay me in the pool after it's enabled), and the pool only
used long after wallets supported getting payments in it, then this
would be pretty rare and a maturity limit wouldn't be a big deal.

Can better be done?


Re: [bitcoin-dev] SF proposal: prohibit unspendable outputs with amount=0

2017-09-13 Thread Gregory Maxwell via bitcoin-dev
On Wed, Sep 13, 2017 at 9:24 AM, Peter Todd via bitcoin-dev
 wrote:
> Quite simply, I just don't think the cost-benefit tradeoff of what you're
> proposing makes sense.

I agree that dropping zero value outputs is a needless loss of
flexibility.  In addition to the CT example, something similar could
be done for increased precision (nanobitcoin!).

Maybe if in the future the value of 1e-8 btc is high enough then an
argument could be made that requiring one is a meaningful reduction in
a miner's ability to spam up the network... but the argument doesn't
fly today... the cost in lost fee income from the spam just totally
dwarfs it.


Re: [bitcoin-dev] SF proposal: prohibit unspendable outputs with amount=0

2017-09-13 Thread Peter Todd via bitcoin-dev
On Sat, Sep 09, 2017 at 11:11:57PM +0200, Jorge Timón wrote:
> Tier Nolan, right, a new tx version would be required.
> 
> I have to look deeper into the CT as sf proposal.
> 
> Which future upgrades this could conflict with is precisely the
> question here, so that vague statement, without providing any example,
> is not very valuable.

So with Confidential Transactions, the only thing that changes relative to a
normal Bitcoin transaction is that the sum of input values being >= the sum
of output values is proven via a CT proof, rather than by revealing the
actual sums. Other than that, CT transactions don't need to be any different
from regular transactions.

For CT to be a softfork, we have to ensure that each CT transaction's sum of
inputs and outputs is valid. An obvious way to do this is to have a pool of
"shielded" outputs, whose total value is the sum of all CT-protected outputs.
Outputs in this pool would appear to be anyone-can-spend outputs to pre-CT
nodes.

This gives us three main cases:

1) Spending unshielded outputs to CT-shielded outputs

Since the CT-shielded outputs' values are unknown, we simply set their value
to zero. We then add the newly CT-shielded value to the pool with an
additional output whose value is the sum of all newly created CT-shielded
outputs.


2) Spending CT-shielded outputs to unshielded outputs

Here one or more CT-shielded outputs will be spent. Since their value is zero,
we make up the difference by spending one or more outputs from the CT pool,
with the change - if any - assigned to a CT-pool output.


3) Spending CT-shielded outputs to CT-shielded outputs

Since both the inputs and outputs are zero-valued, to pre-CT nodes the
transaction is perfectly valid: the sum of coins spent is 0 BTC, and the sum of
coins created is also 0 BTC. We do have the problem of paying miners fees, but
that could be done with an additional CT output that the miner can spend, a
child-pays-for-parent transaction, or something else entirely that I haven't
thought of.
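
As a sanity check on the accounting, here's a toy sketch of the three cases
in Python. It tracks only the explicit values that pre-CT nodes would see;
real CT hides the amounts behind commitments and range proofs, and every
name here is illustrative:

    class PoolLedger:
        """Total value hidden in CT outputs, as seen by pre-CT nodes."""

        def __init__(self):
            self.pool = 0  # sum of all value currently shielded

        def shield(self, amount):
            # Case 1: CT outputs are created with explicit value 0, and an
            # extra pool output absorbs `amount`, conserving explicit value.
            self.pool += amount

        def unshield(self, amount):
            # Case 2: zero-valued CT outputs are spent; the difference comes
            # from the pool, with change assigned back to a pool output.
            assert amount <= self.pool, "pool can never go negative"
            self.pool -= amount

        def shielded_transfer(self):
            # Case 3: zero-valued inputs to zero-valued outputs; pre-CT nodes
            # see 0 BTC in and 0 BTC out, and the pool total is untouched.
            pass

    ledger = PoolLedger()
    ledger.shield(5_0000_0000)       # shield 5 BTC (in satoshis)
    ledger.shielded_transfer()       # move value around inside the pool
    ledger.unshield(2_0000_0000)     # unshield 2 BTC
    print(ledger.pool)               # 300000000: 3 BTC remain shielded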


> Although TXO commitments are interesting, I don't think they make UTXO
> growth a "non-issue" and I also don't think they justify not doing
> this.
> 
> Yeah, the costs for spammers are very small, and it doesn't really
> improve things all that much, as acknowledged in the initial post.

Suppose zero-valued outputs are prohibited. In case #3 above, if there are more
outputs than inputs, we need to add an additional input from the CT-shielded
pool to make up the difference, and an additional change output back to the
CT-shielded pool.

If shielded-to-shielded transactions are common, these extra inputs and
outputs could consume a significant fraction of total blockchain space -
a significant cost. Meanwhile the benefit is so small it's essentially
theoretical: an additional satoshi per output is an almost trivial cost to
an attacker.

Quite simply, I just don't think the cost-benefit tradeoff of what you're
proposing makes sense.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] 2 softforks to cut the blockchain and IBD time

2017-09-13 Thread Tier Nolan via bitcoin-dev
On Tue, Sep 12, 2017 at 11:58 PM, michele terzi via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Pros:
>
> You gain much faster syncing for new nodes.
> Full non-pruning nodes need a lot less disk space.
> Dropping old history makes future chain analysis more difficult (at
> least for small entities).
> Freezing old history in a new genesis block means the chain can no
> longer be reorged prior to that point.
>

Current nodes allow pruning, so you can save disk space that way.  Users
still need to download and verify all the old blocks during IBD, though.

Under your scheme, you don't need to throw the data away.  Nodes can decide
how far back they want to go.

"Fast" IBD

- download header chain from genesis (~4MB per year)
- check headers against "soft" checkpoints (every 50k blocks)
- download the UTXO set of the most recent soft checkpoint (and verify it
  against the checkpoint hash)
- download blocks starting from the most recent soft checkpoint
- node is now ready to use
- [Optional] Slowly download the remaining blocks

This requires some new protocol messages to allow requesting and sending
the UTXO set, though the inv and getdata messages could be used.
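
A compressed sketch of that flow, run against toy in-memory data. The
snapshot commitment is a stand-in (a real one would need to be efficiently
computable), and the "download the UTXO set" step stands in for whatever
new message gets designed:

    import hashlib

    def hash_utxo_set(utxos: dict) -> bytes:
        # Placeholder commitment: double-SHA256 of the sorted, serialized
        # set.  A real scheme would want something incrementally updatable.
        blob = repr(sorted(utxos.items())).encode()
        return hashlib.sha256(hashlib.sha256(blob).digest()).digest()

    # Toy network state: soft checkpoints every 50,000 blocks, each
    # committing to the UTXO set at that height.
    snapshots = {50_000: {"coin-a": 10}, 100_000: {"coin-a": 10, "coin-b": 7}}
    soft_checkpoints = {h: hash_utxo_set(s) for h, s in snapshots.items()}

    def fast_ibd(tip_height: int):
        # 1. Download the header chain from genesis (~4MB/year) and check
        #    it against the soft checkpoints (elided here).
        # 2. Pick the most recent checkpoint at or below the tip.
        cp = max(h for h in soft_checkpoints if h <= tip_height)
        # 3. Download that checkpoint's UTXO snapshot and verify its hash.
        utxos = dict(snapshots[cp])   # stands in for a "get UTXO set" message
        assert hash_utxo_set(utxos) == soft_checkpoints[cp]
        # 4. Fully validate only the blocks after the checkpoint; the node
        #    is usable now, and older blocks can be backfilled lazily.
        return cp, utxos

    print(fast_ibd(tip_height=123_456))   # syncs forward from height 100,000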

If you add a new services bit, NODE_NETWORK_RECENT, then nodes can find
other nodes that have the most recent blocks.  The bit indicates that a
node has all blocks since the most recent snapshot.

The slow download doesn't have to download the blocks in order.  It can
just check against the header chain.  Once a node has all the blocks, it
would switch from NODE_NETWORK_RECENT to NODE_NETWORK.

(Multiple bits could be used to indicate that the node has 2 or more recent
time periods).
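
In code, the peer-selection test might look something like this;
NODE_NETWORK really is bit 0 of the services field, but the
NODE_NETWORK_RECENT bit and its position here are hypothetical:

    NODE_NETWORK        = 1 << 0   # real: archive node serving all blocks
    NODE_NETWORK_RECENT = 1 << 11  # hypothetical: blocks since last snapshot

    def useful_for_fast_ibd(services: int) -> bool:
        """A peer can serve recent blocks if it is an archive node or a
        recent-blocks node."""
        return bool(services & (NODE_NETWORK | NODE_NETWORK_RECENT))

    assert useful_for_fast_ibd(NODE_NETWORK)
    assert useful_for_fast_ibd(NODE_NETWORK_RECENT)
    assert not useful_for_fast_ibd(0)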

"Soft" checkpoints mean that re-orgs can't cause a network partition.  Each
soft checkpoint is a mapping of {block_hash: utxo_hash}.

A re-org of 1 year or more would be devastating, so this is probably
academic.  Some people may object to centralized checkpointing, and soft
checkpoints address that objection.

> Full nodes with old software can no longer be fired up and sync with the
> existing network.
> Full nodes that went offline prior to the second fork cannot sync back
> once they come back online.
>
>
This is why having archive nodes (and a way to find them) is important.

You could have a weaker requirement that nodes shouldn't delete blocks
unless they are at least 3 time periods (~3 years) old.

The software should have a setting which allows the user to specify maximum
disk space.  Disk space is cheap, so it is likely that a reasonable number
of people will leave that set to infinite.

This automatically results in lots of archive nodes.  Another setting could
decide how many time periods to download.  2-3 seem reasonable as a default
(or maybe infinite too).


> Addressing security concerns:
>
> Being able to write a new genesis block means that an evil Core has the
> power to steal/destroy/censor/whatever coins.
>
> This is possible only in theory, not in practice. Right now devs can
> misbehave with every soft fork, but the community tests and inspects every
> new release.
>

Soft forks are inherently backward compatible.  Coins cannot be stolen
using a soft fork.  It has nothing to do with inspecting new releases.

It is possible for a majority of miners to re-write history, but that is
separate from a soft fork.

A soft fork can lock coins away.  This effectively destroys the coins, but
doesn't steal them.  It could be part of an extortion scheme I guess, but if
a majority of miners did that, then I think Bitcoin has bigger problems.


> The two forks will be tested and inspected as well, so they are no more
> risky than other soft forks.
>
>
For it to be a soft fork, you need to maintain archive nodes.  That is the
whole point.  Under both the old rules and the new rules, blocks that follow
the new rules are valid (and miners only mine blocks that are valid under
the new rules).  If IBD is impossible for old nodes, then that counts as a
network split.


[bitcoin-dev] 2 softforks to cut the blockchain and IBD time

2017-09-13 Thread michele terzi via bitcoin-dev
The blockchain is 160 GB, and this is the biggest problem Bitcoin has right
now: syncing a new node is a nightmare that discourages a lot of people.
This single aspect is what hurts Bitcoin's decentralization the most, and it
is getting worse by the day.

To solve this problem I propose two soft forks.

Both of them have been partially discussed before, so you may already be
familiar with them. I'll just try to highlight the problems and benefits.


First SF)
A snapshot of the UTXO set, plus all the relevant info (like OP_RETURN
outputs), is hashed into the coinbase. This is repeated automatically every
x blocks; I suggest 55k blocks (about 1 year).

Second SF)
After a given amount of time, the UTXO hash is written into the consensus
code. This hash becomes the hash of a new genesis block, and all the older
blocks are chopped away.

Pros:

You gain much faster syncing for new nodes.
Full non-pruning nodes need a lot less disk space.
Dropping old history makes future chain analysis more difficult (at least
for small entities).
Freezing old history in a new genesis block means the chain can no longer
be reorged prior to that point.

Old status:

genesis |- x --| newgenesis |- y --| now

New status:

newgenesis |- y --| now

While the old chain can be reorged all the way back to the genesis block,
the new chain can be reorged only back to the new genesis block.

Cutting the chain also has some other small benefits: without the need to
validate old blocks, we can remove consensus code that is no longer needed.


Cons:

A small amount of space is consumed on the blockchain.
Every node needs to perform the calculations.

Full nodes with old software can no longer be fired up and sync with the
existing network.
Full nodes that went offline prior to the second fork cannot sync back up
once they come back online.

If these things are concerning (for me they are not), we can just keep a
few archive nodes online: old clients will sync only from archival nodes
with full history, while new full nodes will sync from everywhere.


Addressing security concerns:

Being able to write a new genesis block means that an evil Core has the
power to steal/destroy/censor/whatever coins.

This is possible only in theory, not in practice. Right now devs can
misbehave with every soft fork, but the community tests and inspects every
new release.

The two forks will be tested and inspected as well, so they are no more
risky than other soft forks.

Additionally, the process is divided into two separate steps, and the first
step (the critical one) is effectively void without the second (which is
substantially delayed). This gives the community additional time to test,
and thus it is actually more secure than a standard soft fork. Besides,
after the first soft fork locks in there is no more room for mistakes:
either the hashes match or they do not, so spotting misbehaviour is
trivially simple.

Kind regards,
Michele
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Merkle branch verification & tail-call semantics for generalized MAST

2017-09-13 Thread Karl Johan Alm via bitcoin-dev
On Wed, Sep 13, 2017 at 4:57 AM, Mark Friedenbach via bitcoin-dev
 wrote:
>> Without the limit I think we would be DoS-ed to death
>
> 4MB of secp256k1 signatures takes 10s to validate on my 5 year old
> laptop (125,000 signatures, ignoring public keys and other things that
> would consume space). That's much less than bad blocks that can be
> constructed using other vulnerabilities.

Sidenote-ish, but I also believe it would be fairly trivial to keep a
per-UTXO tally and demand additional fees when trying to respend a
UTXO which was previously "spent" with an invalid op count. I.e. if
you sign off on an input for a tx that you know is bad, the UTXO in
question will be penalized proportionately to the wasted ops when
included in another transaction later. That would probably kill that
DoS attack as the attacker would effectively lose bitcoin every time,
even if it was postponed until they spent the UTXO. The only thing
clients would need to do is to add a fee rate penalty ivar and a
mapping of outpoint to penalty value, probably stored as a separate
.dat file. I think.
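
A rough sketch of that bookkeeping, with an entirely made-up penalty rate
and illustrative names:

    # Hypothetical penalty ledger: outpoint -> extra satoshis owed, accrued
    # when the UTXO appears in a spend that fails with wasted ops, and
    # charged as additional fee when the UTXO is later spent for real.
    SATOSHIS_PER_WASTED_OP = 10  # made-up conversion rate

    penalties = {}  # (txid, vout) -> satoshis; could persist in its own .dat file

    def record_invalid_spend(outpoint, wasted_ops):
        penalties[outpoint] = (penalties.get(outpoint, 0)
                               + wasted_ops * SATOSHIS_PER_WASTED_OP)

    def required_extra_fee(inputs):
        """Minimum additional fee for a tx spending penalized UTXOs."""
        return sum(penalties.get(op, 0) for op in inputs)

    attacker_utxo = ("ab" * 32, 0)
    record_invalid_spend(attacker_utxo, wasted_ops=125_000)
    print(required_extra_fee([attacker_utxo]))  # 1,250,000 extra satoshis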