Re: [bitcoin-dev] Signet

2019-03-13 Thread Karl-Johan Alm via bitcoin-dev
Hi Varunram,

On Wed, Mar 13, 2019 at 3:41 PM Varunram Ganesh
 wrote:
>
> I like your idea of a signet as it would greatly help test reorgs and stuff 
> without having to experiment with regtest. But I'm a bit concerned about 
> running a common signet (Signet1) controlled by a trusted entity. I guess if 
> someone wants to test signet on a global scale, they could spin up a couple 
> nodes in a couple places (and since it is anyway trusted, they can choose to 
> run it on centralised services like AWS). Another concern is that the 
> maintainer might have unscheduled work, emergencies, etc., and that could 
> affect how people test. This would also mean that we need people to 
> run signet1 nodes in parallel with current testnet nodes (one could argue 
> that Signet is trusted anyway and this doesn't matter, still)
>
> I'm sure you would have considered these while designing, so would be great 
> to hear your thoughts.

For starters, I assume that the signer would run an automated script
that generated blocks on regular intervals without requiring manual
interaction. So even if the signer went on a vacation, the network
would keep on ticking. I also assume the signer would be running a
faucet service so users could get coins as needed. Ultimately though,
if a signer ended up vanishing or being unreliable, people would just
set up a new signet with a different signer and use that instead, so
it's not a big deal.
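
As a rough sketch of the signer side (Python, with entirely hypothetical
helpers for block creation, signing and submission -- none of this is an
existing interface, just an illustration of "automated script on a timer"):

    import time

    BLOCK_INTERVAL = 10 * 60  # target block spacing in seconds

    def run_signer(create_block_template, sign_block, submit_block):
        # The three callbacks stand in for whatever RPC or library calls
        # the signer actually uses.
        while True:
            template = create_block_template()
            signed = sign_block(template)   # attach the network's block signature
            submit_block(signed)            # peers treat unsigned blocks as invalid
            time.sleep(BLOCK_INTERVAL)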


Re: [bitcoin-dev] Signet

2019-03-13 Thread Karl-Johan Alm via bitcoin-dev
Hi Anthony,

On Wed, Mar 13, 2019 at 12:23 PM Anthony Towns  wrote:
>
> Maybe make the signature be an optional addition to the header, so
> that you can have a "light node" that doesn't download/verify sigs
> and a full node that does? (So signatures just sign the traditional
> 80-byte header, and aren't included in the block's tx merkle root, and
> the prevHash reflects the hash of the previous block's 80-byte header,
> without the signature)

This seems to be what everyone around me thinks is the best approach.
I.e. signet is a "p2p level agreement" that an additional signature is
required for a block to be considered fully valid.

It has the added complexity that a signature-aware signet node talking
to a non-signature-aware signet node should reject/discard headers
sent by that peer; otherwise you run into situations where a node
doesn't know whether the headers are valid and has to hold onto them
indefinitely, a situation that currently does not occur in
"regular" nets.

If you detach the signature from the header, you also end up with
cases where a malicious user could send garbage data as the signature
for a valid header, forcing peers to mark that header as invalid even
though it isn't. That holds regardless of whether a fix for the
above is in place.

> If you did this, it might be a good idea to enforce including the previous
> block's header signature in the current block's coinbase.

Yeah, that is one of the ideas we had early on, and I think it's a good
one. It doesn't stop someone from spamming a bunch of invalid
headers at block height current_tip+1, but at least you can now recover
every signature except the latest one. So as long as you are able to fetch
the latest signature, you should theoretically be able to verify the
entire chain just from the blocks + that one sig.
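
In a Python-ish sketch (the block fields and the verify_sig callback are
placeholders for illustration, not an existing API), verification would walk
backwards from the tip:

    def verify_signed_chain(blocks, tip_signature, signer_pubkey, verify_sig):
        # blocks: ordered oldest..tip; block.header is the 80-byte header,
        # block.coinbase_prev_sig is the previous block's signature embedded
        # in this block's coinbase.
        sig = tip_signature
        for block in reversed(blocks):
            if not verify_sig(signer_pubkey, block.header, sig):
                return False
            sig = block.coinbase_prev_sig  # signature for the block before this one
        return True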

-Kalle.


Re: [bitcoin-dev] Removal of reject network messages from Bitcoin Core (BIP61)

2019-03-13 Thread Dustin Dettmer via bitcoin-dev
I’ve solved the same problem in a different way.

1) Submit a transaction
2) Collect all reject messages (that have matching txid in the reject data)
3) Wait 16 seconds after first error message received (chosen semirandomly
from trial and error) before processing errors
4) Wait for our txid to be submitted back to us through the mempool; if we
get it, notify success and delete all pending error events
5) Signal failure with the given reject code if present (after the 16
seconds have elapsed)
6) If no error or success after 20 seconds, signal timeout failure

This works fairly well in testing. Newer transaction types seem to generate
reject codes 100% of the time (from at least one node when sending to 4
nodes) so this culling / time delay approach is essentially required.
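
In pseudo-Python (the three callbacks are stand-ins for the wallet's
networking layer; the constants are the ones from the steps above):

    import time

    REJECT_SETTLE_SECS = 16   # wait after the first matching reject before acting
    TOTAL_TIMEOUT_SECS = 20

    def broadcast_and_wait(txid, send_tx, poll_rejects, seen_in_mempool):
        send_tx()                                    # 1) submit the transaction
        start = time.time()
        first_reject_at = None
        reject_codes = []
        while time.time() - start < TOTAL_TIMEOUT_SECS:
            for code, reject_txid in poll_rejects():
                if reject_txid == txid:              # 2) only rejects matching our txid
                    reject_codes.append(code)
                    first_reject_at = first_reject_at or time.time()
            if seen_in_mempool(txid):                # 4) echoed back to us: success
                return ("success", None)
            if first_reject_at and time.time() - first_reject_at >= REJECT_SETTLE_SECS:
                return ("rejected", reject_codes[0])  # 3+5) report after settle delay
            time.sleep(0.5)
        return ("timeout", None)                     # 6) neither success nor reject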

On a related note: One issue is that RBF attempts with too small a fee and
accidental double spends (with enough fee for 1 tx but not an RBF) both
generate the same reject code: not enough fee.

A new reject code for an RBF fee that is too small would definitely make for
a better user experience, as I’ve seen this exact problem create confusion
for users.

Removing reject codes would make for a much worse user experience. “Your tx
failed and we have no idea why” would be the only message and it would
require waiting for a full timeout.

On Wed, Mar 13, 2019 at 3:16 PM Oscar Guindzberg via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > I'd like to better understand this, but it would be easier to just
> > read the code than ask a bunch of questions. I tried looking for the
> > handling of reject messages in Android Bitcoin Wallet and BitcoinJ
> > and didn't really find any handling other than logging exceptions.
> > Would you mind giving me a couple pointers to where in the code
> > they're handled?
>
>
> https://github.com/bitcoinj/bitcoinj/blob/master/core/src/main/java/org/bitcoinj/core/TransactionBroadcast.java#L93-L108


Re: [bitcoin-dev] Removal of reject network messages from Bitcoin Core (BIP61)

2019-03-13 Thread Oscar Guindzberg via bitcoin-dev
> I'd like to better understand this, but it would be easier to just
> read the code than ask a bunch of questions. I tried looking for the
> handling of reject messages in Android Bitcoin Wallet and BitcoinJ
> and didn't really find any handling other than logging exceptions.
> Would you mind giving me a couple pointers to where in the code
> they're handled?

https://github.com/bitcoinj/bitcoinj/blob/master/core/src/main/java/org/bitcoinj/core/TransactionBroadcast.java#L93-L108


Re: [bitcoin-dev] Removal of reject network messages from Bitcoin Core (BIP61)

2019-03-13 Thread Andreas Schildbach via bitcoin-dev
On 12/03/2019 23.14, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Mar 12, 2019 at 7:45 PM Andreas Schildbach via bitcoin-dev
>  wrote:
>> These two cases are understood and handled by current code. Generally
>> the idea is to take reject messages seriously, but not to overrate the lack
>> of one. Luckily, network confirmations fill the gap. (Yes, a timeout is
> 
> I'd like to better understand this, but it would be easier to just
> read the code than ask a bunch of questions. I tried looking for the
> handling of reject messages in Android Bitcoin Wallet and BitcoinJ
> and didn't really find any handling other than logging exceptions.
> Would you mind giving me a couple pointers to where in the code
> they're handled?

It's implemented in bitcoinj's TransactionBroadcast class. Received
reject messages are collected and -- if a certain consensus (currently:
half of connected peers) is reached -- a RejectedTransactionException is
raised.

The handling of that exception in Bitcoin Wallet is extremely
rudimentary. I think it still only shows the exception message, but
I was certainly hoping to improve on this soon.



Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety

2019-03-13 Thread Anthony Towns via bitcoin-dev
On Wed, Mar 13, 2019 at 06:41:47AM +, ZmnSCPxj via Lightning-dev wrote:
> > -   alternatively, we could require every script to have a valid signature
> > that commits to the input. In that case, you could do eltoo with a
> > script like either:
> > <A> CHECKSIGVERIFY <B> CHECKSIG
> > or <P> CHECKSIGVERIFY <Q> CHECKSIG
> > where A is Alice's key and B is Bob's key, P is muSig(A,B) and Q is
> > a key they both know the private key for. In the first case, Alice
> > would give Bob a NOINPUT sig for the tx, and when Bob wanted to publish
> > Bob would just do a SIGHASH_ALL sig with his own key. In the second,
> > Alice and Bob would share partial NOINPUT sigs of the tx with P, and
> > finish that when they wanted to publish.
> From my point of view, if a NOINPUT sig is restricted and cannot be
> used to spend an "ordinary" 2-of-2, this is output tagging regardless
> of the exact mechanism.

With taproot, you could always do the 2-of-2 spend without revealing a
script at all, let alone that it was meant to be NOINPUT capable. The
setup I'm thinking of in this scenario is something like:

  0) my key is A, your key is B, we want to setup an eltoo channel

  1) post a funding tx to the blockchain, spending money to an address
 P = muSig(A,B)

  2) we cycle through a bunch of states from 0..N, with "0" being the
 refund state we establish before publishing the funding tx to
 the blockchain. each state essentially has two corresponding tx's,
 an update tx and a settlement tx.

  3) the update tx for state k spends to an output Qk which is a
 taproot address Qk = P + H(P,Sk)*G where Sk is the eltoo ratchet
 condition:
Sk = (5e8+k+1) CLTV A CHECKDLS_NOINPUT B CHECKDLS_NOINPUT_VERIFY

 we establish two partial signatures for update state k, one which
 is a partial signature spending the funding tx with key P and
 SIGHASH_ALL, the other is a NOINPUT signature via A (for you) and
 via B (for me) with locktime set to (k+5e8), so that we can spend
 any earlier state's update tx's, but not itself or any later
 state's update tx's.

  4) for each state we also have a settlement transaction,
 Sk, which spends update tx k, to outputs corresponding to the state
 of the channel, after a relative timelock delay.

 we have two partial signatures for this transaction too, one with
 SIGHASH_ALL assuming that we directly spent the funding tx with
 update state k (so the input txid is known), via the key path with
 key Qk; the other SIGHASH_NOINPUT via the Sk path. both partially
 signed tx's have nSequence set to the required relative timelock
 delay.

  5) if you're using scriptless scripts to do HTLCs, you'll need to
 allow for NOINPUT sigs when claiming funds as well (and update
 the partial signatures for the non-NOINPUT cases if you want to
 maximise privacy), which is a bit fiddly

  6) when closing the channel the process is then:

   - if you're in contact with the other party, negotiate a new
 key path spend of the funding tx, publish it, and you're done.

   - otherwise, if the funding tx hasn't been spent, post the latest
 update tx you know about, using the "spend the funding tx via
 key path" partial signature

   - otherwise, trace the children of the funding tx, so you can see
 the most recent published state:
   - if that's newer than the latest state you know about, your
 info is out of date (restored from an old backup?), and you
 have to wait for your counterparty to post the settlement tx
   - if it's equal to the latest state you know about, wait
   - if it's older than the latest state, post the latest update
 tx (via the NOINPUT script path sig), and wait

   - once the CSV delay for the latest update tx has expired, post
 the corresponding settlement tx (key path if the update tx
 spent the funding tx, NOINPUT if the update tx spent an earlier
 update tx)

   - once the settlement tx is posted, claim your funds
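
(Sketching the per-state constants from step 3 in Python, with the EC
operations passed in as assumed primitives rather than any particular
library -- just to make the formulas concrete:)

    LOCKTIME_THRESHOLD = 500_000_000   # the 5e8 offset: keeps the state number
                                       # in the "timestamp" locktime range

    def update_locktime(k):
        # locktime committed to by state k's NOINPUT signatures
        return LOCKTIME_THRESHOLD + k

    def state_output_key(P, Sk, hash_to_scalar, point_add, scalar_mul_G):
        # Qk = P + H(P, Sk)*G, where P = muSig(A,B) and Sk is the serialized
        # ratchet script for state k
        t = hash_to_scalar(P, Sk)
        return point_add(P, scalar_mul_G(t))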

So the cases look like:

   mutual close:
 funding tx -> claimed funds

 -- only see one key via muSig, single signature, SIGHASH_ALL
 -- if there are active HTLCs when closing the channel, and they
timeout, then the claiming tx will likely be one-in, one-out,
SIGHASH_ALL, with a locktime, which may be unusual enough to
indicate a lightning channel.

   unilateral close, no cheating: 
 funding tx -> update N -> settlement N -> claimed funds

 -- update N is probably SINGLE|ANYONECANPAY, so chain analysis
of accompanying inputs might reveal who closed the channel
 -- settlement N has relative timelock
 -- claimed funds may have timelocks if they claim active HTLCs via
the refund path
 -- no NOINPUT signatures needed, and all signatures use the key path
so don't reveal any scripts

  

Re: [bitcoin-dev] Signet

2019-03-13 Thread Varunram Ganesh via bitcoin-dev
Hello Kalle,

I like your idea of a signet as it would greatly help test reorgs and stuff
without having to experiment with regtest. But I'm a bit concerned about
running a common signet (Signet1) controlled by a trusted entity. I guess
if someone wants to test signet on a global scale, they could spin up a
couple nodes in a couple places (and since it is anyway trusted, they can
choose to run it on centralised services like AWS). Another concern is that
the maintainer might have unscheduled work, emergencies, etc., and that could
affect how people test. This would also mean that we need people
to run signet1 nodes in parallel with current testnet nodes (one could
argue that Signet is trusted anyway and this doesn't matter, still)

I'm sure you would have considered these while designing, so would be great
to hear your thoughts.

Regards,
Varunram


Re: [bitcoin-dev] [Lightning-dev] More thoughts on NOINPUT safety

2019-03-13 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

First off, I have little to no idea of the issues at the lower levels of Bitcoin.

In any case ---

> -   alternatively, we could require every script to have a valid signature
> that commits to the input. In that case, you could do eltoo with a
> script like either:
>
> <A> CHECKSIGVERIFY <B> CHECKSIG
> or <P> CHECKSIGVERIFY <Q> CHECKSIG
>
>
> where A is Alice's key and B is Bob's key, P is muSig(A,B) and Q is
> a key they both know the private key for. In the first case, Alice
> would give Bob a NOINPUT sig for the tx, and when Bob wanted to publish
> Bob would just do a SIGHASH_ALL sig with his own key. In the second,
> Alice and Bob would share partial NOINPUT sigs of the tx with P, and
> finish that when they wanted to publish.
>
> This is a bit more costly than a key path spend: you have to reveal
> the taproot point to do a script (+33B) and you have two signatures
> instead of one (+65B) and you have to reveal two keys as well
> (+66B), plus some script overhead. If we did the <P> variant,
> we could provide a "PUSH_TAPROOT_KEY" opcode that would just push
> the taproot key to stack, saving 33B from pushing P as a literal,
> but you can't do much better than that. All in all, it'd be about 25%
> overhead in order to prevent cheating. [0]
>
> I think that output tagging doesn't provide a workable defense against the
> third party malleability via a deeper-than-the-CSV-delay reorg mentioned
> earlier; but requiring a non-NOINPUT sig does: you'd have to replace
> the non-NOINPUT sig to make state 5 spend state 3 instead of state 4,
> and only the holders of the appropriate private key can do that.

From my point of view, if a NOINPUT sig is restricted and cannot be used to 
spend an "ordinary" 2-of-2, this is output tagging regardless of the exact 
mechanism.
So the restriction to add a non-NOINPUT sig in addition to a NOINPUT sig is 
still output tagging, as a cooperative close would still reveal that the output 
is not a 2-of-2.

Ideally, historical data about whether an onchain coin was used in Lightning or 
not should be revealed as little as possible.
So in a cooperative close (which we hope to be the common case), ideally the 
spend should look no different from an ordinary 2-of-2 spend.
Of course if the channel is published on Lightning, those who participated in 
Lightning at the time will learn of it, but at least the effort to remember 
this information is on those who want to remember this fact.

Now, this can be worked around by adding a "kickoff" transaction that spends 
the eltoo setup transaction.
The eltoo setup transaction outputs to an ordinary 2-of-2.
The kickoff outputs to an output that allows NOINPUT.
Then the rest of the protocol anchors on top of the kickoff.

The kickoff is kept offchain, until a non-cooperative close is needed.
Of course, as it is not a NOINPUT itself, it needs onchain fees attached to 
it.
This of course complicates fees, as we know.
Alternatively, maybe the kickoff can be signed with `SIGHASH_SINGLE | 
SIGHASH_ANYONECANPAY` so that it is possible to add a fee-paying UTXO to it.
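
(As a toy sketch in Python -- plain dicts rather than a real transaction
library -- of why that sighash mode helps: the kickoff signature only covers
its own input and the output at the same index, so a fee input and a change
output can be appended later without invalidating it.)

    def add_fee_input(kickoff_tx, fee_input, change_output):
        # kickoff_tx: {"inputs": [...], "outputs": [...]}; the existing
        # SIGHASH_SINGLE|ANYONECANPAY signature on inputs[0] stays valid
        # because it does not commit to the appended input or output.
        tx = dict(kickoff_tx)
        tx["inputs"] = kickoff_tx["inputs"] + [fee_input]
        tx["outputs"] = kickoff_tx["outputs"] + [change_output]
        return tx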

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Signet

2019-03-13 Thread Anthony Towns via bitcoin-dev
On Fri, Mar 08, 2019 at 03:20:49PM -0500, Matt Corallo via bitcoin-dev wrote:
> To make testing easier, it may make sense to keep the existing block header
> format (and PoW) and instead apply the signature rules to some field in the
> coinbase transaction.

Maybe make the signature be an optional addition to the header, so
that you can have a "light node" that doesn't download/verify sigs
and a full node that does? (So signatures just sign the traditional
80-byte header, and aren't included in the block's tx merkle root, and
the prevHash reflects the hash of the previous block's 80-byte header,
without the signature)

I think you could do that by adding a p2p service bit to say "send me
signatures if you have them / I can send you signatures", which changes
the p2p encoding of the header from (ver, prev, mrkl, time, bits, nonce)
to (ver, prev, mrkl, time, 0, nonce, bits, sig), and changes header
processing to ignore headers from nodes that don't have the service
bit set?
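
A quick Python sketch of that encoding, just to make the field order concrete
(my guess at a serialization, not an implemented format):

    import struct

    def encode_signed_header(version, prev_hash, merkle_root, timestamp, bits,
                             nonce, sig):
        # usual 80-byte layout with the bits slot zeroed, then the real bits
        # and the signature appended: (ver, prev, mrkl, time, 0, nonce, bits, sig)
        base = struct.pack("<i32s32sIII", version, prev_hash, merkle_root,
                           timestamp, 0, nonce)
        return base + struct.pack("<I", bits) + sig

    def decode_signed_header(data):
        version, prev_hash, merkle_root, timestamp, _zero, nonce = \
            struct.unpack("<i32s32sIII", data[:80])
        (bits,) = struct.unpack("<I", data[80:84])
        return version, prev_hash, merkle_root, timestamp, bits, nonce, data[84:]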

If you did this, it might be a good idea to enforce including the previous
block's header signature in the current block's coinbase.

Cheers,
aj



[bitcoin-dev] More thoughts on NOINPUT safety

2019-03-13 Thread Anthony Towns via bitcoin-dev
Hi all,

The following has some more thoughts on trying to make a NOINPUT
implementation as safe as possible for the Bitcoin ecosystem.

One interesting property of NOINPUT usage like in eltoo is that it
actually reintroduces the possibility of third-party malleability to
transactions -- i.e., you publish transactions to the blockchain (tx A,
which is spent by tx B, which is spent by tx C), and someone can come
along and change A or B so that C is no longer valid. The way this works
is due to eltoo's use of NOINPUT to "skip intermediate states". If you
publish to the blockchain:

  funding tx -> state 3 -> state 4[NOINPUT] -> state 5[NOINPUT] -> finish

then in the event of a reorg, state 4 could be dropped, state 5's
inputs adjusted to refer to state 3 instead (the sig remains valid
due to NOINPUT, so this can be done by anyone not just holders of some
private key), and finish would no longer be a valid tx (because the new
"state 5" tx has different inputs so a different txid, and finish uses
SIGHASH_ALL for the signature so committed to state 5's original txid).

There is a safety measure here though: if the "finish" transaction is
itself a NOINPUT tx, and has a CSV delay (this is the case in eltoo;
the CSV delay is there to give time for a hypothetical state 6 to be
published), then the only way to have a problem is to have some SIGHASH_ALL
tx that spends finish, plus a reorg deeper than the CSV delay (so that state
4 can be dropped, and state 5 and finish can be altered). Since the CSV delay
is chosen by the participants, the above is still a possible scenario
in eltoo, though, and it means there's some risk for someone accepting
bitcoins that result from a non-cooperative close of an eltoo channel.


Beyond that, I think NOINPUT has two fundamental ways to cause problems
for the people doing NOINPUT sigs:

 1) your signature gets applied to an unexpectedly different
script, perhaps making it look like you've been dealing
with some blacklisted entity. OP_MASK and similar solve
this.

 2) your signature is applied to some transaction and works
perfectly; but then someone else sends money to the same address
and reuses your prior signature to forward it on to the same
destination, without your consent

I still like OP_MASK as a solution to (1), but I can't convince myself that
the problem it solves is particularly realistic; it doesn't apply to
address blacklists, because for OP_MASK to make the signature invalid
the address has to be different, and you could just short circuit the
whole thing by sending money from a blacklisted address to the target's
personal address directly. Further, if the sig's been seen on chain
before, that's probably good evidence that someone's messing with you;
and if it hasn't been seen on chain before, how is anyone going to tell
it's your sig to blame you for it?

I still wonder if there isn't a real problem hiding somewhere here,
but if so, I'm not seeing it.

For the second case, that seems a little more concerning. The nightmare
scenario is maybe something like:

 * naive users do silly things with NOINPUT signatures, and end up
   losing funds due to replays like the above

 * initial source of funds was some major exchange, who decide it's
   cheaper to refund the lost funds than deal with the customer complaints

 * the lost funds end up costing enough that major exchanges just outright
   ban sending funds to any address capable of NOINPUT, which also bans
   all taproot/schnorr addresses

That's not super likely to happen by chance: NOINPUT sigs will commit
to the value being spent, so to lose money, you (Alice) have to have
done a NOINPUT sig spending a coin sent to your address X, to someone
(Bob) and then have to have a coin with the exact same value sent from
someone else again (Carol) to your address X (or if you did a script
path NOINPUT spend, to some related address Y with a script that uses the same
key). But because it involves losing money to others, bad actors might
trick people into having it happen more often than chance (or well
written software) would normally allow.
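
(Illustrating the matching condition in Python -- purely hypothetical wallet
bookkeeping, not a proposal for any particular wallet: remember the
(script, value) pairs you have produced NOINPUT sigs for, and flag any later
incoming coin that matches one, since the old signature could be replayed
onto it.)

    class NoInputReplayMonitor:
        def __init__(self):
            # (scriptPubKey, amount) pairs we have produced NOINPUT sigs for
            self._signed = set()

        def record_noinput_sig(self, script_pubkey, amount):
            self._signed.add((bytes(script_pubkey), int(amount)))

        def incoming_coin_is_replayable(self, script_pubkey, amount):
            # True if an existing NOINPUT sig could be rebound to sweep this coin
            return (bytes(script_pubkey), int(amount)) in self._signed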

That "nightmare" could be stopped at either the first step or the
last step:

 * if we "tag" addresses that can be spent via NOINPUT then having an
   exchange ban those addresses doesn't also impact regular
   taproot/schnorr addresses, though it does mean you can tell when
   someone is using a protocol like eltoo that might need to make use
   of NOINPUT signatures.  This way exchanges and wallets could simply
   not provide NOINPUT capable addresses in the first place normally,
   and issue very large warnings when asked to send money to one. That's
   not a problem for eltoo, because all the NOINPUT-capable address eltoo
   needs are internal parts of the protocol, and are spent automatically.

 * or we could make it so NOINPUT signatures aren't replayable on
   different transactions, at least by third parties. one way of doing
   this might be to require 

Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-13 Thread Russell O'Connor via bitcoin-dev
Hi Matt,

On Mon, Mar 11, 2019 at 10:23 PM Matt Corallo 
wrote:

> I think you may have misunderstood part of the motivation. Yes, part of
> the motivation *is* to remove OP_CODESEPARATOR wholesale, greatly
> simplifying the theoretical operation of checksig operations (thus somewhat
> simplifying the implementation but also simplifying analysis of future
> changes, such as sighash-caching code).
>

I see.  I was under the mistaken impression that the concerns about
OP_CODESEPARATOR were simply due to the vulnerability it induces.

I'll say it now then: Simplifying the theoretical operation of Bitcoin is
not a sufficient reason to make changes to the consensus rules, and it is
most certainly not a sufficient reason to remove usable op codes.

Had I understood that this was your motivation I would have presented my
opinion earlier. I understand that the OP_CODESEPARATOR vulnerability is
quite serious and making it non-standard while we address the problem is a
good idea (hence the reason why I never objected before now).

What I don't understand is why you feel that avoiding flushing the sigcache
is so critical that you are willing to go through a risky consensus change
just to achieve it?  The sigcache is effectively flushed for each input of
a transaction anyways, so what's the big deal about flushing it during
Script execution as well?


> I think a key part of the analysis here is that no one I've spoken to (and
> we've been discussing removing it for *years*, including many attempts at
> coming up with reasons to keep it) is aware of any real proposals to use
> OP_CODESEPARATOR, let alone anyone using it in the wild. Hiding data in
> invalid public keys is a long-discussed-and-implemented idea (despite its
> discouragement, not to mention it appears on the chain in many places).
>

Well you've spoken to me now, and I believe I have given you good reasons
to keep it.  We all used to think that OP_CODESEPARATOR was a useless
operation that no one in their right mind would ever use, but it turns out
that we were wrong.  Lesson learned.  We should be more humble about
considering these sorts of changes in the future because it seems we might
not understand Bitcoin as well as we think we do.  At the very least I was
caught by surprise by the utility of OP_CODESEPARATOR.

You misunderstand my point regarding invalid public keys.  My point is that
if no one has spoken up about the invalid public key issue on this mailing
list, something that we know really does affect people, why do you expect
that people would have spoken up about OP_CODESEPARATOR affecting them?


> It would end up being a huge shame to have all the OP_CODESEPARATOR mess
> left around after all the effort that has gone into removing it for the
> past few years, especially given the stark difference in visibility of a
> fork when compared to a standardness change.
>
> As for your specific proposal of increasing the weight of anything that
> has an OP_CODESEPARATOR in it by the cost of an additional (simple) input,
> this doesn't really solve the issue. After all, if we're assuming some user
> exists who has been sending money, unspent, to scripts with
> OP_CODESEPARATOR to force signatures to commit to whether some other
> signature was present and who won't see an (invariably media-covered)
> pending soft-fork in time to claim their funds, we should also assume such
> a user has pre-signed transactions which are time-locked and claim a number
> of inputs and have several paths in the script which contain
> OP_CODESEPARATOR, rendering their transactions invalid.
>

Agreed, that's why we will want to not simply count the OP_CODESEPARATORs,
but rather count the maximum number of OP_CODESEPARATORs that can be
executed through any of the various possible OP_IF branches.  Adding
this sort of control-flow analysis is pretty simple. It just requires a
small stack of pairs of numbers and a linear traversal through the Script.
This sort of OP_IF control-flow analysis ought to have been done for
counting CHECKSIG operations, but unfortunately it is too late for that
now.  I could prototype the sort of analysis I have in mind if you think
that would be helpful.
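
Something along these lines (a Python sketch of the idea, not the prototype
itself; it assumes pushdata has already been stripped out and that the script
is well-formed with at most one OP_ELSE per OP_IF):

    OP_IF, OP_NOTIF, OP_ELSE, OP_ENDIF, OP_CODESEPARATOR = 0x63, 0x64, 0x67, 0x68, 0xab

    def max_executed_codeseparators(opcodes):
        count = 0    # OP_CODESEPARATORs executed on the current path
        stack = []   # one (count_at_if, best_branch) pair per open OP_IF/OP_NOTIF
        for op in opcodes:
            if op in (OP_IF, OP_NOTIF):
                stack.append((count, count))
            elif op == OP_ELSE:
                at_if, best = stack.pop()
                stack.append((at_if, max(best, count)))  # close the branch just taken
                count = at_if                            # the other branch restarts here
            elif op == OP_ENDIF:
                _at_if, best = stack.pop()
                count = max(best, count)                 # keep the worst-case branch
            elif op == OP_CODESEPARATOR:
                count += 1
        return count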

In fact, it is really alternating uses of OP_CODESEPARATOR and CheckSig
operations that are problematic, so it is probably worth attempting to count
these pairs rather than just OP_CODESEPARATORs.


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-13 Thread Russell O'Connor via bitcoin-dev
On Tue, Mar 12, 2019 at 6:39 PM Jacob Eliosoff 
wrote:

> Also, if future disabling isn't the point of making a tx type like
> OP_CODESEPARATOR non-standard - what is?  If we're committed to indefinite
> support of these oddball features, what do we gain by making them hard to
> use/mine?
>

The purpose of making OP_CODESEPARATOR non-standard was to partly mitigate
the risk of the vulnerability that OP_CODESEPARATOR induces while we
consider how to patch it.

>


Re: [bitcoin-dev] Sighash Type Byte; Re: BIP Proposal: The Great Consensus Cleanup

2019-03-13 Thread Russell O'Connor via bitcoin-dev
Hi Matt,

(I moved your comment to this thread, where I think it is more relevant).

This is a fair point.  I concede that as far as Sighash Type Byte is
concerned, the type of change is fairly similar to BIP 68 (though I don't
think the argument applies to OP_CODESEPARATOR).
I might rephrase what you say as "invalidating otherwise-unusable bits of
the protocol".  I don't quite know the right phrasing that captures both
the insecure and redundant aspects of the protocol.  I'm willing to accept
that nSequence numbers (as they originally were), NOP1-NOP10 and these
extra sighash types can all be classified as redundant aspects of the
Bitcoin protocol.

I still think the alternative proposal of caching the sha256 midstate is
the better choice.  We should strive to avoid changing the consensus rules
when we have reasonable alternatives to achieve our goals. However, I now
see that this proposal isn't entirely unprecedented.
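
(For what I mean by caching the midstate, roughly -- hashlib is used here only
to illustrate the idea; this is not Bitcoin Core's code:)

    import hashlib

    def make_midstate(shared_prefix: bytes):
        # hash the part of the sighash preimage shared across signature
        # checks exactly once
        h = hashlib.sha256()
        h.update(shared_prefix)
        return h

    def sighash_with_suffix(midstate, suffix: bytes) -> bytes:
        h = midstate.copy()       # cheap copy of the cached midstate
        h.update(suffix)
        return hashlib.sha256(h.digest()).digest()   # double-SHA256, as Bitcoin does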

On Tue, Mar 12, 2019 at 5:08 PM Matt Corallo 
wrote:

> Note that even your carve-outs for OP_NOP is not sufficient here - if you
> were using nSequence to tag different pre-signed transactions into
> categories (roughly as you suggest people may want to do with extra sighash
> bits) then their transactions could very easily have become
> un-realistically-spendable. The whole point of soft forks is that we
> invalidate otherwise-unused bits of the protocol. This does not seem
> inconsistent with the proposal here.
>
> > On Mar 9, 2019, at 13:29, Russell O'Connor 
> wrote:
> > Bitcoin has *never* made a soft-fork, since the time of Satoshi, that
> invalidated transactions that send secured inputs to secured outputs
> (excluding uses of OP_NOP1-OP_NOP10).
>

On Fri, Mar 8, 2019 at 10:57 AM Russell O'Connor 
wrote:

> On Thu, Mar 7, 2019 at 2:57 PM Matt Corallo 
> wrote:
>
>> I can't say I'm particularly married to this idea (hence the alternate
>> proposal in the original email), but at the same time the lack of
>> existing transactions using these bits (and the redundancy thereof -
>> they don't *do* anything special) seems to be pretty strong indication
>> that they are not in use. One could argue a similarity between these
>> bits and OP_NOPs - no one is going to create transactions that require
>> OP_NOP execution to be valid as they are precisely the kind of thing
>> that may get soft-forked to have a new meaning. While the sighash bits
>> are somewhat less candidates for soft-forking,
>
>
> I don't think "somewhat less candidates for soft-forking" is a fair
> description.  These bits are essentially unsuitable for soft-forking within
> legacy Script.
>
> I don't think "someone
>> may have shoved random bits into parts of their
>> locked-for-more-than-a-year transactions" is sufficient reason to not
>> soft-fork something out.
>
>
> I disagree. It is sufficient.
>
> When was the last time Bitcoin soft-forked out working transactions that
> sent funds from securely held UTXOs to securely held UTXOs (aside from
> those secured by Scripts using the reserved OP_NOP1-OP_NOP10)?  AFAIK it
> has never occurred since the time of Satoshi, even for the most
> hypothetical of transactions.  It is my understanding is that Bitcoin Core
> would never do such a thing unless the security of Bitcoin protocol itself
> was under existential threat (see OP_CODESEPARATOR) and even then Bitcoin
> Core would only soft-fork out the minimal amount necessary to safely
> diffuse such a threat.
>
> Since the above soft-fork isn't addressing addressing any such threat
> (that I'm aware of), and could hypothetically destroy other people money,
> it crosses a line I thought we were never supposed to cross.
>
>>
>> Obviously, actually *seeing* it used in
>> practice or trying to fork them out in a fast manner would be
>> unacceptable, but neither is being proposed here.
>>
>
> Perhaps you don't see them used in the blockchain because the users
> trying to use them are caught up by the fact that they are not being
> relayed by default (violating SCRIPT_VERIFY_STRICTENC) and are having
> difficulty redeeming them.
> You cannot first make transactions non-standard and then use the fact that
> you don't see them being used to justify a soft-fork.
>
> I know of users who have their funds tied up due to other changes in
> Bitcoin Core's default relay policy.  I believe they are waiting for their
> funds to become valuable enough to go through the trouble of having them
> directly mined.  Shall we now permanently destroy their funds too, before
> they have a chance to get their transactions mined?
>
>


Re: [bitcoin-dev] OP_CODESEPARATOR Re: BIP Proposal: The Great Consensus Cleanup

2019-03-13 Thread Gregory Maxwell via bitcoin-dev
On Wed, Mar 13, 2019 at 12:42 AM Jacob Eliosoff via bitcoin-dev
 wrote:
> Also, if future disabling isn't the point of making a tx type like 
> OP_CODESEPARATOR non-standard - what is?  If we're committed to indefinite 
> support of these oddball features, what do we gain by making them hard to 
> use/mine?

It makes them infeasible to abuse without miner assistance... which
doesn't fix them, but in practice greatly reduces the risk they create
and allows efforts to improve the system to be allocated to other, more
pressing issues.

> I see questions like "Is it possible someone's existing tx relies on this?" 
> as overly black-and-white.  We all agree it's possible: the question is how 
> likely, vs the harms of continued support - including not just security risks 
> but friction on other useful changes, safety/correctness analyses, etc.

Don't underestimate the value of taking a principled position that
*strongly* avoids confiscating user funds.  Among many other benefits,
being cautious about this avoids creating a situation where people are
demanding human intervention to restore improperly lost funds, and the
associated effort wasted debating that.

It's true that most other cryptocurrencies proceed without any such
caution or care -- e.g. bcash recently confiscated all funds
accidentally sent to segwit using Bitcoin addresses, a result of their
reckless address aliasing and of promoting the standardness
rule that made those txn non-standard before segwit, without
considering the implications -- but they're not the standard we should
hold Bitcoin to...

> Again, the point being not to throw caution to the wind, but that a case like 
> this, where extensive research unearthed zero users, is taking caution too far.

All things in balance: Codeseparator and its related costs are not an
especially severe problem. The arguments on both sides of this point
have enough merit to be worth discussing, at least.