Re: [bitcoin-dev] Alternative way to count sigops

2018-02-16 Thread Gregory Maxwell via bitcoin-dev
On Fri, Feb 16, 2018 at 10:49 PM, Johnson Lau via bitcoin-dev
 wrote:
> Since we have a block weight limit of 4,000,000 and sigop limit of 80,000,
> each sigop could not use more than 50 weight unit on average. For new script
> proposals we could count the actual number of sigop at execution (i.e. skip
> unexecuted sigop, skip 0-size signature, count the actual checksig
> operations in multi-sig), and make sure the number of executed sigop * 50 is
> not greater than the size of the input.

We have had a related policy rule in Bitcoin Core for some time now:
the weight of a transaction for the purpose of mining is
max(weight, lambda*sigops), though we set lambda a bit lower than makes
sense due to how checkmultisig is counted.  This policy rule replaced
an earlier one which was almost equivalent to your proposal: it
rejected transactions with too many sigops per byte, but we found it
blocked actual, more or less sensible transactions.
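The policy rule described above can be sketched as follows; the lambda value here is illustrative, not the exact constant Bitcoin Core uses.

```python
# Sketch of the mining-policy rule: a transaction's effective weight is
# the larger of its serialized weight and a per-sigop cost.
# LAMBDA is a hypothetical constant chosen for illustration only.
LAMBDA = 50  # weight units charged per sigop (illustrative)

def effective_weight(tx_weight: int, sigops: int) -> int:
    return max(tx_weight, LAMBDA * sigops)

# A small transaction packed with sigops is charged as if it were larger:
assert effective_weight(400, 20) == 1000
# A normal transaction is simply charged its real weight:
assert effective_weight(800, 2) == 800
```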

Going forward I don't think this is a great framework.  It works if
the only expensive operations all involve large input data, but I
think many proposals people have made for new operations would have
computational cost which requires relatively small amounts of
additional input-- aggregation is just one fairly minor example.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Alternative way to count sigops

2018-02-16 Thread Johnson Lau via bitcoin-dev
Short history


Satoshi introduced sigops counting as a softfork to limit the number of 
signature operations in a block. He statically counted all 
OP_CHECK(MULTI)SIG(VERIFY) in both scriptSig and scriptPubKey, assumed an 
OP_CHECKMULTISIG is equivalent to 20 OP_CHECKSIG, and enforced a block limit of 
20,000 sigops. The counting is not contextual, i.e. one doesn’t need the UTXO 
set to determine the number of sigops. The counting was also static, so one 
doesn’t need to execute a script in order to count sigops. However, this is 
completely wrong for a few reasons: a) opcodes in scriptPubKey are not 
executed; b) scriptPubKeys of spent UTXOs, which are actually executed, are not 
counted at all; c) it greatly overestimates the cost of multi-sig; d) it counts 
sigops in unexecuted branches.
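The legacy static counting described above can be sketched as a simple scan over opcode values, with no execution involved. Opcode byte values follow the Bitcoin script encoding; a faithful counter must also skip over push-data payloads, which this sketch glosses over by assuming the input already contains only opcodes.

```python
# Sketch of Satoshi's static legacy sigop counting: scan the raw opcodes
# of scriptSig and scriptPubKey without executing them, and charge every
# OP_CHECKMULTISIG(VERIFY) the worst case of 20 sigops.
OP_CHECKSIG = 0xac
OP_CHECKSIGVERIFY = 0xad
OP_CHECKMULTISIG = 0xae
OP_CHECKMULTISIGVERIFY = 0xaf
MAX_PUBKEYS_PER_MULTISIG = 20

def legacy_sigop_count(opcodes):
    """Count sigops statically over a list of opcode byte values."""
    count = 0
    for op in opcodes:
        if op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            count += MAX_PUBKEYS_PER_MULTISIG  # worst case, always
    return count

# One CHECKSIG plus one CHECKMULTISIG is charged 1 + 20 = 21 sigops,
# even if neither opcode would ever be executed:
assert legacy_sigop_count([OP_CHECKSIG, OP_CHECKMULTISIG]) == 21
```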

As P2SH was introduced, sigop counting also covered the sigops in the 
redeemScript. This is good because the redeemScript is what is actually being 
executed. It also improved the counting of OP_CHECKMULTISIG: if it is in a 
certain canonical form, the number of public keys is counted instead of 
assuming 20. On the other hand, counting sigops becomes impossible without the 
UTXO set, since one needs the UTXO to identify P2SH inputs. Also, the canonical 
OP_CHECKMULTISIG counting is not quite elegant and created more special cases 
in the code.
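The "accurate" P2SH-era multisig counting can be sketched like this: when OP_CHECKMULTISIG immediately follows a small-integer push (OP_1 through OP_16, i.e. the pubkey count in a canonical n-of-m script), charge that many sigops instead of the worst-case 20. The special-casing on the previous opcode is exactly the inelegance mentioned above.

```python
# Sketch of the canonical ("accurate") OP_CHECKMULTISIG counting.
# In a canonical n-of-m script the opcode right before OP_CHECKMULTISIG
# is OP_m, the number of public keys.
OP_1, OP_16 = 0x51, 0x60
OP_CHECKMULTISIG = 0xae

def accurate_multisig_sigops(prev_op, op):
    if op != OP_CHECKMULTISIG:
        return 0
    if prev_op is not None and OP_1 <= prev_op <= OP_16:
        return prev_op - OP_1 + 1  # number of pubkeys actually pushed
    return 20  # non-canonical form: fall back to the worst case

# A canonical 2-of-3 script ends in "... OP_3 OP_CHECKMULTISIG":
assert accurate_multisig_sigops(0x53, OP_CHECKMULTISIG) == 3
# Without the canonical small-integer push, the worst case applies:
assert accurate_multisig_sigops(None, OP_CHECKMULTISIG) == 20
```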

Segwit (BIP141) scaled the legacy sigop limit by 4x. So every legacy sigop 
becomes 4 new sigops, with a block limit of 80,000 new sigops. P2WPKH is 
counted as 1 new sigop, and P2WSH is counted in the same way as P2SH.
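The rescaling is just arithmetic, but it is worth pinning down:

```python
# Sketch of BIP141's rescaling: legacy sigops are multiplied by 4, in
# line with the 4x scaling from bytes to weight units.
WITNESS_SCALE_FACTOR = 4
LEGACY_BLOCK_SIGOP_LIMIT = 20_000

def new_style_sigops(legacy_sigops: int) -> int:
    return legacy_sigops * WITNESS_SCALE_FACTOR

# The legacy block limit of 20,000 becomes 80,000 new-style sigops:
assert new_style_sigops(LEGACY_BLOCK_SIGOP_LIMIT) == 80_000
```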



Problem


We now have multiple 2nd-generation script proposals, such as BIP114, BIP117, 
taproot, etc. BIP114 and taproot allow static sigop counting, but BIP117 does 
not, as it requires execution to determine what would be run as a script (like 
OP_EVAL). As we want to allow more complicated script functions, static sigop 
counting might not be easy. However, we still want to have a limit to avoid 
unexpected DoS attacks.




Proposal


Since we have a block weight limit of 4,000,000 and a sigop limit of 80,000, 
each sigop cannot use more than 50 weight units on average. For new script 
proposals we could count the actual number of sigops at execution (i.e. skip 
unexecuted sigops, skip 0-size signatures, count the actual checksig operations 
in multi-sig), and make sure the number of executed sigops * 50 is not greater 
than the size of the input.

The minimal size of each input is 32 (prevout.hash) + 4 (prevout.n) + 4 
(nSequence) + 1 (empty scriptSig) = 41 bytes, or 164 weight units. So the new 
rule would require that (164 + input witness size) >= (actual_sigops * 50). 
This is a per-input limit, as script validation is parallel.
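The proposed per-input rule can be sketched directly from the numbers above; the function name and the example witness size are illustrative.

```python
# Sketch of the proposed per-input sigop budget: the input's weight
# (the fixed 41-byte non-witness part scaled by 4, plus witness bytes,
# which count 1 weight unit each) must cover 50 weight units per
# executed sigop.
WEIGHT_PER_SIGOP = 50
BASE_INPUT_WEIGHT = (32 + 4 + 4 + 1) * 4  # = 164 weight units

def input_sigop_budget_ok(witness_size: int, executed_sigops: int) -> bool:
    return (BASE_INPUT_WEIGHT + witness_size
            >= executed_sigops * WEIGHT_PER_SIGOP)

# A bare input with an empty witness affords at most 3 executed sigops:
assert input_sigop_budget_ok(0, 3)
assert not input_sigop_budget_ok(0, 4)
# A typical single-sig witness (~107 bytes, illustrative) easily covers
# its one sigop:
assert input_sigop_budget_ok(107, 1)
```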

Since a compressed key is 33 bytes and a normal compact signature is 64 bytes, 
the 1:50 ratio should be more than enough to allow any normal use of CHECKSIG, 
unless people are doing weird things like 2DUP 2DUP 2DUP……..CHECKSIG CHECKSIG 
CHECKSIG CHECKSIG, which would have many sigops with a relatively small 
witness. Interactive per-tx signature aggregation allows 64 bytes/tx of 
signature, and per-block non-interactive signature aggregation allows 32 
bytes/signature 
(https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014272.html 
). In such cases, the 1:50 ratio might not be enough if many signatures are 
aggregated. Depending on the number of sigops we could tolerate, the 1:50 
ratio might be reduced to 1:32 or lower to make sure legitimate use would 
never hit the limit. I think 32 is reasonable as it is about the size of a 
public key, which would guarantee that each pubkey gets a sigop slot.

So a relay node could be certain that a tx won’t spend excessive CPU power just 
by looking at its size. If it spends too much, it is invalid and script 
execution can be terminated early.

