On Sun, Oct 06, 2019 at 08:46:59AM +0000, ZmnSCPxj wrote:
> > Obviously with care you can get the computation right. But at that point 
> > what's
> > the actual advantage over OP_CAT?
> >
> > We're limited by the size of the script anyway; if the OP_CAT output size
> > limit is comparable to that, then for almost anything you could use
> > SHA256STREAM on you could just as easily use OP_CAT, followed by a single
> > OP_SHA256.
> 
> Theoretically, `OP_CAT` is less efficient.
> 
> In cases where the memory area used to back the data cannot be resized, new 
> backing memory must be allocated elsewhere and the existing data copied.
> This leads to possible O(n^2) behavior for `OP_CAT` (the degenerate case 
> where we add 1 byte per `OP_CAT` and each time find that the memory area 
> currently in use exactly fits the data and cannot be resized in place).
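
(For concreteness, a rough sketch of the copy cost in that degenerate case,
assuming every append reallocates and copies the whole existing buffer;
illustrative only:)

    # Append 1 byte at a time, where each append allocates fresh memory
    # and copies the entire existing buffer into it.
    def bytes_copied(n):
        return sum(range(1, n + 1))  # 1 + 2 + ... + n = n*(n+1)/2, O(n^2)

    assert bytes_copied(520) == 520 * 521 // 2  # 135,460 byte-copies for 520 bytes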

Even in that degenerate case, allocators also free memory, so the old backing
buffers can be reused rather than piling up.

Anyway, every execution step in script evaluation has a maximum output size,
and the number of steps is limited. At worst you can allocate the entire
possible stack up-front for relatively little cost (e.g., it fits in the MB or
two that is a common L2 cache size).
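
As a back-of-the-envelope check, using Bitcoin's current consensus limits of
520-byte stack elements and 1000 combined stack/altstack entries (a sketch,
not a spec):

    MAX_SCRIPT_ELEMENT_SIZE = 520   # consensus cap on one stack element
    MAX_STACK_SIZE = 1000           # consensus cap on stack + altstack entries

    worst_case = MAX_SCRIPT_ELEMENT_SIZE * MAX_STACK_SIZE
    assert worst_case == 520_000    # ~508 KiB, comfortably inside 1-2 MB of L2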

> Admittedly a sufficiently-limited maximum `OP_CAT` output would be helpful 
> in reducing the worst-case `OP_CAT` behavior.
> The question is what limit would be reasonable.
> 64 bytes feels too small if one considers Merkle tree proofs, due to 
> mentioned issues of lack of typechecking.

256 bytes is more than enough for even the most complex summed Merkle tree
with 512-bit hashes and full-sized sum commitments. Yet that's still less than
the ~500-byte limit proposed elsewhere.
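
(A quick sanity check of that claim; the hash size follows from 512 bits, and
the 64-bit sum commitment size is an assumption for illustration:)

    hash_size = 512 // 8    # a 512-bit hash is 64 bytes
    sum_size = 8            # assume a 64-bit sum commitment per child
    inner_node = 2 * (hash_size + sum_size)  # both children concatenated
    assert inner_node == 144                 # well under a 256-byte limit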

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org
