Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Trey,

> * something like OP_PUSHSCRIPT which would remove the need for the
> introspection of the prevout's script and avoids duplicating data in
> the witness
> * some kind of OP_MERKLEUPDATEVERIFY which checks a merkle proof for a
> leaf against a root and checks if replacing the leaf with some hash
> using the proof yields a specified updated root (or instead, a version
> that just pushes the updated root)
> * if we really wanted to golf the size of the script, then possibly a
> limited form of OP_EVAL if we can't figure out another way to split up
> the different spend paths into different tapleafs while still being able
> to do the recursive covenant; but the script and the vk would still be
> significant, so it's not actually that much benefit per-batch

A thing I had been musing on is to reuse pay-to-contract to store a commitment 
to the state.

As we all know, in Taproot, the Taproot output script is just the public key 
corresponding to the pay-to-contract of the Taproot MAST root and an internal 
public key.

The internal public key can itself be a pay-to-contract, where the contract 
being committed to would be the state of some covenant.

One could then introduce an opcode which is given an "internal-internal" 
pubkey (i.e. the pubkey behind the pay-to-contract of the covenant state, 
which when combined with that state serves as the internal pubkey for the 
Taproot construct), a current state, and an optional expected new state.
It verifies that the Taproot internal pubkey is indeed a pay-to-contract of 
the current state on the internal-internal pubkey.
If the optional expected new state is given, then it also recomputes the 
pay-to-contract of the new state on the same internal-internal pubkey, 
yielding a new Taproot internal pubkey; recomputes the pay-to-contract of the 
same Taproot MAST root on that new internal pubkey; and checks that the first 
output commits to the result.

Basically it retains the same MASTed set of Tapscripts and the same 
internal-internal pubkey (which can be used to escape the covenant, in case a 
bug is found, if it is an n-of-n of all the interested parties, or otherwise 
should be a NUMS point if you trust the tapscripts are bug-free), only 
modifying the covenant state.
The covenant state is committed to on the Taproot output, indirectly by two 
nested pay-to-contracts.

With this, there is no need for quining and `OP_PUSHSCRIPT`.
The mechanism only needs some way to compute the new state from the old state.
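
For concreteness, here is a rough Python sketch of the nested pay-to-contract 
construction (simplified untagged hashes and made-up state values stand in for 
the real BIP341-style tagged-hash tweaks; this is illustrative, not consensus 
code):

```python
import hashlib

P = 2**256 - 2**32 - 977  # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None  # inverse points
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(pt, k):
    r = None
    while k:
        if k & 1: r = point_add(r, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return r

def pay_to_contract(pubkey, commitment):
    """P' = P + H(P || commitment)*G  (untagged hash for brevity)."""
    t = int.from_bytes(hashlib.sha256(
        pubkey[0].to_bytes(32, 'big') + commitment).digest(), 'big') % N
    return point_add(pubkey, point_mul(G, t))

# internal-internal key: an n-of-n escape key or a NUMS point in practice
p_ii = point_mul(G, 0xC0FFEE)
state = hashlib.sha256(b'covenant state v1').digest()
mast_root = hashlib.sha256(b'taproot merkle root').digest()

internal = pay_to_contract(p_ii, state)            # commits to covenant state
output_key = pay_to_contract(internal, mast_root)  # ordinary Taproot tweak

# A state transition keeps p_ii and mast_root, swapping only the state:
new_state = hashlib.sha256(b'covenant state v2').digest()
new_output_key = pay_to_contract(pay_to_contract(p_ii, new_state), mast_root)
assert output_key != new_output_key
```

The opcode described above would recompute `output_key` from `p_ii` and the 
current state to validate the input being spent, and `new_output_key` from 
`p_ii` and the new state to validate the first output.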

In addition, you can split up the control script among multiple Tapscript 
branches and only publish onchain (== spend onchain bytes) the one you need for 
a particular state transition.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-04 Thread Russell O'Connor via bitcoin-dev
On Fri, Nov 4, 2022 at 4:04 PM Trey Del Bonis via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
> Instead of that approach, I assume we have fairly granular transaction
> introspection opcodes from a list in Elements [2] (which seem like they
> aren't actually used in mainnet Liquid?)


These opcodes went live on Liquid along with Taproot <
https://blog.liquid.net/taproot-on-liquid-is-now-live/>, so feel free to
try them out on Elements or Liquid.

> One complicated part is the actual proof verification.  I had considered
> looking into what it would take to build a verifier for a modern proof
> system if we used pairings as a primitive, but it turns out even that is
> pretty involved even in a higher level language (at least for PLONK [3])
> and would be error-prone when trying to adapt the code for new circuits
> with differently-shaped public inputs.  The size of the code on-chain
> alone would probably make it prohibitively expensive, so it would be a
> lot more efficient just to assume we can introduce a specific opcode for
> doing a proof verification implemented natively.  The way I assumed it
> would work is taking the serialized proof, a verification key, and the
> public input as separate stack items.  The public input is the
> concatenation of the state and deposit commitments we took from the
> input, the batch post-state commitment (provided as part of the
> witness), data from transaction outputs corresponding to
> internally-initiated withdrawals from the rollup, and the rollup batch
> data itself (also passed as part of the witness).
>

I'd be interested in knowing what sort of Simplicity Jets would facilitate
rollups.  I suppose some pairing-friendly curve operations would do.  It
might not make the first cut of Simplicity, but we will see.

Simplicity's design doesn't have anything like a 520 byte stack limit.
There is just going to be an overall maximum allowed Simplicity evaluation
state size of some value that I have yet to decide.  I would imagine it to
be something like 1MB.


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-04 Thread Trey Del Bonis via bitcoin-dev
Hi all, I figured I could answer some of these rollup questions,

There's a few different possibilities to make rollups work that have
different tradeoffs.  The core construction I worked out in [1] involves
a quine-ish recursive covenant that stores some persistent "state" as
part of the beginning of the script which is then updated by the
transaction according to rules asserted by the program and then
constructs a new scriptPubKey that we assert is on the first output. 
This is apparently not a new idea: I was recently made aware that the sCrypt
project does something similar to build a Solidity-style stateful contract
environment using OP_PUSH_TX.
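
A loose sketch of that recursive state-update pattern (purely illustrative: 
`rules_allow`, `SCRIPT_BODY`, and the script/output encodings here are 
placeholders, not real consensus logic):

```python
import hashlib

SCRIPT_BODY = b'<static covenant logic>'  # the fixed tail of the script

def script_pubkey(state: bytes) -> bytes:
    """The script is a push of `state` followed by a fixed body; the
    scriptPubKey (modeled here as a plain hash) commits to both."""
    script = len(state).to_bytes(1, 'big') + state + SCRIPT_BODY
    return hashlib.sha256(script).digest()

def rules_allow(old_state: bytes, new_state: bytes) -> bool:
    return True  # placeholder for the program's actual update rules

def transition_ok(old_state, new_state, first_output_spk) -> bool:
    """The covenant checks the update rules and that the first output
    pays to the same script with only the state slot swapped."""
    return (rules_allow(old_state, new_state)
            and first_output_spk == script_pubkey(new_state))

state = hashlib.sha256(b'rollup state 0').digest()
new_state = hashlib.sha256(b'rollup state 1').digest()
assert transition_ok(state, new_state, script_pubkey(new_state))
```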

Instead of that approach, I assume we have fairly granular transaction
introspection opcodes from a list in Elements [2] (which seem like they
aren't actually used in mainnet Liquid?) that can be used to implement
covenants since the 520 byte limit means it's hard to pull data out of
OP_PUSH_TX.  I also assume some math and byte manipulation opcodes
(OP_ADD, OP_MUL, OP_CAT, OP_RIGHT, OP_LEFT, OP_SUBSTR, etc.) that were
disabled years ago are re-added.

One complicated part is the actual proof verification.  I had considered
looking into what it would take to build a verifier for a modern proof
system if we used pairings as a primitive, but it turns out even that is
pretty involved even in a higher level language (at least for PLONK [3])
and would be error-prone when trying to adapt the code for new circuits
with differently-shaped public inputs.  The size of the code on-chain
alone would probably make it prohibitively expensive, so it would be a
lot more efficient just to assume we can introduce a specific opcode for
doing a proof verification implemented natively.  The way I assumed it
would work is taking the serialized proof, a verification key, and the
public input as separate stack items.  The public input is the
concatenation of the state and deposit commitments we took from the
input, the batch post-state commitment (provided as part of the
witness), data from transaction outputs corresponding to
internally-initiated withdrawals from the rollup, and the rollup batch
data itself (also passed as part of the witness).

The parameters used in the PLONK system for the one zk-rollup I looked
at give us a verification key size of 964 bytes and a proof size of 1088
bytes, which means they're larger than the 520-byte stack element size
limit, so we'd have to split each across multiple stack elements.  That
seems messy.  The worse issue, though, is that the public inputs would
probably blow way past the 520 byte stack element size limit, especially
if we wanted to pack a lot of batch txs in there.  One solution to that
is by designing the proof verification opcode to take multiple stack
elements, but the complexity to shuffle around the elements as we're
getting ready to verify the proof seems like it would be extremely error
prone and would further impact the size of the script.  The size of the
script alone is very roughly around 1000 bytes.
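
As a back-of-the-envelope check on those numbers (520 bytes is the standard 
stack element size limit; note that at that limit the 1088-byte proof 
actually spans three elements, and the key two):

```python
MAX_ELEM = 520  # standard stack element size limit

def to_stack_elements(blob: bytes) -> list:
    """Chunk a blob into the stack elements a native verification
    opcode would have to accept and reassemble."""
    return [blob[i:i + MAX_ELEM] for i in range(0, len(blob), MAX_ELEM)]

vk = bytes(964)      # PLONK verification key size cited above
proof = bytes(1088)  # PLONK proof size cited above

assert len(to_stack_elements(vk)) == 2
assert len(to_stack_elements(proof)) == 3
assert all(len(e) <= MAX_ELEM for e in to_stack_elements(proof))
```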

Other nice-to-haves:

* something like OP_PUSHSCRIPT which would remove the need for the
introspection of the prevout's script and avoids duplicating data in
the witness
* some kind of OP_MERKLEUPDATEVERIFY which checks a merkle proof for a
leaf against a root *and* checks if replacing the leaf with some hash
using the proof yields a specified updated root (or instead, a version
that just pushes the updated root)
* if we really wanted to golf the size of the script, then possibly a
limited form of OP_EVAL if we can't figure out another way to split up
the different spend paths into different tapleafs while still being able
to do the recursive covenant; but the script and the vk would still be
significant, so it's not actually that much benefit per-batch
* a negative relative timelock to prevent a sniping issue I outlined in
the doc
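
To make the OP_MERKLEUPDATEVERIFY idea concrete, here is a rough Python model 
of the semantics it might have (the `(sibling, leaf_is_right)` proof encoding 
and the plain-sha256 inner nodes are assumptions for illustration, not a 
spec):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_update(leaf, new_leaf, proof):
    """Walk the same proof path from both leaves, returning the
    (old_root, new_root) pair it implies."""
    old, new = h(leaf), h(new_leaf)
    for sibling, leaf_is_right in proof:
        if leaf_is_right:
            old, new = h(sibling + old), h(sibling + new)
        else:
            old, new = h(old + sibling), h(new + sibling)
    return old, new

def op_merkleupdateverify(root, leaf, new_leaf, proof, expected_new_root):
    old_root, new_root = merkle_update(leaf, new_leaf, proof)
    return old_root == root and new_root == expected_new_root

# Tiny 4-leaf tree example
lh = [h(x) for x in (b'a', b'b', b'c', b'd')]
n01, n23 = h(lh[0] + lh[1]), h(lh[2] + lh[3])
root = h(n01 + n23)
# Proof for leaf 'b' (index 1): sibling h('a') to its left, then n23 to its right
proof = [(lh[0], True), (n23, False)]
new_root = h(h(lh[0] + h(b'B')) + n23)
assert op_merkleupdateverify(root, b'b', b'B', proof, new_root)
```

The variant that just pushes the updated root would return `new_root` to the 
stack instead of comparing it against an expected value.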

It's possible that some of the introspection opcodes to look at
outputs could be replaced with OP_CHECKTEMPLATEVERIFY and putting a copy
of all the outputs on the stack, which combined with OP_PUSHSCRIPT means
I think we wouldn't need any of the Elements-style introspection opcodes
linked above, but it would be slightly messier and mean more data needs
to get duplicated in the witness.

It may be the case that there are enough issues with the above
requirements that the safer path is just to soft-fork in
Simplicity (or something like Chialisp, as was suggested for
consideration in a prior mailing list thread a while back [4]) with
support for the necessary transaction introspection and go from there.
Regardless of which option is decided upon, we'll need to use a
new witness version, since there are non-soft-forkable requirements in any
other case.

Moving on, I had not considered the possibility that a non-zk optimistic
rollup might be practical on Bitcoin.  I had assumed based on my
understanding of existing ones that the amount of statefulness required

Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-11-02 Thread AdamISZ via bitcoin-dev
Hi John,

Sorry for late feedback. Very much appreciated the in depth report!

So, I second Greg's main question, which I've really been thinking about a bit 
myself since starting to research this area more: it feels like the Bitcoin 
protocol research community (or, uh, some of it) should focus in on this 
question of: what is the minimal functionality required onchain (via, 
presumably soft fork) that enables something close to general purpose offchain 
contracting that is provable, ideally in zero knowledge, but at the very least, 
succinctly, with onchain crypto operations. An example might be: if we had 
verification of bilinear pairings onchain, combined with maybe some covenant 
opcode, does it give us enough to do something like a rollup/sidechain model 
with full client side computation and very compact state update and 
verification onchain? (To be clear: just made that up! there is certainly no 
deep theory behind that particular combination .. although I did see this [1] 
thread on *optimistic* + covenant).

Is the actual answer to this something like Simplicity? (Above my paygrade to 
answer, that's for sure!)

Ideally you would want (don't laugh) for this to be the 'soft fork to end all 
soft forks' so that innovation could all be then in higher layers.

As to rollups themselves: centralization in the sequencer/publisher of state 
updates seems to be a really big issue that's somewhat brushed under the 
carpet. Depending on the model, there are cases where it actually is a theft 
risk (e.g. full control of an onchain smart contract), but there's significant 
censorship risk at the very least, as well as availability/uptime risk. At the 
extreme, Optimism has a 'security model' [3] that is frankly laughable (though, 
no doubt it's possible that will radically change) and for things like Arbitrum 
you have centralized sequencers, where the claim is that it will migrate to a 
more decentralized model; maybe, but that's a huge part of the challenge here, 
so while it's nice to see the sexy 'fast, cheap, scale' aspect straight away, I 
feel like those models haven't done the hard part yet. I also think these 
optimistic L2 models have a 'fake finality' issue from my perspective; the 
delay needed onchain is how long it takes to *really* confirm. (E.g.: rollups 
look cool compared to sidechains from the pov of 'instant' instead of 
confirmations on a chain, but that seems a bit sleight-of-hand-y.)

It's notable to compare that with a payment-channels style L2 where 
decentralization and trustlessness are sine-qua-non and so the limitations are 
much more out in the open (e.g. the capacity tradeoff - while the 'instantness' 
is much more real perhaps, with the appropriate liveness caveat).

For the validity rollups, some of the above qualms don't apply, but afaik the 
concrete instantiations today still have this heavy sequencer/publisher 
centralization. Correct me if I'm wrong.

In any case, I do agree with a lot of people that some variant of this model 
(validity rollups) intuitively looks like a good choice, for the future, in 
comparison with other possible L2s that focus on *functionality* - with a mild 
censorship and centralization tradeoff perhaps.

And I'm maybe a bit heretical but I see no issue with using 1 of N security 
models for trusted setup here (note how it's probably different from base 
chain), so PLONK type stuff is just as, if not more, interesting than STARKS 
which aiui are pretty big and computationally heavy (sure, maybe that changes). 
So if that's true, it comes back to my first paragraph.

Cheers,
AdamISZ/waxwing

[1] https://nitter.it/salvatoshi/status/1537362661754683396
[3] https://community.optimism.io/docs/security-model/optimism-security-model/



--- Original Message ---
On Wednesday, October 12th, 2022 at 16:40, John Light via bitcoin-dev wrote:


> On Wed, Oct 12, 2022, at 9:28 AM, Greg Sanders wrote:
> 
> > Is there a one page cheat sheet of "asks" for transaction
> > introspection/OP_ZKP(?) and their uses both separately and together for
> > different rollup architectures?
> 
> 
> We do not have this yet. Trey Del Bonis wrote a more detailed technical post 
> about how those components would be used in a validity rollup, which was 
> cited in my report and can be found here:
> https://tr3y.io/articles/crypto/bitcoin-zk-rollups.html
> 
> But it'll take more research and design work to suss out those details you 
> asked for and put them into a nice cheatsheet. I like this idea though!


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-10-12 Thread John Light via bitcoin-dev
On Wed, Oct 12, 2022, at 9:28 AM, Greg Sanders wrote:
> Is there a one page cheat sheet of "asks" for transaction 
> introspection/OP_ZKP(?) and their uses both separately and together for 
> different rollup architectures?

We do not have this yet. Trey Del Bonis wrote a more detailed technical post 
about how those components would be used in a validity rollup, which was cited 
in my report and can be found here:
https://tr3y.io/articles/crypto/bitcoin-zk-rollups.html

But it'll take more research and design work to suss out those details you 
asked for and put them into a nice cheatsheet. I like this idea though!


Re: [bitcoin-dev] Validity Rollups on Bitcoin

2022-10-12 Thread Greg Sanders via bitcoin-dev
Thanks for the writeup John,

Is there a one page cheat sheet of "asks" for transaction
introspection/OP_ZKP(?) and their uses both separately and together for
different rollup architectures?

On Tue, Oct 11, 2022 at 11:52 AM John Light via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Today I am publishing "Validity Rollups on Bitcoin", a report I produced
> as part of the Human Rights Foundation's ZK-Rollup Research Fellowship.
>
> Here's the preface:
>
> > Ever since Satoshi Nakamoto first publicly announced bitcoin, its
> supporters, critics, and skeptics alike have questioned how the protocol
> would scale as usage increases over time. This question is more important
> than ever today, as blocks are increasingly full or close to full of
> transactions. So-called "Layer 2" (L2) protocols such as the Lightning
> Network have been deployed to take some transaction volume "offchain" but
> even Lightning needs to use _some_ bitcoin block space. It's clear that as
> bitcoin is adopted by more and more of the world's population (human and
> machine alike!) more block space will be needed. Another thread of inquiry
> concerns whether bitcoin's limited scripting capabilities help or hinder
> its value as electronic cash. Researchers and inventors have shown that the
> electronic cash transactions first made possible by bitcoin could be given
> new form by improving transaction privacy, supporting new types of smart
> contracts, and even creating entirely new blockchain-based assets.
> >
> > One of the results of the decade-plus research into scaling and
> expanding the capabilities of blockchains such as bitcoin is the invention
> of the validity rollup. Given the observed benefits that validity rollups
> have for the blockchains that have already implemented them, attention now
> turns to the question of whether they would be beneficial for bitcoin and
> existing bitcoin L2 protocols such as Lightning, too. We explore this
> question by examining validity rollups from several angles, including their
> history, how they work on a technical level, how they could be built on
> bitcoin, and what the benefits, costs, and risks of building them on
> bitcoin might be. We conclude that validity rollups have the potential to
> improve the scalability, privacy, and programmability of bitcoin without
> sacrificing bitcoin's core values or functionality as a peer-to-peer
> electronic cash system. Given the "trustless" nature of validity rollups as
> cryptographically-secured extensions of their parent chain, and given
> bitcoin's status as the most secure settlement layer, one could even say
> these protocols are a _perfect match_ for one another.
>
> You can find the full report here:
>
> https://bitcoinrollups.org
>
> Happy to receive any comments and answer any questions the bitcoin dev
> community may have about the report!
>
> Best regards,
> John Light


[bitcoin-dev] Validity Rollups on Bitcoin

2022-10-11 Thread John Light via bitcoin-dev
Hi all,

Today I am publishing "Validity Rollups on Bitcoin", a report I produced as 
part of the Human Rights Foundation's ZK-Rollup Research Fellowship.

Here's the preface:

> Ever since Satoshi Nakamoto first publicly announced bitcoin, its supporters, 
> critics, and skeptics alike have questioned how the protocol would scale as 
> usage increases over time. This question is more important than ever today, 
> as blocks are increasingly full or close to full of transactions. So-called 
> "Layer 2" (L2) protocols such as the Lightning Network have been deployed to 
> take some transaction volume "offchain" but even Lightning needs to use 
> _some_ bitcoin block space. It's clear that as bitcoin is adopted by more and 
> more of the world's population (human and machine alike!) more block space 
> will be needed. Another thread of inquiry concerns whether bitcoin's limited 
> scripting capabilities help or hinder its value as electronic cash. 
> Researchers and inventors have shown that the electronic cash transactions 
> first made possible by bitcoin could be given new form by improving 
> transaction privacy, supporting new types of smart contracts, and even 
> creating entirely new blockchain-based assets.
> 
> One of the results of the decade-plus research into scaling and expanding the 
> capabilities of blockchains such as bitcoin is the invention of the validity 
> rollup. Given the observed benefits that validity rollups have for the 
> blockchains that have already implemented them, attention now turns to the 
> question of whether they would be beneficial for bitcoin and existing bitcoin 
> L2 protocols such as Lightning, too. We explore this question by examining 
> validity rollups from several angles, including their history, how they work 
> on a technical level, how they could be built on bitcoin, and what the 
> benefits, costs, and risks of building them on bitcoin might be. We conclude 
> that validity rollups have the potential to improve the scalability, privacy, 
> and programmability of bitcoin without sacrificing bitcoin's core values or 
> functionality as a peer-to-peer electronic cash system. Given the "trustless" 
> nature of validity rollups as cryptographically-secured extensions of their 
> parent chain, and given bitcoin's status as the most secure settlement layer, 
> one could even say these protocols are a _perfect match_ for one another.

You can find the full report here:

https://bitcoinrollups.org

Happy to receive any comments and answer any questions the bitcoin dev 
community may have about the report!

Best regards,
John Light