[bitcoin-dev] Beyond Jets: Microcode: Consensus-Critical Jets Without Softforks

2022-03-21 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

It is entirely possible that I have gotten into the deep end and am now 
drowning in insanity, but here goes


Introduction
------------

Recent (Early 2022) discussions on the bitcoin-dev mailing
list have largely focused on new constructs that enable new
functionality.

One general idea can be summarized this way:

* We should provide a very general language.
  * Then later, once we have learned how to use this language,
we can softfork in new opcodes that compress sections of
programs written in this general language.

There are two arguments against this style:

1.  One of the most powerful arguments made by the "general" side
    of the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
* So, we should just provide a very general language and
  never softfork in any other change ever again.
2.  One of the most powerful arguments made by the "specific" side
    of the "general v specific" debate is that softforks are
    painful because people are going to keep reiterating the
    activation parameters debate in a memoryless process, so
    we want to keep the number of softforks low.
* So, we should just skip over the initial very general
  language and individually activate small, specific
  constructs, reducing the needed softforks by one.

By taking a page from microprocessor design, it seems to me
that we can use the same above general idea (a general base
language where we later "bless" some sequence of operations)
while avoiding some of the arguments against it.

Digression: Microcodes In CISC Microprocessors
----------------------------------------------

In the 1980s and 1990s, two competing microprocessor design
paradigms arose:

* Complex Instruction Set Computing (CISC)
  - Few registers, many addressing/indexing modes, variable
instruction length, many obscure instructions.
* Reduced Instruction Set Computing (RISC)
  - Many registers, usually only immediate and indexed
addressing modes, fixed instruction length, few
instructions.

In CISC, the microprocessor provided very application-specific
instructions, often with a small number of registers with
specific uses.
The instruction set was complicated, and often required
multiple specific circuits for each application-specific
instruction.
Instructions had varying sizes and varying number of cycles.

In RISC, the microprocessor provided fewer instructions, and
programmers (or compilers) are supposed to generate the code
for all application-specific needs.
The processor provided large register banks which could be
used very generically and interchangeably.
Instructions had the same size and every instruction took a
fixed number of cycles.

In CISC you usually had shorter code which could be written
by human programmers in assembly language or machine language.
In RISC, you generally had longer code, often difficult for
human programmers to write, and you *needed* a compiler to
generate it (unless you were very careful, or insane enough
that you could scroll over multiple pages of instructions
without becoming more insane), or else you might forget about
stuff like jump slots.

For the most part, RISC lost, since most modern processors
today are x86 or x86-64, an instruction set with varying
instruction sizes, varying number of cycles per instruction,
and complex instructions with application-specific uses.

Or at least, it *looks like* RISC lost.
In the 90s, Intel was struggling since their big beefy CISC
designs were becoming too complicated.
Bugs got past testing and into mass-produced silicon.
RISC processors were beating the pants off 386s in terms of
raw number of computations per second.

RISC processors had the major advantage that they were
inherently simpler, due to having fewer specific circuits
and filling up their silicon with general-purpose registers
(which are large but very simple circuits) to compensate.
This meant that processor designers could fit more of the
design in their merely human meat brains, and were less
likely to make mistakes.
The fixed number of cycles per instruction made it trivial
to create a fixed-length pipeline for instruction processing,
and practical RISC processors could deliver one instruction
per clock cycle.
Worse (for Intel), the simplicity of RISC meant that smaller and less
experienced teams could produce viable competitors to the
Intel x86s.

So what Intel did was to use a RISC processor, and add a
special Instruction Decoder unit.
The Instruction Decoder would take the CISC instruction
stream accepted by classic Intel x86 processors, and emit
RISC instructions for the internal RISC processor.
CISC instructions might be variable-length and take a variable
number of cycles, but the emitted RISC instructions were
individually fixed in length and cycle count.
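The decoder can be sketched in miniature as a lookup table from complex instructions to fixed sequences of simple micro-ops; all opcodes and micro-ops below are invented for illustration, not any real ISA:

```python
# Toy sketch of a CISC-to-RISC instruction decoder. All opcodes and
# micro-ops are invented; this only illustrates the expansion idea.

MICROCODE_TABLE = {
    # complex instruction -> fixed sequence of simple micro-ops
    "PUSH": ["SUB sp, sp, 4", "STORE r0, [sp]"],
    "POP":  ["LOAD r0, [sp]", "ADD sp, sp, 4"],
    "INC":  ["ADD r0, r0, 1"],
}

def decode(cisc_stream):
    """Expand a complex instruction stream into uniform micro-ops."""
    micro_ops = []
    for insn in cisc_stream:
        if insn not in MICROCODE_TABLE:
            raise ValueError("unknown instruction: " + insn)
        micro_ops.extend(MICROCODE_TABLE[insn])
    return micro_ops

# Three CISC instructions become five fixed-format micro-ops.
assert len(decode(["PUSH", "INC", "POP"])) == 5
```

This is also the shape a "microcode" for a script language would take: a consensus-visible table mapping blessed operation sequences to their expansions in the general base language.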

Re: [bitcoin-dev] Speedy Trial

2022-03-21 Thread vjudeu via bitcoin-dev
> I don't quite understand this part. I don't understand how this would make 
> your signature useless in a different context. Could you elaborate?

It is simple: if you vote by making transactions, then someone could capture 
one and broadcast it to the nodes. If your signature is "useless in a different 
context", then you can only send it to your own network; if it is sent 
anywhere else, it will be invalid, and therefore useless. Another reason to sign 
transactions, and not just some custom data, is to make them compatible with 
the "signet way of making signatures", the same as used in the signet challenge.

> I don't think any kind of chain is necessary to store this data.

Even if it is not needed, it is kind of "free" if you take transaction size 
into account, because each person moving coins on-chain could attach "OP_RETURN 
<commitment>" in TapScript, just to save commitments. Then, even if someone was 
not in your network from the very beginning, that person could still collect 
commitments and find out how they are connected with on-chain transactions.

> Perhaps one day it could be used for something akin to voting, but certainly 
> if we were going to implement this to help decide on the next soft fork, it 
> would very likely be a quite biased set of responders.

If it is ever implemented, it should be done in a similar way to 
difficulty: if you want 90%, you calculate what amount in satoshis is 
needed to reach that 90%, and update it every two weeks, based on all votes. In 
this way, you reduce floating-point operations to a bare minimum, and have a 
system where you can compare uint64 amounts to quickly get a "yes/no" answer to 
the question of whether something should be triggered (also, you can compress it 
to 32 bits in the same way as the 256-bit target is compressed).
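The 32-bit compression mentioned above can be sketched in the spirit of Bitcoin's compact "nBits" target encoding (1-byte size, 3-byte mantissa); this toy version ignores the sign-bit quirk of the real format, and the threshold value is illustrative:

```python
# Toy compact encoding of a satoshi threshold, modeled on Bitcoin's
# nBits format: 1 size byte + 3 mantissa bytes. Lossy below the top
# 3 bytes, exactly like the compressed 256-bit target.

def to_compact(value):
    size = (value.bit_length() + 7) // 8
    if size <= 3:
        mantissa = value << (8 * (3 - size))
    else:
        mantissa = value >> (8 * (size - 3))  # drop low-order bytes
    return (size << 24) | mantissa

def from_compact(compact):
    size = compact >> 24
    mantissa = compact & 0x00FFFFFF
    if size <= 3:
        return mantissa >> (8 * (3 - size))
    return mantissa << (8 * (size - 3))

# e.g. 90% of 21M BTC, expressed in satoshis (illustrative only)
threshold = 1_890_000_000_000_000
compact = to_compact(threshold)
assert compact < 2**32                     # fits in 32 bits
assert from_compact(compact) <= threshold  # compression rounds down
```

Comparing vote totals against `from_compact(compact)` is then a plain uint64 comparison, with no floating point involved.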

> But on that note, I was thinking that it might be interesting to have an 
> optional human readable message into these poll messages.

As I said, "OP_RETURN <commitment>" inside TapScript is enough to produce all 
commitments of arbitrary size for "free", so that on-chain transaction size is 
constant, no matter how large that commitment is. And about storage: you could 
create a separate chain for that, you could store it in the same way as LN 
nodes store data, or you could use something else; it doesn't really matter, 
because on-chain commitments could be constructed in the same way (also, as 
long as the transaction creator keeps those commitments secret, there is 
no way to get them; that means you can add them later if needed and easily 
pretend that "it was always possible").


Re: [bitcoin-dev] Covenants and feebumping

2022-03-21 Thread darosior via bitcoin-dev
Hi ZmnSCPxj,

Thanks for the feedback. The DLC idea is interesting, but you are centralizing 
the liveness requirements, effectively creating a SPOF: to bypass the revocation 
clause, there is no need to take down each and every watchtower anymore; just 
take down the oracle, and you are sure no revocation transaction can be pushed.


> Okay, let me see if I understand your concern correctly.
> When using a signature challenge, the concern is that you need to presign 
> multiple versions of a transaction with varying feerates.

I was thinking of having a hot key (in this case probably shared amongst the 
monitors) with which they would sign the right fee level at broadcast time. 
Pre-signing quickly requires too many signatures (and defeats the purpose of 
having covenants in the first place).

> And you have a set of network monitors / watchtowers that are supposed to 
> watch the chain on your behalf in case your ISP suddenly hates you for no 
> reason.
> The more monitors there are, the more likely that one of them will be 
> corrupted by a miner and jump to the highest-feerate version, overpaying fees 
> and making miners very happy.
> Such is third-party trust.
> Is my understanding correct?

Your understanding of the tradeoff is correct.

--- Original Message ---

On Thursday, March 17th, 2022 at 12:29 AM, ZmnSCPxj wrote:

> Good morning Antoine,
>
> > For "hot contracts" a signature challenge is used to achieve the same. I 
> > know the latter is imperfect, since
> >
> > the lower the uptime risk (increase the number of network monitors) the 
> > higher the DOS risk (as you duplicate
> >
> > the key).. That's why i asked if anybody had some thoughts about this and 
> > if there was a cleverer way of doing
> >
> > it.
>
> Okay, let me see if I understand your concern correctly.
>
> When using a signature challenge, the concern is that you need to presign 
> multiple versions of a transaction with varying feerates.
>
> And you have a set of network monitors / watchtowers that are supposed to 
> watch the chain on your behalf in case your ISP suddenly hates you for no 
> reason.
>
> The more monitors there are, the more likely that one of them will be 
> corrupted by a miner and jump to the highest-feerate version, overpaying fees 
> and making miners very happy.
>
> Such is third-party trust.
>
> Is my understanding correct?
>
> A cleverer way, which requires consolidating (but is unable to eliminate) 
> third-party trust, would be to use a DLC oracle.
>
> The DLC oracle provides a set of points corresponding to a set of feerate 
> ranges, and commits to publishing the scalar of one of those points at some 
> particular future block height.
>
> Ostensibly, the scalar it publishes is the one of the point that corresponds 
> to the feerate range found at that future block height.
>
> You then create adaptor signatures for each feerate version, corresponding to 
> the feerate ranges the DLC oracle could eventually publish.
>
> The adaptor signatures can only be completed if the DLC oracle publishes the 
> corresponding scalar for that feerate range.
>
> You can then send the adaptor signatures to multiple watchtowers, who can 
> only publish one of the feerate versions, unless the DLC oracle is hacked and 
> publishes multiple scalars (at which point the DLC oracle protocol reveals a 
> privkey of the DLC oracle, which should be usable for slashing some bond of 
> the DLC oracle).
>
> This prevents any of them from publishing the highest-feerate version, as the 
> adaptor signature cannot be completed unless that is what the oracle 
> published.
>
> There are still drawbacks:
>
> * Third-party trust risk: the oracle can still lie.
>
> * DLC oracles are prevented from publishing multiple scalars; they cannot be 
> prevented from publishing a single wrong scalar.
>
> * DLCs must be time bound.
>
> * DLC oracles commit to publishing a particular point at a particular fixed 
> time.
>
> * For "hot" dynamic protocols, you need the ability to invoke the oracle at 
> any time, not a particular fixed time.
>
> The latter probably makes this unusable for hot protocols anyway, so maybe 
> not so clever.
>
> Regards,
>
> ZmnSCPxj
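The oracle-gating described in the quote can be modeled numerically; the sketch below uses plain modular arithmetic rather than real secp256k1 adaptor signatures, and all values are illustrative:

```python
# Toy numeric model of the oracle-gated feerate scheme: one "adaptor"
# per feerate range, completable only with the scalar the oracle
# publishes for that range. Not real adaptor-signature cryptography.

import hashlib

P = 2**255 - 19  # illustrative prime modulus

def h(data):
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

# The oracle commits to one secret scalar per feerate range.
oracle_secrets = {r: h(b"oracle:" + r) for r in (b"low", b"mid", b"high")}

# The signer builds one adaptor per range: adaptor = signature - secret.
true_sig = h(b"signature over the presigned revocation tx")
adaptors = {r: (true_sig - s) % P for r, s in oracle_secrets.items()}

# At the target height, the oracle publishes exactly one scalar.
published_scalar = oracle_secrets[b"mid"]

# A watchtower can complete only the matching adaptor...
completed = (adaptors[b"mid"] + published_scalar) % P
assert completed == true_sig
# ...and cannot complete the higher-feerate version with it.
assert (adaptors[b"high"] + published_scalar) % P != true_sig
```

The gating is the point: no single watchtower can jump to the highest-feerate version unless the oracle actually attests to that feerate range.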
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_RETURN inside TapScript

2022-03-21 Thread Kostas Karasavvas via bitcoin-dev
Hi vjudeu,

There are use cases where your following assumption is wrong:  ".. and that
kind of data is useful only to the transaction maker."

No one really publishes the actual data with an OP_RETURN. They publish the
hash (typically a Merkle root) of that 1.5 GB of data. So the overhead is
just 32 bytes for arbitrarily large data sets. What you gain with these 32
bytes is that your hash is visible to anyone, and they can verify it without
the active participation of the hash publisher.
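A minimal sketch of such a commitment, hashing chunk by chunk into a single 32-byte Merkle root (a generic binary tree, not Bitcoin's exact transaction-tree rules):

```python
# Commit to arbitrarily many data chunks with one 32-byte Merkle root.

import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A thousand chunks, one constant-size commitment.
root = merkle_root([b"chunk-%d" % i for i in range(1000)])
assert len(root) == 32
```

Anyone holding the chunks and the tree structure can recompute the root and check it against the published 32 bytes, with no help from the publisher.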

Regards,
Kostas


On Sat, Mar 19, 2022 at 9:26 PM vjudeu via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > There are two use-cases for OP_RETURN: committing to data, and
> publishing data. Your proposal can only do the former, not the latter, and
> there are use-cases for both.
>
> Only the former is needed. Pushing data on-chain is expensive and that
> kind of data is useful only to the transaction maker. Also, the latter can
> be pushed on a separate chain (or even a separate layer that is not a chain
> at all).
>
> Also note that since Taproot we have the latter: we can spend by TapScript
> and reveal some public key and tapbranches. It is possible to push more
> than 80 bytes in this way, so why is a direct OP_RETURN needed, except for
> backward-compatibility? (for example in SegWit commitments)
>
> There is only one problem with spending by TapScript, when it comes to
> publishing data: only the first item is the public key. If we could use
> public keys instead of tapbranch hashes, we could literally replace
> "OP_RETURN <data>" with "<pubkey1> <pubkey2> ... <pubkeyN>".
> Then, we could use unspendable public keys to push data, so OP_RETURN would
> be obsolete.
>
> By the way, committing to data has a lot of use cases, for example the
> whole idea of NameCoin could be implemented on such OP_RETURN's. Instead of
> creating some special transaction upfront, people could place some hidden
> commitment and reveal that later. Then, there would be no need to produce
> any new coins out of thin air, because everything would be merge-mined by
> default, providing Bitcoin-level Proof of Work protection all the time,
> 24/7/365. Then, people could store those revealed commitments on their own
> chain, just to keep track of who owns which name. And then, that network
> could easily turn on and off all Bitcoin features as they please. Lightning
> Network on NameCoin? No problem, even the same satoshis could be used to
> pay for domains!
>
> On 2022-03-16 19:21:37 user Peter Todd  wrote:
> > On Thu, Feb 24, 2022 at 10:02:08AM +0100, vjudeu via bitcoin-dev wrote:
> > Since Taproot was activated, we no longer need separate OP_RETURN
> outputs to be pushed on-chain. If we want to attach any data to a
> transaction, we can create "OP_RETURN <data>" as a branch in the
> TapScript. In this way, we can store that data off-chain and we can always
> prove that it is connected with some taproot address that was pushed
> on-chain. Also, we can store more than 80 bytes for "free", because no such
> taproot branch will ever be pushed on-chain and used as an input. That
> means we can use "OP_RETURN <1.5 GB of data>", create some address having
> that taproot branch, and later prove to anyone that such "1.5 GB of data"
> is connected with our taproot address.
>
> There are two use-cases for OP_RETURN: committing to data, and publishing
> data.
> Your proposal can only do the former, not the latter, and there are
> use-cases
> for both.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>


Re: [bitcoin-dev] Speedy Trial

2022-03-21 Thread Billy Tetrud via bitcoin-dev
Good Evening ZmnSCPxj,

>  I need to be able to invalidate the previous signal, one that is tied to
the fulfillment of the forwarding request.

You're right that there's some nuance there. You could add a block hash
into the poll message and define things so any subsequent poll message sent
with a newer block hash overrides the old poll message at the block with
that hash and later blocks. That way if a channel balance changes
significantly, a new poll message can be sent out.

Or you could remove the ability to specify fractional support/opposition
and exclude multiparty UTXOs from participation. I tend to like the idea of
the possibility of full participation tho, even in a world that mainly uses
lightning.

> if the signaling is done onchain

I don't think any of this signaling needs to be done on-chain. Anyone who
wants to keep a count of the poll can simply collect together all these
poll messages and count up the weighted preferences. Yes, it would be
possible for one person to send out many conflicting poll messages, but
this could be handled without any commitment to the blockchain. A simple
thing to do would be to simply invalidate poll messages that conflict (ie
include them both in your list of counted messages, but ignore them in your
weighted-sums of poll preferences). As long as these polls are simply used
to inform action rather than to trigger action, it should be ok that
someone can produce biased incomplete counts, since anyone can show a
provably more complete set (a superset) of poll messages. Also, since this
would generally be a time-bound thing, where this poll information would
for example be used to gauge support for a soft fork, there isn't much of a
need to keep the poll messages on an immutable ledger. Old poll data is
inherently not very practically useful compared to recent poll data. So we
can kind of side step things like history attacks by simply ignoring polls
that aren't recent.
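The counting rule described above can be sketched like so; the message fields (`utxo`, `sats`, `support`) are invented for illustration:

```python
# Off-chain tally: count every poll message in the list, but zero out
# the weight of any UTXO that signed conflicting preferences.

from collections import defaultdict

def tally(messages):
    seen = {}           # utxo -> (support, sats) from its first message
    conflicted = set()  # utxos that signed conflicting preferences
    for m in messages:
        u = m["utxo"]
        if u in seen and seen[u][0] != m["support"]:
            conflicted.add(u)
        seen.setdefault(u, (m["support"], m["sats"]))
    totals = defaultdict(int)
    for u, (support, sats) in seen.items():
        if u not in conflicted:  # counted in the list, ignored in the sums
            totals[support] += sats
    return dict(totals)

msgs = [
    {"utxo": "a:0", "sats": 5000, "support": True},
    {"utxo": "b:1", "sats": 3000, "support": False},
    {"utxo": "a:0", "sats": 5000, "support": False},  # conflicts with the first
]
assert tally(msgs) == {False: 3000}  # "a:0" is ignored entirely
```

Anyone with a superset of messages can reproduce a provably more complete tally, which is what keeps biased incomplete counts from being persuasive.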

> Semantically, we would consider the "cold" key to be the "true" owner of
the fund, with "hot" key being delegates who are semi-trusted, but not as
trusted as the "cold" key.

I'm not sure I agree with those semantics as a hard rule. I don't consider
a "key" to be an owner of anything. A person owns a key, which gives them
access to funds. A key is a tool, and the owner of a key or wallet vault
can define whatever semantics they want. If they want to designate a hot
key as their poll-signing key, that's their prerogative. If they want to
require a cold-key as their message-signing key or even require multisig
signing, that's up to them as well. You could even mirror wallet-vault
constructs by overriding a poll message signed with fewer key using one
signed with more keys. The trade offs you bring up are reasonable
considerations, and I think which trade offs to choose may vary by the
individual in question and their individual situation. However, I think the
time-bound and non-binding nature of a poll makes the risks here pretty
small for most situations you would want to use this in (eg in a soft-fork
poll). It should be reasonable to consider any signed poll message valid,
regardless of possibilities of theft or key renting shinanigans. Nacho keys
nacho coins would of course be important in this scenario.

>  if I need to be able to somehow indicate that a long-term-cold-storage
UTXO has a signaling pubkey, I imagine this mechanism of indicating might
itself require a softfork, so you have a chicken-and-egg problem...

If such a thing did need a soft fork, the chicken and egg question would be
easy to answer: the soft fork comes first. We've done soft forks before
having this mechanism, and if necessary we could do another one to enable
it.

However, I think taproot can enable this mechanism without a soft fork. It
should be possible to include a taproot leaf that has the data necessary to
validate a signaling signature. The tapleaf would contain an invalid script
that has an alternative interpretation, where your poll message can include
the merkle path to tapleaf (the invalid-script), and the data at that leaf
would be a public key you can then verify the signaling signature against.
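A sketch of such a verification, using BIP 341's tagged hashes; the leaf scripts are hypothetical, and the tweak of the internal key to the output key is omitted for brevity:

```python
# Verify that a leaf (here, one embedding a signaling pubkey) is
# committed in a taproot Merkle tree, per BIP 341 tagged hashing.

import hashlib

def tagged_hash(tag, msg):
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def tapleaf_hash(script, leaf_version=0xC0):
    # BIP 341 uses a compact-size length prefix; a single length byte
    # is assumed here, which only holds for scripts under 253 bytes.
    return tagged_hash("TapLeaf", bytes([leaf_version, len(script)]) + script)

def merkle_root_from_path(leaf_hash, path):
    """Fold a leaf hash up the tree; siblings sort lexicographically."""
    node = leaf_hash
    for sibling in path:
        lo, hi = sorted((node, sibling))
        node = tagged_hash("TapBranch", lo + hi)
    return node

# Hypothetical leaf embedding a 32-byte signaling pubkey behind OP_RETURN.
signaling_leaf = tapleaf_hash(b"\x6a\x20" + b"\x02" * 32)
spend_leaf = tapleaf_hash(b"\x51")  # some ordinary spending leaf

# Both leaves prove membership in the same 2-leaf tree.
root1 = merkle_root_from_path(signaling_leaf, [spend_leaf])
root2 = merkle_root_from_path(spend_leaf, [signaling_leaf])
assert root1 == root2
```

A poll message would then carry the leaf data plus the sibling path, and a verifier recomputes the root and checks it against the on-chain taproot output (after the key-tweak step this sketch leaves out).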

@vjudeu

> It should not be expressed in percents, but in amounts

Agreed. You make a good case for that.

> it could be just some kind of transaction, where you have utxo_id just as
a transaction input, the amount of coins as some output, and then add your
message as "OP_RETURN <message>" in your input; in this way your
signature would be useless in a different context than voting.

I don't quite understand this part. I don't understand how this would make
your signature useless in a different context. Could you elaborate?

> it does not really matter if you store that commitments on-chain to
preserve signalling results in consensus rules or if there would be some
separate chain for storing commitments and nothing else

I don't think any kind of chain is necessary to store this data. I'm
primarily suggesting this as a method