Re: [bitcoin-dev] bitcoin scripting and lisp

2022-03-10 Thread Anthony Towns via bitcoin-dev
On Tue, Mar 08, 2022 at 03:06:43AM +, ZmnSCPxj via bitcoin-dev wrote:
> > > They're radically different approaches and
> > > it's hard to see how they mix. Everything in lisp is completely sandboxed,
> > > and that functionality is important to a lot of things, and it's really
> > > normal to be given a reveal of a scriptpubkey and be able to rely on your
> > > parsing of it.
> > The above prevents combining puzzles/solutions from multiple coin spends,
> > but I don't think that's very attractive in bitcoin's context, the way
> > it is for chia. I don't think it loses much else?
> But cross-input signature aggregation is a nice-to-have we want for Bitcoin, 
> and, to me, cross-input sigagg is not much different from cross-input 
> puzzle/solution compression.

Signature aggregation has a lot more maths and crypto involved than
reversible compression of puzzles/solutions. I was thinking more of
cross-transaction relationships than cross-input ones, though.

> > I /think/ the compression hook would be to allow you to have the puzzles
> > be (re)generated via another lisp program if that was more efficient
> > than just listing them out. But I assume it would be turtles, err,
> > lisp all the way down, no special C functions like with jets.
> Eh, you could use Common LISP or a recent-enough RnRS Scheme to write a 
> cryptocurrency node software, so "special C function" seems to overprivilege 
> C...

Jets are "special" in so far as they are costed differently at the
consensus level than the equivalent pure/jetless simplicity code that
they replace.  Whether they're written in C or something else isn't the
important part.
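A toy sketch of that distinction (Python; the opcode names and cost numbers
are entirely hypothetical, purely to illustrate consensus-level discounting):

```python
# Hypothetical per-primitive costs in some consensus costing scheme.
PRIMITIVE_COST = {"add": 1, "mul": 5, "sha256_round": 10}

def cost_expanded(ops):
    # Cost of running the jetless, fully expanded code.
    return sum(PRIMITIVE_COST[op] for op in ops)

# A "sha256" jet replacing 64 compression rounds might be assigned a much
# lower consensus cost, because validators run optimized native code for it.
expanded_cost = cost_expanded(["sha256_round"] * 64)  # 640 units
JET_COST = {"sha256": 50}
assert JET_COST["sha256"] < expanded_cost
```

The point is only that the discount, not the implementation language, is
what makes a jet consensus-relevant.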

By comparison, generating lisp code with lisp code in chia doesn't get
special treatment.

(You *could* also use jets in a way that doesn't impact consensus just
to make your node software more efficient in the normal case -- perhaps
via a JIT compiler that sees common expressions in the blockchain and
optimises them eg)

On Wed, Mar 09, 2022 at 02:30:34PM +, ZmnSCPxj via bitcoin-dev wrote:
> Do note that PTLCs remain more space-efficient though, so forget about HTLCs 
> and just use PTLCs.

Note that PTLCs aren't really Chia-friendly, both because chia doesn't
have secp256k1 operations in the first place, and because you can't
do a scriptless-script because the information you need to extract
is lost when signatures are non-interactively aggregated via BLS --
so that adds an expensive extra ECC operation rather than reusing an
op you're already paying for (scriptless script PTLCs) or just adding
a cheap hash operation (HTLCs).
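To illustrate the cost asymmetry, here is a minimal sketch of the hash-lock
check an HTLC needs on-chain (Python; illustrative only, not any node's
actual API). A PTLC would instead need an elliptic-curve operation at this
step:

```python
import hashlib

def htlc_check(preimage: bytes, payment_hash: bytes) -> bool:
    # An HTLC only needs one cheap hash evaluation to verify the reveal;
    # a PTLC replaces this with a comparatively expensive EC point check.
    return hashlib.sha256(preimage).digest() == payment_hash

secret = b"\x01" * 32
assert htlc_check(secret, hashlib.sha256(secret).digest())
```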

(Pretty sure Chia could do (= PTLC (pubkey_for_exp PREIMAGE)) for
preimage reveal of BLS PTLCs, but that wouldn't be compatible with
bitcoin secp256k1 PTLCs. You could sha256 the PTLC to save a few bytes,
but I think, given how much a sha256 opcode costs in Chia, that would
actually be more expensive?)

None of that applies to a bitcoin implementation that doesn't switch to
BLS signatures though.

> > But if they're fully baked into the scriptpubkey then they're opted into by 
> > the recipient and there aren't any weird surprises.
> This is really what I kinda object to.
> Yes, "buyer beware", but consider that as the covenant complexity increases, 
> the probability of bugs, intentional or not, sneaking in, increases as well.
> And a bug is really "a weird surprise" --- xref TheDAO incident.

Which is better: a bug in the complicated script code specified for
implementing eltoo in a BOLT; or a bug in the BIP/implementation of a
new sighash feature designed to make it easy to implement eltoo, that's
been soft-forked into consensus?

Seems to me, that it's always better to have the bug be at the wallet
level, since that can be fixed by upgrading individual wallet software.

> This makes me kinda wary of using such covenant features at all, and if stuff 
> like `SIGHASH_ANYPREVOUT` or `OP_CHECKTEMPLATEVERIFY` are not added but must 
> be reimplemented via a covenant feature, I would be saddened, as I now have 
> to contend with the complexity of covenant features and carefully check that 
> `SIGHASH_ANYPREVOUT`/`OP_CHECKTEMPLATEVERIFY` were implemented correctly.
> True I also still have to check the C++ source code if they are implemented 
> directly as opcodes, but I can read C++ better than frikkin Bitcoin SCRIPT.

If OP_CHECKTEMPLATEVERIFY (etc) is implemented as a consensus update, you
probably want to review the C++ code even if you're not going to use it,
just to make sure consensus doesn't end up broken as a result. Whereas if
it's only used by other people's wallets, you might be able to ignore it
entirely (at least until it becomes so common that any bugs might allow
a significant fraction of BTC to be stolen/lost and indirectly cause a
systemic risk).

> Not to mention that I now have to review both the (more complicated due to 
> more general) covenant feature implementation, *and* the implementation of 
> `SIGHASH_ANYPREVOUT`/`OP_CHECKT

Re: [bitcoin-dev] Speedy Trial

2022-03-10 Thread Luke Dashjr via bitcoin-dev
On Friday 11 March 2022 00:12:19 Russell O'Connor via bitcoin-dev wrote:
> The "no-miner-veto" concerns are, to an extent, addressed by the short
> timeline of Speedy Trial.  No more waiting 2 years on the miners dragging
> their feet.

It's still a miner veto. The only way this works is if the full deployment 
(with UASF fallback) is released in parallel.

> If you are so concerned about listening to legitimate criticism, maybe you
> can design a new deployment mechanism that addresses the concerns of the
> "devs-do-not-decide" faction and the "no-divergent-consensus-rules"
> faction.

BIP8 already does that.

> A major contender to the Speedy Trial design at the time was to mandate
> eventual forced signalling, championed by luke-jr.  It turns out that, at
> the time of that proposal, a large amount of hash power simply did not have
> the firmware required to support signalling.  That activation proposal
> never got broad consensus,

BIP 8 did in fact have broad consensus before some devs decided to ignore the 
community and do their own thing. Why are you trying to rewrite history?

> and rightly so, because in retrospect we see 
> that the design might have risked knocking a significant fraction of mining
> power offline if it had been deployed.  Imagine if the firmware couldn't be
> quickly updated or imagine if the problem had been hardware related.

They had 18 months to fix their broken firmware. That's plenty of time.

Luke
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Speedy Trial

2022-03-10 Thread Russell O'Connor via bitcoin-dev
On Thu., Mar. 10, 2022, 08:04 Jorge Timón via bitcoin-dev, <
bitcoin-dev@lists.linuxfoundation.org> wrote:

>
>
> You're right, we shouldn't get personal. We shouldn't ignore feedback from
> me, mark friedenbach or luke just because of who it comes from.
>

For goodness sake Jorge, enough with the persecution complex.

As the person who initially proposed the Speedy Trial deployment design, I
can say it was designed to take into account the concerns raised by luke-jr
and the "no-miner-veto" faction.  I also listened to the
"devs-do-not-decide" faction and the "no-divergent-consensus-rules" faction
and their concerns.

The "no-miner-veto" concerns are, to an extent, addressed by the short
timeline of Speedy Trial.  No more waiting 2 years on the miners dragging
their feet.  If ST fails to activate then we are back where we started with
at most a few weeks lost.  And those weeks aren't really lost if they would
have been wasted away anyway trying to find broad consensus on another
deployment mechanism.

I get that you don't like the design of Speedy Trial.  You may even object
that it fails to really address your concerns by leaving open how to follow
up a failed Speedy Trial deployment.  But regardless of how you feel, I
believe I did meaningfully address those miner-veto concerns, and other
people agree with me.

If you are so concerned about listening to legitimate criticism, maybe you
can design a new deployment mechanism that addresses the concerns of the
"devs-do-not-decide" faction and the "no-divergent-consensus-rules"
faction.  Or do you feel that their concerns are illegitimate?  Maybe, by
sheer coincidence, all people you disagree with have illegitimate concerns
while only your concerns are legitimate.

A major contender to the Speedy Trial design at the time was to mandate
eventual forced signalling, championed by luke-jr.  It turns out that, at
the time of that proposal, a large amount of hash power simply did not have
the firmware required to support signalling.  That activation proposal
never got broad consensus, and rightly so, because in retrospect we see
that the design might have risked knocking a significant fraction of mining
power offline if it had been deployed.  Imagine if the firmware couldn't be
quickly updated or imagine if the problem had been hardware related.


Re: [bitcoin-dev] CTV vaults in the wild

2022-03-10 Thread Antoine Riard via bitcoin-dev
Hi James,

> I don't really see the vaults case as any different from other
> sufficiently involved uses of bitcoin script - I don't remember anyone
> raising these concerns for lightning scripts or DLCs or tapscript use,
> any of which could be catastrophic if wallet implementations are not
> tested properly.

I think on the lightning side there were enough concerns w.r.t. bugs
affecting the toolchains in their infancy phase to motivate developers to
bound the max channel value to 2^24 satoshis for a while [0].

[0] https://github.com/lightning/bolts/pull/590
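For scale, that cap works out as follows (Python; a simple arithmetic
check of the 2^24-satoshi bound referenced above):

```python
# The early cap on lightning channel capacity mentioned above.
MAX_FUNDING_SATOSHIS = 2 ** 24             # 16,777,216 sats
print(MAX_FUNDING_SATOSHIS / 100_000_000)  # about 0.168 BTC per channel
```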

> By comparison, decreasing amount per vault step and one CSV use
> seems pretty simple. It's certainly easy to test (as the repo shows),
> and really the only parameter the user has is how many blocks to delay
> to the `tohot_tx` and perhaps fee-rate. Not too hard to test
> comprehensively as far as I can tell.

As of today you won't be able to test against Bitcoin Core that a CSV'ed
transaction is valid for propagation across the network, because your
mempool is going to reject it as non-final [1].

[1] https://github.com/bitcoin/bitcoin/pull/21413

Verifying that your whole set of off-chain covenanted transactions
propagates well at different feerate levels, and that there is no surface
offered to a malicious vault co-owner to pin them, can quickly turn into a
real challenge, I believe.

> I think the main concern I have with any hashchain-based vault design
> is the immutability of the flow paths once the funds are locked to the
> root vault UTXO.

> Isn't this kind of inherent to the idea of covenants? You're
> precommitting to a spend path. You can put in as many "escape-hatch"
> conditions as you want (e.g. Jeremy makes the good point I should
> include an immediate-to-cold step that is sibling to the unvaulting),
> but fundamentally if you're doing covenants, you're precommitting to a
> flow of funds. Otherwise what's the point?

Yeah, I agree that's the idea of covenants: to commit to a flow of funds.
However, I think leveraging hashchain covenants in vault design comes at
the price of making transaction-generation errors or key-endpoint
compromises nearly irrevocable.

I would say you can achieve the same end goal of precommitting to a flow of
funds with "pre-signed" transactions (and actually that's what we do for
lightning) while still keeping the emergency upgrade option open. Of
course, you re-introduce more assumptions about the devices where the
upgrade keys are stored.

I believe both designs are viable; it's more a matter of explaining the
security and reliability trade-offs to vault users. They might even be
complementary, answering different classes of self-custody needs. I'm just
worried about whether, as protocol devs, we have a good enough
understanding of those trade-offs to convey them well to vault users and
have them make a well-informed decision.

> Who's saying to trust hardware? Your cold key in the vault structure
> could have been generated by performing SHA rounds with the
> pebbles in your neighbor's zen garden.
>
> Keeping an actively used multi-sig setup secure certainly isn't free or
> easy. Multi-sig ceremonies (which of course can be used in this scheme)
> can be cumbersome to coordinate.
>
> If there's a known scheme that doesn't require covenants, but has
> similar usage and security characteristics, I'd love
> to know it! But being able to lock coins up for an arbitrary amount of
> time and then have advance notice of an attempted spend only seems
> possible with some kind of covenant technique.

Well, if by covenants you include pre-signed-transaction vault designs,
then no, sadly I don't know of schemes offering the same usage and
security characteristics...

> That said, I think this security advantage is only relevant in the
> context of recursive design, where the partial unvault sends back the
> remaining funds to vault UTXO (not the design proposed here).

> I'm not really sure why this would be. Yeah, it would be cool to be able
> to partially unvault arbitrary amounts or something, but that seems like
> another order of complexity. Personally, I'd be happy to "tranche up"
> funds I'd like to store into a collection of single-hop vaults vs.
> the techniques available to us today.

Hmmm, if you would like to be able to partially unvault arbitrary amounts
while still precommitting to the flow of funds, you might need a sighash
flag extension like SIGHASH_ANYAMOUNT? (My 2 sats, I don't have a design.)

Yes, "tranching up" funds where the remainder is sent back to a vault UTXO
sounds to me like it belongs to the recursive class of designs, and yeah I
agree that might be one of the most interesting features of vaults.

> Pretty straightforward to send such a process (whether it's a program or
> a collection of humans) an authenticated signal that says "hey, expect a
> withdrawal." This kind of alert allows for cross-referencing the
> activity and seems a lot better than nothing!

Yep, a nice improvement. And now you enter into a new wormho

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-10 Thread Antoine Riard via bitcoin-dev
Hi Zeeman,

> Have not looked at the actual vault design, but I observe that Taproot
allows for a master key (which can be an n-of-n, or a k-of-n with setup
(either expensive or trusted, but I repeat myself)) to back out of any
contract.
>
> This master key could be an "even colder" key that you bury in the desert
to be guarded over by generations of Fremen riding giant sandworms until
the Bitcoin Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto,
arrives.

Yes, I agree you can always bless your hashchain-based off-chain contract
with an upgrade path thanks to Taproot. Though now this master key becomes
the point of failure to compromise, compared to a pure hashchain design.

I think you can even go fancier than a desert to hide a master key:
"vault" geostationary satellites [0]!

[0] https://github.com/oleganza/bitcoin-papers/blob/master/SatelliteVault.md

> Thought: It would be nice if Alice could use Lightning watchtowers as
well, that would help increase the anonymity set of both LN watchtower
users and vault users.

Well, I'm not sure the two really blend together from the watchtowers'
point of view.
A LN channel is likely to have a high frequency of updates (in both the
LN-penalty and Eltoo designs, I think), while a vault is likely to have a
low frequency of updates (e.g. a once-a-day spend).

I think that point is addressable by generating noise traffic from the
vault entity to adopt a classic LN channel pattern. However, as a vault
"high-stake" user, you might not be eager to leak your watchtower IP
address or even Tor onion service to "low-stake" LN channel swarms of
users. So it might end up on different tower deployments because off-chain
contracts' level of safety requirements are not the same, I don't know..

> With Taproot trees the versions of the cold transaction are also stored
off-chain, and each tower gets its own transaction revealing only one of
the tapleaf branches.
> It does have the disadvantage that you have O(log N) x 32 Merkle tree
path references, whereas a presigned Taproot transaction just needs a
single 64-byte signature for possibly millions of towers.

I agree here, though note vault users might be willing to pay the witness
fee premium just to get the tower-accountability feature.
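The size trade-off under discussion can be sketched numerically (Python;
rough byte counts that ignore control-block and script details):

```python
import math

def merkle_path_bytes(n_towers: int, hash_len: int = 32) -> int:
    # O(log N) 32-byte hashes to reveal one tapleaf out of N.
    return hash_len * math.ceil(math.log2(n_towers))

SIG_BYTES = 64  # one presigned Schnorr signature serves every tower

for n in (2, 1_000, 1_000_000):
    print(n, merkle_path_bytes(n), SIG_BYTES)
# Even at a million towers, the per-tower path is ~640 bytes vs a flat 64.
```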

Antoine

Le lun. 7 mars 2022 à 19:57, ZmnSCPxj  a écrit :

> Good morning Antoine,
>
> > Hi James,
> >
> > Interesting to see a sketch of a CTV-based vault design !
> >
> > I think the main concern I have with any hashchain-based vault design is
> the immutability of the flow paths once the funds are locked to the root
> vault UTXO. By immutability, I mean there is no way to modify the
> unvault_tx/tocold_tx transactions and therefore recover from transaction
> fields
> > corruption (e.g a unvault_tx output amount superior to the root vault
> UTXO amount) or key endpoints compromise (e.g the cold storage key being
> stolen).
> >
> > Especially corruption, in the early phase of vault toolchain deployment,
> I believe it's reasonable to expect bugs to slip in affecting the output
> amount or relative-timelock setting correctness (wrong user config,
> miscomputation from automated vault management, ...) and thus definitively
> freezing the funds. Given the amounts at stake for which vaults are
> designed, errors are likely to be far more costly than the ones we see in
> the deployment of payment channels.
> >
> > It might be more conservative to leverage a presigned transaction data
> design where every decision point is a multisig. I think this design gets
> you the benefit to correct or adapt if all the multisig participants agree
> on. It should also achieve the same than a key-deletion design, as long as
> all
> > the vault's stakeholders are participating in the multisig, they can
> assert that flow paths are matching their spending policy.
>
> Have not looked at the actual vault design, but I observe that Taproot
> allows for a master key (which can be an n-of-n, or a k-of-n with setup
> (either expensive or trusted, but I repeat myself)) to back out of any
> contract.
>
> This master key could be an "even colder" key that you bury in the desert
> to be guarded over by generations of Fremen riding giant sandworms until
> the Bitcoin Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto,
> arrives.
>
> > Of course, relying on presigned transactions comes with higher
> assumptions on the hardware hosting the flow keys. Though as
> hashchain-based vault design imply "secure" key endpoints (e.g
> ), as a vault user you're still encumbered with the issues of
> key management, it doesn't relieve you to find trusted hardware. If you
> want to avoid multiplying devices to trust, I believe flow keys can be
> stored on the same keys guarding the UTXOs, before sending to vault custody.
> >
> > I think the remaining presence of trusted hardware in the vault design
> might lead one to ask what's the security advantage of vaults compared to
> classic multisig setup. IMO, it's introducing the idea of privileges in the
> coins custo

Re: [bitcoin-dev] Meeting Summary & Logs for CTV Meeting #5

2022-03-10 Thread Jorge Timón via bitcoin-dev
Thank you for explaining. I agree with luke then, I'm against speedy trial.
I explained why already, I think.
In summary: Speedy Trial kind of means it is miners, and not users, who
decide the rules.
It gives users fewer opportunities to react and oppose a malevolent change
in case miners want to impose such a change on them.


Why specially jeremy?

I personally distrust him more from experience, but that's subjective, and
kind of offtopic. Sorry, I should try to distrust all the other devs as
much as I distrust him in particular.
"Don't trust, verify", right?


On Wed, Mar 9, 2022, 14:42 ZmnSCPxj  wrote:

> Good morning Jorge,
>
> > What is ST? If it may be a reason to oppose CTV, why not talk about it
> more explicitly so that others can understand the criticisms?
>
> ST is Speedy Trial.
> Basically, a short softfork attempt with `lockinontimeout=false` is first
> done.
> If this fails, then developers stop and think and decide whether to offer
> a UASF `lockinontimeout=true` version or not.
>
> Jeremy showed a state diagram of Speedy Trial on the IRC, which was
> complicated enough that I ***joked*** that it would be better to not
> implement `OP_CTV` and just use One OPCODE To Rule Them All, a.k.a.
> `OP_RING`.
>
> If you had actually read the IRC logs you would have understood it, I even
> explicitly asked "ST ?=" so that the IRC logs have it explicitly listed as
> "Speedy Trial".
>
>
> > It seems that criticism isn't really that welcomed and is just explained
> away.
>
> It seems that you are trying to grasp at any criticism and thus fell
> victim to a joke.
>
> > Perhaps it is just my subjective perception.
> > Sometimes it feels we're going from "don't trust, verify" to "just trust
> jeremy rubin", i hope this is really just my subjective perception. Because
> I think it would be really bad that we started to blindly trust people like
> that, and specially jeremy.
>
> Why "specially jeremy"?
> Any particular information you think is relevant?
>
> The IRC logs were linked, you know, you could have seen what was discussed.
>
> In particular, on the other thread you mention:
>
> > We should talk more about activation mechanisms and how users should be
> able to actively resist them more.
>
> Speedy Trial means that users with mining hashpower can block the initial
> Speedy Trial, and the failure to lock in ***should*** cause the developers
> to stop-and-listen.
> If the developers fail to stop-and-listen, then a counter-UASF can be
> written which *rejects* blocks signalling *for* the upgrade, which will
> chainsplit from a pro-UASF `lockinontimeout=true`, but clients using the
> initial Speedy Trial code will follow which one has better hashpower.
>
> If we assume that hashpower follows price, then users who are for /
> against a particular softfork will be able to resist the Speedy Trial, and,
> if developers release a UASF `lockinontimeout=true` later, will have the
> choice to reject running the UASF and even to run a counter-UASF.
>
>
> Regards,
> ZmnSCPxj
>


Re: [bitcoin-dev] CTV Meeting #5 Agenda (Tuesday, March 7th, 12:00 PT)

2022-03-10 Thread Jorge Timón via bitcoin-dev
On Wed, Mar 9, 2022, 14:14 Michael Folkson 
wrote:

> Hi Jorge
>
> > Since this has meetings like taproot, it seems it's going to end up
> being added in bitcoin core no matter what.
>
> Anyone can set up an IRC channel, anyone can organize an IRC meeting, anyone
> can announce meetings on the mailing list. Just because an individual is
> enthusiastic for a soft fork proposal does not imply it has community
> consensus or that it is likely to be merged into Core in the short term or
> long term. It is true that other soft fork proposal authors/contributors
> are not taking the approach Jeremy is taking and are instead working
> quietly in the background. I prefer the latter approach so soon after
> Taproot activation but I look forward to hearing about the progress made on
> other proposals in due course.
>

I hope you're right and not every proposal that gets to have a meeting gets
deployed.

> Should we start the conversation on how to resist it when that happens?
> We should talk more about activation mechanisms and how users should be
> able to actively resist them more.
>
> I can only speak for myself but if activation was being pursued for a soft
> fork that didn't have community consensus I would seek to join you in an
> effort to resist that activation. Taproot (pre-activation discussion) set a
> strong precedent in terms of community outreach and patiently building
> community consensus over many years. If that precedent was thrown out I
> think we are in danger of creating the chaos that most of us would seek to
> avoid. You are free to start whatever conversation you want but personally
> until Jeremy or whoever else embarks on an activation attempt I'd rather
> forget about activation discussions for a while.
>

I strongly disagree taproot set a strong precedent in terms of listening to
criticism and looking for consensus. Lots of legitimate criticisms seemed
to be simply ignored.
I really think it set a bad precedent, even if taproot as deployed is
good, which I'm not sure about.

> What is ST? If it may be a reason to oppose CTV, why not talk about it
> more explicitly so that others can understand the criticisms?
>
> ST is short for Speedy Trial, the activation mechanism used for Taproot. I
> have implored people on many occasions now to not mix discussion of a soft
> fork proposal with discussion of an activation mechanism. Those discussions
> can happen in parallel but they are entirely independent topics of
> discussion. Mixing them is misleading at best and manipulative at worst.
>

Thanks. Yes, those topics were ignored before "let's focus on the proposal
first" and afterwards "let's just deploy this and we can discuss this in
more detail for the next proposal".
And I think lots of valid criticism was ignored and disregarded.


> It seems that criticism isn't really that welcomed and is just explained
> away.
> Perhaps it is just my subjective perception. Sometimes it feels we're
> going from "don't trust, verify" to "just trust jeremy rubin", i hope this
> is really just my subjective perception. Because I think it would be really
> bad that we started to blindly trust people like that, and specially jeremy.
>
> I think we should generally avoid getting personal on this mailing list.
> However, although I agree that Jeremy has done some things in the past that
> have been over-exuberant to put it mildly, as long as he listens to
> community feedback and doesn't try to force through a contentious soft fork
> earlier than the community is comfortable with I think his work can add
> material value to the future soft fork discussion. I entirely agree that we
> can't get into a situation where any one individual can push through a soft
> fork without getting community consensus and deep technical review from as
> many qualified people as possible. That can take a long time (the demands
> on long term contributors' time are vast) and hence anyone without serious
> levels of patience should probably exclusively work on sidechains, altcoins
> etc (or non-consensus changes in Bitcoin) rather than Bitcoin consensus
> changes.
>

You're right, we shouldn't get personal. We shouldn't ignore feedback from
me, mark friedenbach or luke just because of who it comes from.
I don't think jeremy listens to feedback; judging from the taproot
activation discussions, I felt very much ignored by him and others. Luke
was usually ignored. Mark's criticisms of taproot itself, not the
activation, seemed to be ignored as well. I mean, if somebody refuted his
concerns somewhere, I missed it.
But even if I believe Jeremy has malicious intentions and doesn't listen
to the community, you're still right, we shouldn't get personal. I should
assume the same malevolent intentions from everyone else that I assume
Jeremy has.

Thanks
> Michael
>
> --
> Michael Folkson
> Email: michaelfolkson at protonmail.com
> Keybase: michaelfolkson
> PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3
>
> --- Original Message ---
> On Wedn

Re: [bitcoin-dev] Wasabi Wallet 2.0 Testnet Release

2022-03-10 Thread nopara73 via bitcoin-dev
>  There is no coin control in Wasabi Wallet 2.

This is correct, but in and of itself it can be misleading for those who
know that privacy in Bitcoin is near impossible without coin control,
because the conclusion would then be that Wasabi 2.0 ruined privacy for no
reason, which is obviously not the case; in fact it improves privacy in
many ways.

The idea is that you don't need coin control when you can make your
transaction with coinjoined coins. These coins are indistinguishable, so
you don't really have a use for coin control in that case. I think this is
non-controversial, but what about the case when you cannot make the tx from
coinjoined coins?

In that case there still is a mandatory privacy control, which is an
improved version of coin control. The insight here is that, in coin control
settings, users are differentiating between coins based on their labels.
Since Wasabi creates label clusters, it is ok to select the clusters the
user wants to make the transaction from instead of individual coins. I know
you liked the never released cluster selection page before it got further
improved to be a privacy control page, but note the privacy control still
uses the same insight, it just further removed unnecessary friction. That
being said, coins can also be seen with this super secret developer key
combination: CTRL + D + C

> User does not select coins because they are never shared with the user in
the first place.

As explained above it is selecting coins indirectly rather than directly.
It is selecting clusters of coins that are assumed to belong to the same
wallet from an outside observer's point of view instead of individually
selecting coins one by one.
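A sketch of that indirect selection (Python; the coin records and labels
are hypothetical, not Wasabi's actual data model):

```python
from collections import defaultdict

# Hypothetical coin records: (txid, amount_sats, label_cluster)
coins = [
    ("a1", 50_000, "exchange"),
    ("b2", 30_000, "exchange"),
    ("c3", 80_000, "donation"),
]

def clusters(coins):
    # Group coins an outside observer would already link together;
    # the user then picks a cluster, not individual coins.
    by_label = defaultdict(list)
    for txid, amount, label in coins:
        by_label[label].append((txid, amount))
    return dict(by_label)

# Spending entirely from one cluster avoids linking it to the others.
print(clusters(coins)["exchange"])  # [('a1', 50000), ('b2', 30000)]
```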

>  There are no 'private' coins. Every coin is public in Bitcoin.

Not sure I'd like to engage in bikeshedding on terminology, but in my
opinion this terminology is not only true, but also good and useful:
Ownership of equalized coinjoin UTXOs is only known by the owner and not by
external observers. The owner controls to whom the ownership of these
UTXOs is revealed. Privacy is your ability to selectively reveal yourself to
the world, therefore the terminology of "private coins" naturally makes
sense and it's a useful differentiator from non-coinjoined coins.

>  Since, the wallet assumes some coins as 'private' based on certain
things it can be misleading for the user. Privacy depends on the things
users want to share with others.

The wallet does not assume. The user assumes when selecting the anonymity
levels. The wallet works with the user's assumption of its threat model. If
a misleading claim can be made here then it's that the user misleads the
wallet (and her/himself) rather than the other way around.

>  Privacy involved in using a change or not using it is debatable. Not
using a change address makes it easier to understand who might be the
recipient in a transaction whereas using a change address same as other
outputs would be difficult to analyze for possible recipients.

Although I agree it's debatable, it's for different reasons: I'd rather
take issue with its usefulness instead. As for the assumption that it's
easier to understand who might be the recipient, that's incorrect, as the
transaction can easily be considered a self-spend. In change-generating
transactions, by comparison, the change and the recipient can most of the
time be established.

>  Wasabi wallet does not have different types of addresses to use for a
change however [Bitcoin Core][2] recently made some related improvement
which would improve privacy.

Yup. Unfortunately this is a hack to make the wallet feel like a light
wallet as it greatly reduces the size of the client side filters we have.
Although, as the blockchain grows further optimizations are needed. So it's
not very helpful if Bitcoin Core gives us 10 GB of filters so we can use
all the types of addresses. We had a pull request to Core about creating
custom filters, but it was NACK-ed. In order to do this correctly and get
merged into Core we'd have to have a more comprehensive modification than
our initial PR and that we have no resources to allocate to yet.

>  As far as issues are concerned, there are several things not fixed and
shared in different GitHub issues or discussions. These include privacy,
security and other things.

I greatly disagree with this assessment; in fact, quite the opposite. Take
for example the tremendous activity your pull request about an empty catch
block received: https://github.com/zkSNACKs/WalletWasabi/pull/6791
No sane project would let its best developers spend more than 5 minutes on
this issue, yet 7 developers discussed whether leaving a single empty catch
block in the code could be a potential security risk in the future. Our
resolution was actually contributing to NBitcoin to make sure we get a
boolean signal for an incorrect password rather than an exception.

>  As WW2 is not developed for power users (mentioned by developers working
on Wasabi), 

Re: [bitcoin-dev] Jets (Was: `OP_FOLD`: A Looping Construct For Bitcoin SCRIPT)

2022-03-10 Thread Billy Tetrud via bitcoin-dev
Hi ZmnSCPxj,

>  Just ask a bunch of fullnodes to add this 1Mb of extra ignored data in
this tiny 1-input-1-output transaction so I pay only a small fee

I'm not suggesting that you wouldn't have to pay a fee for it. You'd pay a
fee for it as normal, so there's no DoS vector. Doesn't adding extra
witness data do what would be needed here? E.g. simply adding extra data
onto the witness stack that will remain unconsumed after successful
execution of the script?

> how do new jets get introduced?

In scenario A, new jets get introduced by being added to bitcoin software
as basically relay rules.

> If a new jet requires coordinated deployment over the network, then you
might as well just softfork and be done with it.

It would not need a coordinated deployment. However, the more nodes that
supported that jet, the more efficient using it would be for the network.

> If a new jet can just be entered into some configuration file, how do you
coordinate those between multiple users so that there *is* some benefit for
relay?

When a new version of bitcoin comes out, people generally upgrade to it
eventually. No coordination is needed. 100% of the network need not support
a jet. Just some critical mass to get some benefit.

> Having a static lookup table is better since you can pattern-match on
strings of specific, static length

Sorry, better than what exactly?

> How does the unupgraded-to-upgraded boundary work?

This is what I'm thinking. Imagine a simple script:

OP_DUP
OP_ADD

with witness

1

This would execute as 1+1 = 2 -> success. Let's say the script is
jettified so we can instead write it as:

OP_JET
1b5f03cf # adler32 hash of the replaced script

with a witness:

OP_JET   # Some number that represents OP_JET
1b5f03cf
0
1
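The adler32 tag used above to identify the replaced script can be computed with Python's standard `zlib` module. This is a hedged sketch of the mechanics only: the serialization of the script into bytes (here, the actual Bitcoin opcode byte values 0x76 for OP_DUP and 0x93 for OP_ADD) is an assumption, so the resulting tag need not match the illustrative `1b5f03cf` value from the example.

```python
import zlib

# Assumed serialization of the replaced script "OP_DUP OP_ADD";
# 0x76 and 0x93 are the real Bitcoin opcode bytes for these ops.
script_bytes = bytes([0x76, 0x93])

# adler32 yields a 32-bit checksum, rendered as 8 hex digits like
# the tag in the example above.
tag = f"{zlib.adler32(script_bytes):08x}"
print(tag)
```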

A jet-aware node transmitting to another jet-aware node can transmit it as
is (though it would need to swap the script out to validate). For a
jet-aware node to transmit this to a non-jet-aware node, it would need to
swap the OP_JET call for the script it represents. So the transaction sent
to the non-jet-aware node would have:

Script:

OP_DUP
OP_ADD

Witness:

OP_JET
1b5f03cf
0
1

And you can see that this would execute and succeed by adding 1+1 and
ending up with the stack:

2
0
1b5f03cf
OP_JET

Which would succeed because of the non-zero top of stack.
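The execution walked through above can be sketched with a toy stack machine (hypothetical, not real Bitcoin script semantics; witness items are pushed so the last listed item ends up on top):

```python
# Toy interpreter covering just the two opcodes in the example.
def run(script, witness):
    stack = list(witness)  # last element is the top of the stack
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

# Witness from the example, bottom-first so 1 is on top.
final = run(["OP_DUP", "OP_ADD"], ["OP_JET", "1b5f03cf", 0, 1])
print(final)        # -> ['OP_JET', '1b5f03cf', 0, 2]
print(bool(final[-1]))  # non-zero top of stack => success
```

The three leftover items below the result are exactly the "extra ignored data" that a jet-aware node can later reinterpret.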

When the non-jet-aware node sends this to a jet-aware node, that node would
see the extra items on the stack after script execution and interpret them
as an OP_JET call specifying that OP_JET should replace the witness items
starting at index 0 with `1b5f03cf OP_JET`. It does this and then sends
that along to the next hop.
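That re-compression step might look like the following sketch, assuming a local lookup table from jet identifiers to expanded scripts (all names here, `JET_TABLE` and `recompress`, are hypothetical):

```python
# Hypothetical jet registry: jet id -> the script it expands to.
JET_TABLE = {"1b5f03cf": ["OP_DUP", "OP_ADD"]}

def recompress(script, witness):
    """If the leftover witness items encode an OP_JET call, swap the
    expanded script back to its compact jet form before relaying."""
    if len(witness) >= 3 and witness[0] == "OP_JET":
        jet_id = witness[1]
        if JET_TABLE.get(jet_id) == script:
            # Drop the marker items; emit the compact two-element script.
            return ["OP_JET", jet_id], witness[3:]
    return script, witness

script, witness = recompress(["OP_DUP", "OP_ADD"],
                             ["OP_JET", "1b5f03cf", 0, 1])
print(script, witness)  # -> ['OP_JET', '1b5f03cf'] [1]
```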

In order to support this without a soft fork, this extra otherwise
unnecessary data would be needed, but for jets that represent long scripts,
the extra witness data could be well worth it (for the network).

However, this extra data would be a disincentive to do transactions this
way, even when it's better for the network. So it might not be worth doing
it this way without a soft fork. But with a soft fork that upgrades nodes
to support an OP_JET opcode, the extra witness data can be removed
(replaced with out-of-band script-fragment transmission for nodes that
don't support a particular jet).

One interesting additional thing that could be done with this mechanism is
to add higher-order function ability to jets, which could allow nodes to
add OP_FOLD or similar functions as a jet without requiring additional soft
forks.  Hypothetically, you could imagine a jet script that uses an OP_LOOP
jet be written as follows:

5 # Loop 5 times
1 # Loop the next 1 operation
3c1g14ad
OP_JET
OP_ADD  # The 1 operation to loop

The above would sum up 5 numbers from the stack. And while this summation
jet can't be represented in bitcoin script on its own (since bitcoin script
can't manipulate opcode calls), the jet *call* can still be represented as:

OP_ADD
OP_ADD
OP_ADD
OP_ADD
OP_ADD

which means all of the above replacement functionality would work just as
well.
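The expansion performed for non-jet-aware peers in the OP_LOOP example can be sketched as a simple function (the signature and name `expand_loop_jet` are hypothetical, chosen to mirror the "loop the next N operations, M times" reading of the script above):

```python
def expand_loop_jet(count, span, body):
    """Expand a higher-order loop jet: repeat the next `span`
    opcodes of `body`, `count` times."""
    return body[:span] * count

# The example's "5 / 1 / OP_ADD" loop jet expands to five OP_ADDs,
# which is the plain-script form a non-jet-aware node would receive.
expanded = expand_loop_jet(5, 1, ["OP_ADD"])
print(expanded)  # -> ['OP_ADD', 'OP_ADD', 'OP_ADD', 'OP_ADD', 'OP_ADD']
```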

So my point here is that jets implemented in a way similar to this would
give a much wider range of "code as compression" possibilities than
implementing a single opcode like OP_FOLD.

> To make jets more useful, we should redesign the language so that
`OP_PUSH` is not in the opcode stream, but instead, we have a separate
table of constants that is attached / concatenated to the actual SCRIPT.

This can already be done, right? You just have to redesign the script to
consume and swap/rot around the data in the right way to separate them out
from the main script body.
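As a rough illustration of the constants-table idea, a script could be split into an opcode stream that holds only indices plus a separate table of pushed constants (this is a sketch under my own assumptions, not ZmnSCPxj's proposed encoding; `split_constants` and the tuple representation are invented for illustration):

```python
def split_constants(script):
    """Split a script into an opcode stream and a constants table.
    Integers are treated as pushed data; each is replaced in the
    opcode stream by an index into the table."""
    ops, consts = [], []
    for item in script:
        if isinstance(item, int):
            ops.append(("OP_PUSH", len(consts)))  # index, not the value
            consts.append(item)
        else:
            ops.append(item)
    return ops, consts

ops, consts = split_constants([5, 1, "OP_ADD"])
print(ops)     # -> [('OP_PUSH', 0), ('OP_PUSH', 1), 'OP_ADD']
print(consts)  # -> [5, 1]
```

With the constants factored out, the opcode stream becomes a fixed-shape string that jet pattern-matching can recognize regardless of the particular values pushed.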


On Mon, Mar 7, 2022 at 5:35 PM ZmnSCPxj  wrote:

> Good morning Billy,
>
> Changed subject since this is only tangentially related to `OP_FOLD`.
>
> > Let me organize my thoughts on this a little more clearly. There's a
> couple possibilities I can think of for a jet-like system:
> >
> > A. We could implement jets now without a consensus change, and
> without requiring all nodes to upg