Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Anthony Towns via bitcoin-dev
On Fri, Sep 10, 2021 at 12:12:24AM -0400, Antoine Riard wrote:
> "Talk is cheap. Show me the code" :p
>     case OP_MERKLESUB:

I'm not entirely clear on what your opcode there is trying to do. I
think it's taking

 <N> <P> MERKLESUB

and checking that output N has the same scripts as the current input
except with the current script removed, and with its internal pubkey as
the current input's internal pubkey plus P.

>         txTo->vout[out_pos].scriptPubKey.IsWitnessProgram(witnessversion,
> witnessprogram);
>         //! The committed to output must be a witness v1 program at least

That would mean anyone who could do a valid spend of the tx could
violate the covenant by spending to an unencumbered witness v2 output
and (by collaborating with a miner) steal the funds. I don't think
there's a reasonable way to have existing covenants be forward
compatible with future destination addresses (beyond something like CTV
that strictly hardcodes them).

> One could also imagine a list of output positions to force the taproot update
> on multiple outputs ("OP_MULTIMERKLESUB").

Having the output position parameter might be an interesting way to
merge/split a vault/pool, but it's not clear to me how much sense it
makes to optimise for that, rather than just doing that via the key
path. For pools, you want the key path to be common anyway (for privacy
and efficiency), so it shouldn't be a problem; but even for vaults,
you want the cold wallet accessible enough to be useful for the case
where theft is attempted, and maybe that's also accessible enough for
the occasional merge/split to keep your utxo count/sizes reasonable.

> For the merkle branches extension, I was thinking of introducing a separate
> OP_MERKLEADD, maybe to *add* a point to the internal pubkey group signer. If
> you're only interested in leaf pruning, using OP_MERKLESUB only should save 
> you
> one byte of empty vector ?

Saving a byte of witness data at the cost of specifying additional
opcodes seems like optimising the wrong thing to me.

> One solution I was thinking about was introducing a new tapscript version
> (`TAPROOT_INTERNAL_TAPSCRIPT`) signaling that VerifyTaprootCommitment must
> compute the TapTweak with a new TapTweak=(internal_pubkey || merkle_root ||
> parity_bit). A malicious participant wouldn't be able to interfere with the
> updated internal key as it would break its own spending taproot commitment
> verification ?

I don't think that works, because different scripts in the same merkle
tree can have different script versions, which would here indicate
different parities for the same internal pub key.

> > That's useless without some way of verifying that the new utxo retains
> > the bitcoin that was in the old utxo, so also include a new opcode
> > IN_OUT_AMOUNT that pushes two items onto the stack: the amount from this
> > input's utxo, and the amount in the corresponding output, and then expect
> > anyone using TLUV to use maths operators to verify that funds are being
> > appropriately retained in the updated scriptPubKey.
> Credit to you for the SIGHASH_GROUP design, here is the code, with
> SIGHASH_ANYPUBKEY/ANYAMOUNT extensions.
> 
> I think it's achieving the same effect as IN_OUT_AMOUNT, at least for CoinPool
> use-case.

The IN_OUT_AMOUNT opcode lets you do maths on the values, so you can
specify "hot wallets can withdraw up to X" rather than "hot wallets
must withdraw exactly X". I don't think there's a way of doing that with
SIGHASH_GROUP, even with a modifier like ANYPUBKEY?
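
As a rough sketch (stack order as above with the output amount on top,
ignoring that script maths is currently limited to 4-byte values), a
"hot wallet can withdraw up to X" leaf might look like:

   IN_OUT_AMOUNT                 # pushes <in_amount> then <out_amount>
   SUB                           # in_amount - out_amount = withdrawn
   <X> LESSTHANOREQUAL VERIFY    # fail if more than X was taken
   <hot wallet key> CHECKSIG

paired with a TLUV check that the remaining funds stay in an
appropriately updated scriptPubKey.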

> (I think I could come up with some use-case from lex mercatoria where if you play
> out a hardship provision you want to tweak all the other provisions by a CSV
> delay while conserving the rest of their policy)

If you want to tweak all the scripts, I think you should be using the
key path.

One way you could do something like that without changing the scripts
though, is have the timelock on most of the scripts be something like
"[3 months] CSV", and have a "delay" script that doesn't require a CSV,
does require a signature from someone able to authorise the delay,
and requires the output to have the same scriptPubKey and amount. Then
you can use that path to delay resolution by 3 months however often,
even if you can't coordinate a key path spend.
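
Something like this sketch could do it (assuming TLUV with a zero
tweak, empty hash and control=0 recreates the unchanged scriptPubKey,
and using IN_OUT_AMOUNT to pin the amount):

   <delay key> CHECKSIGVERIFY
   IN_OUT_AMOUNT EQUALVERIFY    # output amount must equal input amount
   0 0 0 TLUV                   # same internal key, same scripts -> same sPK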

> > And second, it doesn't provide a way for utxos to "interact", which is
> > something that is interesting for automated market makers [5], but perhaps
> > only interesting for chains aiming to support multiple asset types,
> > and not bitcoin directly. On the other hand, perhaps combining it with
> > CTV might be enough to solve that, particularly if the hash passed to
> > CTV is constructed via script/CAT/etc.
> That's where SIGHASH_GROUP might be more interesting as you could generate
> transaction "puzzles".
> IIUC, the problem is how to have a set of ratios between x/f(x).

The normal way to do it is to specify a formula, eg

   outBTC * outUSDT >= inBTC * inUSDT

that's a constant product market 
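
For example, if the pool holds inBTC = 1 and inUSDT = 50,000, then a
trade withdrawing 0.1 BTC must leave outUSDT >= 50,000/0.9 ~= 55,556,
ie deposit about 5,556 USDT; larger trades automatically move the
price against you.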

Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Filippo,

> Hi!
>
> From the proposal it is not clear why a miner must reference other miners' 
> shares in his shares.
> What I mean is that there is a huge incentive for a rogue miner to not
> reference any share from
> other miners so he won't share the reward with anyone, but he will be paid for
> the shares that he
> creates because good miners will reference his shares.
> The pool will probably become unprofitable for good miners.
>
> Another thing that I do not understand is how to resolve conflicts. For 
> example, using figure 1 at
> page 1, a node could receive these 2 valid states:
>
> 1. L -> a1 -> a2 -> a3 -> R
> 2. L -> a1* -> a2* -> R
>
> To resolve the above fork the only two methods that come to my mind are:
>
> 1. use the one that has more work
> 2. use the longest one
> Btw both methods present an issue IMHO.
>
> If the longest chain is used:
> When a block (L) is found, a miner (a) could easily create a lot of shares with
> low difficulty
> (L -> a1* -> a2* -> ... -> an*), then start to mine shares with his real 
> hashrate (L -> a1 -> a2)
> and publish them so they get referenced. If someone else finds a block he 
> gets the reward cause he
> has been referenced. If he finds the block he just attaches the found block
> to the longest chain
> (that references no one) and publishes it without sharing the reward
> (L -> a1* -> a2* -> ... -> an* -> R).
>
> If the one with more work is used:
> A miner that has published the shares (L -> a1 -> a2 -> a3), when he finds a
> block R that alone has more
> work than a1 + a2 + a3, just publishes (L -> R) and does not share the
> reward with anyone.


My understanding from the "Braid" in braidpool is that every share can 
reference more than one previous share.

In your proposed attack, a single hasher refers only to shares that the hasher 
itself makes.

However, a good hasher will refer not only to its own shares, but also to 
shares of the "bad" hasher.

And all honest hashers will build, not on a single chain, but on the share 
that refers to the most total work.

So consider these shares from a bad hasher:

 BAD1 <- BAD2 <- BAD3

A good hasher will refer to those, and also to its own shares:

 BAD1 <- BAD2 <- BAD3
   ^       ^       ^
   |       |       |
   |       |       +-----------------+
   |       +--------------+          |
   |                      |          |
   +---- GOOD1 <------- GOOD2 <--- GOOD3

`GOOD3` refers to 5 other shares, whereas `BAD3` refers to only 2 shares, so 
`GOOD3` will be considered weightier, thus removing this avenue of attack and 
resolving the issue.
Even if measured in terms of total work, `GOOD3` also contains the work that 
`BAD3` does, so it would still win.
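
For instance, if every share carried equal work w, `GOOD3` would
represent 6w of total referenced work (its own plus BAD1..BAD3 and
GOOD1..GOOD2), against only 3w for `BAD3`, so honest hashers extend
the GOOD tip.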

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Michael Folkson via bitcoin-dev
> Huh? Why would the goal be to match mainnet? The goal, as I understand it, is
> to allow software to use SigNet without modification *to make testing
> simpler* - keep the header format the same to let SPV clients function
> without (significant) modification, etc. The point of the whole thing is to
> make testing as easy as possible, why would we do otherwise.

I guess Kalle (and AJ) can answer this question better than me but my
understanding is that the motivation for Signet was that testnet
deviated erratically from mainnet behavior (e.g. long delays before
any blocks were mined followed by a multitude of blocks mined in a
short period of time) which meant it wasn't conducive to normal
testing of applications. Why would you want a mainnet-like chain? To
check if your application works on a mainnet-like chain without
risking any actual value before moving to mainnet. The same purpose as
testnet but more reliably resembling mainnet behavior. You are well
within your rights to demand more than that but my preference would be
to push some of those demands to custom signets rather than the
default Signet.

Testing out proposed soft forks in advance of them being considered
for activation would already be introducing a dimension of complexity
that is going to be hard to manage [0]. I'm generally of the view that
if you are going to introduce a complexity dimension, keep the other
dimensions as vanilla as possible. Otherwise you are battling
complexity in multiple different dimensions and it becomes hard or
impossible to maintain it and meet your initial objectives.

But if this feature of extremely regular re-orgs is an in-demand
feature for testers I think the question then becomes what the default
should be (I would suggest re-orgs every 8 hours rather than no re-orgs at
all) and then the alternative which you can switch to, re-orgs every
block or every 6 blocks or whatever.

> I believe my suggestion was not correctly understood. I'm not suggesting
> *users* sign blocks or otherwise do anything manually here, only that the
> existing block producers each generate a new key, and we then only sign
> reorgs with *those* keys. Users will be able to set a flag to indicate "I
> want to accept sigs from either sets of keys, and see reorgs" or "I only
> want sigs from the non-reorg keys, and will consider the reorg keys-signed
> blocks invalid"

Ah I did misunderstand, yes this makes more sense. Thanks for the correction.

[0] 
https://bitcoin.stackexchange.com/questions/98642/can-we-experiment-on-signet-with-multiple-proposed-soft-forks-whilst-maintaining

On Fri, Sep 10, 2021 at 7:24 PM Matt Corallo  wrote:
>
>
>
> On 9/10/21 06:05, Michael Folkson wrote:
> >> I see zero reason whatsoever to not simply reorg ~every block, or as often 
> >> as is practical. If users opt in to wanting to test with reorgs, they 
> >> should be able to test with reorgs, not wait a day to test with reorgs.
> >
> > One of the goals of the default Signet was to make the default Signet
> > resemble mainnet as much as possible. (You can do whatever you want on
> > a custom signet you set up yourself including manufacturing a re-org
> > every block if you wish.) Hence I'm a bit wary of making the behavior
> > on the default Signet deviate significantly from what you might
> > experience on mainnet. Given re-orgs don't occur that often on mainnet
> > I can see the argument for making them more regular (every 8 hours
> > seems reasonable to me) on the default Signet but every block seems
> > excessive. It makes the default Signet into an environment for purely
> > testing whether your application can withstand various flavors of edge
> > case re-orgs. You may want to test whether your application can
> > withstand normal mainnet behavior (no re-orgs for long periods of
> > time) first before you concern yourself with re-orgs.
>
> Huh? Why would the goal be to match mainnet? The goal, as I understand it, is 
> to allow software to
> use SigNet without modification *to make testing simpler* - keep the header 
> format the same to let
> SPV clients function without (significant) modification, etc. The point of 
> the whole thing is to
> make testing as easy as possible, why would we do otherwise.
>
> Further, because one goal here is to enable clients to opt in or out of the 
> reorg chain at will
> (presumably by just changing one config flag in bitcoin.conf), why would we 
> worry about making it
> "similar to mainnet". If users want an experience "similar to mainnet", they 
> can simply turn off
> reorgs and they'll see a consistent chain moving forward which never reorgs, 
> similar to the
> practical experience of mainnet.
>
> Once you've opted into reorgs, you almost certainly are looking to *test* 
> reorgs - you just
> restarted Bitcoin Core with the reorg flag set, waiting around for a reorg 
> after doing that seems
> like the experience of testnet3 today, and the whole reason why we wanted 
> signet to begin with -
> things happen sporadically and 

Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread David A. Harding via bitcoin-dev
On Fri, Sep 10, 2021 at 11:24:15AM -0700, Matt Corallo via bitcoin-dev wrote:
> I'm [...] suggesting [...] that the existing block producers each
> generate a new key, and we then only sign reorgs with *those* keys.
> Users will be able to set a flag to indicate "I want to accept sigs
> from either sets of keys, and see reorgs" or "I only want sigs from
> the non-reorg keys, and will consider the reorg keys-signed blocks
> invalid"

This seems pretty useful to me.  I think we might want multiple sets of
keys:

0. No reorgs

1. Periodic reorgs of small to moderate depth for ongoing testing
without excessive disruption (e.g. the every 8 hours proposal).  I think
this probably ought to be the default-default `-signet` in Bitcoin Core
and other nodes.

2. Either frequent reorgs (e.g. every block) or a webapp that generates
reorgs on demand to further reduce testing delays.

If we can only have two, I'd suggest dropping 0.  I think it's already
the case that too few people test their software with reorgs.

-Dave




Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Matt Corallo via bitcoin-dev
Fwiw, your email client is broken and does not properly quote in the plaintext copy. I believe this 
is a known gmail bug, but I'd recommend avoiding gmail's web interface for list posting :).


On 9/10/21 12:00, Michael Folkson wrote:

>> Huh? Why would the goal be to match mainnet? The goal, as I understand it, is
>> to allow software to use SigNet without modification *to make testing
>> simpler* - keep the header format the same to let SPV clients function
>> without (significant) modification, etc. The point of the whole thing is to
>> make testing as easy as possible, why would we do otherwise.

> I guess Kalle (and AJ) can answer this question better than me but my
> understanding is that the motivation for Signet was that testnet
> deviated erratically from mainnet behavior (e.g. long delays before
> any blocks were mined followed by a multitude of blocks mined in a
> short period of time) which meant it wasn't conducive to normal
> testing of applications. Why would you want a mainnet-like chain? To
> check if your application works on a mainnet-like chain without
> risking any actual value before moving to mainnet. The same purpose as
> testnet but more reliably resembling mainnet behavior. You are well
> within your rights to demand more than that but my preference would be
> to push some of those demands to custom signets rather than the
> default Signet.


Huh? You haven't made an argument here as to why such a chain is easier to test with, only that we 
should "match mainnet". Testing on mainnet sucks, 99% of the time testing on mainnet involves no 
reorgs, which *doesn't* match in-the-field reality of mainnet, with occasional reorgs. Matching 
mainnet's behavior is, in fact, a terrible way to test if your application will run fine on mainnet.


My point is that the goal should be making it easier to test. I'm not entirely sure why there's 
debate here.  I *regularly* have lunch late because I'm waiting for blocks either on mainnet or 
testnet3, and would quite like to avoid that in the future. It takes *forever* to test things on 
mainnet and testnet3; matching their behavior would mean it's equally impossible to test things on 
signet, so why is that something we should strive for?




> Testing out proposed soft forks in advance of them being considered
> for activation would already be introducing a dimension of complexity
> that is going to be hard to manage [0]. I'm generally of the view that
> if you are going to introduce a complexity dimension, keep the other
> dimensions as vanilla as possible. Otherwise you are battling
> complexity in multiple different dimensions and it becomes hard or
> impossible to maintain it and meet your initial objectives.


Yep! Great reason to not have any probabilistic nonsense or try to match mainnet or something on 
signet, just make it deterministic, reorg once a block or twice an hour or whatever and call it a day!


Matt


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Matt Corallo via bitcoin-dev




On 9/10/21 06:05, Michael Folkson wrote:

>> I see zero reason whatsoever to not simply reorg ~every block, or as often as
>> is practical. If users opt in to wanting to test with reorgs, they should be
>> able to test with reorgs, not wait a day to test with reorgs.


> One of the goals of the default Signet was to make the default Signet
> resemble mainnet as much as possible. (You can do whatever you want on
> a custom signet you set up yourself including manufacturing a re-org
> every block if you wish.) Hence I'm a bit wary of making the behavior
> on the default Signet deviate significantly from what you might
> experience on mainnet. Given re-orgs don't occur that often on mainnet
> I can see the argument for making them more regular (every 8 hours
> seems reasonable to me) on the default Signet but every block seems
> excessive. It makes the default Signet into an environment for purely
> testing whether your application can withstand various flavors of edge
> case re-orgs. You may want to test whether your application can
> withstand normal mainnet behavior (no re-orgs for long periods of
> time) first before you concern yourself with re-orgs.


Huh? Why would the goal be to match mainnet? The goal, as I understand it, is to allow software to 
use SigNet without modification *to make testing simpler* - keep the header format the same to let 
SPV clients function without (significant) modification, etc. The point of the whole thing is to 
make testing as easy as possible, why would we do otherwise.


Further, because one goal here is to enable clients to opt in or out of the reorg chain at will 
(presumably by just changing one config flag in bitcoin.conf), why would we worry about making it 
"similar to mainnet". If users want an experience "similar to mainnet", they can simply turn off 
reorgs and they'll see a consistent chain moving forward which never reorgs, similar to the 
practical experience of mainnet.


Once you've opted into reorgs, you almost certainly are looking to *test* reorgs - you just 
restarted Bitcoin Core with the reorg flag set, waiting around for a reorg after doing that seems 
like the experience of testnet3 today, and the whole reason why we wanted signet to begin with - 
things happen sporadically and inconsistently, making developers wait around forever. Please let's 
not replicate the "gotta wait for blocks before I can go to lunch" experience of testnet today on 
signet, I'm tired of eating lunch late.



>> Why bother with a version bit? This seems substantially more complicated than
>> the original proposal that surfaced many times before signet launched to just
>> have a different reorg signing key. Thus, users who wish to follow reorgs can
>> use a 1-of-2 (or higher multisig) and users who wish to not follow reorgs
>> would use a 1-of-1 (or higher multisig), simply marking the reorg blocks as
>> invalid without touching any header bits that non-full clients will ever see.


> If I understand this correctly this is introducing a need for users to
> sign blocks when currently with the default Signet the user does not
> need to concern themselves with signing blocks. That is entirely left
> to the network block signers of the default Signet (who were AJ and
> Kalle last time I checked). Again I don't think this additional
> complexity is needed on the default Signet when you can set up your
> own custom Signet if you want to test edge case scenarios that deviate
> significantly from what you are likely to experience on mainnet. A
> flag set via a configuration argument (the AJ, 0xB10C proposal) with
> no-reorgs (or 8 hour re-orgs) as the default seems to me like it would
> introduce no additional complexity to the casual (or alpha stage)
> tester experience though of course it introduces implementation
> complexity.
>
> To move the default Signet in the direction of resembling mainnet even
> closer would be to randomly generate batches of transactions to fill
> up blocks and create a fee market. It would be great to be able to
> test features like RBF and Lightning unhappy paths (justice
> transactions, perhaps even pinning attacks etc) on the default Signet
> in future.


I believe my suggestion was not correctly understood. I'm not suggesting *users* sign blocks or 
otherwise do anything manually here, only that the existing block producers each generate a new key, 
and we then only sign reorgs with *those* keys. Users will be able to set a flag to indicate "I want 
to accept sigs from either sets of keys, and see reorgs" or "I only want sigs from the non-reorg 
keys, and will consider the reorg keys-signed blocks invalid"
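
In signet-challenge terms, the sketch (key names illustrative) is
simply:

   non-reorg nodes:       <key_normal> OP_CHECKSIG
   reorg-following nodes: 1 <key_normal> <key_reorg> 2 OP_CHECKMULTISIG

with reorg blocks signed only for <key_reorg>, so nodes enforcing the
first challenge naturally treat them as invalid.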


Matt


Re: [bitcoin-dev] Reorgs on SigNet - Looking for feedback on approach and parameters

2021-09-10 Thread Michael Folkson via bitcoin-dev
> I see zero reason whatsoever to not simply reorg ~every block, or as often as 
> is practical. If users opt in to wanting to test with reorgs, they should be 
> able to test with reorgs, not wait a day to test with reorgs.

One of the goals of the default Signet was to make the default Signet
resemble mainnet as much as possible. (You can do whatever you want on
a custom signet you set up yourself including manufacturing a re-org
every block if you wish.) Hence I'm a bit wary of making the behavior
on the default Signet deviate significantly from what you might
experience on mainnet. Given re-orgs don't occur that often on mainnet
I can see the argument for making them more regular (every 8 hours
seems reasonable to me) on the default Signet but every block seems
excessive. It makes the default Signet into an environment for purely
testing whether your application can withstand various flavors of edge
case re-orgs. You may want to test whether your application can
withstand normal mainnet behavior (no re-orgs for long periods of
time) first before you concern yourself with re-orgs.

> Why bother with a version bit? This seems substantially more complicated than 
> the original proposal that surfaced many times before signet launched to just 
> have a different reorg signing key. Thus, users who wish to follow reorgs can 
> use a 1-of-2 (or higher multisig) and users who wish to not follow reorgs 
> would use a 1-of-1 (or higher multisig), simply marking the reorg blocks as 
> invalid without touching any header bits that non-full clients will ever see.

If I understand this correctly this is introducing a need for users to
sign blocks when currently with the default Signet the user does not
need to concern themselves with signing blocks. That is entirely left
to the network block signers of the default Signet (who were AJ and
Kalle last time I checked). Again I don't think this additional
complexity is needed on the default Signet when you can set up your
own custom Signet if you want to test edge case scenarios that deviate
significantly from what you are likely to experience on mainnet. A
flag set via a configuration argument (the AJ, 0xB10C proposal) with
no-reorgs (or 8 hour re-orgs) as the default seems to me like it would
introduce no additional complexity to the casual (or alpha stage)
tester experience though of course it introduces implementation
complexity.

To move the default Signet in the direction of resembling mainnet even
closer would be to randomly generate batches of transactions to fill
up blocks and create a fee market. It would be great to be able to
test features like RBF and Lightning unhappy paths (justice
transactions, perhaps even pinning attacks etc) on the default Signet
in future.

-- 
Michael Folkson
Email: michaelfolk...@gmail.com
Keybase: michaelfolkson
PGP: 43ED C999 9F85 1D40 EAF4 9835 92D6 0159 214C FEE3


Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-10 Thread Filippo Merli via bitcoin-dev
Hi!

From the proposal it is not clear why a miner must reference other miners'
shares in his shares.
What I mean is that there is a huge incentive for a rogue miner to not
reference any share from
other miners so he won't share the reward with anyone, but he will be paid
for the shares that he
creates because good miners will reference his shares.
The pool will probably become unprofitable for good miners.

Another thing that I do not understand is how to resolve conflicts. For
example, using figure 1 at
page 1, a node could receive these 2 valid states:

1. L -> a1 -> a2 -> a3 -> R
2. L -> a1* -> a2* -> R

To resolve the above fork the only two methods that come to my mind are:

1. use the one that has more work
2. use the longest one

Btw both methods present an issue IMHO.

If the longest chain is used:
When a block (L) is found, a miner (a) could easily create a lot of shares
with low difficulty
(L -> a1* -> a2* -> ... -> an*), then start to mine shares with his real
hashrate (L -> a1 -> a2)
and publish them so they get referenced. If someone else finds a block he
gets the reward cause he
has been referenced. If he finds the block he just attaches the found
block to the longest chain
(that references no one) and publishes it without sharing the reward
(L -> a1* -> a2* -> ... -> an* -> R).

If the one with more work is used:
A miner that has published the shares (L -> a1 -> a2 -> a3), when he finds a
block R that alone has more
work than a1 + a2 + a3, just publishes (L -> R) and does not share the
reward with anyone.

On Wed, Sep 8, 2021 at 1:15 PM pool2win via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> > A thing I just realized about Braidpool is that the payout server is
> > still a single central point-of-failure.
>
> > However, this probably complicates the design too much, and it may be
> > more beneficial to get *something* working now.
>
> You have hit the nail on the head here and Chris Belcher's original
> proposal for using payment channels does provide a construction for
> multiple hubs [1]. In the Braidpool proposal however, the focus is on a
> single hub to describe the plan for an MVP.
>
> Decentralising hubs is the end goal here, and either Belcher's multiple
> hubs construction or a leadership election based construction along the
> lines you propose might be a good way forward. Belcher's idea has the added
> advantage that the required liquidity at each hub is reduced as more hubs
> join, with the cost that in case of a hubs defecting, it takes longer for
> miners to do cascading close on channels to all hubs. TBH, it might be a
> cost worth paying in the absence of better ideas. But as braidpool is
> built, more ideas will appear as well.
>
> [1] Payment Channel Payouts: An Idea for Improving P2Pool Scalability:
> https://bitcointalk.org/index.php?topic=2135429.0
>
> -- Original Message --
> On Tue, September 7, 2021 at 11:39 PM,  ZmnSCPxj via bitcoin-dev<
> bitcoin-dev@lists.linuxfoundation.org> wrote:
> Good morning all,
>
> A thing I just realized about Braidpool is that the payout server is still
> a single central point-of-failure.
>
> Although the paper claims to use Tor hidden service to protect against
> DDoS attacks, its centrality still cannot protect against sheer accident.
> What happens if some clumsy human (all humans are clumsy, right?) fumbles
> the cables in the datacenter the hub is hosted in?
> What happens if the country the datacenter is in is plunged into war or
> anarchy, because you humans love war and chaos so much?
> What happens if Zeus has a random affair (like all those other times),
> Hera gets angry, and they get into a domestic, and then a random thrown
> lightning bolt hits the datacenter the hub is in?
>
> The paper relies on economic arguments ("such an action will end the pool
> and the stream of future profits for the hub"), but economic arguments tend
> to be a lot less powerful in a monopoly, and the hub effectively has a
> monopoly on all Braidpool miners.
> Hashers might be willing to tolerate minor peccadilloes of the hub, simply
> to let the pool continue (their other choices would be even worse).
>
> So it seems to me that it would still be nicer, if it were at all
> possible, to use multiple hubs.
> I am uncertain how easily this can be done.
>
> Perhaps a Lightning model can be considered.
> Multiple hubs may exist which offer liquidity to the Braidpool network,
> hashers measure uptime and timeliness of payouts, and the winning hasher
> elects one of the hubs.
> The hub gets paid on the coinbase, and should send payouts, minus fees, on
> the LN to the miners.
>
> However, this probably complicates the design too much, and it may be more
> beneficial to get *something* working now.
> Let not the perfect be the enemy of the good.
>
> Regards,
> ZmnSCPxj

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Antoine Riard via bitcoin-dev
Hi AJ,

Thanks for finally putting the pieces together! [0]

We've been hacking with Gleb on a paper for the CoinPool protocol [1]
during the last weeks and it should be public soon, hopefully highlighting
what kind of schemes a TAPLEAF_UPDATE_VERIFY-style covenant enables :)

Here is some early feedback on this specific proposal,

> So that makes it relatively easy to imagine creating a new taproot address
> based on the input you're spending by doing some or all of the following:
>
>  * Updating the internal public key (ie from P to P' = P + X)
>  * Trimming the merkle path (eg, removing CD)
>  * Removing the script you're currently executing (ie E)
>  * Adding a new step to the end of the merkle path (eg F)

"Talk is cheap. Show me the code" :p

case OP_MERKLESUB:
{
    if (!(flags & SCRIPT_VERIFY_MERKLESUB)) {
        break;
    }

    if (stack.size() < 2) {
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
    }

    valtype& vchPubKey = stacktop(-1);

    if (vchPubKey.size() != 32) {
        break;
    }

    const std::vector<unsigned char>& vch = stacktop(-2);
    int nOutputPos = CScriptNum(vch, fRequireMinimal).getint();

    if (nOutputPos < 0) {
        return set_error(serror, SCRIPT_ERR_NEGATIVE_MERKLEVOUT);
    }

    if (!checker.CheckMerkleUpdate(*execdata.m_control, nOutputPos, vchPubKey)) {
        return set_error(serror, SCRIPT_ERR_UNSATISFIED_MERKLESUB);
    }
    break;
}

case OP_NOP1: case OP_NOP5:



template <class T>
bool GenericTransactionSignatureChecker<T>::CheckMerkleUpdate(const std::vector<unsigned char>& control, unsigned int out_pos, const std::vector<unsigned char>& point) const
{
    //! The internal pubkey (x-only, so no Y coordinate parity).
    XOnlyPubKey p{uint256(std::vector<unsigned char>(control.begin() + 1, control.begin() + TAPROOT_CONTROL_BASE_SIZE))};
    //! Update the internal key by subtracting the point.
    XOnlyPubKey s{uint256(point)};
    XOnlyPubKey u;
    try {
        u = p.UpdateInternalKey(s).value();
    } catch (const std::bad_optional_access& e) {
        return false;
    }

    //! The first control node is made the new tapleaf hash.
    //! TODO: what if there is no control node ?
    uint256 updated_tapleaf_hash;
    updated_tapleaf_hash = uint256(std::vector<unsigned char>(control.data() + TAPROOT_CONTROL_BASE_SIZE, control.data() + TAPROOT_CONTROL_BASE_SIZE + TAPROOT_CONTROL_NODE_SIZE));

    //! The committed-to output must be in the spent transaction vout range.
    if (out_pos >= txTo->vout.size()) return false;
    int witnessversion;
    std::vector<unsigned char> witnessprogram;
    txTo->vout[out_pos].scriptPubKey.IsWitnessProgram(witnessversion, witnessprogram);
    //! The committed-to output must be a witness v1 program at least.
    if (witnessversion == 0) {
        return false;
    } else if (witnessversion == 1) {
        //! The committed-to output key.
        const XOnlyPubKey q{uint256(witnessprogram)};
        //! Compute the Merkle root from the updated leaf and the path, with the first node skipped.
        const uint256 merkle_root = ComputeTaprootMerkleRoot(control, updated_tapleaf_hash, 1);
        //! TODO: modify MERKLESUB design
        bool parity_ret = q.CheckTapTweak(u, merkle_root, true);
        bool no_parity_ret = q.CheckTapTweak(u, merkle_root, false);
        if (!parity_ret && !no_parity_ret) {
            return false;
        }
    }
    return true;
}


Here are the main chunks for an "<n> <point> OP_MERKLESUB" opcode, with `n` the
output position which is checked for update and `point` the x-only pubkey
which must be subtracted from the internal key.

I think one design advantage of explicitly passing the output position as a
stack element is giving more flexibility to your contract dev. The first
output could be SIGHASH_ALL locked-down. e.g "you have to pay Alice on
output 1 while pursuing the contract semantic on output 2".
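
For instance, a leaf along the lines of (sketch only, eliding whether
OP_MERKLESUB consumes its operands):

   2 <P_bob> OP_MERKLESUB <P_bob> OP_CHECKSIG

would let Bob exit with a signature while forcing output 2 to carry
the same taptree minus this leaf, and internal key P - P_bob.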

One could also imagine a list of output positions to force the taproot
update on multiple outputs ("OP_MULTIMERKLESUB"). Coming back to your citadel
joint-venture example, partners could decide to split the funds in 3
equivalent amounts *while* conserving the pre-negotiated script policies [2].

For the merkle branches extension, I was thinking of introducing a separate
OP_MERKLEADD, maybe to *add* a point to the internal pubkey group signer.
If you're only interested in leaf pruning, using OP_MERKLESUB only should
save you one byte of empty vector ?

We can also explore more fancy opcodes where the updated merkle branch is
pushed on the stack for deep manipulations. Or even n-dimensional
inspections if combined with your G'root [3] ?

Note, this current OP_MERKLESUB proposal doesn't deal with committing the
parity of the internal pubkey as part of the spent utxo. As you highlighted
well in your other mail, if we want to conserve the updated 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Anthony Towns via bitcoin-dev
On Thu, Sep 09, 2021 at 12:26:37PM -0700, Jeremy wrote:
> I'm a bit skeptical of the safety of the control byte. Have you considered the
> following issues?

> If we used the script "0 F 0 TLUV" (H=F, C=0) then we keep the current
> script, keep all the steps in the merkle path (AB and CD), and add
> a new step to the merkle path (F), giving us:
>     EF = H_TapBranch(E, F)
>     CDEF =H_TapBranch(CD, EF)
>     ABCDEF = H_TapBranch(AB, CDEF)
> 
> If we recursively apply this rule, would it not be possible to repeatedly 
> apply
> it and end up burning out path E beyond the 128 Taproot depth limit?

Sure. Suppose you had a script X which allows adding a new script A[0..n]
as its sibling. You'd start with X and then go to (A0, X), then (A0,
(A1, X)), then (A0, (A1, (A2, X))) and by the time you added A127 TLUV
would fail because it'd be trying to add a path longer than 128 elements.

But this would be bad anyway -- you'd already have a maximally unbalanced
tree. So the fix for both these things would be to do a key path spend
and rebalance the tree. With taproot, you always want to do key path
spends if possible.

Another approach would be to have X replace itself not with (X, A) but
with (X, (X, A)) -- that way you go from:

   /\
  A  X

to

   /\
  A /\
   X /\
     B  X

to

      /\
     /  \
    A   /\
       /  \
      /\  /\
     C  X B  X

and can keep the tree height at O(log(n)) in the number of members.

This means the script X would need a way to reference its own hash, but
you could do that by invoking TLUV twice, once to check that your new
sPK is adding a sibling (X', B) to the current script X, and a second
time to check that you're replacing the current script with (X', (X',
B)). Executing it twice ensures that you've verified X' = X, so you can
provide X' on the stack, rather than trying to include the script's own
hash in itself.

> Perhaps it's OK: E can always approve burning E?

As long as you've got the key path, then I think that's the thing to do.

> If we used the script "0 F 4 TLUV" (H=F, C=4) then we keep the current
> script, but drop the last step in the merkle path, and add a new step
> (effectively replacing the *sibling* of the current script):
>     EF = H_TapBranch(E, F)
>     ABEF = H_TapBranch(AB, EF) 
> If we used the script "0 0 4 TLUV" (H=empty, C=4) then we keep the current
> script, drop the last step in the merkle path, and don't add anything new
> (effectively dropping the sibling), giving just:
>     ABE = H_TapBranch(AB, E)
> 
> Is C = 4 stable across all state transitions? I may be missing something, but
> it seems that the location of C would not be stable across transitions.

Dropping a sibling without replacing it or dropping the current script
would mean you could re-execute the same script on the new utxo, and
repeat that enough times and the only remaining ways of spending would
be that script and the key path.

> E.g., What happens when, C and E are similar scripts and C adds some clauses
> F1, F2, F3, then what does this sibling replacement do? Should a sibling not 
> be
> able to specify (e.g., by leaf version?) a NOREPLACE flag that prevents
> siblings from modifying it?

If you want a utxo where some script paths are constant, don't construct
the utxo with script paths that can modify them.

> What happens when E adds a bunch of F's F1 F2 F3, is C still in the same
> position as when E was created?

That depends how you define "position". If you have:


   /\
  R  S

and

   /\
  R /\
   S  T

then I'd say that "R" has stayed in the same position, while "S" has
been lowered to allow for a new sibling "T". But the merkle path to
R will have changed (from "H(S)" to "H(H(S),H(T))"). 

> Especially since nodes are lexicographically sorted, it seems hard to create
> stable path descriptors even if you index from the root downwards.

The merkle path will always change unless you have the exact same set
of scripts, so that doesn't seem like a very interesting way to define
"position" when you're adding/removing/replacing scripts.

The "lexical ordering" is just a modification to how the hash is
calculated that makes it commutative, so that H(A,B) = H(B,A), with
the result being that the merkle path for any script in the R,(S,T)
tree above is the same for the corresponding script in the tree:

   /\
  /\ R
 T  S

Cheers,
aj
