Re: [bitcoin-dev] Covenants and feebumping

2022-03-14 Thread darosior via bitcoin-dev
Hi Jeremy,

Thanks for the feedback. I indeed only compared it to existing fee-bumping 
methods. But sponsors are pretty similar to CPFP in usage anyways: they 'just' 
get rid of the complexity of managing transaction chains in the mempool. 
That's great, don't get me wrong, but it's much less ideal than a solution 
not requiring additional UTxOs to be reserved and managed, nor additional 
onchain transactions.

Regarding chain efficiency. First, you wrote:
> As you've noted, an approach like precommitted different fee levels might 
> work, but has substantial costs.

Well, i noted that it *does* work (at least for vaults). And it does incur a 
cost, but that cost is lower than the other solutions'. Then, sure, sponsors' 
-like CPFP's- cost can be amortized. Their chain usage would still likely be 
higher (it depends on the case, i'd say), but even then the "direct" chain 
usage cost isn't what matters most. As mentioned, the cost of using funds not 
internal to the contract really is.

Regarding capital efficiency, again as noted in the post, it's the entire point 
to use funds internal to the
contract ("pre-committed"). Sure external funding (by the means of sponsors or 
any other technique) allows you
to allocate funds later on, or never. But we want contracts that are actually 
enforceable, i guess?
On the other hand, pre-committing to all the possible fee-bumped levels 
prevents you from dynamically adding more fees later. That's why you need to 
pre-commit to levels up to your assumed "max feerate before i close the 
contract". For "cold contracts" (vaults), timelocks prevent the DoS of 
immediately using a large feerate. For "hot contracts" a signature challenge 
is used to achieve the same. I know the latter is imperfect, since the lower 
the uptime risk (by increasing the number of network monitors), the higher 
the DoS risk (as you duplicate the key). That's why i asked if anybody had 
some thoughts about this and if there was a cleverer way of doing it.
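To make the pre-committed-levels idea concrete, here is a minimal sketch. The 
doubling spacing, the 1-to-1000 sat/vb range, and the function name are my 
assumptions for illustration, not Revault's actual parameters:

```python
# Hypothetical sketch: enumerate the feerate levels at which a revocation
# ("Cancel") transaction would be pre-signed, doubling each step up to the
# assumed "max feerate before i close the contract".
def precommitted_feerates(min_feerate=1, max_feerate=1000, factor=2):
    """Return the feerate levels (sat/vb) to pre-sign versions for."""
    levels = []
    feerate = min_feerate
    while feerate < max_feerate:
        levels.append(feerate)
        feerate *= factor
    levels.append(max_feerate)  # always include the assumed ceiling
    return levels

print(precommitted_feerates())
# [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1000]: 11 pre-signed versions
```

With geometric spacing the number of pre-signed versions grows only 
logarithmically in the assumed maximum feerate, which is what keeps this 
approach's cost bounded.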

> This is also true for vaults where you know you only want to open 1 per month 
> let's say, and not
> your vaults per month, which pre-committing requires.

Huh? Pre-committing here is to pre-commit to levels of the revocation 
("Cancel") transaction. It has nothing
to do with "activating" (using Revault's terminology) a vault, done by sharing 
a signature for the Unvault
transaction.
You might have another vault design in mind whereby any deposited fund is 
unvault-able. In this case, as with any other active contract, i think you 
need to have funds ready to pay the fees for the contract to be enforceable, 
whether these funds come from the contract's funds or from 
externally-reserved UTxOs.

> you don't need a salt, you just need a unique payout addr (e.g. hardened 
> derivation) per revocation txn and
> you cannot guess the branch.

Yeah, i preferred to go with 8 more vbytes: first because relying on never 
reusing a derivation index is brittle, and also because it would make rescan 
much harder. Imagine having 256 fee levels and making 5 payments a day for 
200 days in a year. You'd have 256000 derivation indexes per year to scan 
for if restoring from backup.
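A back-of-the-envelope check of that rescan burden, using the numbers from 
the paragraph above:

```python
# Rescan burden if each (payment, fee level) pair consumed a unique
# hardened derivation index, using the figures from the text.
fee_levels = 256
payments_per_day = 5
days_per_year = 200

indexes_per_year = fee_levels * payments_per_day * days_per_year
print(indexes_per_year)  # 256000 indexes to scan when restoring from backup
```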

--- Original Message ---
On Sunday, March 13th, 2022 at 3:33 AM, Jeremy Rubin wrote:

> Hi Antoine,
>
> I have a few high level thoughts on your post comparing these types of 
> primitive to an explicit soft fork approach:
>
> 1) Transaction sponsors *is* a type of covenant. Precisely, it is very 
> similar to an "Impossible Input" covenant in conjunction with an "IUTXO" I 
> defined in my 2017 workshop: 
> https://rubin.io/public/pdfs/multi-txn-contracts.pdf (I know, I 
> know... self citation, not cool, but helps with context).
>
> However, for Sponsors itself we optimize the properties of how it works & is 
> represented, as well as "tighten the hatches" on binding to specific TX vs 
> merely spend of the outputs (which wouldn't work as well with APO).
>
> Perhaps thinking of something like sponsors as a form of covenant, rather 
> than a special purpose thing, is helpful?
>
> There's a lot you could do with a general "observe other txns in {this block, 
> the chain}" primitive. The catch is that for sponsors we don't *care* to 
> enable people to use this as a "smart contracting primitive", we want to use 
> it for fee bumping. So we don't care about programmability, we care about 
> being able to use the covenant to bump fees.
>
> 2) On Chain Efficiency.
>
> A) Precommitted Levels
> As you've noted, an approach like precommitted different fee levels might 
> work, but has substantial costs.
>
> However, with sponsors, the minimum viable version of this (not quite what is 
> spec'd in my prior email, but it could be done this way if we care to 
> optimize for bytes) would require 1 in and 1 out with only 32 bytes extra. So 
> that's around 40 bytes outpoint + 64 bytes signature + 40 bytes output + 32 
> bytes metadata = 174 bytes per bump. Bumps in this way can also amortize, so 
> 

Re: [bitcoin-dev] Improving RBF Policy

2022-03-14 Thread Gloria Zhao via bitcoin-dev
Hi Billy,

> We should expect miners will be using a more complex, more optimal way of
> determining what blocks they're working on [...] we should instead run with
> the assumption that miners keep all potentially relevant transactions in
> their mempools, including potentially many conflicting transactions, in
> order to create the most profitable blocks. And therefore we shouldn't put
> the constraint on normal non-mining full nodes to do that same more-complex
> mempool behavior or add any complexity for the purpose of denying
> transaction replacements.

> I think a lot of the complexity in these ideas is because of the attempt
> to match relay rules with miner inclusion rules.

I think the assumption that miners are using a completely different
implementation of mempool and block template building is false. IIUC, most
miners use Bitcoin Core, perhaps configuring their node differently (e.g.
larger mempool and different minfeerates), but they also use
`getblocktemplate`, which means the same ancestor package-based mining
algorithm.

Of course, I'm not a miner, so if anybody is a miner or has seen miners'
setups, please correct me if I'm wrong.

In either case, we would want our mining algorithm to result in block
templates that are as close as possible to perfectly incentive compatible.

Fundamentally, I believe default mempool policy (which perhaps naturally
creates a network-wide transaction relay policy) should be as close to the
mining code as possible. Imagine node A only keeps 1 block's worth of
transactions, and node B keeps a (default) 300MB mempool. The contents of
node A's mempool should be as close as possible to a block template
generated from node B's mempool. Otherwise, node A's mempool is not very
useful - their fee estimation is flawed and compact block relay won't do
them much good if they need to re-request a lot of block transactions.
Next, imagine that node B is a miner. It would be very suboptimal if the
mining code were ancestor package-based (i.e. supported CPFP) but the
mempool policy only cared about individual feerates, evicting low-fee
parents despite their high-fee children.
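A minimal sketch of why individual-feerate eviction is incentive-incompatible 
(the fees and sizes below are made up for illustration):

```python
# A low-feerate parent with a high-feerate child (CPFP). Judged
# individually, the parent looks like eviction material; judged as an
# ancestor package, it is well worth mining.
parent = {"fee": 200, "vsize": 200}   # 1 sat/vb on its own
child = {"fee": 5000, "vsize": 100}   # 50 sat/vb, but unmineable without the parent

individual_parent_feerate = parent["fee"] / parent["vsize"]
package_feerate = (parent["fee"] + child["fee"]) / (parent["vsize"] + child["vsize"])

print(individual_parent_feerate)  # 1.0 sat/vb: evicted by an individual-feerate policy
print(package_feerate)            # ~17.3 sat/vb: what package-based mining actually sees
```

An individual-feerate policy would evict the parent (and with it the child's 
only path into a block), even though the pair together pays a healthy feerate.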
Attempting to match mempool policy with the mining algorithm is also
arguably the point of package relay. Our mining code uses ancestor packages,
which is good, but we only submit transactions to the mempool one at a time,
so a transaction's high-fee children can't be considered until they are all
already in the mempool. Package relay allows us to start thinking about
ancestor packages immediately when evaluating transactions for submission to
the mempool.

The attempt to match policy with miner inclusion rules is deliberate and
necessary.

> I want to echo James O'Beirne's opinion on this that this may be the
> wrong path to go down (a path of more complexity without much gain). He
> said: "Special consideration for "what should be in the next block" and/or
> the caching of block templates seems like an imposing dependency, dragging
> in a bunch of state and infrastructure to a question that should be solely
> limited to mempool feerate aggregates and the feerate of the particular txn
> package a wallet is concerned with."

It seems that I under-explained the purpose of building/caching block
templates in my original post, since both you and James have the same
misunderstanding. Since RBF's introduction, we have improved to an ancestor
package-based mining algorithm. This supports CPFP (incentive compatible)
and it is now common to see more complex "families" of transactions as
users fee-bump transactions (market is working, yay). On the other hand, we
no longer have an accurate way of determining a transaction's "mining
score," i.e., the feerate of this transaction's ancestor package when it is
included in a block template using our current mining algorithm.

This limitation is a big blocker in proposing new fee/feerate RBF rules.
For example, if we say "the transaction needs a better feerate," this is
obviously flawed, since the original transactions may have very
high-feerate children, and the replacement transaction may have low feerate
parents. So what we really want is "the transaction needs to be more
incentive compatible to mine based on our mining algorithm," but we have no
way of getting that information right now.
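A small example of why comparing the two transactions' own feerates is 
misleading (all fees and sizes are hypothetical):

```python
# Naive per-tx feerates vs. the feerate of each transaction's package as it
# would actually be mined. The replacement "wins" the naive comparison but
# loses the incentive-compatible one.
original = {"fee": 1000, "vsize": 200}           # 5 sat/vb alone
original_child = {"fee": 10000, "vsize": 100}    # high-fee child bumping it

replacement = {"fee": 2000, "vsize": 200}        # 10 sat/vb alone: "better"!
replacement_parent = {"fee": 100, "vsize": 900}  # ...but it drags in a low-fee parent

def feerate(*txs):
    return sum(t["fee"] for t in txs) / sum(t["vsize"] for t in txs)

# Naive rule says replace (10 > 5 sat/vb)...
assert feerate(replacement) > feerate(original)
# ...but the original's mined package beats the replacement's.
print(feerate(original, original_child))         # ~36.7 sat/vb
print(feerate(replacement, replacement_parent))  # ~1.9 sat/vb
```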

In my original post, I [described 4 heuristics to get a transaction's "mining
score"][1] using the data currently cached in the mempool (e.g. max ancestor
feerate of the descendant set), as well as why they don't work. As such, the
best way to calculate a transaction's mining score AFAICT is to grab all of
the related transactions and build a mini "block template" with them. The
[implementation][2] I sent last week also cuts out some of the fluff, so
the pseudocode looks like this:

// Get ALL connected entries (ancestors, descendants, siblings, cousins, coparents, etc.)
vector cluster = mempool.GetAllTransactionsRelatedTo(txid);
sort(cluster, ancestorfeerate);
// For