Re: [bitcoin-dev] [Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin
Good morning John Law,

> (at the expense of requiring an on-chain transaction to update
> the set of channels created by the factory).

Hmmm, this kind of loses the point of a factory? By my understanding, the point is that the set of channels can be changed *without* an onchain transaction.

Otherwise, it seems to me that factories with this "expense of requiring an on-chain transaction" can be created, today, without even Taproot:

* The funding transaction output pays to a simple n-of-n.
* The above n-of-n is spent by an *offchain* transaction that splits the funds to the current set of channels.
* To change the set of channels, the participants perform this ritual:
  * Create, but do not sign, an alternate transaction that spends the above n-of-n to a new n-of-n with the same participants (possibly with tweaked keys).
  * Create and sign, but do not broadcast, a transaction that spends the above alternate n-of-n output and splits it to the new set of channels.
  * Sign the alternate transaction and broadcast it; this is the on-chain transaction needed to update the set of channels.

The above works today without changes to Bitcoin, and even without Taproot (though for large N the witness size does become fairly large without Taproot). The above is really just a "no updates" factory that cuts through its closing transaction with the opening of a new factory.

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
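As a toy model of the ritual described above (pure bookkeeping in Python; the `Tx` type and helper names are hypothetical, not real transaction code), note that exactly one transaction hits the chain per update:

```python
from dataclasses import dataclass

@dataclass
class Tx:
    # Hypothetical stand-in for a transaction: we only track what it
    # spends/creates and whether it has been signed or broadcast.
    spends: list
    creates: list
    signed: bool = False
    broadcast: bool = False

def update_channel_set(funding_outpoint, new_channels):
    """One round of the update ritual: returns the on-chain alternate
    transaction and the off-chain split transaction."""
    # 1. Create, but do not sign, the alternate tx: old n-of-n -> new n-of-n.
    alt = Tx(spends=[funding_outpoint], creates=["n-of-n'"])
    # 2. Create and sign, but do NOT broadcast, the split of the new
    #    n-of-n into the new set of channels.
    split = Tx(spends=["n-of-n'"], creates=list(new_channels), signed=True)
    # 3. Only now sign and broadcast the alternate tx -- the single
    #    on-chain transaction the update costs.
    alt.signed = True
    alt.broadcast = True
    return alt, split
```

The ordering is the point: the split is fully signed before the alternate transaction is broadcast, so no participant has to trust the others after the on-chain step.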
Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
Hi Antoine,

First of all, thank you for the thorough review. I appreciate your insight on LN requirements.

> IIUC, you have a package A+B+C submitted for acceptance and A is already in
> your mempool. You trim out A from the package and then evaluate B+C.
> I think this might be an issue if A is the higher-fee element of the ABC
> package. B+C package fees might be under the mempool min fee and will be
> rejected, potentially breaking the acceptance expectations of the package
> issuer?

Correct, if B+C is too low feerate to be accepted, we will reject it. I prefer this because it is incentive compatible: A can be mined by itself, so there's no reason to prefer A+B+C instead of A.

As another way of looking at this, consider the case where we do accept A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches capacity, we evict the lowest descendant feerate transactions, which are B+C in this case. This gives us the same resulting mempool, with A and not B+C.

> Further, I think the dedup should be done on wtxid, as you might have
> multiple valid witnesses. Though with varying vsizes and as such offering
> different feerates.

I agree that variations of the same package with different witnesses are a case that must be handled. I consider witness replacement to be a project that can be done in parallel to package mempool acceptance, because being able to accept packages does not worsen the problem of a same-txid-different-witness "pinning" attack.

If or when we have witness replacement, the logic is: if the individual transaction is enough to replace the mempool one, the replacement will happen during the preceding individual transaction acceptance, and deduplication logic will work. Otherwise, we will try to deduplicate by wtxid, see that we need a package witness replacement, and use the package feerate to evaluate whether this is economically rational.
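A toy model of this trim-then-evaluate logic (the function name and tuple layout are assumptions for illustration; this is not Bitcoin Core's code):

```python
def evaluate_trimmed_package(package, mempool_txids, min_feerate):
    """Drop package members already in the mempool, then check whether
    the remainder clears the mempool minimum feerate.
    Each tx is a (txid, fee_sats, vsize_vb) tuple."""
    remainder = [tx for tx in package if tx[0] not in mempool_txids]
    if not remainder:
        return True  # nothing new to accept
    total_fee = sum(fee for _, fee, _ in remainder)
    total_vsize = sum(vsize for _, _, vsize in remainder)
    # Pooled feerate of what is actually being added to the mempool.
    return total_fee / total_vsize >= min_feerate
```

With A (5000 sats) already in the mempool, the leftover B+C at 0.5 sat/vB falls below a 1 sat/vB mempool minimum and is rejected, mirroring the incentive-compatibility argument above.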
See the #22290 "handle package transactions already in mempool" commit (https://github.com/bitcoin/bitcoin/pull/22290/commits/fea75a2237b46cf76145242fecad7e274bfcb5ff), which handles the case of same-txid-different-witness by simply using the transaction in the mempool for now, with TODOs for what I just described.

> I'm not clearly understanding the accepted topologies. By "parent and child
> to share a parent", do you mean the set of transactions A, B, C, where B is
> spending A and C is spending A and B would be correct?

Yes, that is what I meant, and that would be a valid package under these rules.

> If yes, is there a width-limit introduced or we fall back on
> MAX_PACKAGE_COUNT=25?

No, there is no limit on connectivity other than "child with all unconfirmed parents." We will enforce MAX_PACKAGE_COUNT=25 and the child's in-mempool + in-package ancestor limits.

> Considering the current Core's mempool acceptance rules, I think CPFP
> batching is unsafe for LN time-sensitive closure. A malicious tx-relay
> jamming successful on one channel commitment transaction would contaminate
> the remaining commitments sharing the same package.
> E.g., you broadcast the package A+B+C+D+E where A, B, C, D are commitment
> transactions and E a shared CPFP. If a malicious A' transaction has a better
> feerate than A, the whole package acceptance will fail. Even if A' confirms
> in the following block, the propagation and confirmation of B+C+D have been
> delayed. This could result in a loss of funds.

Please note that A may replace A' even if A' has higher fees than A individually, because the proposed package RBF utilizes the fees and size of the entire package. This just requires E to pay enough fees, although this can be pretty high if there are also potential B' and C' competing commitment transactions that we don't know about.

> IMHO, I'm leaning towards deploying during a first phase 1-parent/1-child.
> I think it's the most conservative step still improving second-layer safety.
So far, my understanding is that multi-parent-1-child is desired for batched fee-bumping (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and I've also seen your response, which I have less context on (https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That being said, I am happy to create a new proposal for 1 parent + 1 child (which would be slightly simpler) and plan for moving to multi-parent-1-child later if that is preferred. I am very interested in hearing feedback on that approach.

> If A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and
> A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance fails.
> For this reason I think the individual RBF should be bypassed and only the
> package RBF apply?

I think there is a misunderstanding here - let me describe what I'm proposing we'd do in this situation: we'll try individual submission for A and see that it fails due to "insufficient fees." Then, we'll try package validation for A+B and use package RBF. If A+B pays enough, it can still replace A'. If A fails for a bad signature
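A sketch of the package RBF arithmetic being described (simplified to pooled versions of BIP125-style rules 3 and 4; the function name and the 1 sat/vB incremental relay feerate are assumptions):

```python
INCREMENTAL_RELAY_FEERATE = 1.0  # sat/vB, a simplification

def package_replaces(old_fee, old_vsize, package):
    """Would this package replace the conflicting mempool transaction?
    Fees and sizes are pooled across the package, so a 0-fee parent can
    ride on its child's fees. package: list of (fee_sats, vsize_vb)."""
    pkg_fee = sum(fee for fee, _ in package)
    pkg_vsize = sum(vsize for _, vsize in package)
    # Rule 4 analogue: pay for the replaced fees plus the package's own relay.
    pays_for_itself = pkg_fee >= old_fee + INCREMENTAL_RELAY_FEERATE * pkg_vsize
    # Rule 3 analogue: the pooled feerate must actually improve.
    improves_feerate = pkg_fee / pkg_vsize > old_fee / old_vsize
    return pays_for_itself and improves_feerate
```

With A' at 100 sats over 100 vB, A alone (0 sats) is hopeless, but A+B succeeds once B pays enough - e.g. 400 sats over another 100 vB - matching "if A+B pays enough, it can still replace A'".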
Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode
On Sat, Sep 18, 2021 at 10:11:10AM -0400, Antoine Riard wrote:
> I think one design advantage of combining scope-minimal opcodes like
> MERKLESUB with sighash malleability is the ability to update a subset of the
> off-chain contract transactions fields after the funding phase.

Note that it's not "update" so much as "add to"; and I mostly think graftroot (and friends), or just updating the utxo onchain, are a better general purpose way of doing that. It's definitely a tradeoff though.

> Yes this is a different contract policy that I would like to set up.
> Let's say you would like to express the following set of capabilities.
> C0="Split the 4 BTC funds between Alice/Bob and Caroll/Dave"
> C1="Alice can withdraw 1 BTC after 2 weeks"
> C2="Bob can withdraw 1 BTC after 2 weeks"
> C3="Caroll can withdraw 1 BTC after 2 weeks"
> C4="Dave can withdraw 1 BTC after 2 weeks"
> C5="If USDT price=X, Alice can withdraw 2 BTC or Caroll can withdraw 2 BTC"

Hmm, I'm reading C5 as "If an oracle says X, and Alice and Carol agree, they can distribute all the remaining funds as they see fit".

> If C4 is exercised, to avoid trust in the remaining counterparty, both Alice
> or Caroll should be able to conserve the C5 option, without relying on the
> updated key path.
> As you're saying, as we know the group in advance, one way to setup the tree
> could be:
> (A, (B, C), BC), D), BCD), E, F), EF), G), EFG)))

Make it:

  (((AB, (A,B)), (CD, (C,D))), ACO)

  AB = DROP DUP 0 6 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 2BTC LESSTHAN
  CD = same but for carol+dave
  A  = DUP 10 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
  B' = DUP 0 2 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
  B,C,D = same as A but for bob, etc
  A',C',D' = same as B' but for alice, etc
  ACO = CHECKSIGVERIFY CHECKSIG

Probably AB, CD, A..D, A'..D' all want a CLTV delay in there as well.
(Relative timelocks would probably be annoying for everyone who wasn't the first to exit the pool.)

> Note, this solution isn't really satisfying as the G path isn't neutralized
> on the Caroll/Dave fork and could be replayed by Alice or Bob...

I think the above fixes that -- when AB is spent it deletes itself and the (A,B) pair; when A is spent, it deletes (A, B and AB) and replaces them with B'; when B' is spent it just deletes itself.

Cheers,
aj
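The leaf bookkeeping aj describes can be simulated with a toy leaf-set model (Python; the leaf names are taken from the tree above, but the TLUV script semantics are entirely elided - this only tracks which capabilities survive each spend):

```python
def spend_leaf(leaves, leaf):
    """Return the leaf set of the updated output after `leaf` is spent,
    following the A/B branch rules described above. A sketch only."""
    leaves = set(leaves)
    if leaf == "AB":
        # Joint exit: AB deletes itself and the (A, B) pair.
        leaves -= {"AB", "A", "B"}
    elif leaf == "A":
        # Alice exits alone: A, B and AB are deleted, Bob keeps a B' leaf.
        leaves -= {"A", "B", "AB"}
        leaves.add("B'")
    elif leaf == "B'":
        # Bob's solo leaf just deletes itself.
        leaves.discard("B'")
    return leaves
```

Spending A and then B' leaves only the Carol/Dave branch and the oracle path, which is the replay-protection property being claimed.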
Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF
Hi Gloria,

Thanks for this detailed post! The illustrations you provided are very useful for this kind of graph topology problem. The rules you lay out for package RBF look good to me at first glance, as there are some subtle improvements compared to BIP 125.

> 1. A package cannot exceed `MAX_PACKAGE_COUNT=25` count and
> `MAX_PACKAGE_SIZE=101KvB` total size [8]

I have a question regarding this rule, as your example 2C could be concerning for LN (unless I didn't understand it correctly). This also touches on package RBF rule 5 ("The package cannot replace more than 100 mempool transactions.").

In your example we have a parent transaction A already in the mempool and an unrelated child B. We submit a package C + D where C spends another of A's inputs. You're highlighting that this package may be rejected because of the unrelated transaction(s) B.

The way I see this, an attacker can abuse this rule to ensure transaction A stays pinned in the mempool without confirming, by broadcasting a set of child transactions that reach these limits and pay low fees (where A would be a commit tx in LN).

We had to create the CPFP carve-out rule explicitly to work around this limitation, and I think it would be necessary for package RBF as well, because in such cases we do want to be able to submit a package A + C where C pays high fees to speed up A's confirmation, regardless of unrelated unconfirmed children of A... We could submit only C to benefit from the existing CPFP carve-out rule, but that wouldn't work if our local mempool doesn't have A yet while other remote mempools do.

Is my concern justified? Is this something that we should dig into a bit deeper?

Thanks,
Bastien

On Thu, Sep 16, 2021 at 09:55, Gloria Zhao via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi there,
>
> I'm writing to propose a set of mempool policy changes to enable package
> validation (in preparation for package relay) in Bitcoin Core.
> These would not be consensus or P2P protocol changes. However, since mempool
> policy significantly affects transaction propagation, I believe this is
> relevant for the mailing list.
>
> My proposal enables packages consisting of multiple parents and 1 child. If
> you develop software that relies on specific transaction relay assumptions
> and/or are interested in using package relay in the future, I'm very
> interested to hear your feedback on the utility or restrictiveness of these
> package policies for your use cases.
>
> A draft implementation of this proposal can be found in [Bitcoin Core
> PR#22290][1].
>
> An illustrated version of this post can be found at
> https://gist.github.com/glozow/dc4e9d5c5b14ade7cdfac40f43adb18a.
> I have also linked the images below.
>
> ## Background
>
> Feel free to skip this section if you are already familiar with mempool
> policy and package relay terminology.
>
> ### Terminology Clarifications
>
> * Package = an ordered list of related transactions, representable by a
>   Directed Acyclic Graph.
> * Package Feerate = the total modified fees divided by the total virtual
>   size of all transactions in the package.
>   - Modified fees = a transaction's base fees + fee delta applied by the
>     user with `prioritisetransaction`. As such, we expect this to vary
>     across mempools.
>   - Virtual Size = the maximum of virtual sizes calculated using [BIP141
>     virtual size][2] and sigop weight. [Implemented here in Bitcoin
>     Core][3].
>   - Note that feerate is not necessarily based on the base fees and
>     serialized size.
> * Fee-Bumping = user/wallet actions that take advantage of miner incentives
>   to boost a transaction's candidacy for inclusion in a block, including
>   Child Pays for Parent (CPFP) and [BIP125][12] Replace-by-Fee (RBF). Our
>   intention in mempool policy is to recognize when the new transaction is
>   more economical to mine than the original one(s) but not open DoS vectors,
>   so there are some limitations.
>
> ### Policy
>
> The purpose of the mempool is to store the best (to be most
> incentive-compatible with miners, highest feerate) candidates for inclusion
> in a block. Miners use the mempool to build block templates. The mempool is
> also useful as a cache for boosting block relay and validation performance,
> aiding transaction relay, and generating feerate estimations.
>
> Ideally, all consensus-valid transactions paying reasonable fees should
> make it to miners through normal transaction relay, without any special
> connectivity or relationships with miners. On the other hand, nodes do not
> have unlimited resources, and a P2P network designed to let any honest node
> broadcast their transactions also exposes the transaction validation engine
> to DoS attacks from malicious peers.
>
> As such, for unconfirmed transactions we are considering for our mempool,
> we apply a set of validation rules in addition to consensus, primarily to
> protect us