[bitcoin-dev] A small tweak to TLUV to enable off-chain cancellation of payment pool transactions

2022-05-16 Thread Antoine Riard via bitcoin-dev
Hi,

Proposing a small tweak to TLUV to enable cancellation of an off-chain
transaction among a set of pool participants. Namely, to give the index of
the constrained output as an opcode item.

Using CoinPool terminology, the Withdraw phase happens by a participant
publishing an Update transaction and her own Withdraw transaction, freeing
her balance from the pool control. From then on, any participant can
recursively and unilaterally publish a Withdraw transaction. Or the
remaining participants can agree by consensus to stay in the pool,
cancelling the non-published Withdraw transactions with a Snapshot
transaction spending the pool output. This transaction implies a rotation
of the tapscripts, effectively cancelling the Withdraws.

The presence of this last transaction is a bit artificial, and it could be
removed by cancelling the non-published Withdraw transactions directly.
This cancellation would be manifested by producing a group signature
spending any non-published Withdraw transaction's `pool_output` and
`balance_output`.

If the SIGHASH_ANYPREVOUTANYSCRIPT semantic is used, this re-lifting Update
transaction could be attached to any Withdraw transaction, even if the user
balances are not equal, as the amounts are not committed. To enable
rebinding on multiple cancelled Withdraw transactions, I think
SIGHASH_ANYONECANPAY could be used.
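
For illustration, a rough sketch of what the re-lifting Update signature
would and would not commit to (the field lists reflect BIP118 semantics as
I understand them, not a normative digest):

    # Sketch: fields committed by a signature under
    # SIGHASH_ANYPREVOUTANYSCRIPT | SIGHASH_ANYONECANPAY.
    committed = {"nVersion", "nLockTime", "signed_input_nSequence", "outputs"}
    not_committed = {"signed_input_outpoint", "spent_amount",
                     "spent_scriptPubKey", "tapleaf_hash", "other_inputs"}
    # No outpoint, amount or script is committed, so the single group
    # signature can rebind to any cancelled Withdraw's outputs, and
    # ANYONECANPAY lets several cancelled outputs be joined as inputs.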

However, the group producing the signature to spend any cancelled output
should reflect the new set of pool participants after the withdrawals have
been played out. Any withdrawing user should have been removed, as they no
longer have any interest in contributing to the signature. We would like to
avoid a former participant with nothing at stake in the pool being able to
block the pool operations.

E.g, let's say you have Alice, Bob, Caroll and Dave as pool participants.
Each of them owns a Withdraw transaction to exit their individual balances
at any time. Alice publishes her Withdraw transaction. Bob, Caroll and Dave
would like to cancel their non-published ones to pursue the pool
operations. To cancel the non-published transactions, only Bob, Caroll and
Dave should be part of the group of signers encumbering the non-published
Withdraw transactions outputs.

That said, the composition of this group of signers is a function of the
order of the Withdraw transactions, and as such is unknown at pool state
generation. Therefore, it should be constrained by leveraging some covenant
mechanism.
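
To make the combinatorics concrete, a small illustrative sketch enumerating
the possible signer groups for a 4-participant pool:

    from itertools import combinations

    # The cancellation group is the complement of whichever subset already
    # published their Withdraws, so any non-empty subset of participants is
    # a potential group; none of them is known at pool state generation.
    participants = ["alice", "bob", "caroll", "dave"]
    groups = [
        [p for p in participants if p not in withdrawn]
        for k in range(len(participants))
        for withdrawn in combinations(participants, k)
    ]
    assert len(groups) == 2 ** len(participants) - 1  # 15 possible groups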

I believe this is achievable with TLUV semantics, on the condition of
adding an output index to target the second output. Currently, a Withdraw
transaction's `balance_output` is encumbered only by the owner's pubkey.
The updated internal pubkey should also be inherited there to make the
output cancellable. The owner's withdrawing capability could be moved into
a tapscript as a timelock plus a key.

A tapscript from a CoinPool Withdraw transaction currently looks like this
"0 A MERKLESUB P CHECKSIGVERIFY" [0]

The new tapscript would duplicate TLUV, once per output index, to constrain
both outputs of the spending transaction and therefore make them
cancellable:

"<tluv_args> <0> TLUV
 <tluv_args> <1> TLUV
 <P> CHECKSIGVERIFY"

I think it is only a slight modification of TLUV, and it might serve other
use-cases beyond the payment pool one ?

Thoughts ?

[0] While it could be argued that TLUV should be split into two smaller
opcodes like OP_MERKLESUB or a hypothetical OP_MERKLEADD to save a few
bytes when only the subtraction or the addition feature is used, I'm not
sure it's worth the increased complexity. In the context of payment pools,
the usage of a TLUV opcode should only happen in the "pessimistic"
non-cooperative publication case...


Re: [bitcoin-dev] Conjectures on solving the high interactivity issue in payment pools and channel factories

2022-05-09 Thread Antoine Riard via bitcoin-dev
mised all bets are off - the entire channel balance could be stolen.
>
> You could do this logic inside a hardware-wallet-like device that checks
> the proposed updates and verifies the new state is favorable before
> signing. This could go a long way to hardening lightning nodes against
> potential compromise.
>
> But if we go a step further, what if we enable that logic of ensuring the
> state is more favorable with an on-chain mechanism? This was where my idea
> got a bit hand wavy, but I think it could theoretically be done. The
> receiving-key would be able to sign receiving transactions that would only
> be valid when the most recent state signed by the spending-key is also
> included in the script sig in some way. Some Script would then validate
> that the receiving-key state being published is more favorable than the
> spending-key state in that transaction's outputs. You'd have a couple
> guarantees:
>
> 1. The usual guarantee that if the presented last spending-key state is
> actually out of date, the transaction could be overridden by the newer
> state in some way (eg eltoo style or punishment).
> 2. The state being published can be no worse than the presented
> spending-key state. Yes, your channel partner could compromise your
> receiving/routing node and then publish an out of date receiving-key
> channel state that's based on the most-recent spending-key state, but it
> would limit your losses to at most the amount of money you've received
> since the last time you manually signed a channel state with your
> spending-key. Because the always-online system empowered to receive does
> *not* have the spending-key, anyone that compromises that node can't spend
> and the damage is limited.
>
> While less straight-forward than for receiving, in principle it seems like
> something similar could be done for routing (which would require presenting
> the state of multiple channels, and so has some additional complexities
> there I haven't worked out).
>
> This kind of thing might be a way of working around interactivity
> requirements of payment pools and the like. All participants still have to
> be aware of the whole state (eg of the payment pool), but this awareness
> can be delegated to a system you have limited trust in. Payment pool
> participants could delegate an always-online system empowered with a
> separate key to sign payment pool updates that user's state isn't changed
> for, allowing the payment pool to do its thing without exposing the user to
> hot-key vulnerabilities in that always-online system. Double spending is
> prevented because the user can access their always-online system to get the
> full payment pool state.
>
> So in short, while I think there may be no way to fundamentally not
> require interactivity, there are workarounds that can limit how often full
> interactivity is needed as well as ways to make it easier to provide that
> full interactivity without compromising other aspects of each participant's
> security.
>
> On Thu, Apr 28, 2022 at 8:20 AM Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi,
>>
>> This post recalls the noticeable interactivity issue encumbering payment
>> pools and channel factories in the context of a high number of
>> participants, describes how the problem can be understood, and proposes a
>> few solutions with diverse trust-minimization and efficiency assumptions.
>> It is intended to capture the theoretical bounds of the "interactivity
>> issue"; technical completeness of the solutions is left to future works.
>>
>> The post assumes a familiarity with the CoinPool paper concepts and
>> terminology [0].
>>
>> # The interactivity requirement burdening payment pools/channel factories
>>
>> Payment pools and channel factories are multi-party constructions
>> enabling the ownership of a single on-chain UTXO to be shared among many
>> off-chain/promised balances. Payment pools improve on the channel factory
>> construction's fault-tolerance by reducing the number of balance outputs
>> disclosed on-chain to a single one in case of unilateral user exits.
>>
>> However, those constructions require all the users to be online and
>> exchange rounds of signatures to update the balance distribution. Those
>> liveness/interactivity requirements increase with the number of users, as
>> there are higher odds of *one* lazy/buggy/offline user stalling the
>> pool/factory updates.
>>
>> Echoing this, while the design of LN was envisioned for a network of
>> always-online/self-hosted participants, the early deployment of LN showed
>> a resort to delegated channel hosting solutions

[bitcoin-dev] Conjectures on solving the high interactivity issue in payment pools and channel factories

2022-04-28 Thread Antoine Riard via bitcoin-dev
Hi,

This post recalls the noticeable interactivity issue encumbering payment
pools and channel factories in the context of a high number of
participants, describes how the problem can be understood, and proposes a
few solutions with diverse trust-minimization and efficiency assumptions.
It is intended to capture the theoretical bounds of the "interactivity
issue"; technical completeness of the solutions is left to future works.

The post assumes a familiarity with the CoinPool paper concepts and
terminology [0].

# The interactivity requirement burdening payment pools/channel factories

Payment pools and channel factories are multi-party constructions enabling
the ownership of a single on-chain UTXO to be shared among many
off-chain/promised balances. Payment pools improve on the channel factory
construction's fault-tolerance by reducing the number of balance outputs
disclosed on-chain to a single one in case of unilateral user exits.

However, those constructions require all the users to be online and
exchange rounds of signatures to update the balance distribution. Those
liveness/interactivity requirements increase with the number of users, as
there are higher odds of *one* lazy/buggy/offline user stalling the
pool/factory updates.

Echoing this, while the design of LN was envisioned for a network of
always-online/self-hosted participants, the early deployment of LN showed
a resort to delegated channel hosting solutions, relieving users from the
liveness requirement. While the trust trade-offs of those solutions are
significant, they answer the reality of a world made of unreliable networks
and mobile devices.

Minding that observation, the attractiveness of pools/factories might be
questioned.

# The interactivity requirement palliatives and their limits

Relatively straightforward solutions to lower the interactivity
requirement, or the costs it encumbers, can be drawn out. Pool/factory
users could own (absolute) timelocked kick-out abilities to evict offline
users who do not show up before expiration.

E.g, let's say you have Alice, Bob, Caroll and Dave as pool participants.
Each of them owns a Withdraw transaction to exit their individual balances
at any time. Each user should have received the pre-signed components from
the others guaranteeing the unilateral ability to publish the Withdraw.

A kick-out ability playable by any pool user could be provided by
generating a second set of Withdraw transactions, with the difference that
the nLocktime field is set to an absolute height T + X, where T is the
height at which the corresponding Update transaction is generated and X the
kick-out delay. For this set of kick-out transactions, the complete
witnesses should be fully shared among Alice, Bob, Caroll and Dave. That
way, if Caroll is unresponsive in moving the pool state forward after X,
any one of Alice, Bob or Dave can publish the Caroll kick-out Withdraw
transaction, and pursue operations without that unresponsive party.
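
As a sketch (assuming a hypothetical presign() helper standing in for the
multi-party signing ceremony, and simplified transaction objects):

    # Sketch: build the kick-out Withdraw set with nLocktime = T + X.
    def build_kickout_set(participants, update_height_t, kickout_delay_x,
                          presign):
        kickouts = {}
        for user in participants:
            tx = {
                "spends": "pool_output",
                "pays": f"{user}_balance_output",
                # absolute height lock: unusable before T + X
                "nLockTime": update_height_t + kickout_delay_x,
            }
            # The complete witness is shared with *all* participants, so
            # any remaining participant can broadcast the kick-out alone.
            kickouts[user] = presign(tx, signers=participants)
        return kickouts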

While decreasing the interactivity requirement to the timelock delay, this
solution constrains the kicked-out user to fall back on-chain, encumbering
the UTXO set with one more entry.

Another solution could be to assume the widespread usage of node towers
among the pool participants. Those towers would host the full logic and key
state necessary to receive an update request and produce a user's approval
of it. As long as one tower instance is online per user, the pool/factory
can move forward. Yet this forces the pool/factory users to share their key
material with potentially less-trusted entities, if they don't self-host
the tower instances.

Ideally, I think we would like a trust-minimized solution enabling
non-interactive, off-chain updates of the pool/factory, with no or minimal
consumption of blockspace.

For the remainder of this post, only the pool use-case will be mentioned.
Though, I think the observations/implications can be extended to factories
as well.

# Non-interactive Off-chain Pool Partitions

If a pool update fails because of lack of online unanimity, a partition
request could be exchanged among the online subset of users ("the
actives"). They decide to partition the pool by introducing a new layer of
transactions gathering the promised/off-chain outputs of the actives. The
set of outputs belonging to the passive users remains unchanged.

The actives spend their Withdraw transactions' `user_balance` outputs back
to a new intermediate Update transaction. This "intermediate" Update
transaction is free to re-distribute the pool balances among the active
users. To guarantee the unilateral withdrawal ability of a partitioned-up
balance, the private components of the partitioned Withdraw transactions
should be revealed among the set of active users.
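
A data-level sketch of the partition (illustrative structures only, the
real construction being a tree of pre-signed transactions):

    # Sketch: the actives fold their Withdraw `user_balance` outputs into
    # a new intermediate Update transaction; passive balances are untouched.
    def partition_pool(balances, actives):
        intermediate_update = {
            "inputs": [f"{u}_withdraw:user_balance" for u in sorted(actives)],
            "redistributable": sum(balances[u] for u in actives),
        }
        passives = {u: v for u, v in balances.items() if u not in actives}
        return intermediate_update, passives

    update, passives = partition_pool(
        {"alice": 3, "bob": 2, "caroll": 4, "dave": 1},
        actives={"alice", "caroll"})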

E.g, let's say you have Alice, Bob, Caroll and Dave as pool participants.
Pool is at state N, Bob and Dave are offline. Alice and Caroll agree to
partition the pool, each of them owns a Withdraw transaction

Re: [bitcoin-dev] Automatically reverting ("transitory") soft forks, e.g. for CTV

2022-04-22 Thread Antoine Riard via bitcoin-dev
Hi Dave,

I think the transitory idea is interesting, though I would say it would
take far more thinking to capture the implications.

> 1. It creates a big footgun.  Anyone who uses CTV without adequately
preparing for the reversion could easily lose their money.

I think that downside should be weighed far more. If we imagine CTV being
used in the context of a given off-chain contract, it's not guaranteed you
can downgrade to equivalent semantics around the reversion date, or not at
the same witness cost, which raises implications for your cached
fee-bumping reserves.

Further, this downgrade path might have to be re-signed by your off-chain
contract counterparties, to migrate a balance distribution locked by CTV to
one relying on pre-signed transactions. This contract "consensus" is not
guaranteed, and it could even be leveraged by some unfair counterparties
who have only small balances at stake.

If you can't gracefully downgrade to equivalent semantics or negotiate a
migration, the more likely safe behavior to adopt would be to close the
off-chain contract ahead of the reversion date.
As it might be a critical operation, toolchain vendors might adopt the
practice of coordinating the automatic closing with a flag day (e.g "close
your LN channel at block XXX") or in a relative, distributed fashion (e.g
"close your LN channel at a randomly picked block between X and Y"). Such
relatively automatic closures, if realized en masse, would provoke mempool
congestion : an adversarial event which would undermine the security of all
existing off-chain contracts.
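
As a sketch of the second flavor (window and safety margin purely
hypothetical):

    import random

    # Spread automatic channel closes across a window of blocks before the
    # reversion height, so they don't all land in the same mempool backlog.
    def schedule_close_height(reversion_height, window=2016, margin=144):
        return random.randint(reversion_height - window,
                              reversion_height - margin)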

Therefore I'm not sure if a reversion date for a contracting primitive
softfork is the soundest off-chain contract engineering practice...

Further, I think there is one more downside not considered in your list :
negative incentives for the CTV ecosystem stakeholders. As a CTV-enabled
protocol developer, since time is counted to prove the worthiness of the
primitive, you have an incentive to design a protocol and develop/deploy a
toolchain on a short-term basis, likely not the soundest principle in
systems software engineering. Such a development attitude is more likely to
grieve the ecosystem with safety-critical bugs/vulnerabilities, whose
exploitation might eradicate the credibility of your CTV use-case, and with
it the wider CTV ecosystem.

So I think the data-collection method itself to advance the
consensus-building process isn't neutral on the outcome yielded. The
consensus-building stakeholders themselves aren't immune to the incentives
disruptions brought by an innovation in the process.

Antoine

On Wed, Apr 20, 2022 at 21:06, David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> The main criticisms I'm aware of against CTV seem to be along the
> following lines:
>
> 1. Usage, either:
>a. It won't receive significant real-world usage, or
>b. It will be used but we'll end up using something better later
> 2. An unused CTV will need to be supported forever, creating extra
> maintenance
> burden, increasing security surface, and making it harder to evaluate
> later
> consensus change proposals due to their interactions with CTV
>
> Could those concerns be mitigated by making CTV an automatically
> reverting
> consensus change with an option to renew?  E.g., redefining OP_NOP4 as
> OP_CTV
> for five years from BIP119's activation date and then reverting to
> OP_NOP4.
> If, prior to the end of those five years, a second soft fork was
> activated, it
> could continue enforcing the CTV rules either for another five years or
> permanently.
>
> This would be similar in nature to the soft fork described in BIP50
> where the
> maximum block size was temporarily reduced to address the BDB locks
> issue and
> then allowed to return to its original value.  In Script terms, any use
> of
> OP_CTV would effectively be:
>
>  OP_IF
> OP_CTV
>  OP_ELSE
><5 years after activation> OP_CLTV
>  OP_ENDIF
>
> As long as we are absolutely convinced CTV will have no negative effects
> on the
> holders or receivers of non-CTV coins, I think an automatically
> reverting soft
> fork gives us some ability to experiment with new features without
> committing
> ourselves to live with them forever.
>
> The main downsides I can see are:
>
> 1. It creates a big footgun.  Anyone who uses CTV without adequately
> preparing for
> the reversion could easily lose their money.
>
> 2. Miners would be incentivized to censor spends of the reverting
> opcode near its reversion date.  E.g., if Alice receives 100 bitcoins
> to a
> script secured only by OP_CTV and attempts to spend them the day
> before it
> becomes OP_NOP4, miners might prefer to skip confirming that
> transaction even
> if it pays a high feerate in favor of spending her 100 bitcoins to
> themselves
> the next day after reversion.
>
> The degree to which this is an issue will depend on the diversity 

Re: [bitcoin-dev] Improving RBF Policy

2022-03-17 Thread Antoine Riard via bitcoin-dev
Hi Mempoololic Anonymous fellow,

> 2. Staggered broadcast of replacement transactions: within some time
> interval, maybe accept multiple replacements for the same prevout, but
only
> relay the original transaction.

If the goal of replacement staggering is to save on bandwidth, I'm not sure
it's going to be effective if you consider replacements done from a
shared utxo. E.g, Alice broadcasts a package to confirm her commitment, and
relay is staggered until T. At the same time, Bob broadcasts a package to
confirm his version of the commitment at a slightly better feerate, and
relay is staggered until T.

At T, package A gradually floods from Alice's peers and package B does the
same from Bob's peers. Where they intersect, B overrides A and starts to
replace package A in the network mempools nearest to Alice. I think those
peers won't get any bandwidth savings from adopting a replacement
staggering strategy.

Or maybe you have something completely different in mind ? I think the
staggering deserves more detail to judge whether it's robust against all
the replacement propagation patterns.

Though if we aim to save on replacement bandwidth, I wonder if a
"diff-only" strategy, assuming some new p2p mechanism, would be more
interesting (as discussed in the recent "Thoughts on fee bumping" thread).

> A lingering concern that I have about this idea is it would then be
> possible to impact the propagation of another person’s transaction, i.e.,
> an attacker can censor somebody’s transaction from ever being announced by
> a node if they send enough transactions to fill up the rate limit.
> Obviously this would be expensive since they're spending a lot on fees,
but
> I imagine it could be profitable in some situations to spend a few
thousand
> dollars to prevent anyone from hearing about a transaction for a few
hours.
> This might be a non-issue in practice if the rate limit is generous and
> traffic isn’t horrendous, but is this a problem?

I think I share the concern about an attacker exhausting a node's
transaction-relay resources to prevent another person's transaction from
propagating, especially if the targeted transaction is a time-sensitive L2
one. In that latter context, an attacker would aim to delay the relay of a
time-sensitive transaction (e.g a HTLC-success) to the miners until the
timelock expires. The malicious delay period should swallow the go-to-chain
HTLC deadline ("the deadline for received HTLCs this node fulfilled" in
BOLT 2 parlance), in the current example 18 blocks.

Let's say we allocate 10 MB of bandwidth per block period. Once the 10 MB
are exhausted, there is no more bandwidth allocated until the next block is
issued. If the top mempool feerate is 1 sat/vb, such a naive design would
allow an attacker to buy the whole p2p network bandwidth period for 0.1
BTC. If an attacker aims to jam a HTLC transaction for the 18-block period,
the cost is 1.8 BTC. If the attacker is a LN counterparty to HTLCs worth
more than 1.8 BTC, the attack sounds economically profitable.
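
Working out those numbers (a back-of-the-envelope check):

    # 10 MB of relay bandwidth per block period, bought at 1 sat/vb.
    bandwidth_vb = 10_000_000
    feerate = 1  # sat/vb
    per_block_sats = bandwidth_vb * feerate      # 10,000,000 sats = 0.1 BTC
    jam_cost_btc = per_block_sats * 18 / 100_000_000
    assert jam_cost_btc == 1.8  # profitable if jammed HTLCs exceed 1.8 BTC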

Worse, the p2p network bandwidth is a public resource while a HTLC is a
private, off-chain contract. An attacker could be counterparty to many
HTLCs, where each individual HTLC value is far inferior to the global p2p
bandwidth cost but their sum, known only to the attacker, is superior to
it. Therefore, it sounds to me that buying the p2p network bandwidth might
be attractive if the stealings are batched.

Is the attacker scenario described credible ? Are the numbers sketched out
realistic ?

If yes, I think one design insight for eventual transaction-relay rate
limits would be to make them "dynamic", and not naively fixed for a period.
By making them dynamic, an attacker would have to compete with the
effective feerate proposed by the victim's transaction. E.g, if the
HTLC-success feerate is 10 sat/vb, an attacker would have to propose a
stream of malicious transactions above 10 sat/vb during the whole HTLC
deadline period for the transaction-relay jamming to be effective.
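
A sketch of such a dynamic limit (simplified admission logic, not a
concrete Bitcoin Core policy):

    # Sketch: once the per-block relay budget is exhausted, only relay
    # transactions outbidding the highest feerate seen in the period,
    # forcing a jamming attacker to outbid the victim at every block.
    class DynamicRelayLimiter:
        def __init__(self, budget_vb):
            self.budget = budget_vb
            self.floor = 0.0  # highest feerate seen this period

        def admit(self, tx_vsize, tx_feerate):
            if self.budget <= 0 and tx_feerate <= self.floor:
                return False
            self.budget -= tx_vsize
            self.floor = max(self.floor, tx_feerate)
            return True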

Further, the attack might be invisible from the victim's standpoint : the
malicious flow of feerate-competitive transactions can be hard to
dissociate from an honest one. Thus, you can expect the HTLC transaction
issuer to only slowly increase the feerate at each block, and those moves
to be anticipated by the attacker. Even if the transaction issuer adopts a
scorched-earth approach for the latest blocks of the deadline, the absolute
value of the HTLC burnt in fees might still be less than the bandwidth
exhaustion paid by the attacker, because the attack is batched by the
attacker.

I'm not sure if this reasoning is correct. Though if yes, the issue sounds
really similar to the "flood" attack affecting LN previously researched
[0]. What worries me more with this "exhaust" is that if we introduce
bounded transaction-relay rate limits, it sounds like a cheaper public
resource to buy than the mempool..

[0] https://arxiv.org/pdf/2006.08513.pdf

Anyway, 

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-10 Thread Antoine Riard via bitcoin-dev
Hi James,

> I don't really see the vaults case as any different from other
> sufficiently involved uses of bitcoin script - I don't remember anyone
> raising these concerns for lightning scripts or DLCs or tapscript use,
> any of which could be catastrophic if wallet implementations are not
> tested properly.

I think on the lightning side there were enough concerns w.r.t bugs
affecting the toolchains in their infancy phase to motivate developers to
bound the max channel value to 2^24 satoshis for a while [0]

[0] https://github.com/lightning/bolts/pull/590

> By comparison, decreasing amount per vault step and one CSV use
> seems pretty simple. It's certainly easy to test (as the repo shows),
> and really the only parameter the user has is how many blocks to delay
> to the `tohot_tx` and perhaps fee-rate. Not too hard to test
> comprehensively as far as I can tell.

As of today you won't be able to test against Bitcoin Core that a CSV'ed
transaction is valid for propagation across the network, because your
mempool is going to reject it as non-final [1]

[1] https://github.com/bitcoin/bitcoin/pull/21413

Verifying that your whole set of off-chain covenanted transactions
propagates well at different feerate levels, and that there is no surface
offered to a malicious vault co-owner to pin them, can quickly turn into a
real challenge, I believe.

> I think the main concern I have with any hashchain-based vault design
> is the immutability of the flow paths once the funds are locked to the
> root vault UTXO.

> Isn't this kind of inherent to the idea of covenants? You're
> precommitting to a spend path. You can put in as many "escape-hatch"
> conditions as you want (e.g. Jeremy makes the good point I should
> include an immediate-to-cold step that is sibling to the unvaulting),
> but fundamentally if you're doing covenants, you're precommitting to a
> flow of funds. Otherwise what's the point?

Yeah, I agree here, that's the idea of covenants, to commit to a flow of
funds. However, I think leveraging hashchain covenants in vault design
comes at the price of making transaction generation errors or key endpoint
compromises hardly revocable.

I would say you can achieve the same end goal of precommitting to a flow of
funds with "pre-signed" transactions (and actually that's what we do for
lightning), while still keeping the emergency upgrade option open. Of
course, you re-introduce more assumptions on the devices where the upgrade
keys are lying.

I believe both designs are viable; it's more a matter of explaining the
security and reliability trade-offs to vault users. They might even be
complementary, answering different classes of self-custody needs. As
protocol devs, I just want us to have a good enough understanding of those
trade-offs to convey them well to vault users and have them make a
well-informed decision.

> Who's saying to trust hardware? Your cold key in the vault structure
> could have been generated by performing SHA rounds with the
> pebbles in your neighbor's zen garden.
>
> Keeping an actively used multi-sig setup secure certainly isn't free or
> easy. Multi-sig ceremonies (which of course can be used in this scheme)
> can be cumbersome to coordinate.
>
> If there's a known scheme that doesn't require covenants, but has
> similar usage and security characteristics, I'd love
> to know it! But being able to lock coins up for an arbitrary amount of
> time and then have advance notice of an attempted spend only seems
> possible with some kind of covenant technique.

Well, if by covenants you include pre-signed transaction vault designs,
then no, sadly I don't know of schemes offering the same usage and security
characteristics...

> That said, I think this security advantage is only relevant in the
> context of recursive design, where the partial unvault sends back the
> remaining funds to vault UTXO (not the design proposed here).

> I'm not really sure why this would be. Yeah, it would be cool to be able
> to partially unvault arbitrary amounts or something, but that seems like
> another order of complexity. Personally, I'd be happy to "tranche up"
> funds I'd like to store into a collection of single-hop vaults vs.
> the techniques available to us today.

Hmmm, if you would like to be able to partially unvault arbitrary amounts,
while still precommitting to the flow of funds, you might need a sighash
flag extension like SIGHASH_ANYAMOUNT ? (my 2 sats, I don't have a design)

Yes, "tranche up" funds where the remainder is sent back to a vault UTXO
sounds to me belonging to the recursive class of design, and yeah I agree
that might be one of the most interesting features of vaults.

> Pretty straightforward to send such a process (whether it's a program or
> a collection of humans) an authenticated signal that says "hey, expect a
> withdrawal." This kind of alert allows for cross-referencing the
> activity and seems a lot better than nothing!

Yep, a nice improvement. And now you enter into a new 

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-10 Thread Antoine Riard via bitcoin-dev
Hi Zeeman,

> Have not looked at the actual vault design, but I observe that Taproot
allows for a master key (which can be an n-of-n, or a k-of-n with setup
(either expensive or trusted, but I repeat myself)) to back out of any
contract.
>
> This master key could be an "even colder" key that you bury in the desert
to be guarded over by generations of Fremen riding giant sandworms until
the Bitcoin Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto,
arrives.

Yes, I agree you can always bless your hashchain-based off-chain contract
with an upgrade path thanks to Taproot. Though now this master key becomes
the point of failure to compromise, compared to the hashchain.

I think you can even go fancier than a human desert to hide a master key,
with "vaults" geostationary satellites [0] !

[0] https://github.com/oleganza/bitcoin-papers/blob/master/SatelliteVault.md

> Thought: It would be nice if Alice could use Lightning watchtowers as
well, that would help increase the anonymity set of both LN watchtower
users and vault users.

Well, I'm not sure the anonymity set really holds from the watchtowers'
viewpoint. A LN channel is likely to have a high frequency of updates (in
both the LN-penalty and Eltoo designs, I think), while a vault is likely to
have a low frequency of updates (e.g a once-a-day spending).

I think that point is addressable by generating noise traffic from the
vault entity to mimic a classic LN channel pattern. However, as a
"high-stake" vault user, you might not be eager to leak your watchtower IP
address or even Tor onion service to "low-stake" swarms of LN channel
users. So it might end up in different tower deployments, because off-chain
contracts' levels of safety requirements are not the same, I don't know..

> With Taproot trees the versions of the cold transaction are also stored
off-chain, and each tower gets its own transaction revealing only one of
the tapleaf branches.
> It does have the disadvantage that you have O(log N) x 32 Merkle tree
path references, whereas a presigned Taproot transaction just needs a
single 64-byte signature for possibly millions of towers.

I agree here, though note vault users might be interested in paying the
witness fee premium just to get the tower accountability feature.

Antoine

On Mon, Mar 7, 2022 at 19:57, ZmnSCPxj wrote:

> Good morning Antoine,
>
> > Hi James,
> >
> > Interesting to see a sketch of a CTV-based vault design !
> >
> > I think the main concern I have with any hashchain-based vault design is
> the immutability of the flow paths once the funds are locked to the root
> vault UTXO. By immutability, I mean there is no way to modify the
> unvault_tx/tocold_tx transactions and therefore recover from transaction
> fields
> > corruption (e.g a unvault_tx output amount superior to the root vault
> UTXO amount) or key endpoints compromise (e.g the cold storage key being
> stolen).
> >
> > Especially corruption, in the early phase of vault toolchain deployment,
> I believe it's reasonable to expect bugs to slip in affecting the output
> amount or relative-timelock setting correctness (wrong user config,
> miscomputation from automated vault management, ...) and thus definitively
> freezing the funds. Given the amounts at stake for which vaults are
> designed, errors are likely to be far more costly than the ones we see in
> the deployment of payment channels.
> >
> > It might be more conservative to leverage a presigned transaction data
> design where every decision point is a multisig. I think this design gets
> you the benefit to correct or adapt if all the multisig participants agree
> on. It should also achieve the same than a key-deletion design, as long as
> all
> > the vault's stakeholders are participating in the multisig, they can
> assert that flow paths are matching their spending policy.
>
> Have not looked at the actual vault design, but I observe that Taproot
> allows for a master key (which can be an n-of-n, or a k-of-n with setup
> (either expensive or trusted, but I repeat myself)) to back out of any
> contract.
>
> This master key could be an "even colder" key that you bury in the desert
> to be guarded over by generations of Fremen riding giant sandworms until
> the Bitcoin Path prophesied by the Kwisatz Haderach, Satoshi Nakamoto,
> arrives.
>
> > Of course, relying on presigned transactions comes with higher
> assumptions on the hardware hosting the flow keys. Though as
> hashchain-based vault designs imply "secure" key endpoints, as a vault
> user you're still encumbered with the issues of key management; it doesn't
> relieve you from finding trusted hardware. If you
> want to avoid multiplying devices to trust, I believe flow keys can be
> stored on the same keys guarding the UTXOs, before sending to vault custody.
> >
> > I think the remaining presence of trusted hardware in the vault design
> might lead one to ask what's the security advantage of vaults compared to
> classic multisig setup. IMO, it's introducing the idea of privileges in the
> coins 

Re: [bitcoin-dev] Annex Purpose Discussion: OP_ANNEX, Turing Completeness, and other considerations

2022-03-07 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

> I've seen some discussion of what the Annex can be used for in Bitcoin.
For
> example, some people have discussed using the annex as a data field for
> something like CHECKSIGFROMSTACK type stuff (additional authenticated
data)
> or for something like delegation (the delegation is to the annex). I think
> before devs get too excited, we should have an open discussion about what
> this is actually for, and figure out if there are any constraints to using
> it however we may please.

I think one interesting purpose of the annex is to serve as a transaction
field extension, where we assign new consensus validity rules to the annex
payloads.

One could think about new types of locks, e.g where a transaction's
inclusion is constrained until the chain's accumulated `ChainWork` is
superior to the annex payload value. This could be useful in case of
contentious forks, where you want your transaction to confirm only when
enough work is accumulated, and height isn't a reliable indicator anymore.

Or a relative timelock where the endpoint is the presence of a state number
encumbering the spent transaction. This could be useful in the context of
payment pools, where the user withdraw transactions are all encumbered by a
bip68 relative timelock, as you don't know which one is going to confirm
first, but where you don't care about enforcement of the timelocks once the
contestation delay has played out once and no higher-state update
transaction has confirmed.

Of course, we could reuse the nSequence field for some of those new types
of locks, though we would lose the flexibility of combining multiple locks
encumbering the same input.
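
A sketch of the chainwork lock validation rule above (hypothetical
pseudocode; the annex parsing and work encoding are assumptions):

    # Sketch: a work-lock carried in the annex makes the transaction
    # invalid for inclusion until the chain accumulates the target work.
    def check_work_lock(annex_payload: bytes, chain_work: int) -> bool:
        work_target = int.from_bytes(annex_payload, "big")  # assumed encoding
        return chain_work >= work_target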

Another use for the annex is to locate there the SIGHASH_GROUP group count
value. One advantage over placing the value as a script stack item could be
to have annex payload interdependency validity, where other annex payloads
reuse the group count value as part of their own semantics.

Antoine

On Fri, Mar 4, 2022 at 18:22, Jeremy Rubin via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I've seen some discussion of what the Annex can be used for in Bitcoin.
> For example, some people have discussed using the annex as a data field for
> something like CHECKSIGFROMSTACK type stuff (additional authenticated data)
> or for something like delegation (the delegation is to the annex). I think
> before devs get too excited, we should have an open discussion about what
> this is actually for, and figure out if there are any constraints to using
> it however we may please.
>
> The BIP is tight lipped about its purpose, saying mostly only:
>
> *What is the purpose of the annex? The annex is a reserved space for
> future extensions, such as indicating the validation costs of
> computationally expensive new opcodes in a way that is recognizable without
> knowing the scriptPubKey of the output being spent. Until the meaning of
> this field is defined by another softfork, users SHOULD NOT include annex
> in transactions, or it may lead to PERMANENT FUND LOSS.*
>
> *The annex (or the lack of thereof) is always covered by the signature and
> contributes to transaction weight, but is otherwise ignored during taproot
> validation.*
>
> *Execute the script, according to the applicable script rules[11], using
> the witness stack elements excluding the script s, the control block c, and
> the annex a if present, as initial stack.*
>
> Essentially, I read this as saying: The annex is the ability to pad a
> transaction with an additional string of 0's that contribute to the virtual
> weight of a transaction, but has no validation cost itself. Therefore,
> somehow, if you needed to validate more signatures than 1 per 50 virtual
> weight units, you could add padding to buy extra gas. Or, we might somehow
> make the witness a small language (e.g., run length encoded zeros) such
> that we can very quickly compute an equivalent number of zeros to 'charge'
> without actually consuming the space but still consuming a linearizable
> resource... or something like that. We might also e.g. want to use the
> annex to reserve something else, like the amount of memory. In general, we
> are using the annex to express a resource constraint efficiently. This
> might be useful for e.g. simplicity one day.
>
> Generating an Annex: One should write a tracing executor for a script, run
> it, measure the resource costs, and then generate an annex that captures
> any externalized costs.
>
> ---
>
> Introducing OP_ANNEX: Suppose there were some sort of annex pushing
> opcode, OP_ANNEX which puts the annex on the stack as well as a 0 or 1 (to
> differentiate annex is 0 from no annex, e.g. 0 1 means annex was 0 and 0 0
> means no annex). This would be equivalent to something based on <annex
> flag> OP_TXHASH <annex> OP_TXHASH.
>
> Now suppose that I have a computation that I am running in a script as
> follows:
>
> OP_ANNEX
> OP_IF
> `some operation that requires annex to be <1>`
> OP_ELSE
> OP_SIZE
> 

Re: [bitcoin-dev] CTV vaults in the wild

2022-03-07 Thread Antoine Riard via bitcoin-dev
Hi James,

Interesting to see a sketch of a CTV-based vault design !

I think the main concern I have with any hashchain-based vault design is
the immutability of the flow paths once the funds are locked to the root
vault UTXO. By immutability, I mean there is no way to modify the
unvault_tx/tocold_tx transactions and therefore recover from transaction
field corruption (e.g an unvault_tx output amount superior to the root
vault UTXO amount) or key endpoint compromises (e.g the cold storage key
being stolen).

Especially for corruption: in the early phase of vault toolchain
deployment, I believe it's reasonable to expect bugs to slip in affecting
the output amount or relative-timelock setting correctness (wrong user
config, miscomputation from automated vault management, ...) and thus
definitively freezing the funds. Given the amounts at stake for which
vaults are designed, errors are likely to be far more costly than the ones
we see in the deployment of payment channels.

It might be more conservative to leverage a presigned transaction data
design where every decision point is a multisig. I think this design gets
you the benefit of correcting or adapting if all the multisig participants
agree on it. It should also achieve the same as a key-deletion design: as
long as all the vault's stakeholders are participating in the multisig,
they can assert that flow paths match their spending policy.

Of course, relying on presigned transactions comes with higher assumptions
on the hardware hosting the flow keys. Though as hashchain-based vault
designs imply "secure" key endpoints, as a vault user you're still
encumbered with the issues of key management; it doesn't relieve you from
finding trusted hardware. If you want to avoid multiplying devices to
trust, I believe flow keys can be stored on the same devices guarding the
UTXO keys, before sending to vault custody.

I think the remaining presence of trusted hardware in the vault design
might lead one to ask what's the security advantage of vaults compared to a
classic multisig setup. IMO, it's introducing the idea of privileges in the
coins' custody : you set up the flow paths once and for all at setup, with
the highest level of privilege, and then anytime you do a partial unvault
you don't need the same level of privilege. Partial unvault authorizations
can come with a reduced set of verifications, at lower operational costs.
That said, I think this security advantage is only relevant in the context
of a recursive design, where the partial unvault sends back the remaining
funds to a vault UTXO (not the design proposed here).

A few other thoughts on vault design, more minor points.

"If Alice is watching the mempool/chain, she will see that the unvault
transaction has been unexpectedly broadcast,"

I think you might need to introduce an intermediary, out-of-chain protocol
step where the unvault broadcast is formally authorized by the vault
stakeholders. Otherwise it's hard to qualify "unexpected", as a hot key
compromise might not be efficiently detected.

"With  OP_CTV, we can ensure that the vault operation is enforced by
consensus itself, and the vault transaction data can be generated
deterministically without additional storage needs."

Don't you also need the endpoint scriptPubkeys, the amounts and the CSV
value ? Though I think you can grind amounts and CSV value in case of
loss... But I'm not sure you remove the critical data persistence
requirement, you just reduce the surface.
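
E.g, a sketch of grinding a lost CSV value (assuming a ctv_hash() helper
implementing the BIP119 template hash):

    # Sketch: recover a forgotten relative-timelock by brute force,
    # re-deriving the CTV template hash until it matches the committed one.
    def grind_csv(known_fields, committed_hash, ctv_hash, max_csv=65535):
        for csv in range(1, max_csv + 1):
            candidate = dict(known_fields, csv_delay=csv)
            if ctv_hash(candidate) == committed_hash:
                return csv
        return None  # beyond the grindable surface, the loss is definitive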

"Because we want to be able to respond immediately, and not have to dig out
our cold private keys, we use an additional OP_CTV to encumber the "swept"
coins for spending by only the cold wallet key."

I think a robust vault deployment would imply the presence of a set of
watchtowers, redundant entities able to broadcast the cold transaction in
reaction to an unexpected unvault. One feature which could be interesting
is "tower accountability", i.e knowing which tower initiated the broadcast,
especially if it's a faulty one. One way is to watermark the cold
transaction (e.g tweak nLocktime to a past value). Though I believe with
CTV you would need as many different hashes as towers included in your
unvault output (they can be wrapped in a Taproot tree ofc). With presigned
transactions, tagged versions of the cold transaction are stored off-chain.
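
A sketch of that watermarking (hypothetical helpers; with CTV, each
watermarked variant yields a distinct template hash to commit in the
unvault output):

    # Sketch: tag each tower's copy of the cold transaction with a unique
    # past-valued nLocktime, so a broadcast identifies the tower.
    def watermark_cold_txs(cold_tx_template, towers, base_height=500_000):
        tagged = {}
        for i, tower in enumerate(towers):
            tx = dict(cold_tx_template, nLockTime=base_height + i)
            tagged[tower] = tx  # with CTV: one template hash per tower
        return tagged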

"In this implementation, we make use of anchor outputs in order to allow
mummified unvault transactions to have their feerate adjusted dynamically."

I'm not sure the usage of an anchor output is safe for any vault deployment
where the funds' stakeholders do not trust each other or where the
watchtowers are not trusted. If a distrusted party can spend the anchor
output, it's easy to block the RBF with a pinning.

Can we think about other criteria on which to sort vault designs ?

(I would say space efficiency is of secondary concern, as we can expect
vault users, as a class of on-chain space demand, to be in the higher ranks
of blockspace "buying power". Though it's always 

Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-21 Thread Antoine Riard via bitcoin-dev
Hi Zeeman,

> To reveal a single participant in a TLUV-based CoinPool, you need to
reveal O(log N) hashes.
> It is the O(log N) space consumption I want to avoid with `OP_EVICT`, and
I believe the reason for that O(log N) revelation is due precisely to the
arbitrary but necessary ordering.

AFAIU the TLUV proposal, it removes the constraint on the *outputs
publication ordering*, once they have all been generated. The tree update
mechanism ensures that, whatever the order of publication :
- the spend path can't be replayed, because the user's leaf is removed
- the key path can be re-used by the remaining participants, because the
withdrawing user's point is removed

However, I agree that TLUV enforces a constraint on the *spend paths
ordering*, for the reason you raise.

I think `OP_EVICT` also removes the constraint on the *outputs publication
ordering*. AFAIU the opcode semantics, you can mark any subset of the
outputs as indicated. Further, it also solves the *spend paths ordering*,
as you don't need to reveal O(log N) hashes anymore.

However, I don't think it solves the *outputs publication ordering* issue
with the same non-cooperative property as TLUV. TLUV doesn't assume
cooperation among the construction participants once the Taproot tree is
set up. EVICT assumes cooperation among the remaining construction
participants to satisfy the final CHECKSIG.

So that would be a feature difference between TLUV and EVICT, I think ?

> I thought it was part of Taproot?

I checked BIP342 again, *as far as I can read* (unreliable process), it
sounds like it was proposed by BIP118 only.

> No, I considered onchain fees as the only mechanism to avoid eviction
abuse.

I'm unsure about the game-theory robustness of such abuse-deterrent
mechanisms... As the pool's off-chain payments are cheaper, you might break
your counterparty's economic predictions by forcing them to go on-chain
before fee spikes, thus increasing their liquidity operational costs. Or
evict them at a time when fees are lower than what they paid to get in.

> A single participant withdrawing their funds unilaterally can do so by
evicting everyone else (and paying for those evictions, as sort of a
"nuisance fee").

I see. I'm more interested in the property of a single participant
withdrawing their funds without affecting the stability of the off-chain
pool and without cooperation with the other users. This is currently a
restriction of channel factories' fault-tolerance : if one channel goes
on-chain, all the outputs are published.

Antoine

On Fri, Feb 18, 2022 at 18:39, ZmnSCPxj wrote:

> Good morning ariard,
>
>
> > > A statechain is really just a CoinPool hosted inside a
> > >  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.
> >
> > Note, to the best of my knowledge, how to use LN-Penalty in the context
> of multi-party construction is still an unsolved issue. If an invalidated
> state is published on-chain, how do you guarantee that the punished output
> value is distributed "fairly" among the "honest" set of users ? At least
> > where fairness is defined as a reasonable proportion of the balances
> they owned in the latest state.
>
> LN-Penalty I believe is what I call Poon-Dryja?
>
> Both Decker-Wattenhofer (has no common colloquial name) and
> Decker-Russell-Osuntokun ("eltoo") are safe with N > 2.
> The former has bad locktime tradeoffs in the unilateral close case, and
> the latter requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`.
>
>
> > > In principle, a set of promised outputs, if the owners of those
> > > outputs are peers, does not have *any* inherent order.
> > > Thus, I started to think about a commitment scheme that does not
> > > impose any ordering during commitment.
> >
> > I think we should dissociate a) *outputs publication ordering* from the
> b) *spends paths ordering* itself. Even if to each spend path a output
> publication is attached, the ordering constraint might not present the same
> complexity.
> >
> > Under this distinction, are you sure that TLUV imposes an ordering on
> the output publication ?
>
> Yes, because TLUV is based on tapleaf revelation.
> Each participant gets its own unique tapleaf that lets that participant
> get evicted.
>
> In Taproot, the recommendation is to sort the hashes of each tapleaf
> before arranging them into a MAST that the Taproot address then commits to.
> This sort-by-hash *is* the arbitrary ordering I refer to when I say that
> TLUV imposes an arbitrary ordering.
> (actually the only requirement is that pairs of scripts are
> sorted-by-hash, but it is just easier to sort the whole array by hash.)
>
> To reveal a single participant in a TLUV-based CoinPool, you need to
> reveal O(log N) hashes.
> It is the O(log N) space consumption I want to avoid with `OP_EVICT`, and
> I believe the reason for that O(log N) revelation is due precisely to the
> arbitrary but necessary ordering.
>
> > > With `OP_TLUV`, however, it is possible to create an "N-of-N With
> > > Eviction" construction.
> > 

[bitcoin-dev] A Dive into CoinPool : Bitcoin Balances for Billions

2022-02-21 Thread Antoine Riard via bitcoin-dev
Hi,

We (Gleb + me) would like to present the continuation of our research on
payment pools [0].

Abstract:

CoinPool is a new multi-party construction to improve Bitcoin onboarding
and transactional scaling by orders of magnitude.
CoinPool allows many users to share a UTXO and make instant off-chain
transfers inside the UTXO while allowing withdrawals at any time without
permission from other users.
In-pool accounts can be used for advanced protocols (e.g., payment
channels). Connecting them to other CoinPool instances, or even to the
Lightning Network, makes in-pool funds highly liquid.
CoinPool construction relies on SIGHASH_GROUP, SIGHASH_ANYPREVOUT and
OP_MERKLESUB changes to Bitcoin. It also assumes a high degree of
interactivity between pool participants.

CoinPool provides an interesting alternative to the LN: it allows locking
more people in a single UTXO and potentially lets them stay in the same
UTXO for longer. In the end, this expands Bitcoin payment throughput, via
better use of the block space.
CoinPool accounts can be also plugged into the LN, making them
complementary and benefiting from each other.

CoinPool explores what can be achieved with covenants, lately explored by a
few of us. It is an exploration "in depth" : what kind of protocol can be
achieved with a Merkle tree subtraction check.
We hope this work can inform thinking on future softforks.

We think the 7.9 billion people of the world could be distributed across
1000-sized CoinPool instances. Assuming perfect cooperation among the
participants, a liquidity exhaustion rate of 6 months and a refulfillment
footprint of 100 inputs (at 106 bytes each), 167 GB of blockchain space
would be required per year to enable everyone in the world to transact on
Bitcoin in a non-custodial fashion, albeit one order of magnitude beyond
the current block size. By fine-tuning the pools' off-chain sustainability
parameters further, it is realistic to think of satisfying current
full-node validation requirements, thus preserving the decentralization of
the network.
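
The arithmetic behind the 167 GB figure:

    world_population = 7_900_000_000
    pool_size = 1_000
    pools = world_population // pool_size        # 7,900,000 pools
    refills_per_year = 2                         # exhaustion every 6 months
    refill_bytes = 100 * 106                     # 100 inputs of 106 bytes
    bytes_per_year = pools * refills_per_year * refill_bytes
    print(bytes_per_year / 1e9)                  # ~167.5 GB per year
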
We're eager to hear everyone's feedback: what we missed, what can be
improved. We hope the ideas presented sound interesting to the community.
If so, we acknowledge it will likely take a decade of patient engineering
before we see mature payment pools in the wild.

The paper is available here :
https://coinpool.dev/v0.1.pdf [1]

The OP_MERKLESUB and SIGHASH_GROUP BIPs are available here:
https://github.com/ariard/bips/blob/coinpool-bips/bip-group.mediawiki
https://github.com/ariard/bips/blob/coinpool-bips/bip-merklesub.mediawiki

The code for the pool withdraw tree is available here:
https://github.com/ariard/bitcoin/tree/2022-02-coinpool-withdraw

The transaction/scripts formats for the CoinPool transaction are available
here:
https://gist.github.com/ariard/713ce396281163337c175d9122163e8f

Sincerely,
Gleb & Antoine

PS: Thanks to the reviewers.

[0]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017964.html
[1] Always have a backup plan in Bitcoin :
https://github.com/coinpool-dev/paper/blob/master/coinpool-v0.1.pdf


Re: [bitcoin-dev] `OP_EVICT`: An Alternative to `OP_TAPLEAFUPDATEVERIFY`

2022-02-18 Thread Antoine Riard via bitcoin-dev
Hi Zeeman,

> After some thinking, I realized that it was the use of the
> Merkle tree to represent the promised-but-offchain outputs of
> the CoinPool that lead to the O(log N) space usage.
> I then started thinking of alternative representations of
> sets of promised outputs, which would not require O(log N)
> revelations by avoiding the tree structure.

In the context of payment pools, I think the O(log N) revelations can be
avoided already today by pre-signing all the combinations of publication
orders of the promised-but-offchain outputs. However, this approach
presents a factorial complexity and appears to be an intractable problem
for a high number of pool users.
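
To give an idea of the blow-up:

    from math import factorial

    # Pre-signing every publication order of N promised outputs requires
    # on the order of N! pre-signed chains; intractable beyond small pools.
    for n in (4, 10, 20):
        print(n, factorial(n))  # 24, 3628800, 2432902008176640000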

I think this factorial complexity issue is the primary problem for enabling
scalable payment pools. It appears to be solvable by introducing an
accumulator at the script interpreter level. IMO, the efficiency of the
accumulated set representation comes as a second-order issue.

In the comparison of different covenant primitives, I believe we should
first ask if the flexibility offered is enough to solve the factorial
complexity. I would say performance trade-off analysis can only be
conducted between logically equivalent primitives.

> A statechain is really just a CoinPool hosted inside a
>  Decker-Wattenhofer or Decker-Russell-Osuntokun construction.

Note, to the best of my knowledge, how to use LN-Penalty in the context of
multi-party construction is still an unsolved issue. If an invalidated
state is published on-chain, how do you guarantee that the punished output
value is distributed "fairly" among the "honest" set of users ? At least
where fairness is defined as a reasonable proportion of the balances they
owned in the latest state.

> (To Bitcoin Cashers: this is not an IOU, this is *committed* and
> can be enforced onchain, that is enough to threaten your offchain
> counterparties into behaving correctly.
> They cannot gain anything by denying the outputs they promised,
> you can always drop it onchain and have it enforced, thus it is
> not just merely an IOU, as IOUs are not necessarily enforceable,
> but this mechanism *would* be.
> Blockchain as judge+jury+executioner, not noisy marketplace.)

To be fair towards the Bitcoin Cashers, I think there are still limitations
of LN we have not solved yet. Especially w.r.t mass exits from the
off-chain layers to the chain, where the blocks would stay full longer than
the standard HTLC timelocks, at a fee price point the average user can't
buy... I'm not sure we have outlawed the "bank run" scenario for LN yet.

I would say yes, the Blockchain is a judge authority, but in the worst case
we might all be in market competition to get enforcement.

> In principle, a set of promised outputs, if the owners of those
> outputs are peers, does not have *any* inherent order.
> Thus, I started to think about a commitment scheme that does not
> impose any ordering during commitment.

I think we should dissociate a) the *outputs publication ordering* from b)
the *spend paths ordering* itself. Even if an output publication is
attached to each spend path, the ordering constraint might not present the
same complexity.

Under this distinction, are you sure that TLUV imposes an ordering on the
output publication ?

> With `OP_TLUV`, however, it is possible to create an "N-of-N With
> Eviction" construction.
> When a participant in the N-of-N is offline, but the remaining
> participants want to advance the state of the construction, they
> instead evict the offline participant, creating a smaller N-of-N
> where *all* participants are online, and continue operating.

I think we should dissociate two types of pool spends : a) eviction by the
pool unanimity in case of unresponsive participants, and b) unilateral
withdrawal by a participant because of the liquidity allocation policy. I
think the distinction is worthwhile, as the pool membership should be
stable and the eviction not abused.

I'm not sure if TLUV enables b), at least without transforming the
unilateral withdrawal into an eviction. To ensure the TLUV operation is
correct (spent leaf is removed, withdrawing participant's point removed,
etc.), the script content must be inspected by *all* the participants.
However, I believe knowledge of this content effectively allows anyone to
play it out against the pool at any time? It's likely solvable at the price
of a CHECKSIG.

`OP_EVICT`
--

>  * If it is `1` that simply means "use the Taproot internal
>pubkey", as is usual for `OP_CHECKSIG`.

IIUC, this assumes the deployment of BIP118, where if the public key is a
single byte 0x01, the internal pubkey is used for verification.

>  * Output indices must not be duplicated, and indicated
>outputs must be SegWit v1 ("Taproot") outputs.

I think public key duplication does not need to be verified. If a
duplicated public key is present, the point is subtracted twice from the
internal pubkey and therefore the aggregated key remains unknown? So it
sounds 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-18 Thread Antoine Riard via bitcoin-dev
While I roughly agree with the thesis that different replacement policies
offer marginal block reward gains _in the current state_ of the ecosystem,
I would be more conservative about extending the conclusions to the
medium/long-term future.

> I suspect the "economically rational" choice would be to happily trade
> off that immediate loss against even a small chance of a simpler policy
> encouraging higher adoption of bitcoin, _or_ a small chance of more
> on-chain activity due to higher adoption of bitcoin protocols like
> lightning and thus a lower chance of an empty mempool in future.

This is making the assumption that the economic interests of the different
classes of actors in the Bitcoin ecosystem are not only well-understood but
also aligned. We have seen in the past mining actors' behaviors delaying
the adoption of protocol upgrades which were expected to encourage higher
adoption of Bitcoin. Further, while miners likely have an incentive to see
an increase in on-chain activity, there is also the possibility that
lightning will be so throughput-efficient that it drains the mempool
backlog, to a point where block demand is not high enough to pay back the
cost of mining hardware and operational infrastructure. Or at least does
not match the expected return on mining investments.

Of course, it could be argued that as a utxo-sharing protocol like
lightning compresses the number of payments per block space unit, it
lowers the fee burden, thus making Bitcoin as a payment system far more
attractive for a wider population of users, ultimately increasing the
block space demand and satisfying the miners.

In the state of today's knowledge, this hypothesis sounds the most
plausible. Though I would say it's better to be cautious until we better
understand the interactions between the different layers of the Bitcoin
ecosystem?

> Certainly those percentages can be expected to double every four years as
> the block reward halves (assuming we don't also reduce the min relay fee
> and block min tx fee), but I think for both miners and network stability,
> it'd be better to have the mempool backlog increase over time, which
> would both mean there's no/less need to worry about the special case of
> the mempool being empty, and give a better incentive for people to pay
> higher fees for quicker confirmations.

Intuitively, if we assume that liquidity isn't free on lightning [0], there
should be a subjective equilibrium where it's cheaper to open new channels
to reduce one's own graph traversal instead of paying too high routing fees.

As the core of the network becomes busier, I think we should see more LN
actors doing that kind of arbitrage, guaranteeing a mempool backlog in the
long term.
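
To make that arbitrage concrete, a toy comparison (all numbers are
hypothetical placeholders; a real node would plug in its own traffic and
feerate estimates):

    # Sketch: compare the amortized cost of opening a direct channel
    # against the routing fees paid along a multi-hop path.
    open_cost_sat = 154 * 20          # ~154 vB funding tx at 20 sat/vB
    payments_over_lifetime = 500      # expected payments to this peer
    routing_fee_per_payment_sat = 15  # fees paid to intermediate hops

    if routing_fee_per_payment_sat * payments_over_lifetime > open_cost_sat:
        print("cheaper to open a direct channel, adding block space demand")
    else:
        print("cheaper to keep routing through intermediaries")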

> If you really want to do that
> optimally, I think you have to have a mempool that retains conflicting
> txs and runs a dynamic programming solution to pick the best set, rather
> than today's simple greedy algorithms both for building the block and
> populating the mempool?

As of today, I think the power efficiency of mining chips and access to
affordable sources of energy are more significant factors in the
profitability of mining operations than the optimality of block
construction/replacement policy. IMO, that makes the argument that small
deltas in block reward gains aren't that relevant.

That said, the former factors might become a commodity, and the latter
become a competitive advantage. It could incentivize the development of
such greedy algorithms, potentially in a covert way as we have seen with
AsicBoost?

> Is there a plausible example where the difference isn't that marginal?

The paradigm might change in the future. If we see the deployment of
channel factories/payment pools, we might have users competing to spend a
shared-utxo with different liquidity needs and thus ready to overbid. Lack
of a "conflict pool" logic might make you lose income.

> Always accepting (package/descendent) fee rate increases removes the
> possibility of pinning entirely, I think

I think the pinnings we're affected by today are due to the ability of a
malicious counterparty to halt the on-chain resolution of the channel. The
presence of a pinning commitment transaction with a low chance of
confirmation (abuse of BIP125 rule 3) prevents the honest counterparty
from fee-bumping her own version of the commitment, and thus from
redeeming an HTLC before timelock expiration. As long as one commitment
confirms, independently of who issued it, the pinning is over. I think
moving to replace-by-feerate allows the honest counterparty to fee-bump
her commitment, thus offering a compelling block space demand, or forces
the malicious counterparty to enter a fee race.


To gather my thinking on the subject, the replace-by-feerate policy could
produce lower-fee blocks in today's environment of empty/low-filled
blocks. That said, the delta sounds marginal enough w.r.t other factors of
mining business units to not be worried (or 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-14 Thread Antoine Riard via bitcoin-dev
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.

Agree, it's a comment raising the shenanigans of tx-diff-only propagation,
afaict affecting all fee-bumping primitives equally. It wasn't a criticism
specific to transaction sponsors, as at that point of your post, sponsors
had not been introduced yet.

> This still doesn't address the issue I'm talking about, which is if you
> pre-commit to some "fee-bumping" key in your CPFP outputs and that key
> ends up being compromised. This isn't a matter of data availability or
> redundancy.

I'm not sure about the real safety risk of the compromise of the anchor
output key. Of course, if your anchor output key is compromised and the
bumped package is already public/known, an attacker can extend your package
with junk to neutralize your carve-out capability (I think). That said,
this issue sounds solved to me with package relay, as you can always
broadcast a new version of the package from the root UTXO, without regard
to the carve-out limitation.

(Side-note: I think we can slowly deprecate the carve-out once package
relay is deployed, as the fee-bumping flexibility of the latter is a
superset of the former).

> As I mentioned in the reply to Matt's message, I'm not quite
> understanding this idea of wanting to bump the fee for something
> without knowing what it is; that doesn't make much sense to me.
> The "bump fee" operation seems contingent on knowing
> what you want to bump.

From your post: "No rebroadcast (wasted bandwidth) is required for the
original txn data."

I'm objecting to that supposed benefit of a transaction sponsor. If you
have transaction X and transaction Y spending the same UTXO, both of them
can be defined as "the original txn data". If you wish to fee-bump
transaction X with a sponsor, how can you be sure that transaction Y isn't
present in the majority of network nodes, and that X has _not_ been
dropped since your last broadcast? Otherwise, iirc the sponsor design,
your sponsor transaction is going to be rejected.

I think you can't, and thus preventively you should broadcast as a (new
type) of package the sponsoring/sponsored transaction.

That said, I'm not sure if that issue affects vaults as much as payment
channels. With vaults, the tree of transactions is known ahead of time,
and there is no competition in the spends. Assuming the first broadcast
has been effective (and that could be a reasonable assumption thanks to
mempool rebroadcast), the sponsor should propagate.

So I think here, for the sake of sponsor efficiency analysis, we might
have to distinguish between protocols with once-and-for-all transaction
negotiation (vaults) and the ones with off-chain, dynamic re-negotiation
(payment channels, factories)?

> I'm not familiar with the L2 dust-limit issues, and I do think that
> "fixing" RBF behavior is *probably* worthwhile.

Sadly, it sounds like "fixing" RBF behavior is a requirement to eradicate
the most advanced pinnings... That fix is independent of the fee-bumping
primitive considered.

>  Those issues aside, I
> think the transaction sponsors idea may be closer to a silver bullet
> than you're giving it credit for, because designing specifically for the
> fee-management use case has some big benefits.

I don't deny the scheme is interesting, though I would argue SIGHASH_GROUP
is more efficient, while offering more flexibility. In any case, I think we
should still pursue further the collection of problems and requirements
(batching, key management, ...) that new fee-bumping primitives should aim
to solve, before engaging more on the deployment of one of them [0].

[0] In that sense see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-May/019031.html

On Mon, Feb 14, 2022 at 15:29, James O'Beirne  wrote:

> Thanks for your thoughtful reply Antoine.
>
> > In a distributed system such as the Bitcoin p2p network, you might
> > have transaction A and transaction B  broadcast at the same time and
> > your peer topology might fluctuate between original send and
> > broadcast of the diff, you don't know who's seen what... You might
> > inefficiently announce diff A on top of B and diff B on top A. We
> > might leverage set reconciliation there a la Erlay, though likely
> > with increased round-trips.
>
> In the context of fee bumping, I don't see how this is a criticism
> unique to transaction sponsors, since it also applies to CPFP: if you
> tried to bump fees for transaction A with child txn B, if some mempool
> hasn't seen parent A, it will reject B.
>
> > Have you heard about SIGHASH_GROUP [0] ?
>
> I haven't - I'll spend some time reviewing this. Thanks.
>
> > > [me complaining CPFP requires lock-in to keys]
> >
> > It's true it requires to pre-specify the fee-bumping key. Though note
> > the fee-bumping key can be fully 

Re: [bitcoin-dev] Thoughts on fee bumping

2022-02-11 Thread Antoine Riard via bitcoin-dev
Hi James,

I fully agree on the need to reframe the conversation around
mempools/fee-bumping/L2s though please see my following comments, it's far
from simple!

> Layering on special cases, more carve-outs, and X and Y percentage
> thresholds is going to make reasoning about the mempool harder than it
> already is.

I think that's true of a lot of (all?) pieces of software, there is a
trend towards complexification. As new Bitcoin use-cases such as LN or
vaults appear, it's not surprising to see the base layer's upper
interfaces change to support the requirements. Same with kernels: at the
beginning, you have basic memory support with paging/memory rights/kernel
allocators, then as you start to support more platforms/devices you might
have to support swap/DMA/VM management...

That we should keep the complexity reasonably sane to enable human
auditing, and maybe learn from the failures of systems engineering, that's
something to muse on.

> The countervailing force here ends up being spam prevention (a la
> min-relay-fee) to prevent someone from consuming bandwidth and mempool
> space with a long series of infinitesimal fee-bumps.

I think here we should dissociate a) long chains of transactions and b)
high numbers of repeated fee-bumps.

For a), _if_ SIGHASH_ANYPREVOUT is deployed and Eltoo adopted as a primary
update mechanism for stateful L2s, one might envision long chains of update
transactions serving as a new pinning vector, where all the chain elements
offer a compelling feerate/fee. It might be solvable with smarter mempool
logic sorting the elements from the best offers to the lower ones, though
that issue would need more serious investigation.

For b), if we bound the number of RBF attempts with a hard constant, we
decrease the fault tolerance of L2 transaction issuers. Some of them might
connect directly to the miners because they're offering a higher number of
incentive-compatible RBF attempts than vanilla full nodes. That might
provoke a more or less slow centralization of the transaction relay
topology...

> Instead of prompting a rebroadcast of the original transaction for
> replacement, which contains a lot of data not new to the network, it
> makes more sense to broadcast the "diff" which is the additive
> contribution towards some txn's feerate.

In a distributed system such as the Bitcoin p2p network, you might have
transaction A and transaction B broadcast at the same time and your peer
topology might fluctuate between original send and broadcast of the diff,
you don't know who's seen what... You might inefficiently announce diff A
on top of B and diff B on top of A. We might leverage set reconciliation
there a la Erlay, though likely with increased round-trips.

> It's probably uncontroversial at this
> point to say that even RBF itself is kind of a hack - a special
> sequence number should not be necessary for post-broadcast contribution
> toward feerate.

I think here we should dissociate the replace-by-fee mechanism itself from
the replacement signaling one. To have functional RBF, you don't need
signaling at all, just consider all received transactions as replaceable.
The replacement signaling has historically been motivated by protecting
applications relying on zero-conf (with all the past polemics about the
well-foundedness of such claims on other nodes' policy).

> In a sane design, no structural foresight - and certainly no wasted
> bytes in the form of unused anchor outputs - should be needed in order
> to add to a miner's reward for confirming a given transaction.

Have you heard about SIGHASH_GROUP [0]? It would move away from the
transaction as the unit of signing to enable arbitrary bundles of
inputs/outputs. You would have your pre-signed bundle of inputs/outputs
enforcing your LN/vaults/L2, and then at broadcast time you can attach an
input/output. I think it would answer your structural foresight concern.
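
To illustrate the intuition only (this is a toy sketch of the signing
scope, not the actual SIGHASH_GROUP serialization from the proposal):

    # Sketch: with a group-based sighash, the signature commits only to
    # the pre-signed bundle of inputs/outputs, so a fee input and change
    # output can be appended at broadcast time without invalidating it.
    presigned_bundle = {
        "inputs":  ["channel_funding_outpoint"],
        "outputs": ["to_alice", "to_bob"],
    }
    signature_commits_to = presigned_bundle  # not the full transaction

    broadcast_tx = {
        "inputs":  presigned_bundle["inputs"] + ["fee_bump_utxo"],
        "outputs": presigned_bundle["outputs"] + ["fee_change"],
    }
    # The appended input/output pair plays the anchor's role, with no
    # wasted bytes pre-committed in the contract transaction.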

> One of the practical downsides of CPFP that I haven't seen discussed in
> this conversation is that it requires the transaction to pre-specify the
> keys needed to sign for fee bumps. This is problematic if you're, for
> example, using a vault structure that makes use of pre-signed
> transactions.

It's true it requires pre-specifying the fee-bumping key. Though note the
fee-bumping key can be fully separated from the "vaults"/"channels" set of
main keys and hosted on replicated infrastructure such as watchtowers.

> The interface for end-users is very straightforward: if you want to bump
> fees, specify a transaction that contributes incrementally to package
> feerate for some txid. Simple.

As an L2 transaction issuer, you can't be sure the transaction you wish to
point to is already in the mempool, or has not been replaced by your
counterparty spending the same shared utxo, either competitively or
maliciously. So as a measure of caution, you should broadcast sponsor +
target transactions in the same package, thus cancelling the bandwidth
saving (I think).

> This theoretical concession 

Re: [bitcoin-dev] Improving RBF policy

2022-01-31 Thread Antoine Riard via bitcoin-dev
> Is it still verboten to acknowledge that RBF is normal behavior and
> disallowing it is the feature, and that feature is mostly there to appease
> some people's delusions that zeroconf is a thing? It seems a bit overdue to
> disrespect the RBF flag in the direction of always assuming it's on.

If you're thinking about the opt-in flag, not the RBF rules, please see
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019074.html
The latest state of the discussion is here:
https://gnusha.org/bitcoin-core-dev/2021-10-21.log
A gradual, multi-year deprecation sounds preferable to ease adaptation of
the affected Bitcoin applications.

Ultimately, I think it might not be the last time we have to change
high-impact tx-relay/mempool rules and a more formalized Core policy
deprecation process would be good.



On Mon, Jan 31, 2022 at 18:15, Bram Cohen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Gloria Zhao wrote:
>
>>
>> This post discusses limitations of current Bitcoin Core RBF policy and
>> attempts to start a conversation about how we can improve it,
>> summarizing some ideas that have been discussed. Please reply if you
>> have any new input on issues to be solved and ideas for improvement!
>>
>
> Is it still verboten to acknowledge that RBF is normal behavior and
> disallowing it is the feature, and that feature is mostly there to appease
> some people's delusions that zeroconf is a thing? It seems a bit overdue to
> disrespect the RBF flag in the direction of always assuming it's on.
>
>
>> - **Incentive Compatibility**: Ensure that our RBF policy would not
>>   accept replacement transactions which would decrease fee profits
>>   of a miner. In general, if our mempool policy deviates from what is
>> economically rational, it's likely that the transactions in our
>> mempool will not match the ones in miners' mempools, making our
>> fee estimation, compact block relay, and other mempool-dependent
>> functions unreliable. Incentive-incompatible policy may also
>> encourage transaction submission through routes other than the p2p
>> network, harming censorship-resistance and privacy of Bitcoin payments.
>>
>
> There are two different common regimes which result in different
> incentivized behavior. One of them is that there's more than a block's
> backlog in the mempool in which case between two conflicting transactions
> the one with the higher fee rate should win. In the other case where there
> isn't a whole block's worth of transactions the one with higher total value
> should win. It would be nice to have consolidated logic which handles both,
> it seems the issue has to do with the slope of the supply/demand curve
> which in the first case is gentle enough to keep the one transaction from
> hitting the rate but in the second one is basically infinite.


Re: [bitcoin-dev] Improving RBF Policy

2022-01-30 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

Thanks for this RBF summary. A few thoughts and more context comments if
they can help other readers.

> For starters, the absolute fee pinning attack is especially
> problematic if we apply the same rules (i.e. Rule #3 and #4) in
> Package RBF. Imagine that Alice (honest) and Bob (adversary) share a
> LN channel. The mempool is rather full, so their pre-negotiated
> commitment transactions' feerates would not be considered high
> priority by miners.  Bob broadcasts his commitment transaction and
> attaches a very large child (100KvB with 100,000sat in fees) to his
> anchor output. Alice broadcasts her commitment transaction with a
> fee-bumping child (200vB with 50,000sat fees which is a generous
> 250sat/vB), but this does not meet the absolute fee requirement. She
> would need to add another 50,000sat to replace Bob's commitment
> transaction.

Solving LN pinning attacks, what we're aiming for is enabling a fair
feerate bid between the counterparties, thus either forcing the adversary
to overbid or to disengage from the confirmation competition. If the
replace-by-feerate rule is adopted, there shouldn't be an incentive for Bob
to pick the first option. Though if he does, that's a winning outcome for
Alice, as one of the commitment transactions confirms and her
time-sensitive second-stage HTLC can be subsequently confirmed.
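
To make the asymmetry concrete, the arithmetic from the quoted scenario (a
simple sketch where the only rules modeled are the fee and feerate
comparisons):

    # Sketch: BIP125 rule 3 (absolute fees) vs a replace-by-feerate rule,
    # using the figures from the quoted scenario.
    bob_pin_fee, bob_pin_vsize = 100_000, 100_000   # 100KvB junk child
    alice_fee, alice_vsize     = 50_000, 200        # honest CPFP

    print(bob_pin_fee / bob_pin_vsize)   # 1 sat/vB
    print(alice_fee / alice_vsize)       # 250 sat/vB

    # Rule 3: Alice must beat 100,000 sat in absolute fees -> pinned.
    print(alice_fee > bob_pin_fee)       # False
    # Replace-by-feerate: Alice's 250 sat/vB bid wins, or Bob must
    # overbid at 100KvB scale, which is ruinously expensive for him.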

> It's unclear to me if
> we have a very strong reason to change this, but noting it as a
> limitation of our current replacement policy. See [#24007][12].

Deployment of Taproot opens interesting possibilities in the vaults/payment
channels design space, where the tapscripts can commit to different sets of
timelocks/quorums of keys. Even if the pre-signed states stay symmetric,
whoever the publisher is, the fee cost to spend can fluctuate.

> While this isn't completely broken, and the user interface is
> secondary to the safety of the mempool policy

I think with L2 transaction broadcast backends, the stability and clarity
of the RBF user interface is primary. What we should be worried about is
an overly complex interface easing the way for an attacker to trigger your
L2 node into issuing policy-invalid chains of transactions. Especially
when we consider that an attacker might have leverage on the composition
of the chain of transactions ("force broadcast of commitment A then
commitment B, knowing they will share a CPFP") or even on transaction size
("overload commitment A with HTLCs").

> * If the original transaction is in the top {0.75MvB, 1MvB} of the
>   mempool, apply the current rules (absolute fees must increase and
> pay for the replacement transaction's new bandwidth). Otherwise, use a
> feerate-only rule.

How would this new replacement rule behave if you have a parent in the
"replace-by-feerate" half but the child is in the "replace-by-fee" one?

If we allow the replacement of the parent based on the feerate, we might
decrease the top block absolute fees.

If we block the replacement of the parent based on the feerate because the
replacement's absolute fees aren't above the replaced package's, we still
leave a pinning vector open. The child might be low-feerate junk and even
attached to a low ancestor-score branch.

If I'm correct on this limitation, maybe we could turn off the
"replace-by-fee" behavior as soon as the mempool is filled with a few
blocks' worth of transactions?

> * Rate-limit how many replacements we allow per prevout.

Depending on how it is implemented, though, I would be concerned it
introduces a new pinning vector in the context of shared utxos. If it's a
hardcoded constant, it could be exhausted by an adversary starting at the
lowest acceptable feerate then slowly increasing while still not reaching
the top of the mempool. Same if it's time-based or block-based: there is
no guarantee the replacement slot is honestly used by your counterparty.

Further, an above-the-average replacement frequency might just be the
reflection of your confirmation strategy reacting to block schedule or
mempools historical data. As long as the feerate penalty is paid, I lean to
allow replacement.

(One solution could be to associate a per-user "tag" to the LN
transactions, where each "tag" would have its own replacement slots, but
privacy?)

> * Rate-limit transaction validation in general, per peer.

I think we could improve on Core's new transaction requester logic. Maybe
we could bound the per-peer announcement flow based on the feerate score
(modulo validation time) of the previously validated transactions from
that peer? That said, while related to RBF, it sounds to me that enhancing
Core's transaction rate-limiting strategy is a whole discussion in itself
[0]. Especially ensuring it's tolerant to the specific requirements of LN
& consorts.
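
As a strawman of what such a bound could look like (purely hypothetical
scoring; Core's txrequest module works differently):

    # Sketch: scale how many transaction announcements we process per
    # peer by the feerate quality of what they previously relayed.
    def announcement_budget(avg_feerate_sat_vb, base=100):
        # Peers whose past transactions paid healthy feerates earn a
        # larger validation budget; junk relayers get throttled.
        multiplier = min(avg_feerate_sat_vb / 10.0, 5.0)
        return int(base * max(multiplier, 0.1))

    print(announcement_budget(1.0))   # 10: mostly-junk peer
    print(announcement_budget(25.0))  # 250: high-quality peer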

> What should they be? We can do some arithmetic to see what happens if
> you start with the biggest/lowest feerate transaction and do a bunch
> of replacements. Maybe we end up with values that are high enough to
> prevent abuse and make sense for applications/users 

Re: [bitcoin-dev] Bitcoin Legal Defense Fund

2022-01-13 Thread Antoine Riard via bitcoin-dev
> One question I have is how you might describe the differences between
> what BLDF can accomplish and what e.g. EFF can accomplish. Having been
> represented by the EFF on more than one occasion, they are fantastic. Do
> you feel that the Bitcoin-specific focus of BLDF outweighs the more general
> (but deeper experience/track record) of an organization like the EFF (or
> others, like Berkman Cyberlaw Clinic, etc)? My main opinion is "the more
> the merrier", so don't consider it a critique, more a question so that you
> have the opportunity to highlight the unique strengths of this approach.

I think one opportunity could be building legal assistance in a diversity
of jurisdictions, beyond the US one.

I join the kudos about the EFF, though you won't find the institutional
equivalent in terms of subject expertise/readiness-to-assist in most other
countries. Especially considering the growing number of developers located
outside the US/Europe and a lot of great ecosystem initiatives nurturing
that trend.

Cheers,
Antoine

On Thu, Jan 13, 2022 at 14:06, Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> A further point -- were it to be a norm if a contributor to something like
> this be denied their full capacity for "free speech" by social convention,
> it would either encourage anonymous funding (less accountable) or would
> disincentivize creating such initiatives in the future.
>
> Both of those outcomes would be potentially bad, so I don't see limiting
> speech on an unrelated topic as a valid action.
>
> However, I think the inverse could have merit -- perhaps funders can
> somehow commit to 'abstracting' themselves from involvement in cases / the
> process of accepting prospective clients. As neither Alex nor Jack are
> lawyers (afaict?), this should already be true to an extent as the legal
> counsel would be bound to attorney client privilege.
>
> Of course we live in a free country and however Jack and Alex determine
> they should spend their own money is their god-given right, as much as it
> is unfortunately the right of anyone to sue a developer for some alleged
> infringement. I'm personally glad that Jack and Alex are using their money
> to help developers and not harass them -- many thanks for that!
>
> One question I have is how you might describe the differences between what
> BLDF can accomplish and what e.g. EFF can accomplish. Having been
> represented by the EFF on more than one occasion, they are fantastic. Do
> you feel that the Bitcoin-specific focus of BLDF outweighs the more general
> (but deeper experience/track record) of an organization like the EFF (or
> others, like Berkman Cyberlaw Clinic, etc)? My main opinion is "the more
> the merrier", so don't consider it a critique, more a question so that you
> have the opportunity to highlight the unique strengths of this approach.
>
> Best,
>
> Jeremy
> --
> @JeremyRubin 
> 
>
>
> On Thu, Jan 13, 2022 at 10:50 AM Steve Lee via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I think the word "The" is important. The title of the email and the name
>> of the fund is Bitcoin Legal Defense Fund. It is "a" legal defense fund;
>> not THE Bitcoin Legal Defense Fund. There is room for other funds and
>> strategies and anyone is welcome to create alternatives.
>>
>> I also don't see why Alex or anyone should be denied the opportunity to
>> comment on future soft forks or anything about bitcoin. Alex should have no
>> more or less right to participate and his comments should be judged on
>> their merit, just like yours and mine.
>>
>> On Thu, Jan 13, 2022 at 9:37 AM Prayank via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hi Jack,
>>>
>>>
>>> > The main purpose of this Fund is to defend developers from lawsuits
>>> regarding their activities in the Bitcoin ecosystem, including finding and
>>> retaining defense counsel, developing litigation strategy, and paying legal
>>> bills. This is a free and voluntary option for developers to take advantage
>>> of if they so wish. The Fund will start with a corps of volunteer and
>>> part-time lawyers. The board of the Fund will be responsible for
>>> determining which lawsuits and defendants it will help defend.
>>>
>>> Thanks for helping the developers in legal issues. Appreciate your
>>> efforts and I understand your intentions are to help Bitcoin in every
>>> possible way.
>>>
>>>
>>> Positives that I see in this initiative:
>>>
>>> 1.Developers don't need to worry about rich scammers and can focus on
>>> development.
>>>
>>> 2.Financial help for developers as legal issues can end up in wasting
>>> lot of time and money.
>>>
>>> 3.People who have misused courts to affect bitcoin developers will get
>>> better response that they deserve.
>>>
>>>
>>> I had few suggestions and feel free to ignore them if they do not make
>>> sense:
>>>
>>> 1.Name of this fund could be anything and 

Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0

2021-12-19 Thread Antoine Riard via bitcoin-dev
> we might start with 60 seconds, and then double every release till we get
> to 600 at which point we disable it.

This is clearly new. However, I'm not sure it solves the multi-party
funding transaction DoS, which was one of the motivations to propose
deprecating opt-in RBF. The malicious counterparty can broadcast its
low-feerate, opt-out spend of its own collateral input long before
engaging in the cooperative funding.

When the funding transaction starts to propagate, the opt-out has been
"first seen" for a while, replaceability is turned off, and the honest
funding transaction is bounced off?


Taking the opportunity to lay out another proposal which was whispered to
me offline:

"(what) if the nversion of outputs (which is set by their creating
transaction) were inspected and
triggered any spend of the output to be required to be flagged to be
replaceable-- as a standardness rule?"

While working to solve the DoS, I believe this approach introduces an
overhead cost in the funding of multi-party transactions, as from now on
you have to sanitize your collateral inputs by first sending them to
replaceable-nVersion outputs? (iirc, this is done by Lightning Pool, where
there is a first step in which the inputs are locked in a 2-of-2 with the
orchestrator before engaging in the batch execution tx).

Current state of the discussion is to introduce a `fullrbf` config-knob
turned to false, see more context here :
https://gnusha.org/bitcoin-core-dev/2021-10-21.log. Proposing an
implementation soon.

Antoine

On Sat, Dec 18, 2021 at 11:51, Jeremy  wrote:

> Small idea:
>
> ease into getting rid of full-rbf by keeping the flag working, but make
> enforcement of non-replaceability something that happens n seconds after
> first seen.
>
> this reduces the ability to partition the mempools by broadcasting
> irreplaceable conflicts all at once, and slowly eases clients off of
> relying on non-RBF.
>
> we might start with 60 seconds, and then double every release till we get
> to 600 at which point we disable it.
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
> <https://twitter.com/JeremyRubin>
>
>
> On Tue, Jun 15, 2021 at 10:00 AM Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi,
>>
>> I'm writing to propose deprecation of opt-in RBF in favor of full-RBF as
>> the Bitcoin Core's default replacement policy in version 24.0. As a
>> reminder, the next release is 22.0, aimed for August 1st, assuming
>> agreement is reached, this policy change would enter into deployment phase
>> a year from now.
>>
>> Even if this replacement policy has been deemed as highly controversial a
>> few years ago, ongoing and anticipated changes in the Bitcoin ecosystem are
>> motivating this proposal.
>>
>> # RBF opt-out as a DoS Vector against Multi-Party Funded Transactions
>>
>> As explained in "On Mempool Funny Games against Multi-Party Funded
>> Transactions'', 2nd issue [0], an attacker can easily DoS a multi-party
>> funded transactions by propagating an RBF opt-out double-spend of its
>> contributed input before the honest transaction is broadcasted by the
>> protocol orchester. DoSes are qualified in the sense of either an attacker
>> wasting timevalue of victim's inputs or forcing exhaustion of the
>> fee-bumping  reserve.
>>
>> This affects a series of Bitcoin protocols such as Coinjoin, onchain DLCs
>> and dual-funded LN channels. As those protocols are still in the early
>> phase of deployment, it doesn't seem to have been executed in the wild for
>> now.  That said, considering that dual-funded are more efficient from a
>> liquidity standpoint, we can expect them to be widely relied on, once
>> Lightning enters in a more mature phase. At that point, it should become
>> economically rational for liquidity service providers to launch those DoS
>> attacks against their competitors to hijack user traffic.
>>
>> Beyond that, presence of those DoSes will complicate the design and
>> deployment of multi-party Bitcoin protocols such as payment
>> pools/multi-party channels. Note, Lightning Pool isn't affected as there is
>> a preliminary stage where batch participants are locked-in their funds
>> within an account witnessScript shared with the orchestrer.
>>
>> Of course, even assuming full-rbf, propagation of the multi-party funded
>> transactions can still be interfered with by an attacker, simply
>> broadcasting a double-spend with a feerate equivalent to the honest
>> transaction. However, it tightens the attack scenario to a scorched earth
>> approach, where the attacker has to commit equivalent fee-bumping reserve
>> to maint

Re: [bitcoin-dev] A fee-bumping model

2021-12-09 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

For LN, I think 3 tower reward models have been discussed: per-penalty
on-chain bounty / per-job micropayment / customer subscription. If
curious, see the wip specification:
https://github.com/sr-gi/bolt13/blob/master/13-watchtowers.md

> - Do we expect watchtowers tracking multiple vaults to be batching
> multiple Cancel transaction fee-bumps?

For LN, I can definitely see LSPs batching the closure of their spokes,
with one CPFP spending multiple anchor outputs of commitment transactions,
and RBF'ing when needed.

> - Do we expect vault users to be using multiple watchtowers for a better
> trust model? If so, and we're expecting batched fee-bumps, won't those
> conflict?

Even worse, a malicious counterparty could force a unilateral closure by
the honest participant and observe the fee-bumping transaction propagation
by the towers to discover their full-node topologies. Might be good to
have an ordering algo among your towers to select who is fee-bumping
first, and broadcast from all of them when you're nearing timelock
expiration.
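
A sketch of one possible ordering scheme (hypothetical names; any
deterministic shared tie-breaker among the towers would do):

    # Sketch: deterministically order the towers per channel so only one
    # fee-bumps at first, hiding the tower topology; everyone broadcasts
    # once the HTLC timelock gets close.
    import hashlib

    def tower_order(towers, channel_id):
        key = lambda t: hashlib.sha256((t + channel_id).encode()).hexdigest()
        return sorted(towers, key=key)

    towers = ["tower_a", "tower_b", "tower_c"]
    order = tower_order(towers, "chan_42")
    print("first bumper:", order[0])
    # near timelock expiration: broadcast from all towers regardless.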

> Well stated about CPFP carve out. I suppose the generalization is that
> allowing n extra ancestorcount=2 descendants to a transaction means it can
> help contracts with <=n+1 parties (more accurately, outputs)? I wonder if
> it's possible to devise a different approach for limiting
> ancestors/descendants, e.g. by height/width/branching factor of the family
> instead of count... :shrug:

I think CPFP carve out can be deprecated once package relay and a
pinning-hardened RBF are deployed? Like if your counterparty is abusing
the ancestors/descendants limits, your RBF'ed package should evict the
malicious pinning starting from the root commitment transaction (I think).
And I believe it can be generalized to n-party contracts, if your
transaction includes one "any-contract-can-spend" anchor output.

> - Should the fee-bumping strategy depend on how close you are to your
> timelock expiry? (though this seems like a potential privacy leak, and the
> game theory could get weird as you mentioned).

Yes, at first it's hard to predict how tight it is going to be and it's
nice to save on fees. At some point, you might fall out of this
fee-bumping warm-up phase, accelerate the rate and start to be more
aggressive. In that direction, see the DLC spec fee-bumping recommendation:
https://github.com/discreetlogcontracts/dlcspecs/blob/master/Non-Interactive-Protocol.md

Note, at least for LN, the transaction weight isn't proportional to the
value at stake, and there is a focal point where it's more interesting to
save fee reserves rather than keep bumping.

> - As long as you have a good fee estimator (i.e. given a current mempool,
> can get an accurate feerate given a % probability of getting into target
> block n), is there any reason to devise a fee-bumping strategy beyond
> picking a time interval?

You might be an LSP: you observe rapid changes in the global network HTLC
traffic and would like to react accordingly. You accelerate the
fee-bumping to free/reallocate your liquidity elsewhere.

> So the equation is more like: a miner with 1/N of the hashrate, employing
> this censorship strategy, gains only if `max(f2-g2, 0) > N * (f1-g1)`. More
> broadly, the miner only profits if `f2` is significantly higher than `g2

This is where it becomes hard. From the "limited rationality" of a
fee-bumping node, `g2` is unknown, and you might be incentivized to
overshoot to front-run the `g2` issuer (?)

> In general, I agree it would really suck to inadvertently create a game
> where miners can drive feerates up by triggering desperation-driven
> fee-bumping procedures. I guess this is a reason to avoid
> increasingly-aggressive feebumping, or strategies where we predictably
> overshoot.

Good topic of research! A few other vectors of analysis:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002569.html

Cheers,
Antoine

On Tue, Dec 7, 2021 at 12:24, Gloria Zhao  wrote:

> Hi Darosior and Ariard,
>
> Thank you for your work looking into fee-bumping so thoroughly, and for
> sharing your results. I agree about fee-bumping's importance in contract
> security and feel that it's often under-prioritized. In general, what
> you've described in this post, to me, is strong motivation for some of the
> proposed changes to RBF we've been discussing. Mostly, I have some
> questions.
>
> > The part of Revault we are interested in for this study is the
> delegation process, and more
> > specifically the application of spending policies by network monitors
> (watchtowers).
>
> I'd like to better understand how fee-bumping would be used, i.e. how the
> watchtower model works:
> - Do all of the vault parties both deposit to the vault and a refill/fee
> to the watchtower, is there a reward the watchtower collects for a
> successful Cancel, or something else? (Apologies if there's a thorough
> explanation somewhere that I haven't already seen).
> - Do we expect watchtowers tracking multiple vaults to be 

Re: [bitcoin-dev] A fee-bumping model

2021-12-09 Thread Antoine Riard via bitcoin-dev
Hi Antoine,

> It seems to me the only policy-level mitigation for RBF pinning around
> the "don't decrease the absolute fees of a less-than-a-block mempool"
> would be to drop the requirement on increasing absolute fees if the
> mempool is "full enough" (and the feerate increases exponentially, of
> course).

Yes, it's hard to say the "less-than-a-block-mempool" scenario is long-term
realistic. In the future, you can expect liquidity operations to be
triggered as soon as the network mempools start to be empty. At a given
block space price, there is always room to improve your routing topology.

That said, you would like the default block construction strategy to be
"all-weather" economically aligned. To build such a more robust strategy,
I think a miner would have an interest in leveling the "full enough" bar.

I still think a policy-level mitigation is possible, where you have
replace-by-feerate above X MB of blocks and replace-by-fee under X.
Responsibility is on the L2 fee-bumper to guarantee the honest bid is in
the X MB of blocks, or the malicious pinning attacker has to overbid.
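
A sketch of that two-regime decision (X is a policy constant; whether a
transaction sits in the top X MvB would come from the mempool's
block-building view, modeled here as a plain boolean):

    # Sketch: replace-by-fee inside the top X MvB of the mempool (next
    # blocks), replace-by-feerate below it. Hypothetical decision logic.
    def accept_replacement(in_top_x_mvb,
                           old_fee, new_fee,
                           old_feerate, new_feerate):
        if in_top_x_mvb:
            # current BIP125-style rules: absolute fees must increase
            return new_fee > old_fee and new_feerate > old_feerate
        # below the top: a better feerate bid is enough, so the honest
        # bidder reaches the top X MvB or the attacker must overbid
        return new_feerate > old_feerate

    print(accept_replacement(False, 100_000, 50_000, 1.0, 250.0))  # True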

At first sight, yes, committing the maximum tx size in the annex covered
by your counterparty's signature should still allow you to add a
high-feerate input. Though it would be nice if we can save a consensus
rule to fix pinnings.

> In any case, for Lightning i think it's a bad idea to re-introduce trust
> on this side post anchor outputs. For Revault it's clearly out of the
> question to introduce trust in your counterparties (why would you bother
> having a fee-bumping mechanism in the first place then?). Probably the
> same holds for all offchain contracts.

Yeah, it was a strawman exercise on the question of "no knowledge of other
primitives that can be used by multi-party" :) I wouldn't recommend that
kind of fee-bumping "shared cache" scheme for a trust-minimized setup.
Maybe interesting for watchtowers/LSP topologies.

> Black swan event 2.0? Just rule n°3 is inherent to any kind of fee
> estimation.

It's just the good old massive mempool congestion systemic risk known
since the LN whitepaper. AFAIK, anchor output fee-bumping schemes have not
really started the work to be robust against that. What I'm aiming to
point out is that it might be even harder to build a fault-tolerant
fee-bumping strategy because of the "limited rationality" of your local
node towards the behaviors of the other bitcoin users in the face of this
phenomenon. Would be nice to have more research on that front.

> I don't think any kind of mempool-based estimate generalizes well, since
> at any point the expected time before the next block is 10 minutes (and a
> lot can happen in 10min).

Sure, your bid might be off because of block variance, though if you're
ready to pay multiple RBF penalties, which are linear, you might adjust
your bids as a function of "real-time" mempool congestion.

> I'm very concerned that large stakeholders of the "offchain contracts
> ecosystem" would just go this (easier) way and further increase mining
> centralisation pressure.

*back on the whiteboard sweating on a consensus-enforced timestop primitive*

Cheers,
Antoine

On Tue, Nov 30, 2021 at 10:19, darosior  wrote:

> Hi Antoine,
>
> Thanks for your comment. I believe for Lightning it's simpler with regard
> to the management of the UTxO pool, but harder with regard to choosing
> a threat model.
> Responses inline.
>
>
> For any opened channel, ensure the confirmation of a Commitment
> transaction and the children HTLC-Success/HTLC-Timeout transactions. Note,
> in the Lightning security game you have to consider (at least) 4 types of
> players moves and incentives : your node, your channel counterparties, the
> miners, the crowd of bitcoin users. The number of the last type of players
> is unknown from your node, however it should not be forgotten you're in
> competition for block space, therefore their block demands bids should be
> anticipated and reacted to in consequence. With that remark in mind,
> implications for your LN fee-bumping strategy will be raised afterwards.
>
> For a LN service provider, on-chain overpayments are bearing on your
> operational costs, thus downgrading your economic competitiveness. For the
> average LN user, overpayment might price out outside a LN non-custodial
> deployment, as you don't have the minimal security budget to be on your own.
>
>
> I think this problem statement can be easily generalised to any offchain
> contract. And your points stand for all of them.
> "For any opened contract, ensure at any point the confirmation of a (set
> of) transaction(s) in a given number of blocks"
>
>
> Same issue with Lightning, we can be pinned today on the basis of
> replace-by-fee rule 3. We can be also blinded by network mempool
> partitions, a pinning counterparty can segregate all the full-nodes  in as
> many subsets by broadcasting a revoked Commitment transaction different for
> each. For Revault, I think you can also do unlimited partitions by mutating
> the 

Re: [bitcoin-dev] A fee-bumping model

2021-11-30 Thread Antoine Riard via bitcoin-dev
Hi Darosior,

Nice work, a few thoughts extending your model further for Lightning.

> For any delegated vault, ensure the confirmation of a Cancel transaction
> in a configured number of blocks at any point. In so doing, minimize the
> overpayments and the UTxO set footprint. Overpayments increase the burden
> on the watchtower operator by increasing the required frequency of
> refills of the fee-bumping wallet, which is already the worst user
> experience. You are likely to manage a number of UTxOs with your number
> of vaults, which comes at a cost for you as well as everyone running a
> full node.

For any opened channel, ensure the confirmation of a Commitment transaction
and the children HTLC-Success/HTLC-Timeout transactions. Note, in the
Lightning security game you have to consider (at least) 4 types of players'
moves and incentives: your node, your channel counterparties, the miners,
the crowd of bitcoin users. The number of the last type of players is
unknown to your node; however, it should not be forgotten that you're in
competition for block space, therefore their block demand bids should be
anticipated and reacted to accordingly. With that remark in mind,
implications for your LN fee-bumping strategy will be raised afterwards.

For an LN service provider, on-chain overpayments bear on your operational
costs, thus downgrading your economic competitiveness. For the average LN
user, overpayment might price them out of a non-custodial LN deployment,
as they don't have the minimal security budget to be on their own.

> This opens up a pinning vector, or at least a significant nuisance: any
> other party can largely increase the absolute fee without increasing the
> feerate, leveraging the RBF rules to prevent you from replacing it
> without paying an insane fee. And you might not see it in your own
> mempool and could only suppose it's happening by receiving non-full
> blocks or with transactions paying a lower feerate.

Same issue with Lightning, we can be pinned today on the basis of
replace-by-fee rule 3. We can also be blinded by network mempool
partitions: a pinning counterparty can segregate all the full nodes into
as many subsets by broadcasting a different revoked Commitment transaction
to each. For Revault, I think you can also do unlimited partitions by
mutating the ANYONECANPAY-input of the Cancel.

That said, if you have a distributed tower deployment, spread across the
p2p network topology, and they can't be clustered together through
cross-layer or intra-layer heuristics, you should be able to reliably
observe such partitions. I think such distributed monitors are deployed by
a few L1 merchants accepting 0-conf to detect naive double-spends.

> Unfortunately i know of no other primitive that can be used by
> multi-party (i mean, >2) presigned transactions protocols for fee-bumping
> that aren't (more) vulnerable to pinning.

Have we already discussed a fee-bumping "shared cache", a CPFP variation?
Strawman idea: Alice and Bob commit collateral inputs to a separate UTXO
from the main "offchain contract" one. This UTXO is locked by a multi-sig.
For any Commitment transaction pre-signed, also counter-sign a CPFP with
the top mempool feerate included, spending a Commitment anchor output and
the shared-cache UTXO. If the fees spike, you can re-sign a high-feerate
CPFP, assuming interactivity. As the CPFP is counter-signed by everyone,
the outputs can be CSV-1 encumbered to prevent pinnings. If the shared
cache is funded at parity, there shouldn't be an incentive to waste or
maliciously inflate the feerate. I think this solution can be easily
generalized to more than 2 counterparties by using a multi-signature
scheme. Big issue: if the feerate is short due to fee spikes and you need
to re-sign a higher-feerate CPFP, you're trusting your counterparty to
interact, though arguably not worse than the current update_fee mechanism.
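
To illustrate the re-signing condition, the usual CPFP arithmetic (a
sketch; all weights and feerates are hypothetical placeholders):

    # Sketch: fee the pre-signed CPFP must carry for the package
    # (commitment + CPFP) to reach the target feerate. If the current
    # top-mempool feerate exceeds what was counter-signed, the parties
    # must interact to re-sign a higher-fee CPFP from the shared cache.
    def cpfp_fee_needed(target_sat_vb, commit_vb, commit_fee, cpfp_vb):
        package_vb = commit_vb + cpfp_vb
        return max(int(target_sat_vb * package_vb) - commit_fee, 0)

    # hypothetical numbers: 700 vB commitment paying 700 sat, 150 vB CPFP
    print(cpfp_fee_needed(40.0, 700, 700, 150))  # 33300 sat from the cache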

> For Lightning, it'd mean keeping an equivalent amount of funds as the sum
> of all your channels balances sitting there unallocated "just in case".
> This is not reasonable.

Agree. Game-theory wise, you would like to keep a full fee-bumping
reserve, ready to burn as much in fees as the contested HTLC value, as
it's the maximum gain of your counterparty. Though perfect equilibrium is
hard to achieve because your malicious counterparty might have an edge
pushing you to broadcast your Commitment first by withholding HTLC
resolution.

Fractional fee-bumping reserves are much more realistic to expect in the
LN network. The lower the fee-bumping reserve, the higher the liquidity
deployed, and in theory the higher the routing fees. By observing
historical feerates, average offchain balances at risk and expected
routing fee gains, you should be able to discover an equilibrium where
higher levels of reserve aren't worth the opportunity cost. I guess this
equilibrium could be your LN fee-bumping reserve max feerate.
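
One way to make that equilibrium concrete (toy numbers throughout; a
sketch of the trade-off rather than a real model):

    # Sketch: pick the fee-bumping reserve where the expected loss from
    # being under-reserved stops justifying the opportunity cost of
    # idle capital. All inputs are hypothetical placeholders.
    balance_at_risk = 1_000_000   # sats in pending HTLCs
    routing_yield = 0.05          # yearly return on deployed liquidity
    closure_weight_vb = 2_000     # commitment + HTLC claim transactions

    def reserve_choice(p_spike, spike_feerate):
        reserve = closure_weight_vb * spike_feerate
        expected_loss_unreserved = p_spike * balance_at_risk
        opportunity_cost = reserve * routing_yield
        return reserve if expected_loss_unreserved > opportunity_cost else 0

    print(reserve_choice(0.02, 100.0))  # 200000.0: worth reserving here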

Note, I think the LN approach is a bit different from what suits a custody
protocol like Revault, 

Re: [bitcoin-dev] death to the mempool, long live the mempool

2021-10-27 Thread Antoine Riard via bitcoin-dev
Hi Lisa,

Network mempools constitute a blockspace marketplace where block demand
meets supply in real time. Block producers act to discover the best
feerate bids compensating for their operational costs, and transaction
proposers act to offer the best feerate as a function of their
confirmation preferences.

Of course in a distributed system like bitcoin, we can't guarantee perfect
information from the market participants. But moving away from this model
by decreasing the ability of the non-mining nodes to observe the current
demand is softening the requirements for potential attackers.

As transaction proposers are competing with each other to publish, they
have an interest in "front-running" each other by querying the pending
transactions from the block producers instead of observing only the
published blocks. Therefore, good connections to the block producers are
now critical and the censorship-resistance of the mining endpoints must be
guaranteed.

Such a list of endpoints couldn't be static, otherwise it's an artificial
barrier to entry into the mining competition, and as such a centralization
vector. Dynamic, trust-minimized discovery of the mining endpoints assumes
an address-relay network, whose robustness must be high enough against
sophisticated sybil attacks. One current defense mechanism in Core to
achieve that is selecting outbound peers based in different /16 subnets,
as it's harder for an attacker to obtain IP addresses across subnets.
Replicating this mechanism for the mining endpoints binds the mining
topology to the Internet one, which downgrades the mining competition.

Relying on tor to guarantee the confidentiality of the transaction
announcement raises its own issues. Flowing all the bitcoin traffic over
tor by default will change the incentive structure of tor attackers,
potentially attracting a new class of attackers able to do deanonymization
attacks, not that expensive in practice [0]. Tor bridges are another
censorship vector, as the fingerprint of the bitcoin traffic (a block
every 10 min, etc.) makes it possible to drop or delay the tor channel, in
the absence of high-bandwidth-consuming "synthetic" traffic.

Further, identified mining endpoints make it easier to launch partition
attacks, where mining mempools are sent low-feerate clusters of
transactions to prevent the replacement by a better feerate offer. This is
especially concerning for L2 nodes with time-sensitive requirements [1].

Lastly, removing the mempool won't solve the current issues inherent to
pre-signed transactions under the mempool min fee, as ultimately miners'
mempools are also finite in memory and a dynamic lower bound must exist to
prevent spam. These lower bounds can potentially increase after the
signature exchange of time-sensitive transactions.

Antoine

[0] https://www.usenix.org/system/files/sec19-jansen.pdf
[1] See "The Ugly"
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html

On Tue, Oct 26, 2021 at 03:37, lisa neigut via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> In a recent conversation with @glozow, I had the realization that the
> mempool is obsolete and should be eliminated.
>
> Instead, users should submit their transactions directly to mining pools,
> preferably over an anonymous communication network such as tor. This can
> easily be achieved by mining pools running a tor onion node for this
> express purpose (or via a lightning network extension etc)
>
> Mempools make sense in a world where mining is done by a large number of
> participating nodes, eg where the block template is constructed by a
> majority of the participants on the network. In this case, it is necessary
> to socialize pending transaction data to all participants, as you don’t
> know which participant will be constructing the winning block template.
>
> In reality however, mempool relay is unnecessary where the majority of
> hashpower and thus block template creation is concentrated in a
> semi-restricted set.
>
> Removing the mempool would greatly reduce the bandwidth requirement for
> running a node, keep intentionality of transactions private until
> confirmed/irrevocable, and naturally resolve all current issues inherent in
> package relay and rbf rules. It also resolves the recent minimum relay
> questions, as relay is no longer a concern for unmined transactions.
>
> Provided the number of block template producing actors remains beneath,
> say 1000, it’d be quite feasible to publish a list of tor endpoints that
> nodes can independently  + directly submit their transactions to. In fact,
> merely allowing users to select their own list of endpoints to use
> alternatively to the mempool would be a low effort starting point for the
> eventual replacement.
>
> On the other hand, removing the mempool would greatly complicate solo
> mining and would also make BetterHash proposals, which move the block
> template construction away from a centralized mining pool 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-29 Thread Antoine Riard via bitcoin-dev
Hi Bastien,

> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...

I share the concern about splitting utxos into smaller ones.
IIRC, the carve-out tolerance is only 2 txn / 10,000 vB. If one of your
counterparties attaches a junk branch to her own anchor output, are you
allowed to chain your self-owned unconfirmed CPFP?
I'm thinking about the "Chained CPFPs" topology exposed here:
https://github.com/rust-bitcoin/rust-lightning/issues/989.
Or do you have another L2 broadcast topology which could be safe w.r.t our
current mempool logic :) ?


On Mon, Sep 27, 2021 at 03:15, Bastien TEINTURIER  wrote:

> I think we could restrain package acceptance to only confirmed inputs for
>> now and revisit later this point ? For LN-anchor, you can assume that the
>> fee-bumping UTXO feeding the CPFP is already
>> confirmed. Or are there currently-deployed use-cases which would benefit
>> from your proposed Rule #2 ?
>>
>
> I think constraining package acceptance to only confirmed inputs
> is very limiting and quite dangerous for L2 protocols.
>
> In the case of LN, an attacker can game this and heavily restrict
> your RBF attempts if you're only allowed to use confirmed inputs
> and have many channels (and a limited number of confirmed inputs).
> Otherwise you'll need node operators to pre-emptively split their
> utxos into many small utxos just for fee bumping, which is inefficient...
>
> Bastien
>
> Le lun. 27 sept. 2021 à 00:27, Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> a écrit :
>
>> Hi Gloria,
>>
>> Thanks for your answers,
>>
>> > In summary, it seems that the decisions that might still need
>> > attention/input from devs on this mailing list are:
>> > 1. Whether we should start with multiple-parent-1-child or
>> 1-parent-1-child.
>> > 2. Whether it's ok to require that the child not have conflicts with
>> > mempool transactions.
>>
>> Yes, 1) it would be good to have input from more potential users of package
>> acceptance. And 2) I think it's more a matter of clearer wording of the
>> proposal.
>>
>> However, see my final point on the relaxation around "unconfirmed inputs"
>> which might in fact alter our current block construction strategy.
>>
>> > Right, the fact that we essentially always choose the first-seen
>> witness is
>> > an unfortunate limitation that exists already. Adding package mempool
>> > accept doesn't worsen this, but the procedure in the future is to
>> replace
>> > the witness when it makes sense economically. We can also add logic to
>> > allow package feerate to pay for witness replacements as well. This is
>> > pretty far into the future, though.
>>
>> Yes I agree package mempool doesn't worsen this. And it's not an issue
>> for current LN as you can't significantly inflate a spending witness for
>> the 2-of-2 funding output.
>> However, it might be an issue for multi-party protocols where the spending
>> script has alternative branches with asymmetric valid witness weights.
>> Taproot should ease that kind of script so hopefully we would deploy
>> wtxid-replacement not too far in the future.
>>
>> > I could be misunderstanding, but an attacker wouldn't be able to
>> > batch-attack like this. Alice's package only conflicts with A' + D',
>> not A'
>> > + B' + C' + D'. She only needs to pay for evicting 2 transactions.
>>
>> Yeah, I can be clearer. I think you have two pinning attack scenarios to
>> consider.
>>
>> In LN, if you're trying to confirm a commitment transaction to time-out
>> or claim on-chain a HTLC and the timelock is near-expiration, you should be
>> ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
>> value offered by the HTLC.
>>
>> Following this security assumption, an attacker can exploit it by
>> targeting commitment transactions from different channels together,
>> blocking them under a high-fee child whose fee value
>> is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
>> overbid, as it's not worthwhile to offer fees beyond their competed HTLCs.
>> Apart from observing mempool state, victims can't learn they're targeted
>> by the same attacker.
>>
>> To draw from the aforementioned topology, Mallory broadcasts A' + B' + C'

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-26 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

Thanks for your answers,

> In summary, it seems that the decisions that might still need
> attention/input from devs on this mailing list are:
> 1. Whether we should start with multiple-parent-1-child or
> 1-parent-1-child.
> 2. Whether it's ok to require that the child not have conflicts with
> mempool transactions.

Yes, 1) it would be good to have input from more potential users of package
acceptance. And 2) I think it's more a matter of clearer wording of the
proposal.

However, see my final point on the relaxation around "unconfirmed inputs"
which might in fact alter our current block construction strategy.

> Right, the fact that we essentially always choose the first-seen witness is
> an unfortunate limitation that exists already. Adding package mempool
> accept doesn't worsen this, but the procedure in the future is to replace
> the witness when it makes sense economically. We can also add logic to
> allow package feerate to pay for witness replacements as well. This is
> pretty far into the future, though.

Yes I agree package mempool doesn't worsen this. And it's not an issue for
current LN as you can't significantly inflate a spending witness for the
2-of-2 funding output.
However, it might be an issue for multi-party protocols where the spending
script has alternative branches with asymmetric valid witness weights.
Taproot should ease that kind of script so hopefully we would deploy
wtxid-replacement not too far in the future.

> I could be misunderstanding, but an attacker wouldn't be able to
> batch-attack like this. Alice's package only conflicts with A' + D', not A'
> + B' + C' + D'. She only needs to pay for evicting 2 transactions.

Yeah, I can be clearer. I think you have two pinning attack scenarios to
consider.

In LN, if you're trying to confirm a commitment transaction to time-out or
claim on-chain a HTLC and the timelock is near-expiration, you should be
ready to pay in commitment+2nd-stage HTLC transaction fees as much as the
value offered by the HTLC.

Following this security assumption, an attacker can exploit it by targeting
commitment transactions from different channels together, blocking them
under a high-fee child whose fee value
is equal to the top-value HTLC + 1. Victims' fee-bumping logic won't
overbid, as it's not worthwhile to offer fees beyond their competed HTLCs.
Apart from observing mempool state, victims can't learn they're targeted by
the same attacker.

To draw from the aforementioned topology, Mallory broadcasts A' + B' + C' +
D', where A' conflicts with Alice's P1, B' conflicts with Bob's P2, C'
conflicts with Caroll's P3. Let's assume P1 is confirming the top-value
HTLC of the set. If D' pays fees higher than P1's HTLC value + 1, it won't
be rational for Alice or Bob or Caroll to keep offering competing feerates.
Mallory will take a loss on stealing P1, as she has paid more in fees, but
will realize a gain on P2+P3.

In this model, Alice is allowed to evict those 2 transactions (A' + D'),
but as she is economically bounded she won't succeed.

Mallory is maliciously exploiting RBF rule 3 on absolute fee. I think this
1st pinning scenario is correct and "lucrative" when you sum the global
gain/loss.
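
To make the arithmetic concrete, here is a toy balance sheet of the attack
(all values are hypothetical sats, not drawn from real channels):

#include <cstdint>
#include <iostream>

int main()
{
    // Hypothetical HTLC values protected by the victims' packages.
    const int64_t htlc_p1 = 100'000; // top-value HTLC of the set (Alice)
    const int64_t htlc_p2 = 60'000;  // Bob
    const int64_t htlc_p3 = 60'000;  // Caroll

    // D' outbids the top-value HTLC by 1 sat, so no economically-bounded
    // victim keeps competing on fees.
    const int64_t d_prime_fee = htlc_p1 + 1;

    // Loss on P1 (fees exceed the stolen HTLC), gain on P2 + P3.
    const int64_t net_gain = (htlc_p1 + htlc_p2 + htlc_p3) - d_prime_fee;
    std::cout << "attacker net gain: " << net_gain << " sats\n"; // 119999
}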

There is a 2nd attack scenario with A + B + C + D, where D is the child of
A, B, C. All those transactions are honestly issued by Alice. Once A + B + C
+ D are propagated in network mempools, Mallory is able to replace A + D
with A' + D' where D' is paying a higher fee. This package A' + D' will
confirm soon if D's feerate was compelling, but Mallory succeeds in delaying
the confirmation of B + C for one or more blocks. As B + C are pre-signed
commitments with a low feerate, they won't confirm without Alice issuing a
new child E. Mallory can repeat the same trick by broadcasting B' + E' and
delay again the confirmation of C.

If the pending HTLCs of the remaining package have a higher value than all
the malicious fee overbids, Mallory should realize a gain. With this 2nd
pinning attack, the malicious entity buys confirmation delay of your
packaged-together commitments.

Assuming those attacks are correct, I'm leaning towards being conservative
with the LDK broadcast backend. Though once again, other L2 devs have
likely other use-cases and opinions :)

>  B' only needs to pay for itself in this case.

Yes, I think it's a nice discount when the UTXO is single-owned. In the
context of a shared-owned UTXO (e.g LN), you might not know if there is an
in-mempool package already spending the UTXO and have to assume the
worst-case scenario, i.e. have B' commit enough fee to pay for A'
replacement bandwidth. I think we can't do that much for this case...
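
As a rough sketch of that worst-case budget (BIP 125 rules 3 and 4; the
function and values here are mine for illustration):

#include <cstdint>

// Worst-case fee B' has to commit if an unknown in-mempool package A' might
// already spend the shared UTXO: the absolute fees of everything evicted
// (BIP 125 rule 3) plus relay bandwidth for the replacement (rule 4).
int64_t WorstCaseChildFee(int64_t evicted_fees_sat, int64_t replacement_vsize,
                          int64_t incremental_relay_feerate_sat_vb)
{
    return evicted_fees_sat +
           incremental_relay_feerate_sat_vb * replacement_vsize;
}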

> If a package meets feerate requirements as a
> package, the parents in the transaction are allowed to replace-by-fee
> mempool transactions. The child cannot replace mempool transactions."

I agree with the Mallory-vs-Alice case. Though if Alice broadcasts A+B' to
replace A+B because the first broadcast isn't satisfying anymore due to
mempool 

Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-23 Thread Antoine Riard via bitcoin-dev
> Correct, if B+C is too low feerate to be accepted, we will reject it. I
> prefer this because it is incentive compatible: A can be mined by itself,
> so there's no reason to prefer A+B+C instead of A.
> As another way of looking at this, consider the case where we do accept
> A+B+C and it sits at the "bottom" of our mempool. If our mempool reaches
> capacity, we evict the lowest descendant feerate transactions, which are
> B+C in this case. This gives us the same resulting mempool, with A and not
> B+C.

I agree here. Doing otherwise, we might evict other mempool transactions in
`MempoolAccept::Finalize` with a higher feerate than B+C, while those
evicted transactions are the most compelling for block construction.

I thought at first that missing this acceptance requirement would break a
fee-bumping scheme like Parent-Pay-For-Child, where a high-fee parent is
attached to a child signed with SIGHASH_ANYONECANPAY, but in this case the
child fee is capturing the parent value. I can't think of other fee-bumping
schemes potentially affected. If they do exist, I would say they're wrong in
their design assumptions.

> If or when we have witness replacement, the logic is: if the individual
> transaction is enough to replace the mempool one, the replacement will
> happen during the preceding individual transaction acceptance, and
> deduplication logic will work. Otherwise, we will try to deduplicate by
> wtxid, see that we need a package witness replacement, and use the package
> feerate to evaluate whether this is economically rational.

IIUC, you have package A+B; during the dedup phase early in
`AcceptMultipleTransactions`, if you observe a same-txid-different-wtxid A'
and A' is higher feerate than A, you trim A and replace it by A' ?

I think this approach is safe; the one which appears unsafe to me is when A'
has a _lower_ feerate, even if A' is already accepted by our mempool ? In
that case IIRC that would be a pinning.
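
A minimal sketch of the safe direction of that dedup decision, with assumed
types (not the actual Core structures):

#include <cstdint>

// Hypothetical view of two same-txid candidates: one in the mempool,
// one in the submitted package.
struct Entry { int64_t fee; int64_t vsize; };

// Swap the witness only if the package's version gives a strictly better
// feerate; accepting a lower-feerate A' would be the pinning case above.
// (Cross-multiplication avoids integer division.)
bool ShouldReplaceWitness(const Entry& in_mempool, const Entry& in_package)
{
    return in_package.fee * in_mempool.vsize >
           in_mempool.fee * in_package.vsize;
}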

Good to see progress on witness replacement before we see usage of the
Taproot tree in multi-party contexts, where a malicious counterparty
inflates its witness to jam an honest spending.

(Note, the commit linked currently points nowhere :))


> Please note that A may replace A' even if A' has higher fees than A
> individually, because the proposed package RBF utilizes the fees and size
> of the entire package. This just requires E to pay enough fees, although
> this can be pretty high if there are also potential B' and C' competing
> commitment transactions that we don't know about.

Ah right, if the package acceptance waives `PaysMoreThanConflicts` for the
individual check on A, the honest package should replace the pinning
attempt. I've not fully parsed the proposed implementation yet.

Though note, I think it's still unsafe for a Lightning
multi-commitment-broadcast-as-one-package, as a malicious A' might have an
absolute fee higher than E. It sounds uneconomical for
an attacker, but I think it's not when you consider that you can "batch"
attack against multiple honest counterparties. E.g, Mallory broadcasts A' +
B' + C' + D' where A' conflicts with Alice's honest package P1, B'
conflicts with Bob's honest package P2, C' conflicts with Caroll's honest
package P3. And D' is a high-fee child of A' + B' + C'.

If D' is higher-fee than P1 or P2 or P3 but inferior to the sum of HTLCs
confirmed by P1+P2+P3, I think it's lucrative for the attacker ?

> So far, my understanding is that multi-parent-1-child is desired for
> batched fee-bumping (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-897951289) and
> I've also seen your response which I have less context on (
> https://github.com/bitcoin/bitcoin/pull/22674#issuecomment-900352202). That
> being said, I am happy to create a new proposal for 1 parent + 1 child
> (which would be slightly simpler) and plan for moving to
> multi-parent-1-child later if that is preferred. I am very interested in
> hearing feedback on that approach.

I think batched fee-bumping is okay as long as you don't have
time-sensitive outputs encumbering your commitment transactions. For the
reasons mentioned above, I think that's unsafe.

What I'm worried about is L2 developers, potentially not aware of all the
mempool subtleties, blurring the difference and always batching their
broadcasts by default.

IMO, a good thing about restraining to 1-parent + 1-child is that we
artificially constrain the L2 design space for now and minimize the risks of
unsafe usage of the package API :)

I think that's a point where it would be relevant to have the opinion of
more L2 devs.

> I think there is a misunderstanding here - let me describe what I'm
> proposing we'd do in this situation: we'll try individual submission for A,
> see that it fails due to "insufficient fees." Then, we'll try package
> validation for A+B and use package RBF. If A+B pays enough, it can still
> replace A'. If A fails for a bad signature, we won't look at B or A+B. Does
> this meet your expectations?

Yes there was a 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-22 Thread Antoine Riard via bitcoin-dev
> Hmm, I'm reading C5 as "If an oracle says X, and Alice and Carol agree,
> they can distribute all the remaining funds as they see fit".

Should be read as an OR:

IF 2 <alice> <oracle> 2 CHECKMULTISIG
ELSE 2 <caroll> <oracle> 2 CHECKMULTISIG
ENDIF
<> 2 IN_OUT_AMOUNT

The empty vector is a wildcard on the spent amount, as this tapscript may
be executed before/after the split or any withdraw option.

> (Relative timelocks would probably be annoying for everyone who wasn't
> the first to exit the pool)

And I think unsafe, if you're wrapping a time-sensitive output in your
withdraw scriptPubkey.

> I think the above fixes that -- when AB is spent it deletes itself and
> the (A,B) pair; when A is spent, it deletes (A, B and AB) and replaces
> them with B'; when B' is spent it just deletes itself.

Right, here the subtlety in reading the scripts is the B' substitution
tapscript inside the A one. And it sounds correct to me that exercising AB
deletes the withdraw pair (A, B).

On Mon, Sep 20, 2021 at 10:52, Anthony Towns wrote:

> On Sat, Sep 18, 2021 at 10:11:10AM -0400, Antoine Riard wrote:
> > I think one design advantage of combining scope-minimal opcodes like
> MERKLESUB
> > with sighash malleability is the ability to update a subset of the
> off-chain
> > contract transactions fields after the funding phase.
>
> Note that it's not "update" so much as "add to"; and I mostly think
> graftroot (and friends), or just updating the utxo onchain, are a better
> general purpose way of doing that. It's definitely a tradeoff though.
>
> > Yes this is a different contract policy that I would like to set up.
> > Let's say you would like to express the following set of capabilities.
> > C0="Split the 4 BTC funds between Alice/Bob and Caroll/Dave"
> > C1="Alice can withdraw 1 BTC after 2 weeks"
> > C2="Bob can withdraw 1 BTC after 2 weeks"
> > C3="Caroll can withdraw 1 BTC after 2 weeks"
> > C4="Dave can withdraw 1 BTC after 2 weeks"
> > C5="If USDT price=X, Alice can withdraw 2 BTC or Caroll can withdraw 2
> BTC"
>
> Hmm, I'm reading C5 as "If an oracle says X, and Alice and Carol agree,
> they can distribute all the remaining funds as they see fit".
>
> > If C4 is exercised, to avoid trust in the remaining counterparty, both
> Alice or
> > Caroll should be able to conserve the C5 option, without relying on the
> updated
> > key path.
>
> > As you're saying, as we know the group in advance, one way to setup the
> tree
> > could be:
> >(A, (B, C), BC), D), BCD), E, F), EF), G), EFG)))
>
> Make it:
>
>   (((AB, (A,B)), (CD, (C,D))), ACO)
>
> AB = DROP  DUP 0 6 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 2BTC
> LESSTHAN
> CD = same but for carol+dave
> A =  DUP  10 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
> B' =  DUP 0 2 TLUV CHECKSIGVERIFY IN_OUT_AMOUNT SUB 1BTC LESSTHAN
> B,C,D = same as A but for bob, etc
> A',C',D' = same as B' but for alice, etc
> ACO =  CHECKSIGVERIFY  CHECKSIG
>
> Probably AB, CD, A..D, A'..D' all want a CLTV delay in there as well.
> (Relative timelocks would probably be annoying for everyone who wasn't
> the first to exit the pool)
>
> > Note, this solution isn't really satisfying as the G path isn't
> neutralized on
> > the Caroll/Dave fork and could be replayed by Alice or Bob...
>
> I think the above fixes that -- when AB is spent it deletes itself and
> the (A,B) pair; when A is spent, it deletes (A, B and AB) and replaces
> them with B'; when B' is spent it just deletes itself.
>
> Cheers,
> aj
>


Re: [bitcoin-dev] Proposal: Package Mempool Accept and Package RBF

2021-09-19 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

> A package may contain transactions that are already in the mempool. We
> remove
> ("deduplicate") those transactions from the package for the purposes of
> package
> mempool acceptance. If a package is empty after deduplication, we do
> nothing.

IIUC, you have a package A+B+C submitted for acceptance, and A is already in
your mempool. You trim out A from the package and then evaluate B+C.

I think this might be an issue if A is the higher-fee element of the ABC
package. B+C package fees might be under the mempool min fee and will be
rejected, potentially breaking the acceptance expectations of the package
issuer ?
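
With made-up numbers (2 sat/vB assumed as the mempool floor):

#include <cstdint>

// A carries the package: trimming it strands B+C below the mempool min fee.
const int64_t fee_a = 10'000, vsize_a = 200;  // A, already in the mempool
const int64_t fee_bc = 200,   vsize_bc = 400; // B+C, evaluated after dedup
// B+C alone: 200 / 400 = 0.5 sat/vB, rejected against a 2 sat/vB floor,
// while A+B+C as a whole would have cleared it at 10'200 / 600 = 17 sat/vB.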

Further, I think the dedup should be done on wtxid, as you might have
multiple valid witnesses, though with varying vsizes and as such offering
different feerates.

E.g you're going to evaluate the package A+B and A' is already in your
mempool with a bigger valid witness. You trim A based on txid, then you
evaluate A'+B, which fails the fee checks. However, evaluating A+B would
have been a success.
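
With assumed numbers, the txid-based trim silently downgrades the evaluated
feerate:

#include <cstdint>

// Same txid, two valid witnesses of different sizes.
const int64_t fee_a = 500, fee_b = 300;
const int64_t vsize_a = 100;       // package A, compact witness
const int64_t vsize_a_prime = 300; // in-mempool A', bloated witness
const int64_t vsize_b = 100;
// A+B  : (500 + 300) / 200 = 4 sat/vB, passes a 3 sat/vB check;
// A'+B : (500 + 300) / 400 = 2 sat/vB, fails it, although A was submitted.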

AFAICT, the dedup rationale would be to save on CPU time/IO disk, to avoid
repeated signature verification and parent UTXO fetches ? Can we achieve
the same goal by bypassing tx-level checks for already-in transactions while
conserving the package integrity for package-level checks ?

> Note that it's possible for the parents to be
> indirect
> descendants/ancestors of one another, or for parent and child to share a
> parent,
> so we cannot make any other topology assumptions.

I'm not clearly understanding the accepted topologies. By "parent and child
to share a parent", do you mean that the set of transactions A, B, C, where
B is spending A and C is spending A and B, would be correct ?

If yes, is there a width-limit introduced or do we fall back on
MAX_PACKAGE_COUNT=25 ?

IIRC, one rationale for coming up with this topology limitation was to
lower the DoS risks when potentially deploying p2p packages.

Considering the current Core's mempool acceptance rules, I think CPFP
batching is unsafe for LN time-sensitive closures. A malicious tx-relay
jamming successful on one channel commitment transaction would contaminate
the remaining commitments sharing the same package.

E.g, you broadcast the package A+B+C+D+E where A,B,C,D are commitment
transactions and E a shared CPFP. If a malicious A' transaction has a
better feerate than A, the whole package acceptance will fail. Even if A'
confirms in the following block,
the propagation and confirmation of B+C+D have been delayed. This could
result in a loss of funds.

That said, if you're broadcasting commitment transactions without
time-sensitive HTLC outputs, I think the batching is effectively a fee
saving as you don't have to duplicate the CPFP.

IMHO, I'm leaning towards deploying 1-parent/1-child during a first phase.
I think it's the most conservative step still improving second-layer safety.

> *Rationale*: It would be incorrect to use the fees of transactions that are
> already in the mempool, as we do not want a transaction's fees to be
> double-counted for both its individual RBF and package RBF.

I'm unsure about the logical order of the checks proposed.

Say A+B is submitted to replace A', where A pays 0 sats, B pays 200 sats and
A' pays 100 sats. If we apply the individual RBF on A, A+B acceptance
fails. For this reason, I think the individual RBF should be bypassed and
only the package RBF should apply ?

Note this situation is plausible: with the current LN design, your
counterparty can have a commitment transaction with a better fee just by
selecting a higher `dust_limit_satoshis` than yours.

> Examples F and G [14] show the same package, but P1 is submitted
> individually before the package in example G. In example F, we can see that
> the 300vB package pays an additional 200sat in fees, which is not enough to
> pay for its own bandwidth (BIP125#4). In example G, we can see that P1 pays
> enough to replace M1, but using P1's fees again during package submission
> would make it look like a 300sat increase for a 200vB package. Even
> including its fees and size would not be sufficient in this example, since
> the 300sat looks like enough for the 300vB package. The calculation after
> deduplication is 100sat increase for a package of size 200vB, which
> correctly fails BIP125#4. Assume all transactions have a size of 100vB.

What problem are you trying to solve with the package feerate *after* dedup
rule ?

My understanding is that an in-package transaction might be already in the
mempool. Therefore, to compute a correct RBF penalty replacement, the vsize
of this transaction could be discarded, lowering the cost of package RBF.

If we keep a "safe" dedup mechanism (see my point above), I think this
discount is justified, as the validation cost of node operators is paid for ?
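
For example G, the post-dedup check quoted above boils down to (assuming a
1 sat/vB incremental relay feerate):

#include <cstdint>

const int64_t new_fees = 100;  // only the fees not already in the mempool
const int64_t new_vsize = 200; // only the deduplicated transactions' size
const bool rule4_ok = new_fees >= 1 * new_vsize; // false: correctly rejected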

> The child cannot replace mempool transactions.

Let's say you issue package A+B, then package C+B', where B' is a child of
both A and C. This rule fails the acceptance of C+B' ?

I 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-18 Thread Antoine Riard via bitcoin-dev
ulation is a hard fork; existing software
> already verifies the calculation even if the script version is unknown.

Thinking more, you're right...

In case of TapTweakV2, non-upgraded nodes won't be able to pass the
validation of unknown script version (0x20), and the failure will provoke a
fork.

Could we commit the spent internal pubkey parity bit as a one-more-tweak
transparent to non-upgraded nodes ?

For upgraded nodes, P = R + (t2 * G) and Q = P + (t1 * G).
For non-upgraded nodes, Q = P + (t1 * G).

Could we add a new validation rule (e.g VerifyInternalPubkeyCommitment)
conditional on a newer tapscript version just before
VerifyTaprootCommitment ?

> That is, the strategy isn't "tweak the scripts by delaying them 3 months"
> it's "tweak the merkle tree, to replace the scripts that would be delayed
> with a new script that has a delay and then allows itself to be replaced
> by the original scripts that we now want back".

Yes, that's a good strategy to have a logically equivalent subtree embedded
in the modifying tapscript.

If you have multiple modifying scripts and you can't predict the order, I
think the tree complexity will quickly become too high, and graftroot-like
approaches are likely better.

On Wed, Sep 15, 2021 at 02:51, Anthony Towns wrote:

> On Sun, Sep 12, 2021 at 07:37:56PM -0400, Antoine Riard via bitcoin-dev
> wrote:
> > While MERKLESUB is still WIP, here the semantic. [...]
> > I believe this is matching your description and the main difference
> compared to
> > your TLUV proposal is the lack of merkle tree extension, where a new
> merkle
> > path is added in place of the removed tapscript.
>
> I think "  MERKLESUB" is the same as " OP_0 2 TLUV", provided
>  happens to be the same index as the current input. So it misses the
> ability to add branches (replacing OP_0 with a hash), the ability to
> preserve the current script (replacing 2 with 0), and the ability to
> remove some of the parent paths (replacing 2 with 4*n); but gains the
> ability to refer to non-corresponding outputs.
>
> > > That would mean anyone who could do a valid spend of the tx could
> > > violate the covenant by spending to an unencumbered witness v2 output
> > > and (by collaborating with a miner) steal the funds. I don't think
> > > there's a reasonable way to have existing covenants be forward
> > > compatible with future destination addresses (beyond something like CTV
> > > that strictly hardcodes them).
> > That's a good catch, thanks for raising it :)
> > Depends how you define reasonable, but I think one straightforward fix
> is to
> > extend the signature digest algorithm to encompass the segwit version
> (and
> > maybe program-size ?) of the spending transaction outputs.
>
> That... doesn't sound very straightforward to me; it's basically
> introducing a new covenant approach, that's getting fixed into a
> signature, rather than being a separate opcode.
>
> I think a better approach for that would be to introduce the opcode (eg,
> PUSH_OUTPUT_SCRIPTPUBKEY, and SUBSTR to be able to analyse the segwit
> version), and make use of graftroot to allow a signature to declare that
> it's conditional on some extra script code. But it feels like it's going
> a bit off topic.
>
> > > Having the output position parameter might be an interesting way to
> > > merge/split a vault/pool, but it's not clear to me how much sense it
> > > makes to optimise for that, rather than just doing that via the key
> > > path. For pools, you want the key path to be common anyway (for privacy
> > > and efficiency), so it shouldn't be a problem; but even for vaults,
> > > you want the cold wallet accessible enough to be useful for the case
> > > where theft is attempted, and maybe that's also accessible enough for
> > > the occasional merge/split to keep your utxo count/sizes reasonable.
> > I think you can come up with interesting contract policies. Let's say
> you want
> > to authorize the emergency path of your pool/vault balances if X happens
> (e.g a
> > massive drop in USDT price signed by DLC oracles). You have (A+B+C+D)
> forking
> > into (A+B) and (C+D) pooled funds. To conserve the contracts
> pre-negotiated
> > economic equilibrium, all the participants would like the emergency path
> to be
> > inherited on both forks. Without relying on the key path interactivity,
> which
> > is ultimately a trust on the post-fork cooperation of your counterparty ?
>
> I'm not really sure what you're saying there; is that any different to a
> pool of (A and B) where A suddenly wants to withdraw funds ASAP and can't
> wait for a key path signature? In that case 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-12 Thread Antoine Riard via bitcoin-dev
Sorry for the lack of clarity, sometimes it sounds easier to explain ideas
with code.

While MERKLESUB is still WIP, here the semantic. If the input spent is a
SegWit v1 Taproot output, and the script path spending is used, the top
stack item is interpreted as an output position of the spending
transaction. The second top stack item is interpreted as a 32-byte x-only
pubkey to be negated and added to the spent internal pubkey.

The spent tapscript is removed from the merkle tree of tapscripts and a new
merkle root is recomputed with the first node element of the spending
control block as the tapleaf hash. From then, this new merkle root is added
as the taproot tweak to the updated internal pubkey, while correcting for
parity. This new tweaked pubkey is interpreted as a v1 witness program and
must match the scriptPubKey of the spending transaction output at the
passed position. Otherwise, MERKLESUB returns a failure.

I believe this is matching your description and the main difference
compared to your TLUV proposal is the lack of merkle tree extension, where
a new merkle path is added in place of the removed tapscript. Motivation is
saving up the one byte of the new merkle path step, which is not necessary
for our CoinPool use-case.

> That would mean anyone who could do a valid spend of the tx could
> violate the covenant by spending to an unencumbered witness v2 output
> and (by collaborating with a miner) steal the funds. I don't think
> there's a reasonable way to have existing covenants be forward
> compatible with future destination addresses (beyond something like CTV
> that strictly hardcodes them).

That's a good catch, thanks for raising it :)

Depends how you define reasonable, but I think one straightforward fix is
to extend the signature digest algorithm to encompass the segwit version
(and maybe program-size ?) of the spending transaction outputs.

Then you add a "contract" aggregated-key in every tapscript where a
TLUV/MERKLESUB covenant is present. The off-chain contract participant can
exchange signatures at initial setup committing to the segwit version. I
think this addresses the sent-to-unknown-witness-output point ?

When future destination addresses are deployed, assuming a new round of
interactivity, the participants can send the funds to a v1+ output by
exchanging signatures with SIGHASH_ALL, that way authorizing the bypass of
TLUV/MERKLESUB.

Of course, in case of v1+ deployment, the key path could be used. Though
this path could have been "burnt" by picking up an internal point with an
unknown scalar following the off-chain contract/use-case semantic ?

> Having the output position parameter might be an interesting way to
> merge/split a vault/pool, but it's not clear to me how much sense it
> makes to optimise for that, rather than just doing that via the key
> path. For pools, you want the key path to be common anyway (for privacy
> and efficiency), so it shouldn't be a problem; but even for vaults,
> you want the cold wallet accessible enough to be useful for the case
> where theft is attempted, and maybe that's also accessible enough for
> the occasional merge/split to keep your utxo count/sizes reasonable.

I think you can come up with interesting contract policies. Let's say you
want to authorize the emergency path of your pool/vault balances if X
happens (e.g a massive drop in USDT price signed by DLC oracles). You have
(A+B+C+D) forking into (A+B) and (C+D) pooled funds. To conserve the
contracts pre-negotiated economic equilibrium, all the participants would
like the emergency path to be inherited on both forks. Without relying on
the key path interactivity, which is ultimately a trust on the post-fork
cooperation of your counterparty ?

> Saving a byte of witness data at the cost of specifying additional
> opcodes seems like optimising the wrong thing to me.

I think we should keep in mind that any overhead cost in the usage of a
script primitive is echoed to the users of off-chain contracts/payment
channels. If the tapscripts are bigger, your average on-chain spends in
non-cooperative scenarios are increased in consequence, and so is your
fee-bumping reserve, thus making those systems less economically
accessible.

If we really envision having billions of Bitcoin users owning a utxo or
shards of them, we should also think that those users might have limited
means to pay on-chain fees. Where should the line be between resource
optimizations and protocol/implementation complexity ? Hard to tell.

> I don't think that works, because different scripts in the same merkle
> tree can have different script versions, which would here indicate
> different parities for the same internal pub key.

Let me make it clearer. We introduce a new tapscript version 0x20, forcing
a new bit in the first byte of the control block to be interpreted as the
parity bit of the spent internal pubkey. To ensure this parity bit is
faithful and won't break the updated key path, it's committed in 

Re: [bitcoin-dev] TAPLEAF_UPDATE_VERIFY covenant opcode

2021-09-10 Thread Antoine Riard via bitcoin-dev
Hi AJ,

Thanks for finally putting the pieces together! [0]

We've been hacking with Gleb on a paper for the CoinPool protocol [1]
during the last weeks and it should be public soon, hopefully highlighting
what kind of schemes TAPLEAF_UPDATE_VERIFY-style covenants enable :)

Here are a few early feedbacks on this specific proposal,

> So that makes it relatively easy to imagine creating a new taproot address
> based on the input you're spending by doing some or all of the following:
>
>  * Updating the internal public key (ie from P to P' = P + X)
>  * Trimming the merkle path (eg, removing CD)
>  * Removing the script you're currently executing (ie E)
>  * Adding a new step to the end of the merkle path (eg F)

"Talk is cheap. Show me the code" :p

case OP_MERKLESUB:
{
    if (!(flags & SCRIPT_VERIFY_MERKLESUB)) {
        break;
    }

    if (stack.size() < 2) {
        return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
    }

    valtype& vchPubKey = stacktop(-1);

    if (vchPubKey.size() != 32) {
        break;
    }

    const std::vector<unsigned char>& vch = stacktop(-2);
    int nOutputPos = CScriptNum(vch, fRequireMinimal).getint();

    if (nOutputPos < 0) {
        return set_error(serror, SCRIPT_ERR_NEGATIVE_MERKLEVOUT);
    }

    if (!checker.CheckMerkleUpdate(*execdata.m_control, nOutputPos, vchPubKey)) {
        return set_error(serror, SCRIPT_ERR_UNSATISFIED_MERKLESUB);
    }
    break;
}

case OP_NOP1: case OP_NOP5:


template <class T>
bool GenericTransactionSignatureChecker<T>::CheckMerkleUpdate(const std::vector<unsigned char>& control, unsigned int out_pos, const std::vector<unsigned char>& point) const
{
    //! The internal pubkey (x-only, so no Y coordinate parity).
    XOnlyPubKey p{uint256(std::vector<unsigned char>(control.begin() + 1, control.begin() + TAPROOT_CONTROL_BASE_SIZE))};
    //! Update the internal key by subtracting the point.
    XOnlyPubKey s{uint256(point)};
    XOnlyPubKey u;
    try {
        u = p.UpdateInternalKey(s).value();
    } catch (const std::bad_optional_access& e) {
        return false;
    }

    //! The first control node is made the new tapleaf hash.
    //! TODO: what if there is no control node ?
    uint256 updated_tapleaf_hash;
    updated_tapleaf_hash = uint256(std::vector<unsigned char>(control.data() + TAPROOT_CONTROL_BASE_SIZE, control.data() + TAPROOT_CONTROL_BASE_SIZE + TAPROOT_CONTROL_NODE_SIZE));

    //! The committed-to output must be in the spent transaction vout range.
    if (out_pos >= txTo->vout.size()) return false;
    int witnessversion;
    std::vector<unsigned char> witnessprogram;
    txTo->vout[out_pos].scriptPubKey.IsWitnessProgram(witnessversion, witnessprogram);
    //! The committed-to output must be at least a witness v1 program.
    if (witnessversion == 0) {
        return false;
    } else if (witnessversion == 1) {
        //! The committed-to output key.
        const XOnlyPubKey q{uint256(witnessprogram)};
        //! Compute the merkle root from the leaf and the incremented-by-one path.
        const uint256 merkle_root = ComputeTaprootMerkleRoot(control, updated_tapleaf_hash, 1);
        //! TODO: modify MERKLESUB design
        bool parity_ret = q.CheckTapTweak(u, merkle_root, true);
        bool no_parity_ret = q.CheckTapTweak(u, merkle_root, false);
        if (!parity_ret && !no_parity_ret) {
            return false;
        }
    }
    return true;
}


Here are the main chunks for a "<n> <point> OP_MERKLESUB" opcode, with `n` the
output position which is checked for update and `point` the x-only pubkey
which must be subtracted from the internal key.

I think one design advantage of explicitly passing the output position as a
stack element is giving more flexibility to your contract dev. The first
output could be SIGHASH_ALL locked-down, e.g. "you have to pay Alice on
output 1 while pursuing the contract semantic on output 2".

One could also imagine a list of output positions to force the taproot
update on multiple outputs ("OP_MULTIMERKLESUB"). Taking back your citadel
joint venture example, partners could decide to split the funds in 3
equivalent amounts *while* conserving the pre-negotiated script policies [2]

For the merkle branches extension, I was thinking of introducing a separate
OP_MERKLEADD, maybe to *add* a point to the internal pubkey group signer.
If you're only interested in leaf pruning, using OP_MERKLESUB only should
save you one byte of empty vector ?

We can also explore more fancy opcodes where the updated merkle branch is
pushed on the stack for deep manipulations. Or even n-dimensional
inspections if combined with your G'root [3] ?

Note, this current OP_MERKLESUB proposal doesn't deal with committing the
parity of the internal pubkey as part of the spent utxo. As you highlighted
well in your other mail, if we want to conserve the updated 

Re: [bitcoin-dev] Note on Sequence Lock Upgrades Defect

2021-09-09 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

Answering here from #22871 discussions.

I agree on the general principle to not blur mempool policy signaling in
committed transaction data. Beyond preserving upgradeability, another good
argument is to let L2 nodes update the mempool policy signaling of their
pre-signed transactions non-interactively. If one of the transaction fields
is assigned mempool semantics, in case of tightening policy changes you
will need to re-sign, or bear the risks of having non-propagating
transactions, which opens the door for exploitation by a malicious
counterparty. I think this point is kinda relevant if we have future
cross-layer coordinated safety fixes to deal with, à la CVE-2021-31876.

Even further, a set of L2 counterparties might like to pick up divergent
tx-relay/mempool policies; having the signaling fields as part of the
signature forces them to come to consensus.

I think we can take the opportunity of p2p packages to introduce a new
field to signal policy. Of course, a malicious tx-relay peer could modify
its content to jam your transaction's propagation, but in that case it is
easier to just drop it.

One issue with taking back the `nSequence` field for consensus semantics
is depriving the application layer of a discrete, zero-cost payload (e.g.
the LN obfuscated commitment number watermark). This might be
controversial, as we'll increase the price of such applications if they're
still willing to relay application-specific data through the p2p network
(e.g. force them to use a costly OP_RETURN output or payer/payee
interactions to set up a pay-to-contract).

W.r.t. flag day activation to smooth policy deployment, I think that's
something we might rely on in the future, though we could distinguish a few
types of policy deployments :
1) loosening changes (e.g full-rbf/dust threshold removal), a transaction
which was relaying under
the former policy should relay under the new one
2) tightening changes (e.g #22871), a transaction which was relaying under
the former policy
might not relay under the new one
3) new feature introduced (e.g packages), a transaction is offered a new
mode of relay

I think 1) doesn't need that level of ecosystem coordination, as
applications/second-layers should always benefit from such changes. Maybe
with the exception of full-rbf, where we have historical 0-conf software
with (broken) security assumptions made on the opt-out RBF mechanism. Same
with 3): better to have new features deployed gradually, as a flag day
activation in this case won't mean that all higher stacks will jump to use
package-relay ?

Where a flag day might make sense would be for 2) ? It would create a
higher level of commitment by the base layer software instead of a pure
communication on the ML/GH, which might not be concretized in the announced
release due to slow review process/feature freeze/rebase conflicts...
Reversing the process and asking Bitcoin applications/higher layers to
update first might get us in the trap of never doing the change, as someone
might have a small use-case in the corner relying on a given policy
behavior.

That said, w.r.t. the proposed policy change in #22871, I think it's
better to deploy full-rbf first, then give a time buffer to higher
applications to free up the `nSequence` field, and finally start to
discourage the usage. Otherwise, by introducing new discouragement waivers,
e.g. not rejecting the usage of the top 8 bits, I think we're moving away
from the policy design principle we're trying to establish (separation of
mempool policy signaling from consensus data).

On Fri, Sep 3, 2021 at 23:32, Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Bitcoin Devs,
>
> I recently noticed a flaw in the Sequence lock implementation with respect
> to upgradability. It might be the case that this is protected against by
> some transaction level policy (didn't see any in policy.cpp, but if not,
> I've put up a blogpost explaining the defect and patching it
> https://rubin.io/bitcoin/2021/09/03/upgradable-nops-flaw/
>
> I've proposed patching it here
> https://github.com/bitcoin/bitcoin/pull/22871, it is proper to widely
> survey the community before patching to ensure no one is depending on the
> current semantics in any live application lest this tightening of
> standardness rules engender a confiscatory effect.
>
> Best,
>
> Jeremy
>
> --
> @JeremyRubin 
> 


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Antoine Riard via bitcoin-dev
> As developers, we have no control over prevailing feerates, so this is a
> problem LN needs to deal with regardless of Bitcoin Core's dust limit.

Right, as of today we're going to trim-to-dust any commitment output whose
value is inferior to the transaction owner's `dust_limit_satoshis` plus the
HTLC-claim (either success/timeout) fee at the agreed-on feerate. So the
feerate is the most significant variable in defining what's a LN
*uneconomical output*.

IMO this approach presents annoying limitations. First, you still need to
come to an agreement among channel operators on the mempool feerate. Such
an agreement might be problematic to find, as on one side you would like to
let your counterparty be free to pick a feerate gauged as efficient for the
confirmation of their transactions, but at the same time not too high to
burn to fees your low-value HTLCs that *your* fee-estimator judged as sane
to claim.

Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime
of the HTLC. An HTLC might be considered as dust at block 100, at which
mempools are full, though its expiration only occurs at block 200, at which
mempools are empty and this HTLC is fine to claim again. I think this
inaccuracy will become even worse with a wider deployment of long-lived
routed packets over LN, such as DLCs or hodl invoices.

All this to say, if for those reasons LN devs replace feerate negotiation
in the trim-to-dust definition with a static feerate, it would likely put a
higher pressure on full-node operators, as the number of uneconomical
outputs might increase.

(From a LN viewpoint, I would say we're trying to solve a price discovery
issue, namely the cost to write on the UTXO set, in a distributed system,
where any deviation from the "honest" price means you trust your LN
counterparty more)

> They could also use trustless probabalistic payments, which have been
discussed in the context of LN for handling the problem of payments too
small to be represented onchain since early 2016:
https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098

Thanks for bringing probabilistic payments to the surface; yes, that's a
worthy alternative approach for low-value payments to keep in mind.

On Tue, Aug 10, 2021 at 02:15, David A. Harding wrote:

> On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> > I'm pretty conservative about increasing the standard dust limit in any
> > way. This would convert a higher percentage of LN channels capacity into
> > dust, which is coming with a lowering of funds safety [0].
>
> I think that reasoning is incomplete.  There are two related things here:
>
> - **Uneconomical outputs:** outputs that would cost more to spend than
>   the value they contain.
>
> - **Dust limit:** an output amount below which Bitcoin Core (and other
>   nodes) will not relay the transaction containing that output.
>
> Although raising the dust limit can have the effect you describe,
> increases in the minimum necessary feerate to get a transaction
> confirmed in an appropriate amount of time also "converts a higher
> percentage of LN channel capacity into dust".  As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.
>
> (Related to your linked thread, that seems to be about the risk of
> "burning funds" by paying them to a miner who may be a party to the
> attack.  There's plenty of other alternative ways to burn funds that can
> change the risk profile.)
>
> > the standard dust limit [...] introduces a trust vector
>
> My point above is that any trust vector is introduced not by the dust
> limit but by the economics of outputs being worth less than they cost to
> spend.
>
> LN node operators might be willing to compensate this "dust" trust
> vector
> > by relying on side-trust model
>
> They could also use trustless probabalistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
>
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178
>
> (Probabalistic payments were discussed in the general context of Bitcoin
> well before LN was proposed, and Elements even includes an opcode for
> creating them.)
>
> > smarter engineering such as utreexo on the base-layer side
>
> Utreexo doesn't solve this problem.  Many nodes (such as miners) will
> still want to store the full UTXO set and access it quickly,  Utreexo
> proofs will grow in size with UTXO set size (though, at best, only
> log(n)), so full node operators will still not want their bandwidth
> wasted by people who create UTXOs they have no reason to spend.
>
> > I think the status quo is good enough for now
>
> I agree.
>
> -Dave
>

Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-09 Thread Antoine Riard via bitcoin-dev
I'm pretty conservative about increasing the standard dust limit in any
way. This would convert a higher percentage of LN channels' capacity into
dust, which comes with a lowering of funds safety [0]. Of course, we
can adjust the LN security model around dust handling to mitigate the
safety risk in adversarial settings, but ultimately the standard
dust limit creates a "hard" bound, and as such it introduces a trust
vector in the reliability of your peer to not go
onchain with a commitment heavily loaded with dust-HTLCs you own.

LN node operators might be willing to compensate this "dust" trust vector
by relying on a side-trust model, such as PKI to authenticate their peers or
API tokens (LSATs, PoW tokens), probably not free from consequences for the
"openness" of the LN topology...

Further, I think any authoritative setting of the dust limit presents the
risk of becoming ill-adjusted w.r.t. market realities after a few months
or years, and would need periodic reevaluations. Those reevaluations, if
not automated, would become a vector of endless dramas and bikeshedding as
the L2 ecosystems grow bigger...

Note, this would also constrain the design space of newer fee schemes, such
as negotiated-with-mining-pool and discounted consolidation during
low-feerate periods deployed by producers of low-value outputs.
Moreover, as an operational point, if we proceed to such an increase on the
base layer, e.g. to 20 sat/vb, we're going to severely damage the
propagation of any LN transaction where a commitment transaction is built
with less than 20 sat/vb outputs. Of course, Core's policy deployment on
the base layer is gradual, but we should first give a time window for the
LN ecosystem to upgrade, and as of today we're still devoid of the mechanisms
to do it cleanly and asynchronously (e.g. dynamic upgrade or quiescence
protocol [1]).

That said, as raised by other commentators, I don't deny we have a
long-term tension between L2 nodes and full-node operators about UTXO
set growth, but for now I would rather solve this with smarter engineering,
such as utreexo on the base-layer side, or multi-party shared-utxo or
compressed colored coins/authentication smart contracts (e.g
opentimestamp's merkle tree in OP_RETURN) on the upper layers, rather than
altering the current equilibrium.

I think the status quo is good enough for now, and I believe we would be
better off to learn from another development cycle before tweaking the dust
limit in any sense.

Antoine

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002714.html
[1] https://github.com/lightningnetwork/lightning-rfc/pull/869

On Sun, Aug 8, 2021 at 14:53, Jeremy wrote:

> We should remove the dust limit from Bitcoin. Five reasons:
>
> 1) it's not our business what outputs people want to create
> 2) dust outputs can be used in various authentication/delegation smart
> contracts
> 3) dust sized htlcs in lightning (
> https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
> force channels to operate in a semi-trusted mode which has implications
> (AFAIU) for the regulatory classification of channels in various
> jurisdictions; agnostic treatment of fund transfers would simplify this
> (like getting a 0.01 cent dividend check in the mail)
> 4) thinly divisible colored coin protocols might make use of sats as value
> markers for transactions.
> 5) should we ever do confidential transactions we can't prevent it without
> compromising privacy / allowed transfers
>
> The main reasons I'm aware of not allow dust creation is that:
>
> 1) dust is spam
> 2) dust fingerprinting attacks
>
> 1 is (IMO) not valid given the 5 reasons above, and 2 is preventable by
> well behaved wallets to not redeem outputs that cost more in fees than they
> are worth.
>
> cheers,
>
> jeremy
>
> --
> @JeremyRubin 
> 


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-07-11 Thread Antoine Riard via bitcoin-dev
> So the sha256 of the span of the group doesn't commit to start and end
> -- it just serializes a vector, so commits to the number of elements,
> the order, and the elements themselves.

Gotcha, it wasn't clear to me that the new state pair isn't committed as
part of the annex.

I had been confused by "Introduce a new SIGHASH_GROUP flag, as an
alternative to ALL/SINGLE/NONE, that commits to each output i, start <= i <
end."

> Does the above resolve that?

I think so. It shouldn't be susceptible to any spend replay attack, as the
state pair prevents output group overlapping, though you might still have to
be careful about siphoning ? Something you should already care about if you
use SIGHASH_SINGLE and your x's amount > y's value.

On Fri, Jul 9, 2021 at 21:47, Anthony Towns wrote:

> On Fri, Jul 09, 2021 at 09:19:45AM -0400, Antoine Riard via bitcoin-dev
> wrote:
> > > The easy way to avoid O(n^2) behaviour in (3) is to disallow partial
> > > overlaps. So let's treat the tx as being distinct bundles of x-inputs
> > > and y-outputs, and we'll use the annex for grouping, since that is
> > > committed to by signatures. Call the annex field "sig_group_count".
> > > When processing inputs, setup a new state pair, (start, end), initially
> > > (0,0).
> > > When evaluating an input, lookup sig_group_count. If it's not present,
> > > then set start := end. If it's present and 0, leave start and end
> > > unchanged. Otherwise, if it's present and greater than 0, set
> > > start := end, and then set end := start + sig_group_count.
> > IIUC the design rationale, the "sig_group_count" locks down the hashing of
> > outputs for a given input, thus allowing midstate reuse across input
> > signatures.
>
> No midstates, the message being signed would just replace
> SIGHASH_SINGLE's:
>
>   sha_single_output: the SHA256 of the corresponding output in CTxOut
>   format
>
> with
>
>   sha_group_outputs: the SHA256 of the serialization of the group
>   outputs in CTxOut format.
>
> ie, you'd take span{start,end}, serialize it (same as if it were
> a vector of just those CTxOuts), and sha256 it.
>
> > Let's say you want to combine {x_1, y_1} and {x_2, y_2} where {x, y}
> denotes
> > bundles of Lightning commitment transactions.
> > x_1 is dual-signed by Alice and Bob under the SIGHASH_GROUP flag with
> > `sig_group_count`=3.
> > x_2 is dual-signed by Alice and Caroll under the SIGHASH_GROUP flag, with
> > `sig_group_count`=2.
> > y_1 and y_2 are disjunctive.
> > At broadcast, Alice is not able to combine {x_1,y_1} and {x_2, y_2} for
> the
> > reason that x_1, x_2 are colliding on the absolute output position.
>
> So the sha256 of the span of the group doesn't commit to start and end
> -- it just serializes a vector, so commits to the number of elements,
> the order, and the elements themselves. So you're taking serialize(y_1)
> and serialize(y_2), and each of x_1 signs against the former, and each
> of x_2 signs against the latter.
>
> (Note that the annex for x_1_0 specifies sig_group_count=len(y_1)
> and the annex for x_1_{1..} specifies sig_group_count=0, for "reuse
> previous input's group", and the signatures for each input commit to
> the annex anyway)
>
> > One fix could be to skim the "end > num_outputs" semantic,
>
> That's only there to ensure the span doesn't go out of range, so I don't
> think it makes any sense to skip it?
>
> > I think this SIGHASH_GROUP proposal might solve other use-cases, but if I
> > understand the semantics correctly, it doesn't seem to achieve the batch
> > fee-bumping of multiple Lightning commitment with O(1) onchain footprint
> I was
> > thinking of for IOMAP...
>
> Does the above resolve that?
>
> Cheers,
> aj
>
>


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-07-09 Thread Antoine Riard via bitcoin-dev
On Thu, May 27, 2021 at 04:14:13PM -0400, Antoine Riard via bitcoin-dev
wrote:
> This overhead could be smoothed even further in the future with more
advanced
> sighash malleability flags like SIGHASH_IOMAP, allowing transaction
signers to
> commit to a map of inputs/outputs [2]. In the context of input-based, the
> overflowed fee value could be redirected to an outgoing output.

> Input-based (SIGHASH_ANYPREVOUT+SIGHASH_IOMAP): Multiple chains of
transactions
> might be aggregated together *non-interactively*. One bumping input and
> outgoing output can be attached to the aggregated root.

> [2] https://bitcointalk.org/index.php?topic=252960.0

> I haven't seen any recent specs for "IOMAP", but there are a few things
> that have bugged me about them in the past:

TBH, I don't think we have gone further with Darosior than comparing the
compression schemes relevant for the bitfield :)

Thanks for starting the hard grinding work!

>  (1) allowing partially overlapping sets of outputs could allow "theft",
>  eg if I give you a signature "you can spend A+B as long as I get X"
>  and "you can spend A+C as long as I get X", you could combine them
>  to spend A+B+C instead but still only give me 1 X.

Yes, I think there is an even more unsafe case than described. A
third party knowledgeable about the partial sets could combine them, then
attach an additional siphoning output Y. E.g, if {A=50, B=50, C=50} and
X=100, the third party could attach output Y=50 ?
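
In numbers, reusing the values above:

#include <cstdint>

// Combining the two partially-overlapping sets over-funds X, and the
// combiner can pocket the difference as a new output Y.
const int64_t a = 50, b = 50, c = 50; // input values
const int64_t x = 100;                // the only output either signature enforces
const int64_t siphoned_y = (a + b + c) - x; // = 50, attachable as output Y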

Though I believe the validity of those thefts is a function of further
specification of the transaction digest coverage, as you might have a
malleability scheme where B's or C's signature hashes implicitly
commit to the input subset order. If you have `H_prevouts(A || B)` and
`H_prevouts(A || C)`, an attacker wouldn't be able to satisfy both B's and
C's scripts in the same transaction ?

One mitigation which was mentioned in previous pinning discussions was to
add a per-participant finalizing key to A's script and thus lock down the
transaction template at broadcast. I don't think it works here, as you can't
assume that your counterparties, from different protocol sessions, won't
collude together to combine their finalizing signatures and achieve a spend
replay across sessions ?

That said, I'm not even sure we should disallow partially overlapping sets
of outputs at the consensus-level, one could imagine a crowdfunding
application where you delegate A+B and A+C to different parties, and you
implicitly allow them to cooperate as long as they fulfill X's output value
?

>  (2) a range specification or a whole bitfield is a lot heavier than an
>  extra bit to add to the sighash

Yes, one quick optimization in the case of a far-depth output committed in
the bitfield could be to have a few initial bits serving as vectors to
blank out unused bitfield spaces. Though I concede a new sighash bit
arithmetic might be too fancy for consensus code.
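
If it helps, here is a toy sketch of that blanking idea in Python (purely
illustrative; the chunk size and encoding are arbitrary assumptions of
mine):

    # Prefix the output bitfield with a per-chunk presence mask so that
    # far-depth, all-zero regions cost one mask bit instead of CHUNK bits.
    CHUNK = 8

    def compress(bits):
        chunks = [bits[i:i + CHUNK] for i in range(0, len(bits), CHUNK)]
        mask = [1 if any(c) else 0 for c in chunks]
        body = [c for c in chunks if any(c)]
        return mask, body  # decoder re-inserts all-zero chunks from mask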


>  (3) this lets you specify lots of different ways of hashing the
>  outputs, which then can't be cached, so you get kind-of quadratic
>  behaviour -- O(n^2/8) where n/2 is the size of the inputs, which
>  gives you the number of signatures, and n/2 is also the size of the
>  outputs, so n/4 is a different half of the output selected for each
>  signature in the input.

If you assume n is the size of the transaction data, and that each
signature hash commits to the inputs + half of the outputs, yes I think
it's an even worse kind-of quadratic, like O(3n^2/4) ? And you might even
worsen the hashing as a function of the flexibility allowed, e.g. still
committing to the whole transaction size but with a different combination
order of outputs selected for each signature.

But under the "don't bring me problems, bring me solutions" banner, here's
an idea.

> The easy way to avoid O(n^2) behaviour in (3) is to disallow partial
> overlaps. So let's treat the tx as being distinct bundles of x-inputs
> and y-outputs, and we'll use the annex for grouping, since that is
> committed to by signatures. Call the annex field "sig_group_count".

> When processing inputs, setup a new state pair, (start, end), initially
> (0,0).
>
> When evaluating an input, lookup sig_group_count. If it's not present,
> then set start := end. If it's present and 0, leave start and end
> unchanged. Otherwise, if it's present and greater than 0, set
> start := end, and then set end := start + sig_group_count.

IIUC the design rationale, the "sig_group_count" locks down the hashing of
outputs for a given input, thus allowing midstate reuse across input
signatures.
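
To check my reading, here is a toy model of that bookkeeping in Python
(function names and the validity helper are mine, not part of the
proposal):

    def group_spans(annex_counts, num_outputs):
        # annex_counts: per-input sig_group_count, None if not present
        spans = []
        start, end = 0, 0
        for count in annex_counts:
            if count is None:
                start = end              # not present: start := end
            elif count > 0:
                start = end              # open a new group of `count`
                end = start + count
            # count == 0: keep (start, end) from the previous input
            spans.append((start, end))
        return spans

    def group_sig_valid(span, num_outputs):
        # A SIGHASH_GROUP signature commits to outputs start <= i < end
        # and is invalid on an empty or out-of-range span.
        start, end = span
        return start != end and end <= num_outputs

    # 3 inputs, 4 outputs: input 0 opens a group of 2 outputs, input 1
    # joins it, input 2 opens a group over the remaining 2 outputs.
    assert group_spans([2, 0, 2], 4) == [(0, 2), (0, 2), (2, 4)]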

> Introduce a new SIGHASH_GROUP flag, as an alternative to ALL/SINGLE/NONE,
> that commits to each output i, start <= i < end. If start==end or end >
> num_outputs, signature is invalid.
>
> That means each 

Re: [bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0

2021-06-25 Thread Antoine Riard via bitcoin-dev
> Do we as a community want to support 0-conf payments in any way at this
> point? It seems rather silly to make software design decisions to
> accommodate 0-conf payments when there are better mechanisms for fast
> payments (ie lightning).

Well, we have zero-conf LN channels ? Normally, Lightning channel funding
transactions should be buried under a few blocks, though a few service
providers are offering zero-conf channels, where you can start to spend
instantly [0]. I believe that's an interesting usage, though IMHO as
mentioned we can explore different security models to make 0-conf safe
(reputation/fidelity-bond).

> One question I have is: how does software generally inform the user about
> 0-conf payment detection?

Yes, generally it's something like an "Unconfirmed" annotation on incoming
txns; at least this is what Blockstream Green or Electrum are doing.

> But I
> suppose it would depend on how often 0-conf is used in the bitcoin
> ecosystem at this point, which I don't have any data on.

There are a few Bitcoin services well-known to rely on 0-conf. How much of
the Bitcoin traffic is tied to 0-conf is a hard question; a lot of 0-conf
service providers are going to be reluctant to share that information, for
the really good reason that it would reveal a subset of their business
volumes.

I'll see if I can come up with some Fermi estimation on this front.

[0] https://www.bitrefill.com/thor-turbo-channels/

On Wed, Jun 16, 2021 at 20:58, Billy Tetrud  wrote:

> Russel O'Connor recently opined
> <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-June/019061.html>
> that RBF should be standard treatment of all transactions, rather than as a
> transaction opt-in/out. I agree with that. Any configuration in a
> transaction that has not been committed into a block yet simply can't be
> relied upon. Miners also have a clear incentive to ignore RBF rules and
> mine anything that passes consensus. At best opting out of RBF is a weak
> defense, and at worst it's simply a false sense of security that is likely
> to actively lead to theft events.
>
> Do we as a community want to support 0-conf payments in any way at this
> point? It seems rather silly to make software design decisions to
> accommodate 0-conf payments when there are better mechanisms for fast
> payments (ie lightning).
>
> One question I have is: how does software generally inform the user about
> 0-conf payment detection? Does software generally tell the user something
> along the lines of "This payment has not been finalized yet. All recipients
> should wait until the transaction has at least 1 confirmation, and most
> recipients should wait for 6 confirmations" ? I think unless we pressure
> software to be very explicit about what counts as finality, users will
> simply continue to do what they've always done. Rolling out this policy
> change over the course of a year or two seems fine, no need to rush. But I
> suppose it would depend on how often 0-conf is used in the bitcoin
> ecosystem at this point, which I don't have any data on.
>
> On Tue, Jun 15, 2021 at 10:00 AM Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi,
>>
>> I'm writing to propose deprecation of opt-in RBF in favor of full-RBF as
>> the Bitcoin Core's default replacement policy in version 24.0. As a
>> reminder, the next release is 22.0, aimed for August 1st, assuming
>> agreement is reached, this policy change would enter into deployment phase
>> a year from now.
>>
>> Even if this replacement policy has been deemed as highly controversial a
>> few years ago, ongoing and anticipated changes in the Bitcoin ecosystem are
>> motivating this proposal.
>>
>> # RBF opt-out as a DoS Vector against Multi-Party Funded Transactions
>>
>> As explained in "On Mempool Funny Games against Multi-Party Funded
>> Transactions'', 2nd issue [0], an attacker can easily DoS a multi-party
>> funded transaction by propagating an RBF opt-out double-spend of its
>> contributed input before the honest transaction is broadcasted by the
>> protocol orchestrator. DoSes are qualified in the sense of either an attacker
>> wasting timevalue of victim's inputs or forcing exhaustion of the
>> fee-bumping  reserve.
>>
>> This affects a series of Bitcoin protocols such as Coinjoin, onchain DLCs
>> and dual-funded LN channels. As those protocols are still in the early
>> phase of deployment, it doesn't seem to have been executed in the wild for
>> now. That said, considering that dual-funded channels are more efficient from a
>> liquidity standpoint, we can expect them to be widely relied on, once
>> Lightning enters in a more mature phase. At that point, it

Re: [bitcoin-dev] [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-24 Thread Antoine Riard via bitcoin-dev
Hi Michael,

> Browsing quickly through Greg's piece, a lot of the reasoning is based on
> FOSS experience from Linux/Juniper, which to the best of my knowledge are
> centralized software projects ?

> That is Greg's point. If Linux doesn't look further than the current
> version and the next version with a BDFL (Linus) a decentralized
> project like Bitcoin Core is going to struggle even more with longer
> term roadmaps.

I was far more inclined to recall the unsolved problems for Lightning/L2s
(pre-signed feerate/tx-pinning) than to call out strong solutions to them.
I believe problem spaces are quite stable in engineering/science, at least
until they're formalized differently. But even coming to consensus on the
existence of problems and a shared perception of their severity can take a
long time. In fact, it might even be the hardest step in a decentralized
ecosystem like Bitcoin.

And I agree on the low relevance of roadmaps; real development is a
continuous zigzag. If we look at the past and take the transaction
malleability issue, I think we can observe it took multiple proposals (bip
62, normalized txid, sighash_noinput, ...), some of which were even
implemented in Core, before finally settling on segwit. Though I would say
lessons were drawn about the shortcomings of every transient proposal.

> I think it is important to discuss what order changes should be
> attempted but I agree with David that putting specific future version
> numbers on changes is speculative at best and misleading at worst. The
> record of previous predictions of what will be included in particular
> future versions is not strong :)

I recognize it wasn't delicate to put exact version numbers, though note
that multiple, alternative version numbers were deliberately proposed for
each specific change and timelines were given in terms of years, more as an
invitation to open a discussion on such changes and where/when they could
take place than as any finite, consistent deployment proposal.

Further, I still believe it would be cool to have a bit more coordination
when Core implements sophisticated mechanisms designed for downstream
support, in the sense of feedback exchanged across projects all along their
release schedules. E.g., with package-relay, as a Lightning team it's
likely you will have to rework your tx-broadcast module, which might take a
few good weeks of review and test. Though, coming to this best practice
(imho) across the different Bitcoin layers might take years and that's
perfectly fine, we'll see what emerges :)

> What was making sense when you had like ~20 Bitcoin dev with 90% of the
> technical knowledge doesn't scale when you have multiple second-layers
> specifications

> It is great that we have a larger set of contributors in the ecosystem
> today than back in say pre 2017. But today that set of contributors is
> spread widely across a number of different projects that didn't exist
> pre 2017. Changes to Core are (generally) likely to be implemented and
> reviewed by current Core contributors as Lightning implementation
> developers (generally) seem to have their hands full with their own
> implementations.

Well I strongly believe that the Core review process is open to anyone :) ?
If some upper-layer contributors are generously offering their time to
share back their experiences, especially during the design phase of
software features, I hope we might be on a path to deliver better stuff.

Further, on a more personal note, I'm worried long-term about a
layer-monoculture cropping up in the ecosystem, a concern echoing the
history of Internet development [0].

> I think we can get the balance right by making progress on this
> (important) discussion whilst also maintaining humility that we don't
> know exact timelines and that getting things merged into Core relies
> on a number of people who have varying levels of interest and
> understanding of L2 protocols.

Yes, as answers to my post are showing, I might have lacked patience in
this case :/ Sometimes, it's hard to gauge your own cognitive dissonance on
topics.

Cheers,
Antoine

[0] See "Interactions between Layers" in "General Architectural and Policy
Considerations", RFC 3426

On Mon, Jun 21, 2021 at 06:20, Michael Folkson  wrote:

> I don't want to divert from the topic of this thread ("Waiting
> SIGHASH_ANYPREVOUT and Packing Packages"), we can set up a separate
> thread if we want to discuss this further. But just a couple of
> things.
>
> > Browsing quickly through Greg's piece, a lot of the reasoning is based
> > on FOSS experience from Linux/Juniper, which to the best of my knowledge
> > are centralized software projects ?
>
> That is Greg's point. If Linux doesn't look further than the current
> version and the next version with a BDFL (Linus) a decentralized
> project like Bitcoin Core is going to struggle even more with longer
> term roadmaps.
>
> I think it is important to discuss what order changes should be
> attempted but I agree 

[bitcoin-dev] On the recent softforks survey, forget to fulfill my answer!

2021-06-21 Thread Antoine Riard via bitcoin-dev
Hi,

I was super glad to see the recent survey on potential softforks for the
near-future of Bitcoin! I didn't have time to answer this one but will do
so in the future. I want to salute the grassroots involvement in bitcoin
protocol development, that's cool to see :)

Though softforks are what shines in the media and social networks, one
should not ignore that they represent the aggregation of thousands of hours
of sweat from contributors all across the ecosystem, with discussions
extending from public or private IRC chans, to the mailing list, medias,
etc.

What makes softfork discussions especially hard is that no one is following
all those communication channels to collect the trail of information, and
as such it can be hard to reason on the Big Picture(tm). That's why
soft-forks take time, and we might somehow be prepared for them to take
even more time in the future...

That said, where I would like to draw the community's awareness is to the
submerged part of the bitcoin protocol development iceberg. Softforks are
sexy, though there are far more areas of Bitcoin dev that would benefit
from a gentle boost by happy hands :p

For e.g., if you take Bitcoin Core, you have a few ongoing projects where
folks have a hard time moving forward, e.g. assumeutxo/mempool
refactors/addr-relay/rebroadcasting module/mutation testing/
multiprocess/wallet external signer/GUI maintenance/libbitcoin_kernel [0]

Those projects start to be softfork-sized pieces of engineering in
themselves, and a lot of them might require more than pure "coding" skills,
such as specification, simulations, extensive code coverage, up-to-date
meeting documents. See what is currently done with the Core wiki [1]

All those projects are modifying critical areas of Bitcoin such as the
validation engine or the p2p stack and AFAICT, they deserve more care.
Hopefully, by shining a light there, more folks are going to understand
them, we'll have more skilled reviewers, reducing the reliance on a few
segments of the codebase being understood only by a few seen experts and
ideally, "Many Eyes Make All Bugs Shallow" :)

That said, it's only the technical ground and I believe the human layer of
Bitcoin dev might be the one where grassroots-involvement might be the most
fruitful.

I would say the Bitcoin dev stage has changed a bit over the last 18
months, especially w.r.t. a few factors: the arrival of massive development
funding, the sudden mediatisation of protocol developers and the pursued
geographical spreading, diversification and education of the pool of
contributors.

When I arrived on the stage a few years ago, funding was still a hard
question, even for well-known, long-term contributors, and only a few
actors were taking care of Bitcoin. Really different from what we have seen
in the last months, where a plethora of new organisations have entered the
game, benefiting from the generosity of the Bitcoin industry [2]

Things have moved so fast that sometimes one can wonder if there isn't a
bubble around Bitcoin dev ? A few OGs might suggest we're back to 2017,
with ICO-like webpages pinning "developers-as-brands". In reality, we see
new grant announcements every month or week, but still the number of
reviewers on Core doesn't seem to increase ? [3]

Hopefully, a lot of those new structures pretending to work for Bitcoin's
betterment will get out of their infancy phase and slowly mature into
something as sound as Chaincode or Square Crypto. Small, friendly,
politics-free engineering teams with years-long stability, solving bitcoin
problems with a "forever" perspective mindset.

Though, as of today, you have the opposite with the grant model. Being
funded on the rationale that your peers "appreciate" your work is more
likely to generate implicit compliance at review time, where you should
instead spot their errors. The Bitcoin development process is highly
contrarian by nature, and constantly challenging your peers' assumptions
has been preserving software robustness.

Time will separate the wheat from the chaff, though how to make things
better in the short term ? I don't know; maybe those structures could be
exemplary and open-source their grant allocation decision frameworks ? Or
we could ask them to publish the grant contracts under which contributors
engage themselves, to observe whether the usual independence provisions are
present [4]

In another direction, I believe the ongoing increase in mediatization of
the Bitcoin dev stage in the last months or so didn't improve the current
state of affairs. We now see technical proposals, whose soundness has not
been thoroughly discussed in the traditional venues, being announced with
big pomp as some kind of "done deal", potentially sustaining the false
belief that they have already been blessed or approved by the rest of the
development community.

And honestly, it's quite easy to approach any Bitcoin media today once
you're a bit technical, and rely on lingo to create a perception of
competency towards your 

Re: [bitcoin-dev] [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-21 Thread Antoine Riard via bitcoin-dev
Hi Dave,

> That might work for current LN-penalty, but I'm not sure it works for
> eltoo.

Well, we have not settled yet on the eltoo design, but if we take the
latest proposal to date [0], signing the update transaction with
SIGHASH_ANYPREVOUT lets you attach non-interactively a single-party
controlled input at broadcast-time. Provided the input amount is high
enough to bump the transaction feerate over network mempools, it should
allow the tx to propagate across network mempools and that way solve the
pre-signed feerate problem as defined in the post ?

>  If Bitcoin Core can rewrite the blind CPFP fee bump transaction
> to refer to any prevout, that implies anyone else can do the same.
> Miners who were aware of two or more states from an eltoo channel would
> be incentivized to rewrite to the oldest state, giving them fee revenue
> now and ensuring fee revenue in the future when a later state update is
> broadcast.

Yep, you can add a per-participant key to lock down the transaction and
avoid any in-flight malleability ? I think this is discussed in the "A
Stroll through Fee-Bumping Techniques" thread.

> If the attacker using pinning is able to reuse their attack at no cost,
> they can re-pin the channel again and force the honest user to pay
> another anyprevout bounty to miners.

This is also true with package-relay, where your counterparty, with a
better knowledge of network mempools, can always re-broadcast a CPFP-bumped
malicious package ? Under this assumption, I think you should always be
ready to bump your honest package.

Further, for the clarity of the discussion, can you point to which pinning
scenario you're thinking of or if it's new under SIGHASH_ANYPREVOUT,
describe it ?

> Repeat this a bunch of times and the honest user has now spent more on
> fees than their balance from the closed channel.

And sadly, as this concern also exists in the case of a miner-harvesting
attack against LN nodes, a concern that Gleb and I expressed more than a
year ago in a public post [1], a good L2 client should always upper-bound
its fee-bumping reserve. I have a short though-unclear note on this notion
of a fee-bumping upper bound to warn other L2 engineers in "On Mempool
Funny Games against Multi-Party Funded Transactions"

Please read so:

"A L2 client, with only a view of its mempool at best, won't understand why
 the transaction doesn't confirm and if it's responsible for the
 fee-bumping, it might do multiple rounds of feerate increase through CPFP,
 in vain. As the fee-bumping algorithm is assumed to be known if the victim
 client is open source code, the attacker can predict when the fee-bumping
 logic reaches its upper bound."
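
To illustrate why that upper bound matters, here is a minimal sketch of a
capped bumping loop (all names hypothetical; this is not RL's actual
logic):

    def cpfp_bump_loop(broadcast, confirmed, wait_one_block,
                       start_feerate, vsize, reserve_sats):
        feerate = start_feerate
        while not confirmed():
            fee = int(feerate * vsize)
            if fee > reserve_sats:
                # Upper bound reached: alert the operator instead of
                # bleeding the rest of the reserve into predictable bumps.
                return False
            broadcast(feerate)       # RBF-replace the previous CPFP child
            wait_one_block()
            feerate *= 1.25          # e.g. +25% per bump round
        return True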

Though thanks for the reminder! I should log dynamic-balances in RL's
`ChannelMonitorUpdate` for our ongoing implementation of anchor; updating
my TODO :p

> Even if my analysis above is wrong, I would encourage you or Matt or
> someone to write up this anyprevout idea in more detail and distribute
> it before you promote it much more.

That's a really fair point, as a lot of the reasoning was based on private
discussions with Matt. Though as SIGHASH_ANYPREVOUT isn't advocated for
community consensus yet and those things take time, it should just take a
few hours of my time.

> Even if every protocol based on presigned transactions can magically
> allow dynamically adding inputs and modifying outputs for fees, and we
> also have a magic perfect transaction replacement protocol,

"“Any sufficiently advanced technology is indistinguishable from magic.”
Arthur C. Clarke

Wit aside, that might be the outcome with careful bitcoin protocol
development, where technical issues are laid out in a best effort (of
mine!) and spread to the Bitcoin community on the most public bitcoin
communication channel ?

And humbly, on all those L2 issues I did change my opinion, as I've written
explicitly in this thread post by pointing to an older post of mine
("Advances in Bitcoin Contracting : Uniform Policy and Package Relay").
This reversal was partially motivated by a lot of discussions with folks,
including yourself, initiated since at least the middle of last year.

> package relay is still fundamentally useful for CPFP fee bumping very low
> feerate transactions received from an external party.  E.g. Alice pays
> Bob, mempool min feerates increase and Alice's transaction is dropped,
> Bob still wants the money, so he submits a package with Alice's
> transaction plus his own high feerate spend of it.

I think this point would be a reversal of our p2p design, where we would
now make the sender responsible for the quality of the receiver's mempool
feerate ? This question has never been cleared up during the years-long
discussion of package-relay design [1].

Though referring to the thread post and last week's transaction-relay
workshop, I did point out that package-relay might serve in the long-term
as a mempool-sync mechanism to prevent potential malicious mempool
partitions [2].

> Package relay is a clear improvement now, and one I 

Re: [bitcoin-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-18 Thread Antoine Riard via bitcoin-dev
> That's a question I hope we'll gather feedback on during next Thursday's
> transaction relay workshops.

As someone kindly pointed out to me, the workshop is happening Tuesday,
June 22nd. Not Thursday; mistake of mine :/



On Fri, Jun 18, 2021 at 18:11, Antoine Riard  wrote:

> Hi,
>
> It's a big chunk, so if you don't have time browse parts 1 and 2 and share
> your 2 sats on the deployment timeline :p
>
> This post recalls some unsolved safety holes about Lightning, how
> package-relay or SIGHASH_ANYPREVOUT can solve the first one, how a mempool
> hardening can solve the second one, few considerations on package-relay
> design trade-offs and propose a rough deployment timeline.
>
> 1) Lightning Safety Holes : Pre-Signed Feerate and Tx-Pinning (to skip if
> you're a LN dev)
>
> As of today, Lightning is suffering from 2 safety holes w.r.t to
> base-layer interactions, widely discussed among ln devs.
>
> The first one, the pre-signed feerate issue with future broadcasted
> time-sensitive transactions is laid out clearly in Matt Corallo's "CPFP
> Carve-Out Fee-Prediction Issues in Contracting Applications (eg Lightning)"
> [0]. This issue might provoke loss of funds, even in non-adversarial
> settings, i.e a Lightning routing hub not being able to settle backward
> onchain a successful HTLC during occurrences of sudden mempool congestion.
>
> As blockspace demand increases with an always growing number of
> onchain/offchain bitcoin users, coupling effects are more likely to happen
> and this pre-signed feerate issue is going to become more urgent to solve
> [1]. For e.g, few percentiles of increases in feerate being overpriced by
> Lightning routing hubs to close "fractional-reserve" backed anchor
> channels, driving mempools congestions, provoking anchor channels
> fee-bumping reserves becoming even more under-provisioned and thus close
> down, etc.
>
> The second issue, malicious transaction pinnings, is documented in Bastien
> Teinturier's "Pinning Attacks" [2]. AFAIK, there is a rough consensus among
> devs on the conceptual feasibility of such a class of attacks against a LN
> node, though so far we have not seen them executed in the wild and I'm not
> aware of anyone having realized them in real-world conditions. Note, there
> is a variety of attack scenarios to consider which is function of a wide
> matrix (channel types, LN implementation's `update_fee` policy, LN
> implementation's `cltv_delta` policy, mempool congestion feerate groups,
> routing hubs or end nodes) Demoing against deployed LN implementations with
> default settings has been on my todo for a while, though a priori One
> Scenario To Exploit Them All doesn't fit well.
>
> Side-note, as a LN operator, if you're worried about those security risks,
> you can bump your `cltv_delta`/`cltv_expiry_delta` to significantly coarse
> the attacks.
>
> I think there is an important point to underscore. Considering the state
> of knowledge we have today, I believe there is no strong interdependency
> between solving pre-signed feerate and tx-pinning with the same mechanism
> from a safety/usability standpoint. Or at least such a mechanism can be
> deployed in stages.
>
> 2) Solving the Pre-Signed Feerate problem : Package-Relay or
> SIGHASH_ANYPREVOUT
>
> For Lightning, either package-relay or SIGHASH_ANYPREVOUT should be able
> to solve the pre-signed feerate issue [3]
>
> One of the interesting points recalled during the first transaction relay
> workshops was that L2s making unbounded security assumptions on
> non-normative tx-relay/mempool acceptance rules sounds a wrong direction
> for the Bitcoin ecosystem long-term, and more prone to subtle bugs/safety
> risks across the ecosystem.
>
> I did express the contrary, public opinion a while back [4]. That said, I
> start to agree it's wiser ecosystem-wise to keep those non-normative rules
> as only a groundwork for weaker assumptions than consensus ones. Though it
> would be nice for long-term L2s stability to consider them with more care
> than today in our base-layer protocol development process [4]
>
> On this rationale, I now share the opinion it's better long-term to solve
> the pre-signed feerate problem with a consensus change such as
> SIGHASH_ANYPREVOUT rather than having too much off-chain coins relying on
> the weaker assumptions offered by bitcoin core's tx-relay/mempool
> acceptance rules, and far harder to replicate and disseminate across the
> ecosystem.
>
> However, if SIGHASH_ANYPREVOUT is Things Done Right(tm), should we discard
> package-relay ?
>
> Sadly, in the worst-case scenario we might never reach consensus again
> across the ecosystem and Taproot is the last softfork. Ever :/ *sad violins
> and tissues jingle*
>
> With this dilemma in mind, it might be wise for the LN/L2 ecosystems to
> have a fall-back plan to solve their safety/usability issues and
> package-relay sounds a reasonable, temporary "patch".
>
> Even if package-relay requires serious engineering effort in Bitcoin 

[bitcoin-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-18 Thread Antoine Riard via bitcoin-dev
Hi,

It's a big chunk, so if you don't have time, browse parts 1 and 2 and share
your 2 sats on the deployment timeline :p

This post recalls some unsolved safety holes in Lightning, how
package-relay or SIGHASH_ANYPREVOUT can solve the first one, how a mempool
hardening can solve the second one, a few considerations on package-relay
design trade-offs, and proposes a rough deployment timeline.

1) Lightning Safety Holes : Pre-Signed Feerate and Tx-Pinning (to skip if
you're a LN dev)

As of today, Lightning is suffering from 2 safety holes w.r.t. base-layer
interactions, widely discussed among ln devs.

The first one, the pre-signed feerate issue with future broadcasted
time-sensitive transactions is laid out clearly in Matt Corallo's "CPFP
Carve-Out Fee-Prediction Issues in Contracting Applications (eg Lightning)"
[0]. This issue might provoke loss of funds, even in non-adversarial
settings, i.e a Lightning routing hub not being able to settle backward
onchain a successful HTLC during occurrences of sudden mempool congestion.

As blockspace demand increases with an ever-growing number of
onchain/offchain bitcoin users, coupling effects are more likely to happen
and this pre-signed feerate issue is going to become more urgent to solve
[1]. E.g., a few percentiles of feerate increase being overpriced by
Lightning routing hubs to close "fractional-reserve" backed anchor
channels, driving mempool congestion, provoking anchor channels'
fee-bumping reserves becoming even more under-provisioned and thus more
closes, etc.

The second issue, malicious transaction pinning, is documented in Bastien
Teinturier's "Pinning Attacks" [2]. AFAIK, there is a rough consensus among
devs on the conceptual feasibility of such a class of attacks against a LN
node, though so far we have not seen them executed in the wild and I'm not
aware of anyone having realized them in real-world conditions. Note, there
is a variety of attack scenarios to consider, which are a function of a
wide matrix (channel types, LN implementation's `update_fee` policy, LN
implementation's `cltv_delta` policy, mempool congestion feerate groups,
routing hubs or end nodes). Demoing against deployed LN implementations
with default settings has been on my todo for a while, though a priori One
Scenario To Exploit Them All doesn't fit well.

Side-note: as a LN operator, if you're worried about those security risks,
you can bump your `cltv_delta`/`cltv_expiry_delta` to significantly blunt
the attacks.

I think there is an important point to underscore. Considering the state of
knowledge we have today, I believe there is no strong interdependency
between solving pre-signed feerate and tx-pinning with the same mechanism
from a safety/usability standpoint. Or at least such a mechanism can be
deployed in stages.

2) Solving the Pre-Signed Feerate problem : Package-Relay or
SIGHASH_ANYPREVOUT

For Lightning, either package-relay or SIGHASH_ANYPREVOUT should be able to
solve the pre-signed feerate issue [3]

One of the interesting points recalled during the first transaction relay
workshops was that L2s making unbounded security assumptions on
non-normative tx-relay/mempool acceptance rules sounds a wrong direction
for the Bitcoin ecosystem long-term, and more prone to subtle bugs/safety
risks across the ecosystem.

I did express the contrary, public opinion a while back [4]. That said, I
start to agree it's wiser ecosystem-wise to keep those non-normative rules
as only a groundwork for weaker assumptions than consensus ones. Though it
would be nice for long-term L2 stability to consider them with more care
than today in our base-layer protocol development process [4]

On this rationale, I now share the opinion that it's better long-term to
solve the pre-signed feerate problem with a consensus change such as
SIGHASH_ANYPREVOUT, rather than having too many off-chain coins relying on
the weaker assumptions offered by bitcoin core's tx-relay/mempool
acceptance rules, which are far harder to replicate and disseminate across
the ecosystem.

However, if SIGHASH_ANYPREVOUT is Things Done Right(tm), should we discard
package-relay ?

Sadly, in the worst-case scenario we might never reach consensus again
across the ecosystem and Taproot is the last softfork. Ever :/ *sad violins
and tissues jingle*

With this dilemma in mind, it might be wise for the LN/L2 ecosystems to
have a fall-back plan to solve their safety/usability issues and
package-relay sounds a reasonable, temporary "patch".

Even if package-relay requires serious engineering effort in Bitcoin Core
to avoid introducing new DoSes, swallowing well the complexity increase in
critical code paths such as the mempool/p2p stack, and a gentle API design
for our friends the L2 devs, I believe it's worth the engineering resources
cost. From-my-completely-biased-LN-dev viewpoint :p

In the best-case scenario, we'll activate SIGHASH_ANYPREVOUT and better
fee-bumping primitives softforks [5] slowly strip off the "L2 fee-bumping
primitive" 

[bitcoin-dev] Proposal: Full-RBF in Bitcoin Core 24.0

2021-06-15 Thread Antoine Riard via bitcoin-dev
Hi,

I'm writing to propose deprecation of opt-in RBF in favor of full-RBF as
the Bitcoin Core's default replacement policy in version 24.0. As a
reminder, the next release is 22.0, aimed for August 1st, assuming
agreement is reached, this policy change would enter into deployment phase
a year from now.

Even if this replacement policy has been deemed as highly controversial a
few years ago, ongoing and anticipated changes in the Bitcoin ecosystem are
motivating this proposal.

# RBF opt-out as a DoS Vector against Multi-Party Funded Transactions

As explained in "On Mempool Funny Games against Multi-Party Funded
Transactions'', 2nd issue [0], an attacker can easily DoS a multi-party
funded transaction by propagating an RBF opt-out double-spend of its
contributed input before the honest transaction is broadcasted by the
protocol orchestrator. DoSes are qualified in the sense of either an
attacker wasting the timevalue of victims' inputs or forcing exhaustion of
the fee-bumping reserve.

This affects a series of Bitcoin protocols such as Coinjoin, onchain DLCs
and dual-funded LN channels. As those protocols are still in the early
phase of deployment, it doesn't seem to have been executed in the wild for
now. That said, considering that dual-funded channels are more efficient
from a liquidity standpoint, we can expect them to be widely relied on once
Lightning enters a more mature phase. At that point, it should become
economically rational for liquidity service providers to launch those DoS
attacks against their competitors to hijack user traffic.

Beyond that, the presence of those DoSes will complicate the design and
deployment of multi-party Bitcoin protocols such as payment
pools/multi-party channels. Note, Lightning Pool isn't affected as there is
a preliminary stage where batch participants lock their funds within an
account witnessScript shared with the orchestrator.

Of course, even assuming full-rbf, propagation of the multi-party funded
transactions can still be interfered with by an attacker, simply
broadcasting a double-spend with a feerate equivalent to the honest
transaction. However, it tightens the attack scenario to a scorched earth
approach, where the attacker has to commit equivalent fee-bumping reserve
to maintain the pinning and might lose the "competing" fees to miners.

# RBF opt-out as a Mempools Partitions Vector

A longer-term issue is the risk of malicious mempool partitions, where an
attacker exploits network topology or divergence in mempool policies to
partition network mempools into different subsets. From there, a wide range
of attacks can be envisioned, such as package pinning [1], artificial
congestion to provoke LN channel closures, or manipulation of
fee-estimators' feerates (Core's one wouldn't be affected as it relies on
block confirmation, though other fee-estimator designs deployed across the
ecosystem are likely to be affected).

Traditionally, mempool partitions have been gauged as a spontaneous outcome
of a distributed system like Bitcoin's p2p network, and I'm not aware they
have been studied in-depth for adversarial purposes. Though the deployment
of second-layer protocols, heavily relying on the sanity of a local mempool
for fee-estimation and robust propagation of their time-sensitive
transactions, might lead us to reconsider this position. Acknowledging
this, RBF opt-out is a low-cost partitioning tool, whose existence
nullifies most of the potential progress on mitigating malicious
partitioning.


To sum up, opt-in RBF doesn't suit well the deployment of robust
second-layer protocols, even if those issues are still early and deserve
more research. At the same time, I believe a meaningful subset of the
ecosystem is still relying on 0-conf transactions, even if their security
relies on far weaker assumptions (the opt-in RBF rule is a policy rule, not
a consensus one) [2]. A rapid change of Core's mempool rules would harm
their quality of service and should be weighed carefully. On the other
hand, it would be great to nudge them towards more secure handling of their
0-conf flows [3]

Let's examine what could be deployed ecosystem-wise as enhancements to the
0-confs security model.

# Proactive security models : Double-spend Monitoring/Receiver-side
Fee-Topping with Package Relay

From an attacker's viewpoint, opt-in RBF isn't a big blocker to successful
double-spends. Any motivated attacker can modify Core to mass-connect to a
wide portion of the network, announce txA to this subset, and announce txA'
to the merchant. TxA' propagation will be encumbered by the
privacy-preserving inventory timers
(`OUTBOUND_INVENTORY_BROADCAST_INTERVAL`), which an attacker has no care to
respect.

To detect a successful double-spend attempt, a Bitcoin service should run a
few full-nodes with well-spread connection graphs, unlinkable between them,
to avoid being identified and then maliciously partitioned from the rest of
the network.
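
As a rough illustration, the detection side could look like the following
sketch (assuming Bitcoin Core's JSON-RPC via python-bitcoinrpc; the
endpoints and credentials are placeholders):

    from bitcoinrpc.authproxy import AuthServiceProxy

    NODES = [
        "http://user:pass@127.0.0.1:8332",
        "http://user:pass@10.0.0.2:8332",  # well-spread, unlinkable nodes
    ]

    def zeroconf_visible_everywhere(txid):
        # If the incoming 0-conf txid is missing from some mempools, we
        # may be partitioned or a double-spend may be propagating.
        return all(txid in AuthServiceProxy(url).getrawmempool()
                   for url in NODES)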

I believe this tactic is already deployed by few 

Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-14 Thread Antoine Riard via bitcoin-dev
Thanks for this analysis of a sponsor-like mechanism.

For sure, "watchtower friendly" and "post hoc" are really good points in
favor of sponsorship; other proposals are struggling with watchtower
support, at least in a way where your watchtower policy doesn't leak to
your counterparties (which is really gross from a security standpoint when
you think about it!)

W.r.t. the sponsorship chain/fee overhead (at least compared to
ANYPREVOUT+IOMAP), I think it's ultimately a question of how many contracts
are closed cooperatively vs non-cooperatively over the long term. Even if
we can hope for emergency closures for security reasons to be pretty rare
in practice, we might still see significant non-coop closing when
counterparties can't agree on the economic opportunity of pursuing the
contract or not. E.g., a big LN hub unilaterally closes small channels,
either because it doesn't earn routing fees or those mobile nodes have been
offline for too long.

Still, I think the next step of the discussion would be to come up with a
consistent simulation framework which we can all agree on and score all the
proposals against.

On Sun, Jun 13, 2021 at 10:16, Jeremy via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The API of a sponsor-like mechanism is close to ideal in my opinion:
>
> - compatible with non malleable transactions
> - 0 overhead if fees accurately estimated
> - watchtower friendly
> - post hoc, requires minimal 'protocol awareness'
> - friendly with most mempool eviction policies, not much new required
> - can work to atomically bump multiple txns
> - can be bumped cooperatively by multiple sponsors w/o coordination
> - 0 'rebroadcast overhead' (e.g., for a large batch) leading to cascading
> retransmission fees for replacement
> - can be piggy backed with other future transactions or protocols (e.g.
> coinjoin)
> - compatible with change being in cold storage
>
> The main drawback is it is chain space - wise less efficient, as an
> additional transaction gets made. However, I think the API benefits
> 'product market fit' over alternative solutions outweigh other concerns,
> and if the 'sponsorship efficiency hypothesis' holds true, then most
> transactions will not require sponsors and therefore the savings of not
> needing to preplan a few bumping mechanism will be more efficient overall
> (efficient market will drive accuracy in estimating fees rather than
> needing to sponsor).
>
>
>


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-14 Thread Antoine Riard via bitcoin-dev
d of party specific "finalizing_alice_key" or
> p1-fee-bump-key as I denoted it, we just use the key of the output whose
> value we are reducing. This also solves the O(log(n)) tapleaves for
> OP_CHECKSIG_MUTATED approach as well -- just have one tapleaf for fee
> bumping but authorize it under the key of the output we are reducing. Thus
> we need something like OP_PUSH_TAPROOT_OUTPUT_KEY <output index> which
> takes the taproot external key at that output (fail if not taproot) and
> puts it on the stack. So to be clear you have the <output index> on the
> witness stack rather than having it fixed in a particular tapleaf (as per
> my original post) and then use OP_DUP to pass it to both
> OP_CHECKSIG_MUTATED and OP_PUSH_TAPROOT_OUTPUT_KEY.
> This makes a lot of sense as it matches the semantics of what we are
> trying to achieve: allow the owner of an output (whether an individual or
> group) to reduce that output's value to pay a higher fee.
> Furthermore this removes all keys from the tapleaf since they are all
> aliased to either the input we are spending or one of the output keys of
> the tx we are spending to. This is quite a big improvement over my original
> idea.
>
> This works for lightning commit tx and for the case of a PTLC contract. It
> also seems to work for the DLC funding output. I'd be interested to know if
> anyone can think of a protocol where this would be inconvenient or
> impossible to use as the main pre-signed tx fee bumping system.
>
> Cheers,
>
> LL
>
>> On Sun, Jun 6, 2021 at 22:28, Lloyd Fournier  wrote:
>>
>>> Hi Antione,
>>>
>>> Thanks for bringing up this important topic. I think there might be
>>> another class of solutions over input based, CPFP and sponsorship. I'll
>>> call them tx mutation schemes. The idea is that you can set a key that can
>>> increase the fee by lowering a particular output after the tx is signed
>>> without invalidating the signature. The premise is that anytime you need to
>>> bump the fee of a transaction you must necessarily have funds in an output
>>> that are going to you and therefore you can sacrifice some of them to
>>> increase the fee. This is obviously destructive to txids so child presigned
>>> transactions will have to use ANYPREVOUT as in your proposal. The advantage
>>> is that it does not require keeping extra inputs around to bump the fee.
>>>
>>> So imagine a new opcode OP_CHECKSIG_MUTATED <index> <pubkey> <sig>
>>> <value>. This would check that <sig> is valid against <pubkey> if the
>>> current transaction had the output at <index> reduced by <value>. To
>>> make this more efficient, if the public key is one byte: 0x02 it references
>>> the taproot *external key* (similar to how ANYPREVOUT uses 0x01 to refer to
>>> internal key[1]).
>>> Now for our protocol we want both parties (p1 and p2) to be able to fee
>>> bump a commitment transaction. They use MuSig to sign the commitment tx
>>> under the external key with a decent fee for the current conditions. But in
>>> case it proves insufficient they have added the following two leaves to
>>> their key in the funding output as a backup so that p1 and p2 can
>>> unilaterally bump the fee of anything they sign spending from the funding
>>> output:
>>>
>>> 1. OP_CHECKSIG_MUTATED(0, 0x02, <tx sig>, <fee bump value>)
>>> OP_CHECKSIGADD(p1-fee-bump-key, <p1 fee bump sig>) OP_2
>>> OP_NUMEQUALVERIFY
>>> 2. OP_CHECKSIG_MUTATED(1, 0x02, <tx sig>, <fee bump value>)
>>> OP_CHECKSIGADD(p2-fee-bump-key, <p2 fee bump sig>) OP_2
>>> OP_NUMEQUALVERIFY
>>>
>>> where <...> indicates the thing comes from the witness stack.
>>> So to bump the fee of the commit tx after it has been signed either
>>> party takes the <tx sig> and adds a signature under their
>>> fee-bump-key for the new tx and reveals their fee bump leaf.
>>> <tx sig> is checked against the old transaction while the fee
>>> bumped transaction is checked against the fee bump key.
>>>
>>> I know I have left out how to change mempool eviction rules to
>>> accommodate this kind of fee bumping without DoS or pinning attacks but
>>> hopefully I have demonstrated that this class of solutions also exists.
>>>
>>> [1]
>>> https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki
>>>
>>> Cheers,
>>>
>>> LL
>>>
>>>
>>>
>>> On Fri, 28 May 2021 at 07:13, Antoine Riard via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
>>>> Hi,
>>>>
>>>> This post is pursuing a wider discussion around better fee-bumping
>>>> strategi

[bitcoin-dev] Reminder: Transaction relay workshop on IRC Libera - Tuesday 15th June 19:00 UTC

2021-06-14 Thread Antoine Riard via bitcoin-dev
Hi,

A short reminder about the 1st transaction relay workshop happening
tomorrow on #l2-onchain-support Libera chat (!), Tuesday 15th June, from
19:00 UTC to 20:30 UTC

Scheduled topics are:
* "Guidelines about L2 protocols onchain security design"
* "Coordinated cross-layers security disclosures"
* "Full-RBF proposal"

Find notes and open questions for the two first topics here:
* https://github.com/ariard/L2-zoology/blob/master/workshops/guidelines.md
* https://github.com/ariard/L2-zoology/blob/master/workshops/coordinated.md

Going to send the "Move toward full-rbf" proposal soon; it deserves its own
thread. Workshops will stick to a socratic format to foster as much
knowledge sharing among attendees as possible, and ideally we'll reach
rough consensus about expected goals.

If you're a second-layer protocol designer, a Lightning dev, a Bitcoin Core
dev contributing around mempool/p2p areas, or a Bitcoin service operator
with intense usage of the mempool, I hope you'll find those workshops of
interest and you'll learn a lot :)

Again it's happening on Libera, not Freenode, contrary to the former mail
about agenda & schedule.

Cheers,
Antoine


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-10 Thread Antoine Riard via bitcoin-dev
Hi Lloyd,

Thanks for this tx mutation proposal extending the scope of fee-bumping
techniques. IIUC, the <index> serves as a pointer to the output whose
amount is reduced by <value>, to recompute the transaction hash against
which the original signature is valid ?

Let's do a quick analysis of this scheme.
* onchain footprint : one tapleaf per contract participant, with an
O(log n) increase of witness size, also one output per contract participant
* tx-relay bandwidth rebroadcast : assuming the aforementioned in-place
mempool substitution policy, the mutated transaction
* batching : fee-bumping value is extracted from the contract transaction
itself, so O(n) per contract
* mempool flexibility : the mutated transaction
* watchtower key management : to enable outsourcing, the mutating key must
be shared, in theory enabling contract value siphoning to miner fees ?

Further, I think the tx mutation scheme can be achieved in another way,
with SIGHASH_ANYAMOUNT. A contract participant's tapscript would be the
following :

 

Where <sig> is committed with SIGHASH_ANYAMOUNT, blanking the nValue of one
or more outputs. That way, the fee-to-contract-value distribution can be
unilaterally finalized at a later time through the finalizing key [0].
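
A rough sketch of how I picture the digest computation (SIGHASH_ANYAMOUNT
is hypothetical, so this is only my reading of it, with a simplified output
serialization):

    import hashlib

    def sha256d(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def outputs_digest(outputs, anyamount=False):
        # outputs: list of (value_sats, script_pubkey) pairs
        ser = b""
        for value, spk in outputs:
            v = 0 if anyamount else value  # blank nValue under ANYAMOUNT
            ser += v.to_bytes(8, "little")
            ser += len(spk).to_bytes(1, "little") + spk
        return sha256d(ser)

The contract signature commits to the blanked digest, so the output values
can be finalized later without invalidating it.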

Note, I think the tx mutation proposal relies on interactivity in the
worst-case scenario where a counterparty wants to increase its fee-bumping
output from the contract balance. This interactivity may lure a
counterparty into always locking the worst-case fee-bumping reserve in the
output. I believe anchor outputs enable more "real-time" fee-bumping
reserve adjustment ?

Cheers,
Antoine

[0] Incautious sighash malleability is unsafe. Be careful, otherwise
kitties will perish by the thousands :
https://github.com/revault/practical-revault/pull/83

On Sun, Jun 6, 2021 at 22:28, Lloyd Fournier  wrote:

> Hi Antione,
>
> Thanks for bringing up this important topic. I think there might be
> another class of solutions over input based, CPFP and sponsorship. I'll
> call them tx mutation schemes. The idea is that you can set a key that can
> increase the fee by lowering a particular output after the tx is signed
> without invalidating the signature. The premise is that anytime you need to
> bump the fee of a transaction you must necessarily have funds in an output
> that are going to you and therefore you can sacrifice some of them to
> increase the fee. This is obviously destructive to txids so child presigned
> transactions will have to use ANYPREVOUT as in your proposal. The advantage
> is that it does not require keeping extra inputs around to bump the fee.
>
> So imagine a new opcode OP_CHECKSIG_MUTATED <index> <pubkey> <sig>
> <value>. This would check that <sig> is valid against <pubkey> if the
> current transaction had the output at <index> reduced by <value>. To
> make this more efficient, if the public key is one byte: 0x02 it references
> the taproot *external key* (similar to how ANYPREVOUT uses 0x01 to refer to
> internal key[1]).
> Now for our protocol we want both parties (p1 and p2) to be able to fee
> bump a commitment transaction. They use MuSig to sign the commitment tx
> under the external key with a decent fee for the current conditions. But in
> case it proves insufficient they have added the following two leaves to
> their key in the funding output as a backup so that p1 and p2 can
> unilaterally bump the fee of anything they sign spending from the funding
> output:
>
> 1. OP_CHECKSIG_MUTATED(0, 0x02, <tx sig>, <fee bump value>)
> OP_CHECKSIGADD(p1-fee-bump-key, <p1 fee bump sig>) OP_2
> OP_NUMEQUALVERIFY
> 2. OP_CHECKSIG_MUTATED(1, 0x02, <tx sig>, <fee bump value>)
> OP_CHECKSIGADD(p2-fee-bump-key, <p2 fee bump sig>) OP_2
> OP_NUMEQUALVERIFY
>
> where <...> indicates the thing comes from the witness stack.
> So to bump the fee of the commit tx after it has been signed either party
> takes the <tx sig> and adds a signature under their
> fee-bump-key for the new tx and reveals their fee bump leaf.
> <tx sig> is checked against the old transaction while the fee
> bumped transaction is checked against the fee bump key.
>
> I know I have left out how to change mempool eviction rules to accommodate
> this kind of fee bumping without DoS or pinning attacks but hopefully I
> have demonstrated that this class of solutions also exists.
>
> [1] https://github.com/ajtowns/bips/blob/bip-anyprevout/bip-0118.mediawiki
>
> Cheers,
>
> LL
>
>
>
> On Fri, 28 May 2021 at 07:13, Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi,
>>
>> This post is pursuing a wider discussion around better fee-bumping
>> strategies for second-layer protocols. It draws out a comparison between
>> input-based and CPFP fee-bumping techniques, and their apparent trade-offs
>> in terms of onchain footprint, tx-relay bandwidth rebroadcast, batching
>> opportunity and mempool flexibility.
>>
>> 

Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-06-10 Thread Antoine Riard via bitcoin-dev
> So something like
> `or(and(pk(FB),pk(A)),and(pk(FB),pk(B)),and(pk(FB),pk(C)))` with each `or`
> in their own leaf? I think it works, but only if the keys `A`, `B`, `C` are
> "hot", as in available to the fee-bumper. For Revault it means introducing
> a key for each watchtower in the vaults descriptors, which is meh but
> technically feasible since they are identified. This kinda breaks our
> replication model though. On the other end for Lightning... You'd need to
> know what watchtower (or your node) is going to be willing to feebump? The
> descriptor can very quickly get very convoluted:
> `or(and(pk(FB),pk(A_NODE)),and(pk(FB),pk(A_WT1)),and(pk(FB),pk(A_WT2)),and(pk(FB),pk(B_NODE)),and(pk(FB),pk(B_WT1)),and(pk(FB),pk(B_WT2)))`
> for only 2 participants in a channel where one of either the node or two
> watchtowers (identified beforehand !!) can feebump.

I'm not sure we agree on the purpose of the finalizing key ? Its goal is to
finalize the transaction state once another fee-bumping input has been
attached, and it should be part of the witnessScript of the "main" input.
If a third party tries to attach a malicious pinning input, doing so breaks
the finalizing signature and the transaction will be rejected as invalid by
network mempools.

This key doesn't secure funds and as such can be shared with any
fee-bumping entity (contract source, sourced towers, outsourced towers ?).
Of course, it means an outsourced tower can re-introduce malicious
transaction malleability, but at least it moves malleability away from the
contract level and it's now a holder's tower policy decision ?

Overall I agree any fee-bumping techniques comparison should also account
for tower key management complexity (and this one was missing).

> Yes. That's a bit concerning, but i guess it's a tradeoff. Amusingly the
> incentive is at odds with routing: you want to keep your channels
> unbalanced if you run on fractional fee-bumping reserves so that if things
> go south you can still salvage most of your funds by focusing your
> fee-bumping on the unbalanced (to you) channels :p .

That's a good point! Switching to anchors now rebalances a security matter;
not sure if it was an intended effect of the design :) Also, you might take
HTLC forwarding acceptance decisions holistically instead of at a
per-channel level. If your number of HTLCs in-flight, expressed as outputs
on one commitment transaction, goes up, don't accept any more HTLCs on
other channels; otherwise, you might run short of fee-bumping reserve...

On Fri, May 28, 2021 at 18:25, darosior  wrote:

>
> Oh yes, I should have mentioned this pinning vector. The witnessScript
> I have in mind to make that type of chain of transactions secure would be
> one MuSig key for all contract participants, where signatures are
> committed with SIGHASH_ANYPREVOUT | SIGHASH_IOMAP, one pubkey per
> participant to lock down
> the transaction with SIGHASH_ALL. I think it works and prevents malicious
> in-flight attachment of input/output to a multi-party transaction ?
>
>
> So something like
> `or(and(pk(FB),pk(A)),and(pk(FB),pk(B)),and(pk(FB),pk(C)))` with each `or`
> in their own leaf? I think it works, but only if the keys `A`, `B`, `C` are
> "hot", as in available to the
> fee-bumper. For Revault it means introducing a key for each watchtower in
> the vaults descriptors, which is meh but technically feasible since they
> are identified. This kinda breaks our replication
> model though. On the other end for Lightning... You'd need to know what
> watchtower (or your node) is going to be willing to feebump? The descriptor
> can very quickly get very convoluted:
> `or(and(pk(FB),pk(A_NODE)),and(pk(FB),pk(A_WT1)),and(pk(FB),pk(A_WT2)),and(pk(FB),pk(B_NODE)),and(pk(FB),pk(B_WT1)),and(pk(FB),pk(B_WT2)))`
> for only 2 participants in a channel
> where one of either the node or two watchtowers (identified beforehand !!)
> can feebump.
>
> I see, so you spread your bumping UTXO pool in two ranges : at least one
> bumping utxo per contract, and a subpool of emergency smaller coins, ready
> to be attached on any contract. I think this strategy makes sense for
> vaults as you can afford a bunch of small coins at different feerates,
> spending the ones not used afterwards. And higher cells of feerate reserve
> as the worst historical feerate are relatively not that much compared to
> locked-in vaults value. That said, I'm more dubious about LN, where node
> operators might not keep the worst-case fee-bumping reserve, as the time
> value of the coins aren't worth the channel liquidity at stake.
>
>
> Yes. That's a bit concerning, but i guess it's a tradeoff. Amusingly the
> incentive is at odds with routing: you want to keep your channels
> unbalanced if you run on fractional fee-bumping reserves
> so that if things go south you can still salvage most of your funds by
> focusing your fee-bumping on the unbalanced (to you) channels :p .
>
> Yes, input-based bumping targeting the tail of the chain works at the
> transaction level. But if you assume 

Re: [bitcoin-dev] Improvement on Blockbuilding

2021-05-29 Thread Antoine Riard via bitcoin-dev
Hi Mark and Clara,

Great research, thanks for it!

Few questions out of mind after a first read.

> This approach enables block building to consider Child Pays For Parent
> (CPFP) constellations.

I think that's a really interesting point; it's likely that such
transaction graphs with multiple disjunctive branches become far more
common in the future. One can think about OP_CTV-style congestion trees,
LN's splicing or chains of coinjoins. If this phenomenon happens, can you
expect CSB feerate perf to improve ?

> CSB is more complex and requires more computation

Let's say a malicious miner identifies and connects to its competitors'
mempools, then starts to broadcast to them hard-to-traverse CPFP
constellations. Doing so, this malicious miner would prevent them either
from assembling block templates at all or slow down their template assembly
computation enough to gain an advantage in fee collection. Given current
mempool limits, it would be relevant to know by how much CSB makes that
kind of DoS possible/efficient.

> From the point of view of global blockspace demand, if miners generally
became DPFA-sensitive,
it could encourage creation of additional transactions for the sole purpose
of bumping stuck ancestors.

With both ASB's ancestor sets and CSB's candidate sets, a fee bidder will
have to pay a feerate covering the new transaction's fields, high enough to
catch up with the feerate of the already-present set? It's likely more
feerate-efficient to RBF the first child, though you have to swallow the
replacement feerate penalty (the default incremental relay fee of 1
sat/vbyte, IIRC).
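
As a side note for readers, the A/B/C/D example quoted below can be
sanity-checked with a few lines of Python:

    # Fees and weights from the example below; all weights equal 1.
    fee = {"A": 5, "B": 10, "C": 15, "D": 14}
    weight = {"A": 1, "B": 1, "C": 1, "D": 1}

    def effective_feerate(txs):
        return sum(fee[t] for t in txs) / sum(weight[t] for t in txs)

    print(effective_feerate({"A", "B", "C"}))       # 10.0, ancestor-set pick
    print(effective_feerate({"A", "B", "C", "D"}))  # 11.0, candidate-set pick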

Antoine

On Tue, May 25, 2021 at 10:34, Murch via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Bitcoin Devs,
>
> We are writing to share with you a suggested improvement to the current
> bitcoin core block building algorithm. In short, currently Bitcoin Core
> uses a straightforward greedy algorithm which evaluates each
> transaction’s effective fee rate in the context of its unconfirmed
> ancestors. This overlooks situations in which multiple descendant
> transactions could be grouped with their shared ancestors to form a more
> attractive transaction set for block inclusion.
>
> For example, if we have 4 transactions A, B, C, and D, with fees and
> weights as follows
>
> Tx  Fee  Weight
> A     5       1
> B    10       1
> C    15       1
> D    14       1
>
> And dependencies:
> • B is a descendant of A
> • C is a descendant of B
> • D is a descendant of A
> The current algorithm will consider {A,B,C} best, which has an effective
> fee rate of 10. Our suggested algorithm will also consider {A,B,C,D},
> which has an effective fee rate of 11.
>
> Experimental data shows that our suggested algorithm did better on more
> than 94% of blocks (99% for times of high congestion). We have also
> compared the results to CBC and SAT Linear Programming solvers. The LP
> solvers did slightly better, at the price of longer running times. Greg
> Maxwell has also studied LP solvers in the past, and his results suggest
> that better running times are possible.
>
> The full details are given in this document, and we are happy to hear
> any comments, critiques or suggestions!
>
> Best,
> Mark and Clara
>
> Full details:
>
> https://gist.github.com/Xekyo/5cb413fe9f26dbce57abfd344ebbfaf2#file-candidate-set-based-block-building-md
>
> Research Code:
> https://github.com/Xekyo/blockbuilding
>


Re: [bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-05-28 Thread Antoine Riard via bitcoin-dev
> Unfortunately, ACP | SINGLE is trivially pinable [0] (TL;DR: i can just
attach an output paying immediately to me, and construct a tx chain
spending it). We are using ACP | ALL for Revault,
> which is the reason why we need a well laid-out pool of fee-bumping UTXOs
(as you need to consume them entirely).

Oh yes, I should have mentioned this pinning vector. The witnessScript I
have in mind to secure that type of transaction chain would be one MuSig
key for all contract participants, where signatures are committed with
SIGHASH_ANYPREVOUT | SIGHASH_IOMAP, plus one pubkey per participant to lock
down the transaction with SIGHASH_ALL. I think it works and prevents
malicious in-flight attachment of inputs/outputs to a multi-party
transaction?

> I believe that it's better to broadcast a single fan-out transaction
> creating your entire UTXO pool in advance. You could create one coin per
> contract you are watching, whose value would be
> used to bump your transaction feerate from the presigned one to -say- the
> average feerate over the past month, and then have smaller coins that you
> could attach to any transaction to bump
> by a certain threshold (say, 10 sat/vbyte). You would create as many small
> coins as your reserve algorithm tells you (which could be "i need to be
> able, worst case, to close all my contracts
> with the worst historical feerate." or (fractional reserve version) "i
> need to be able, worst case, to close 10% of my contracts at the average
> feerate of the past year, the remaining ones sorry
> for my loss"). [1]

> This method is both much more efficient (though you sometimes need to
> incur the cost of many small additional inputs) and also makes sure that
> your feebump does not depend on the confirmation of a first-stage
> transaction (as you can only RBF with new inputs if they are confirmed).

I see, so you split your bumping UTXO pool into two ranges: at least one
bumping utxo per contract, and a subpool of smaller emergency coins, ready
to be attached to any contract. I think this strategy makes sense for
vaults, as you can afford a bunch of small coins at different feerates,
spending the unused ones afterwards. Higher feerate-reserve buckets are
also affordable, as even the worst historical feerates remain small
relative to the locked-in vault value. That said, I'm more dubious about
LN, where node operators might not keep the worst-case fee-bumping
reserve, as the time value of the coins isn't worth the channel liquidity
at stake.
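
To make the two-range strategy concrete, here is a back-of-the-envelope
sizing sketch in Python (all parameters are made up; this is not Revault's
actual reserve algorithm):

    def size_pool(n_contracts, tx_vbytes, presigned, avg, worst, step=10):
        # One coin per contract bridges the pre-signed feerate to the
        # recent average; small coins each add `step` sat/vbyte up to
        # the worst-case historical feerate.
        per_contract_value = (avg - presigned) * tx_vbytes  # sats
        n_steps = -(-(worst - avg) // step)                 # ceil division
        return {
            "per_contract_coins": n_contracts,
            "per_contract_value": per_contract_value,
            "small_coins": n_contracts * n_steps,
            "small_coin_value": step * tx_vbytes,
        }

    # E.g. 50 contracts of ~200 vbytes, pre-signed at 10 sat/vbyte,
    # monthly average 40 sat/vbyte, worst historical 300 sat/vbyte:
    print(size_pool(50, 200, 10, 40, 300))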

> Why not just attaching it at the tail of the chain? Bumping the last
child with additional input would effectively be a CPFP for the entire
chain in this case.

Yes, input-based bumping targeting the tail of the chain works at the
transaction level. But if you assume bounded visibility of network
mempools, one of your counterparties might have broadcast a concurrent
state, thus making your CPFP irrelevant for propagation. Though smarter
tx-relay techniques such as "attach-on-contract-utxo-root" CPFP (also
known as "blinded CPFP") might solve this issue.

On Thu, May 27, 2021 at 17:45, darosior wrote:

> Hi,
>
> ## Input-Based
>
> I think input-based fee-bumping has been less studied as fee-bumping
> primitive for L2s [1]. One variant of input-based fee-bumping usable today
> is the leverage of the SIGHASH_ANYONECANPAY/SIGHASH_SINGLE malleability
> flags. If the transaction is the latest stage of the contract, a bumping
> input can be attached just-in-time, thus increasing the feerate of the
> whole package.
>
>
> Unfortunately, ACP | SINGLE is trivially pinable [0] (TL;DR: i can just
> attach an output paying immediately to me, and construct a tx chain
> spending it). We are using ACP | ALL for Revault,
> which is the reason why we need a well laid-out pool of fee-bumping UTXOs
> (as you need to consume them entirely).
>
> Input-based (today): If the bumping utxo is offering an adequate feerate
> point in function of network mempools congestion at time of broadcast, only
> 1 input. If a preliminary fan-out transaction to adjust feerate point must
> be broadcasted first, 1 input and 2 outputs more must be accounted for.
> Onchain footprint: 2 inputs + 3 outputs.
>
>
> I believe that it's better to broadcast a single fan-out transaction
> creating your entire UTXO pool in advance. You could create one coin per
> contract you are watching, whose value would be
> used to bump your transaction feerate from the presigned one to -say- the
> average feerate over the past month, and then have smaller coins that you
> could attach to any transaction to bump
> by a certain threshold (say, 10 sat/vbyte). You would create as many small
> coins as your reserve algorithm tells you (which could be "i need to be
> able, worst case, to close all my contracts
> with the worst historical feerate." or (fractional reserve version) "i
> need to be able, worst case, to close 10% of my contracts at the average
> feerate of the past year, the remaining ones sorry
> for my loss"). [1]
>
> This method is 

[bitcoin-dev] A Stroll through Fee-Bumping Techniques : Input-Based vs Child-Pay-For-Parent

2021-05-27 Thread Antoine Riard via bitcoin-dev
Hi,

This post is pursuing a wider discussion around better fee-bumping
strategies for second-layer protocols. It draws out a comparison between
input-based and CPFP fee-bumping techniques, and their apparent trade-offs
in terms of onchain footprint, tx-relay bandwidth rebroadcast, batching
opportunity and mempool flexibility.

Thanks to Darosior for reviews, ideas and discussions.

## Child-Pay-For-Parent

CPFP is a mature fee-bumping technique, known and used for a while in the
Bitcoin ecosystem. However, its usage in contract protocols with
distrusting counterparties raised some security issues. As mempool chains
of unconfirmed transactions are limited in size, if any output is spendable
by any contract participant, it can be leveraged as a pinning vector to
lower the odds of transaction confirmation [0].

That said, contract transactions seeking protection under the carve-out
logic are required to add a new output for each contract participant, even
if ultimately only one of them serves as an anchor to attach a CPFP.

## Input-Based

I think input-based fee-bumping has been less studied as a fee-bumping
primitive for L2s [1]. One variant of input-based fee-bumping usable today
is the leverage of the SIGHASH_ANYONECANPAY/SIGHASH_SINGLE malleability
flags. If the transaction is the latest stage of the contract, a bumping
input can be attached just-in-time, thus increasing the feerate of the
whole package.
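
To make the mechanics concrete, a toy Python accounting of such a
just-in-time bumping input (illustrative numbers only; no real transaction
serialization):

    # Pre-signed package stuck at 2.5 sat/vbyte; a P2WPKH-sized funding
    # input is attached at broadcast time.
    presigned_fee, presigned_vb = 500, 200
    bump_value, bump_input_vb = 5_000, 68

    # Under SIGHASH_ANYONECANPAY | SIGHASH_SINGLE each signer commits
    # only to their own input/output pair, so an extra input can be
    # added later; absent a change output, its value becomes fee.
    new_fee = presigned_fee + bump_value
    new_vb = presigned_vb + bump_input_vb
    print(presigned_fee / presigned_vb, new_fee / new_vb)  # 2.5 -> ~20.5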

However, as of today, input-based fee-bumping doesn't work to bump the
first stages of contract transactions, as it's destructive of the txid and
as such breaks chains of pre-signed transactions. A first improvement would
be the deployment of the SIGHASH_ANYPREVOUT softfork proposal. This new
malleability flag allows a transaction to be signed without reference to
any specific previous output. That way, spent transactions can be
fee-bumped without altering the validity of the chain of transactions.

Even assuming SIGHASH_ANYPREVOUT, if the first-stage contract transaction
includes multiple outputs (e.g. the LN commitment tx has multiple HTLC
outputs), SIGHASH_SINGLE can't be used and the fee-bumping input value
might be wasted. This edge can be smoothed by broadcasting a preliminary
fan-out transaction with a set of outputs providing a range of feerate
points for the bumped transaction.
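
For illustration, a hypothetical selection routine against such a fan-out
set (names and numbers are made up):

    # Pick the smallest fan-out coin that lifts the package to the
    # target feerate; the whole coin goes to fees, so a close match
    # minimizes the wasted value.
    def pick_bump_coin(coins, pkg_fee, pkg_vb, input_vb, target_feerate):
        for value in sorted(coins):
            if (pkg_fee + value) / (pkg_vb + input_vb) >= target_feerate:
                return value
        return None  # pool insufficient for this feerate point

    print(pick_bump_coin([1_000, 2_500, 5_000, 10_000],
                         pkg_fee=300, pkg_vb=250, input_vb=68,
                         target_feerate=20))  # -> 10000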

This overhead could be smoothed even further in the future with more
advanced sighash malleability flags like SIGHASH_IOMAP, allowing
transaction signers to commit to a map of inputs/outputs [2]. In the
input-based context, the excess fee value could then be redirected to an
outgoing output.

## Onchain Footprint

CPFP: One anchor output per participant must be included in the commitment
transaction. To this anchor must be attached a child transaction with 2
inputs (one for the commitment, one for the bumping utxo) and 1 output.
Onchain footprint: 2 inputs + 3 outputs.

Input-based (today): If the bumping utxo offers an adequate feerate point
as a function of network mempools congestion at time of broadcast, only 1
input. If a preliminary fan-out transaction to adjust the feerate point
must be broadcast first, 1 more input and 2 more outputs must be accounted
for. Onchain footprint: 2 inputs + 3 outputs.

Input-based (SIGHASH_ANYPREVOUT+SIGHASH_IOMAP): As long as the bumping
utxo's value is large enough to cover the worst case of mempools
congestion, only 1 input and 1 output need to be attached to the bumped
transaction. Onchain footprint: 1 input + 1 output.

## Tx-Relay Bandwidth Rebroadcast

CPFP: In the context of multi-party protocols, we should assume bounded
rationality of the participants w.r.t. an unconfirmed spend of the
contract utxo across network mempools. Under this assumption, the bumped
transaction might have been replaced by a concurrent state. To guarantee
efficiency of the CPFP, the whole chain of transactions should be
rebroadcast, perhaps wasting bandwidth for a still-identical bumped
transaction [3]. Rebroadcast footprint: the whole chain of transactions.

Input-based (today): In case of rebroadcast, the fee-bumping input is
attached to the root of the chain of transactions and as such breaks the
chain validity in itself. Beyond the rebroadcast of the updated root under
replacement policy, the remaining transactions must be updated and
rebroadcast. Rebroadcast footprint: the whole chain of transactions.

Input-based (SIGHASH_ANYPREVOUT+SIGHASH_IOMAP): In case of rebroadcast,
the fee-bumping input is attached to the root of the chain of transactions
but doesn't break the chain validity in itself. Assuming a future mempool
acceptance logic authorizing in-place substitution, the rest of the chain
could be preserved. Rebroadcast footprint: the root of the chain of
transactions.

## Fee-Bumping Batching

CPFP: In the context of multi-party protocols, in optimistic scenarios, we
can assume aggregation of multiple chains of transactions. E.g., an LN
operator desirous to non-cooperatively close multiple 

[bitcoin-dev] L2s Onchain Support IRC Workshop : Agenda & Schedule

2021-05-12 Thread Antoine Riard via bitcoin-dev
Hi,

Following up on the workshop announcement [0], I'm proposing today an early
agenda and schedule.

Dates have been picked 2 weeks after the end of the Miami conference, as
the American crowd will be traveling around and won't necessarily be at
their keyboards. Also, if folks from Asia/Pacific timezones want to join or
participate, I'm all good to set up additional, more-friendly sessions.

* 1st meeting: Tuesday 15th June 19:00 - 20:30  UTC

Topics will be "Guidelines about L2 protocols onchain security design",
"Coordinated cross-layers security disclosures", "Full-RBF proposal".

* 2nd meeting: Tuesday 22nd June 19:00 - 20:30 UTC

Topics will be "Package relay design and/or generic L2 fee-bumping
primitive".

I hope to address most of the issues in two sessions. If it doesn't fit,
we can still do a third one on the 29th of June.

Discussions will happen on the new freenode channel #l2-onchain-support.

Meanwhile, I'll prepare a series of open questions and resource pointers
for each topic. I'm also coming up with weaponized pinning attacks against
LN nodes, and proposals on how to improve double-spend protection,
hopefully with package relay or p2p extensions if we move towards full-RBF.

If any L2 protocol dev/designer has specific transaction relay
requirements for their project, especially ones far different from
Lightning's, I'm really interested if you want to share them in the
L2-zoology repo (https://github.com/ariard/L2-zoology/tree/master/protocols)
or anywhere else. I'm doing my best to keep up with most L2 security
models; that said, the space is growing fast :)

Cheers,
Antoine

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-April/003002.html


Re: [bitcoin-dev] Full Disclosure: CVE-2021-31876 Defect in Bitcoin Core's bip125 logic

2021-05-12 Thread Antoine Riard via bitcoin-dev
Hi Luke,

> Is there a list of software impacted by this CVE, and the versions it is
fixed
in?

Speaking only for LN clients, as I think they're the only ones deployed
with real funds at stake. The defect is mitigated by the new "anchor"
channel type, which forces RBF-signaling on all transactions.
* lnd v0.12 "anchor" activated by default, lnd v0.10 "anchor" activated by
user flag
* c-lightning, no release yet with "anchor" support
* eclair, no release yet with "anchor" support
* rust-lightning, no release yet with "anchor" support

> (Note this isn't a vulnerability in Bitcoin Core; BIP125 is strictly a
policy
matter, not part of the consensus rules and never safe to rely on in any
case...)

Answering two-folds.

First, I somewhat agree it's not a "vulnerability" in Bitcoin Core, but it
is at least a clear lack of compliance with a heavily relied-on bitcoin
standard across the ecosystem. Even if BIPs are descriptive documentation
and not prescriptive, I don't think we have guidelines for now on how to
proceed with identified flaws carrying security implications for
downstream projects.

Should the Bitcoin Core project adopt security bulletins or advisories for
security-related info pertinent to downstream projects?

Secondly, opinions diverged on the security list about how to report on
this. On one side, tx-relay and mempool acceptance rules aren't considered
reliable or strongly normative, being purely a matter of node policy.

On the other side, we have more and more deployed Bitcoin
applications/protocols (e.g LN, vaults, ...) directly making security
assumptions on them. Even if we consider such beliefs misplaced or
ingenuous, we're building on top of a permissionless system and can't
really prevent developers and users from deploying such software.

Should we stay in this status quo and invite such Bitcoin users to deploy
their own overlay networks for transaction propagation satisfying their
advanced requirements, at the price of far less censorship resistance and
a more opaque fee market for everyone?

Or should we instead qualify a new, third set of rules between consensus
and pure "policy", especially crafted to support Bitcoin applications
requiring transparency and stability of tx-relay and mempool acceptance
rules, at the price of ossifying some parts of full nodes? Of course, the
degree of normativity we could guarantee for those rules depends on making
them compatible with everyone's economic incentives, hopefully fostering
their wide dissemination across full-node implementations and miner
mempools.

You have good arguments and trade-offs on both sides.

Overall, I agree that describing tx-relay/mempool rules as non-normative
and non-reliable is the most widely understood mental model among
developers *today*. That said, I would like to underscore that this model
might not be adequate in light of recent ecosystem evolutions, and might
reveal itself a bit crippling in the future...

Cheers,
Antoine

On Tue, May 11, 2021 at 17:51, Luke Dashjr wrote:

> Is there a list of software impacted by this CVE, and the versions it is
> fixed
> in?
>
> (Note this isn't a vulnerability in Bitcoin Core; BIP125 is strictly a
> policy
> matter, not part of the consensus rules and never safe to rely on in any
> case...)
>
>
> On Thursday 06 May 2021 13:55:53 Antoine Riard via bitcoin-dev wrote:
> > Hi,
> >
> > I'm writing to report a defect in Bitcoin Core bip125 logic with minor
> > security and operational implications for downstream projects. Though
> this
> > defect affects Bitcoin Core nodes 0.12.0 and above, base layer safety
> isn't
> > impacted.
> >
> > # Problem
> >
> > Bip 125 specification describes the following signalling mechanism :
> >
> > "
> > This policy specifies two ways a transaction can signal that it is
> > replaceable.
> >
> > * Explicit signaling: A transaction is considered to have opted in to
> > allowing replacement of itself if any of its inputs have an nSequence
> > number less than (0xffffffff - 1).
> >
> > * Inherited signaling: Transactions that don't explicitly signal
> > replaceability are replaceable under this policy for as long as any one
> of
> > their ancestors signals replaceability and remains unconfirmed.
> >
> > One or more transactions currently in the mempool (original transactions)
> > will be replaced by a new transaction (replacement transaction) that
> spends
> > one or more of the same inputs if,
> >
> > # The original transactions signal replaceability explicitly or through
> > inheritance as described in the above Summary section.
> > "
> >
> > An unconfirmed child transaction with nSequence = 0xff_ff_ff_ff spending
> an
> > unconfirmed parent with nSequence <= 0xff_ff

Re: [bitcoin-dev] Full Disclosure: CVE-2021-31876 Defect in Bitcoin Core's bip125 logic

2021-05-12 Thread Antoine Riard via bitcoin-dev
Hi Ruben,

Thanks for raising awareness about spacechains/BMM; I didn't know it was
relying on a fee-based English auction to mine side-blocks. IIUC, it's
another type of dynamic-membership multi-party signature where parties are
block-signing with a fee proposal instead of a PoW? Though you still
assume mainchain miners aren't colluding and are blindly applying the RBF
policy.

Effectively, if you can block RBF by opting out, parties are no longer
competing on feerate but on gaining a propagation advantage in the
tx-relay topology. And such an advantage is quite easy to gain with a
modified client, mass-connecting and not enforcing inventory broadcast
interval timers.

> As it stands, this bug gets in the way of being able to deploy
spacechains.

Noted, yet another good point for transitioning towards full-RBF!

Cheers,
Antoine

On Tue, May 11, 2021 at 17:16, Ruben Somsen wrote:

> Hi Antoine,
>
> Thanks for bringing this up.
>
> It seems spacechains[0] are impacted by this. Simply explained, the idea
> is to allow for fee-bidding Blind Merged Mining[1] by creating one
> transaction for each block, to which anyone can attach a block hash. The
> preferred mechanism utilizes sighash_anyprevout and is not affected, but
> there is also a practical variant that could be used without requiring the
> anyprevout soft fork, which unfortunately does seem to be impacted. Here's
> a brief description:
>
> TX0:
>
> input 0
>
> output 1a*
> output 1b
>
> TX1:
>
> input 1a*
>
> output 2a**
> output 2b
>
> TX2:
>
> input 2a**
>
> output 3a***
> output 3b
>
> Etc.
>
> Every TX has two outputs, one of which ("a") is used as the input for the
> next TX (these are pre-signed and act as a covenant), resulting in a
> continuous chain of transactions. The other output ("b") can be spent by
> anyone, and is meant to CPFP the parent TX and, importantly, be RBF
> replaceable by anyone. This allows whoever pays the highest CPFP fee to
> "win the RBF auction" and attach their TX to the output, containing the
> winning spacechain block hash.
>
> With inherited signalling, this works because each pre-signed TX is RBF
> enabled, so each CPFP transaction inherits RBF as well. But if inherited
> signalling does not function, the first person who makes a CPFP transaction
> can simply disable RBF and win the auction, thus breaking the intended
> fee-bidding mechanism.
>
> You can also find a diagram in this spacechains presentation (timestamped
> link): https://youtu.be/N2ow4Q34Jeg?t=2555
>
> As it stands, this bug gets in the way of being able to deploy spacechains.
>
> -- Ruben Somsen
>
>
>
> [0]
> https://medium.com/@RubenSomsen/21-million-bitcoins-to-rule-all-sidechains-the-perpetual-one-way-peg-96cb2f8ac302
>
> [1] https://gist.github.com/RubenSomsen/5e4be6d18e5fa526b17d8b34906b16a5
>
>
>
>
> On Sun, May 9, 2021 at 10:41 AM darosior via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi Antoine,
>>
>>
>> Thank you for the disclosure.
>>
>>
>>
>> > * Onchain DLC/Coinswap/Vault : Those contract protocols have also
>> multiple stages of execution with time-sensitive transactions opening the
>> way to pinning attacks. Those protocols being non-deployed or in early
>> phase, I would recommend that any in-protocol competing transactions
>> explicitly signal RBF.
>>
>>
>> For what it's worth, Revault isn't vulnerable as all transactions signal
>> RBF and there is no way to sneak a non-signaling competing transaction (as
>> long as the CSV isn't matured yet).
>>
>>
>>
>> Thanks,
>>
>> Antoine (the other one)


[bitcoin-dev] Full Disclosure: CVE-2021-31876 Defect in Bitcoin Core's bip125 logic

2021-05-06 Thread Antoine Riard via bitcoin-dev
Hi,

I'm writing to report a defect in Bitcoin Core bip125 logic with minor
security and operational implications for downstream projects. Though this
defect affects Bitcoin Core nodes 0.12.0 and above, base layer safety isn't
impacted.

# Problem

Bip 125 specification describes the following signalling mechanism :

"
This policy specifies two ways a transaction can signal that it is
replaceable.

* Explicit signaling: A transaction is considered to have opted in to
allowing replacement of itself if any of its inputs have an nSequence
number less than (0xffffffff - 1).

* Inherited signaling: Transactions that don't explicitly signal
replaceability are replaceable under this policy for as long as any one of
their ancestors signals replaceability and remains unconfirmed.

One or more transactions currently in the mempool (original transactions)
will be replaced by a new transaction (replacement transaction) that spends
one or more of the same inputs if,

# The original transactions signal replaceability explicitly or through
inheritance as described in the above Summary section.
"

An unconfirmed child transaction with nSequence = 0xff_ff_ff_ff spending an
unconfirmed parent with nSequence <= 0xff_ff_ff_fd should be replaceable as
the child transaction signals "through inheritance". However, the
replacement code as implemented in Core's `PreChecks()` shows that this
behavior isn't enforced, and Core's mempool rejects replacement attempts of
an unconfirmed child transaction.
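
To illustrate the signaling rules the BIP mandates, a simplified Python
sketch (this is not Core's actual `PreChecks()` code):

    SEQUENCE_FINAL = 0xffffffff

    def signals_explicitly(tx):
        return any(seq < SEQUENCE_FINAL - 1 for seq in tx["sequences"])

    def is_replaceable(txid, mempool):
        tx = mempool[txid]
        if signals_explicitly(tx):
            return True
        # Inherited signaling: replaceable if any unconfirmed ancestor
        # signals -- the branch not enforced by Core.
        return any(is_replaceable(p, mempool) for p in tx["parents"])

    mempool = {
        "parent": {"sequences": [0xfffffffd], "parents": []},
        "child":  {"sequences": [SEQUENCE_FINAL], "parents": ["parent"]},
    }
    print(is_replaceable("child", mempool))  # True per BIP125, not in Core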

A branch asserting the behavior is here:
https://github.com/ariard/bitcoin/commits/2021-03-test-rbf

# Solution

The defect has not been patched.

# Downstream Projects Affected

* LN: State-of-the-art pinning attacks against second-stage HTLC
transactions were thought to be only possible by exploiting RBF rule 3 on
the necessity of a higher absolute fee [0]. However, this replacement
defect opens the way for an attacker to pin with an opt-out child alone,
without paying a higher fee than the honest competing transaction. This
lowers the cost of the attack, as the malicious pinning transaction only
has to be above mempools' min feerate. It also increases the odds of
attack success, as a reduced feerate diminishes the odds of a confirmation
ending the pinning.
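
To give an order of magnitude with made-up numbers:

    # Illustrative cost comparison. A BIP125 rule-3 pin must outbid the
    # honest transaction's absolute fee, while an opt-out child only
    # needs to clear the mempool minimum feerate.
    honest_fee = 10_000   # sats on the honest HTLC-claiming transaction
    pin_vbytes = 300      # size of the malicious child
    min_feerate = 1       # sat/vbyte mempool floor
    rule3_cost = honest_fee + pin_vbytes      # fee plus incremental relay fee
    optout_cost = pin_vbytes * min_feerate
    print(rule3_cost, optout_cost)            # 10300 vs 300 sats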

A functional test demo illustrating cases is available on this branch:
https://github.com/ariard/bitcoin/commits/2021-05-htlc-preimage-pinnings

LN node operators concerned by this defect might favor anchor-output
channels, fully mitigating this specific pinning vector.

* Onchain DLC/Coinswap/Vault: Those contract protocols also have multiple
stages of execution with time-sensitive transactions, opening the way to
pinning attacks. Those protocols being non-deployed or in an early phase,
I would recommend that any in-protocol competing transactions explicitly
signal RBF.

* Coinjoin/Cut-Through: if CPFP is employed as a fee-bumping strategy, if
the coinjoin transaction is still lying in network mempools, and if a
fee-bumping output is spendable by any protocol participant, this
fee-bumping mechanism might be halted by a malicious protocol participant
broadcasting a low-feerate opt-out child. According to bip125, if the
coinjoin parent tx signals replaceability, the child transaction should be
replaceable, whatever its own signaling. However, Core doesn't apply this
policy. RBF of the coinjoin transaction itself should be used as a
fallback. I'm not aware of any deployed coinjoin using such an
"anyone-can-bump" fee-bumping strategy.

* Simple wallets: RBF engines' behaviors might be altered in ways not
matching the intent of their developers. I invite RBF engine devs to
verify what those components are doing in light of the disclosed
information.

# Discovery

While reviewing the LN dual-funding flow, I inquired about potential new
DoS vectors introduced by relying on counterparty utxos, in the following
analysis [1]. The second DoS issue, "RBF opt-out by a Counterparty
Double-Spend", relies on a malicious chain of transactions where the
parent signals RBF opt-in through nSequence<=0xff_ff_ff_ff-1 but the
child, serving as a pinning transaction, opts out of the RBF policy.
This pinning trick's conception matched my understanding of Core's code,
but while reading the specification again, I observed that it was
inconsistent with the inherited signaling mechanism as described in the
bip's "Summary" section.

After exercising the logic, I submitted the defect to Dave Harding, asking
for confirmation of the divergence between Bitcoin Core and BIP 125. Soon
after, he confirmed it and pointed out that the defect has been there
since the 2015 PR introducing opt-in RBF, advising to consider the
security implications for deployed second-layer protocols. After noticing
the minor implications for pinning attacks on second-stage LN transactions
while talking with Matt Corallo, I disclosed it to the Bitcoin Core
security list.

My initial report was recommending avoiding a covert patch in the mempool
as risks of 

Re: [bitcoin-dev] L2s Onchain Support IRC Workshop

2021-04-27 Thread Antoine Riard via bitcoin-dev
Hi Gloria,

Thanks for your interest in joining.

> A small note - I believe package relay and sponsorship (or other
> fee-bumping primitive) should be separate discussions.

Here is my thinking on the question: ideally, we would have one generic
fee-bumping primitive suiting the onchain requirements of any contracting
protocol or Bitcoin application. In the future, that
would avoid the mempool and transaction relay rules being lobbied by any
L2 community to add support for their specific onchain desiderata. Of
course, L2 communities are always able to deploy their own overlay
infrastructure, but at the price of losing the censorship-resistance
guarantees of the current base layer p2p network.

Further, we already have concerns about competing onchain requirements
between Bitcoin merchants and Lightning protocol devs regarding RBF. IMO,
full-RBF will harden LN against some state-of-the-art attacks but at the
same time make it easier to double-spend merchants.

How do we arbitrate between categories of user requirements? I don't know;
best is to have an open discussion about it?

Back to package relay: I also think that's the easiest candidate to deploy
because it doesn't rely on any consensus change. What I'm concerned about
is a package relay design working fine for the vast majority of cases but
being irrelevant or broken in adversarial settings. Even more, it might
work fine for LN but not at all for fancier protocols still on the
whiteboard, like OP_CTV-style congestion trees.

Though in many cases it is better to adopt an almost-complete solution now
rather than to wait until a perfect solution can be found. Likely, the
best we can do is keep the design modular, version everything, and be
ready to deploy multiple versions of package relay in the coming years as
our knowledge of those areas improves.

> Re: L2-zoology... In general, for the purpose of creating a stable API /
> set of assumptions between layers, I'd like to be as concrete as possible.
> Speaking for myself, if I'm TDDing for a specific L2 attack, I need test
> vectors. A simple description of mempool contents + p2p messages sent is
> fine, but pubkeys + transaction hex would be appreciated because we don't
> (and probably shouldn't, for the purpose of maintainability) have a lot of
> tooling to build L2 transactions in Bitcoin Core. In the other direction,
> it's hard to make any guarantees given the complexity of mempool policy,
> but perhaps it could be helpful to expose a configurable RPC (e.g. #21413
> <https://github.com/bitcoin/bitcoin/pull/21413>) to test a range of
> scenarios?

We're aligned here; I'd like to be as concrete as possible too. As an
L1/L2 dev, I just have a bunch of questions and don't pretend to have
clear answers for any of them yet, nor do I think my answers would be the
best ones. So maybe the first step is just tracking and explaining
problems better, hopefully avoiding wasting too many engineering hours on
could-be-enhanced solutions?

I'm actively working on better demonstrations and will share them soon.
That said, anyone interested in improving their own understanding of those
areas is free to make their own investigations :)

Cheers,
Antoine

On Mon, Apr 26, 2021 at 19:06, Gloria Zhao wrote:

> Hi Antoine,
>
> Thanks for initiating this! I'm interested in joining. Since I mostly live
> in L1, my primary goal is to understand what simplest version of package
> relay would be sufficient to support transaction relay assumptions made by
> L2 applications. For example, if a parent + child package covers the vast
> majority of cases and a package limit of 2 is considered acceptable, that
> could simplify things quite a bit.
>
> A small note - I believe package relay and sponsorship (or other
> fee-bumping primitive) should be separate discussions.
>
> Re: L2-zoology... In general, for the purpose of creating a stable API /
> set of assumptions between layers, I'd like to be as concrete as possible.
> Speaking for myself, if I'm TDDing for a specific L2 attack, I need test
> vectors. A simple description of mempool contents + p2p messages sent is
> fine, but pubkeys + transaction hex would be appreciated because we don't
> (and probably shouldn't, for the purpose of maintainability) have a lot of
> tooling to build L2 transactions in Bitcoin Core. In the other direction,
> it's hard to make any guarantees given the complexity of mempool policy,
> but perhaps it could be helpful to expose a configurable RPC (e.g. #21413
> <https://github.com/bitcoin/bitcoin/pull/21413>) to test a range of
> scenarios?
>
> Anyway, looking forward to discussions :)
>
> Best,
> Gloria
>
> On Fri, Apr 23, 2021 at 8:51 AM Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Hi,
>>
>> During the latest years, tx-relay and mempool acceptance rules of the

Re: [bitcoin-dev] Proposed BIP editor: Kalle Alm

2021-04-23 Thread Antoine Riard via bitcoin-dev
Hi Luke,

For the record, and for the subscribers of this list not following
#bitcoin-core-dev, this mail follows a discussion which happened during
yesterday's IRC meeting.
Logs here: http://gnusha.org/bitcoin-core-dev/2021-04-22.log

I'll reiterate the opinion I expressed during the meeting. If this
proposal to extend the BIP editorship membership doesn't satisfy the
parties involved or anyone in the community, I'm strongly opposed to
having the matter settled by admins of the Bitcoin GitHub org. I believe
that defects or uncertainty in the BIP Process shouldn't be resolved by GH
janitorial roles, and I don't think those roles bestow the authority to
intervene in case of loopholes. Further, far more contributors are
involved in the BIP Process than only Bitcoin Core ones. FWIW, such a
precedent would be quite similar to directly lobbying GH staff...

Unless we would harm Bitcoin users by not acting, I think we should always
be respectful of procedural forms. And in the absence of such forms, stay
patient until a solution satisfies everyone.

I would recommend the BIP editorship, whether extended or not, to move to
its own repository in the future.

Cheers,
Antoine




On Thu, Apr 22, 2021 at 22:09, Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Unless there are objections, I intend to add Kalle Alm as a BIP editor to
> assist in merging PRs into the bips git repo.
>
> Since there is no explicit process to adding BIP editors, IMO it should be
> fine to use BIP 2's Process BIP progression:
>
> > A process BIP may change status from Draft to Active when it achieves
> > rough consensus on the mailing list. Such a proposal is said to have
> > rough consensus if it has been open to discussion on the development
> > mailing list for at least one month, and no person maintains any
> > unaddressed substantiated objections to it.
>
> A Process BIP could be opened for each new editor, but IMO that is
> unnecessary. If anyone feels there is a need for a new Process BIP, we can
> go
> that route, but there is prior precedent for BIP editors appointing new
> BIP
> editors, so I think this should be fine.
>
> Please speak up soon if you disagree.
>
> Luke


Re: [bitcoin-dev] [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

Yes, dates are floating for now. After Bitcoin 2021 sounds like a good
idea.

Awesome, I'll be really interested to review an improved version of
sponsorship again. And I'll try to sketch out the sighash_no-input
fee-bumping idea which was floating around last year during pinning
discussions. Yet another set of trade-offs :)

On Fri, Apr 23, 2021 at 11:25, Jeremy wrote:

> I'd be excited to join. Recommend bumping the date to mid-June, if
> that's ok, as many Americans will be at Bitcoin 2021.
>
> I was thinking about reviving the sponsors proposal with a 100-block
> lock on spending a sponsoring tx, which would hopefully make it less
> controversial; this would be a great place to discuss those tradeoffs.
>
> On Fri, Apr 23, 2021, 8:17 AM Antoine Riard 
> wrote:
>
>> Hi,
>>
>> During the latest years, tx-relay and mempool acceptance rules of the
>> base layer have been sources of major security and operational concerns for
>> Lightning and other Bitcoin second-layers [0]. I think those areas require
>> significant improvements to ease design and deployment of higher Bitcoin
>> layers and I believe this opinion is shared among the L2 dev community. In
>> order to make advancements, it has been discussed a few times in the last
>> months to organize in-person workshops to discuss those issues with the
>> presence of both L1/L2 devs to make exchange fruitful.
>>
>> Unfortunately, I don't think we'll be able to organize such in-person
>> workshops this year (because you know travel is hard those days...) As a
>> substitution, I'm proposing a series of one or more irc meetings. That
>> said, this substitution has the happy benefit of gathering far more
>> folks interested in those issues than you can fit in a room.
>>
>> # Scope
>>
>> I would like to propose the following 4 items as topics of discussion.
>>
>> 1) Package relay design or another generic L2 fee-bumping primitive like
>> sponsorship [0]. IMHO, this primitive should at least solve mempool
>> spikes rendering obsolete the propagation of transactions with
>> pre-signed feerates, solve pinning attacks compromising
>> Lightning/multi-party contract protocol safety, offer a usable and
>> stable API to the L2 software stack, stay compatible with miner and
>> full-node operators' incentives, and obviously minimize CPU/memory DoS
>> vectors.
>>
>> 2) Deprecation of opt-in RBF toward full-rbf. Opt-in RBF makes it trivial
>> for an attacker to partition network mempools into divergent subsets and from
>> then launch advanced security or privacy attacks against a Lightning node.
>> Note, it might also be a concern for bandwidth bleeding attacks against L1
>> nodes.
>>
>> 3) Guidelines about coordinated cross-layers security disclosures.
>> Mitigating a security issue around tx-relay or the mempool in Core might
>> have harmful implications for downstream projects. Ideally, L2 projects
>> maintainers should be ready to upgrade their protocols in emergency in
>> coordination with base layers developers.
>>
>> 4) Guidelines about L2 protocols onchain security design. Currently
>> deployed protocols like Lightning are making a bunch of assumptions on
>> tx-relay and mempool acceptance rules. Those rules are non-normative,
>> non-reliable and
>> lack documentation. Further, they're devoid of tooling to enforce them at
>> runtime [2]. IMHO, it could be preferable to identify a subset of them on
>> which second-layers protocols can do assumptions without encroaching too
>> much on nodes's policy realm or making the base layer development in those
>> areas too cumbersome.
>>
>> I'm aware that some folks are interested in other topics such as
>> extension of Core's mempools package limits or better pricing of RBF
>> replacement. So I propose a 2-week consultation period to submit other
>> topics related to tx-relay or mempool improvements towards L2s before
>> proposing a finalized scope and agenda.
>>
>> # Goals
>>
>> 1) Reaching technical consensus.
>> 2) Reaching technical consensus, before seeking community consensus as it
>> likely has ecosystem-wide implications.
>> 3) Establishing a security incident response policy which can be applied
>> by dev teams in the future.
>> 4) Establishing a philosophy design and associated documentations (BIPs,
>> best practices, ...)
>>
>> # Timeline
>>
>> 2021-04-23: Start of consultation period
>> 2021-05-07: End of consultation period
>> 2021-05-10: Proposition of workshop agenda and schedule
>> late 2021-05/2021-06: IRC meetings
>>
>> As the problem space is savagely wide, I've started a collection of
>> documents to assist this workshop: https://github.com/ariard/L2-zoology
>> Still WIP, but I'll have them in good shape by agenda publication, with
>> reading suggestions and open questions to structure discussions.
>> I'm also working on transaction pinning and mempool partition attack
>> simulations.
>>
>> If L2s security/p2p/mempool is your jam, feel free to get involved :)
>>
>> Cheers,
>> Antoine
>>
>> [0] For e.g see optech section on 

[bitcoin-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Antoine Riard via bitcoin-dev
Hi,

During the latest years, tx-relay and mempool acceptance rules of the
base layer have been sources of major security and operational concerns
for Lightning and other Bitcoin second layers [0]. I think those areas
require significant improvements to ease the design and deployment of
higher Bitcoin layers, and I believe this opinion is shared among the L2
dev community. In order to make advancements, it has been discussed a few
times in the last months to organize in-person workshops to discuss those
issues with both L1 and L2 devs present, to make the exchange fruitful.

Unfortunately, I don't think we'll be able to organize such in-person
workshops this year (because you know, travel is hard these days...). As a
substitution, I'm proposing a series of one or more IRC meetings. That
said, this substitution has the happy benefit of gathering far more folks
interested in those issues than you can fit in a room.

# Scope

I would like to propose the following 4 items as topics of discussion.

1) Package relay design or another generic L2 fee-bumping primitive like
sponsorship [0]. IMHO, this primitive should at least solve mempool spikes
rendering obsolete the propagation of transactions with pre-signed
feerates, solve pinning attacks compromising Lightning/multi-party
contract protocol safety, offer a usable and stable API to the L2 software
stack, stay compatible with miner and full-node operators' incentives, and
obviously minimize CPU/memory DoS vectors.

2) Deprecation of opt-in RBF toward full-rbf. Opt-in RBF makes it trivial
for an attacker to partition network mempools into divergent subsets and from
then launch advanced security or privacy attacks against a Lightning node.
Note, it might also be a concern for bandwidth bleeding attacks against L1
nodes.

3) Guidelines about coordinated cross-layers security disclosures.
Mitigating a security issue around tx-relay or the mempool in Core might
have harmful implications for downstream projects. Ideally, L2 project
maintainers should be ready to upgrade their protocols in an emergency, in
coordination with base layer developers.

4) Guidelines about L2 protocols onchain security design. Currently
deployed protocols like Lightning are making a bunch of assumptions on
tx-relay and mempool acceptance rules. Those rules are non-normative,
non-reliable and
lack documentation. Further, they're devoid of tooling to enforce them at
runtime [2]. IMHO, it could be preferable to identify a subset of them on
which second-layers protocols can do assumptions without encroaching too
much on nodes's policy realm or making the base layer development in those
areas too cumbersome.

I'm aware that some folks are interested in other topics such as the
extension of Core's mempool package limits or better pricing of RBF
replacement. So I propose a 2-week consultation period to submit other
topics related to tx-relay or mempool improvements towards L2s before
proposing a finalized scope and agenda.

# Goals

1) Reaching technical consensus.
2) Reaching technical consensus, before seeking community consensus as it
likely has ecosystem-wide implications.
3) Establishing a security incident response policy which can be applied by
dev teams in the future.
4) Establishing a design philosophy and associated documentation (BIPs,
best practices, ...)

# Timeline

2021-04-23: Start of consultation period
2021-05-07: End of consultation period
2021-05-10: Proposition of workshop agenda and schedule
late 2021-05/2021-06: IRC meetings

As the problem space is savagely wide, I've started a collection of
documents to assist this workshop: https://github.com/ariard/L2-zoology
Still WIP, but I'll have them in good shape by agenda publication, with
reading suggestions and open questions to structure discussions.
I'm also working on transaction pinning and mempool partition attack
simulations.

If L2s security/p2p/mempool is your jam, feel free to get involved :)

Cheers,
Antoine

[0] For e.g see optech section on transaction pinning attacks :
https://bitcoinops.org/en/topics/transaction-pinning/
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
[2] Lack of reference tooling make it easier to have bug slip in like
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002858.html


Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-03-03 Thread Antoine Riard via bitcoin-dev
> I believe this is what BIP 60 does, or did you have something else in
> mind?

Right, it achieves the first goal of dissociating `fRelay` from BIP37, but
it doesn't document Core's specific behavior of disconnecting peers upon
reception of raw TX messages
from outbound block-relay-only peers, as implemented by PR 15759. I think
BIP 60 is as unclear as BIP37's "Whether the remote peer should announce
relayed transactions or not, see BIP 0037, since version >= 70001". A
first interpretation could be that all tx-relay messages are disabled. A
second interpretation could be that only _tx-announcement_ messages (e.g.
INV(TX)) are disabled.

It could be argued that #15759 introduced incompatible changes between a
Bitcoin Core 0.19.0 node and a BIP37-compliant peer on the p2p network.
Post-15759, the message space allowed to a BIP37 peer has been
reduced... Note that BIP60 isn't listed as implemented in
bitcoin/doc/bips.md.

I believe that BIP338 has the merit of making those subjects clear and
easy to follow for any Bitcoin software, instead of spawning discussions
around old, light-client-related BIPs, or around Core's undocumented
transaction-relay-disabling mechanism being de facto part of the p2p
protocol.

> Sorry - I meant that Bitcoin Core should allow a certain number of
> inbound peers that do not relay txs. This would be in addition to the
> full-relay inbound peers.

Yes, I agree on the purpose. But I don't think we need to "allow" further
disabled-tx peers in our inbound connection selection or eviction logic.
Turning a few bits in a protocol message sounds like too cheap a burden on
potential attackers, contrary to most of our current eviction heuristics,
which force some work ("announce transactions fast", "be located in some
subnet", "announce blocks fast"). Though this is better discussed later;
it's not the main point of your proposal.

Antoine

On Tue, Mar 2, 2021 at 07:22, John Newbery via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Antoine,
>
> Nothing in my proposal below precludes introducing a more comprehensive
> feature negotiation mechanism at some later date. The only changes I'm
> proposing are to Bitcoin Core's policy for how it treats its peer
> connections.
>
> > If we don't want to introduce a new message and
> > corresponding code changes, it would be wise at least to extract
> VERSION's
> > `fRelay` and how Core handles it in its own BIP.
>
> I believe this is what BIP 60 does, or did you have something else in
> mind?
>
> > Explicit addr-relay negotiation will offer more
> > flexibility
>
> I agree!
>
> > (and more hygienic code paths rather than triggering data
> > structures initialization in few different locations).
>
> Not sure what you mean by hygienic here. This seems like a code style
> preference.
>
> > Given inbound connections might be attacker-controlled and tx-relay
> opt-out
> > signaling is also attacker-controlled, wouldn't this give a bias toward
> an
> > attacker in occupying our inbound slots ? Compared to honest inbound
> peers,
> > which on average are going to be full-relay.
>
> Sorry - I meant that Bitcoin Core should allow a certain number of
> inbound peers that do not relay txs. This would be in addition to the
> full-relay inbound peers.
>
> John
>
> On Mon, Mar 1, 2021 at 11:11 PM Antoine Riard 
> wrote:
>
>> Hi John,
>>
>> > I think a good counter-argument against simply using `fRelay` for this
>> > purpose is that we shouldn't reuse a protocol feature designed for one
>> > function to achieve a totally different aim. However, we know that nodes
>> > on the network have been using `fRelay` to disable transaction relay
>> > since Bitcoin Core version 0.12 (when `-blocksonly` was added), and that
>> > usage was expanded to _all_ nodes running Bitcoin Core version 0.19 or
>> > later (when block-relay-only connections were introduced), so using
>> > `fRelay` to disable transaction relay is now de facto part of the p2p
>> > protocol.
>>
>>
>> I don't think this is good practice ecosystem-wise. To understand
>> tx-relay opt-out from peers correctly, a _non_ Bitcoin Core client has to
>> implement the `fRelay` subset of BIP37, but ignore the wider part around
>> FILTER* messages. Or implement those messages, only to disconnect peers
>> sending them, thus following BIP111 requirements.
>>
>> Thus, future developers of bitcoin software have the choice between
>> implementing a standard in a non-compliant way or implementing p2p messages
>> for a light client protocol that is on its way to deprecation? Even further, an
>> interpretation of BIP 37 ("Being able to opt-out of _inv_ messages until
>> the filter is set prevents a client being flooded with traffic in the brief
>> window of time") would make it okay to send TX messages to your inbound
>> block-relay-only peers. And that your client shouldn't be disconnected for
>> such behavior.
>>
>> In the long term, IMHO, better to have a well-defined standard with a
>> clean negotiation mechanism rather than relying on code specifics 

Re: [bitcoin-dev] Proposal for new "disabletx" p2p message

2021-03-01 Thread Antoine Riard via bitcoin-dev
Hi John,

> I think a good counter-argument against simply using `fRelay` for this
> purpose is that we shouldn't reuse a protocol feature designed for one
> function to achieve a totally different aim. However, we know that nodes
> on the network have been using `fRelay` to disable transaction relay
> since Bitcoin Core version 0.12 (when `-blocksonly` was added), and that
> usage was expanded to _all_ nodes running Bitcoin Core version 0.19 or
> later (when block-relay-only connections were introduced), so using
> `fRelay` to disable transaction relay is now de facto part of the p2p
> protocol.


I don't think this is good practice ecosystem-wise. To understand tx-relay
opt-out from peers correctly, a _non_ Bitcoin Core client has to implement
the `fRelay` subset of BIP37, but ignore the wider part around FILTER*
messages. Or implement those messages, only to disconnect peers sending
them, thus following BIP111 requirements.

Thus, future developers of bitcoin software have the choice between
implementing a standard in a non-compliant way or implementing p2p messages
for a light client protocol that is on its way to deprecation? Even further, an
interpretation of BIP 37 ("Being able to opt-out of _inv_ messages until
the filter is set prevents a client being flooded with traffic in the brief
window of time") would make it okay to send TX messages to your inbound
block-relay-only peers. And that your client shouldn't be disconnected for
such behavior.

In the long term, IMHO, better to have a well-defined standard with a
clean negotiation mechanism rather than relying on code specifics of a
given Bitcoin client. If we don't want to introduce a new message and
corresponding code changes, it would be wise at least to extract VERSION's
`fRelay`, and how Core handles it, in its own BIP.
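
To make the disconnection behavior explicit, a minimal sketch (simplified;
not Core's actual logic):

    # Post-#15759 behavior, roughly: if we negotiated fRelay=false on an
    # outbound block-relay-only connection, a raw TX message from that
    # peer is grounds for disconnection.
    def on_tx_message(peer) -> str:
        if peer["outbound_block_relay_only"]:   # we sent fRelay=false
            return "disconnect"
        return "process"

    print(on_tx_message({"outbound_block_relay_only": True}))  # disconnect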

> I think a better approach would be for Bitcoin Core to only relay addr
> records to an inbound peer if it has previously received an `addr` or
> `addrv2` message from that peer, since that indicates definitively that
> the peer actively gossips `addr` records. This approach was first
> suggested by AJ in the original block-relay-only PR[15].

If a node is willing to opt out of addr-relay from one of its inbound
peers, how is it supposed to do so? Of course, you can drop such messages
on the floor, but your peer is just going to waste bandwidth for nothing.
IIRC from past IRC p2p meetings, we're really unclear about what a
good-propagation-and-privacy-preserving addr-relay strategy should look
like. Note that distrusting your inbound peers with your addr-relay might
be a sane direction. Explicit addr-relay negotiation will offer more
flexibility (and more hygienic code paths, rather than triggering data
structure initialization in a few different locations).

> - update the inbound eviction logic to protect more inbound peers which
> do not have transaction relay data structures.

Given inbound connections might be attacker-controlled and tx-relay
opt-out signaling is also attacker-controlled, wouldn't this give an
attacker a bias toward occupying our inbound slots, compared to honest
inbound peers, which on average are going to be full-relay?

Cheers,
Antoine



On Mon, Mar 1, 2021 at 16:07, John Newbery via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi Suhas,
>
> Thank you for this proposal. I agree with your aims, but I think a new
> P2P message isn't necessary to achieve them.
>
> # Motivation
>
> There are two distinct (but interacting) motivations:
>
> 1. Allow a node to accept more incoming connections which will only be
>used for block propagation (no transaction relay or addr gossip),
>while minimizing resource requirements.
>
> 2. Prevent `addr` gossip messages from being sent to peers which will
>'black hole' those addrs (i.e. not relay them further).
>
> These motivations interact because if we simply increase the number of
> block-relay-only connections that nodes make without making any
> allowance for the fact those connections won't gossip addr records, then
> we'll increase the number of addr black holes and worsen addr gossip.
>
> # Using fRelay=false to signal no transaction relay.
>
> `fRelay` is an optional field in the `version` message. There are three
> BIPs concerned with `fRelay`:
>
> - BIP 37[1] introduced the `fRelay` field to indicate to the recipient
>   that they must not relay transactions over the connection until a
>   `filteradd` message has been received.
>
> - BIP 60[2] aimed to make the `fRelay` field mandatory. It is not clear
>   how widely this BIP has been adopted by implementations.
>
> - BIP 111[3] introduced a `NODE_BLOOM` service bit to indicate that
>   bloom filters are served by this node. According to this BIP, "If a
>   node does not support bloom filters but receives a "filterload",
>   "filteradd", or "filterclear" message from a peer the node should
>   disconnect that peer immediately."
>
> Within Bitcoin Core:
>
> - PR 1795[4] (merged in January 

Re: [bitcoin-dev] Proposal to stop processing of unrequested transactions in Bitcoin Core

2021-02-12 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

If I understand your concern correctly, you're worried that this change
would ease discovery of the node's tx-relay topology? I leave
transaction-origin inference out of scope: if you suppose the peer sending
the unrequested tx is the attacker, it must have learnt the transaction
from somewhere else, which is more likely to be the tx owner rather than
the probed node.

As far as I can think through this change, a peer might send an unrequested
transaction to this node and observe that it's either a) processed, meaning
the node has learnt about the txid from another peer, or b) rejected,
meaning the node has never learnt about the txid. The outcome can be
queried by sending a GETDATA for the "is-unrequested" txid, as in the
sketch below.
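
A rough sketch of that probe (illustrative Python pseudo-client;
`send_msg`/`wait_msg` are assumed helpers around a raw p2p connection, not
a real library API):

    # Hypothetical probe: send a tx without announcing it, then query it back.
    def probe_unrequested(conn, tx):
        conn.send_msg("tx", tx.serialize())   # never announced via INV
        conn.send_msg("getdata", tx.wtxid)
        reply = conn.wait_msg(["tx", "notfound"])
        # "tx" back => processed, the node already knew the txid from
        # another peer; "notfound" => it had never learnt about it.
        return reply.command == "tx"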

I think the same result can already be achieved by sending an INV and
observing whether a GETDATA is sent back, to guess the presence of another
peer that already knows the txid. Or alternatively, just connect
to this other peer and wait for an announcement.

What else can we think of ?

From my side, compared to the already-existing heuristics, I don't see how
this change eases an attacker's work. That said, I don't deny that our
transaction announcement/request logic is worthy of more study of its
privacy properties, especially when you consider the recent overhaul of
the transaction request logic and the upcoming Erlay changes.

Cheers,
Antoine

On Thu, Feb 11, 2021 at 4:15 PM, Pieter Wuille wrote:

>
> I'm not sure what the existing behavior is for when we issue a getdata
> request, but noting that there could be a privacy implication of this sort
> of change. Could you (or someone else) expand on why this is not a concern
> here?
>
>
> What kind of privacy concern are you talking about? I'm not sure I see how
> this could matter.
>
> Cheers,
>
> --
> Pieter
>
>


[bitcoin-dev] Proposal to stop processing of unrequested transactions in Bitcoin Core

2021-02-10 Thread Antoine Riard via bitcoin-dev
Hi,

I'm proposing to stop the processing of unrequested transactions in Bitcoin
Core 22.0+ at TX message reception. An unrequested transaction is one for
which a "getdata" message for its specific identifier (either txid or
wtxid) has not been previously issued by the node [0].

This change is motivated by reducing the CPU DoS surface of Bitcoin Core
around mempool acceptance. Currently, an attacker can open multiple inbound
connections to a node and send expensive-to-validate junk transactions.
Once the canonical INV/GETDATA sequence is enforced on the network, a
further protection would be to deprioritize bandwidth and validation
resource allocation, or even to drop connections with such DoSy peers. A
permissioned peer (PF_RELAY) will still be able to bypass such restrictions.
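
As a rough sketch of the intended behavior at TX message reception
(illustrative pseudocode, not the actual patch; see [0] for the real
implementation):

    # On receiving a TX message from `peer` (illustrative pseudocode).
    def on_tx_message(peer, tx):
        requested = (tx.txid in peer.sent_getdata or
                     tx.wtxid in peer.sent_getdata)
        if not requested and not peer.has_permission("relay"):  # PF_RELAY bypass
            return  # drop: unrequested transaction, skip mempool validation
        accept_to_mempool(tx)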

Raw TX message processing has always been tolerated by Core, and as such
some Bitcoin clients don't bother with an INV/GETDATA sequence. Such a
change will break their tx-relay capabilities on the p2p network and
require adaptation from them. Given the deployment time of any release, I
hope it provides a window of time wide enough before the old tx-processing
behavior becomes the minority.

Eager to gather feedback on this proposal, especially if such a change is
deemed too constraining or too fast for any Bitcoin software.

Cheers,
Antoine

[0] See https://github.com/bitcoin/bitcoin/pull/20277


Re: [bitcoin-dev] Hardware wallets and "advanced" Bitcoin features

2021-01-16 Thread Antoine Riard via bitcoin-dev
Hello Kevin,

Thanks for starting this thread, that's a really relevant discussion
ecosystem-wise !

> * Proposed improvement: The HW should display the Bitcoin Script itself
when possible (including the unlock conditions).

What level of script literacy are you assuming of your users? I can see
enterprise/hobbyist folks knowing enough Script to understand the
intended behavior, but I don't think that's a reasonable assumption for your
average user. Of course, Miniscript Policy makes things easier, but IMHO I
still hope to see some mature, higher-level language (e.g., Ivy) to ease
script semantic understanding and thus widen the crowd of users.

Further, I would do a bit of UX research on the correctness model expected
by your users. I.e., if they fail to verify accordingly, do they lose
funds, does the transaction not confirm, does it not even propagate,
etc.? You should also make assumptions about the mental resources you're
requiring from them. Time-sensitive L2 protocols have a wide scope to check,
e.g., not verifying the nSequence/nLocktime fields can provoke loss of funds.

> This applies to pre-signed transaction protocols especially well as the
template of these transactions could be known
and recognized by the HW. Typically for Revault, the HW could display:
"Unvault Transaction, all expected pubkeys
present in the script".

In the future, I would expect templates of high-security protocols like
vaults to be part of the trusted computing base of any decent HW. I think
good standards there would avoid HW vendors coming up with some kind of
certified-templates scheme and thus having to bless the custom scripts of
every vault implementation.

> Proposed improvement: The HW could know pubkeys or xpubs it does not hold
the private keys
for, and display a label (or
understand it for logic reasons, such as "expected pubkeys" as the previous
example).

I don't think you even need user input on this: the absence of pubkey
knowledge is itself a trigger to display a label or ask for further
information, where absence of pubkey knowledge can be interpreted as
absence of key whitelisting or privkey ownership.

> Going further, the xpubs could be
aliased the first time they are entered/verified (as part of, say, an
initial setup ceremony) for instance with the
previously mentioned Miniscript policy: or(pk(Alice), and(pk(Bob),
after(42))).

I would be careful about accidental or malicious alias collisions. But yes,
that can be something. You can even keep a merkle tree root in the
Secure Element, where the hashed elements are previously authenticated
aliases/pubkeys, and require the untrusted challenger to provide a merkle
branch to validate address inclusion.
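
For illustration, a minimal sketch of the merkle-branch check the Secure
Element would run (hashing and ordering conventions here are assumptions):

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # The SE only stores `root`; the untrusted host supplies the leaf (an
    # already-authenticated alias/pubkey) plus its merkle branch.
    def verify_inclusion(root, leaf, branch, leaf_is_left_flags) -> bool:
        acc = h(leaf)
        for sibling, leaf_is_left in zip(branch, leaf_is_left_flags):
            acc = h(acc + sibling) if leaf_is_left else h(sibling + acc)
        return acc == root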

> Then there is PSBT support and the maximum transaction size limit for
these: we need more transparency from HW manufacturers on their limitations.

I understand them; Script is full of subtleties, taproot is likely to have
more of them, and if you take sighash malleability, that's not something you
want your average user to play with. Maybe it would be better to come up
with a first wave of script features on which you expect transparency? For
sure, OP_CSV is a good candidate.

> Once any input of a (pre-signed)transaction is
spent, this transaction isn't valid anymore. Most pre-signed transactions
protocols are used today as a form of defense
mechanism, spending any input would mean incapacitating the entire defense
mechanism.

I don't see the exact issue here. E.g., in Lightning, even if you pre-sign a
justice transaction punishing every revocable output on the counterparty's
transaction, and one input is spent, will current HWs prevent you from
re-signing an updated justice transaction?

> I understand some of these changes may be very difficult, especially
given the low memory and computational power of
secure elements.

Instead of relying on hand-sized devices, what about relying on HSMs for a
first wave of adoption? Those have far more than enough resources to run a
reasonable L2 stack on the trusted side.

But overall I agree on the requirement to level-up HWs for L2. IMO, a first
step could be to list a common set of features across
deployed/soon-to-be-deployed L2s; that would give HW vendors a unique list
of grievances, before they engage in further, dedicated tweaks to adapt to
each protocol's security model. OP_CSV/OP_CLTV decoding and "burned"
standard script support would be a good starter.

> Feel free to reply with your comments or adding suggestions, I am not a
hardware wallet expert and would take criticism without being offended.

I don't know yet any *L2* hardware wallet expert :)

Cheers,
Antoine

On Thu, Jan 14, 2021 at 1:46 PM, Kevin Loaec via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hello everyone,
>
> I would like to start a discussion on improving Hardware Wallets.
>
> My approach to this right now is from a vault protocol we are developing
> (Revault, [1]), and its Hardware Wallet
> 

Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-22 Thread Antoine Riard via bitcoin-dev
Hello AC,

Yes, that's a real issue. In the context of multi-party protocols, you may
pre-sign transactions at the feerate of _today_ that are only going to be
broadcast later, at the feerate of _tomorrow_. In that case the pre-signed
feerate may be so low that the transaction won't even propagate across
network mempools whose local minimum feerate is higher.

That's why you want to be sure that the feerate of your package of
transactions (either sponsor+sponsoree or parent+CPFP) is going to be
evaluated as a whole to decide acceptance of each element of the package,
as in the sketch below.
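
Back-of-the-envelope illustration of that package evaluation (sketch; real
package relay would also need to handle limits, conflicts, etc.):

    # Judge parent+child (or sponsoree+sponsor) as a whole, not element by
    # element, against the mempool's minimum feerate.
    def package_feerate(txs):
        total_fee = sum(tx.fee for tx in txs)      # satoshis
        total_vsize = sum(tx.vsize for tx in txs)  # vbytes
        return total_fee / total_vsize

    # A 1 sat/vb pre-signed parent alone fails a 5 sat/vb mempool floor,
    # but parent + high-fee child evaluated together can clear it.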

Antoine


On Tue, Sep 22, 2020 at 3:28 AM, ArmchairCryptologist via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Not sure if I'm missing something, but I'm curious if (how) this will work
> if the sponsored transaction's feerate is so low that it has been largely
> evicted from mempools due to fee pressure, and is too low to be widely
> accepted when re-broadcast? It seems to me that the following requirement
>
> >1. The Sponsor Vector's entry must be present in the mempool
>
> means that you enter a catch-22 where the sponsor transaction cannot be
> broadcast because the sponsored transaction is not in the mempool, and the
> sponsored transaction cannot be (re-)broadcast because the fee is too low.
> This requirement might therefore need to be revised.
>
> There is of course no global mempool, but RBF by its nature would still
> work in this case, by replacing the transaction if it exists and inserting
> it if it does not.
>
> --AC


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-21 Thread Antoine Riard via bitcoin-dev
I think this is a worthy idea, as the funding outpoint of any off-chain
protocol is an invariant known by participants. Thus, by sponsoring an
outpoint, you're asking network mempools to increase the feerate of the
locally-known package without assuming anything about the concrete state,
as any state confirming moves the protocol forward.

That said, a malicious counterparty can still broadcast a heavy-weight
transaction such that an honest party, devoid of knowledge of this weight,
won't attach a sponsor with a fee high enough to timely confirm the
sponsoree. This counterparty capability is a function of the package
malleability allowed by the off-chain protocol.

Thus an honest party has to overshoot its bump as a default setting. Now
this is a new concern, as such a mechanism can be used by your counterparty
for fee-burning. I believe we want a fee-burning equilibrium for any
pinning solution: Mallet shouldn't be able to force Alice to overpay in
fees more than Mallet is ready to feerate-bid in network mempools.

> I don't think package relay based only on feerate solves RBF transaction
> pinning (and maybe also doesn't solve ancestor/dependent limit pinning).

Yes, I agree with this. There are some really nasty cases of pinning where
an adversary with knowledge of the tx-relay topology can block your
compelling feerate bids (sponsors/package relay/anchors, whatever) from
propagating, by leveraging conflicts and RBF logic.

Rotation of outbound tx-relay peers, which makes the tx-relay topology
harder to observe, could help.

Antoine

On Mon, Sep 21, 2020 at 12:27 PM, Jeremy wrote:

> Responses Inline:
>
> Would it make sense that, instead of sponsor vectors
>> pointing to txids, they point to input outpoints?  E.g.:
>>
>> 1. Alice and Bob open a channel with funding transaction 0123...cdef,
>>output 0.
>>
>> 2. After a bunch of state updates, Alice unilaterally broadcasts a
>>commitment transaction, which has a minimal fee.
>>
>> 3. Bob doesn't immediately care whether or not Alice tried to close the
>>channel in the latest state---he just wants the commitment
>>transaction confirmed so that he either gets his money directly or he
>>can send any necessary penalty transactions.  So Bob broadcasts a
>>sponsor transaction with a vector of 0123...cdef:0
>>
>> 4. Miners can include that sponsor transaction in any block that has a
>>transaction with an input of 0123...cdef:0.  Otherwise the sponsor
>>transaction is consensus invalid.
>>
>> (Note: alternatively, sponsor vectors could point to either txids OR
>> input outpoints.  This complicates the serialization of the vector but
>> seems otherwise fine to me.)
>>
>
> *This seems like a fine suggestion and I think addresses Antoine's issue.*
>
>
> *I think there are likely some cases where you do want TXID and not Output
> (e.g., if you *
>
> *are sponsoring a payment to your locktime'd cold storage wallet (no CPFP)
> from an untrusted third party (no RBF), they can grift you into paying for
> an unrelated payment). This isn't a concern when the root utxo is multisig
> & you are a participant.*
>
> *The serialization to support both, while slightly more complicated, can
> be done in a manner that permits future extensibility as well if there are
> other modes people require.*
>
>
>
>>
>> > If we want to solve the hard cases of pinning, I still think mempool
>> > acceptance of a whole package only on the merits of feerate is the
>> easiest
>> > solution to reason on.
>>
>> I don't think package relay based only on feerate solves RBF transaction
>> pinning (and maybe also doesn't solve ancestor/dependent limit pinning).
>> Though, certainly, package relay has the major advantage over this
>> proposal (IMO) in that it doesn't require any consensus changes.
>> Package relay is also very nice for fixing other protocol rough edges
>> that are needed anyway.
>>
>> -Dave
>>
>
> *I think it's important to keep in mind this is not a rival to package
> relay; I think you also want package relay in addition to this, as they
> solve different but related problems.*
>
>
> *Where you might be able to simplify package relay with sponsors is by
> doing a sponsor-only package relay, which is always limited to 2
> transactions, 1 sponsor, 1 sponsoree. This would not have some of the
> challenges with arbitrary-package package-relay, and would (at least from a
> ux perspective) allow users to successfully get parents with insufficient
> fee into the mempool.*
>
>
>
>
>


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-20 Thread Antoine Riard via bitcoin-dev
Right, I was off the mark. Thanks for the explanation.

As you mentioned, if the goal of the sponsor mechanism is to let any party
drive a state N's first tx to completion, you still have the issue of
concurrent states being pinned, and thus non-observable for sponsoring by an
honest party.

E.g., Bob can broadcast a thousand revoked LN states and pin them with
low-feerate sponsors, such that these malicious packages' absolute fees are
higher than the honest state N's. Alice can't fee-sponsor
them, as we can assume she doesn't have a global view of network mempools.
Due to the proposed policy rule "The Sponsor Vector's entry must be present
in the mempool", Alice's sponsors won't propagate. Even amending this rule,
we can't assume Alice has a thousand sponsoring utxos to avoid conflicts
between her own broadcasts.

Of course, offchain protocol designers can limit a participant's
capability to construct a pinning package by constraining its malleability,
and thus always have a compelling feerate. E.g., in Lightning you can bound
the size of a commitment transaction by refusing relayed HTLCs and thus
have fewer HTLC outputs. This security increase comes at the price of less
protocol flexibility, e.g., reduced payment throughput.

Further, a malicious counterparty can still take advantage of
mempool-congestion spikes. Even if the pinning package has a compelling
feerate, high enough to bounce off an honest broadcast, there is no
guarantee it stays so. Just after the pinning, congestion can increase
and bury it long enough for a timelock to expire.

If we want to solve the hard cases of pinning, I still think mempool
acceptance of a whole package only on the merits of feerate is the easiest
solution to reason about.

On Sat, Sep 19, 2020 at 3:46 PM, Jeremy wrote:

> Antoine,
>
> Yes I think you're a bit confused on where the actual sponsor vector is.
> If you have a transaction chain A->B->C and a sponsor S_A, S_A commits to
> txid A and A is unaware of S.
>
>
> W.r.t your other points, I fully agree that the 1-to-N sponsored case is
> very compelling. The consensus rules are clear that sponsor commitments are
> non-rival, so there's no issue with allowing as many sponsors as possible
> and including them in aggregate. E.g., if S_A and S'_A both sponsor A with
> feerate(S*) > feerate(A), there's no reason not to include all of them in a
> block. The only issue is denial of service in the mempool. In the future,
> it would definitely be desirable to figure out rules that allow mempools to
> track both multiple sponsors and multiple sponsor targets. But in the
> interest of KISS, the current policy rules are designed to be minimally
> invasive and maximally functional.
>
> In terms of location for the sponsor vector, I'm relatively indifferent.
> The annex is a possible location, but it's a bit odd as we really only need
> to allow one such vector per tx, not one per input, and one per input would
> enable some new use cases (maybe good, maybe bad). Further, being in the
> witness space would mean that if two parties create a 2 input transaction
> with a desired sponsor vector they would both need to specify it as you
> can't sign another input's witness data. I wholeheartedly agree with the
> sentiment though; there could be a more efficient place to put this data,
> but nothing jumps out to me as both efficient and simple in implementation
> (a new tx-level field sounds like a lot of complexity).
>
>
> > n >=1 ? I think you can have at least one vector and this is matching
> the code
>
> yes, this has been fixed in the gist (cred to Dmitry Petukhov for pointing
> it out first), but is correct in the code. Thank you for your careful
> reading.
>
>


Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread Antoine Riard via bitcoin-dev
EDIT: I misunderstood the placement of the sponsor vector; please
disregard my previous review :( Except for where the convenient place for
it should live, which is still accurate I think.

> The
> Sponsor Vector TXIDs  must also be
> in the block the transaction is validated in, with no restriction on
> order or on specifying a TXID
> more than once.


On Sat, Sep 19, 2020 at 2:39 PM, Antoine Riard wrote:

> Hi Jeremy,
>
> This is a really interesting proposal to widen the scope of fee
> mechanisms.
>
> First, a wider point on what this proposal brings with regards to pinning,
> to the best of my knowledge.
>
> A pinning may have different vectors by exploiting a) mempools limits (e.g
> descendants) or b) mempools absolute-fee/feerate/conflicts logic. The lack
> of a global mempool means you can creatively combine them to provoke
> mempools-partitions [0]
>
> As far as I understand this proposal, it aims to solve the class a) of
> pinnings by allowing fee-bumping with a new definition of dependencies. I'm
> not sure it achieves this, as the Sponsor Vector TXIDs being committed
> in the Sponsoree signature hash means the Sponsor feerate is part of this
> commitment and can't be unilaterally adjusted to actual mempool-congestion.
>
> After broadcasting the Sponsor/Sponsoree pair, mempools feerate may
> increase again and thus obsoleting the previous fee-bump. Or you need a
> Sponsor Vector for every blockspace feerate, in the worst-case bound by the
> value of the Sponsoree funds.
>
> Further, I would say this proposal won't solve class b) of pinnings for
> multi-party time-sensitive protocols without further modifications. E.g in
> a LN-channel, assuming the commitment transaction is the Sponsoree, Alice
> the honest party can't increase the Sponsor feerate by malleating its
> outputs without breaking the sponsoring dependency, and thus can't evict
> Bob's malicious pin across network mempools.
>
> I think a further softfork proposal with regards to sighash malleability
> is needed to achieve the security semantic for Lightning type of protocols.
> Roughly, a SIGHASH_IOVECTOR allows N-inputs to commit to N-outputs, thus
> committing to all the balance/HTLC outputs minus the last output Vector,
> non-interactively malleable by channel participants. This would be a form
> of transaction finalization delegation, allowing Alice to direct the
> Sponsor vector to a good-feerate adjusted transaction.
>
> Note, I may have misunderstood completely the proposal as the feerate
> observed might be the Sponsor _package_ one and each party could have a
> pair of outputs to spend from to non-interactively increase the Sponsoree.
> Though sounds like re-introducing the limits issues...
>
> That said, see following review points.
>
> > This is insufficient because if new attacks are found, there is
> > limited ability to deploy fixes for
> > them against deployed contract instances (such as open lightning
> > channels). What is required is a
> > fully abstracted primitive that requires no special structure from an
> > underlying transaction in
> > order to increase fees to confirm the transactions.
>
> This is really true, in case of vulnerability discovered mass closing of
> the channel would be in itself a concern as it would congest mempools and
> open to looter behaviors [1]. Though I don't think a special structure can
> claim covering every potential source of vulnerability for  off-chain
> protocols as some of them might be tx-relay based (e.g reject-filters for
> segwit txn).
>
> Further, a "fully abstracted primitive" is loosely defined, one could
> argue that anchor outputs don't require special structure from an
> underlying transaction (i.e on the order of outputs ?).
>
> >  where
> n>1, it is interpreted as a vector of TXIDs (Sponsor Vector).
>
> n >= 1? I think you can have at least one TXID in the vector, and this is
> matching the code
>
> > If there is another convenient place to put the TXID vector, that's fine
> too.
>
> You might use the per-input future Taproot annex, and even apply a witness
> discount as this mechanism could be argued to be less blockspace expensive
> than a CPFP for the same semantic.
>
> An alternative could be a new transaction field like a new `stxid` :
>
>
> `[nVersion][marker][flag][txins][txouts][witness][nLockTime][nSponsor][nVersion][n*STXID]`
>
> It would be cheaper as you likely save the output amount size and OP_VER.
> And you don't have to subtract a dust output + 1 from the other output
> amount to make sure the Sponsor output meets dust propagation requirements.
>
> Though it's more demanding on the tx-relay layer (new serialization and
> transaction identifier) and new a version bump of the signature digest algo
> to avoid a third-party malleating the per-transaction sponsor field
>
> > To prevent garbage sponsors, we also require that:
>
> Does the reverse hold ? Garbage Sponsoree by breaking the dependency and
> double-spending the utxo spent by the Sponsor and thus decreasing
> Sponsoree's 

Re: [bitcoin-dev] A Replacement for RBF and CPFP: Non-Destructive TXID Dependencies for Fee Sponsoring

2020-09-19 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

This is a really interesting proposal to widen the scope of fee mechanisms.

First, a wider point on what this proposal brings with regards to pinning,
to the best of my knowledge.

A pinning may have different vectors by exploiting a) mempools limits (e.g
descendants) or b) mempools absolute-fee/feerate/conflicts logic. The lack
of a global mempool means you can creatively combine them to provoke
mempools-partitions [0]

As far as I understand this proposal, it aims to solve the class a) of
pinnings by allowing fee-bumping with a new definition of dependencies. I'm
not sure it achieves this, as the Sponsor Vector TXIDs being committed
in the Sponsoree signature hash means the Sponsor feerate is part of this
commitment and can't be unilaterally adjusted to actual mempool congestion.

After broadcasting the Sponsor/Sponsoree pair, mempool feerates may
increase again, thus obsoleting the previous fee-bump. Or you need a
Sponsor Vector for every blockspace feerate, in the worst case bounded by
the value of the Sponsoree funds.

Further, I would say this proposal won't solve class b) of pinnings for
multi-party time-sensitive protocols without further modifications. E.g.,
in an LN channel, assuming the commitment transaction is the Sponsoree,
Alice the honest party can't increase the Sponsor feerate by malleating its
outputs without breaking the sponsoring dependency, and thus can't evict
Bob's malicious pin across network mempools.

I think a further softfork proposal with regard to sighash malleability is
needed to achieve the desired security semantics for Lightning-type
protocols. Roughly, a SIGHASH_IOVECTOR would allow N inputs to commit to N
outputs, thus committing to all the balance/HTLC outputs minus the last
output vector, non-interactively malleable by channel participants. This
would be a form of transaction finalization delegation, allowing Alice to
direct the Sponsor vector to a good-feerate-adjusted transaction.

Note, I may have completely misunderstood the proposal, as the feerate
observed might be the Sponsor _package_ one, and each party could have a
pair of outputs to spend from to non-interactively increase the Sponsoree.
Though that sounds like re-introducing the limits issues...

That said, see following review points.

> This is insufficient because if new attacks are found, there is
> limited ability to deploy fixes for
> them against deployed contract instances (such as open lightning
> channels). What is required is a
> fully abstracted primitive that requires no special structure from an
> underlying transaction in
> order to increase fees to confirm the transactions.

This is really true; in case of a discovered vulnerability, mass closing of
channels would be in itself a concern, as it would congest mempools and
invite looter behaviors [1]. Though I don't think a special structure can
claim to cover every potential source of vulnerability for off-chain
protocols, as some of them might be tx-relay based (e.g., reject-filters
for segwit txn).

Further, a "fully abstracted primitive" is loosely defined; one could argue
that anchor outputs don't require a special structure from an underlying
transaction (i.e., beyond the order of outputs?).

>  where
n>1, it is interpreted as a vector of TXIDs (Sponsor Vector).

n >= 1? I think you can have at least one TXID in the vector, and this
matches the code.

> If there is another convenient place to put the TXID vector, that's fine
too.

You might use the per-input future Taproot annex, and even apply a witness
discount, as this mechanism could be argued to be less blockspace-expensive
than a CPFP for the same semantics.

An alternative could be a new transaction field like a new `stxid` :

`[nVersion][marker][flag][txins][txouts][witness][nLockTime][nSponsor][nVersion][n*STXID]`

It would be cheaper, as you likely save the output amount size and the
OP_VER. And you don't have to subtract a dust output + 1 from the other
output amount to make sure the Sponsor output meets dust propagation
requirements.

Though it's more demanding on the tx-relay layer (new serialization and
transaction identifier) and requires a new version bump of the signature
digest algo to avoid a third party malleating the per-transaction sponsor
field.
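
To make that layout concrete, an illustrative serialization of only the
trailing sponsor fields (field widths and the compact count are my
assumptions on top of the sketch above, not a spec):

    import struct

    # Trailing fields of the hypothetical layout: [nSponsor][nVersion][n*STXID]
    def sponsor_fields(n_version, stxids):
        assert all(len(t) == 32 for t in stxids) and len(stxids) < 253
        out = bytes([len(stxids)])           # nSponsor: compact count of txids
        out += struct.pack("<I", n_version)  # nVersion: 4-byte little-endian
        for t in stxids:
            out += t                         # 32-byte sponsored txid
        return out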

> To prevent garbage sponsors, we also require that:

Does the reverse hold? A garbage Sponsoree, breaking the dependency by
double-spending the utxo spent by the Sponsor and thus decreasing the
Sponsoree's feerate to the mempool bottom? AFAIK you can't do this with CPFP.

> rational miners may wish to permit multiple sponsor
> targets, or multiple sponsoring
> transactions,

I'm not sure if your policy sketch prevents multiple
1-Sponsor-to-N-Sponsorees. Such a scheme would have some edges: a mempool
might receive the Sponsorees in a different order than evaluated by the
original sender and thus allocate the Sponsor feerate to the less-urgent
Sponsoree.

> This is treated as a separate
> concern, as any strides on
> package relay generally should be able to support sponsors 

Re: [bitcoin-dev] Detailed protocol design for routed multi-transaction CoinSwap

2020-09-05 Thread Antoine Riard via bitcoin-dev
Hi Zeeman,

I think one of the general problems for any participant in an
interdependent chain of contracts like Lightning or CoinSwap is to avoid a
disequilibrium in its local HTLC ledger: concretely, sending forward more
than you receive backward. W.r.t. this, timelock deltas aim to enforce an
order of events, namely that a forward contract must be terminated before
any backward contract, to avoid a discrepancy in settlement. Order of
events can be enforced a) by absolute timelocks, and thus linearized on the
same scale by blockchain ticks, or b) by a counterparty to two
relative-timelocked contracts which observes the broadcast of the backward
transaction and thus manually triggers the kickoff of the forward timelock
by broadcasting the corresponding transaction. A toy check of the ordering
constraint is sketched below.
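
A toy sketch of that per-hop ordering constraint with absolute timelocks
(numbers arbitrary):

    # Each hop's forward (outgoing) contract must expire enough blocks
    # before its backward (incoming) one to leave time to react onchain.
    def check_hop(incoming_expiry, outgoing_expiry, delta):
        return incoming_expiry >= outgoing_expiry + delta

    # e.g. incoming expiry 640144, outgoing expiry 640000, delta 144 blocks
    assert check_hop(640144, 640000, 144)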

With this rough model in mind, pinning an absolute or relative timelocked
transaction produces the same effect, i.e., breaking contract settlement
order.

> This can be arranged by having one side offer partial signatures for the
transaction of the other, and once completing the signature, not sharing it
with the other until we are ready to actually broadcast the transaction of
our own volition.
> There is no transaction that both participants hold in completely-signed
form

I don't think that's different from the current model, where you have either
a valid HTLC-timeout or HTLC-success tx to solve an HTLC output, but never
the full witness material to build both?

I see a theoretical issue with an RBF range: if you're likely to lose the
balance, you can broadcast your highest-RBF version, thus incentivizing
miners to censor the counterparty's claim tx. Kind of a "nothing at stake"
issue. As of today, you have to take this fee out of your own pocket if you
want to incentivize miners to act so, not promise a fee from an ongoing
disputed balance.

> Private key turnover is still useful even in an absolute-timelock world.

The way I understand the either-HTLC-or-private-key-turnover construction
in CoinSwap is for the HTLC to serve as a security backup in case the
cooperative key turnover fails. Lightning doesn't have this model, as you
don't switch funding transaction ownership.

> To reduce this risk, A can instead first swap A->B->A, then when that
completes, A->C->A.
This limits its funding lockup to 1 week.

Okay, I think I understand your point. So by intermediating the chain with
the taker you ensure that, in case of a previous hop failure, taker funds
are only timelocked for the delta of this faulting hop, not the whole route.
But still, not anchoring the next route segment onchain means that at any
moment the next maker can exit from the proposed position?

That's interesting. So a) you require all takers to lock their funds
onchain before initiating the whole routing, and you pay more in
service fees, or b) you only lock them step by step, but you increase the
risk of next-hop default and thus latency. Roughly.

It might be an interesting construction to explore on its own, minus the
downside of producing weird spend patterns due to the next-hop maker
bidding with another party.

Cheers,

Antoine

On Mon, Aug 24, 2020 at 11:16 PM, ZmnSCPxj wrote:

>
> Good morning Antoine,
>
>
> > Note, I think this is independent of picking up either relative or
> absolute timelocks as what matters is the block delta between two links.
>
> I believe it is quite dependent on relative locktimes.
> Relative locktimes *require* a contract transaction to kick off the
> relative locktime period.
> On the other hand, with Scriptless Script (which we know how to do with
> 2p-ECDSA only, i.e. doable pre-Taproot), absolute locktimes do not need a
> contract transaction.
>
> With absolute locktimes + Scriptless SCript, in a single onchain PTLC, one
> participant holds a completely-signed timelock transaction while the other
> participant holds a completely-signed pointlock transaction.
> This can be arranged by having one side offer partial signatures for the
> transaction of the other, and once completing the signature, not sharing it
> with the other until we are ready to actually broadcast the transaction of
> our own volition.
> There is no transaction that both participants hold in completely-signed
> form.
>
> This should remove most of the shenanigans possible, and makes the 30xRBF
> safe for any range of fees.
> I think.
>
> Since for each PTLC a participant holds only its "own" transaction, it is
> possible for a participant to define its range of fees for the RBF versions
> of the transaction it owns, without negotiation with the other participant.
> Since the fee involved is deducted from its own transaction, each
> participant can define this range of RBFed fees and impose it on the
> partial signatures it gets from the other participant.
>
> --
>
> Private key turnover is still useful even in an absolute-timelock world.
>
> If we need to bump up the block delta between links, it might be
> impractical to have the total delta of a multi-hop swap be too long at the
> taker.
>
> As a concrete example, 

Re: [bitcoin-dev] Detailed protocol design for routed multi-transaction CoinSwap

2020-09-05 Thread Antoine Riard via bitcoin-dev
Hi Chris,

I forgot to underscore that the contract transaction output must be
encumbered by at least a CSV of 1. Otherwise, a malicious counterparty can
occupy with garbage both the timelock-or-preimage output and its own anchor
output, thus blocking you from using the bumping capability of your own
anchor output.

Apart from this, I think it works.

> Another possible fix for both vulnerabilities is to separate the
> timelock and hashlock cases into two separate transactions as described
> by ZmnSCPxj in a recent email to this list. This comes at the cost of
> breaking private key handover allowing coins to remain unspent
indefinitely.

This works too, assuming these second-stage transactions aren't malleable at
all (e.g., SIGHASH_SINGLE). Otherwise you can increase their
feerate/absolute fee and you're back to the initial situation.

Beyond that, note also that anchors on second-stage transactions are
riskier here, as otherwise your counterparty can again attach a low-feerate
child. In case of concurrent broadcast (assuming you haven't managed to
claim the output before timelock expiration due to network
outage/mempool congestion) you might not see your counterparty's version.
I.e., your local mempool has the timelock tx, the rest of the network has
the hashlock one, and your CPFP bump won't propagate, being an orphan.

So you're left with an RBF range, which is mostly okay, minus a theoretical
concern: a party guessing that the odds of losing the balance are high can
broadcast/send out-of-band the highest-fee bound to miners, thus
incentivizing them to censor an honest, low-fee preimage tx. A
"nothing-at-stake-for-a-genuinely-evil-counterparty" issue.

> Another possible fix for the second attack, is to encumber the output
> with a `1 OP_CSV` which stops that output being spent while unconfirmed.
> This seems to be the simplest way if your aim is to only fix the second
> attack.

Yes, you don't get package fee malleability (e.g., with something like
prefixing the output script with "1 OP_CHECKSEQUENCEVERIFY OP_DROP" so no
unconfirmed child can be attached), so an honest party can always
unilaterally bump the feerate and override concurrent bids.


That said, I would lean towards anchors and thus unilateral fee bumping.
Feerate interactivity in a multi-party protocol should be seen as an
oracle leaking which full-node a participant runs. By sending a range of
conflicting transactions with different feerates to a set of network
mempools, I could theoretically observe variations in the announced
protocol feerate.

I would recommend you have a look at this paper, if you haven't already:
https://arxiv.org/pdf/2007.00764.pdf, the first one analyzing privacy
holistically across Bitcoin layers.

Cheers,

Antoine

On Sat, Aug 29, 2020 at 6:03 PM, Chris Belcher wrote:

> Hello Antoine,
>
> Thanks for the very useful insights.
>
> It seems having just one contract transaction which includes anchor
> outputs in the style already used by Lightning is one way to fix both
> these vulnerabilities.
>
> For the first attack, the other side cannot burn the entire balance
> because they only have access to the small amount of satoshi of the
> anchor output, and to add miner fees they must add their own inputs. So
> they'd burn their own coins to miner fees, not the coins in the contract.
>
> For the second attack, the other side cannot do transaction pinning
> because there is only one contract transaction, and all the protections
> already developed for use with Lightning apply here as well, such as
> CPFP carve out.
>
>
> Another possible fix for both vulnerabilities is to separate the
> timelock and hashlock cases into two separate transactions as described
> by ZmnSCPxj in a recent email to this list. This comes at the cost of
> breaking private key handover allowing coins to remain unspent
> indefinitely.
>
> Another possible fix for the second attack, is to encumber the output
> with a `1 OP_CSV` which stops that output being spent while unconfirmed.
> This seems to be the simplest way if your aim is to only fix the second
> attack.
>
>
> These are all the possible fixes I can think of.
>
> Regards
> Chris
>
> On 24/08/2020 20:30, Antoine Riard wrote:
> > Hello Chris,
> >
> > I think you might have vulnerability issues with the current design.
> >
> > With regards to the fee model for contract transactions, AFAICT timely
> > confirmation is a fund safety matter for an intermediate hop. Between the
> > offchain preimage reveal phase and the offchain private key handover
> phase,
> > the next hop can broadcast your outgoing contract transactions, thus
> > forcing you to claim quickly backward as you can't assume previous hop
> will
> > honestly cooperate to achieve the private key handover. This means that
> > your range of pre-signed RBF-transactions must theoretically have for fee
> > upper bound the maximum of the contested balance, as game-theory side,
> it's
> > rational to you to burn your balance instead of letting your counterparty
> > claim it after timelock expiration, in face of mempool congestion. Where
> > the issue dwells is that this fee is pre-committed and not cancelled when
> > the 

Re: [bitcoin-dev] Detailed protocol design for routed multi-transaction CoinSwap

2020-08-24 Thread Antoine Riard via bitcoin-dev
Hello Chris,

I think you might have vulnerability issues with the current design.

With regard to the fee model for contract transactions, AFAICT timely
confirmation is a fund-safety matter for an intermediate hop. Between the
offchain preimage reveal phase and the offchain private key handover phase,
the next hop can broadcast your outgoing contract transactions, thus
forcing you to claim quickly backward, as you can't assume the previous hop
will honestly cooperate to achieve the private key handover. This means
that your range of pre-signed RBF transactions must theoretically have as
fee upper bound the maximum of the contested balance, as, game-theory-wise,
it's rational for you to burn your balance instead of letting your
counterparty claim it after timelock expiration, in the face of mempool
congestion. Where the issue dwells is that this fee is pre-committed and
not cancelled when the balance changes ownership by the outgoing hop
learning the preimage of the hashlock output. Thus the previous hop is free
to broadcast the highest-fee RBF transaction and burn your balance, as for
him, his balance is now encoded in the output of the contract transaction
on the previous link, for which he knows the preimage. A sketch of such a
pre-signed fee ladder follows below.
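
Illustrative sketch of such a pre-signed RBF fee ladder, with the contested
balance as the theoretical upper bound (values arbitrary):

    # Pre-sign a ladder of versions of the contract tx, each paying more
    # absolute fee, up to (roughly) the contested balance itself.
    def fee_ladder(min_fee, contested_balance, steps):
        stride = (contested_balance - min_fee) // (steps - 1)
        return [min_fee + i * stride for i in range(steps)]

    # e.g. 30 pre-signed versions from 1,000 sats up to a 1,000,000-sat balance
    print(fee_ladder(1_000, 1_000_000, 30))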

Note, I think this is independent of picking either relative or absolute
timelocks, as what matters is the block delta between two links. Of course
you can increase this delta to be week-long and thus decrease the need
for a compelling fee, but a) you may be forced to close quickly with
contract transactions if the private key handover doesn't happen soon (you
don't want to be caught by surprise by congestion, so you would close far
ahead of delta period expiration, like half of it), and b) you increase the
time-value of makers' funds in case of a faulty hop, thus logically
increasing the maker fee and making the cost of the system higher on
average. I guess a better solution would be to use dual-anchor outputs as
spec'ed out by Lightning: it lets the party who has a balance at stake
unilaterally increase the feerate with a CPFP. The CPFP obviously has a
higher blockchain cost, but a) it's a safety mechanism for a worst-case
scenario, 99% of the time it won't be committed, and b) you might use this
CPFP to aggregate change outputs or for other opportunistic side-usage.

With regard to the preimage release phase, I think you might have a
pinning scenario. The victim would be an intermediate hop, targeted by a
malicious taker. The preimage isn't revealed offchain to this victim hop. A
low-feerate version of the outgoing contract transaction is broadcast and
not going to confirm, assuming a bit of congestion. As the preimage is
known, the malicious taker can directly attach a high-fee, low-feerate
child transaction and thus prevent any replacement of the pinned parent by
an honest broadcast of a high-fee RBF transaction under BIP 125 rules. At
the same time, the malicious taker broadcasts the contract tx on the
previous link and gets it confirmed. At relative timelock expiration, the
malicious taker claims back the funds. When the pinned transaction spending
the outgoing link gets evicted (either by replacing the child with a
higher-feerate one or waiting for mempool expiration after 2 weeks), the
taker gets it confirmed this time and claims the output through the
hashlock. Given the relative timelock blocking the victim, there is not
even a race. The arithmetic behind the pin is sketched below.
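
Toy illustration of the BIP 125 absolute-fee rule (rule 3) doing the
pinning work here:

    # A replacement must pay more in *absolute* fees than everything it
    # evicts, not just offer a better feerate.
    pinned_parent = {"fee": 500, "vsize": 300}      # low-feerate contract tx
    junk_child = {"fee": 50_000, "vsize": 100_000}  # high-fee, low-feerate pin
    honest_rbf = {"fee": 5_000, "vsize": 300}       # compelling feerate, but...

    must_beat = pinned_parent["fee"] + junk_child["fee"]
    print(honest_rbf["fee"] > must_beat)  # False: replacement rejected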

I guess restraining the contract transaction to one and only one version
would overcome this attack. An honest intermediate hop, as soon as it sees
a relative timelock triggered backward, would immediately broadcast the
outgoing link contract tx, or, if it's already in network mempools,
broadcast a higher-feerate child. As you don't have multiple valid contract
transactions, an attacker can't obstruct you from propagating the correct
child, as you are not blind to the parent txid.

Lastly, one downside of using relative timelocks: in case of one downstream
link failure, it forces every other upstream hop to go onchain to protect
against this kind of pinning scenario. And this would be a privacy
breakdown, as a maker would be able to provoke one, thus constraining every
upstream hop to go onchain with the same hash and revealing the CoinSwap
route.

Let me know if I reviewed the correct transaction circuit model or
misunderstood the associated semantics. I might be completely wrong, coming
from a LN perspective.

Cheers,
Antoine

On Tue, Aug 11, 2020 at 1:06 PM, Chris Belcher via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'm currently working on implementing CoinSwap (see my other email
> "Design for a CoinSwap implementation for massively improving Bitcoin
> privacy and fungibility").
>
> CoinSwaps are special because they look just like regular bitcoin
> transactions, so they improve the privacy even for people who do not use
> them. Once CoinSwap is deployed, anyone attempting surveillance of
> bitcoin transactions will be forced to ask themselves the question: how
> do we know this transaction 

[bitcoin-dev] Advances in Bitcoin Contracting : Uniform Policy and Package Relay

2020-07-29 Thread Antoine Riard via bitcoin-dev
Hi list,

Security and operations of higher-layer protocols (vaults, LN, CoinJoin,
watchtowers, ...) come with different assumptions and demands with regard
to tx-relay and fee models. As the Bitcoin stack is quite young, it would
be great to make those better understood, along with what p2p/mempool
changes we might adopt at the base layer to better answer them. I would
like to explore this in the current post.

### Time-Sensitive Protocols Security-Model (you can skip this if you know
LN)

Lightning, the most deployed time-sensitive protocol as of now, relies on
the timely confirmation of some of its transactions to enforce its
security model: timing out an outgoing HTLC, claiming an incoming HTLC, or
punishing a revoked commitment. Ensuring timely confirmation is
two-fold: a) propagating transactions well across the network to quickly
hit miner mempools, and b) offering a competitive feerate to get into the
next coming blocks.

Updating the feerate just-in-time is quite challenging for LN, as you can't
re-sign a commitment once your counterparty is non-responsive or malicious,
and thus any fee strategy assuming interactivity is closed off. With the
current constraint of maintaining a trustless chain of transactions (no
Parent-Pays-For-Child), the only option is CPFP. The ongoing update of the
LN protocol (anchor outputs) will allow a channel participant to
unilaterally bump the feerate of its commitment/HTLC txn, assuming there
are no _adversarial_ network mempool conditions like a concurrent broadcast.

Beyond enforcing the need to secure its funds by bumping the feerate, an
offchain user might be willing to accelerate confirmation of a broadcast
for liquidity management in the face of mempool congestion. This issue is
likely shared by any multi-party protocol like CoinJoin, where re-signing
is painful and a party may have different liquidity preferences than the
other participants and would like to express them with a unilateral fee
bump.

### Effective Transaction Propagation and Uniform Relay Policy

Even before competing on feerate, the first and foremost point of the
laid-out security model is the good propagation of transactions across the
p2p network. Its effectiveness is determined by compliance with 1) consensus
rules and 2) policy rules. This second set is a tighter one, governing
different aspects of your transactions (like size, output type, feerate,
ancestors/descendants, ...) and introduced to sanitize the p2p network
against a wide scope of resource abuses (RBF bandwidth waste, package
evaluation CPU DoS, economically nonsensical outputs, ...).

These rules diverge across implementations/versions, and a subset of them
can be tightened or relaxed by node operators. This heterogeneity is
actually where the risk lies for higher protocols: your LN node's full-node
might be connected to tx-relay peers with more constraining policies than
yours, which will thus always reject your time-sensitive transactions,
silently breaking the security of your channels [0].

Of course, LN protocol devs have always been aware of these issues and
carefully reflect policy enforcement in their codebases. That said, an
important subset of these policies aren't documented or even standardized
and are thus hard to incorporate in upper-layer specs. Testing them in a
black-box approach (i.e., `testmempoolaccept`, see below) before production
doesn't work, as your broadcast has to be valid against the union of your
yet-unknown tx-relay topology; further, static checks are blurred with
dynamic ones (the feerate now is different from the one at a future
broadcast), and your transaction might be malleated by your counterparty
(like a ridiculous feerate).
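
For reference, the black-box check in question looks something like this
(sketch; the point is that a local pass tells you little about remote
policies):

    import json, subprocess

    # Ask the local node whether it would accept `raw_tx_hex` under its own
    # policy and consensus rules, without broadcasting it.
    def local_policy_check(raw_tx_hex):
        out = subprocess.run(
            ["bitcoin-cli", "testmempoolaccept", json.dumps([raw_tx_hex])],
            capture_output=True, text=True, check=True)
        return json.loads(out.stdout)[0]["allowed"]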

On the other side, AFAIK, Core developers have always acknowledged these
issues and been really conscientious when updating such policy APIs. The
concerning change with protocols like LN is the severity of the
consequences in case of incompatible changes. Previously, your basic
transaction would have been rejected by the network and your application
could have been updated before successfully rebroadcasting. Now, such
changes potentially outlawing your time-sensitive broadcasts are a direct,
measurable risk of fund loss, either triggered by mempool congestion or
exploited by a malicious counterparty.

Therefore, moving towards such a stable tx-relay/bumping API, I propose:
a) identifying and documenting the subset of policy rules on which upper
layers have to rely to enforce their security model;
b) guaranteeing backward-compatibility of those rules or, in case of a
tightening change, making sure there is ecosystem coordination with some
minimal warning period (1 release?)

Committing to a uniform policy would be a philosophical change; it would
ossify some parts of full-node implementations. Another side-effect is that
upper-layer devs would be incentivized to rely on such a stable API. In
case of a new DoS on the base layer, we might have to tighten these rules
on a short timeline at the price of breaking some offchain applications [1].
On the other side, full-node operators

[bitcoin-dev] Pinning : The Good, The Bad, The Ugly

2020-06-28 Thread Antoine Riard via bitcoin-dev
(tl;dr Ideally network mempools should be an efficient marketplace leading
to discovery of the best-feerate blockspace demand by miners. They're not,
due to current anti-DoS rule assumptions, and that's quite harmful for
shared-utxo protocols like LN)

Hello all,

Lightning's security model relies on the unilateral capability of a channel
participant to confirm transactions, like timing out an outgoing HTLC,
claiming an incoming HTLC, or punishing a revoked commitment transaction,
and thus enforcing onchain a balance negotiated offchain. This security
model actually turns the double-spend problem back into a private matter,
making it the duty of each channel participant to timely enforce its
balance against the competing interests of its counterparties. Or, laid out
otherwise: contrary to a miner violating consensus rules, base layer
peers don't care about your LN node failing to broadcast a justice
transaction before the corresponding timelock expiration (CSV delay).

Ensuring effective propagation and timely confirmation of LN transactions
is thus a safety-critical operation. Its efficiency should always be
evaluated with regard to base layer network topology, tx-relay propagation
rules, mempool behaviors, the consistent policy applied by the majority of
nodes, and ongoing blockspace demand. All these components are direct
parameters of LN security. Due to the network being public, a malicious
channel counterparty does have an incentive to tweak them to steal from you.

The pinning attacks which have been discussed for a few months are a
direct illustration of this model. Before digging into each pinning
scenario, a few properties of the base layer components should be recalled
[0].

Network mempools aren't guaranteed to be convergent; the local order of
events determines the next events accepted. I.e., Alice may observe tx X, tx
Y, tx Z and Bob may observe tx Z, tx X, tx Y. If tx Z disables RBF and tx X
tries to replace Z, Alice accepts X and Bob rejects it. This divergence may
persist until a new block, as the toy simulation below shows.
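
Toy simulation of that divergence (only the simplified BIP 125 logic
matters here):

    # Two mempools seeing the same conflicting txs in different orders.
    def accept(mempool, tx):
        conflict = next((t for t in mempool if t["spends"] == tx["spends"]), None)
        if conflict is None:
            mempool.append(tx)
        elif conflict["rbf"] and tx["fee"] > conflict["fee"]:
            mempool.remove(conflict)  # simplified BIP 125 replacement
            mempool.append(tx)

    tx_x = {"name": "X", "spends": "utxo0", "rbf": True, "fee": 2000}
    tx_z = {"name": "Z", "spends": "utxo0", "rbf": False, "fee": 1000}

    alice, bob = [], []
    accept(alice, tx_x); accept(alice, tx_z)  # Alice sees X first: keeps X
    accept(bob, tx_z); accept(bob, tx_x)      # Bob sees Z first: keeps Z
    print([t["name"] for t in alice], [t["name"] for t in bob])  # ['X'] ['Z']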

The tx-relay topology can be observed by spying nodes [1]. An attacker can
exploit this fact to partition network mempools into different subsets and
hamper propagation across them of concurrent transactions spending the same
output. If subset X observes Alice's commitment transaction and subset
Y observes Bob's commitment transaction, Alice's HTLC-timeout spending her
commitment won't propagate beyond the X-Y set boundaries. An attacker can
always win the propagation race through massive connections or by bypassing
tx-relay privacy timers.

Miners' mempools are likely identifiable: you could announce a series of
conflicting transactions to different subsets of the network and observe
the "tainted" block composition to assign a miner mempool to each subset.
I'm not aware of any research on this, but it sounds plausible to identify
every power-miner mempool, i.e., the ones likely to mine a block during the
block delay of the timelock you're looking to exploit. If you can't bid a
transaction into such miner mempools, your channel state will go stale and
your funds may be in danger.

### Scenario 1) HTLC-Preimage Pinning

As Matt previously explained in his original mail on RBF-pinning, a
malicious counterparty has an interest to pin a low-feerate HTLC-preimage
transaction in some network mempools and thus preventing a honest
HTLC-timeout to confirm. For details, refer to Optech newsletter [2].

This scenario doesn't bear any risk for the attacker, is easy to execute,
and has a double-digit success rate. It doesn't assume network topology
manipulation, mempool partitions, or LN-node-to-full-node mapping [3]. That
said, this should be solved by implementing and deploying anchor outputs,
which effectively allow a party to unilaterally bump the feerate of its
HTLC-timeout transactions.

### The Anchor Output Proposal

The anchor outputs proposal is a current spec effort implemented by the LN
dev community; it introduces the ability to _unilaterally_ and
_dynamically_ bump the feerate of any commitment transaction. It also opens
the way to bumping local 2nd-stage transactions.

Beyond solving scenario 1), it makes an LN node safe with regard to
unexpected mempool congestion. If your commitment transaction is stuck
in network mempools, you can bump its feerate by attaching a CPFP on the new
`to_local` anchor. If the remote commitment gets stuck in network mempools,
you're able to bump it by attaching a CPFP on the `to_remote` anchor. This
should keep you safe against an unresponsive or lazy counterparty in case
of onchain funds to claim.

IMO, it comes with a trade-off, as it introduces a mapping oracle, i.e., a
linking vector between an LN node and its full-node. In this case, a spying
node may establish a dummy, low-value channel with a probed LN node, break
it by broadcasting thousands of different versions of the (revoked)
commitment, and observe which one a CPFP is first broadcast for on the p2p
layer. Obviously, you can mitigate it by not chasing after low-value HTLCs,
but that is a

Re: [bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-12 Thread Antoine Riard via bitcoin-dev
Hi ZmnSCPxj,

> I have not studied the proposal in close detail yet, but anyway, my main
takeaway roughly is:
>
> * The core of CoinPool is some kind of multiparticipant (N > 2) offchain
update mechanism (Decker-Wattenhofer or Decker-Russell-Osuntokun).
>   * The output at each state of the update mechanism is some kind of
splitting construction (which I have not studied in detail).
>   * At each update of the state, all participants must sign off on the
new state.

Overall, that's a really accurate description. I would add that you can
embed the funding outpoint of any offchain protocol in the splitting
construction, modulo some timelock shenanigans.

> In order to hide transfers from the elected WabiSabi server, participants
can maintain two coins in every state, and move coins randomly across the
two coins they own at each state update, in order to hide "real" transfers
from the elected server.

Yes, I'm quite sure you can reuse WabiSabi as a communication channel
between participants, assuming you support tapscript and merkle branch
transports, and the server builds a tree. Generally, we tried to keep the
design as flexible as we could to reuse privacy tools.

> Indeed, from what I can understand, in order to properly set up the
splitting transactions in the first place, at each state every participant
needs to know how much each other participant actually owns in the CoinPool
at that point in time.

Yes, that's part of future research: better defining the *in-pool*
observer. Sadly, right now, even if you use a masking construction inside,
it's quite easy to trace leaves by value weight. Of course, you can enforce
equal-value leaves, as for a regular onchain CoinJoin. I think that comes
with a higher onchain cost in case of pool breakage.

> That way, output addresses can be to fresh pseudonyms of the participant,
removing all linkages of participant to amount they own, and each
participant can maintain multiple outputs per state for their own purposes
and to mildly obscure exactly how much they own in total.

That's right, an in-pool observer may learn a link between an exit and an
onchain withdraw. There is a future optimization: if you can swap your
withdraw with an already-onchain output, you can thereby break such
heuristics.

> We can do this by using `SIGHASH_ANYPREVOUT` to force whoever performs a
unilateral close of the CoinPool to pay the onchain fees involved, so that
it would have to be a good reason indeed to perform a unilateral close.

Absolutely. For the fee structure, as the withdraw output is at the
discretion of the user, I was thinking of some CPFP. There may be a better
solution; I haven't spent that much time on the exact, adequate,
incentive-aligned mechanism beyond "withdraw-must-pay-its-fees".

Thanks for the high-quality review, as usual ;)

Antoine

On Fri, Jun 12, 2020 at 04:39, ZmnSCPxj wrote:

> Good morning Antoine and Gleb,
>
> I have not studied the proposal in close detail yet, but anyway, my main
> takeaway roughly is:
>
> * The core of CoinPool is some kind of multiparticipant (N > 2) offchain
> update mechanism (Decker-Wattenhofer or Decker-Russell-Osuntokun).
>   * The output at each state of the update mechanism is some kind of
> splitting construction (which I have not studied in detail).
>   * At each update of the state, all participants must sign off on the new
> state.
>
> It seems to me that it would be possible to use a [WabiSabi protocol](
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017969.html)
> during negotiation of a new state.
>
> Now, WabiSabi is a client-server protocol.
> As all participants in the CoinPool are needed in order to ratify each new
> state anyway, they can simply elect one of their number by drawing lots, to
> act as server for a particular state update.
>
> Then the participants can operate as WabiSabi clients.
> Each participant registers the outputs they currently own in the current
> state, getting credentials that sum up to the correct value.
> Then, during the WabiSabi run, they can exchange credentials among the
> participants in order to perform value transfers inside the WabiSabi
> construction.
> Then, at output registration, they register new outputs to put in the next
> state of the CoinPool.
>
> In order to hide transfers from the elected WabiSabi server, participants
> can maintain two coins in every state, and move coins randomly across the
> two coins they own at each state update, in order to hide "real" transfers
> from the elected server.
>
> Then, after output registration, the participants ratify the new state by
> signing off on the new state and revoking the previous state, using the
> update mechanism.
>
> Of course, we should note that one desired feature for CoinPool in the
> original proposal is that a participant can exit, and the CoinPool would
> still remain valid, but only for the remaining participants.
>
> This is arguably a mild privacy leak: every other participant now knows
> how much that particular participant 

Re: [bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-12 Thread Antoine Riard via bitcoin-dev
Hi Jeremy,

For the record, I didn't know which of Greg or you was at the origin of
payment pools. Thanks for your pioneering work here; obviously this draws
inspiration from OP_CTV use cases and Channel Factories work, even if we
picked different assumptions and tried to address another set of issues.

With regards to scalability, I hit it on my own while investigating
covenanted Bitcoin contracts for international trade. I mentioned the
any-order issue in such complex multi-party contracts in a talk last summer
(https://github.com/ariard/talk-slides/blob/master/advanced-contracts.pdf).

> All of these channels can be constructed and set up non-interatively using
> CTV, and updated interactively. By default payments can happen with
minimal
> coordination of parties by standard lightning channel updates at the leaf
> nodes, and channels can be rebalanced at higher layers with more
> participation.

Side review note on OP_CTV: I think it would be great to define
non-interactivity better, namely distinguishing at least 3 phases:
establishment, operation, closing.

Even OP_CTV protocols assume interactivity at establishment, at least to 1)
learn payees' pubkey endpoints (and internal leaf pubkeys if you want
updates at operation) and 2) validate transaction tree correctness between
participants.

At operation, it depends on whether participants want to dynamically
rebalance value across channels or not. If you desire dynamic rebalancing,
assume internal leaf scriptpubkeys are (multisig-all OR OP_CTV'ed
merkle_tree). Using OP_CTV saves message rounds for every expression that
stays constant across tree updates.

At closing, it depends again on whether participants have committed update
keys or not. With dynamic updates, you can prune the whole tree and just
commit final balances onchain, either with an O(N) fan-out transaction (N
outputs) or an O(log(N))-depth congestion tree (N transactions).
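For intuition, a toy comparison (my own sketch, not from any spec) of the
two closing options: the fan-out commits everything in one big transaction,
while a binary congestion tree lets a single participant exit by publishing
only the transactions on its root-to-leaf path:

    import math

    def fanout(n: int):
        # One transaction carrying N outputs; a single exit still
        # publishes the whole thing.
        return {"txs_onchain": 1, "outputs": n}

    def tree_single_exit(n: int):
        # Binary tree over N leaves: one tx per level on the path from
        # the root down to the exiting participant's leaf.
        return {"txs_onchain": math.ceil(math.log2(n))}

    for n in (8, 64, 1024):
        print(n, fanout(n), tree_single_exit(n))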

So I would say the originality of a hashchain covenant like OP_CTV is to
provide onchain *immutability* (unforgeability?) of the offchain transaction
tree and thus provide instant finality to payees. You can get the same
semantics with an off-chain covenant, i.e. a pre-signed set of transactions,
at the cost of more communication rounds and a performance hit.

That said, IMO, immutability comes with a security trade-off: if any payout
key committed in your OP_CTV tree gets compromised, your funds are at stake,
and you can't update the tree at the root anymore to rotate keys. I think
this should be weighed by anyone designing covenant protocols, especially
vaults.

> I don't think the following requirement: "A
> CoinPool must satisfy the following *non-interactive any-order withdrawal*
> property: at any point in time and any possible sequence of previous
> CoinPool events, a participant should be able to move their funds from the
> CoinPool to any address the participant wants without cooperation with
> other CoinPool members." is desirable in O(1) space.

With the current design (Pool_tx + Split_tx) it's O(2) in space. Pool_tx is
similar to a commitment tx and thus enables off-chain novation of the pool
distribution.

> Let's be favorable to Accumulators and assume O(1), but keep in mind
constant may
> be somewhat large/operations might be expensive in validation for updates.

Using a Merkle tree as an accumulator should be constant-size in space, but
it likely has to be O(log(N)) in computation (N set elements). This
computational overhead should be accounted for in accumulator sigops to
avoid free-riding on network validation resources, but I think it's a better
trade-off for minimizing chain footprint.
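As a reminder of where the O(log(N)) comes from, a minimal sketch of Merkle
inclusion-proof verification: one hash per tree level, i.e. log2(N)
operations for a set of N elements:

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
        # `proof` is a list of (sibling_hash, sibling_is_left) pairs,
        # one per level of the tree.
        acc = h(leaf)
        for sibling, sibling_is_left in proof:
            acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
        return acc == root

    # Tiny 2-element set {a, b}:
    a, b = h(b"a"), h(b"b")
    root = h(a + b)
    assert verify_inclusion(b"a", [(b, False)], root)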

> So in this context, CTV Pool has a clear benefit. The last recipient can
> always clear in Log(N) time whereas in the accumulator pool, the last
> recipient has to wait much much longer. There's no asymptotic difference
in
> Tx Size, but I suspect that CTV is at least as good or cheaper since it's
> just one tx hash and doesn't depend on implementation.

Yes, I agree the CTV pool performs better in the worst-case scenario. In my
opinion, what we should really look at is the probability of withdrawal
scenarios. I see 2 failure cases:
* a pool participant being offline, thus halting the pool
* a pool participant with an external protocol requirement to fulfill, like
an HTLC to time out onchain

With regards to 1), we assume that watchtower infrastructure is likely to
become ubiquitous in the future (if you want a secure LN experience), so
user uptime should be near 100%. Of course, it's a new architecture which
comes with trade-offs, but it is interesting to explore.

With regards to 2), as of today the channel failure rate (e.g. unilateral
closes) is still quite significant (30% IIRC), so it plays in favor of the
OP_CTV pool, but in the future I expect a single-digit rate, therefore
making CoinPool far more competitive. Do we envision protocols more
time-sensitive than LN in the future (atomic swaps...)? Hard to gauge.

Do you see other ways to refine the model, like integrating an out-of-pool
liquidity needs rate?

Note, I think 

[bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-11 Thread Antoine Riard via bitcoin-dev
Hi list,

We (Gleb Naumenko + I) think that a wide range of second-layer protocols
(LN, vaults, inheritance, etc) will be used by average Bitcoin users. We
are interested in finding and addressing the privacy issues coming from the
unique fingerprints these protocols bring.

More specifically, we are interested in answering the following questions:
1. How bad are privacy leaks from the on-chain transactions of second-layer
protocols, and how much is leaked via protocol-specific metadata (LN domain
names, watchtowers, ...)?
2. How can we establish a list of Bitcoin fingerprints and their severity to
inform protocol designers and clarify threat models?
3. What kind of sophisticated heuristics may spies use in the future?
4. How can we mitigate privacy leaks? Should each protocol adopt a common
toolbox (scriptless scripts, taproot, ...) in its own way, or should we
design a confidential layer to wrap around all of them?
5. How can we make the solution usable (cheaper, easier to integrate, safer)
on a daily basis?

We suggest CoinPool, a generic payment pool [0], as a solution to those
problems. Although the design we propose is somewhat of a scaling solution,
we won't focus on this aspect. This work is rather an exploration of *how a
pool construction could serve as a TLS for Bitcoin, enhancing both on-chain
and off-chain privacy*.

### Motivation: cross-protocols privacy

It has always been a challenge to make the on-chain UTXO graph more
private. We all know the issues with cleartext amounts, the linkability of
inputs/outputs, and other metadata. Combined with p2p-level spying
(transaction-to-IP mapping) or other patterns leading to real-world
identities, this enables serious spying.

Protocols on top of Bitcoin (LN, vaults [1], complicated spending conditions
based on Miniscript, DLCs [2]) are even more vulnerable to spying because:
- each of them brings a new unique fingerprint/metadata [3]
- known spying techniques against second layers are currently limited to
trivial heuristics, but we can't assume spies will always be this
unsophisticated

There is already a wiki list [4] attempting to cover all issues like that,
although maintaining it would be challenging considering privacy is a
moving target.

Let's consider this example: Alice is a well-known LN merchant with a node
tied to a domain name. She always directs the output of channel closing to
her vault address. If she has another vault address on-chain with the same
unique unlocking script (like a CSV timelock with a specific delta) this
can be leveraged to cluster her transactions. And since one of her
addresses is tied to a domain name, all her funds can now be linked to a
real-world identity.
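To make the fingerprint concrete, here is a toy version of the heuristic a
spy could run (purely illustrative, with made-up data): cluster outputs by
their unlocking-script template, then label the whole cluster once any
member is tied to an identity:

    from collections import defaultdict

    # Hypothetical scan results: (address, script_template, known_identity)
    outputs = [
        ("addr1", "csv-144-vault", "alice-merchant.example"),  # tied to a domain
        ("addr2", "csv-144-vault", None),
        ("addr3", "p2wpkh", None),
    ]

    clusters = defaultdict(list)
    for addr, template, identity in outputs:
        clusters[template].append((addr, identity))

    for template, members in clusters.items():
        # A single identity leak labels every member of the cluster.
        identity = next((i for _, i in members if i), None)
        if identity:
            print(template, "->", identity, [a for a, _ in members])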

In theory, one may use CoinJoin-like solutions to mask cross-protocol
on-chain transfers. Unfortunately, robust designs like CoinSwap depend on
timelocking coins, extensive use of the on-chain space, and paying fees to
provide sufficient privacy, as we explain further. These properties imply
we can't expect users to be using strong CoinSwaps by default.

That's why instead of specialized high-latency, high-chain-use
CoinJoin-style protocols, we propose CoinPool: a low-latency, generic
off-chain protocol meant to be wrapped around any other protocol. CoinPool
is based on shared UTXO ownership. It may reasonably improve on-chain
privacy while avoiding latency and locked-liquidity issues. CoinPool may
also reduce on-chain use (and thus help to scale Bitcoin) if participants
cooperate sufficiently.

We do believe that CoinSwap and other CoinJoins are of interest, but we
have to consider the trade-offs and choose the best tool for the job to make
privacy usable with regards to user resources. We will compare CoinPool to
CoinSwap in more detail later in this write-up.

### Extra-motivation: on-chain scalability

Even though it's not the main focus of this proposal, we also want to
mention that since CoinPool is a payment pool, it helps with on-chain
scalability. More specifically:
1. Shared UTXO ownership allows representing many outputs as one, reducing
the size of the UTXO set.
2. The CoinPool design enables off-chain transfers within the pool, helping
to save the block space by committing fewer transactions on-chain.
3. CoinPool provides decent support for batching activities from different
users, also helping to have fewer individual transactions on-chain.

Since CoinPool provides scalability benefits, users will even be
incentivized to join CoinPools due to the conservative chain-resource usage,
and thus enjoy privacy as a side-effect.

### CoinPool design

A CoinPool must satisfy the following *non-interactive any-order
withdrawal* property: at any point in time and any possible sequence of
previous CoinPool events, a participant should be able to move their funds
from the CoinPool to any address the participant wants without cooperation
with other CoinPool members.
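One minimal way to model this property (a sketch under my own simplifying
assumptions, not the actual construction): at every state, each participant
holds a pre-signed exit transaction valid against the current pool UTXO, so
withdrawal needs no cooperation at the time it happens:

    # Toy model: a pool state is one UTXO plus, per participant, an exit
    # pre-signed by everyone when the state was created.
    class PoolState:
        def __init__(self, utxo: str, balances: dict):
            self.utxo = utxo
            self.exits = {who: f"withdraw({who},{amt},from={utxo})"
                          for who, amt in balances.items()}

        def withdraw(self, who: str) -> str:
            # Non-interactive: no other member's signature needed now.
            return self.exits[who]

    state = PoolState("pool_utxo_0", {"alice": 5, "bob": 3, "caroll": 2})
    print(state.withdraw("bob"))  # valid whatever pool events came before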

The state of a CoinPool is represented by one on-chain UTXO (a funding
multisig of all pool participants) and a set of 

Re: [bitcoin-dev] Time-dilation Attacks on the Lightning Network

2020-06-11 Thread Antoine Riard via bitcoin-dev
Hi ZmnSCPxj

Well, your deeclipser is already WIP ;)

See my AltNet+Watchdog proposals in Core:
https://github.com/bitcoin/bitcoin/pull/18987
https://github.com/bitcoin/bitcoin/pull/18988

It almost covers what you mention: a driver framework to plug in
alternative transport protocols: radio, DNS, even LN Noise, Tor's
Snowflake... The proposal is a PoC with a multi-threaded process, but yes, I
want the production design to be multi-process for the reasons you
mentioned. Drivers should be developed out-of-tree but with an interface to
plug them in smoothly (tm).

The proposal is more generic than pure LN; e.g. some privacy-concerned
users may want to broadcast their transactions over radio by default. But
for LN support it should a) detect network/block-issuance anomalies, b)
dynamically react by closing channels, c) fetch headers/blocks through
redundant communication channels, and d) provide emergency transaction
broadcast if your time-sensitive transactions are censored.
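As a rough illustration of point a), here is the kind of block-issuance
anomaly check such a watchdog could run (the thresholds are made up; the
actual detection logic in the PRs above is more involved):

    import time

    EXPECTED_BLOCK_INTERVAL = 600  # seconds, Bitcoin average
    ALARM_THRESHOLD = 6 * EXPECTED_BLOCK_INTERVAL  # ~1h without a block

    def check_issuance(last_block_time: float, now: float) -> str:
        # Poisson block arrival makes a very long gap implausible, so a
        # stall this large hints at an eclipse or a network partition.
        if now - last_block_time > ALARM_THRESHOLD:
            return "anomaly: escalate to redundant transports / close channels"
        return "ok"

    print(check_issuance(time.time() - 4000, time.time()))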

It's long-term work, so be patient, but getting opt-in support in Core
would make it far easier for any LN routing/vaulting node to deploy it. In
the meantime, you can have multiple nodes on different infrastructures
serving as a backend for your LN node.

Bonus: if LN nodes are incentivized to deploy such strong anti-eclipse
measures to mitigate time-dilation, it would benefit base-layer p2p security
network-wise. In case of a network partition, your node with link-layer
redundancy will keep its connected peers on the same side of the partition
in sync, even if they don't deploy anything.

I'm sure you have improvements to suggest!

Best,
Antoine


On Wed, Jun 10, 2020 at 19:35, ZmnSCPxj wrote:

> Good morning Antoine and Gleb,
>
> One thing I have been idly thinking about would be to have a *separate*
> software daemon that performs de-eclipsing for your Bitcoin fullnode.
>
> For example, you could run this deeclipser on the same hardware as your
> Bitcoin fullnode, and have the deeclipser bind to port 8334.
> Then you set your Bitcoin fullnode with `addnode=localhost:8334` in your
> `bitcoind.conf`.
>
> Your Bitcoin fullnode would then connect to the deeclipser using normal
> P2P protocol.
>
> The deeclipser would periodically, every five minutes or so, check the
> latest headers known by your fullnode, via the P2P protocol connection your
> fullnode makes.
> Then it would attempt to discover any blocks with greater blockheight.
>
> The reason why we have a separate deeclipser process is so that the
> deeclipser can use a plugin system, and isolate the plugins from the main
> fullnode software.
> For example, the deeclipser could query a number of plugins:
>
> * One plugin could just try connecting to some random node, in the hopes
> of getting a new connection that is not eclipsed.
> * Another plugin could try polling known blockchain explorers and using
> their APIs over HTTPS, possibly over Tor as well.
> * Another plugin could try connecting to known Electrum servers.
> * New plugins can be developed for new mitigations, such as sending
> headers over DNS or blocks over mesh or etc.
>
> Then if any plugin discovers a block later than that known by your
> fullnode, the deeclipser can send an unsolicited `block` or `header`
> message to your fullnode to update it.
>
> The advantage of using a plugin system is that it becomes easier to
> prototype, deploy, and maybe even test new de-eclipsing mitigations.
>
> At the same time, by running a separate daemon from the fullnode, we
> provide some amount of process isolation in case some problem with the
> plugin system exists.
> The deeclipser could be run by a completely different user, for example,
> and you might even run multiple deeclipser daemons in the same hardware,
> with different non-overlapping plugins, so that an exploit of one plugin
> will only bring down one deeclipser, with other deeclipser daemons
> remaining functional and still protecting your fullnode.
>
> Finally, by using the P2P protocol, the fullnode you run could be a
> non-Bitcoin-Core fullnode, such as btcd or rust-bitcoin or whatever other
> fullnode implementations exist, assuming you actually want to use them for
> some reason.
>
> What do you think?
>
> Regards,
> ZmnSCPxj
>
>


Re: [bitcoin-dev] Time-dilation Attacks on the Lightning Network

2020-06-07 Thread Antoine Riard via bitcoin-dev
Hi ZmnSCPxj,

> (Of note as well, is that the onchain contract provided by such services
is the same in spirit as those instantiated in channels of the Lightning
Network, thus the same attack schema works on the onchain side.)

If your onchain contract uses a timelock and has concurrent transactions
arbitrated by it, it's subject to time-dilation attacks. So yes, submarine
swaps, or any kind of atomic swap, are concerned. We note this in the
discussion. But you're right about the attack cost: you don't need a channel
to these services, which is also concerning for their attack surface.

> Since the issue here is that eclipsing of Bitcoin nodes is risky, it
strikes me that a mitigation would be to run your Bitcoin fullnode on
clearnet while running your Lightning node over Tor

We clearly mention the risk of running a Bitcoin node over Tor; where do we
recommend running a LN node over Tor?

>   And this seems to tie with what you propose: that the LN node should
use a different view-fullnode from the broadcast-fullnode.

Yes, in Countermeasures - Link layer diversity, especially if it's easy for
an attacker to provoke a transaction broadcast by buying a channel to the LN
node.

> A mitigation to this would be to run a background process which sleeps
for 20 minutes, then does `bitcoin-cli addnode ${BITCOINNODE} onetry`.

Yeah, instead of having every node operator run their own hacky scripts,
without them being bulletproof on detection, I'm working on getting such
mitigations directly into Core, easily deployable for everyone.
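For reference, the hacky-script version being replaced looks roughly like
this (a sketch; the hard part, selecting candidate peers the attacker does
not control, is left as a stub):

    import random
    import subprocess
    import time

    CANDIDATE_PEERS = ["peer1.example:8333", "peer2.example:8333"]  # stub

    while True:
        # Every 20 minutes, try one outbound connection to a fresh peer,
        # hoping to punch out of a potential eclipse.
        peer = random.choice(CANDIDATE_PEERS)
        subprocess.run(["bitcoin-cli", "addnode", peer, "onetry"],
                       check=False)
        time.sleep(20 * 60)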

> The victim *could* instead check that the absolute timelocks seem very
far in the future relative to its own view of the current blockheight.

I think you're right, it's really dependent on the CLTV deltas deployed
along the path and the time-dilation offset. The alternative you're
proposing is a good one, but you shouldn't know where you are in the path,
and the max CLTV is 2048 blocks IIRC.
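A sketch of the check being discussed (my own toy version; the safety
margin is a made-up number, and as noted it cannot be pushed arbitrarily
high, since the whole route shares the ~2048-block CLTV budget):

    MAX_PATH_CLTV = 2048   # max CLTV budget across a route, per above
    SAFETY_MARGIN = 288    # made up, roughly two days of blocks

    def accept_htlc(cltv_expiry: int, local_height: int) -> bool:
        # Require the absolute timelock to sit comfortably far in the
        # future of *our* chain view: if our height has been dilated
        # backwards, this margin buys time to notice and react.
        delta = cltv_expiry - local_height
        return SAFETY_MARGIN <= delta <= MAX_PATH_CLTV

    print(accept_htlc(cltv_expiry=700300, local_height=700000))  # True
    print(accept_htlc(cltv_expiry=700040, local_height=700000))  # False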

Thanks for your reading and review,

Cheers,
Antoine

On Wed, Jun 3, 2020 at 22:58, ZmnSCPxj via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Good morning Gleb and Antoine,
>
> This is good research, thank you for your work.
>
> > **Targeting Per-Hop Packet Delay** is based on routing via the victim,
> and the victim should have at least two channels with the attacker.
>
> The existence of offchain-to-onchain swap services means that the attacker
> needs only build one channel to the victim for this attack to work.
> Rather than route to themselves, the attacker routes to a convenient
> service providing such a swap service, and receives the stolen funds
> onchain, with no need even for an incoming channel from a different node.
> (Of note as well, is that the onchain contract provided by such services
> is the same in spirit as those instantiated in channels of the Lightning
> Network, thus the same attack schema works on the onchain side.)
>
> Indeed, the attack can be mounted on such a service directly.
>
> Even without such a service, the incoming channel need not be directly
> connected to the victim.
>
>
> > [Tor is tricky](https://arxiv.org/abs/1410.6079) too
>
> Since the issue here is that eclipsing of Bitcoin nodes is risky, it
> strikes me that a mitigation would be to run your Bitcoin fullnode on
> clearnet while running your Lightning node over Tor.
> Eclipsing the Lightning node (but not the Bitcoin fullnode it depends on)
> "only" loses you the ability to pay, receive, or route (and thereby earn
> forwarding fees), but as long as your blockchain view is clear, it should
> be fine.
>
> Of course, the Lightning node could still be correlated with the Bitcoin
> node when transactions are broadcast with the attached Bitcoin node (as
> noted in the paper).
> Instead the Lightning node should probably connect, over Tor, to some
> random Bitcoin fullnodes / Electrum servers and broadcast txes to them.
>
> And this seems to tie with what you propose: that the LN node should use a
> different view-fullnode from the broadcast-fullnode.
>
>
> > if a node doesn’t observe a block within the last 30 minutes, it
> attempts to make a new random connection to someone in the network.
>
> A mitigation to this would be to run a background process which sleeps for
> 20 minutes, then does `bitcoin-cli addnode ${BITCOINNODE} onetry`.
> It might want to `disconnectnode` any previous node it attempted to
> connect to.
>
> However I note that the help for `addnode` contains the text "though such
> peers will not be synced from", which confuses me, since it also refers to
> the `-connect` command line option, and `-connect` means you only connect
> out to the specific nodes, so if those are not synced from huh?
>
> And of course the interesting part is "how do we get a `${BITCOINNODE}`
> that we think is not part of the eclipsing attacker?"
>
>
> > If a Lightning node is behind in its Bitcoin blockchain view, but
> Lightning payments between honest nodes are still flowing through it, this
> node will 

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-17 Thread Antoine Riard via bitcoin-dev
> * At the same time, it retains your-keys-your-coins noncustodiality,
because every update of a Lightning channel requires your keys to sign off
on it.

Yes, I agree. I can foresee an easier step where managing a low-value
channel and getting familiar with smooth key management may be a first step
before running a full-node and getting a more full-fledged key management
solution.

> It may even be possible, that the Lightning future with massive SPV might
end up with more economic weight in SPV nodes, than in the world without
Lightning and dependent on centralized custodial services to scale.

Even evaluating economic weight in Lightning is hard: both parties have
their own chain view, and if you assume a hub-and-spoke topology, it's
likely leaf nodes are going to be SPV and internal nodes full-nodes?

> Money makes the world go round, so such backup servers that are
publicly-facing rather than privately-owned should be somehow incentivized
to do so, or else they would not exist in the first place.

I was thinking about the following workflow: Alice downloads her New Shiny
LN-wallet, she is asked to back up the seed, she is asked to pick backup
node(s) among her friends, relatives, or business partners, and is NOT
provided any automatic hint; she registers the backup node addresses, and
maybe even does an out-of-band key exchange with the full-node operators.
Therefore you may avoid centralization by not having such publicly-facing
servers. Of course, Alice can still crawl the web and be lured into picking
malicious public servers, but if she is sternly warned not to do so, that
may be enough.

So it would be a combination of UX + user education + fallback security
mechanisms to avoid an economy hijack. That may be a better solution than
PoW-only SPV. We have an open network, so you can't prevent someone from
running such a type of client, but at least if they have to do so, you can
provide them with a better option?

Antoine




On Thu, May 14, 2020 at 00:02, ZmnSCPxj wrote:

> Good morning Antoine,
>
>
> > While approaching this question, I think you should consider economic
> weight of nodes in evaluating miner consensus-hijack success. Even if you
> expect a disproportionate ratio of full-nodes-vs-SPV, they may not have the
> same  economic weight at all, therefore even if miners are able to lure a
> majority of SPV clients they may not be able to stir economic nodes. SPV
> clients users will now have an incentive to cancel their hijacked history
> to stay on the most economic meaningful chain. And it's already assumed,
> that if you run a bitcoin business or LN routing node, you do want to run
> your own full-node.
>
> One hope I have for Lightning is that it will replace centralized
> custodial services, because:
>
> * Lightning gains some of the scalability advantage of centralized
> custodial services, because you can now transfer to any Lightning client
> without touching the blockchain, for much reduced transfer fees.
> * At the same time, it retains your-keys-your-coins noncustodiality,
> because every update of a Lightning channel requires your keys to sign off
> on it.
>
> If most Lightning clients are SPV, then if we compare these two worlds:
>
> * There are a few highly-important centralized custodial services with
> significant economic weight running fullnodes (i.e. now).
> * There are no highly-important centralized custodial services, and most
> everyone uses Lightning, but with SPV (i.e. a Lightning future).
>
> Then the distribution of economic weight would be different between these
> two worlds.
> It may even be possible, that the Lightning future with massive SPV might
> end up with more economic weight in SPV nodes, than in the world without
> Lightning and dependent on centralized custodial services to scale.
>
>
> It is also entirely possible that custodial services for Lightning will
> arise anyway and my hope is already dashed, come on universe, work harder
> will you, would you really disappoint some randomly-generated Internet
> person like that.
>
>
> >
> > I agree it may be hard to evaluate economic-weight-to-chain-backend
> segments, specially with offchain you disentangle an onchain output value
> from its real payment traffic. To strengthen SPV, you may implement forks
> detection and fallback to some backup node(s) which would serve as an
> authoritative source to arbiter between branches. Such backup node(s) must
> be picked up manually at client initialization, before any risk of conflict
> to avoid Reddit-style of hijack during contentious period or other massive
> social engineering. You don't want autopilot-style of recommendations for
> picking up a backup nodes and avoid cenralization of backups, but somehow a
> uniform distribution. A backup node may be a private one, it won't serve
> you any data beyond headers, and therefore you preserve public nodes
> bandwidth, which IMO is the real bottleneck. I concede it won't work well
> if you have a ratio of 1000-SPV for 1-full-node and 

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-13 Thread Antoine Riard via bitcoin-dev
Hi Chris,

While approaching this question, I think you should consider the economic
weight of nodes in evaluating miner consensus-hijack success. Even if you
expect a disproportionate full-nodes-vs-SPV ratio, they may not have the
same economic weight at all; therefore, even if miners are able to lure a
majority of SPV clients, they may not be able to sway economic nodes. SPV
client users will then have an incentive to cancel their hijacked history
to stay on the most economically meaningful chain. And it's already assumed
that if you run a bitcoin business or LN routing node, you do want to run
your own full-node.

I agree it may be hard to evaluate economic-weight-to-chain-backend
segments, especially since with offchain you disentangle an onchain
output's value from its real payment traffic. To strengthen SPV, you may
implement fork detection and fall back to some backup node(s) which would
serve as an authoritative source to arbitrate between branches. Such backup
node(s) must be picked manually at client initialization, before any risk
of conflict, to avoid a Reddit-style hijack during a contentious period or
other massive social engineering. You don't want autopilot-style
recommendations for picking backup nodes, and you want to avoid
centralization of backups, but somehow get a uniform distribution. A backup
node may be a private one; it won't serve you any data beyond headers, and
therefore you preserve public nodes' bandwidth, which IMO is the real
bottleneck. I concede it won't work well if you have a ratio of 1000 SPV
clients per full-node and people are not effectively able to pick a backup
from their social environment.

What do you think about this model?

Cheers,

Antoine

On Tue, May 12, 2020 at 17:06, Chris Belcher wrote:

> On 05/05/2020 16:16, Lloyd Fournier via bitcoin-dev wrote:
> > On Tue, May 5, 2020 at 9:01 PM Luke Dashjr via bitcoin-dev <
> > bitcoin-dev@lists.linuxfoundation.org> wrote:
> >
> >> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
> >>> Trust-minimization of Bitcoin security model has always relied first
> and
> >>> above on running a full-node. This current paradigm may be shifted by
> LN
> >>> where fast, affordable, confidential, censorship-resistant payment
> >> services
> >>> may attract a lot of adoption without users running a full-node.
> >>
> >> No, it cannot be shifted. This would compromise Bitcoin itself, which
> for
> >> security depends on the assumption that a supermajority of the economy
> is
> >> verifying their incoming transactions using their own full node.
> >>
> >
> > Hi Luke,
> >
> > I have heard this claim made several times but have never understood the
> > argument behind it. The question I always have is: If I get scammed by
> not
> > verifying my incoming transactions properly how can this affect anyone
> > else? It's very unintuative.  I've been scammed several times in my life
> in
> > fiat currency transactions but as far as I could tell it never negatively
> > affected the currency overall!
> >
> > The links you point and from what I've seen you say before refer to
> "miner
> > control" as the culprit. My only thought is that this is because a light
> > client could follow a dishonest majority of hash power chain. But this
> just
> > brings me back to the question. If, instead of BTC, I get a payment in
> some
> > miner scamcoin on their dishonest fork (but I think it's BTC because I'm
> > running a light client) that still seems to only to damage me. Where does
> > the side effect onto others on the network come from?
> >
> > Cheers,
> >
> > LL
> >
>
> Hello Lloyd,
>
> The problem comes when a large part of the ecosystem gets scammed at
> once, which is how such an attack would happen in practice.
>
> For example, consider if bitcoin had 10,000 users. 10 of them use a full
> node wallet while the other 9990 use an SPV wallet. If a miner attacked
> the system by printing infinite bitcoins and spending coins without a
> valid signature, then the 9990 SPV wallets would accept those fake coins
> as payment, and trade the coins amongst themselves. After a time those
> coins would likely be the ancestors of most active coins in the
> 9990-SPV-wallet ecosystem. Bitcoin would split into two currencies:
> full-node-coin and SPV-coin.
>
> Now the fraud miners may become well known, perhaps being published on
> bitcoin news portals, but the 9990-SPV-wallet ecosystem has a strong
> incentive to be against any rollback. Their recent transactions would
> disappear and they'd lose money. They would argue that they've already
> been using the coin for a while, and it works perfectly fine, and

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-09 Thread Antoine Riard via bitcoin-dev
Hi Christopher,

Thanks for Blockchain Commons and Learning Bitcoin from the Command Line!

> If there are people interested in coordinating some proposals on how to
defining different sets of wallet functionality, Blockchain Commons would
be interested in hosting that collaboration. This could start as just being
a transparent shim between bitcoin-core & remote RPC, but later could
inform proposals for the future of the core wallet functionality as it gets
refactored.

Yes, generally the wallet refactoring in Core is making good progress [0].
I'm pretty sure feedback and proposals on future changes with regards to
usability would be greatly appreciated.

Maybe you can bring these up during an IRC meeting?

Antoine

[0] See https://github.com/bitcoin/bitcoin/pull/16528 or
https://github.com/bitcoin/bitcoin/pull/16426

On Fri, May 8, 2020 at 17:31, Christopher Allen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, May 8, 2020 at 2:00 PM Keagan McClelland via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Perhaps I wasn't explicit in my previous note but what I mean is that
>> there seems to be a demand for something *in between* a peer interface,
>> and an owner interface. I have little opinion as to whether this belongs in
>> core or not, I think there are much more experienced folks who can weight
>> in on that, but without something like this, you cannot limit your exposure
>> for serving something like bip157 filters without removing your own ability
>> to make use of some of those same services.
>>
>
> Our FullyNoded2 multisig wallet on iOS & Mac, communicates with your own
> personal node over RPC, securing the connection using Tor over a hidden
> onion service and two-way client authentication using a v3 Tor
> Authentication key: https://github.com/BlockchainCommons/FullyNoded-2
>
> It many ways the app (and its predecessor FullyNoded1) is an interface
> between a personal full node and a user.
>
> However, we do wish that the full RPC functionality was not exposed in
> bitcoin-core. I’d love to see a cryptographic capability mechanism such
> that the remote wallet could only m ask the node functions that it needs,
> and allow escalation for other rarer services it needs with addition
> authorization.
>
> This capability mechanism feature set should go both ways, to a minimum
> subset needed for being a watch-only transaction verification tool, all the
> way to things RPC can’t do like deleting a wallet and changing bitcoin.conf
> parameters and rebooting, without requiring full ssh access to the server
> running the node.
>
> If there are people interested in coordinating some proposals on how to
> defining different sets of wallet functionality, Blockchain Commons would
> be interested in hosting that collaboration. This could start as just being
> a transparent shim between bitcoin-core & remote RPC, but later could
> inform proposals for the future of the core wallet functionality as it gets
> refactored.
>
> — Christopher Allen


Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-09 Thread Antoine Riard via bitcoin-dev
Hi Igor,

Thanks for sharing what is technically possible for a full-node on a
phone, especially with regards to lower-grade devices.

I do see 2 limitations for sleeping nodes:
- a Lightning-specific one, i.e. you need to process block data in
real-time in case of an incoming HTLC you need to claim onchain, or an HTLC
timeout. There are a bunch of timelock implications in LN, with regards to
CSV, CLTV delta, incoming policy, outgoing policy, ..., and you can't
really afford to be late without losing a payment. I don't see timelocks
being increased, as that would hinder liquidity.
- a p2p bandwidth concern: even if this new class of nodes turns public,
they would still have a heavy sync period due to having fallen behind
during the day, so you would have huge bandwidth spikes every time a
timezone falls asleep, and a risk of choking the upload links of stable
full-nodes.

I think assumeutxo may be interesting in the future in case of long-fork
detection: you may be able to download a UTXO set on the fly and fall back
to being a full-node. But that would only be an emergency measure, not a
regular cost on the backbone network.

Antoine


On Thu, May 7, 2020 at 12:41, Igor Cota wrote:

> Hi Antoine et al,
>
> Maybe I'm completely wrong, missing some numbers, and it's maybe fine to
>> just rely on few thousands of full-node operators being nice and servicing
>> friendly millions of LN mobiles clients. But just in case it may be good to
>> consider a reasonable alternative.
>>
>
>
>> So you may want to separate control/data plane, get filters from CDN and
>> headers as check-and-control directly from the backbone network. "Hybrid"
>> models should clearly be explored.
>
>
> For some months now I've been exploring the feasibility of running full
> nodes on everyday phones [1]. One of my first thoughts was how to avoid the
> phones mooching off the network. Obviously due to battery, storage and
> bandwidth constraints it is not reasonable to expect pocket full nodes to
> serve blocks during day time.
>
> Huge exception to this is the time we are asleep and our phones are
> connected to wifi and charging. IMO this is a huge untapped resource that
> would allow mobile nodes to earn their keep. If we limit full node
> operation to sleepy night time the only constraining resource is storage:
> 512 gb of internal storage in phones is quite rare, probably about $100 for
> an SD card with full archival node capacity but phones with memory card
> slots rarer still - no one is going to bother.
>
> So depending on their storage capacity phone nodes could decide to store
> and serve just a randomly selected range of blocks during their nighttime
> operation. With trivial changes to P2P they could advertise the blocks they
> are able to serve.
> If there comes a time that normal full nodes feel DoS'ed they can
> challenge such nodes to produce the blocks they advertise and ban them as
> moochers if they fail to do so. Others may elect to be more charitable and
> serve everyone.
>
> These types of nodes would truly be part-timing since they only carry a
> subset of the blockchain and work while their operator is asleep. Probably
> should be called part-time or Sleeper Nodes™.
>
> They could be user friendly as well, with Assume UTXO they could be
> bootstrapped quickly and while they do the IBD in the background instead of
> traditional pruning they can keep the randomly assigned bit of blockchain
> to later serve the network.
>
> Save for the elderly, all the people I know could run such a node, and I
> don't live in a first world country.
>
> There is also the feel-good kumbaya aspect of American phone nodes serving
> the African continent while the Americans are asleep, Africans and
> Europeans serving the Asians in kind. By plugging in our phones and going
> to sleep we could blanket the whole world in (somewhat) full nodes!
>
> Cheers,
> Igor
>
> [1] https://icota.github.io/
>
> On Tue, 5 May 2020 at 12:18, Antoine Riard 
> wrote:
>
>> Hi,
>>
>> (cross-posting as it's really both layers concerned)
>>
>> Ongoing advancement of BIP 157 implementation in Core maybe the
>> opportunity to reflect on the future of light client protocols and use this
>> knowledge to make better-informed decisions about what kind of
>> infrastructure is needed to support mobile clients at large scale.
>>
>> Trust-minimization of Bitcoin security model has always relied first and
>> above on running a full-node. This current paradigm may be shifted by LN
>> where fast, affordable, confidential, censorship-resistant payment services
>> may attract a lot of adoption without users running a full-node. Assuming a
>> user adoption path where a full-node is required to benefit for LN may
>> deprive a lot of users, especially those who are already denied a real
>> financial infrastructure access. It doesn't mean we shouldn't foster node
>> adoption when people are able to do so, and having a LN wallet maybe even a
>> first-step to it.
>>
>> Designing a 

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-07 Thread Antoine Riard via bitcoin-dev
What I'm thinking more is: if the costs of security are externalized too
much from the light clients onto full nodes, node operators are just going
to stop servicing light clients (`peercfilters=false`). The backbone p2p
network is going to be fine. But the massive LN light-client network built
on top is going to rely on centralized services for its chain access, and
now you may have consensus capture by those...

On Wed, May 6, 2020 at 12:00, Keagan McClelland wrote:

> Hi Antoine,
>
> Consensus capture by miners isn't the only concern here. Consensus capture
> by any subset of users whose interests diverge from the overall consensus
> is equally damaging. The scenario I can imagine here is that the more light
> clients outpace full nodes, the more the costs of security are being
> externalized from the light clients onto the full nodes. In this situation,
> it can make full nodes harder to run. If they are harder to run it will
> price out some marginal set of full node operators, which causes a net new
> increase in light clients (as the disaffected full nodes convert), AND a
> redistribution of load onto a smaller surface area. This is a naturally
> unstable process. It is safe to say that as node counts drop, the set of
> node operators will increasingly represent economic actors with extreme
> weight. The more this process unfolds, the more likely their interests will
> diverge from the population at large, and also the more likely they can be
> coerced into behavior they otherwise wouldn't. After all it is easier to
> find agents who carry lots of economic weight. This is true independent of
> their mining status, we should be just as wary of consensus capture by
> exchanges or HNWI's as we are about miners.
>
> Keagan
>
> On Wed, May 6, 2020 at 3:06 AM Antoine Riard 
> wrote:
>
>> I do see the consensus capture argument by miners but in reality isn't
>> this attack scenario have a lot of assumptions on topology an deployment ?
>>
>> For such attack to succeed you need miners nodes to be connected to
>> clients to feed directly the invalid headers and if these ones are
>> connected to headers/filters gateways, themselves doing full-nodes
>> validation invalid chain is going to be sanitized out ?
>>
>> Sure now you trust these gateways, but if you have multiple connections
>> to them and can guarantee they aren't run by the same entity, that maybe an
>> acceptable security model, depending of staked amount and your
>> expectations. I more concerned of having a lot of them and being
>> diversified enough to avoid collusion between gateways/chain access
>> providers/miners.
>>
>> But even if you light clients is directly connected to the backbone
>> network and may be reached by miners you can implement fork anomalies
>> detection and from then you may have multiples options:
>> * halt the wallet, wait for human intervention
>> * fallback connection to a trusted server, authoritative on your chain
>> view
>> * invalidity proofs?
>>
>> Now I agree you need a wide-enough, sane backbone network to build on
>> top, and we should foster node adoption as much as we can.
>>
>> On Tue, May 5, 2020 at 09:01, Luke Dashjr wrote:
>>
>>> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
>>> > Trust-minimization of Bitcoin security model has always relied first
>>> and
>>> > above on running a full-node. This current paradigm may be shifted by
>>> LN
>>> > where fast, affordable, confidential, censorship-resistant payment
>>> services
>>> > may attract a lot of adoption without users running a full-node.
>>>
>>> No, it cannot be shifted. This would compromise Bitcoin itself, which
>>> for
>>> security depends on the assumption that a supermajority of the economy
>>> is
>>> verifying their incoming transactions using their own full node.
>>>
>>> The past few years has seen severe regressions in this area, to the
>>> point
>>> where Bitcoin's future seems quite bleak. Without serious improvements
>>> to the
>>> full node ratio, Bitcoin is likely to fail.
>>>
>>> Therefore, all efforts to improve the "full node-less" experience are
>>> harmful,
>>> and should be actively avoided. BIP 157 improves privacy of fn-less
>>> usage,
>>> while providing no real benefits to full node users (compared to more
>>> efficient protocols like Stratum/Electrum).
>>>
>>> For this reason, myself and a few others oppose merging support for BIP
>>> 157 in
>>> Core.

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard via bitcoin-dev
> As a result, the entire protocol could be served over something like
HTTP, taking advantage of all the established CDNs and anycast serving
infrastructure,

Yes, it moves the issue from being a computation one to a distribution one.
But you still need the bandwidth capacity. What I'm concerned about is the
trust model of relying on a few established CDNs: you don't want to make a
"headers-routing" hijack easy, and therefore cause massive channel closures
or timelock interference due to LN clients not seeing the last few blocks.
So you may want to separate the control and data planes: get filters from
the CDN, and headers, as check-and-control, directly from the backbone
network. "Hybrid" models should clearly be explored.

Web-of-trust-style deployments should also be envisioned; you may get a
huge scaling improvement, assuming clients may peer between themselves, and
the ones belonging to the same social entity should be able to share the
same chain view without too much risk.

> Piggy backing off the above idea, if the data starts being widely served
over HTTP, then LSATs[1][2] can be used to add a lightweight payment
mechanism by inserting a new proxy server in front of the filter/header
infrastructure.

Yeah, I haven't had time to read the spec yet, but something like LSATs was
clearly what I meant when speaking about monetary compensation to price
resources. I just hope it isn't too tied to HTTP, because you may want to
read/write over other communication channels, like tx-broadcast-over-radio,
to solve first-hop privacy.

On Tue, May 5, 2020 at 20:31, Olaoluwa Osuntokun wrote:

> Hi Antoine,
>
> > Even with cheaper, more efficient protocols like BIP 157, you may have a
> > huge discrepancy between what is asked and what is offered. Assuming 10M
> > light clients [0] each of them consuming ~100MB/month for
> filters/headers,
> > that means you're asking 1PB/month of traffic to the backbone network. If
> > you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> > signal BIP 157, that's an increase of 100GB/month for each. Which is
> > consequent with regards to the estimated cost of 350GB/month for running
> > an actual public node
>
> One really dope thing about BIP 157+158, is that the protocol makes serving
> light clients now _stateless_, since the full node doesn't need to perform
> any unique work for a given client. As a result, the entire protocol could
> be served over something like HTTP, taking advantage of all the established
> CDNs and anycast serving infrastructure, which can reduce syncing time
> (less latency to
> fetch data) and also more widely distributed the load of light clients
> using
> the existing web infrastructure. Going further, with HTTP/2's server-push
> capabilities, those serving this data can still push out notifications for
> new headers, etc.
>
> > Therefore, you may want to introduce monetary compensation in exchange of
> > servicing filters. Light client not dedicating resources to maintain the
> > network but free-riding on it, you may use their micro-payment
> > capabilities to price chain access resources [3]
>
> Piggy backing off the above idea, if the data starts being widely served
> over HTTP, then LSATs[1][2] can be used to add a lightweight payment
> mechanism by inserting a new proxy server in front of the filter/header
> infrastructure. The minted tokens themselves may allow a user to purchase
> access to a single header/filter, a range of them in the past, or N headers
> past the known chain tip, etc, etc.
>
> -- Laolu
>
> [1]: https://lsat.tech/
> [2]: https://lightning.engineering/posts/2020-03-30-lsat/
>
>
> On Tue, May 5, 2020 at 3:17 AM Antoine Riard 
> wrote:
>
>> Hi,
>>
>> (cross-posting as it's really both layers concerned)
>>
>> Ongoing advancement of BIP 157 implementation in Core maybe the
>> opportunity to reflect on the future of light client protocols and use this
>> knowledge to make better-informed decisions about what kind of
>> infrastructure is needed to support mobile clients at large scale.
>>
>> Trust-minimization of Bitcoin security model has always relied first and
>> above on running a full-node. This current paradigm may be shifted by LN
>> where fast, affordable, confidential, censorship-resistant payment services
>> may attract a lot of adoption without users running a full-node. Assuming a
>> user adoption path where a full-node is required to benefit for LN may
>> deprive a lot of users, especially those who are already denied a real
>> financial infrastructure access. It doesn't mean we shouldn't foster node
>> adoption when people are able to do so, and having a LN wallet maybe even a
>> first-step to it.
>>
>> Designing a mobile-first LN experience opens its own gap of challenges
>> especially in terms of security and privacy. The problem can be scoped as
>> how to build a scalable, secure, private chain access backend for millions
>> of LN clients ?
>>
>> Light client protocols for LN exist (either BIP157 or Electrum 

Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard via bitcoin-dev
> The choice between whether we offer them a light client technology that
is better or worse for privacy and scalability.

And offer them a solution which would scale in the long term.

Again, it's not an argument against the BIP 157 protocol in itself; the
problem I'm interested in is how implementing BIP 157 in Core will address
this issue.

On Tue, May 5, 2020 at 13:36, John Newbery via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> There doesn't seem to be anything in the original email that's specific to
> BIP 157. It's a restatement of the arguments against light clients:
>
> - light clients are a burden on the full nodes that serve them
> - if light clients become more popular, there won't be enough full nodes
> to serve them
> - people might build products that depend on altruistic nodes serving
> data, which is unsustainable
> - maybe at some point in the future, light clients will need to pay for
> services
>
> The choice isn't between people using light clients or not. People already
> use light clients. The choice between whether we offer them a light client
> technology that is better or worse for privacy and scalability.
>
> The arguments for why BIP 157 is better than the existing light client
> technologies are available elsewhere, but to summarize:
>
> - they're unique for a block, which means they can easily be cached.
> Serving a filter requires no computation, just i/o (or memory access for
> cached filter/header data) and bandwidth. There are plenty of other
> services that a full node offers that use i/o and bandwidth, such as
> serving blocks.
> - unique-for-block means clients can download from multiple sources
> - the linked-headers/filters model allows hybrid approaches, where headers
> checkpoints can be fetched from trusted/signed nodes, with intermediate
> headers and filters fetched from untrusted sources
> - less possibilities to DoS/waste resources on the serving node
> - better for privacy
>
> > The intention, as I understood it, of putting BIP157 directly into
> bitcoind was to essentially force all `bitcoind` users to possibly service
> BIP157 clients
>
> Please. No-one is forcing anyone to do anything. To serve filters, a node
> user needs to download the latest version, set `-blockfilterindex=basic` to
> build the compact filters index, and set `-peercfilters` to serve them over
> P2P. This is an optional, off-by-default feature.
>
> Regards,
> John
>
>
> On Tue, May 5, 2020 at 9:50 AM ZmnSCPxj via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Good morning ariard and luke-jr
>>
>>
>> > > Trust-minimization of Bitcoin security model has always relied first
>> and
>> > > above on running a full-node. This current paradigm may be shifted by
>> LN
>> > > where fast, affordable, confidential, censorship-resistant payment
>> services
>> > > may attract a lot of adoption without users running a full-node.
>> >
>> > No, it cannot be shifted. This would compromise Bitcoin itself, which
>> for
>> > security depends on the assumption that a supermajority of the economy
>> is
>> > verifying their incoming transactions using their own full node.
>> >
>> > The past few years has seen severe regressions in this area, to the
>> point
>> > where Bitcoin's future seems quite bleak. Without serious improvements
>> to the
>> > full node ratio, Bitcoin is likely to fail.
>> >
>> > Therefore, all efforts to improve the "full node-less" experience are
>> harmful,
>> > and should be actively avoided. BIP 157 improves privacy of fn-less
>> usage,
>> > while providing no real benefits to full node users (compared to more
>> > efficient protocols like Stratum/Electrum).
>> >
>> > For this reason, myself and a few others oppose merging support for BIP
>> 157 in
>> > Core.
>>
>> BIP 157 can be implemented as a separate daemon that processes the blocks
>> downloaded by an attached `bitcoind`, i.e. what Wasabi does.
>>
>> The intention, as I understood it, of putting BIP157 directly into
>> bitcoind was to essentially force all `bitcoind` users to possibly service
>> BIP157 clients, in the hope that a BIP157 client can contact any arbitrary
>> fullnode to get BIP157 service.
>> This is supposed to improve to the situation relative to e.g. Electrum,
>> where there are far fewer Electrum servers than fullnodes.
>>
>> Of course, as ariard computes, deploying BIP157 could lead to an
>> effective DDoS on the fullnode network if a large number of BIP157 clients
>> arise.
>> Though maybe this will not occur very fast?  We hope?
>>
>> It seems to me that the thing that *could* be done would be to have
>> watchtowers provide light-client services, since that seems to be the major
>> business model of watchtowers, as suggested by ariard as well.
>> This is still less than ideal, but maybe is better than nothing.
>>
>> Regards,
>> ZmnSCPxj

Re: [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard via bitcoin-dev
I do see the consensus capture argument by miners, but in reality doesn't
this attack scenario rest on a lot of assumptions about topology and
deployment?

For such an attack to succeed, you need miner nodes to be connected to
clients to feed them the invalid headers directly; and if these clients are
connected to headers/filters gateways, themselves doing full-node
validation, the invalid chain is going to be sanitized out?

Sure, now you trust these gateways, but if you have multiple connections to
them and can guarantee they aren't run by the same entity, that may be an
acceptable security model, depending on the staked amount and your
expectations. I'm more concerned about having a lot of them and being
diversified enough to avoid collusion between gateways/chain-access
providers/miners.

But even if your light client is directly connected to the backbone network
and may be reached by miners, you can implement fork-anomaly detection, and
from there you have multiple options (a detection sketch follows the list):
* halt the wallet, wait for human intervention
* fallback connection to a trusted server, authoritative on your chain view
* invalidity proofs?
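
A minimal sketch of what such fork-anomaly detection could look like, under
the assumption that each chain source exposes a hypothetical get_tip_header()
returning (height, hash) -- none of these names come from a real library:

    def detect_fork_anomaly(sources, max_height_gap=6):
        # Compare tip headers across independent chain sources.
        tips = [src.get_tip_header() for src in sources]
        best_height = max(height for height, _ in tips)
        hashes_at_best = {h for height, h in tips if height == best_height}

        if len(hashes_at_best) > 1:
            return "conflicting-tips"  # sources disagree: possible invalid fork
        if best_height - min(height for height, _ in tips) > max_height_gap:
            return "stale-source"      # one view lags far behind: possible eclipse
        return None

On "conflicting-tips" the wallet would halt for human intervention; on
"stale-source" it may fall back to a trusted, authoritative server.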

Now I agree you need a wide-enough, sane backbone network to build on top of,
and we should foster node adoption as much as we can.

On Tue, May 5, 2020 at 09:01, Luke Dashjr wrote:

> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
> > Trust-minimization of Bitcoin security model has always relied first and
> > above on running a full-node. This current paradigm may be shifted by LN
> > where fast, affordable, confidential, censorship-resistant payment
> services
> > may attract a lot of adoption without users running a full-node.
>
> No, it cannot be shifted. This would compromise Bitcoin itself, which for
> security depends on the assumption that a supermajority of the economy is
> verifying their incoming transactions using their own full node.
>
> The past few years has seen severe regressions in this area, to the point
> where Bitcoin's future seems quite bleak. Without serious improvements to
> the
> full node ratio, Bitcoin is likely to fail.
>
> Therefore, all efforts to improve the "full node-less" experience are
> harmful,
> and should be actively avoided. BIP 157 improves privacy of fn-less usage,
> while providing no real benefits to full node users (compared to more
> efficient protocols like Stratum/Electrum).
>
> For this reason, myself and a few others oppose merging support for BIP
> 157 in
> Core.
>
> > Assuming a user adoption path where a full-node is required to benefit
> for
> > LN may deprive a lot of users, especially those who are already denied a
> > real financial infrastructure access.
>
> If Bitcoin can't do it, then Bitcoin can't do it.
> Bitcoin can't solve *any* problem if it becomes insecure itself.
>
> Luke
>
> P.S. See also
>
> https://medium.com/@nicolasdorier/why-i-dont-celebrate-neutrino-206bafa5fda0
>
> https://medium.com/@nicolasdorier/neutrino-is-dangerous-for-my-self-sovereignty-18fac5bcdc25
>


Re: [bitcoin-dev] [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard via bitcoin-dev
I didn't trust myself, and verified: in fact the [3] is the real [2].

On Tue, May 5, 2020 at 06:28, Andrés G. Aragoneses wrote:

> Hey Antoine, just a small note, [3] is missing in your footnotes, can you
> add it? Thanks
>
> On Tue, 5 May 2020 at 18:17, Antoine Riard 
> wrote:
>
>> [... full original message elided; it appears below in this archive ...]

[bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-05 Thread Antoine Riard via bitcoin-dev
Hi,

(cross-posting as it's really both layers concerned)

Ongoing advancement of the BIP 157 implementation in Core may be the
opportunity to reflect on the future of light client protocols and use this
knowledge to make better-informed decisions about what kind of infrastructure
is needed to support mobile clients at large scale.

Trust-minimization of the Bitcoin security model has always relied first and
foremost on running a full-node. This current paradigm may be shifted by LN,
where fast, affordable, confidential, censorship-resistant payment services
may attract a lot of adoption without users running a full-node. Assuming a
user adoption path where a full-node is required to benefit from LN may
deprive a lot of users, especially those who are already denied access to
real financial infrastructure. It doesn't mean we shouldn't foster node
adoption when people are able to do so, and having a LN wallet may even be a
first step toward it.

Designing a mobile-first LN experience opens its own set of challenges,
especially in terms of security and privacy. The problem can be scoped as:
how do you build a scalable, secure, private chain-access backend for
millions of LN clients?

Light client protocols for LN exist (either BIP157 or Electrum are used),
although their privacy and security guarantees with regard to client-side
implementation may still be an object of concern (aggressive tx-rebroadcast,
sybillable outbound peer selection, trusted fee estimation). That said, one
of the bottlenecks is likely the number of full-nodes willing to dedicate
resources to serve those clients. It's not about _which_ protocol is deployed
but more about _incentives_ for node operators to dedicate long-term
resources to clients they otherwise have little reason to care about.

Even with cheaper, more efficient protocols like BIP 157, you may have a huge
discrepancy between what is asked and what is offered. Assuming 10M light
clients [0], each of them consuming ~100MB/month for filters/headers, you're
asking 1PB/month of traffic from the backbone network. If you assume 10K
public nodes, like today, and _all_ of them opt in to signal BIP 157, that's
an increase of 100GB/month for each, which is substantial relative to the
estimated 350GB/month for running an actual public node. Widening full-node
adoption, especially in terms of geographic distribution, means doing as much
as we can to bound its operational cost.
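
Those figures can be checked with quick back-of-the-envelope arithmetic; the
inputs below are the assumptions from the paragraph above:

    clients = 10_000_000        # assumed light clients [0]
    mb_per_client = 100         # ~100MB/month of filters/headers each
    public_nodes = 10_000       # public nodes, all signaling BIP 157

    total_mb = clients * mb_per_client
    print(total_mb / 1e9, "PB/month aggregate")            # -> 1.0
    print(total_mb / public_nodes / 1e3, "GB/month/node")  # -> 100.0, on top
                                                           # of ~350GB baseline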

Obviously, deployment of a more efficient tx-relay protocol like Erlay will
free up some resources, but it may be wiser to dedicate them to increasing
the health and security of the backbone network, like deploying more outbound
connections.

Unless your light client protocol is so ridiculously cheap that it can rely
on the niceness of a subset of node operators offering free resources, it
won't scale. And it's likely you will always have a disequilibrium between
the number of clients and the number of full-nodes; even worse, their growth
rates won't be the same, as the former are so much easier to set up.

It doesn't mean serving filters for free won't work for now, as the number of
BIP157 clients is still pretty low, but what is worrying is wallet vendors
building such chain-access backends and hitting a bandwidth scalability wall
a few years from now instead of pursuing better solutions. And if this
happens, maybe suddenly, isn't the quick fix going to be to rely on
centralized services, so much easier to deploy?

Of course, it may be objected that current full-node operators don't get
anything back from serving blocks, transactions, addresses... It may be
replied that you have an indirect incentive to participate in network relay
and thereby guarantee censorship-resistance, instead of connecting directly
to miners. You do have ways today to limit your resource exposure, like
pruning, blocks-only mode or staying private, but the wider point is that the
current (non?)-incentive model seems to work for the base layer. For
light-client data, are node operators going to be satisfied to serve this new
*class* of traffic en masse?

This doesn't mean you won't find BIP157 servers ready to serve you with
unlimited credit, but it's more likely their intentions won't be aligned with
yours, like spying on your transaction broadcasts or the blocks you fetch.
And you do want peer diversity, avoiding every BIP157 server sitting on a few
ASNs, for fault-tolerance. Do people expect a scenario a la Cloudflare, where
everyone connects to more or less the same set of entities?

Moreover, the LN security model diverges hugely from basic on-chain
transactions. The worst-case attack on-chain is a malicious light-client
server showing a longer, invalid, PoW-backed chain to double-spend the user.
On LN, the *liveness* requirement means the entity owning your view of the
chain can lie to you about whether your channel has been spent by a revoked
commitment, about the real tip of the blockchain, or even dry up block
announcements to trigger unexpected behavior in the client logic. A malicious

Re: [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Antoine Riard via bitcoin-dev
> In that case, would it be worth re-implementing something like a BIP61
reject message but with an extension that returns the txids of any
conflicts?

That's an interesting idea, but an attacker can create a local conflict in
your mempool and then send the preimage tx so that it hits recentRejects
until the next tip; when the rejection code with the conflicting txid is
received, the transaction isn't going to be fetched (a toy model of the flow
follows). Of course you can make an exception for this, but that seems like a
DoS vector...
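
A toy model of that rejection flow, with simplified stand-ins for the mempool
conflict check and the recent-rejects filter (Bitcoin Core's actual filter is
a rolling bloom filter reset at each new tip, not a plain set):

    mempool_spends = set()   # outpoints spent by txs currently in our mempool
    recent_rejects = set()   # txids we won't re-fetch until the next tip

    def on_tx(txid, spent_outpoints):
        if txid in recent_rejects:
            return "ignored"                 # not even fetched again
        if spent_outpoints & mempool_spends:
            recent_rejects.add(txid)         # conflict: remembered as rejected
            return "rejected"
        mempool_spends.update(spent_outpoints)
        return "accepted"

    on_tx("conflict_tx", {"outpoint_A"})     # attacker plants the local conflict
    on_tx("preimage_tx", {"outpoint_A"})     # preimage tx lands in recent_rejects
    # A later reject-with-conflict hint naming preimage_tx is now useless:
    assert on_tx("preimage_tx", {"outpoint_A"}) == "ignored"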

Also, if you have a private full-node and connect only to 8 outbound peers,
an attacker can do a bit of tx-relay topology discovery and blind your
tx-relay peers too...

I think p2p/mempool hardening measures will only make the attack harder, not
erase it; we should avoid tying the security model of Lightning too much to a
given p2p topology. If you don't do manual peering (whitelist, addnode), that
topology may change without visibility (like a stale tip).



On Wed, Apr 22, 2020 at 14:25, David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Mon, Apr 20, 2020 at 10:43:14PM -0400, Matt Corallo via Lightning-dev
> wrote:
> > A lightning counterparty (C, who received the HTLC from B, who
> > received it from A) today could, if B broadcasts the commitment
> > transaction, spend an HTLC using the preimage with a low-fee,
> > RBF-disabled transaction.  After a few blocks, A could claim the HTLC
> > from B via the timeout mechanism, and then after a few days, C could
> > get the HTLC-claiming transaction mined via some out-of-band agreement
> > with a small miner. This leaves B short the HTLC value.
>
> IIUC, the main problem is honest Bob will broadcast a transaction
> without realizing it conflicts with a pinned transaction that's already
> in most node's mempools.  If Bob knew about the pinned transaction and
> could get a copy of it, he'd be fine.
>
> In that case, would it be worth re-implementing something like a BIP61
> reject message but with an extension that returns the txids of any
> conflicts?  For example, when Bob connects to a bunch of Bitcoin nodes
> and sends his conflicting transaction, the nodes would reply with
> something like "rejected: code 123: conflicts with txid 0123...cdef".
> Bob could then reply with a getdata('tx', '0123...cdef') to get the
> pinned transaction, parse out its preimage, and resolve the HTLC.
>
> This approach isn't perfect (if it even makes sense at all---I could be
> misunderstanding the problem) because one of the problems that caused
> BIP61 to be disabled in Bitcoin Core was its unreliability, but I think
> if Bob had at least one honest peer that had the pinned transaction in
> its mempool and which implemented reject-with-conflicting-txid, Bob
> might be ok.
>
> -Dave


Re: [bitcoin-dev] [Lightning-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Antoine Riard via bitcoin-dev
Personally, I would have waited a bit before going public on this, like
letting some implementations increase their CLTV deltas, but anyway, it's
here now.

Mempool-pinning attacks were already discussed on this list [0], but what we
found is that you can _reverse_ the scenario: it's not the malicious party
delaying confirmation of the honest party's transactions, but the malicious
party deliberately sticking its own transactions in the mempool to avoid
confirmation of the timeout, thereby gaming the inter-link timelocks to
provoke an unbalanced settlement for the victim ("aka you pay forward, but
don't get paid backward").

How practical the attacks are depends on how you can leverage mempool rules
to pin your own transaction. What you're looking for is a
_mempool-obstruction_ trick, i.e. a way to get the honest party's transaction
bounced off because your transaction is already there.

Beyond disabling RBF on your transaction (with the current protocol, not the
anchor proposal), there are two likely candidates:
* BIP 125 rule 3: "The replacement transaction pays an absolute fee of at
least the sum paid by the original transactions."
* BIP 125 rule 5: "The number of original transactions to be replaced and
their descendant transactions which will be evicted from the mempool must
not exceed a total of 100 transactions."

Let's go through the whole scenario:
* Mallory and Eve are colluding
* Eve and Mallory open channels with Alice, and Mallory does a bit of
rebalancing to get full incoming capacity, like receiving funds on an onchain
address through another Alice link
* Eve sends HTLC #1 to Mallory through Alice, expiring at block 100
* Eve sends a second HTLC #2 to Mallory through Alice, expiring at block 110
on the outgoing link (A<->M), 120 on the incoming link (E<->A)
* Before block 100, absent cancellation from Mallory, Alice force-closes the
channel and broadcasts her local commitment and HTLC-timeout to get back
HTLC #1
* Alice can't broadcast the HTLC-timeout for HTLC #2, as it only expires at
110
* Mallory can broadcast the Pinning Preimage Tx on the offered HTLC #2 output
of Alice's transaction; the feerate is maliciously chosen to get into network
mempools but never confirm. The absolute fee must be higher than
HTLC-timeout #2's, a fact known to Mallory. There is no p2p race.
* As Alice doesn't watch the mempool, she is never going to learn the
preimage to redeem the incoming HTLC #2
* At block 110, Alice is going to broadcast HTLC-timeout #2; its feerate may
be higher, but as its absolute fee is lower, it's going to be rejected by
network mempools as a replacement for the Pinning Preimage Tx (BIP 125 rule
3; see the sketch after this scenario)
* At block 120, Eve closes the channel and claims HTLC #2 via timeout
* Mallory can RBF the Pinning Preimage Tx with a high-feerate version and get
it confirmed
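
The rejection at block 110 reduces to BIP 125 rule 3 in isolation; a minimal
sketch of the check, with illustrative fee values:

    def rule3_allows(replacement_fee, evicted_fees):
        # Rule 3: the replacement must pay an absolute fee at least equal
        # to the sum paid by the transactions it would evict.
        return replacement_fee >= sum(evicted_fees)

    pinning_preimage_fee = 50_000  # sats: high absolute fee, low feerate (big tx)
    htlc_timeout_fee = 10_000      # sats: higher feerate, lower absolute fee

    # Alice's HTLC-timeout is bounced despite its better feerate:
    assert not rule3_allows(htlc_timeout_fee, [pinning_preimage_fee])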

The new anchor_output proposal, by disabling RBF, forces the attacker to bid
on the absolute fee. There is now a risk of losing the fee if the Pinning Tx
confirms. You may extend your "pinning lease" by ejecting your malicious tx,
like conflicting or trimming one of its parents out of the mempool, and then
reannouncing your preimage tx with a lower-feerate-but-still-high-fee version
before a new block and an honest HTLC-timeout rebroadcast.

AFAICT, even with anchor_output deployed, and even assuming empty mempools,
the success rate and economic rationality of the attack come down to finding
such a cheap, reliable "pinning lease extension" trick.

I think any mempool-watching mitigation is at best a cat-and-mouse hack.
Contrary to nodes advancing towards a global blockchain view thanks to PoW,
network mempools don't have a convergence guarantee. In a distributed system
like bitcoin, nodes don't see events in the same order: Alice may observe tx
X, tx Y, tx Z while Bob observes tx Z, tx X, tx Y. And the order of events
affects whether a future event is going to be rejected or not; e.g. if tx Z
disables RBF and tx X tries to replace Z, Alice (who saw X first) accepts X
and Bob rejects it. This divergence may persist until a new block.

Practically, it means an attacker can provoke a local conflict to bounce the
HTLC-preimage tx out of your mempool, while broadcasting the preimage tx
without conflict to the rest of the network, by tweaking the tx-relay
protocol and thus easily manipulating the order of events for every node. A
local conflict is easy to provoke: just create a tx A whose output is
double-spent by both the HTLC-preimage-tx and a non-RBF tx B. Announce
txA+txB to the victim's mempool and txA+HTLC-preimage-tx to the rest of the
network. When the rest of the network announces the HTLC-preimage-tx, it's
going to be rejected by your mempool (a toy model follows).
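
A toy model of this divergence, assuming first-seen mempools where a
newcomer is rejected whenever it conflicts with an incumbent that doesn't
signal RBF:

    def mempool_after(arrival_order):
        pool = {}                                 # outpoint -> (txid, signals_rbf)
        for txid, outpoint, signals_rbf in arrival_order:
            incumbent = pool.get(outpoint)
            if incumbent and not incumbent[1]:
                continue                          # conflicts with non-RBF tx: rejected
            pool[outpoint] = (txid, signals_rbf)  # free slot, or RBF replacement
        return {txid for txid, _ in pool.values()}

    tx_b = ("non_rbf_tx_B", "txA:0", False)
    preimage = ("htlc_preimage_tx", "txA:0", False)

    assert mempool_after([tx_b, preimage]) == {"non_rbf_tx_B"}      # victim's view
    assert mempool_after([preimage, tx_b]) == {"htlc_preimage_tx"}  # everyone else's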

Provoking a local conflict assumes, of course, _interlayer_ mapping by the
attacker, i.e. mapping your LN node to your full-node(s). Last time we
checked, there were 982 matches by IP for 4,500 LN nodes/52,000 full-nodes.
Mapping heuristics are an ongoing research subject and sadly seem affordable.

Yes: a) you can enable full-RBF on your local node, but the blinding conflict
may still come with a higher feerate, as everything is attacker-malleable;
b) you may want to catch the tx and extract the preimage
on the p2p wire, but

Re: [bitcoin-dev] LN & Coinjoin, a Great Tx Format Wedding

2020-02-25 Thread Antoine Riard via bitcoin-dev
Morning Zeeman,

> I proposed before to consider splicing as a form of merged closing plus
funding, rather than a modification of channel state; in particular we might
note that, for compatibility with our existing system, a spliced channel
would have to change its short channel ID and channel ID, so it is arguably a
different channel already.

Yes, but you may want an alias to keep your channel routing-score across
splicing, though how to do this is more LN-dev-specific.

> Emulating LN splices mildly makes CoinJoinXT less desirable, however, as
the mix takes longer and is more costly.

Intuitively, a lot of Coinjoin traffic may be redirected through LN in the
future when the protocol matures; privacy properties may be better there
(though this needs careful analysis). Coinjoins would then only be for high
amounts, for which LN doesn't offer security/liquidity, and in that case
extra time spent increasing privacy is IMO an acceptable tradeoff.

> Does not Electrum do RBF by default?

Dunno, for more context on RBF and its controversies see
https://bitcoincore.org/en/faq/optin_rbf/ (or Optech resources)

> 1.5RTT with MuSig

Yes, right; I meant you don't need to assume later interactivity: if it's a
multi-party tx construction, you sign multiple RBF versions at the same time
(a sketch follows). Still need to think about privacy-preserving fee bumping
wrt a mempool observer.
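
A rough sketch of pre-signing multiple RBF versions in a single interaction;
sign_session and the dict-shaped tx are hypothetical stand-ins for the actual
multi-party construction:

    def presign_rbf_ladder(base_tx, feerates_sat_per_vb, sign_session):
        # All versions are signed up front; the change output funds the fee.
        # Broadcast the lowest-feerate version first, bump later without
        # any further interactivity.
        versions = []
        for feerate in feerates_sat_per_vb:
            tx = dict(base_tx)
            tx["fee"] = feerate * tx["vsize"]
            tx["change"] = base_tx["change"] - tx["fee"]
            versions.append(sign_session(tx))
        return versions

    # e.g. presign_rbf_ladder(tx, [1, 5, 25, 125], session.sign)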

> This can be mitigated if all participants contribute equal or
nearly-equally to the fees, though that complicates single-funding, and may
violate Initiator Pays Principle (the initiator of an action should pay all
fees related to the action, as otherwise it may be  possible to create a
null operation that the acceptor of the action ends up paying fees for,
which can be used as a financial attack to drain acceptors).

Yes, but you also want the acceptor to pay for the inputs it announces, to
avoid pouring the spending burden onto the initiator only, or any free-ride
aggregation.

> There may be other protocols interested in this as well --- for instance
"submarine swaps" and "lightning loops", which are the same thing.

Yes, good point; batched submarine swaps especially are good candidates, also
DLCs (I will inquire into the tx patterns of more bitcoin protocols).


On Mon, Feb 24, 2020 at 18:36, ZmnSCPxj wrote:

> Good morning Antoine,
>
>
> > > On mutual closes, we should probably set `nLockTime` to the current
> blockheight + 1 as well.
> > > This has greater benefit later in a Taproot world.
> >
> > I assume mutual closes would fall under the aforementioned tx
> construction proposal, so a closing may be a batch to fund other channels or
> > splice existent ones.
>
> Ah, that is indeed of great interest.
> I proposed before to consider splicing as a form of merged closing plus
> funding, rather than a modification of channel state; in particular we
> might note that, for compatibility with our existing system, a spliced
> channel would have to change its short channel ID and channel ID, so it is
> arguably a different channel already.
>
> >
> > > A kind of non-equal-value CoinJoin could emulate a Lightning open +
> close, but most Lightning channels will have a large number of blocks
> (thousands or tens of thousands) between the open and the close; it seems
> unlikely that a short-term channel will exist that matches the
> non-equal-value CoinJoin.
> >
> > That's a really acute point, utxo age and spending frequency may be
> obvious protocol leaks.
>
> Yes; I am curious how JoinMarket reconciles how makers mix their coins vs.
> how takers do; presumably the tumbler.py emulates the behavior of a maker
> somehow.
>
> > Splicing may help there because a LN node would do multiple chain writes
> during channel lifecycle for liquidity reasons but it's
> > near-impossible to predict its frequency without deployment.
>
> Long ago, I proposed an alternative to splicing, which would today be
> recognizable as a "submarine swap" or "lightning loop".
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-May/000692.html
> Perhaps the frequencies of those operations may hint as to how much
> splicing would occur in practice in the future.
>
> > Even with this, I do fear an analysis gap between Coinjoin spending
> delta and LN ones. A way to circumvent this would be for CoinjoinXT to
> timelock its PTG
> > transactions to mimick actively-spliced LN channels. That's where
> adoption of a common format by other onchain transactions than LN ones
> would help a lot.
>
> Well, one way to implement splice-in would be to have an output that is
> first dedicated to the splice-in, and *then* a separate transaction which
> actually does the splice-in.
> This has a drawback of requiring an extra transaction, which wins us the
> facility to continue operation of the channel even while the splice-in
> transactions are being confirmed while retaining only one state.
> (the latest proposal, I believe, does *not* use this construction, and
> instead requires both sides to maintain two sets of states, with 

Re: [bitcoin-dev] LN & Coinjoin, a Great Tx Format Wedding

2020-02-24 Thread Antoine Riard via bitcoin-dev
> I notice your post puts little spotlight on unilateral cases.
> A thing to note, is that we only use `nSequence` and the weird watermark
on unilateral closes.
> Even HTLCs only exist on unilateral closes --- on mutual closes we wait
for HTLCs to settle one way or the other before doing the mutual close.

Yes, I'm only aiming at LN cooperative cases; as you noticed, HTLCs only
exist on commitment txn, and masquerading them in some Taptree would come
with its own challenges. Cooperative closings should be the majority of
channel closes if the network is reliable, and so would be a set big enough
to achieve the goal of blurring Coinjoins among LN transactions.

Right now we don't use `nSequence`, but the current interactive tx
construction proposal uses it for RBF (the weird watermark was an example).

> On mutual closes, we should probably set `nLockTime` to the current
blockheight + 1 as well.
> This has greater benefit later in a Taproot world.

I assume mutual closes would fall under the aforementioned tx construction
proposal, so a closing may be a batch that funds other channels or splices
existing ones.

> A kind of non-equal-value CoinJoin could emulate a Lightning open +
close, but most Lightning channels will have a large number of blocks
(thousands or tens of thousands) between the open and the close; it seems
unlikely that a short-term channel will exist that matches the
non-equal-value CoinJoin.

That's a really acute point: utxo age and spending frequency may be obvious
protocol leaks. Splicing may help there, because a LN node would do multiple
chain writes during the channel lifecycle for liquidity reasons, but it's
near-impossible to predict its frequency without deployment. Even with this,
I do fear an analysis gap between Coinjoin spending deltas and LN ones. A way
to circumvent this would be for CoinjoinXT to timelock its PTG transactions
to mimic actively-spliced LN channels. That's where adoption of a common
format by onchain transactions other than LN ones would help a lot.

> Should always be on, even if we do not (yet) have a facility to
re-interact to bump fees higher.
> While it is true that a surveillor can determine that a transaction has
in fact been replaced (by observing the mempool) and thus eliminate the set
of transactions that arose from protocols that mark RBF but do not (yet)
have a facility to bump fees higher, this information is not permanently
recorded on all fullnodes and at least we force surveillors to record this
information themselves.

Yes, but if you do this for Core, given some merchants are refusing RBF
transactions for onchain payments, people are going to complain... Also see
the footnote on spurious RBF about not having a facility to bump fees higher
(you can sign multiple RBF transactions in 1-RTT and agree to broadcast them
later to obfuscate mempool analysis).

> However, it seems to me that we need to as well nail down the details of
this format.

Of course; just curious about people's opinions right now, but if it's a good
way to solve the described problem, I will draft a spec.

On Sat, Feb 22, 2020 at 20:29, ZmnSCPxj wrote:

> Good morning Antoine, and list,
>
>
> > * nLocktime/nSequence
> > ...
> > * weird watermark (LN commitment tx obfuscated commitment number)
> > ...
> > LN (cooperative case):
>
> I notice your post puts little spotlight on unilateral cases.
> A thing to note, is that we only use `nSequence` and the weird watermark
> on unilateral closes.
> Even HTLCs only exist on unilateral closes --- on mutual closes we wait
> for HTLCs to settle one way or the other before doing the mutual close.
>
> If we assume that unilateral closes are rare, then it might be an
> acceptable risk to lose privacy in that case.
> Of course, it takes two to tango, and it takes two to make a Lightning
> channel, so ---
> In any case, I explored some of the difficulties with unilateral closes as
> well:
>
> *
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002421.html
> *
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002415.html
>
> On mutual closes, we should probably set `nLockTime` to the current
> blockheight + 1 as well.
> This has greater benefit later in a Taproot world.
>
> > Questions:
> > * Are there any protocol-specific semantic wrt to onchain transactions
> incompatibility
> > between Coinjoin and cooperative LN txn ?
>
> A kind of non-equal-value CoinJoin could emulate a Lightning open + close,
> but most Lightning channels will have a large number of blocks (thousands
> or tens of thousands) between the open and the close; it seems unlikely
> that a short-term channel will exist that matches the non-equal-value
> CoinJoin.
>
> In particular, a LN cooperative close will, in general, have only one
> input.
> A new form of CoinJoin could, instead of using a single transaction, use
> two, with an entry transaction that spends into an n-of-n of the
> participants, and the n-of-n being spent to split the coin back to their
> 

Re: [bitcoin-dev] LN & Coinjoin, a Great Tx Format Wedding

2020-02-24 Thread Antoine Riard via bitcoin-dev
> Another one, usually wouldn't be *protocol* as much as wallet leakage,
but could be: utxo selection algorithm (which of course may be difficult to
deduce, but often, far from impossible).

Yes, sure, that's a good point; it may affect the protocol too if your LN
implementation has its own onchain wallet. If not, and it reuses a non-LN
wallet, you just carry over its fingerprint.
A future extension could concern closing/splicing transactions: your
liquidity algorithm may select in a really specific fashion which channels
must be closed or increased...

> But I would ask people to consider CoinJoinXT[1] more seriously in a
taproot/schnorr world, since it addresses this exact point.

The equal-value paradigm is such a watermark, and I assume it leans toward
increasing the number of outputs, so I don't see it being followed by any
other protocol. But yes, CoinjoinXT: if you can come up with an easy
interactive multi-tx construction protocol, that would be interesting (and
could be reused by any cut-through implementation, I guess).

Overall, my thinking was to start specifying this now, because such a thing
would take a fair amount of time/coordination to get adopted. This way, if
and when Taproot/Schnorr happens, we don't have to wait another period to
start enjoying the privacy enhancement (worst-case we can fall back on
2p-ecdsa).



On Sat, Feb 22, 2020 at 07:10, AdamISZ wrote:

> ‐‐‐ Original Message ‐‐‐
> On Friday, 21 February 2020 22:17, Antoine Riard via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> > How can a Bitcoin tranaction leak protocol usage ?
> > * the output type (p2sh, p2wsh, ...)
> > * the spending policy (2-of-3 multisig, timelock, hashlock,...)
> > * outputs ordering (BIP69)
> > * nLocktime/nSequence
> > * RBF-signaling
> > * Equal-value outputs
> > * weird watermark (LN commitment tx obfuscated commitment number)
> > * fees strategy like CPFP
> > * in-protocol announcements [0]
> >
> Good list.
> Another one, usually wouldn't be *protocol* as much as wallet leakage, but
> could be: utxo selection algorithm (which of course may be difficult to
> deduce, but often, far from impossible).
> (Also trivial and increasingly irrelevant, but nVersion).
>
> With regards to coinjoin in this context (I know your points are much
> broader), my comment is:
> For existing protocols (joinmarket's, wasabi's, samourai's), in the
> equal-outs paradigm, I don't see much that can be done in this area.
> But I would ask people to consider CoinJoinXT[1] more seriously in a
> taproot/schnorr world, since it addresses this exact point. With a short
> (not cross-block like swaps or LN setup) interaction, participants can
> arrange the effect of coinjoin without the on-chain watermark of coinjoin
> (so, steganographic). The taproot/schnorr part is needed there because
> multisig is required from transaction to transaction in that protocol, so
> doing it today is less interesting (albeit still interesting).
>
> waxwing
>
> [1] https://joinmarket.me/blog/blog/coinjoinxt/
>


[bitcoin-dev] LN & Coinjoin, a Great Tx Format Wedding

2020-02-21 Thread Antoine Riard via bitcoin-dev
Coinjoin interceptions seem to rise at an increasing pace. Their onchain
fingerprint (high number of inputs/outputs, lack of anti-fee-sniping, script
type, ...) makes their detection quite easy for a chain observer. A ban of
coinjoin'ed coins, or of any other coins linked through a common owner, would
undermine the long-term fungibility of the whole ecosystem.

Of course, they do provide privacy for the participating coins, but at the
tradeoff of creating two observable sets: coinjoin'ed vs non-coinjoin'ed.
Ideally, all onchain transactions should conform to a common transaction
pattern that provides unobservability -- i.e. a specific transaction would be
indistinguishable from any other transaction at all. For LN or Coinjoin it
means an external observer, not involved in the protocol, should be unable to
tell which protocol is being used, or if _any_ specific protocol is being
used.

How can a Bitcoin transaction leak protocol usage? (A classifier sketch
follows the list.)
* the output type (p2sh, p2wsh, ...)
* the spending policy (2-of-3 multisig, timelock, hashlock,...)
* outputs ordering (BIP69)
* nLocktime/nSequence
* RBF-signaling
* Equal-value outputs
* weird watermark (LN commitment tx obfuscated commitment number)
* fees strategy like CPFP
* in-protocol announcements [0]
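
For illustration, a sketch of the kind of heuristics a chain observer can run
against these leaks; tx is assumed to be a simple dict, not a real API:

    def looks_like_ln_commitment(tx):
        # BOLT3 watermark: top byte of the input's nSequence is 0x80 and top
        # byte of nLockTime is 0x20, the lower bits carrying the obscured
        # commitment number.
        return (tx["nSequence"] >> 24) == 0x80 and (tx["nLockTime"] >> 24) == 0x20

    def looks_like_equal_output_coinjoin(tx):
        # Several equal-value outputs are the classic coinjoin fingerprint.
        values = [out["value"] for out in tx["outputs"]]
        return len(values) >= 4 and max(values.count(v) for v in set(values)) >= 3

A single deviating field is enough for heuristics like these to split
transactions back into observable sets.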

A solution could be to blur the onchain transactions of multiple protocols
into one common transaction format [1]. For example, if one of them uses
nSequence for some protocol semantic, all the other ones should do it too.
Any deviation would be enough to be leveraged as a watermark and blow up all
the other tweaks. If Schnorr-Taproot gets adopted and deployed by the
community, and LN specifies an interactive tx construction protocol [2], the
timing would be pretty good to adopt such a format IMO (a field-level sketch
follows the two lists below).

Coinjoin:
* nSequence can be set; it's still secure if parties don't re-sign [3]
* nLockTime can be set for anti-fee-sniping
* Taproot spending

LN (cooperative case):
* splicing may blur funding/closing into the same thing; a closing address
can be a funding output
* splice-in would allow equal-value outputs
* nSequence likely to be set for multi-party tx construction
* nLockTime can be set for anti-fee-sniping
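
Putting the two lists together, a field-level sketch of the common *logical*
format; the values are illustrative, with nLockTime following the
anti-fee-sniping convention:

    def common_format_fields(current_height):
        # Settings a Coinjoin and a cooperative LN tx would share.
        return {
            "nLockTime": current_height,   # anti-fee-sniping
            "nSequence": 0xFFFFFFFD,       # RBF-signaling, usable by multi-party
                                           # tx construction
            "output_type": "p2tr",         # Taproot spending for every output
            "output_order": "random",      # no BIP69 watermark
        }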

Adopting a common transaction format isn't a cure-all solution on the
long-term privacy road, but if it circumvents the ban of some class of
transactions, that would already be a nice win and a worthy effort.

Questions:
* Are there any protocol-specific semantics wrt onchain transactions that are
incompatible between Coinjoin and cooperative LN txn?
* What about RBF-by-default?
* Could the Core wallet, any other protocol, or even batching algorithms
adopt this format?
* Is artificially increasing the number of outputs to mimic Coinjoin txn
acceptable wrt utxo bloat/fees?

Cheers,

Antoine

[0] Like LN announcing public channels with signatures committing both to
onchain utxos and nodes' static pubkeys. And those being displayed on LN
search engines with full owner info...

[1] By format, I don't mean a *binary* format a la PSBT, but merely something
like BOLT3, a *logical* format.

[2]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002500.html

[3] But "blank" RBF signaling would be a privacy leak if Coinjoins are never
bumped, because if you see both a low-fee and a high-fee version of a
transaction you now know it's a LN one, so Coinjoin implems should sometimes
do spurious RBFs.


Re: [bitcoin-dev] Taproot (and graftroot) complexity

2020-02-09 Thread Antoine Riard via bitcoin-dev
> In particular, you care more about privacy when you are contesting a
> close of a channel or other script path because then the miners could be
more
> likely to extract a rent from you as "ransom" for properly closing your
channel
> (or in other words, in a contested close the value of the closing
transaction is
> larger than usual).

Not sure this point holds; independently of which Taproot/MAST mechanism is
deployed, any time-sensitive transaction will likely leak its "contestedness"
through the setting of its nSequence/nLockTime fields. E.g., for LN, justice
txs are not encumbered by a CSV delay, which distinguishes them from a
non-revoked spend. And when you're relaying HTLCs and need to close the
channel unilaterally to prevent different settlements on the
incoming/outgoing links, the broadcast HTLC-timeout tx has its nLockTime set.

Beyond LN, timelocks are a privacy leak and a miner-withholding vector for
any offchain protocol, but this problem is not tied to Taproot's design.
Confidential enforcement of them would be great, but that's another debate...

Antoine

On Sun, Feb 9, 2020 at 15:40, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Responding purely to one point as this may be sufficient to clear up
> lots of discussion:
>
> On 2/9/20 8:19 PM, Bryan Bishop via bitcoin-dev wrote:
> > Is Taproot just a probability assumption about the frequency and
> > likelihood of
> > the signature case over the script case? Is this a good assumption?  The
> BIP
> > only goes as far as to claim that the advantage is apparent if the
> outputs
> > *could be spent* as an N of N, but doesn't make representations about
> > how likely
> > that N of N case would be in practice compared to the script paths.
> Perhaps
> > among use cases, more than half of the ones we expect people to be doing
> > could be
> > spent as an N of N. But how frequently would that path get used?
> > Further, while
> > the *use cases* might skew toward things with N of N opt-out, we might
> > end up in
> > a power law case where it's the one case that doesn't use an N of N opt
> > out at
> > all (or at a de minimis level) that becomes very popular, thereby making
> > Taproot
> > more costly then beneficial.
> Its not just about the frequency and likelihood, no. If there is a
> clearly-provided optimization for this common case in the protocol, then
> it becomes further more likely that developers put in the additional
> effort required to make this possibility a reality. This has a very
> significant positive impact on user privacy, especially those who wish
> to utilize more advanced functionality in Bitcoin. Further, yes, it is
> anticipated that the N of N case is possible to take in the vast
> majority of deployed use-cases for advanced scripting systems, ensuring
> that it is maximally efficient to do so (and thereby encouraging
> developers to do so) is a key goal in this work.
>
> Matt