Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-03-25 Thread Antoine Riard
Hi Laolu,

Thanks for the proposal, quick feedback.

> It *is* still the case that _ultimately_ the two transactions to close the
> old segwit v0 funding output, and re-open the channel with a new segwit v1
> funding output are unavoidable. However this adapter commitment lets peers
> _defer_ these two transactions until closing time.

I think there is one downside to the adapter commitment: the uncertainty of
the fee overhead at closing time. Instead of closing your segwit v0 channel
_now_ with known fees, while your commitment is empty of time-sensitive
HTLCs, you're taking the risk of closing during a fee spike, due to a move
triggered by your counterparty, when you might have HTLCs at stake.

It might be more economically rational for a LN node operator to pay the
upgrade cost now if they wish to benefit from the taproot upgrade early,
especially if we expect block fees to increase long-term, or to wait for a
"normal" cooperative close.

So it's unclear to me what the economic gain of adapter commitments is?

> In the remainder of this mail, I'll describe an alternative
> approach that would allow upgrading nearly all channel/commitment related
> values (dust limit, max in flight, etc), which is inspired by the way the
> Raft consensus protocol handles configuration/member changes.

Long-term, I think we'll likely need a consensus protocol anyway for
multi-party constructions (channel factories/payment pools). AFAIU this
proposal doesn't aim to roll out a full-fledged consensus protocol *now*
though it could be wise to ensure what we're building slowly moves in this
direction. Less critical code to maintain across bitcoin
codebases/toolchains.

> The role of the signature is to prevent "spoofing" by one of the parties
> (authenticate the param change), and also it serves to convince a party
> that they actually sent a prior commitment propose update during the
> retransmission phase.

What's the purpose of data origin authentication if we assume only two
parties running over Noise_XK?

I think it's already a security property we have. Though if we think we're
going to reuse these dynamic upgrades for N counterparties communicating
through a coordinator, yes I think it's useful.

> In the past, when ideas like this were brought up, some were concerned
> that it wouldn't really be possible to do this type of updates while
> existing HTLCs were in flight (hence some of the ideas to clear out the
> commitment beforehand).

The dynamic upgrade might serve in an emergency context where we don't have
the leisure to wait for the settlement of the pending HTLCs. Their timing
might be beyond the coordination of the link counterparties. Thus, we have
to allow upgrades of non-empty commitments (and if there are undesirable
interferences between new commitment types and the HTLCs/PTLCs present,
deal with them case-by-case).

Antoine

On Thu, Mar 24, 2022 at 18:53, Olaoluwa Osuntokun wrote:

> Hi y'all,
>
> ## Dynamic Commitments Retrospective
>
> Two years-ish ago I made a mailing list post on some ideas re dynamic
> commitments [1], and how the concept can be used to allow us to upgrade
> channel types on the fly, and also remove pesky hard coded limits like the
> 483 HTLC in-flight limit that's present today. Back then my main target was
> upgrading all the existing channels over to the anchor output commitment
> variant, so the core internal routing network would be more resilient in a
> persistent high fee environment (which hasn't really happened over the past
> 2 years for various reasons tbh). Fast forward to today, and with taproot
> now active on mainnet, and some initial design work/sketches for
> taproot-native channels underway, I figure it would be good to bump this
> concept as it gives us a way to upgrade all 80k+ public channels to taproot
> without any on chain transactions.
>
> ## Updating Across Witness Versions w/ Adapter Commitments
>
> In my original mail, I incorrectly concluded that the dynamic commitments
> concept would only really work within the confines of a "static" multi-sig
> output, meaning that it couldn't be used to help channels upgrade to future
> segwit witness versions.  Thankfully this reply [2] by ZmnSCPxj, outlined a
> way to achieve this in practice. At a high level he proposes an "adaptor
> commitment" (similar to the kickoff transaction in eltoo/duplex), which is
> basically an upgrade transaction that spends one witness version type, and
> produces an output with the next (upgraded) type. In the context of
> converting from segwit v0 to v1 (taproot), two peers would collaboratively
> create a new adapter commitment that spends the old v0 multi-sig output,
> and
> produces a _new_ v1 multi-sig output. The new commitment transaction would
> then be anchored using this new output.
>
> Here's a rough sequence diagram of the before and after state to better
> convey the concept:
>
>   * Before: fundingOutputV0 -> commitmentTransaction
>
>   * After 

Re: [Lightning-dev] Interesting thing about Offered HTLCs

2022-03-07 Thread Antoine Riard
Hi Eugene,

> Since the remote party gives them a signature, after the timeout, the
> offering party can claim with the remote's signature + preimage, but can
> only spend with the HTLC-timeout transaction because of SIGHASH_ALL.

I've not exercised the witness against our test framework, though the
description sounds correct to me.

The offering counterparty spends the offered HTLC output with an
HTLC-timeout transaction where the witness is <remotehtlcsig>
<payment_preimage>. SIGHASH_ALL does not commit to the spent Script branch
intended to be used. As you raised, it doesn't exempt the offering
counterparty from respecting the CLTV delay, and as such the offered HTLC
timespan cannot be shortened. The implication I can think of is that, in a
competing HTLC race once the absolute timelock has expired, the offering
counterparty is able to compete against the receiving one with a more
feerate-efficient witness. However, from the receiving counterparty's
safety viewpoint, if you're already in such a contest, it means the
HTLC-claim on the inbound HTLC output of your own local commitment
transaction has been ineffective, and your fee-bumping strategy is to blame.

If we think the issue is relevant, I believe splitting the Script branches
into two tapleaves, with the bip342 signature digest committing to the
tapleaf_hash, solves it.

Antoine

On Mon, Mar 7, 2022 at 15:27, Eugene Siegel wrote:

> I'm not sure if this is known, but I'm pretty sure it's benign and so I
> thought I'd share since I found it interesting and maybe someone else will
> too. I'm not sure if this is already known either.
>
>
> https://github.com/lightning/bolts/blob/master/03-transactions.md#offered-htlc-outputs
> Offered HTLCs have three claim paths: the revocation case, the offerer
> claiming through the HTLC-timeout transaction, and the receiver claiming
> via their sig + preimage. The offering party can claim via the HTLC-timeout
> case on their commitment transaction with their signature and the remote's
> signature (SIGHASH_ALL) after the cltv_expiry timeout. Since the remote
> party gives them a signature, after the timeout, the offering party can
> claim with the remote's signature + preimage, but can only spend with the
> HTLC-timeout transaction because of SIGHASH_ALL. This assumes that the
> remote party doesn't claim it first. I can't think of any cases where the
> offering party would know the preimage AND want to force close, so that's
> why I think it's benign. It does make the witness smaller. The same trick
> isn't possible with the Received HTLC's due to OP_CHECKLOCKTIMEVERIFY.
>
> Eugene (Crypt-iQ on github)


Re: [Lightning-dev] Full Disclosure: CVE-2021-41591/ CVE-2021-41592 / CVE-2021-41593 "Dust HTLC Exposure Considered Harmful"

2021-10-04 Thread Antoine Riard
I've been informed by Mitre of the correct CVE assignments:
* c-lightning: CVE-2021-41592
* lnd: CVE-2021-41593

Not the assignment disclosed in the initial mail.

On Mon, Oct 4, 2021 at 11:09, Antoine Riard wrote:

> Hi,
>
> I'm writing a report to disclose specification-level vulnerabilities
> affecting the Lightning implementations.
>
> The vulnerabilities are expected to be patched in:
> * Eclair: v0.6.2+ (CVE-2021-41591)
> * LND: v0.13.3+ (CVE-2021-41592)
> * LDK: v0.0.102 (not released as production software yet)
>
> The vulnerabilities are also affecting c-lightning (CVE-2021-41593).
>
> Those vulnerabilities can be exploited in a wide range of attacks, going
> from fee blackmailing of node operators, burning liquidity of your
> competing LSPs, or even stealing your counterparty's channel balance if
> you have mining capabilities. Exercising the vulnerability revealed that
> a majority of the channel funds can be lost.
>
> Credit to Eugene Siegel (Crypt-iQ) for reporting the trimmed-to-dust
> exploitation and multiple insights about attacks.
>
> Thanks to Bastien Teinturier and Matt Corallo for numerous contributions
> about mitigations development.
>
> # Problem
>
> The current BOLT specification only requires Alice's `dust_limit_satoshis`
> (applied on Alice's commitment) to be under Alice's
> `channel_reserve_satoshis` (applied on Bob). As those 2 parameters are
> selectable by Alice, she can inflate the dust limit until reaching the
> implementation-defined max value (e.g LND: 20% of chan capacity, LDK: 100%
> of chan capacity).
>
> Any in-flight incoming HTLC under Alice's dust limit will be converted as
> miner fees on Alice's commitment. This HTLC is deducted from Bob's balance
> and as such they're still owned by Bob, until resolution (i.e a RAA
> removing the HTLC from Alice's commitment). This limitation only applies
> per-HTLC. No implementation enforces a limit on the sum of in-flight HTLCs
> burned as fees. Therefore, Alice is free to inflict a substantial loss to
> Bob funds by publishing her commitment on-chain.
>
> In-flight outgoing HTLCs are also committed as fees on Bob's commitment if
> they're under Bob's threshold. Alice can also exploit this angle by
> circularly routing HTLCs until reaching Bob's
> `max_htlc_value_in_flight_msat`. Alice withholds HTLC resolution until Bob
> goes on-chain to time out an offered HTLC or claim an accepted HTLC.
>
> Dust HTLC processing can be also exploited at `update_fee` reception.
>
> As the BOLT3's fees computation encompasses the negotiated feerate from
> `update_fee` for the 2nd-stage HTLC fees to decide if the HTLC must be
> trimmed, the amount of balance at risk is a function of current mempool
> feerates.
>
> The maximum of funds at risk on a counterparty commitment is:
>
> counterparty's `max_accepted_htlcs` * (`htlc_success_tx_kw` * opener's
> `feerate_per_kw` + counterparty's `dust_limit_satoshis`) + holder's
> `max_accepted_htlcs` * (`htlc_timeout_tx_kw` * opener's `feerate_per_kw` +
> counterparty's `dust_limit_satoshis`)
>
> If the opener is also the attacker, the negotiated feerate can be
> manipulated beyond the "honest" mempool feerates, only upper-bounded by an
> implementation-defined value (before fixes, LDK: 2 * the high feerate of
> our fee estimator). If the opener is the victim, the negotiated feerate is
> still a safety concern in case of spontaneous mempool spikes.
>
> Note, `anchors_zero_htlc_fee` channels are not affected by the feerate
> inflation as the trimmed-to-dust fee computation mechanism for 2nd-stage
> HTLC is removed. They're still at risk of the sum of the HTLCs under the
> dust limit being maliciously burned.
>
> # Solution
>
> A first mitigation is to verify the counterparty's announced
> `dust_limit_satoshis` at channel opening (`open_channel`/`accept_channel`)
> reception and reject if it's estimated too large (see #894)
>
> For LDK, we choose the value of 660 satoshis as it's beyond the highest
> dust threshold enforced by Bitcoin Core (p2pkh: 546) with a margin of
> safety. Propagation of Lightning time-sensitive transactions shouldn't be
> affected.
>
> A second mitigation is to define a new configurable limit
> `max_dust_htlc_exposure` and apply this one at incoming and outgoing of
> HTLC.
>
> For LDK, we choose the value of 5 000 000 milli-satoshis as we gauged this
> value as a substantial loss for our class of users. Setting this too low
> may prevent the sending or receipt of low-value HTLCs on high-traffic
> nodes. A node operator should fine-tune this value in function of what
> qualifies as an acceptable loss.
>
> We would like to ensure that the node isn't suddenly exposed to
> significantly more trimme

Re: [Lightning-dev] Full Disclosure: CVE-2021-41591/ CVE-2021-41592 / CVE-2021-41593 "Dust HTLC Exposure Considered Harmful"

2021-10-04 Thread Antoine Riard
> * C-lightning v0.10.2 (CVE-2021-41593)

Thanks I was unsure about the exact version number. I'll update the CVE
quickly.

On Mon, Oct 4, 2021 at 14:16, lisa neigut wrote:

> FYI the next version of c-lightning will contain the proposed
> `max_dust_htlc_exposure_msat` as outlined in #919
> <https://github.com/lightningnetwork/lightning-rfc/pull/919/files>; the
> given expected vulnerabilities patch table should have reflected this.
>
> > The vulnerabilities are expected to be patched in:
> > * Eclair: v0.6.2+ (CVE-2021-41591)
> > * LND: v0.13.3+ (CVE-2021-41592)
> > * LDK: v0.0.102 (not released as production software yet)
>
> * C-lightning v0.10.2 (CVE-2021-41593)
>
>
> Lisa
>
> On Mon, Oct 4, 2021 at 10:09 AM Antoine Riard 
> wrote:
>
>> Hi,
>>
>> I'm writing a report to disclose specification-level vulnerabilities
>> affecting the Lightning implementations.
>>
>> The vulnerabilities are expected to be patched in:
>> * Eclair: v0.6.2+ (CVE-2021-41591)
>> * LND: v0.13.3+ (CVE-2021-41592)
>> * LDK: v0.0.102 (not released as production software yet)
>>
>> The vulnerabilities are also affecting c-lightning (CVE-2021-41593).
>>
>> Those vulnerabilities can be exploited in a wide range of attacks, going
>> from fee blackmailing of node operators, burning liquidity of your
>> competing LSPs, or even stealing your counterparty's channel balance if
>> you have mining capabilities. Exercising the vulnerability revealed that
>> a majority of the channel funds can be lost.
>>
>> Credit to Eugene Siegel (Crypt-iQ) for reporting the trimmed-to-dust
>> exploitation and multiple insights about attacks.
>>
>> Thanks to Bastien Teinturier and Matt Corallo for numerous contributions
>> about mitigations development.
>>
>> # Problem
>>
>> The current BOLT specification only requires Alice's
>> `dust_limit_satoshis` (applied on Alice's commitment) to be under Alice's
>> `channel_reserve_satoshis` (applied on Bob). As those 2 parameters are
>> selectable by Alice, she can inflate the dust limit until reaching the
>> implementation-defined max value (e.g LND: 20% of chan capacity, LDK: 100%
>> of chan capacity).
>>
>> Any in-flight incoming HTLC under Alice's dust limit will be converted as
>> miner fees on Alice's commitment. This HTLC is deducted from Bob's balance
>> and as such they're still owned by Bob, until resolution (i.e a RAA
>> removing the HTLC from Alice's commitment). This limitation only applies
>> per-HTLC. No implementation enforces a limit on the sum of in-flight HTLCs
>> burned as fees. Therefore, Alice is free to inflict a substantial loss to
>> Bob funds by publishing her commitment on-chain.
>>
>> In-flight outgoing HTLCs are also committed as fees on Bob's commitment if
>> they're under Bob's threshold. Alice can also exploit this angle by
>> circularly routing HTLCs until reaching Bob's
>> `max_htlc_value_in_flight_msat`. Alice withholds HTLC resolution until Bob
>> goes on-chain to time out an offered HTLC or claim an accepted HTLC.
>>
>> Dust HTLC processing can be also exploited at `update_fee` reception.
>>
>> As the BOLT3's fees computation encompasses the negotiated feerate from
>> `update_fee` for the 2nd-stage HTLC fees to decide if the HTLC must be
>> trimmed, the amount of balance at risk is a function of current mempool
>> feerates.
>>
>> The maximum of funds at risk on a counterparty commitment is:
>>
>> counterparty's `max_accepted_htlcs` * (`htlc_success_tx_kw` * opener's
>> `feerate_per_kw` + counterparty's `dust_limit_satoshis`) + holder's
>> `max_accepted_htlcs` * (`htlc_timeout_tx_kw` * opener's `feerate_per_kw` +
>> counterparty's `dust_limit_satoshis`)
>>
>> If the opener is also the attacker, the negotiated feerate can be
>> manipulated beyond the "honest" mempool feerates, only upper-bounded by an
>> implementation-defined value (before fixes, LDK: 2 * the high feerate of
>> our fee estimator). If the opener is the victim, the negotiated feerate is
>> still a safety concern in case of spontaneous mempool spikes.
>>
>> Note, `anchors_zero_htlc_fee` channels are not affected by the feerate
>> inflation as the trimmed-to-dust fee computation mechanism for 2nd-stage
>> HTLC is removed. They're still at risk of the sum of the HTLCs under the
>> dust limit being maliciously burned.
>>
>> # Solution
>>
>> A first mitigation is to verify the counterparty's announced
>> `dust_limit_satoshis` at channel opening (`open_channel`/`accept_channel`)
>> receptio

Re: [Lightning-dev] Full Disclosure: CVE-2021-41591/ CVE-2021-41592 / CVE-2021-41593 "Dust HTLC Exposure Considered Harmful"

2021-10-04 Thread Antoine Riard
> In other words, simply not secured.

How do you define Bitcoin base-layer security? How strong are the
assumptions we're relying on at the base layer?
Not easy answers :/

> L2s shouldn't build on flawed assumptions.

I'm waiting for your proposal to scale Bitcoin payments relying on pure
consensus assumptions :)

> No thanks. Not sure that would even help (since policies can always be
> set to a higher dust limit than any consensus rule)

Sure. Policies can always be more restrictive. One of them could be to not
relay transactions at all. If widely deployed, such a policy would make the
network quite unusable.

More seriously, when we consider this policy discussion, I think we should
keep in mind the consequences of adopting a given policy over another one.
As long as policies are economically compatible, they should be followed by
an economically rational node operator.
I think we're already making that kind of social or economic assumption
about user behavior w.r.t. full-node design. Blocks and transactions are
relayed for "free" today; no satoshis are received in exchange.


On Mon, Oct 4, 2021 at 12:28, Luke Dashjr wrote:

> On Monday 04 October 2021 16:14:20 Antoine Riard wrote:
> > > The "dust limit" is arbitrarily decided by each node, and cannot be
> > > relied upon for security at all. Expecting it to be a given default
> > > value is in itself a security vulnerability
> >
> > Reality is that an increasing number of funds are secured by assumptions
> > around mempool behavior.
>
> In other words, simply not secured.
>
> > And sadly that's going to increase with Lightning growth and deployment
> > of other L2s.
>
> L2s shouldn't build on flawed assumptions.
>
> > Maybe we could dry-up some policy rules in consensus like the dust limit
> > one :)
>
> No thanks. Not sure that would even help (since policies can always be
> set to a higher dust limit than any consensus rule)
>


Re: [Lightning-dev] Full Disclosure: CVE-2021-41591/ CVE-2021-41592 / CVE-2021-41593 "Dust HTLC Exposure Considered Harmful"

2021-10-04 Thread Antoine Riard
> The "dust limit" is arbitrarily decided by each node, and cannot be relied
> upon for security at all. Expecting it to be a given default value is in
> itself a security vulnerability

The reality is that an increasing amount of funds is secured by assumptions
about mempool behavior.
And sadly that's going to increase with Lightning's growth and the
deployment of other L2s.

Maybe we could firm up some policy rules, like the dust limit one, into
consensus :)


On Mon, Oct 4, 2021 at 11:57, Luke Dashjr wrote:

> On Monday 04 October 2021 15:09:28 Antoine Riard wrote:
> > Still during August 2021, the Bitcoin Core dust limit was actively
> > discussed on the mailing list. Changes of this dust limit would have
> > affected the ongoing development of the mitigations.
>
> The "dust limit" is arbitrarily decided by each node, and cannot be relied
> upon for security at all. Expecting it to be a given default value is in
> itself a security vulnerability.
>
>
> P.S. It'd be nice if someone familiar with these could fill in
> https://en.bitcoin.it/wiki/CVEs
>


[Lightning-dev] Full Disclosure: CVE-2021-41591/ CVE-2021-41592 / CVE-2021-41593 "Dust HTLC Exposure Considered Harmful"

2021-10-04 Thread Antoine Riard
Hi,

I'm writing a report to disclose specification-level vulnerabilities
affecting the Lightning implementations.

The vulnerabilities are expected to be patched in:
* Eclair: v0.6.2+ (CVE-2021-41591)
* LND: v0.13.3+ (CVE-2021-41592)
* LDK: v0.0.102 (not released as production software yet)

The vulnerabilities are also affecting c-lightning (CVE-2021-41593).

Those vulnerabilities can be exploited in a wide range of attacks, going
from fee blackmailing of node operators, to burning the liquidity of your
competing LSPs, or even stealing your counterparty's channel balance if you
have mining capabilities. Exercising the vulnerability revealed that a
majority of the channel funds can be lost.

Credit to Eugene Siegel (Crypt-iQ) for reporting the trimmed-to-dust
exploitation and multiple insights about attacks.

Thanks to Bastien Teinturier and Matt Corallo for numerous contributions
about mitigations development.

# Problem

The current BOLT specification only requires Alice's `dust_limit_satoshis`
(applied on Alice's commitment) to be under Alice's
`channel_reserve_satoshis` (applied to Bob). As those 2 parameters are
selectable by Alice, she can inflate the dust limit until reaching the
implementation-defined max value (e.g. LND: 20% of channel capacity, LDK:
100% of channel capacity).

Any in-flight incoming HTLC under Alice's dust limit will be converted to
miner fees on Alice's commitment. Such an HTLC is deducted from Bob's
balance and as such is still owned by Bob until resolution (i.e. a RAA
removing the HTLC from Alice's commitment). This limit only applies
per-HTLC: no implementation enforces a limit on the sum of in-flight HTLCs
burned as fees. Therefore, Alice is free to inflict a substantial loss on
Bob's funds by publishing her commitment on-chain.

In-flight outgoing HTLCs are also committed as fees on Bob's commitment if
they're under Bob's threshold. Alice can also exploit this angle by
circularly routing HTLCs until reaching Bob's
`max_htlc_value_in_flight_msat`. Alice withholds HTLC resolution until Bob
goes on-chain to time out an offered HTLC or claim an accepted HTLC.

Dust HTLC processing can also be exploited at `update_fee` reception.

As BOLT3's fee computation uses the feerate negotiated via `update_fee` for
the 2nd-stage HTLC fees to decide if an HTLC must be trimmed, the amount of
balance at risk is a function of current mempool feerates.

The maximum of funds at risk on a counterparty commitment is:

counterparty's `max_accepted_htlcs` * (`htlc_success_tx_kw` * opener's
`feerate_per_kw` + counterparty's `dust_limit_satoshis`) + holder's
`max_accepted_htlcs` * (`htlc_timeout_tx_kw` * opener's `feerate_per_kw` +
counterparty's `dust_limit_satoshis`)
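
As an illustration, a minimal sketch of that formula in Python, assuming
`htlc_success_tx_kw`/`htlc_timeout_tx_kw` correspond to the BOLT3 weights
of 703 and 663 weight units, and using made-up example parameters:

    # Worked example of the funds-at-risk formula above. Weights and example
    # parameters are assumptions for illustration, not normative values.
    HTLC_SUCCESS_WEIGHT = 703  # BOLT3 HTLC-success transaction weight
    HTLC_TIMEOUT_WEIGHT = 663  # BOLT3 HTLC-timeout transaction weight

    def max_funds_at_risk_sat(counterparty_max_accepted_htlcs,
                              holder_max_accepted_htlcs,
                              opener_feerate_per_kw,
                              counterparty_dust_limit_sat):
        success_fee = opener_feerate_per_kw * HTLC_SUCCESS_WEIGHT // 1000
        timeout_fee = opener_feerate_per_kw * HTLC_TIMEOUT_WEIGHT // 1000
        return (counterparty_max_accepted_htlcs
                * (success_fee + counterparty_dust_limit_sat)
                + holder_max_accepted_htlcs
                * (timeout_fee + counterparty_dust_limit_sat))

    # e.g. 483 HTLCs in each direction, 5000 sat/kWU, 5000 sat dust limit:
    print(max_funds_at_risk_sat(483, 483, 5000, 5000))  # 8128890 sat at risk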

If the opener is also the attacker, the negotiated feerate can be
manipulated beyond the "honest" mempool feerates, only upper-bounded by an
implementation-defined value (before fixes, LDK: 2 * the high feerate of our
fee estimator). If the opener is the victim, the negotiated feerate is
still a safety concern in case of spontaneous mempool spikes.

Note, `anchors_zero_htlc_fee` channels are not affected by the feerate
inflation, as the trimmed-to-dust fee computation for 2nd-stage HTLCs is
removed. They're still at risk of the sum of the HTLCs under the dust limit
being maliciously burned.

# Solution

A first mitigation is to verify the counterparty's announced
`dust_limit_satoshis` at channel opening (`open_channel`/`accept_channel`)
reception and reject it if it's estimated too large (see #894).

For LDK, we chose the value of 660 satoshis as it's beyond the highest dust
threshold enforced by Bitcoin Core (p2pkh: 546) with a margin of safety.
Propagation of Lightning time-sensitive transactions shouldn't be affected.
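
A minimal sketch of this first mitigation (the constant name is made up;
660 satoshis is the LDK value mentioned above):

    # Reject channels whose counterparty announces an oversized dust limit
    # at `open_channel`/`accept_channel` reception. Name is illustrative.
    MAX_ACCEPTABLE_DUST_LIMIT_SAT = 660

    def check_counterparty_dust_limit(dust_limit_satoshis):
        if dust_limit_satoshis > MAX_ACCEPTABLE_DUST_LIMIT_SAT:
            raise ValueError("counterparty dust_limit_satoshis too large, "
                             "rejecting channel")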

A second mitigation is to define a new configurable limit,
`max_dust_htlc_exposure`, and apply it to both incoming and outgoing HTLCs.

For LDK, we chose the value of 5,000,000 milli-satoshis, as we gauged this
value to be a substantial loss for our class of users. Setting this too low
may prevent the sending or receipt of low-value HTLCs on high-traffic
nodes. A node operator should fine-tune this value according to what
qualifies as an acceptable loss.

We would like to ensure that the node isn't suddenly exposed to
significantly more trimmed balance if the feerate increases when we have
several HTLCs pending which are near the dust limit.

To achieve this goal, we introduce a new `dust_buffer_feerate` defined as
the maximum of either 2530 sats per kWU or 125% of the current
`feerate_per_kw` (implementation-defined values).

Then, upon an incoming HTLC, if the HTLC's `amount_msat` is below the
counterparty's `dust_limit_satoshis` plus the HTLC-timeout fee at the
`dust_buffer_feerate`, and if `amount_msat` plus the
`dust_balance_on_counterparty_tx` exceeds `max_dust_htlc_exposure`, then
the HTLC should be failed once it's committed.
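
A minimal sketch of that incoming-HTLC check (helper names are made up; the
outgoing side mirrors it with the HTLC-success fee instead):

    # Sketch of the incoming-HTLC dust exposure check described above.
    # Names and structure are illustrative, not any implementation's API.
    HTLC_TIMEOUT_WEIGHT = 663  # BOLT3 HTLC-timeout transaction weight

    def dust_buffer_feerate(feerate_per_kw):
        # max(2530 sat/kWU, 125% of the current feerate), per the text above
        return max(2530, feerate_per_kw * 125 // 100)

    def should_fail_incoming_htlc(amount_msat,
                                  counterparty_dust_limit_sat,
                                  feerate_per_kw,
                                  dust_balance_on_counterparty_tx_msat,
                                  max_dust_htlc_exposure_msat):
        buffer_rate = dust_buffer_feerate(feerate_per_kw)
        timeout_fee_sat = buffer_rate * HTLC_TIMEOUT_WEIGHT // 1000
        dust_threshold_msat = (counterparty_dust_limit_sat
                               + timeout_fee_sat) * 1000
        if amount_msat >= dust_threshold_msat:
            return False  # not dust-exposed, nothing to check
        exposure_msat = dust_balance_on_counterparty_tx_msat + amount_msat
        return exposure_msat > max_dust_htlc_exposure_msat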

Upon an outgoing HTLC, if the HTLC's `amount_msat` is inferior to the

Re: [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Antoine Riard
> As developers, we have no control over prevailing feerates, so this is a
> problem LN needs to deal with regardless of Bitcoin Core's dust limit.

Right; as of today, we trim to dust any commitment output whose value is
below the transaction owner's `dust_limit_satoshis` plus the HTLC-claim
(either success or timeout) fee at the agreed-upon feerate. So the feerate
is the most significant variable in defining what is an LN *uneconomical
output*.
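
For readers less familiar with the BOLT3 rule being referenced, a small
sketch of the trim-to-dust test (names are assumed; weights from BOLT3):

    # An HTLC output is trimmed (burned to fees) when its value can't pay
    # for its own 2nd-stage claim at the commitment feerate.
    HTLC_TIMEOUT_WEIGHT = 663  # claims an offered HTLC back after expiry
    HTLC_SUCCESS_WEIGHT = 703  # claims a received HTLC with the preimage

    def is_trimmed(htlc_amount_sat, offered, dust_limit_sat, feerate_per_kw):
        claim_weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
        claim_fee = feerate_per_kw * claim_weight // 1000
        return htlc_amount_sat < dust_limit_sat + claim_fee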

IMO this approach presents annoying limitations. First, you still need to
come to an agreement among channel operators on the mempool feerate. Such
an agreement might be hard to find: on one side you would like to leave
your counterparty free to pick a feerate they gauge as efficient for the
confirmation of their transactions, but at the same time not so high that
it burns to fees the low-value HTLCs that *your* fee estimator judged as
sane to claim.

Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime
of the HTLC. An HTLC might be considered dust at block 100, when mempools
are full, though its expiration only occurs at block 200, when mempools are
empty and the HTLC is again economical to claim. I think this inaccuracy
will only get worse with a wider deployment of long-lived routed packets
over LN, such as DLCs or hodl invoices.

All this to say, if for those reasons LN devs move feerate negotiation out
of the trim-to-dust definition in favor of a static feerate, it would
likely put higher pressure on full-node operators, as the number of
uneconomical outputs might increase.

(From an LN viewpoint, I would say we're trying to solve a price discovery
issue, namely the cost of writing to the UTXO set, in a distributed system,
where any deviation from the "honest" price means you trust your LN
counterparty more.)

> They could also use trustless probabilistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098

Thanks for bringing up probabilistic payments; yes, that's a worthy
alternative approach for low-value payments to keep in mind.

On Tue, Aug 10, 2021 at 02:15, David A. Harding wrote:

> On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> > I'm pretty conservative about increasing the standard dust limit in any
> > way. This would convert a higher percentage of LN channels capacity into
> > dust, which is coming with a lowering of funds safety [0].
>
> I think that reasoning is incomplete.  There are two related things here:
>
> - **Uneconomical outputs:** outputs that would cost more to spend than
>   the value they contain.
>
> - **Dust limit:** an output amount below which Bitcoin Core (and other
>   nodes) will not relay the transaction containing that output.
>
> Although raising the dust limit can have the effect you describe,
> increases in the minimum necessary feerate to get a transaction
> confirmed in an appropriate amount of time also "converts a higher
> percentage of LN channel capacity into dust".  As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.
>
> (Related to your linked thread, that seems to be about the risk of
> "burning funds" by paying them to a miner who may be a party to the
> attack.  There's plenty of other alternative ways to burn funds that can
> change the risk profile.)
>
> > the standard dust limit [...] introduces a trust vector
>
> My point above is that any trust vector is introduced not by the dust
> limit but by the economics of outputs being worth less than they cost to
> spend.
>
> > LN node operators might be willingly to compensate this "dust" trust
> vector
> > by relying on side-trust model
>
> They could also use trustless probabalistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
>
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178
>
> (Probabalistic payments were discussed in the general context of Bitcoin
> well before LN was proposed, and Elements even includes an opcode for
> creating them.)
>
> > smarter engineering such as utreexo on the base-layer side
>
> Utreexo doesn't solve this problem.  Many nodes (such as miners) will
> still want to store the full UTXO set and access it quickly,  Utreexo
> proofs will grow in size with UTXO set size (though, at best, only
> log(n)), so full node operators will still not want their bandwid

Re: [Lightning-dev] Removing the Dust Limit

2021-08-09 Thread Antoine Riard
I'm pretty conservative about increasing the standard dust limit in any
way. This would convert a higher percentage of LN channel capacity into
dust, which comes with a lowering of funds safety [0]. Of course, we can
adjust the LN security model around dust handling to mitigate the safety
risk in adversarial settings, but ultimately the standard dust limit
creates a "hard" bound, and as such it introduces a trust vector in the
reliability of your peer not to go onchain with a commitment heavily loaded
with dust HTLCs you own.

LN node operators might be willing to compensate for this "dust" trust
vector by relying on a side-trust model, such as a PKI to authenticate
their peers or API tokens (LSATs, PoW tokens), probably not without
consequences for the "openness" of the LN topology...

Further, I think any authoritative setting of the dust limit presents the
risk of becoming ill-adjusted w.r.t. market realities after a few months or
years, and would need periodic reevaluation. Those reevaluations, if not
automated, would become a vector of endless drama and bikeshedding as the
L2 ecosystems grow bigger...

Note, this would also constrain the design space of newer fee schemes, such
as negotiated-with-mining-pool or discounted consolidation during
low-feerate periods, deployed by such producers of low-value outputs.
Moreover, as an operational point, if we proceed with such an increase on
the base layer, e.g. to 20 sat/vb, we're going to severely damage the
propagation of any LN transaction where a commitment transaction is built
with outputs below 20 sat/vb. Of course, Core's policy deployment on the
base layer is gradual, but we should first give a time window for the LN
ecosystem to upgrade, and as of today we're still devoid of a mechanism to
do it cleanly and asynchronously (e.g. dynamic upgrade or the quiescence
protocol [1]).

That said, as raised by other commentators, I don't deny we have a
long-term tension between L2 nodes and full-node operators about UTXO set
growth, but for now I would rather solve this with smarter engineering,
such as utreexo on the base-layer side, or multi-party shared-utxo or
compressed colored coins/authentication smart contracts (e.g.
opentimestamp's merkle tree in OP_RETURN) on the upper layers, rather than
altering the current equilibrium.

I think the status quo is good enough for now, and I believe we would be
better off learning from another development cycle before tweaking the dust
limit in either direction.

Antoine

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002714.html
[1] https://github.com/lightningnetwork/lightning-rfc/pull/869

On Sun, Aug 8, 2021 at 14:53, Jeremy wrote:

> We should remove the dust limit from Bitcoin. Five reasons:
>
> 1) it's not our business what outputs people want to create
> 2) dust outputs can be used in various authentication/delegation smart
> contracts
> 3) dust sized htlcs in lightning (
> https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
> force channels to operate in a semi-trusted mode which has implications
> (AFAIU) for the regulatory classification of channels in various
> jurisdictions; agnostic treatment of fund transfers would simplify this
> (like getting a 0.01 cent dividend check in the mail)
> 4) thinly divisible colored coin protocols might make use of sats as value
> markers for transactions.
> 5) should we ever do confidential transactions we can't prevent it without
> compromising privacy / allowed transfers
>
> The main reasons I'm aware of to not allow dust creation are:
>
> 1) dust is spam
> 2) dust fingerprinting attacks
>
> 1 is (IMO) not valid given the 5 reasons above, and 2 is preventable by
> well behaved wallets to not redeem outputs that cost more in fees than they
> are worth.
>
> cheers,
>
> jeremy
>
> --
> @JeremyRubin 
> 


Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-02 Thread Antoine Riard
Hi Ryan,

Thanks for starting this discussion. I agree it's a good time for the
Lightning development community to start this introspection on its own
specification process :)

First and foremost, maybe we could take a minute to celebrate the success
of the BOLT process and the road traveled so far? What was a fuzzy heap of
ideas on a whiteboard a few years ago has bloomed into a living, pulsating
distributed ecosystem of thousands of nodes all around the world. If the
bet was to deliver fast, instant, cheap, reasonably scalable, reasonably
confidential Bitcoin payments, it has been won, and that's really cool.

Retrospectively, it was a foolhardy bet for a wide diversity of reasons.
One could think of opinionated early design choices deeply affecting
protocol safety and efficiency, whose ultimate validity was still a
function of fluky base-layer evolutions [0]. Another could consider the
communication challenges of softly aligning development teams on the common
effort of designing and deploying from scratch a cryptographic protocol as
sophisticated as Lightning. Not an easy task when you're mindful of the
timezone spread, the diversity of software engineering backgrounds and the
differing schedules of priorities.

So kudos to everyone who has played a part in the Lightning dev process:
the OGs who started the tale, the rookies who jumped on the wagon along the
way, and today's newcomers showing up with new seeds to nurture the
ecosystem :)

Now, I would say we more or less all agree that the current BOLT process
has reached its limits, both from private conversations across the teams
and from frustrations expressed during the IRC meetings in the past months.
As a simple data point, the only meaningful spec object we merged in the
last 18 months is anchor outputs; it consumed a lot of review and
engineering bandwidth from contributors, took a few refinements to finalize
(`option_anchors_zero_fee_htlc_tx`), and I believe every implementation is
still scratching its head over a robust, default fee-bumping strategy.

So if we agree about the BOLT process limitations, the next question to
raise is how to improve it. There, as expressed in other replies, I'm of
the opinion we're not going to be able to do that much, as ultimately we're
upper-bounded by a fast-pacing, always-growing, permissionless ecosystem of
applications and experiments moving forward bazaar-style, and lower-bounded
by a decentralized process across teams allocating their engineering
resources with different priorities, or even exploring Lightning's massive
evolution stages in heterogeneous, synergetic directions.

Breeding another specification process on top of Lightning sounds like a
good way forward, though I believe it might be better to take time to
handle the disentanglement nicely. If we take the list of ideas which could
be part of such a process, one of them, dynamic commitments, could make a
lot of sense to be well designed and well supported by every
implementation. In case of emergency fixes to deploy safer channel types,
if you have to close all your channels with other implementations, at a
holistic scale it might clog the mempools and spike the feerate, harming
the safety of every other channel on the network. Yes, we might have safety
interdependencies between implementations :/

And it's also good to have thoughtful, well-defined specification bounds
when you're working on coordinated security disclosures, to know who has
implemented what and whom you should reach out to when something is broken.

Another orthogonal point to consider is the existence of higher-layer
protocol specifications such as the dlcspecs. Even if that ecosystem is
still in its bootstrap phase for now, there is already a discussion about
splitting between a "consensus" track and more optional features. I believe
some features discussed there, such as a negotiation layer for a premium
fee to compensate for the unilateral fee-bumping responsibility risk, could
belong to such a new bLIPs process?

So here is my thinking as a BOLT contributor: what is the common subset of
problems we want to keep tackling together in the coming years, what is the
remaining subset we're happy to see taken up by a higher-layer development
community, and how do we draw both the communication and software
interfaces in between?

Personally, I would be glad if we did not extend the scope of the current
BOLT coverage and instead focused more on fixing the known issues,
simplifying state machines, fixing oddities of channel policy announcements
[1], writing down best practices on fee-bumping strategies, agreeing on raw
mechanisms for channel type upgrades and feature discovery, and, if we want
to innovate, focusing on a well-done taproot integration, which should keep
us busy for a few years: among others, PTLC support, funding output taproot
support, composable taptrees for revocable outputs, ...

IMHO, if the BOLT process is officialized it will enter a more boring
phase, focused on safety/reliability/privacy 

Re: [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-24 Thread Antoine Riard
truggle even more with longer
> term roadmaps.
>
> I think it is important to discuss what order changes should be
> attempted but I agree with David that putting specific future version
> numbers on changes is speculative at best and misleading at worst. The
> record of previous predictions of what will be included in particular
> future versions is not strong :)
>
> > What was making sense when you had like ~20 Bitcoin dev with 90% of the
> technical knowledge doesn't scale when you have multiple second-layers
> specifications
>
> It is great that we have a larger set of contributors in the ecosystem
> today than back in say pre 2017. But today that set of contributors is
> spread widely across a number of different projects that didn't exist
> pre 2017. Changes to Core are (generally) likely to be implemented and
> reviewed by current Core contributors as Lightning implementation
> developers (generally) seem to have their hands full with their own
> implementations.
>
> I think we can get the balance right by making progress on this
> (important) discussion whilst also maintaining humility that we don't
> know exact timelines and that getting things merged into Core relies
> on a number of people who have varying levels of interest and
> understanding of L2 protocols.
>
> On Mon, Jun 21, 2021 at 9:13 AM Antoine Riard 
> wrote:
> >
> > Hi Dave,
> >
> > > That might work for current LN-penalty, but I'm not sure it works for
> > eltoo.
> >
> > Well, we have not settled yet on the eltoo design, but if we take the
> > latest proposal to date [0], signing the update transaction with
> > SIGHASH_ANYPREVOUT lets you attach non-interactively a single-party
> > controlled input at broadcast time. Provided the input amount is high
> > enough to bump the transaction feerate over network mempools, it should
> > allow the tx to propagate across network mempools and that way solve the
> > pre-signed feerate problem as defined in the post?
> >
> > >  If Bitcoin Core can rewrite the blind CPFP fee bump transaction
> > > to refer to any prevout, that implies anyone else can do the same.
> > > Miners who were aware of two or more states from an eltoo channel would
> > > be incentivized to rewrite to the oldest state, giving them fee revenue
> > > now and ensuring fee revenue in the future when a later state update is
> > > broadcast.
> >
> > Yep, you can add a per-participant key to lockdown the transaction and
> avoid any in-flight malleability ? I think this is discussed in the "A
> Stroll through Fee-Bumping Techniques" thread.
> >
> > > If the attacker using pinning is able to reuse their attack at no cost,
> > > they can re-pin the channel again and force the honest user to pay
> > > another anyprevout bounty to miners.
> >
> > This is also true with package-relay where your counterparty, with a
> better knowledge of network mempools, can always re-broadcast a CPFP-bumped
> malicious package ? Under this assumption, I think you should always be
> ready to bump our honest package.
> >
> > Further, for the clarity of the discussion, can you point to which
> pinning scenario you're thinking of or if it's new under
> SIGHASH_ANYPREVOUT, describe it ?
> >
> > > Repeat this a bunch of times and the honest user has now spent more on
> fees than their balance from the
> > closed channel.
> >
> > And sadly, as this concern also exists in case of a miner-harvesting
> > attack against LN nodes, a concern that Gleb and I expressed more than a
> > year ago in a public post [1], a good L2 client should always upper-bound
> > its fee-bumping reserve. I've a short though-unclear note on this notion
> > of a fee-bumping upper bound to warn other L2 engineers in "On Mempool
> > Funny Games against Multi-Party Funded Transactions"
> >
> > Please read so:
> >
> > "A L2 client, with only a view of its mempool at best, won't understand
> why
> >  the transaction doesn't confirm and if it's responsible for the
> >  fee-bumping, it might do multiple rounds of feerate increase through
> CPFP,
> >  in vain. As the fee-bumping algorithm is assumed to be known if the
> victim
> >  client is open source code, the attacker can predict when the
> fee-bumping
> >  logic reaches its upper bound."
> >
> > Though thanks for the recall! I should log dynamic-balances in RL's
> `ChannelMonitorUpdate` for our ongoing implementation of anchor, updating
> my TODO :p
> >
> > > Even if my analysis above is wrong, I would encourage you or Matt or
> > someone to write up this anyprevout idea in more deta

[Lightning-dev] On the recent softforks survey, forget to fulfill my answer!

2021-06-21 Thread Antoine Riard
Hi,

I was super glad to see the recent survey on potential softforks for the
near future of Bitcoin! I didn't have time to answer this one but will do
so in the future. I want to salute the grassroots involvement in bitcoin
protocol development; that's cool to see :)

Though softforks are what shines in the media and social networks, one
should not ignore that they represent the aggregation of thousands of hours
of sweat from contributors all across the ecosystem, with discussions
extending from public or private IRC channels to the mailing list, the
media, etc.

What makes softfork discussions especially hard is that no one is following
all those communication channels to collect the trail of information, and
as such it can be hard to reason about the Big Picture(tm). That's why
softforks take time, and we should somehow be prepared for them to take
even more time in the future...

That said, where I would like to draw the community's awareness is the
submerged part of the bitcoin protocol development iceberg. Softforks are
sexy, but there are far more areas of Bitcoin dev that would benefit from a
gentle boost by happy hands :p

For e.g., if you take Bitcoin Core, there are a few ongoing projects where
folks have a hard time moving forward, e.g. assumeutxo / mempool refactors /
addr-relay / rebroadcasting module / mutation testing / multiprocess /
wallet external signer / GUI maintenance / libbitcoin_kernel [0]

Those projects are starting to be softfork-sized engineering efforts in
themselves, and a lot of them might require more than pure "coding" skills,
such as specification, simulations, extensive code coverage, up-to-date
meeting documents. See what is currently done with the Core wiki [1].

All those projects are modifying critical areas of Bitcoin such as the
validation engine or the p2p stack and, AFAICT, they deserve more care.
Hopefully, by shining a light there, more folks are going to understand
them, we'll have more skilled reviewers, reducing the reliance on a few
segments of the codebase being understood only by a few recognized experts,
and ideally: "Many Eyes Make All Bugs Shallow" :)

That said, that's only the technical ground, and I believe the human layer
of Bitcoin dev might be the one where grassroots involvement is the most
fruitful.

I would say the Bitcoin dev stage has changed a bit over the last 18
months, especially w.r.t. a few factors: the arrival of massive development
funding, the sudden mediatisation of protocol developers, and the ongoing
geographical spreading, diversification and education of the pool of
contributors.

When I arrived on the stage a few years ago, funding was still a hard
question, even for well-known, long-term contributors, and only a few
actors were taking care of Bitcoin. Quite different from what we have seen
in the last months, with a plethora of new organisations entering the game
and benefiting from the generosity of the Bitcoin industry [2].

Things have moved so fast that sometimes one can wonder if there isn't a
bubble around Bitcoin dev? A few OGs might suggest we're back to 2017, with
ICO-like webpages pinning "developers-as-brands". In reality, we see new
grant announcements every month or week, but still the number of reviewers
on Core doesn't seem to increase? [3]

Hopefully, a lot of those new structures claiming to work for Bitcoin's
betterment will get out of their childhood phase and slowly mature into
something as sound as Chaincode or Square Crypto: small, friendly,
politics-free engineering teams with years-long stability, solving bitcoin
problems with a "forever" perspective mindset.

Though, as of today, you have the opposite with the grant model. Being
funded on the rationale that your peers "appreciate" your work is more
likely to generate implicit compliance at review time, where you should
instead be spotting their errors. The Bitcoin development process is highly
contrarian by nature, and constantly challenging your peers' assumptions
has been preserving software robustness.

Time will separate the wheat from the chaff, but how do we make things
better in the short term? I don't know; maybe those structures could be
exemplary and outsource their grant allocation decision frameworks? Or ask
them to publish the grant contracts under which contributors are engaging
themselves, to observe if the usual independence provisions are present [4].

In another direction, I believe the increasing mediatization of the Bitcoin
dev stage in the last months or so hasn't improved the current state of
affairs. We now see technical proposals, whose soundness has not been
thoroughly discussed in the traditional venues, being announced with big
pomp as some kind of "done deal", potentially sustaining the false belief
that they have already been blessed or approved by the rest of the
development community.

And honestly, it's quite easy to approach any Bitcoin media today once
you're a bit technical, and rely on lingo to create a perception of
competency towards your 

Re: [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-21 Thread Antoine Riard
 potential malicious mempool
partitions [2].

> Package relay is a clear improvement now, and one I expect to be
permanent for as long as we're using anything like the current protocol

Again, reading my post, I did point out that we might keep the "lower half"
of package-relay and deprecate only the higher part of it once we have more
feerate-efficient fee-bumping primitives available. If it sounds like too
much of a release engineering effort to synchronize at the scale of an
ecosystem, think about the ongoing deprecation of Tor v2.

Further, you expressed a far less assertive opinion during last Tuesday's
transaction-relay workshop, which a lot of folks attended, where you
pointed out that it might not be a good idea for L2s to make more
assumptions on non-normative behavior:

"harding> I do think we should be using miners profit incentive more for
stuff rather than trying to normalize mempool policy (which not entirely
possible), e.g. things like
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-April/002664.html
"

Arguing for package-relay "permanence" moves in the contrary direction, IMHO?

> I don't think it's appropriate to be creating timelines like this that
> depend on the work of a large number of contributors who I don't believe

Thanks Dave, this is your opinion and I respect it. I'll let every
participant of this mailing list form their own opinion, following their
private judgement. It might be based on a lot of different factors, e.g.
"trusting the experts", gathering diverse in-field authorities' opinions,
or reasoning from scratch based on raw, public facts.

Though might I ask which information sources you're founding your belief
on? I'm curious if you're aware of any contributors who feel entitled to be
consulted in a decentralized development process...

For the record, I consulted no one. Even within the technical circle, that
would have been a lot of open source project teams to reach out to: LND,
c-lightning, Eclair, coin-teleport, revault, sapio, btcsuite, bcoin,
libbitcoin, wasabi's coinjoin, samourai wallet's coinjoin, ...

I was lazy, I just shot a mail :p

W.r.t. Greg's 4-year-old piece, I'll let him express his opinion on how the
framework he described applies to my post; the Bitcoin dev stage has grown
a lot since then. What made sense when you had ~20 Bitcoin devs with 90% of
the technical knowledge doesn't scale when you have multiple second-layer
specifications, each with multiple implementation teams, some of them
decentralized and spread across different countries/timezones, IMHO.

Though, Dave, if you strongly hold your opinion on my behavior, I would
invite you to do this intellectual work yourself.

Browsing quickly through Greg's piece, a lot of the reasoning is based on
FOSS experience from Linux/Juniper, which to the best of my knowledge are
centralized software projects ?

Note also that Paul Sztorc's post has the simple phrase:

"I emphasized concrete numbers, and concrete dates"

I believe my post doesn't have such numbers and concrete dates?

The presence of Core version numbers is motivated as clear signalling for
L2 developers to update their backend in case of undocumented, subtle
policy changes slipping into the codebase. Let's minimize
CVE-2020-26895-style bugs across the ecosystem :/

Finally, the presence of timelines in this post is also a gentle call for
the Bitcoin ecosystem to act on those safety holes, whose seriousness has
been underscored by a lot of contributors in the past, especially for the
pre-signed feerate problem, even before I arrived on the Bitcoin stage.

So better to patch them before they manifest in the wild and folks start to
bleed coins. One thing you learn from practicing security research: lack of
action can be harmful :/

> Stuff will get done when it gets done.

Don't forget bugs might slip in but that's fine if you have the skilled
folks around to catch them :)

And yes, I really care about Lightning, and all those cute new L2 protocols
being fostered in the community :)

Finally, you know Dave, I'm just spreading ideas.

If those ideas are sound and folks love them, awesome! They're free to use,
study, share and modify them to build better systems.

If I'm wrong, as I've been in the past, as I might be today, and as I'll be
in the future, I hope they will have the patience to teach me and we'll
learn.

Hacker ethos :) ?

Cheers,
Antoine

[0]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-January/002448.html

[1] https://github.com/bitcoin/bitcoin/issues/14895

[2]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-February/002569.html

On Sat, Jun 19, 2021 at 09:38, David A. Harding wrote:

> On Fri, Jun 18, 2021 at 06:11:38PM -0400, Antoine Riard wrote:
> > 2) Solving the Pre-Signed Feerate problem : Package-Relay or
> > SIGHASH_ANYPREVOUT
> >
> > For Lightning, either package-relay o

Re: [Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-18 Thread Antoine Riard
> That's a question I hope we'll gather feedback on during next Thursday's
> transaction relay workshops.

As someone kindly pointed out to me, the workshop is happening Tuesday, June
22nd. Not Thursday; my mistake :/



On Fri, Jun 18, 2021 at 18:11, Antoine Riard wrote:

> Hi,
>
> It's a big chunk, so if you don't have time browse parts 1 and 2 and share
> your 2 sats on the deployment timeline :p
>
> This post recalls some unsolved safety holes about Lightning, how
> package-relay or SIGHASH_ANYPREVOUT can solve the first one, how a mempool
> hardening can solve the second one, few considerations on package-relay
> design trade-offs and propose a rough deployment timeline.
>
> 1) Lightning Safety Holes : Pre-Signed Feerate and Tx-Pinning (to skip if
> you're a LN dev)
>
> As of today, Lightning is suffering from 2 safety holes w.r.t to
> base-layer interactions, widely discussed among ln devs.
>
> The first one, the pre-signed feerate issue with future broadcasted
> time-sensitive transactions is laid out clearly in Matt Corallo's "CPFP
> Carve-Out Fee-Prediction Issues in Contracting Applications (eg Lightning)"
> [0]. This issue might provoke loss of funds, even in non-adversarial
> settings, i.e a Lightning routing hub not being able to settle backward
> onchain a successful HTLC during occurrences of sudden mempool congestion.
>
> As blockspace demand increases with an always growing number of
> onchain/offchain bitcoin users, coupling effects are more likely to happen
> and this pre-signed feerate issue is going to become more urgent to solve
> [1]. For e.g, few percentiles of increases in feerate being overpriced by
> Lightning routing hubs to close "fractional-reserve" backed anchor
> channels, driving mempools congestions, provoking anchor channels
> fee-bumping reserves becoming even more under-provisioned and thus close
> down, etc.
>
> The second issue, malicious transaction pinnings, is documented in Bastien
> Teinturier's "Pinning Attacks" [2]. AFAIK, there is a rough consensus among
> devs on the conceptual feasibility of such a class of attacks against a LN
> node, though so far we have not seen them executed in the wild and I'm not
> aware of anyone having realized them in real-world conditions. Note, there
> is a variety of attack scenarios to consider which is function of a wide
> matrix (channel types, LN implementation's `update_fee` policy, LN
> implementation's `cltv_delta` policy, mempool congestion feerate groups,
> routing hubs or end nodes) Demoing against deployed LN implementations with
> default settings has been on my todo for a while, though a priori One
> Scenario To Exploit Them All doesn't fit well.
>
> Side-note, as a LN operator, if you're worried about those security risks,
> you can bump your `cltv_delta`/`cltv_expiry_delta` to significantly coarse
> the attacks.
>
> I think there is an important point to underscore. Considering the state
> of knowledge we have today, I believe there is no strong interdependency
> between solving pre-signed feerate and tx-pinning with the same mechanism
> from a safety/usability standpoint. Or last such mechanism can be deployed
> by stages.
>
> 2) Solving the Pre-Signed Feerate problem : Package-Relay or
> SIGHASH_ANYPREVOUT
>
> For Lightning, either package-relay or SIGHASH_ANYPREVOUT should be able
> to solve the pre-signed feerate issue [3]
>
> One of the interesting points recalled during the first transaction relay
> workshops was that L2s making unbounded security assumptions on
> non-normative tx-relay/mempool acceptance rules sounds a wrong direction
> for the Bitcoin ecosystem long-term, and more prone to subtle bugs/safety
> risks across the ecosystem.
>
> I did express the contrary, public opinion a while back [4]. That said, I
> start to agree it's wiser ecosystem-wise to keep those non-normatives rules
> as only a groundwork for weaker assumptions than consensus ones. Though it
> would be nice for long-term L2s stability to consider them with more care
> than today in our base-layer protocol development process [4]
>
> On this rational, I now share the opinion it's better long-term to solve
> the pre-signed feerate problem with a consensus change such as
> SIGHASH_ANYPREVOUT rather than having too much off-chain coins relying on
> the weaker assumptions offered by bitcoin core's tx-relay/mempool
> acceptance rules, and far harder to replicate and disseminate across the
> ecosystem.
>
> However, if SIGHASH_ANYPREVOUT is Things Done Right(tm), should we discard
> package-relay ?
>
> Sadly, in the worst-case scenario we might never reach consensus again
> across the ecosystem and Taproot is the last softfork. Ever :/ *sad violons
> and

[Lightning-dev] Waiting SIGHASH_ANYPREVOUT and Packing Packages

2021-06-18 Thread Antoine Riard
Hi,

It's a big chunk, so if you don't have time, browse parts 1 and 2 and share
your 2 sats on the deployment timeline :p

This post recalls some unsolved safety holes about Lightning, how
package-relay or SIGHASH_ANYPREVOUT can solve the first one, how a mempool
hardening can solve the second one, a few considerations on package-relay
design trade-offs, and proposes a rough deployment timeline.

1) Lightning Safety Holes : Pre-Signed Feerate and Tx-Pinning (to skip if
you're a LN dev)

As of today, Lightning is suffering from 2 safety holes w.r.t. base-layer
interactions, widely discussed among LN devs.

The first one, the pre-signed feerate issue with time-sensitive transactions
broadcast in the future, is laid out clearly in Matt Corallo's "CPFP
Carve-Out Fee-Prediction Issues in Contracting Applications (eg Lightning)"
[0]. This issue might provoke loss of funds, even in non-adversarial
settings, i.e. a Lightning routing hub not being able to settle backward
onchain a successful HTLC during occurrences of sudden mempool congestion.

As blockspace demand increases with an ever-growing number of
onchain/offchain bitcoin users, coupling effects are more likely to happen
and this pre-signed feerate issue is going to become more urgent to solve
[1]. E.g., a few percentage points of feerate increase lead Lightning
routing hubs to overpay to close "fractional-reserve" backed anchor
channels, driving further mempool congestion, which leaves the fee-bumping
reserves of other anchor channels even more under-provisioned and forces
them to close down too, etc.

The second issue, malicious transaction pinning, is documented in Bastien
Teinturier's "Pinning Attacks" [2]. AFAIK, there is a rough consensus among
devs on the conceptual feasibility of such a class of attacks against a LN
node, though so far we have not seen them executed in the wild and I'm not
aware of anyone having realized them in real-world conditions. Note, there
is a variety of attack scenarios to consider, which is a function of a wide
matrix (channel types, LN implementation's `update_fee` policy, LN
implementation's `cltv_delta` policy, mempool congestion feerate groups,
routing hubs or end nodes). Demoing against deployed LN implementations with
default settings has been on my todo for a while, though a priori One
Scenario To Exploit Them All doesn't fit well.

Side-note, as a LN operator, if you're worried about those security risks,
you can bump your `cltv_delta`/`cltv_expiry_delta` to significantly blunt
the attacks.

I think there is an important point to underscore. Considering the state of
knowledge we have today, I believe there is no strong interdependency
between solving pre-signed feerate and tx-pinning with the same mechanism,
from a safety/usability standpoint. Or at least such a mechanism can be
deployed in stages.

2) Solving the Pre-Signed Feerate problem : Package-Relay or
SIGHASH_ANYPREVOUT

For Lightning, either package-relay or SIGHASH_ANYPREVOUT should be able to
solve the pre-signed feerate issue [3].

One of the interesting points recalled during the first transaction relay
workshops was that L2s making unbounded security assumptions on
non-normative tx-relay/mempool acceptance rules sounds like a wrong
direction for the Bitcoin ecosystem long-term, and more prone to subtle
bugs/safety risks across the ecosystem.

I did publicly express the contrary opinion a while back [4]. That said, I
start to agree it's wiser ecosystem-wise to keep those non-normative rules
as only a groundwork for weaker assumptions than consensus ones. Though it
would be nice for long-term L2 stability to consider them with more care
than today in our base-layer protocol development process [4].

On this rationale, I now share the opinion that it's better long-term to
solve the pre-signed feerate problem with a consensus change such as
SIGHASH_ANYPREVOUT rather than have too many off-chain coins relying on the
weaker assumptions offered by Bitcoin Core's tx-relay/mempool acceptance
rules, which are far harder to replicate and disseminate across the
ecosystem.

However, if SIGHASH_ANYPREVOUT is Things Done Right(tm), should we discard
package-relay ?

Sadly, in the worst-case scenario we might never reach consensus again
across the ecosystem and Taproot is the last softfork. Ever :/ *sad violins
and tissues jingle*

With this dilemma in mind, it might be wise for the LN/L2 ecosystems to have
a fall-back plan to solve their safety/usability issues, and package-relay
sounds like a reasonable, temporary "patch".

Even if package-relay requires serious engineering effort in Bitcoin Core to
avoid introducing new DoSes, to swallow well the complexity increase in
critical code paths such as the mempool/p2p stack, and a gentle API design
for our friends the L2 devs, I believe it's worth the engineering resource
cost. From my completely-biased-LN-dev viewpoint :p

In the best-case scenario, we'll activate SIGHASH_ANYPREVOUT and better
fee-bumping primitives softforks [5] slowly strip off the "L2 fee-bumping
primitive" 

[Lightning-dev] Reminder: Transaction relay workshop on IRC Libera - Tuesday 15th June 19:00 UTC

2021-06-14 Thread Antoine Riard
Hi,

A short reminder about the 1st transaction relay workshop, happening
tomorrow on the #l2-onchain-support Libera chat (!), Tuesday 15th June, from
19:00 UTC to 20:30 UTC.

Scheduled topics are:
* "Guidelines about L2 protocols onchain security design"
* "Coordinated cross-layers security disclosures"
* "Full-RBF proposal"

Find notes and open questions for the two first topics here:
* https://github.com/ariard/L2-zoology/blob/master/workshops/guidelines.md
* https://github.com/ariard/L2-zoology/blob/master/workshops/coordinated.md

Going to send the "Move toward full-rbf" proposal soon, it deserves its own
thread. Workshops will stick to a socratic format to foster as much
knowledge sharing among attendees as possible, and ideally we'll reach rough
consensus about expected goals.

If you're a second-layer protocol designer, a Lightning dev, a Bitcoin Core
dev contributing around mempool/p2p areas, or a Bitcoin service operator
with intense usage of the mempool, I hope you'll find those workshops of
interest and you'll learn a lot :)

Again it's happening on Libera, not Freenode, contrary to the former mail
about agenda & schedule.

Cheers,
Antoine


[Lightning-dev] On Mempool Funny Games against Multi-Party Funded Transactions

2021-05-06 Thread Antoine Riard
Hi,

In this post I would like to highlight some DoS attacks against multi-party
Bitcoin protocols during their funding phases. Recent discussions around the
DLC funding flow [0] and dual-funding of LN channels [1] remind me that some
timevalue DoS/fee inflation issues are common to any multi-party funded
transaction. I'm not sure how many developers working on that kind of
protocols/applications are aware of them and how well they mitigate against
them.

The first issue is a timevalue DoS exploiting standardness malleability. The
second one is a fee inflation exploiting RBF policy rules. User utxos aren't
directly at risk but those attacks might reveal themselves as severe
nuisances. More sophisticated variations do exist, but those ones are pretty
easy and cheap to execute with a high rate of success.

# The Model : Multi-Party Funded Transaction

Let's say Alice, Bob and Caroll each commit one input to a single
transaction. Each of them receives inputs from the others, verifies the
outpoint's existence (and that it is a segwit one if you have a chain of
child transactions) and signs the whole transaction data with sighash_all to
enforce the expected protocol semantics. One of them collects all the
witnesses and broadcasts the finalized transaction. This broadcaster might
be responsible for fee-bumping the transaction through CPFP if the feerate
as previously negotiated isn't good enough for a quick confirmation.

Once the transaction is confirmed, the protocol moves into its operation
phase (e.g. start channel updates) or might even end here (e.g. a basic
one-stage coinjoin). But those later phases are out of scope here.

I think this rough model applies to a wide set of L2 bitcoin protocols (DLC,
Coinjoin, Payjoin, dual-funded LN channels, swaps?). Notice that
*single-party funded*, multi-party transactions (e.g. a batching_tx to N
payouts) are excluded from this discussion. Although they do share the same
risks, exploits against them are a bit harder, as the attacker has to
execute a real RBF-pinning on a payout output, more costly in feerate.

Accepting inputs from, and committing coins with, low-trust counterparties
opens the way to some trouble.

# 1st issue : Standardness Malleability of Counterparty Input

Core's current script interpreter applies some stricter checks beyond the
consensus ones, like MINIMALIF or NULLFAIL (bip146). While non-compliant
with those checks, witness data might still pass consensus checks. An L2
client only verifying an input for consensus validity will miss standardness
validity and sign/broadcast a non-propagating transaction.

In the model described above, Alice might furnish a non-MINIMALIF-compliant
p2wsh spending input to Bob and Caroll; they will accept it as a valid
input, finish the transaction finalization and try to broadcast. It will
fail to propagate and confirm. If Bob and Caroll are relying on a full node,
they can observe the failure directly and move their coins. Otherwise, if
they don't have access to a mempool policy verifier, they should move their
coins after some timeout.

In both cases, victims of this malleability will waste timevalue on their
coins, and likely fees for a double-spend of their honest inputs, as it's
better to cancel out the corrupted multi-party funding transaction. If the
double-spend only occurs after a meaningful timeout, e.g. 2048 blocks ahead
of signature exchange like for the recent LN change [2], this timevalue loss
might be in the same range as the one suffered on LN's revokeable outputs.
The attacker's coin might be of far lower value than the victims' ones, and
the asymmetry should be underscored: *one* malicious input lets you affect
*N* victim ones.

As a simple mitigation, participants of the multi-party funded transaction
should verify the absence of standardness malleability in the contributed
witnessScripts. Though AFAIK, we don't have such tooling available
ready-to-integrate in L2 client stacks [3].
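As a minimal sketch of what such a check could look like, assuming a local
Bitcoin Core node (0.17+, which exposes `testmempoolaccept`) reachable over
JSON-RPC; the endpoint and credentials below are hypothetical:

  import base64
  import json
  import urllib.request

  RPC_URL = "http://127.0.0.1:8332"              # hypothetical node endpoint
  RPC_USER, RPC_PASS = "rpcuser", "rpcpassword"  # hypothetical credentials

  def rpc_call(method, params):
      # Plain JSON-RPC call to the local node, no external dependencies.
      payload = json.dumps({"jsonrpc": "1.0", "id": "l2-check",
                            "method": method, "params": params}).encode()
      req = urllib.request.Request(RPC_URL, data=payload,
                                   headers={"Content-Type": "application/json"})
      token = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
      req.add_header("Authorization", "Basic " + token)
      with urllib.request.urlopen(req) as resp:
          return json.loads(resp.read())["result"]

  def is_policy_acceptable(signed_tx_hex):
      # Ask the node whether the fully-signed transaction would pass its
      # standardness/mempool policy, before co-signing or broadcasting it.
      res = rpc_call("testmempoolaccept", [[signed_tx_hex]])[0]
      return res["allowed"], res.get("reject-reason", "")

This only reflects the policy of the node you query, of course, but it would
already have caught a non-MINIMALIF-compliant witness before signatures are
exchanged.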

Notice, I'm not considering the timevalue DoS inflicted by a malicious
broadcaster/orchestrator, where signatures are collected but the transaction
broadcast is withheld. This should be kept in mind at counterparty/service
selection, but it's beyond the scope of an analysis centered on
mempool/tx-relay risks.

# 2nd issue : RBF opt-out by a Counterparty Double-Spend

Current bip125 RBF rules make signaling mandatory to enable replacement;
otherwise even a better-feerate candidate won't replace a conflicting
transaction with a finalized nSequence field [4]. An L2 client might be in
possession of a better-feerate multi-party funded transaction, but it won't
propagate on today's network if an opt-out double-spend is already present.
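To make the rule concrete, a minimal sketch of the direct-signaling check
(bip125 also allows signaling to be inherited from unconfirmed ancestors,
which this ignores):

  # BIP125 direct signaling: a transaction is replaceable only if at least
  # one of its inputs carries an nSequence below 0xfffffffe.
  def signals_rbf(input_sequences):
      # input_sequences: list of nSequence values, one per input
      return any(seq < 0xfffffffe for seq in input_sequences)

  # A double-spend broadcast with every input finalized at 0xffffffff opts
  # out of replacement, so a later better-feerate funding tx won't evict it.
  assert signals_rbf([0xfffffffd]) is True
  assert signals_rbf([0xffffffff]) is False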

In the model described above, Alice might provide a consensus-and-standard
valid input to Bob and Caroll; they will verify and accept it, finish the
transaction finalization and broadcast. Meanwhile, Alice will mass-connect
to the network and announce a double-spend of her input with a finalized
nSequence of 0xffffffff. Alice-Bob-Caroll's funding 

Re: [Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Antoine Riard
Hi Jeremy,

Yes, dates are floating for now. After Bitcoin 2021 sounds like a good idea.

Awesome, I'll be really interested to review again an improved version of
sponsorship. And I'll try to sketch out the sighash_no-input fee-bumping
idea which was floating around last year during the pinning discussions. Yet
another set of trade-offs :)

Le ven. 23 avr. 2021 à 11:25, Jeremy  a écrit :

> I'd be excited to join. Recommend bumping the date  to mid June, if that's
> ok, as many Americans will be at Bitcoin 2021.
>
> I was thinking about reviving the sponsors proposal with a 100 block lock
> on spending a sponsoring tx which would hopefully make less controversial,
> this would be a great place to discuss those tradeoffs.
>
> On Fri, Apr 23, 2021, 8:17 AM Antoine Riard 
> wrote:

[Lightning-dev] L2s Onchain Support IRC Workshop

2021-04-23 Thread Antoine Riard
Hi,

During the last few years, the tx-relay and mempool acceptance rules of the
base layer have been sources of major security and operational concerns for
Lightning and other Bitcoin second-layers [0]. I think those areas require
significant improvements to ease the design and deployment of higher Bitcoin
layers, and I believe this opinion is shared among the L2 dev community. In
order to make advancements, it has been discussed a few times in the last
months to organize in-person workshops to discuss those issues with the
presence of both L1/L2 devs to make exchanges fruitful.

Unfortunately, I don't think we'll be able to organize such in-person
workshops this year (because you know, travel is hard these days...). As a
substitute, I'm proposing a series of one or more IRC meetings. That said,
this substitution has the happy benefit of gathering far more folks
interested in those issues than you can fit in a room.

# Scope

I would like to propose the following 4 items as topics of discussion.

1) Package relay design or another generic L2 fee-bumping primitive like
sponsorship [0]. IMHO, this primitive should at least solve mempool spikes
making the propagation of transactions with pre-signed feerates obsolete,
solve pinning attacks compromising Lightning/multi-party contract protocol
safety, offer a usable and stable API to L2 software stacks, stay compatible
with miner and full-node operator incentives, and obviously minimize
CPU/memory DoS vectors.

2) Deprecation of opt-in RBF toward full-RBF. Opt-in RBF makes it trivial
for an attacker to partition network mempools into divergent subsets and
from there launch advanced security or privacy attacks against a Lightning
node. Note, it might also be a concern for bandwidth bleeding attacks
against L1 nodes.

3) Guidelines about coordinated cross-layer security disclosures. Mitigating
a security issue around tx-relay or the mempool in Core might have harmful
implications for downstream projects. Ideally, L2 project maintainers should
be ready to upgrade their protocols in an emergency, in coordination with
base-layer developers.

4) Guidelines about L2 protocols' onchain security design. Currently
deployed L2 protocols like Lightning are making a bunch of assumptions on
tx-relay and mempool acceptance rules. Those rules are non-normative,
non-reliable and lack documentation. Further, they're devoid of tooling to
enforce them at runtime [2]. IMHO, it could be preferable to identify a
subset of them on which second-layer protocols can make assumptions without
encroaching too much on nodes' policy realm or making base-layer development
in those areas too cumbersome.

I'm aware that some folks are interested in other topics such as extending
Core's mempool package limits or better pricing of RBF replacements. So I
propose a 2-week consultation period to submit other topics related to
tx-relay or mempool improvements towards L2s before proposing a finalized
scope and agenda.

# Goals

1) Reaching technical consensus.
2) Reaching technical consensus, before seeking community consensus as it
likely has ecosystem-wide implications.
3) Establishing a security incident response policy which can be applied by
dev teams in the future.
4) Establishing a design philosophy and associated documentation (BIPs,
best practices, ...)

# Timeline

2021-04-23: Start of consultation period
2021-05-07: End of consultation period
2021-05-10: Proposition of workshop agenda and schedule
late 2021-05/2021-06: IRC meetings

As the problem space is savagely wide, I've started a collection of
documents to assist this workshop: https://github.com/ariard/L2-zoology
Still WIP, but I'll have them in good shape by agenda publication, with
reading suggestions and open questions to structure discussions.
I'm also working on transaction pinning and mempool partition attack
simulations.

If L2s security/p2p/mempool is your jam, feel free to get involved :)

Cheers,
Antoine

[0] E.g. see the Optech section on transaction pinning attacks:
https://bitcoinops.org/en/topics/transaction-pinning/
[1]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-September/018168.html
[2] Lack of reference tooling makes it easier for bugs to slip in, like
https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002858.html


Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-15 Thread Antoine Riard
> The risk of hitting the chain that you mention can be factored into this
> base part as well. The hold fee rate would then be defined in the form (2
> sat + 1%) per minute.

I think this works if the base fee is paid at HTLC commitment lock-in.
Otherwise, you're still exposed to the chain-hit risk: the channel might
break at any time and the timevalue of the pending forward HTLC set will be
lost. Note, the hedge is likely to be probabilistic, as you won't make HTLC
receivers pay for the effective timevalue but only a fragment computed from
the expected outgoing HTLC traffic during the channel lifetime.

A smart channel relay policy will discount chain-hit risks for stable
links, incentivizing your counterparties to be good peers.

> Is this the same concern as above or slightly different? Or do you mean
> clock differences between the endpoints of a channel? For that, I'd think
> that there needs to be some tolerance to smooth out disagreements. But
> yes, in general as long as a node is getting a positive amount, it is
> probably okay to tolerate a few rounding errors here and there.

This is a slightly different concern: HTLC settlement may happen at
different wall-clock times downstream and upstream.

Let's say an HTLC is relayed across Alice, Bob, Caroll. The HTLC is
successfully settled at N between Bob and Caroll, with N the number of
minutes since HTLC lock-in. Caroll pays N * `hold_fee_rate` to Bob. As
upstream settlement isn't atomic, it happens at N+1. Bob pays (N+1) *
`hold_fee_rate` to Alice. Bob loses 1 * `hold_fee_rate`.

For normal operations, I concede it should happen rarely enough for this to
be an edge case.
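To put rough numbers on that asymmetry, a minimal sketch assuming a purely
illustrative hold fee of the "(2 sat + 1%) per minute" form quoted above and
a one-minute settlement lag between the two links (the HTLC amount and N are
made up):

  # Illustrative only: Bob relays Alice -> Bob -> Caroll and the upstream
  # link settles one minute after the downstream one.
  def hold_fee(amount_sat, minutes, base_per_min=2, pct_per_min=0.01):
      return minutes * (base_per_min + pct_per_min * amount_sat)

  amount = 50_000                                # HTLC value in sats
  n = 10                                         # minutes until Caroll settles
  received_from_caroll = hold_fee(amount, n)     # Bob's downstream income
  paid_to_alice = hold_fee(amount, n + 1)        # upstream settles at N+1

  # Bob's shortfall is exactly one minute's worth of hold fee.
  assert paid_to_alice - received_from_caroll == hold_fee(amount, 1)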

> Yes, that is a good point. But I do think that it is reasonable that a
> node that can go offline doesn't charge a hodl fee. Those nodes aren't
> generally forwarding htlcs anyway, so it would just be for their own
> outgoing payments. Without charging a hodl fee for outgoing payments, they
> risk that their channel peer delays the htlc for free. So they should
> choose their peers carefully. It seems that at the moment mobile nodes are
> often connected to a known LSP already, so this may not be a real problem.

I think the reasoning holds for mobile clients not charging hold fees. Most
of the time, your LSP doesn't have an interest in delaying your HTLC; it's
better to succeed/fail quickly to release the liquidity and potentially earn
fees on another payment. The only case might be when all the outgoing
liquidity of your LSP is already near-busy and it has to select between your
HTLC to forward and the one from another spoke. Minding this, hold fees
charged by mobile clients might be a way to prioritize their payments.

> All of this indeed also implies that nodes that do charge hold fees, need
> to make sure to stay online. Otherwise peers may close channels with them
> because they are unreliable and charging for their own outage.

And this is the point where it becomes tricky. A malicious upstream node
might mimic being offline to inflate the hold fee it is due. Considering the
long-term protocol trend of pouring the unilateral closing fee burden on the
counterparty deciding to go onchain, it won't be economical to do so if the
extorted hold fees are inferior to the channel's onchain closing fees. So
you might have this cat-and-mouse game play out many times until it's
obvious to a channel scoring logic that this peer is malicious and the
channel must be closed. In between, the accumulated hold fees might have
been superior to the cost of the channel opening...

I can really see a sophisticated attacker able to escape such channel
blacklist heuristics.

> Yes, we should be careful not to outlaw micropayments. But I don't think
> the hold fees as described above do this. Because the fees are modeled as
> close to the real costs as possible, it can only be fair? Tiny amounts
> that settle quickly should need only very small hold fees. But if the tiny
> amount gets stuck for a week and occupies an htlc slot in each of its 25
> hops through multi-btc wumbo channels, yes, then it should be costly?

I agree that any packet which holds liquidity for a while should cover slot
costs and timevalue, even if the value transferred ends up inferior to the
final hold fees. In that case, it should be up to the original sender to
arbitrate between its expected traffic and the opportunity to open a channel
to shorten its payment paths.

Still, "very small hold fees" might scope a lot of use-cases which are hard
to care about because they might not exist yet, compared to leveraging a
resource (channel UTXOs) which is already assumed to be owned by Lightning
users.

Le dim. 14 févr. 2021 à 13:05, Joost Jager  a écrit :

> I've made a first attempt at projecting this idea onto the existing spec:
> https://github.com/lightningnetwork/lightning-rfc/pull/843. This may also
> clarify some of the questions that haven't been answer

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-12 Thread Antoine Riard
Hi Joost,

Thanks for working on this and keeping raising awareness about channel
jamming.

> In this post I'd like to present a variation of bidirectional upfront
> payments that uses a time-proportional hold fee rate to address the
> limitation above. I also tried to come up with a system that aims to
> relate the fees paid more directly to the actual costs incurred and
> thereby reduce the number of parameters.

Not considering hold invoices and other long-held packets was one of my main
concerns with the previous bidirectional upfront payments proposal. This new
"hodl_fee_rate" is better by binding the hold fee to the effectively
consumed timelocked period of the liquidity and not its potential maximum.

That said, routing nodes might still include the risk of hitting the chain
in the computation of their `hodl_fee_rate`, and the corresponding cost of
having onchain timelocked funds. Given that HTLC deltas are decreasing along
the path, it's more likely that `hodl_fee_rate` will be decreasing along the
path. Even in the case of a legitimately resolved hodl HTLC, routing nodes
might be at a loss for having paid a higher hold fee on their upstream link
than received on the downstream one.

Is assuming increasing `hodl_fee_rate` along a payment path at odds with
the ordering of timelocks ?

> But this would also mean that anyone can send out an htlc and collect hold
> fees unconditionally. Therefore routing nodes advertise on the network
> their `hold_grace_period`. When routing nodes accept an htlc to forward,
> they're willing to pay hold fees for it. But only if they added a delay
> greater than `hold_grace_period` for relaying the payment and its
> response. If they relayed in a timely fashion, they expect the sender of
> the htlc to cover those costs themselves. If the sender is also a routing
> node, the sender should expect the node before them to cover it. Of
> course, routing nodes can't be trusted. So in practice we can just as well
> assume that they'll always try to claim from the prior node the maximum
> amount in compensation.

Assuming `hodl_fee_rate`s are near-similar along the payment path, you have
a concern when the HTLC settlement happens at period N on the outgoing link
and at period N+1 on the incoming link due to clock differences. In this
case, a routing node will pay more in hold fees than it received.

I think this is okay, that's an edge case, only leaking a few sats.

A more concerning one is when the HTLC settlement happens at period N on the
outgoing link and your incoming counterparty goes offline. According to the
HTLC relay contract, the hold fee will keep inflating until the counterparty
comes back online, and thus the routing node is at a loss. And going offline
is a perfectly legitimate behavior for mobile clients, even more so if you
consider mailbox-style HTLC delivery (e.g. Lightning Rod). You can't simply
label such a counterparty as malicious.

And I don't think counterparties can trust each other's claims about their
online status to suspend the hold fee inflation. Both sides have an interest
in equivocating: the HTLC sender to gain a higher fee, the HTLC relayer to
save the fee while having received one on the incoming link?

> Even though the proposal above is not fundamentally different from what
> was known already, I do think that it adds the flexibility that we need to
> not take a step back in terms of functionality (fair pricing for hodl
> invoices and its applications). Plus that it simplifies the parameter set.

Minding the concerns raised above, I think this proposal is an improvement
and would merit a specification draft, at least to ease further reasoning on
its economic and security soundness. As a side-note, we're working further
on Stake Certificates, which I believe is better for long-term network
economics by not adding a new fee burden on payments. We should be careful
not to economically outlaw micropayments. If we think channel jamming is
concerning enough in the short term, we can deploy a bidirectional upfront
payment-style proposal now and consider a better solution when it's
technically mature.


Antoine

Le jeu. 11 févr. 2021 à 10:25, Joost Jager  a écrit :

> Hi ZmnSCPxj,
>
> Not quite up-to-speed back into this, but, I believe an issue with using
>> feerates rather than fixed fees is "what happens if a channel is forced
>> onchain"?
>>
>> Suppose after C offers the HTLC to D, the C-D channel, for any reason, is
>> forced onchain, and the blockchain is bloated and the transaction remains
>> floating in mempools until very close to the timeout of C-D.
>> C is now liable for a large time the payment is held, and because the C-D
>> channel was dropped onchain, presumably any parameters of the HTLC
>> (including penalties D owes to C) have gotten fixed at the time the channel
>> was dropped onchain.
>>
>
> The simplicity of the fixed fee is that it bounds the amount of risk that
>> C has in case its outgoing channel is dropped onchain.
>>
>
> The risk is bound in both cases. If you want you can cap 

[Lightning-dev] Full Disclosure: CVE-2020-26895 LND "Hodl my Shitsig"

2020-10-20 Thread Antoine Riard
# Problem

A lightning node must verify that its channel transactions are not only
consensus-valid but also tx-relay standard. The counterparty's signatures
are part of the local txn (commitment/HTLC) as provided in
`commitment_signed`. Verifying the consensus validity of these signatures
but not their tx-relay standardness lets an attacker provoke a permanent
tx-relay failure of the victim's transactions. The attacker can steal
in-flight HTLCs once they expire, as the victim can't trustlessly claim them
onchain during the CLTV delay.

Bitcoin ECDSA signatures are made of the scalar pair (R, S). Since Bitcoin
Core 0.10 [0], high-S (above curve order / 2) signatures aren't standard,
and thus any transaction containing one won't be relayed/mined on the
regular p2p network. Note that this check isn't part of consensus rules,
even if it has been proposed to be soft-forked [1].

Prior to v0.10, LND would have accepted a counterparty high-S signature and
broadcast tx-relay-invalid local commitment/HTLC transactions. This could be
exploited by any peer with an already open channel, whatever the victim's
situation (routing node, payment receiver, payment sender [2]).

Contrary to other Lightning implementations, LND uses
`btcd.btcec.signature` for verifying counterparty signatures at
commitment-signed exchange. This Go package itself relies on the default
golang crypto ecdsa package. It differs from libsecp256k1, as the
verification method doesn't enforce the low-S form of the signature; thus
libsecp256k1's notion of signature validity is tighter than the golang ecdsa
package's one.

The `btcec` serialization code was correctly normalizing signatures to
low-S, but this step wasn't included when that code was brought into LND's
`lnwire`.

Note that LND didn't suffer from this vulnerability on opening/closing, as
it was relying on the btcd.txscript witness verification method with the
correct standardness flags, enforcing low-S signatures.

Note that Bitcoin Core (`CPubKey::Verify`) always normalizes signatures
before passing them to the libsecp256k1 verification method, which
unconditionally enforces low-S (`secp256k1_ecdsa_verify`).

# Solution

As ECDSA signatures are inherently malleable, even if the counterparty
provides a high-S signature, it can be normalized by the receiver into a
tx-relay standard one.
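As a back-of-the-envelope sketch of that normalization (just the arithmetic,
using the public secp256k1 group order; this is not the actual LND patch):

  # If the counterparty hands us a high-S signature, replace S with n - S
  # before building the witness. The (r, s) pair stays consensus-valid for
  # the same message and key, but is now tx-relay standard.
  SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

  def normalize_s(r, s):
      if s > SECP256K1_N // 2:
          s = SECP256K1_N - s
      return r, s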

A more proactive solution is to fail the channel at any reception of a
high-S signature, as it's a clear signal that your counterparty is either
malicious or buggy (most bitcoin software has been generating low-S
signatures for a while [3]).

For now, the first solution has been adopted by the LND team. A spec change
has been proposed to make the second a requirement.

# Background

A lightning node's security relies on the assumption that it is always able
to unilaterally broadcast its channel transactions, in the aim of confirming
them on-chain in a timely manner to enforce an off-chain negotiated balance.

It must be remembered that channel transactions are asymmetric; each party
owns a different version including all parties' balances/HTLCs. To broadcast
its version, a party must own a valid witness at any time.

For commitment transactions, the witness stack is the following:

0 <sig1> <sig2>

For HTLC-Success:

0 <remotehtlcsig> <localhtlcsig> <payment_preimage>

For HTLC-Timeout:

0 <remotehtlcsig> <localhtlcsig> <> (empty vector)

The counterparty's signatures (<remotesig>/<remotehtlcsig>) are the ones
which might have been maliciously malleated by an attacker.

These signatures are provided at channel update by a counterparty's
`commitment_signed`. Once it's accepted, the local node must release the
revocation secret for the previous channel state, thus relying on the
validity of the highest-state transactions for its funds' safety.

These transactions must be unilaterally broadcast when the off-chain
resolution deadline is reached for [4]:
* offered HTLCs for a routing/original sender node
* received HTLCs for a routing/last receiver node

Note, even if this off-chain resolution deadline is expressed as a block
height, it's not equal to the HTLC's absolute timelock but must always be
inferior to it. This offers a block buffer for the local node to broadcast,
fee-bump and hopefully confirm its transactions.
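As a minimal sketch of that relationship (the buffer value is purely
illustrative, each implementation picks its own policy):

  # The deadline to go onchain sits strictly below the HTLC's absolute
  # timelock, leaving a buffer of blocks to broadcast, fee-bump and confirm.
  SAFETY_BUFFER_BLOCKS = 12   # illustrative, implementation/policy dependent

  def offchain_resolution_deadline(htlc_cltv_expiry):
      return htlc_cltv_expiry - SAFETY_BUFFER_BLOCKS

  def must_go_onchain(current_height, htlc_cltv_expiry):
      return current_height >= offchain_resolution_deadline(htlc_cltv_expiry)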

A non-standard transaction can still be confirmed by out-of-band agreement
with a miner, or by a user intervention to correct the transaction if
possible. In the case of Lightning, the security model doesn't assume this
kind of user intervention, and deployed timelocks would have been too short
for a reasonable intervention by a node operator.

# Discovery

While working on Rust-Lightning, I observed that the implementation was
generating MINIMALIF-invalid transactions due to a regression. This case
wasn't covered by our test framework, as there is no easy-to-integrate
utility to test transaction standardness. After patching the spec to recall
the MINIMALIF requirement on some channel transaction witnesses [5], I
audited deployed Lightning implementations w.r.t. Core's script interpreter
policy flags. A quick test against LND (65f5119) revealed this
vulnerability.

After informing the LND team, I also informed the c-lightning and 

[Lightning-dev] Full Disclosure: CVE-2020-26896 LND "The (un)covert channel"

2020-10-20 Thread Antoine Riard
# Problem

In case of a relayed HTLC hash-and-amount collision with an expected payment
HTLC on the same channel, LND was releasing the preimage for the latter
while claiming onchain the former. A malicious peer could have deliberately
intercepted an HTLC intended for the victim node, probed the victim to learn
the preimage and stolen the intercepted HTLC.

Prior to v0.11, LND had a vulnerability in its invoice database: while
claiming onchain a received HTLC output, it didn't verify that the
corresponding outgoing off-chain HTLC was already settled before releasing
the preimage. In case of a hash-and-amount collision with an invoice, the
preimage for an expected payment was released instead. A malicious peer
could have deliberately intercepted an HTLC intended for the victim node,
probed the preimage through a colluding relayed HTLC and stolen the
intercepted HTLC.

Note that MPP payments (invoices with an `s` field) aren't subject to this
vulnerability, as an attacker wouldn't have the payment secret to
authenticate the probe HTLC. As of today, this class of invoices isn't
well-deployed as it outlaws non-upgraded payers.

This vulnerability was exposing routing nodes that also receive payments
(e.g. merchant nodes). A peer serving as an intermediary hop of a payment
path could have guessed that the next hop is the final receiver by comparing
the HTLC's `amount_msat` with any goods/services advertised by the victim
merchant's website. This does not affect non-routing nodes.

Note, this is a case of _indirect_ fund loss, whose exploitation scope
depends on the application logic. As the invoice state would have been
marked as settled, an application directly subscribing to it might have
automatically released the offered good/service. Otherwise, the loss may
have materialized if the preimage had a direct liquid value (e.g. an atomic
data exchange where the preimage is a decryption key).

# Solution

The current spec requirement is the following :

"A local node if it receives (or already possesses) a payment preimage for
an unresolved HTLC output that it has been offered AND for which it has
committed to an outgoing HTLC MUST resolve the output by spending it, using
the HTLC-Success transaction".

This point could be clearer and spell out the risk of HTLC hash-and-amount
collisions: "MUST NOT reveal a preimage for an incoming HTLC if it has not
learnt the preimage for the claiming of the outgoing HTLC" [0]
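As a purely illustrative sketch of that decision rule (not LND's actual
invoice code; the types and field names below are hypothetical):

  from collections import namedtuple

  # Illustrative shapes only.
  IncomingHtlc = namedtuple("IncomingHtlc", ["payment_hash", "payment_secret"])
  Invoice = namedtuple("Invoice", ["payment_secret"])

  def may_reveal_preimage(htlc, settled_outgoing_preimages, invoice_db):
      # Only reveal a preimage learnt by settling the corresponding outgoing
      # HTLC, or for an HTLC that genuinely pays one of our own invoices.
      if htlc.payment_hash in settled_outgoing_preimages:
          return True
      invoice = invoice_db.get(htlc.payment_hash)
      if invoice is None:
          return False
      # A hash-and-amount collision isn't enough: require the onion's
      # payment_secret to match the invoice before treating the HTLC as ours.
      return htlc.payment_secret == invoice.payment_secret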

I encourage the LN ecosystem to consider the mandatory deployment of
`payment_secret`, reducing the surface for this class of bugs, among other
benefits.


# Background

Alice, a merchant, is sending an invoice for amount A and hash H to Bob.

Bob is routing the HTLC A/H to Alice through Mallory.

Channel topology [0]:

  Mallet <---> Alice <---> Mallory <---> Bob

  HTLC (A,H'):   Mallory -> Alice -> Mallet
  HTLC (A,H):    Bob -> Mallory (intercepted, never relayed to Alice)
  invoice (A,H): issued by Alice to Bob

Mallory intercepts HTLC (A,H) from Bob and doesn't relay it forward.
Mallory, knowing that Alice's node pubkey is tied to a merchant node,
browses through her goods/services index to decide if Bob's HTLC is intended
for Alice by comparing the relayed amount with the price of an item.

If there is a match, give or take some noise of sats, Mallory draws a new
payment path to Mallet, a colluding node. Mallory sends an HTLC (A,H') to
Alice, which relays it forward to Mallet.

Mallet holds the HTLC and doesn't settle it immediately. Instead, Mallet
routes back a new HTLC Y to Mallory through Alice. This HTLC Y won't be
failed by Mallory, thus forcing Alice to force-close the channel when HTLC
Y's off-chain settlement deadline is crossed.

Alice goes onchain with her commitment for the Alice-Mallory channel. She
claims back the offered output for HTLC Y and the received output for HTLC
H'. She reveals preimage P, with H' = sha256(P) being equal to H =
sha256(P).

Mallory learns preimage P onchain and sends it out-of-band to Mallet, which
claims the incoming HTLC H' from Alice. Mallory claims the previously
intercepted HTLC H from Bob.

Bob learns the preimage P.

The Mallet-Mallory coalition gains the value of HTLC H minus Alice's routing
fee for HTLC H'. Also, they may have to pay the channel opening/closing fees
depending on who initiates.

Alice reveals a preimage P which corresponds to a provided good/service
without receiving the payment for it, thus being at a loss.

Alice might not have learnt about her loss until reconciling her merchant
inventory with her HTLC accounting, so this exploitation might have stayed
stealthy for a while.

# Discovery

While working on Rust-Lightning, 

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-15 Thread Antoine Riard
Hi Joost,

Thanks for your proposal. Please find my opinion below, deliberately kept
high-level, as IMO defining a better threat model and agreeing on the
expected network dynamics resulting from any solution's trade-offs sounds
required before working on any solution.

> We've looked at all kinds of trustless payment schemes to keep users
> honest, but it appears that none of them is satisfactory. Maybe it is even
> theoretically impossible to create a scheme that is trustless and has all
> the properties that we're looking for. (A proof of that would also be
> useful information to have.)

I don't think anyone has drawn up a formal proof of this yet, but roughly, a
routing peer Bob, aiming to prevent resource abuse at HTLC relay, is seeking
to answer the following question: "Will this payment coming from Alice and
going to Caroll compensate for my resource consumption?". With the current
LN system, the compensation is conditional on payment settlement success,
and both Alice and Caroll are distrusted yet discretionary on
failure/success. Thus the underscored question is undecidable for a routing
peer making relay decisions only on packet observation.

One way to mitigate this is to introduce statistical observation of
senders/receivers, namely a reputation system. It can be achieved through a
scoring system, a web-of-trust, or whatever other solution with the same
properties. But still, it must be underscored that statistical observations
are only probabilistic and don't provide resource consumption security to
Bob, the routing peer, in a deterministic way. A well-scored peer may
suddenly start to misbehave.

In that sense, the efficiency of a reputation-based solution to deter DoS
must be evaluated based on the loss borne by the reputation bearer relative
to the potential damage which can be inflicted. It's just that reputation
sounds harder to compute accurately than a pure payment-based DoS protection
system.

> Perhaps a small bit of trust isn't so bad. There is trust in Lightning
> already. For example when you open a channel, you trust (or hope) that
> your peer remains well connected, keeps charging reasonable fees, doesn't
> force-close in a bad way, etc.

That's a good reminder. Obviously we should avoid getting stuck in a false
trust-vs-trustlessness dichotomy and always bound the discussion to a
specific situation. Even the base layer involves some trust assumptions,
like fetching your initial p2p peers from DNS seeds; what matters is how you
minimize those assumptions. You might not have the same expectations of
miners, who might completely screw up the safety of your coin stack, as of
routing nodes, who might only cost you a tiny routing fee, a minor nuisance.

> What I can see working is a system where peers charge each other a hold
> fee for forwarded HTLCs based on the actual lock time (not the maximum
> lock time) and the htlc value. This is just for the cost of holding and
> separate from the routing fee that is earned when the payment settles

Yes, I guess any solution will work as long as it enforces an asymmetry
between the liquidity requester and an honest routing peer. This asymmetry
can be defined as guaranteeing that the routing peer's incoming/outgoing
balance is always increasing, independently of payment success. Obviously
this increase should be materialized by a payment, while minding that it
might be discounted based on requester reputation
("pay-with-your-reputation"). This reputation evaluation can be fully
delegated to the routing node's policy, without network-wide guidance.

That said, where I'm skeptical of any reputation-heavy system is the
long-term implications.

Either, due to a subset of actors deliberately willing to trade satoshis for
discounted payment flow by buying well-scored pubkeys, we see the emergence
of a reputation market, thus making reputation fungible with satoshis, but
with now a weird "reputation" token to care about.

Or, reputation is too hard to make liquid (e.g. hard to disentangle pubkeys
from channel ownership, or to export your score across routing peers) and
thus you now have a reputation scarcity which introduces a bias from a
"purer" market, where agents route only based on advertised fees. IMO, we
should strive for the most liquid Lightning market we can, as it avoids bias
towards past actors and thus may contain centralization inertia. I'm curious
about your opinion on this last point.

Moving forward, I think t-bast is working on gathering materials to check
off the first step, establishing a fully-fledged threat model.

Cheers,

Antoine

Le lun. 12 oct. 2020 à 07:04, Joost Jager  a écrit :

> Hello list,
>
> Many discussions have taken place on this list on how to prevent undesired
> use of the Lightning network. Spamming the network with HTLCs (for probing
> purposes or otherwise) or holding HTLCs to incapacitate channels can be
> done on today's network at very little cost to an 

Re: [Lightning-dev] Incremental Routing (Was: Making (some) channel limits dynamic)

2020-10-08 Thread Antoine Riard
Hi Zeeman,

> * It requires a lot more communication rounds and (symmetric, at least)
> cryptographic operations.

At first sight, it sounds similar to HORNET/rendez-vous, at least in the
goal of achieving bidirectional communications.

> * Intermediate nodes can guess the distance from the source by measuring
> timing of a previous response to the next message from the payer.

Yes, an intermediary node also serving as a message relay can likely learn a
lot from message RTT timings _and_ CLTV/routed values...

Note, on the point raised about untrusted upfront payments: if your payment
path hops are stealing the upfront fees, just consider this as a normal
routing failure and downgrade them in your routing algorithm, thus
incentivizing them to behave well to keep earning routing fees. Of course,
assigning the blame to the truly faulty hop is likely hard without making
onion errors reliable, but I think each hop would be incentivized to sign
its failures correctly and police its neighbouring peers for their laziness.

For sure, upfront payments need more grinding. But I think they will also
solve adjacent issues like your counterparty updating a channel for nothing
until you exhaust your watchtower update storage credit.

Best,
Antoine

Le mer. 7 oct. 2020 à 13:33, ZmnSCPxj  a écrit :

> Good morning Antoine, Bastien, and list,
>
> > > Instead of relying on reputation, the other alternative is just to
> have an upfront payment system, where a relay node doesn't have to account
> for a HTLC issuer reputation to decide acceptance and can just forward a
> HTLC as long it paid enough. More, I think it's better to mitigate jamming
> with a fees-based system than a web-of-trust one, less burden on network
> newcomers.
> >
> > Let us consider some of the complications here.
> >
> > A newcomer wants to make an outgoing payment.
> > Speculatively, it connects to some existing nodes based on some policy.
> >
> > Now, since forwarding is upfront, the newcomer fears that the node it
> connected to might not even bother forwarding the payment, and instead just
> fail it and claim the upfront fees.
> >
> > In particular: how would the newcomer offer upfront fees to a node it is
> not directly channeled with?
> > In order to do that, we would have to offer the upfront fees for that
> node, to the node we are channeled with, so it can forward this as well.
> >
> > -   We can give the upfront fee outright to the first hop, and trust
> that if it forwards, it will also forward the upfront fee for the next hop.
> > -   The first hop would then prefer to just fail the HTLC then and
> there and steal all the upfront fees.
> > -   After all, the offerrer is a newcomer, and might be the
> sybil of a hacker that is trying to tie up its liquidity.
> > The first hop would (1) avoid this risk and (2) earn more
> upfront fees because it does not forward those fees to later hops.
> >
> > -   This is arguably custodial and not your keys not your coins
> applies.
> > Thus, it returns us back to tr\*st anyway.
> >
> > -   We can require that the first hop prove where along the route
> errored.
> > If it provably failed at a later hop, then the first hop can claim
> more as upfront fees, since it will forward the upfront fees to the later
> hop as well.
> > -   This has to be enforcable onchain in case the channel gets
> dropped onchain.
> > Is there a proposal SCRIPT which can enforce this?
> >
> > -   If not enforcable onchain, then there may be onchain shenanigans
> possible and thus this solution might introduce an attack vector even as it
> fixes another.
> > -   On the other hand, sub-satoshi amounts are not enforcable
> onchain too, and nobody cares, so...
>
> One thing I have been thinking about, but have not proposed seriously yet,
> would be "incremental routing".
>
> Basically, the route of pending HTLCs also doubles as an encrypted
> bidirectional tunnel.
>
> Let me first describe how I imagine this "incremental routing" would look
> like.
>
> First, you offer an HTLC with a direct peer.
> The data with this HTLC includes a point, which the peer will ECDH with
> its own privkey, to form a shared secret.
> You can then send additional messages to that node, which it will decrypt
> using the shared secret as the symmetric encryption key.
> The node can also reply to those messages, by encrypting it with the same
> symmetric encryption key.
> Typically this will be via a stream cipher which is XORed with the real
> data.
>
> One of the messages you can send to that node (your direct peer) would be
> "please send out an HTLC to this peer of yours".
> Together with that message, you could also bump up the value of the HTLC,
> and possibly the CLTV delta, you have with that node.
> This bumping up is the forwarding fee and resolution time you have to give
> to that node in order to have it safely put an HTLC to the next hop.
>
> If there is a problem on the next hop, the node replies 

Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-08 Thread Antoine Riard
> There is no need to stop the channel's operations while you're updating
these parameters, since
they can be updated unilaterally anyway

I think it's just a matter of how you define the channel's operations:
either emptying out all pending HTLCs, or something closer to an
`update_fee`-like semantic. You're right that the latter should be good
enough for the set of parameters you're proposing. A lightweight
`update_policy` doesn't sound difficult at first sight.

Le jeu. 8 oct. 2020 à 08:23, Bastien TEINTURIER  a écrit :

> Good morning Antoine and Zman,
>
> Thanks for your answers!
>
> I was thinking dynamic policy adjustment would be covered by the dynamic
>> commitment mechanism proposed by Laolu
>
>
> I didn't mention this as I think we still have a long-ish way to go before
> dynamic commitments
> are spec-ed, implemented and deployed, and I think the parameters I'm
> interested in don't require
> that complexity to be updated.
>
> Please forget about channel jamming, upfront fees et al and simply
> consider the parameters I'm
> mentioning. It feels to me that these are by nature dynamic channel
> parameters (some of them are
> even present in `channel_update`, but no-one updates them yet because
> direct peers don't take the
> update into account anyway). I'd like to raise `htlc_minimum_msat` on some
> big channels because
> I'd like these channels to be used only for big-ish payments. Today I
> can't, I have to close that
> channel and open a new one for such a trivial configuration update, which
> is sad.
>
> There is no need to stop the channel's operations while you're updating
> these parameters, since
> they can be updated unilaterally anyway. The only downside is that if you
> make your policy stricter,
> your peer may send you some HTLCs that you will immediately fail
> afterwards; it's only a minor
> inconvenience that won't trigger a channel closure.
>
> I'd like to know if other implementations than eclair have specificities
> that would make this
> feature particularly hard to implement or undesirable.
>
> Thanks,
> Bastien
>
> Le mar. 6 oct. 2020 à 18:43, ZmnSCPxj  a écrit :
>
>> Good morning Antoine, and Bastien,
>>
>>
>> > Instead of relying on reputation, the other alternative is just to have
>> an upfront payment system, where a relay node doesn't have to account for a
>> HTLC issuer reputation to decide acceptance and can just forward a HTLC as
>> long it paid enough. More, I think it's better to mitigate jamming with a
>> fees-based system than a web-of-trust one, less burden on network newcomers.
>>
>> Let us consider some of the complications here.
>>
>> A newcomer wants to make an outgoing payment.
>> Speculatively, it connects to some existing nodes based on some policy.
>>
>> Now, since forwarding is upfront, the newcomer fears that the node it
>> connected to might not even bother forwarding the payment, and instead just
>> fail it and claim the upfront fees.
>>
>> In particular: how would the newcomer offer upfront fees to a node it is
>> not directly channeled with?
>> In order to do that, we would have to offer the upfront fees for that
>> node, to the node we *are* channeled with, so it can forward this as well.
>>
>> * We can give the upfront fee outright to the first hop, and trust that
>> if it forwards, it will also forward the upfront fee for the next hop.
>>   * The first hop would then prefer to just fail the HTLC then and there
>> and steal all the upfront fees.
>> * After all, the offerrer is a newcomer, and might be the sybil of a
>> hacker that is trying to tie up its liquidity.
>>   The first hop would (1) avoid this risk and (2) earn more upfront
>> fees because it does not forward those fees to later hops.
>>   * This is arguably custodial and not your keys not your coins applies.
>> Thus, it returns us back to tr\*st anyway.
>> * We can require that the first hop prove *where* along the route errored.
>>  If it provably failed at a later hop, then the first hop can claim more
>> as upfront fees, since it will forward the upfront fees to the later hop as
>> well.
>>   * This has to be enforcable onchain in case the channel gets dropped
>> onchain.
>> Is there a proposal SCRIPT which can enforce this?
>>   * If not enforcable onchain, then there may be onchain shenanigans
>> possible and thus this solution might introduce an attack vector even as it
>> fixes another.
>> * On the other hand, sub-satoshi amounts are not enforcable onchain
>> too, and nobody cares, so...
>>
>> On the other hand, a web-of-tr\*st might not be *that* bad.
>>
>> One can say that "tr\*st is risk", and consider that the size and age of
>> a channel to a peer represents your tr\*st that that peer will behave
>> correctly for fast and timely resolution of payments.
>> And anyone can look at the blockchain and the network gossip to get an
>> idea of who is generally considered tr\*stworthy, and since that
>> information is backed by Bitcoins locked in channels, this is reasonably
>> hard 

Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-06 Thread Antoine Riard
Hello Bastien,

I'm all in for a model where channel transactions are pre-signed with a
reasonable minimal relay fee and the adjustment is done by the closer. The
channel initiator shouldn't have to pay for channel-closing as it's somehow
a liquidity allocation decision ("My balance could be better allocated
elsewhere than in this channel").

That said, a channel closing might be triggered by a security mechanism,
like an HTLC that must be timed out onchain. A malicious counterparty can
easily loop an HTLC forward through an honest peer and then withhold its
settlement, forcing the honest counterparty to pay onchain fees to avoid an
offered HTLC not being claimed back in time.

AFAICT, this issue is not solved by anchor outputs. A way to disincentivize
this kind of behavior from a malicious counterparty is an upfront payment
where the HTLC hold fee per block * the HTLC block-buffer-before-onchain is
higher than the cost of going onchain. It should cost more for the
counterparty to withhold an HTLC than to pay the onchain fees of closing
the channel.
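To make the incentive concrete, here is a minimal sketch comparing the
cumulative upfront cost of withholding an HTLC against the on-chain cost of
the unilateral close it provokes. All names, weights and fee values here are
illustrative assumptions, not part of any spec or implementation:

```python
# Illustrative sketch: an upfront "hold fee" model where withholding an HTLC
# for its whole CLTV buffer costs more than forcing an on-chain close.
# All names and numbers are hypothetical.

def withholding_cost(hold_fee_per_block_msat: int, blocks_before_onchain: int) -> int:
    """Cumulative upfront fee paid by the HTLC holder while it withholds."""
    return hold_fee_per_block_msat * blocks_before_onchain

def closing_cost(commit_tx_weight: int, feerate_per_kw: int) -> int:
    """Approximate on-chain fee (msat) of a unilateral close at a given feerate."""
    return (commit_tx_weight * feerate_per_kw // 1000) * 1000  # sat -> msat

hold_fee = 50_000   # msat charged per block the HTLC stays unresolved (illustrative)
buffer = 144        # blocks before the honest peer must go on-chain
commit_weight = 724 # weight of a commitment tx with no pending HTLCs (pre-anchor)
feerate = 2_500     # sat per kiloweight

print(withholding_cost(hold_fee, buffer) > closing_cost(commit_weight, feerate))
# True here: withholding ends up costlier than the forced close it provokes.
```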

Or can you think about another mitigation for the issue raised above ?

Antoine

Le lun. 5 oct. 2020 à 09:13, Bastien TEINTURIER via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning list,
>
> It seems to me that the "funder pays all the commit tx fees" rule exists
> solely for simplicity
> (which was totally reasonable). I haven't been able to find much
> discussion about this decision
> on the mailing list nor in the spec commits.
>
> At first glance, it's true that at the beginning of the channel lifetime,
> the funder should be
> responsible for the fee (it's his decision to open a channel after all).
> But as time goes by and
> both peers earn value from this channel, this rule becomes questionable.
> We've discovered since
> then that there is some risk associated with having pending HTLCs
> (flood-and-loot type of attacks,
> pinning, channel jamming, etc).
>
> I think that *in some cases*, fundees should be paying a portion of the
> commit-tx on-chain fees,
> otherwise we may end up with a web-of-trust network where channels would
> only exist between peers
> that trust each other, which is quite limiting (I'm hoping we can do
> better).
>
> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
> that steal funds come from
> the fact that a routing node has paid downstream but cannot claim the
> upstream HTLCs (correct me
> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
> the HTLCs they offer
> while they're pending in the commit-tx, regardless of whether they're
> funder or fundee.
>
> The simplest way to do this would be to deduce the HTLC cost (172 *
> feerate) from the offerer's
> main output (instead of the funder's main output, while keeping the base
> commit tx weight paid
> by the funder).
>
> A more extreme proposal would be to tie the *total* commit-tx fee to the
> channel usage:
>
> * if there are no pending HTLCs, the funder pays all the fee
> * if there are pending HTLCs, each node pays a proportion of the fee
> proportional to the number of
> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
> pays 75% of the
> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
> redistributed.
>
> This model uses the on-chain fee as collateral for usage of the channel.
> If Alice wants to forward
> HTLCs through this channel (because she has something to gain - routing
> fees), she should be taking
> on some of the associated risk, not Bob. Bob will be taking the same risk
> downstream if he chooses
> to forward.
>
> I believe it also forces the fundee to care about on-chain feerates, which
> is a healthy incentive.
> It may create a feedback loop between on-chain feerates and routing fees,
> which I believe is also
> a good long-term thing (but it's hard to predict as there may be negative
> side-effects as well).
>
> What do you all think? Is this a terrible idea? Is it okay-ish, but not
> worth the additional
> complexity? Is it an amazing idea worth a lightning nobel? Please don't
> take any of my claims
> for granted and challenge them, there may be negative side-effects I'm
> completely missing, this is
> a fragile game of incentives...
>
> Side-note: don't forget to take into account that the fees for HTLC
> transactions (second-level txs)
> are always paid by the party that broadcasts them (which makes sense). I
> still think this is not
> enough and can even be abused by fundees in some setups.
>
> Thanks,
> Bastien
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-06 Thread Antoine Riard
Hello Bastien,

As a first note, I was thinking dynamic policy adjustment would be covered
by the dynamic commitment mechanism proposed by Laolu, as it presents the
same trade-offs: you need to stop channel HTLC processing before upgrading,
otherwise it might falsify your whole in-flight HTLC accounting.

> Recent discussions around channel jamming [1] have highlighted again the
> need to think twice when
> configuring your channels parameters.

I'm still dubious that stricter channel parameters are the best solution to
channel jamming. As a routing node evaluating an HTLC, I think the question
you're trying to answer is: "Is this an _honest_ HTLC to relay?", where
honest is defined both as paying more in fees than the liquidity lock costs
and as having high odds of a positive settlement, otherwise you won't get
paid.

The first predicate is easy to evaluate: just verify that the incoming HTLC
pays you more than what you have to forward.
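As a sketch of that first check, using the usual base + proportional fee
formula from `channel_update` (the default values here are illustrative):

```python
# Sketch of the "does this HTLC pay enough?" predicate for a relay node.
# fee_base_msat / fee_proportional_millionths follow the usual channel_update
# fee formula; the default values are illustrative.

def expected_fee_msat(amount_out_msat: int, fee_base_msat: int,
                      fee_proportional_millionths: int) -> int:
    return fee_base_msat + amount_out_msat * fee_proportional_millionths // 1_000_000

def pays_enough(amount_in_msat: int, amount_out_msat: int,
                fee_base_msat: int = 1_000,
                fee_proportional_millionths: int = 100) -> bool:
    offered_fee = amount_in_msat - amount_out_msat
    return offered_fee >= expected_fee_msat(amount_out_msat, fee_base_msat,
                                            fee_proportional_millionths)

print(pays_enough(amount_in_msat=1_001_200, amount_out_msat=1_000_000))  # True
```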

On the other hand, the second predicate is hard to evaluate. A first lead
toward a solution is to evaluate the packet forwarder instead of the packet
itself. You may have a web-of-trust/reputation system with a one-level rank
of trust which would be enforced at the channel opening layer, i.e. don't
open/accept channels with random nodes. A less constraining version is still
a reputation system, but one where you statically attribute an HTLC
forwarding policy based on counterparty reputation (~today).

The more evolved reputation system you implicitly seem to argue for is one
adapting the forwarding policy based on the counterparty's past behavior,
e.g. relaxing channel parameters for a counterparty upstreaming a lot of
successful HTLCs. IMO, this still presents hurdles.

If you have 1 BTC of outgoing bandwidth but your counterparty is enforcing
a `max_htlc_value_in_flight_msat` of 0.5 BTC, it means you have "sleeping"
outgoing liquidity. Rationally, you should only open a channel with a
capacity roughly equivalent to what is authorized by your counterparty's
relay policy.
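A toy sketch of that "sleeping liquidity" arithmetic, with illustrative
values:

```python
# Sketch of the "sleeping liquidity" point above: outgoing capacity that the
# counterparty's relay policy will never let you use at once. Values illustrative.

BTC = 100_000_000  # sats

outgoing_capacity = 1 * BTC
max_htlc_value_in_flight = BTC // 2   # counterparty policy: 0.5 BTC

usable_at_once = min(outgoing_capacity, max_htlc_value_in_flight)
sleeping = outgoing_capacity - usable_at_once
print(sleeping)  # 50_000_000 sats locked in the channel but never usable in-flight
```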

A lesson would be to negotiate a policy first, then an opening; as of today
they're still bundled in one message flow. I don't think you can reduce the
capacity once you learn the acceptor's policy? Don't stake more liquidity
than you can actually gain from.

That said, if you have a dynamic policy model, at each policy relaxation you
need to increase channel capacity to profit from it, say through some kind
of splice-in. But now you have on-chain fees at each policy/liquidity
adjustment.

Under a dynamic policy model based on accumulated reputation, it sounds
like there is some kind of trade-off between useless off-chain liquidity
and on-chain fees.

Instead of relying on reputation, the other alternative is just to have an
upfront payment system, where a relay node doesn't have to account for a
HTLC issuer reputation to decide acceptance and can just forward a HTLC as
long it paid enough. More, I think it's better to mitigate jamming with a
fees-based system than a web-of-trust one, less burden on network newcomers.

This doesn't prevent hybrid models where you might reward your well-behaving
peers with a discount on your upfront payment policy.

What's your opinion ?

Antoine

Le lun. 5 oct. 2020 à 07:54, Bastien TEINTURIER via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good evening list,
>
> Recent discussions around channel jamming [1] have highlighted again the
> need to think twice when
> configuring your channels parameters. There are currently parameters that
> are set once at channel
> creation that would benefit a lot from being configurable throughout the
> lifetime of the channel
> to avoid closing channels when we just want to reconfigure them:
>
> * max_htlc_value_in_flight_msat
> * max_accepted_htlcs
> * htlc_minimum_msat
> * htlc_maximum_msat
>
> Nodes can currently unilaterally udpate these by applying forwarding
> heuristics, but it would be
> better to tell our peer about the limits we want to put in place
> (otherwise we're wasting a whole
> cycle of add/commit/revoke/fail messages for no good reason).
>
> I suggest adding tlv records in `commitment_signed` to tell our channel
> peer that we're changing
> the values of these fields.
>
> Is someone opposed to that?
> Are there other fields you think would need to become dynamic as well?
> Do you think that needs a new message instead of using extensions of
> `commitment_signed`?
>
> Cheers,
> Bastien
>
> [1] https://twitter.com/joostjgr/status/1308414364911841281
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-09-13 Thread Antoine Riard
Hi Johan,

> I would be open to patching the spec to disallow update_fee for anchor
> channels, but maybe we can just add a warning and discourage it.

My initial thinking was just to restrict it to the commitment-transaction level only.

Completely adhering to the bring-your-own-fee model for HTLC-txn sounds
better, as it splits the fee burden more fairly between channel participants.
The initiator won't have to pay for the remote's HTLC-txn, especially in
periods of high congestion. A participant shouldn't have to bear the cost of
the counterparty choosing to go onchain, as it's mostly a client security
parameter ("how many blocks will it take me to confirm?") or an economic
decision ("is this HTLC worth claiming or letting expire?").

One could argue it increases the blockspace footprint as you will use one
more input-output pair, but if you're paying the going feerate that's
legitimate usage.

Antoine

Le ven. 11 sept. 2020 à 04:15, Johan Torås Halseth  a
écrit :

> Hi,
>
> Very good observation, most definitely not a type of attack I forseen!
>
> Luckily, it was the plan to phase out update_fee all along, in favor
> of only accepting the minimum relay fee (zero fee if/when package
> relay is a reality). If I understand the scenario correctly, that
> should mitigate this attack completely, as the attacker cannot impact
> the intended miner fees on the HTLCs, and could only siphon off the
> minimal miner fee if anything at all.
>
> I would be open to patching the spec to disallow update_fee for anchor
> channels, but maybe we can just add a warning and discourage it.
>
> Johan
>
>
> On Thu, Sep 10, 2020 at 8:13 PM Olaoluwa Osuntokun 
> wrote:
> >
> > Hi Antoine,
> >
> > Great findings!
> >
> > I think an even simpler mitigation is just for the non-initiator to
> _reject_
> > update_fee proposals that are "unreasonable". The non-initiator can run a
> > "fee leak calculation" to compute the worst-case leakage of fees in the
> > revocation case. This can be done to day without any significant updates
> to
> > implementations, and some implementations may already be doing this.
> >
> > One issue is that we don't have a way to do a "soft reject" of an
> update_fee
> > as is. However, depending on the implementations, it may be possible to
> just
> > reconnect and issue a co-op close if there're no HTLCs on the commitment
> > transaction.
> >
> > As you mentioned by setting proper values for max allowed htlcs, max in
> > flight, reserve, etc, nodes are able to quantify this fee leak risk
> ahead of
> > time, and set reasonable parameters based on their security model. One
> issue
> > is that these values are set in stone rn when the channel is opened, but
> > future iterations of dynamic commitments may allow us to update them on
> the
> > fly.
> >
> > In the mid-term, implementations can start to phase out usage of
> update_fee
> > by setting a minimal commitment fee when the channel is first opened,
> then
> > relying on CPFP to bump up the commitment and any HTLCs if needed. This
> > discovery might very well hasten the demise of update_fee in the protocol
> > all together as well.  I don't think we need to depend entirely on a
> > theoretical package relay Bitcoin p2p upgrade assuming implementations
> are
> > willing to make an assumption that say 20 sat/byte or w/e has a good
> chance
> > of widespread propagation into mempools.
> >
> > From the perspective of channel safety, and variations of attacks like
> > "flood & loot", imo it's absolutely critical that nodes are able to
> update
> > the fees on their second-level HTLC transactions. As this is where the
> real
> > danger lies: if nodes aren't able to get 2nd level HTLCs in the chain in
> > time, then the incoming HTLC expiry will expire, creating a race
> condition
> > across both commitments which can potentially cascade.
> >
> > In lnd today, anchors is still behind a build flag, but we plan to enable
> > it by default for our upcoming 0.12 release. The blockers on our end
> were to
> > add support for towers, and add basic deadline aware bumping, both of
> which
> > are currently on track. We'll now also look into setting clamps on the
> > receiver end to just not accept unreasonable values for the fee rate of a
> > commitment, as this ends up eating into the true HTLC values for both
> sides.
> >
> > -- Laolu
> >
> >
> > On Thu, Sep 10, 2020 at 9:28 AM Antoine Riard 
> wrote:
> >>
> >> Hi,
> >>
> >> In this post, I would like to expose a potential vulnerability

Re: [Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-09-13 Thread Antoine Riard
Hi Laolu,

> I think an even simpler mitigation is just for the non-initiator to
_reject_
> update_fee proposals that are "unreasonable". The non-initiator can run a
> "fee leak calculation" to compute the worst-case leakage of fees in the
> revocation case. This can be done to day without any significant updates
to
> implementations, and some implementations may already be doing this.

Yes, that is what I meant by acceptance bounds on `update_fee`. I think such
an algorithm is likely a good short-term measure. But such worst-case
leakage will be a function first of your other policy parameters
(`max_accepted_htlcs`, `max_htlc_value_in_flight`) and secondly of mempool
fluctuations.

Being constrained by the mempool isn't great, as you might have to
effectively accept a high-feerate `update_fee` which can be exploited
against you in a low-congestion period.
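A rough sketch of such an acceptance bound, assuming the worst case is every
in-flight HTLC transaction having its absolute fee siphoned at the proposed
feerate (the 2nd-stage weights follow the anchor spec, the policy knobs are
illustrative):

```python
# Sketch of an `update_fee` acceptance bound based on worst-case fee leakage:
# if every in-flight HTLC transaction's absolute fee could be captured by a
# cheating counterparty, cap how large that total may grow.
# Weights are the post-anchor 2nd-stage HTLC weights; the other values are
# illustrative policy knobs.

HTLC_TIMEOUT_WEIGHT = 666
HTLC_SUCCESS_WEIGHT = 706

def worst_case_fee_leak_sat(feerate_per_kw: int, max_accepted_htlcs: int) -> int:
    # Assume the heavier HTLC-success weight for every slot, to stay conservative.
    per_htlc_fee = HTLC_SUCCESS_WEIGHT * feerate_per_kw // 1000
    return per_htlc_fee * max_accepted_htlcs

def accept_update_fee(proposed_feerate_per_kw: int, max_accepted_htlcs: int,
                      max_tolerated_leak_sat: int) -> bool:
    return worst_case_fee_leak_sat(proposed_feerate_per_kw,
                                   max_accepted_htlcs) <= max_tolerated_leak_sat

print(accept_update_fee(5_000, max_accepted_htlcs=30, max_tolerated_leak_sat=50_000))
# False: at 5_000 sat/kw, 30 slots could leak ~105_900 sats, above the tolerance.
```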

> One issue
> is that these values are set in stone rn when the channel is opened, but
> future iterations of dynamic commitments may allow us to update them on
the
> fly.

Yep, we should remember to layer dynamic commitments: you should be able to
upgrade the channel type and then refresh your channel policy, parameters
being a function of the type. That way node operators will be able to loosen
them once the peer has proven to be reliable and honest, e.g. providing a
minimal amount of routing fees or fair uptime.

> I don't think we need to depend entirely on a
> theoretical package relay Bitcoin p2p upgrade assuming implementations are
> willing to make an assumption that say 20 sat/byte or w/e has a good
chance
> of widespread propagation into mempools.

Let's not make any assumptions on near-term package relay support ;)

We can get rid of `update_fee` for HTLC-txn only, and keep it for the
commitment transaction for now, as there the vulnerability is more of a
limited griefing concern. Since I guess we're all implementing CPFP (mainly
keeping fresh bump utxos), we can reuse this piece of logic to alternatively
attach inputs to HTLC-txn. We keep the malleability that lets you
unilaterally react to congestion/flood & loot, and remove the inflation
vector.

Antoine

Le jeu. 10 sept. 2020 à 14:13, Olaoluwa Osuntokun  a
écrit :

> Hi Antoine,
>
> Great findings!
>
> I think an even simpler mitigation is just for the non-initiator to
> _reject_
> update_fee proposals that are "unreasonable". The non-initiator can run a
> "fee leak calculation" to compute the worst-case leakage of fees in the
> revocation case. This can be done to day without any significant updates to
> implementations, and some implementations may already be doing this.
>
> One issue is that we don't have a way to do a "soft reject" of an
> update_fee
> as is. However, depending on the implementations, it may be possible to
> just
> reconnect and issue a co-op close if there're no HTLCs on the commitment
> transaction.
>
> As you mentioned by setting proper values for max allowed htlcs, max in
> flight, reserve, etc, nodes are able to quantify this fee leak risk ahead
> of
> time, and set reasonable parameters based on their security model. One
> issue
> is that these values are set in stone rn when the channel is opened, but
> future iterations of dynamic commitments may allow us to update them on the
> fly.
>
> In the mid-term, implementations can start to phase out usage of update_fee
> by setting a minimal commitment fee when the channel is first opened, then
> relying on CPFP to bump up the commitment and any HTLCs if needed. This
> discovery might very well hasten the demise of update_fee in the protocol
> all together as well.  I don't think we need to depend entirely on a
> theoretical package relay Bitcoin p2p upgrade assuming implementations are
> willing to make an assumption that say 20 sat/byte or w/e has a good chance
> of widespread propagation into mempools.
>
> From the perspective of channel safety, and variations of attacks like
> "flood & loot", imo it's absolutely critical that nodes are able to update
> the fees on their second-level HTLC transactions. As this is where the real
> danger lies: if nodes aren't able to get 2nd level HTLCs in the chain in
> time, then the incoming HTLC expiry will expire, creating a race condition
> across both commitments which can potentially cascade.
>
> In lnd today, anchors is still behind a build flag, but we plan to enable
> it by default for our upcoming 0.12 release. The blockers on our end were
> to
> add support for towers, and add basic deadline aware bumping, both of which
> are currently on track. We'll now also look into setting clamps on the
> receiver end to just not accept unreasonable values for the fee rate of a
> commitment, as this ends up eating into the true HTLC values for both
>

[Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-09-10 Thread Antoine Riard
Hi,

In this post, I would like to expose a potential vulnerability introduced
by the recent anchor output spec update related to the new usage of
SIGHASH_SINGLE for HTLC transactions. This new malleability combined with
the currently deployed mechanism of `update_fee` is likely harmful for
funds safety.

This has been previously shared with the devs of deployed implementations;
as anchor channels are flagged as experimental, it's better to discuss and
solve this publicly. That said, if you're currently running experimental
anchor channels with non-trusted parties on mainnet, you might prefer to
close them.

# SIGHASH_SINGLE and `update_fee` (skip it if you're familiar)

First, let's start with a quick reminder of the data set committed to by the
signature digest algorithm of Segwit transactions (BIP 143):
* nVersion
* hashPrevouts
* hashSequence
* outpoint
* scriptCode of the input
* value of the output spent by this input
* nSequence of the input
* hashOutputs
* nLocktime
* sighash type of the signature

Anchor outputs switched the sighash type from SIGHASH_ALL to SIGHASH_SINGLE
| SIGHASH_ANYONECANPAY for the HTLC signatures sent to your counterparty.
Thus your counterparty can non-cooperatively spend its HTLC outputs on its
commitment transactions. I.e. when Alice broadcasts her commitment
transaction, every one of Bob's signatures on Alice's HTLC-Success/Timeout
transactions now carries the new sighash type.

Thus `hashPrevouts`, `hashSequence` (ANYONECANPAY) and `hashOutputs`
(SINGLE) aren't committed anymore. SINGLE only enforces commitment to the
output scriptpubkey/amount at the same index as the spending input. Alice is
free to attach additional inputs/outputs to her HTLC transaction. This
change aims to let a single party bump the feerate of 2nd-stage HTLC
transactions in case of mempool congestion, without counterparty
cooperation, and thus make HTLC funds safer.
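A minimal sketch of which of the digest fields listed above remain committed
under each flag combination (simplified, not a full BIP 143 implementation):

```python
# Sketch: which BIP 143 digest components are still committed to, per sighash
# flag, restricted to the fields listed above. Simplified, not a full sighash
# implementation.

SIGHASH_ALL = 0x01
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

def committed_fields(sighash: int) -> set:
    # These components are committed regardless of the flags.
    fields = {"nVersion", "outpoint", "scriptCode", "value", "nSequence",
              "nLocktime", "sighash_type"}
    anyonecanpay = bool(sighash & SIGHASH_ANYONECANPAY)
    base = sighash & 0x1f
    if not anyonecanpay:
        fields |= {"hashPrevouts"}
        if base == SIGHASH_ALL:
            fields |= {"hashSequence"}
    if base == SIGHASH_ALL:
        fields |= {"hashOutputs"}          # all outputs
    elif base == SIGHASH_SINGLE:
        fields |= {"hashOutputs(single)"}  # only the output at the input's index
    return fields

old = committed_fields(SIGHASH_ALL)
new = committed_fields(SIGHASH_SINGLE | SIGHASH_ANYONECANPAY)
print(old - new)  # the full-tx commitments dropped by the new HTLC signatures:
                  # {'hashPrevouts', 'hashSequence', 'hashOutputs'}
```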

The attached outputs are _not_ encumbered by a revokeable redeemscript for
a potential punishment.

That said, the anchor output spec didn't disable the current fee mechanism
already covering HTLC transactions. Pre/post-anchor channels negotiate a
feerate through an `update_fee` exchange, initiated by the channel funder.
This `update_fee` can be rejected by the receiver if it's deemed
unreasonable compared to your local fee estimator's view, but as of today
implementations are pretty liberal in their acceptance, admitting a
divergence anywhere from a small factor to no bound at all.

This negotiated feerate (`feerate_per_kw`) is used by channel participants
to compute effective fees, which have to be deducted either from the
funder's balance output for commitment transactions or from the HTLC output
value for HTLC transactions.

# The Vulnerability : a Penalty Escape Vector

By increasing the feerate thanks to `update_fee`, a malicious party can
inflate the fees committed on HTLC input/output pairs and redirect this
inflated fee to an output under her sole control attached to these malleable
pairs. This isn't punishable by an honest party in case of revoked state
broadcast, and thus lets her partially escape the penalty.

As an example, Alice and Bob have a 100_000 sats channel. `feerate_per_kw`
is 10_000 sats (i.e. 10 sat per weight unit).

At state N, Alice's balance is all on her side. She offers 10 outgoing HTLCs
of 7000 sats each.

As the commitment tx weight with 10 HTLC outputs is 2844 (post-anchor), the
absolute fee committed is 28440 sats.

As the HTLC-timeout weight is 666 (post-anchor), the absolute fee committed
is 6660 sat and the HTLC tx output, as counter-signed by Bob, is 340 sat.
This absolute fee aims to pay the miner fee in case Alice needs to time out
the HTLC onchain.

Her remaining balance is 1560 sat, above both `dust_limit_satoshis` and the
channel reserve as constrained by Bob (likely 1%).

Alice waits for the HTLCs to expire and advances the state to N+1. Then she
empties her balance minus reserve by sending an HTLC, relayed by Bob, either
to a colluding channel on the rest of the network or back to an onchain
address thanks to a swap service.

At state N+2, Alice finalizes the HTLC-timeouts of state N by capturing
almost all of their absolute fee to a new P2WPKH output controlled only by
her. She broadcasts the revoked commitment tx N and burns 28440 sats in
commitment fee.

Her balance of 1560 sats is punished by Bob's justice transaction.

After confirmation, and thus maturation of the 1-block CSV on her HTLC
outputs, Alice broadcasts her 10 HTLC-timeouts, each sending back to her
6660 sat minus 660 sat paid as a low fee. Bob punishes the 10 HTLC-timeout
outputs of 340 sats each.

Alice's gain = 99_000 (swap spend) + 66_600 (HTLCs fee escape) - 1560
(commitment balance punishment) - 28440 (commitment fee) - 660*10 (HTLC
fees) - 340*10 (HTLC outputs) = 125_600 sats.

Alice's gain exceeds the channel value, as the channel has effectively been
partially double-spent by bypassing the revocation punishment.
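A short sketch re-deriving the numbers above, using the post-anchor weights
quoted in this example:

```python
# Sketch re-deriving the numbers of the example above (amounts in sats,
# post-anchor weights as quoted in the text).

feerate_per_kw = 10_000          # 10 sat per weight unit
commit_weight_10_htlcs = 2_844   # commitment tx with 10 HTLC outputs
htlc_timeout_weight = 666
n_htlcs = 10
htlc_value = 7_000

commit_fee = commit_weight_10_htlcs * feerate_per_kw // 1000    # 28_440
htlc_tx_fee = htlc_timeout_weight * feerate_per_kw // 1000      # 6_660
htlc_tx_output = htlc_value - htlc_tx_fee                       # 340
punished_balance = 100_000 - n_htlcs * htlc_value - commit_fee  # 1_560 at state N

swap_spend = 99_000                     # balance minus reserve, drained at N+1
escaped_fees = n_htlcs * htlc_tx_fee    # 66_600 captured via attached outputs
low_fee_paid = n_htlcs * 660            # minimal fee left on each HTLC-timeout
punished_outputs = n_htlcs * htlc_tx_output  # 3_400 claimed by Bob's justice txs

alice_gain = (swap_spend + escaped_fees - punished_balance
              - commit_fee - low_fee_paid - punished_outputs)
print(alice_gain)  # 125_600, i.e. more than the 100_000 sat channel capacity
```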

# Limitations of Attacker Success

A first limitation of attack success which can be pointed out is the fact
that post-anchor HTLC outputs are CSV'ed by 1, which means in theory a 

Re: [Lightning-dev] Proposal for skip channel confirmation.

2020-08-25 Thread Antoine Riard
Hi Zeeman,

> i.e. I send my high-fee RBF-enabled channel funding to you, at the same
time I send a conflicting low-fee RBF-disabled transaction (that pays the
entire channel amount to myself) to all the miners I can find.

Mapping miners' mempools will require spying infrastructure and thus make
the malicious routing node's job harder, providing a security improvement
for zero-conf channels. I used "lower trust" intentionally, it's not binary
(what about opening a channel with a reorg-powerful counterparty ?).

> And your fullnode will not see the conflicting low-fee RBF-disabled tx
either because it is lower fee than what you have in your mempool and you
will reject it.

I was assuming a no-mempool mobile LN client, thus not going to be blinded
by your high-fee RBF, but still able to speak with the p2p network, so you
can actively check that your transaction has been accepted by ~100 peers.

Overall, I don't think this scheme is worth working on unless double-spends
of zero-conf chans become a real issue; just to mention that we have
potential solutions in that case.

> There Ain't No Such Thing As A Global Mempool!

I know :)

Le mar. 25 août 2020 à 03:38, ZmnSCPxj  a écrit :

> Good morning Antoine,
>
> > Hi Roei,
> > You might have a mechanism to lower trust in zero-conf channel opener.
> Actually the local party can be in charge of broadcasting the funding
> transaction, thus ensuring it's well-propagated across network mempools and
> then start to accept incoming payment on the zero-conf channel. Per BIP 125
> rules, a malicious funder/opener would have to pay a higher fee to replace
> the channel funding tx and thus double-spend the HTLC. A local party may
> require a higher fee funding transaction than it is necessary wrt ongoing
> congestion to increase level of protection. And I think it's okay on the
> economic-side, you will amortize this fee premium on the channel lifecycle.
> Until the transaction gets confirmed you might only accept HTLC under this
> fee. So you have game-theory security for your zero-conf channels as it
> would cost more in fees than a HTLC double-spend win for the malicious
> opener, under the assumption of non-miner-collusion with the attacker.
>
> Since RBF is opt-in for Bitcoin Core nodes, and I believe most miners are
> running Bitcoin Core, it is trivial to double-broadcast.
> i.e. I send my high-fee RBF-enabled channel funding to you, at the same
> time I send a conflicting low-fee RBF-disabled transaction (that pays the
> entire channel amount to myself) to all the miners I can find.
>
> Since the miners received an RBF-disabled tx, they will not evict it even
> if they see a higher-fee RBF-enabled tx.
> And your fullnode will not see the conflicting low-fee RBF-disabled tx
> either because it is lower fee than what you have in your mempool and you
> will reject it.
>
> You really have to trust that I do not do this when I offer a channel to
> you.
>
> There Ain't No Such Thing As A Global Mempool!
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for skip channel confirmation.

2020-08-24 Thread Antoine Riard
Hi Roei,

You might have a mechanism to lower trust in zero-conf channel opener.
Actually the local party can be in charge of broadcasting the funding
transaction, thus ensuring it's well-propagated across network mempools and
then start to accept incoming payment on the zero-conf channel. Per BIP 125
rules, a malicious funder/opener would have to pay a higher fee to replace
the channel funding tx and thus double-spend the HTLC. A local party may
require a higher fee funding transaction than it is necessary wrt ongoing
congestion to increase level of protection. And I think it's okay on the
economic-side, you will amortize this fee premium on the channel lifecycle.
Until the transaction gets confirmed you might only accept HTLC under this
fee. So you have game-theory security for your zero-conf channels as it
would cost more in fees than a HTLC double-spend win for the malicious
opener, under the assumption of non-miner-collusion with the attacker.
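As a sketch of that acceptance rule, with hypothetical names and an
illustrative safety margin (the policy is simply to keep zero-conf exposure
below the cost of replacing the funding transaction):

```python
# Sketch of a zero-conf acceptance policy: while the funding tx is unconfirmed,
# only accept incoming HTLCs whose cumulative value stays below what the opener
# would have to burn (per BIP 125, a replacement must at least pay the absolute
# fee of the tx it evicts) to double-spend the funding tx.
# Names, the margin and the numbers are illustrative.

def max_zero_conf_exposure_sat(funding_fee_sat: int, safety_margin: float = 0.5) -> int:
    # Replacing the funding tx costs at least its absolute fee plus an increment;
    # keep the exposure well under that, with a margin for fee estimation error.
    return int(funding_fee_sat * safety_margin)

def accept_incoming_htlc(pending_zero_conf_sat: int, htlc_value_sat: int,
                         funding_fee_sat: int) -> bool:
    return (pending_zero_conf_sat + htlc_value_sat
            <= max_zero_conf_exposure_sat(funding_fee_sat))

print(accept_incoming_htlc(pending_zero_conf_sat=20_000, htlc_value_sat=15_000,
                           funding_fee_sat=80_000))  # True: 35_000 <= 40_000
```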

Actually this higher fee could be attached by a zero-conf channel insurance
service through a CPFP on a special output, and spread across a batch of
funding transactions, so the replacement cost would be far higher for the
attacker while giving the same value of protection to every user involved in
the batch. That said, it might be a bit slower and more expensive than the
UX-fast zero-conf channels likely offered by wallets today, so maybe there
is no user demand for improved zero-conf channel safety.

With regards to the current proposal itself, skimming quickly through BOLT 2,
we don't explicitly require minimum_depth > 0 ? So the only new requirement
for a zero-conf chan is about the channel identifier in the absence of an
anchoring block ? If the channel is private, it doesn't matter that the id
is random, that's just a convention between channel peers. A local alias
short_channel_id can be sent to any potential payer.

Overall, I lean towards thinking this kind of proposal might belong to a
higher-layer spec, as it's more a matter of policy between nodes than a
global network mechanism. Such a higher spec could gather specifications on
other point-to-point Lightning programmable interfaces, like onion content,
assuming we keep track of collision types. It would speed up ecosystem
features while letting BOLT specifications focus on the core layer
requirements. Though it might be too early for such a thing.

Cheers,
Antoine

Le lun. 24 août 2020 à 20:22, Matt Corallo  a
écrit :

> A few notes.
>
> Given gossip messages will be rejected by many nodes if no such on-chain
> transaction exists, I don't think you can
> "re-broadcast" gossip messages at that time, instead I believe you simply
> need to not gossip until the funding
> transaction has some confirmations. Still, this shouldn't prevent
> receiving payments, as invoices carrying a last-hop
> hint should be able to indicate any short_channel_id value and have it be
> accepted.
>
> It may make sense to reuse some "private short channel ID negotiation"
> feature for the temporary 0-conf short channel id
> value.
>
> One thing this protocol doesn't capture is unidirectional 0-conf - maybe
> the channel initiator is happy to receive
> payments (since its their funds which opened the channel, this is
> reasonable), but the channel initie-ee (?) isn't
> (which, again, is reasonable). This leaves only the push_msat value
> pay-able, and only once, but is a perfectly
> reasonable trust model and I believe some wallets use this today.
>
> Matt
>
> On 8/24/20 4:16 AM, Roei Erez wrote:
> > Hello everyone,
> >
> > I would like to discuss the ability to skip a channel funding
> > transaction confirmation, making the channel fully operational before
> > its on-chain confirmation (aka a zero-conf channel).
> > Till confirmation, this channel requires trust between its two parties
> > and in the case of a remote initiator, it puts the received funds of
> > the local party at risk.
> > Nevertheless, there are cases where it makes sense to support this
> > behavior. For example, in cases both parties decide to trust each
> > other. Or, in cases where trust between the parties already exists
> > (buying a pre-loaded channel from a service like Bitrefill).
> >
> > The motivation is gained from the "Immediate on-boarding" use case:
> > * Bob is connected to a routing node and issues an invoice with a
> > routing hint that points to a fake channel between Bob and that node.
> > * When Alice pays Bob's invoice, the routing node intercepts the HTLC
> > and holds it.
> > * Then, the routing node does the following:
> >* Opens a channel to Bob where Bob has a choice of skipping funding
> >   confirmation (channel is open and active).
> >* Pays Bob the original Alices' invoice (potentially, minus a service
> fee)
> >
> >  From Bob perspective it is his choice on whether to agree for the
> > payment via this channel (and by that increase the trust) or disagree
> > and wait for confirmation.
> > Another practical way for Bob is to skip confirmation and 

Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Antoine Riard
Hi Laolu,

I think that's a must before we introduce a bunch of new features and the
number of channels explodes. The de-synchronized side could be underscored
more, as any scheduled, automatic, massive upgrade for security that forces
chain writes can be exploited to launch mempool-congestion attacks.

> Finally, the ability to update the commitment format itself will also
allow
> us to re-parametrize portions of the channels which are currently set in
> stone. As an example, right now the # of max allowed outstanding HTLCs is
> set in stone once the channel has opened. With the ability to also swap
out
> commitment _parameters_, we can start to experiment with flow-control like
> ideas such as limiting a new channel peer to only a handful of HTLC slots,
> which is then progressively increased based on "good behavior" (or the
other
> way around as well). Beyond just updating the channel parameters, it's
also
> possible to "change the rules" of a channel on the fly. An example of this
> variant would be creating a new psuedo-type that implements a fee policy
> other than "the initiator pays all fees".

Yes, there is a wide scope of events for which you want to upgrade:
* fine-grained security policies, like scaling your
`channel_reserve_satoshis`/`to_self_delay` based on my balance increasing
* fee-market adjustments, like increasing `dust_limit_satoshis` or
`feerate_per_kw`
* new features deployments, scoping channel types, commitment types, new
channel attributes (`max_dust_value`), new packets types (DLC/PTLC)
* basepoints rotation, as a good practice you may want to swap them every X
* routing strategies adjustments, increase `max_accepted_htlcs` to attract
more traffic

Given the negotiation complexity, we should cleanly dissociate the signaling
of the upgraded component ("I want to update the output type") from the
upgrade rationale ("I want to update because my fee-estimator sees a feerate
increase"). The former should obviously be part of the spec, the latter
should be left as a client-side policy, beyond avoiding footgunish behaviors.

> To solve this existing ambiguity in the channel type negotiation, we'll
> need to make the channel type used for funding _explicit_. Thankfully, we
> recently modified the message format to be forwarding looking in order to
> allow _TLV extensions_ to be added for all existing message types. A new
> `channel_type` (type #???) TLV would be added which makes the channel type
> used in funding explicit, with the existing feature bit advertisement
system
> being kept in place.

I think we may need more than channel_type, like
commitment_type/output_type. Otherwise we may have factorial complexity to
check every feature combination. As you said, TLV should make it easy.

> As we add more types, there may not be a "default" type, so making this
process explicit is important to future exploration and extensibility.

We may want a "default" type based on the most secure/private one to avoid
downgrade attack a la TLS between implementations. If we introduce dynamic
negotiation, Alice supporting type X,Y,Z with X being the less secure, Bob
a malicious routing node only signals X to force a X channel opening. To
avoid this, we may deprecate from negotiation older types once newer ones
are widely deployed. Maybe that you meant by "`channel_type transition
isn't allowed". Experimental/advanced features can stay in allowed
negotiation range but feature-gated behind some flags.


> An alternative to attaching the `channel_type` message to the `commit_sig`
> and having _that_ kick off the commitment upgrade, we could instead
possibly
> add a _new_ update message (like `update_fee`) to make the process more
> explicit. In either case, we may want to restrict things a bit by only
> allowing the initiator to trigger a commitment format update.

I favor this alternative, as your new channel type may imply a channel
data/round-trip extension (like a signatures round-trip for PTLCs). Or, if
you constrain the channel-flow policy, your added HTLCs might in fact be
disallowed. We should avoid dependencies where the validity of upgrades
depends on channel state.

Moving forward, we may want to start with a minimal first step of just
introducing a new `upgrade_channel` message and its handling flow + a new
TLV in `open/accept`, and then slowly increase the scope of what you can
actually upgrade/negotiate.

Overall, that's a really cool proposal; "dynamic commitments" doesn't say
enough about the scope, maybe "flexible channels" :) ?

Antoine

Le lun. 20 juil. 2020 à 21:18, Olaoluwa Osuntokun  a
écrit :

> Hi y'all,
>
> In this post, I'd like to share an early version of an extension to the
> spec
> and channel state machine that would allow for on-the-fly commitment
> _format/type_ changes. Notably, this would allow for us to _upgrade_
> commitment types without any on-chain activity, executed in a
> de-synchronized and distributed manner. The core realization these proposal
> is based on the fact that the funding output is the _only_ 

[Lightning-dev] Pinning : The Good, The Bad, The Ugly

2020-06-28 Thread Antoine Riard
(tl;dr Ideally network mempools should be an efficient marketplace leading
to discovery of the best-feerate blockspace demand by miners. They're not,
due to current anti-DoS rule assumptions, and that's quite harmful for
shared-utxo protocols like LN)

Hello all,

Lightning's security model relies on the unilateral capability of a channel
participant to confirm transactions, like timing out an outgoing HTLC,
claiming an incoming HTLC or punishing a revoked commitment transaction, and
thus enforcing onchain a balance negotiated offchain. This security model
actually turns the double-spend problem back into a private matter, making
it the duty of each channel participant to timely enforce its balance
against the competing interest of its counterparties. Or laid out otherwise,
contrary to a miner violating consensus rules, base layer peers don't care
about your LN node failing to broadcast a justice transaction before the
corresponding timelock expiration (CSV delay).

Ensuring effective propagation and timely confirmation of LN transactions is
thus a safety-critical operation. Its efficiency should always be evaluated
with regard to base layer network topology, tx-relay propagation rules,
mempool behaviors, the consistent policy applied by a majority of nodes and
ongoing blockspace demand. All these components are direct parameters of LN
security. Due to the network being public, a malicious channel counterparty
does have an incentive to tweak them to steal from you.

The pinning attacks which have been discussed for a few months are a direct
illustration of this model. Before digging into each pinning scenario, a few
properties of the base layer components should be recalled [0].

Network mempools aren't guaranteed to be convergent: the local order of
events determines the next events accepted. I.e. Alice may observe tx X, tx
Y, tx Z and Bob may observe tx Z, tx X, tx Y. If tx Z disables RBF and tx X
tries to replace Z, Alice accepts X and Bob rejects it. This divergence may
persist until a new block.

Tx-relay topology can be observed by spying nodes [1]. An attacker can
exploit this fact to partition network mempools into different subsets and
hamper the propagation across them of concurrent transactions spending the
same output. If subset X observes Alice's commitment transaction and subset
Y observes Bob's commitment transaction, Alice's HTLC-timeout spending her
commitment won't propagate beyond the X-Y set boundaries. An attacker can
always win the propagation race through massive connections or by bypassing
tx-relay privacy timers.

Miners' mempools are likely identifiable: you could announce a series of
conflicting transactions to different subsets of the network and observe
"tainted" block composition to assign a miner mempool to each subset. I'm
not aware of any research on this, but it sounds plausible to identify all
power-miner mempools, i.e. the ones likely to mine a block during the block
delay of the timelock you're looking to exploit. If you can't bid a
transaction into such miner mempools, your channel state will go stale and
your funds may be in danger.

### Scenario 1) HTLC-Preimage Pinning

As Matt previously explained in his original mail on RBF-pinning, a
malicious counterparty has an interest to pin a low-feerate HTLC-preimage
transaction in some network mempools and thus preventing a honest
HTLC-timeout to confirm. For details, refer to Optech newsletter [2].

This scenario doesn't bear any risk to the attacker, is easy to execute and
has a double-digit rate of success. You don't need to assume network
topology manipulation, mempool partitions or LN-node-to-full-node mapping
[3]. That said, this should be solved by implementing and deploying anchor
outputs, which effectively allow a party to unilaterally bump the feerate of
its HTLC-timeout transactions.

### The Anchor Output Proposal

The anchor output proposal is a current spec effort implemented by the LN
dev community; it introduces the ability to _unilaterally_ and _dynamically_
bump the feerate of any commitment transaction. It also opened the way to
bumping local 2nd-stage transactions.

Beyond solving scenario 1), it makes LN nodes safe with regards to
unexpected mempool congestion. If your commitment transaction is stuck in
network mempools you can bump its feerate by attaching a CPFP on the new
`to_local` anchor. If the remote commitment gets stuck in network mempools,
you're able to bump it by attaching a CPFP on the `to_remote` anchor. This
should keep you safe against an unresponsive or lazy counterparty in case
there are onchain funds to claim.

IMO, it comes with a trade-off as it introduces a mapping oracle, i.e. a
linking vector between a LN node and its full-node. In this case, a spying
node may establish a dummy, low-value channel with a probed LN node, break
it by broadcasting thousands of different versions of the (revoked)
commitment and observe which node broadcasts a CPFP first on the p2p layer.
Obviously, you can mitigate it by not chasing after low-value HTLCs, but
that is a 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-18 Thread Antoine Riard
Hi Rene,

Thanks for disclosing this vulnerability,

I think this blackmail scenario holds but sadly there is a lower scenario.

Both "Flood & Loot" and your blackmail attack rely on `update_fee`
mechanism and unbounded commitment transaction size inflation. Though the
first to provoke block congestion and yours to lockdown in-flight fees as
funds hostage situation.

> 1. The current solution is to just not use up the max value of
htlc's. Eclaire and c-lightning by default only use up to 30 htlcs.

As of today, yes I would recommend capping commitment size both for
ensuring competitive propagation/block selection and limiting HTLC exposure.

> 2. Probably the best fix (not sure if I understand the consequences
correctly) is coming from this PR to bitcoin core (c.f.
https://github.com/bitcoin/bitcoin/pull/15681 by @TheBlueMatt . If I get it
correctly with that we could always have low fees and ask the person who
want to claim their outputs to pay fees. This excludes overpayment and
could happen at a later stage when fees are not spiked. Still the victim
who offered the htlcs would have to spend those outputs at some time.

It's a bit more complex: the carve-out, even combined with anchor output
support on the LN side, won't protect against different flavors of pinning.
I invite you to go through the logs of the past 2 LN dev meetings.

> 3. Don't overpay fees in commitment transactions. We can't foresee the
future anyway

Once 2. is well-addressed we may deprecate `update_fee`.

> 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
value (like we do with sub dust amounts and sub satoshi amounts. This would
at least make the attack expensive as the attacker would have to bind a lot
of liquidity.

Ideally we want the dust limit to be dynamic: the dust cap should be based
on the HTLC's economic value, the feerate of its output, the feerate of the
HTLC transaction, and the feerate estimation of any CPFP to bump it. I think
that's worth doing once we've solved 3. and 4.
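A sketch of such an economically-driven threshold; the 2nd-stage weight
follows the anchor spec, while the sweep and CPFP weights are illustrative
assumptions, not a spec proposal:

```python
# Sketch of a dynamic dust threshold: an HTLC is only worth carrying on the
# commitment if its value covers the chain of fees needed to actually claim it
# (the HTLC 2nd-stage tx, a later sweep of its output, and an eventual CPFP
# bump). The 2nd-stage weight follows the anchor spec; the sweep and CPFP
# weights are illustrative assumptions.

HTLC_SUCCESS_WEIGHT = 706        # 2nd-stage HTLC-success tx, post-anchor
HTLC_OUTPUT_SPEND_WEIGHT = 500   # rough weight to later sweep the 2nd-stage output
CPFP_CHILD_WEIGHT = 400          # rough weight of a fee-bumping child

def economic_dust_threshold_sat(feerate_now_per_kw: int,
                                feerate_worst_case_per_kw: int) -> int:
    claim_cost = (HTLC_SUCCESS_WEIGHT * feerate_now_per_kw
                  + HTLC_OUTPUT_SPEND_WEIGHT * feerate_now_per_kw
                  + CPFP_CHILD_WEIGHT * feerate_worst_case_per_kw) // 1000
    return claim_cost

def is_economic(htlc_value_sat: int, feerate_now: int, feerate_worst: int) -> bool:
    return htlc_value_sat > economic_dust_threshold_sat(feerate_now, feerate_worst)

print(is_economic(5_000, feerate_now=1_000, feerate_worst=10_000))
# False: 5_000 sats doesn't cover the claim chain at these feerates.
```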

> 5. Somehow be able to aggregate htlc's. In a world where we use payment
points instead of preimages we might be able to do so. It would be really
cool if separate HTLC's could be combined to 1 single output. I played
around a little bit but I have not come up with a scheme that is more
compact in all cases. Thus I just threw in the idea.

Yes, we may encode all HTLCs in some Taproot tree in the future. There are
some wrinkles, but for a high-level theoretical construction see my post on
CoinPool.

> 6. Split onchain fees differently (now the attacker would also lose fees
by conducting this attack) - No I don't want to start yet another fee
bikeshadding debate. (In particular I believe that a different split of
fees might make the Flood & Loot attack economically more viable which
relies on the same principle)

Likely a bit more fee bikeshedding is something we have to do to make LN
secure... switching fees from pre-committed ones to single-party, dynamic
ones.

> Independently I think we should have a hint in our readme file about
where and how people can disclose attacks and vulnerabilities.
Implementations have this but the BOLTs do not.

I 100% agree, that's exactly
https://github.com/lightningnetwork/lightning-rfc/pull/772, waiting for
your feedback :)

Cheers,

Antoine

Le mer. 17 juin 2020 à 09:41, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

>
> Good morning all,
>
> >
> > Fee futures could help against this.
> > I remember writing about this some time ago but cannot find where (not
> sure if it was in lightning-dev or bitcoin-dev).
>
> `harding` found it:
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-January/017601.html
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Miners Dust Inflation attacks on Lightning Network

2020-05-19 Thread Antoine Riard
Hi ZmnSCPxj,

As of today, you can set up an `htlc_minimum_msat` higher than the remote's
`dust_limit_satoshis`, but you don't necessarily know that value before
announcing your channel parameters if you're the initiator.
In practice, assuming you can do so, with fees going higher and HTLC outputs
being encumbered, their cost-to-spend will increase, so forbidding dust
HTLCs will outlaw low-value payments, which are a constant.

> Adding this to the spec does have the advantage that an honest forwarder
can hold an HTLC for a while once it notices that the next hop has a bunch
of dusty HTLCs in-flight that are beyond the negotiated
`max_dust_htlc_value_in_flight_msat`, which might help reliability of
micropayments slightly, but there is still the reduction of reliability.

I agree you can already fail HTLCs as a local forwarding policy, which is
not great for reliability. So you may either have a negotiated
`max_dust_htlc_value_in_flight_msat`, or the receiver may refuse an
`open_channel`/`accept_channel` when it considers the remote's
`dust_limit_satoshis` too high.

I do think that's a pretty low-risk scenario, but it would be better if
implementations somehow bounded in-flight dust to lower the attack incentive.

Antoine

Le lun. 18 mai 2020 à 20:52, ZmnSCPxj  a écrit :

> Good morning Antoine,
>
>
> > Mitigating may come by negotiating a new
> `max_dust_htlc_value_in_flight_msat` enforced by HTLC recipient, therefore
> expressing its maximum trust tolerance with regards to dust. Bearing a cost
> on a HTLC holder will also render the attack more expensive, even if for
> counter-measure efficiency you likely need a different order of magnitude
> that spam-protection.
>
> Even without a spec change, such a setting may be enforced by a forwarding
> node by the simple act of refusing to forward an HTLC once a certain level
> of incoming dust HTLCs are currently in-flight.
> That is, the forwarding node can simply accept the incoming new dusty
> HTLC, but instead of forwarding, claim a `temporary_channel_failure` on the
> next channel.
> The attack requires that the forwarding node actually forward the HTLC,
> after all.
>
> This will of course lead to reduced reliability on micropayments.
>
> Adding this to the spec does have the advantage that an honest forwarder
> can hold an HTLC for a while once it notices that the next hop has a bunch
> of dusty HTLCs in-flight that are beyond the negotiated
> `max_dust_htlc_value_in_flight_msat`, which might help reliability of
> micropayments slightly, but there is still the reduction of reliability.
> Not to mention that the easiest code change to respect such a limit would
> be simply to fail forwarding anyway.
>
> Regards,
> ZmnSCPxj
>


[Lightning-dev] Miners Dust Inflation attacks on Lightning Network

2020-05-18 Thread Antoine Riard
The Lightning protocol supports a floating dust output selection at channel
creation, where each party declares a dust parameter applying to its local
transactions. The current spec doesn't enforce or recommend any bound on
this value, beyond the requirement of being lower than
`channel_reserve_satoshis`. When an HTLC is routed through the channel but
its value is under the local party's dust limit, it's burned as fees and not
added to the commitment transaction. This rule, which makes LN a good
citizen of the Bitcoin blockchain, comes at the price of more trust in your
counterparty.

Let's consider the following scenario. Mallory announces a channel to Alice
with `dust_limit_satoshis` set to 20% of the channel value. Alice should
accept incoming channels as long as the limit is under her
implementation-specific `max_dust_limit_satoshis`. Now Mallory can route 4
dust HTLCs to Mallet through Alice, claiming ~80% of the channel value.

Mallory --> Alice --> Mallet

Mallet, in collusion with Mallory, can claim the whole set of HTLCs by
revealing the corresponding preimage for each. At the exact same time,
Mallory broadcasts her latest commitment transaction, on which there are
_no_ HTLC outputs because all of them are dust. Alice can't claim them
onchain but has already paid Mallet forward.

At first look, this attack doesn't seem economically rational because the
dust HTLCs are all committed as fees. But if you assume that Mallory can
collude with some mining pool, the economics change completely: it's now
almost zero-cost to add Mallory's commitment transaction to a block, and no
hashrate is wasted. The fees go back to the miner, and Alice is still
robbed. Mallory's commitment transaction may sit in the miner's mempool as
long as a block isn't found, without being announced to the rest of the
network, and while the HTLC timelocks haven't expired. The attack may even
remain stealthy as long as the block isn't mined. It's only *almost*
zero-cost because, if you assume blocks are full, the commitment transaction
is now competing for block space, so there is an opportunity cost.
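
To make the economics concrete, a back-of-the-envelope sketch with purely
illustrative numbers (a 1,000,000-sat channel, a 20% dust limit, four dust
HTLCs just under it, and a colluding miner recovering the burned fees):

```python
# Illustrative numbers only -- not taken from any real channel.
channel_value_sat = 1_000_000
dust_limit_sat = channel_value_sat // 5     # 20%, as in the scenario above
num_dust_htlcs = 4
htlc_value_sat = dust_limit_sat - 1         # just under the dust limit

alice_loss = num_dust_htlcs * htlc_value_sat  # paid forward, unclaimable on-chain
miner_recovers = alice_loss                   # trimmed HTLCs become miner fees
# The colluding attacker's only real cost is the block-space opportunity
# cost of including the commitment tx instead of the best-paying txs.
print(alice_loss, miner_recovers)             # ~800,000 sat each
```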

It's the kind of low-probability, hard-to-exploit vulnerability, but you
would prefer not having to think about your big LSP hub being targeted by a
rogue mining pool employee. Even with a really small mining pool, the attack
may be batched across multiple channels at once for a single block found.

Deployment of Stratum V2 may make the attack easier by giving more leverage
to the local miner.

Mitigation may come from negotiating a new
`max_dust_htlc_value_in_flight_msat`, enforced by the HTLC recipient,
thereby expressing its maximum trust tolerance with regards to dust. Bearing
a cost on the HTLC holder would also render the attack more expensive, even
if for counter-measure efficiency you likely need a different order of
magnitude than for spam protection.

Cheers,

Antoine


Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-16 Thread Antoine Riard
> * At the same time, it retains your-keys-your-coins noncustodiality,
because every update of a Lightning channel requires your keys to sign off
on it.

Yes, I agree. I can foresee an easier first step where managing a low-value
channel and getting familiar with smooth key management comes before running
a full-node and adopting a more full-fledged key management solution.

> It may even be possible, that the Lightning future with massive SPV might
end up with more economic weight in SPV nodes, than in the world without
Lightning and dependent on centralized custodial services to scale.

Even evaluating economic weight in Lightning is hard: both parties have
their own chain view, and if you assume a hub-and-spoke topology it's likely
that leaf nodes are going to be SPV and internal nodes full-nodes?

> Money makes the world go round, so such backup servers that are
publicly-facing rather than privately-owned should be somehow incentivized
to do so, or else they would not exist in the first place.

I was thinking about the current workflow: Alice downloads her New Shiny
LN-wallet, she is asked to back up the seed, she is asked to pick backup
node(s) among her friends, relatives or business partners, and is NOT
provided any automatic hint; she registers the backup nodes' addresses,
maybe even does an out-of-band key exchange with the full-node operator.
Therefore you may avoid centralization by not having such publicly-facing
servers. Of course, Alice can still scour the web and be lured into picking
malicious public servers, but if she is strongly warned not to do so that
may be enough.

So it would be a combination of UX + user education + a fallback security
mechanism to avoid economy hijack. That may be a better solution than
PoW-only SPV. We have an open network, so you can't prevent someone from
running such a type of client, but at least if they have to do so you can
provide them with a better option?

Antoine




Le jeu. 14 mai 2020 à 00:02, ZmnSCPxj  a écrit :

> Good morning Antoine,
>
>
> > While approaching this question, I think you should consider economic
> weight of nodes in evaluating miner consensus-hijack success. Even if you
> expect a disproportionate ratio of full-nodes-vs-SPV, they may not have the
> same  economic weight at all, therefore even if miners are able to lure a
> majority of SPV clients they may not be able to stir economic nodes. SPV
> clients users will now have an incentive to cancel their hijacked history
> to stay on the most economic meaningful chain. And it's already assumed,
> that if you run a bitcoin business or LN routing node, you do want to run
> your own full-node.
>
> One hope I have for Lightning is that it will replace centralized
> custodial services, because:
>
> * Lightning gains some of the scalability advantage of centralized
> custodial services, because you can now transfer to any Lightning client
> without touching the blockchain, for much reduced transfer fees.
> * At the same time, it retains your-keys-your-coins noncustodiality,
> because every update of a Lightning channel requires your keys to sign off
> on it.
>
> If most Lightning clients are SPV, then if we compare these two worlds:
>
> * There are a few highly-important centralized custodial services with
> significant economic weight running fullnodes (i.e. now).
> * There are no highly-important centralized custodial services, and most
> everyone uses Lightning, but with SPV (i.e. a Lightning future).
>
> Then the distribution of economic weight would be different between these
> two worlds.
> It may even be possible, that the Lightning future with massive SPV might
> end up with more economic weight in SPV nodes, than in the world without
> Lightning and dependent on centralized custodial services to scale.
>
>
> It is also entirely possible that custodial services for Lightning will
> arise anyway and my hope is already dashed, come on universe, work harder
> will you, would you really disappoint some randomly-generated Internet
> person like that.
>
>
> >
> > I agree it may be hard to evaluate economic-weight-to-chain-backend
> segments, specially with offchain you disentangle an onchain output value
> from its real payment traffic. To strengthen SPV, you may implement forks
> detection and fallback to some backup node(s) which would serve as an
> authoritative source to arbiter between branches. Such backup node(s) must
> be picked up manually at client initialization, before any risk of conflict
> to avoid Reddit-style of hijack during contentious period or other massive
> social engineering. You don't want autopilot-style of recommendations for
> picking up a backup nodes and avoid cenralization of backups, but somehow a
> uniform distribution. A backup node may be a private one, it won't serve
> you any data beyond headers, and therefore you preserve public nodes
> bandwidth, which IMO is the real bottleneck. I concede it won't work well
> if you have a ratio of 1000-SPV for 1-full-node and 

Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-13 Thread Antoine Riard
Hi Chris,

While approaching this question, I think you should consider the economic
weight of nodes in evaluating the success of a miner consensus hijack. Even
if you expect a disproportionate ratio of full-nodes vs SPV, they may not
have the same economic weight at all; therefore, even if miners are able to
lure a majority of SPV clients, they may not be able to steer the economic
nodes. SPV client users will then have an incentive to cancel their hijacked
history to stay on the most economically meaningful chain. And it's already
assumed that if you run a bitcoin business or LN routing node, you do want
to run your own full-node.

I agree it may be hard to evaluate economic-weight-to-chain-backend
segments, especially since with offchain you disentangle an onchain output
value from its real payment traffic. To strengthen SPV, you may implement
fork detection and fall back to some backup node(s) which would serve as an
authoritative source to arbitrate between branches. Such backup node(s) must
be picked manually at client initialization, before any risk of conflict, to
avoid a Reddit-style hijack during a contentious period or other massive
social engineering. You don't want autopilot-style recommendations for
picking backup nodes, to avoid centralization of backups, but rather somehow
a uniform distribution. A backup node may be a private one; it won't serve
you any data beyond headers, and therefore you preserve public nodes'
bandwidth, which IMO is the real bottleneck. I concede it won't work well
if you have a ratio of 1000 SPV clients for 1 full-node and people are not
effectively able to pick a backup among their social environment.
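
A minimal sketch of what I mean by fork detection plus backup-node
arbitration; the helper names are hypothetical, and the backup is assumed
to be a headers-only private node configured at initialization:

```python
# Rough sketch under the stated assumptions -- not an implementation.
def fork_detected(peer_best_headers: list[str]) -> bool:
    # Our regular SPV peers disagreeing on the best header is the anomaly
    # that should trigger arbitration.
    return len(set(peer_best_headers)) > 1

def choose_tip(peer_best_headers: list[str], backup_best_header: str) -> str:
    if not fork_detected(peer_best_headers):
        return peer_best_headers[0]
    # On divergence, defer to the pre-configured backup node as the
    # authoritative arbiter between branches (or halt and alert the user).
    return backup_best_header
```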

What do you think about this model ?

Cheers,

Antoine

Le mar. 12 mai 2020 à 17:06, Chris Belcher  a écrit :

> On 05/05/2020 16:16, Lloyd Fournier via bitcoin-dev wrote:
> > On Tue, May 5, 2020 at 9:01 PM Luke Dashjr via bitcoin-dev <
> > bitcoin-...@lists.linuxfoundation.org> wrote:
> >
> >> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
> >>> Trust-minimization of Bitcoin security model has always relied first
> and
> >>> above on running a full-node. This current paradigm may be shifted by
> LN
> >>> where fast, affordable, confidential, censorship-resistant payment
> >> services
> >>> may attract a lot of adoption without users running a full-node.
> >>
> >> No, it cannot be shifted. This would compromise Bitcoin itself, which
> for
> >> security depends on the assumption that a supermajority of the economy
> is
> >> verifying their incoming transactions using their own full node.
> >>
> >
> > Hi Luke,
> >
> > I have heard this claim made several times but have never understood the
> > argument behind it. The question I always have is: If I get scammed by
> not
> > verifying my incoming transactions properly how can this affect anyone
> > else? It's very unintuative.  I've been scammed several times in my life
> in
> > fiat currency transactions but as far as I could tell it never negatively
> > affected the currency overall!
> >
> > The links you point and from what I've seen you say before refer to
> "miner
> > control" as the culprit. My only thought is that this is because a light
> > client could follow a dishonest majority of hash power chain. But this
> just
> > brings me back to the question. If, instead of BTC, I get a payment in
> some
> > miner scamcoin on their dishonest fork (but I think it's BTC because I'm
> > running a light client) that still seems to only to damage me. Where does
> > the side effect onto others on the network come from?
> >
> > Cheers,
> >
> > LL
> >
>
> Hello Lloyd,
>
> The problem comes when a large part of the ecosystem gets scammed at
> once, which is how such an attack would happen in practice.
>
> For example, consider if bitcoin had 10,000 users. 10 of them use a full
> node wallet while the other 9990 use an SPV wallet. If a miner attacked
> the system by printing infinite bitcoins and spending coins without a
> valid signature, then the 9990 SPV wallets would accept those fake coins
> as payment, and trade the coins amongst themselves. After a time those
> coins would likely be the ancestors of most active coins in the
> 9990-SPV-wallet ecosystem. Bitcoin would split into two currencies:
> full-node-coin and SPV-coin.
>
> Now the fraud miners may become well known, perhaps being published on
> bitcoin news portals, but the 9990-SPV-wallet ecosystem has a strong
> incentive to be against any rollback. Their recent transactions would
> disappear and they'd lose money. They would argue that they've already
> been using the coin for a while, and it works perfectly fine, and

Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-09 Thread Antoine Riard
Hi Christopher,

Thanks for Blockchain Commons and Learning Bitcoin from the Command Line!

> If there are people interested in coordinating some proposals on how to
defining different sets of wallet functionality, Blockchain Commons would
be interested in hosting that collaboration. This could start as just being
a transparent shim between bitcoin-core & remote RPC, but later could
inform proposals for the future of the core wallet functionality as it gets
refactored.

Yes, wallet refactoring in Core is generally making good progress [0]. I'm
pretty sure feedback and proposals on future changes with regards to
usability would be greatly appreciated.

Maybe you can bring these up during an IRC meeting?

Antoine

[0] See https://github.com/bitcoin/bitcoin/pull/16528 or
https://github.com/bitcoin/bitcoin/pull/16426

Le ven. 8 mai 2020 à 17:31, Christopher Allen via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> a écrit :

> On Fri, May 8, 2020 at 2:00 PM Keagan McClelland via bitcoin-dev <
> bitcoin-...@lists.linuxfoundation.org> wrote:
>
>> Perhaps I wasn't explicit in my previous note but what I mean is that
>> there seems to be a demand for something *in between* a peer interface,
>> and an owner interface. I have little opinion as to whether this belongs in
>> core or not, I think there are much more experienced folks who can weight
>> in on that, but without something like this, you cannot limit your exposure
>> for serving something like bip157 filters without removing your own ability
>> to make use of some of those same services.
>>
>
> Our FullyNoded2 multisig wallet on iOS & Mac, communicates with your own
> personal node over RPC, securing the connection using Tor over a hidden
> onion service and two-way client authentication using a v3 Tor
> Authentication key: https://github.com/BlockchainCommons/FullyNoded-2
>
> In many ways the app (and its predecessor FullyNoded1) is an interface
> between a personal full node and a user.
>
> However, we do wish that the full RPC functionality was not exposed in
> bitcoin-core. I’d love to see a cryptographic capability mechanism such
> that the remote wallet could only m ask the node functions that it needs,
> and allow escalation for other rarer services it needs with addition
> authorization.
>
> This capability mechanism feature set should go both ways, to a minimum
> subset needed for being a watch-only transaction verification tool, all the
> way to things RPC can’t do like deleting a wallet and changing bitcoin.conf
> parameters and rebooting, without requiring full ssh access to the server
> running the node.
>
> If there are people interested in coordinating some proposals on how to
> defining different sets of wallet functionality, Blockchain Commons would
> be interested in hosting that collaboration. This could start as just being
> a transparent shim between bitcoin-core & remote RPC, but later could
> inform proposals for the future of the core wallet functionality as it gets
> refactored.
>
> — Christopher Allen


Re: [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-09 Thread Antoine Riard
Hi Igor,

Thanks for sharing what's technically possible for a full-node on a phone,
especially with regards to lower-grade devices.

I do see 2 limitations for sleeping nodes:
- a Lightning-specific one, i.e. you need to process block data in real time
in case there is an incoming HTLC you need to claim on chain or an HTLC
timeout. There are a bunch of timelock implications in LN, with regards to
CSV, CLTV_DELTA, incoming policy, outgoing policy, ... and you can't really
afford to be late without losing a payment (a rough back-of-the-envelope
calculation follows below). I don't see timelocks being increased, as that
would hinder liquidity.
- a p2p bandwidth concern: even if this new class of nodes turns public,
they would still have a heavy sync period due to having fallen behind during
the day, so you would have huge bandwidth spikes every time a timezone falls
asleep, and a risk of choking the upload links of stable full-nodes.
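
Here is that back-of-the-envelope calculation, assuming a 10-minute average
block interval and a hypothetical `reaction_margin` of blocks the node wants
to keep in hand to get its HTLC-timeout or preimage claim confirmed:

```python
# Back-of-the-envelope sketch -- the margin and the 40-block delta below are
# illustrative assumptions, not values from any implementation.
AVG_BLOCK_INTERVAL_MIN = 10

def max_offline_hours(cltv_expiry_delta: int, reaction_margin: int = 6) -> float:
    # Upper bound on how long a "sleeper" node can stay offline and still
    # react to an incoming HTLC whose deadline is cltv_expiry_delta blocks away.
    usable_blocks = max(cltv_expiry_delta - reaction_margin, 0)
    return usable_blocks * AVG_BLOCK_INTERVAL_MIN / 60

print(max_offline_hours(40))   # ~5.7 hours: shorter than a night of sleep
```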

I think assume-utxo may be interesting in the future for long-fork
detection: you may be able to download a utxo-set on the fly and fall back
to a full-node. But that would only be an emergency measure, not a regular
cost on the backbone network.

Antoine


Le jeu. 7 mai 2020 à 12:41, Igor Cota  a écrit :

> Hi Antoine et al,
>
> Maybe I'm completely wrong, missing some numbers, and it's maybe fine to
>> just rely on few thousands of full-node operators being nice and servicing
>> friendly millions of LN mobiles clients. But just in case it may be good to
>> consider a reasonable alternative.
>>
>
>
>> So you may want to separate control/data plane, get filters from CDN and
>> headers as check-and-control directly from the backbone network. "Hybrid"
>> models should clearly be explored.
>
>
> For some months now I've been exploring the feasibility of running full
> nodes on everyday phones [1]. One of my first thoughts was how to avoid the
> phones mooching off the network. Obviously due to battery, storage and
> bandwidth constraints it is not reasonable to expect pocket full nodes to
> serve blocks during day time.
>
> Huge exception to this is the time we are asleep and our phones are
> connected to wifi and charging. IMO this is a huge untapped resource that
> would allow mobile nodes to earn their keep. If we limit full node
> operation to sleepy night time the only constraining resource is storage:
> 512 gb of internal storage in phones is quite rare, probably about $100 for
> an SD card with full archival node capacity but phones with memory card
> slots rarer still - no one is going to bother.
>
> So depending on their storage capacity phone nodes could decide to store
> and serve just a randomly selected range of blocks during their nighttime
> operation. With trivial changes to P2P they could advertise the blocks they
> are able to serve.
> If there comes a time that normal full nodes feel DoS'ed they can
> challenge such nodes to produce the blocks they advertise and ban them as
> moochers if they fail to do so. Others may elect to be more charitable and
> serve everyone.
>
> These types of nodes would truly be part-timing since they only carry a
> subset of the blockchain and work while their operator is asleep. Probably
> should be called part-time or Sleeper Nodes™.
>
> They could be user friendly as well, with Assume UTXO they could be
> bootstrapped quickly and while they do the IBD in the background instead of
> traditional pruning they can keep the randomly assigned bit of blockchain
> to later serve the network.
>
> Save for the elderly, all the people I know could run such a node, and I
> don't live in a first world country.
>
> There is also the feel-good kumbaya aspect of American phone nodes serving
> the African continent while the Americans are asleep, Africans and
> Europeans serving the Asians in kind. By plugging in our phones and going
> to sleep we could blanket the whole world in (somewhat) full nodes!
>
> Cheers,
> Igor
>
> [1] https://icota.github.io/
>
> On Tue, 5 May 2020 at 12:18, Antoine Riard 
> wrote:
>
>> Hi,
>>
>> (cross-posting as it's really both layers concerned)
>>
>> Ongoing advancement of BIP 157 implementation in Core maybe the
>> opportunity to reflect on the future of light client protocols and use this
>> knowledge to make better-informed decisions about what kind of
>> infrastructure is needed to support mobile clients at large scale.
>>
>> Trust-minimization of Bitcoin security model has always relied first and
>> above on running a full-node. This current paradigm may be shifted by LN
>> where fast, affordable, confidential, censorship-resistant payment services
>> may attract a lot of adoption without users running a full-node. Assuming a
>> user adoption path where a full-node is requi

Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard
What I'm thinking more is that if the costs of security are externalized too
much from the light clients onto full nodes, node operators are just going
to stop servicing light clients (`peercfilters=false`). The backbone p2p
network is going to be fine. But the massive LN light-client network built
on top is going to rely on centralized services for its chain access, and
now you may have consensus capture by those.

Le mer. 6 mai 2020 à 12:00, Keagan McClelland 
a écrit :

> Hi Antoine,
>
> Consensus capture by miners isn't the only concern here. Consensus capture
> by any subset of users whose interests diverge from the overall consensus
> is equally damaging. The scenario I can imagine here is that the more light
> clients outpace full nodes, the more the costs of security are being
> externalized from the light clients onto the full nodes. In this situation,
> it can make full nodes harder to run. If they are harder to run it will
> price out some marginal set of full node operators, which causes a net new
> increase in light clients (as the disaffected full nodes convert), AND a
> redistribution of load onto a smaller surface area. This is a naturally
> unstable process. It is safe to say that as node counts drop, the set of
> node operators will increasingly represent economic actors with extreme
> weight. The more this process unfolds, the more likely their interests will
> diverge from the population at large, and also the more likely they can be
> coerced into behavior they otherwise wouldn't. After all it is easier to
> find agents who carry lots of economic weight. This is true independent of
> their mining status, we should be just as wary of consensus capture by
> exchanges or HNWI's as we are about miners.
>
> Keagan
>
> On Wed, May 6, 2020 at 3:06 AM Antoine Riard 
> wrote:
>
>> I do see the consensus capture argument by miners but in reality isn't
>> this attack scenario have a lot of assumptions on topology an deployment ?
>>
>> For such attack to succeed you need miners nodes to be connected to
>> clients to feed directly the invalid headers and if these ones are
>> connected to headers/filters gateways, themselves doing full-nodes
>> validation invalid chain is going to be sanitized out ?
>>
>> Sure now you trust these gateways, but if you have multiple connections
>> to them and can guarantee they aren't run by the same entity, that maybe an
>> acceptable security model, depending of staked amount and your
>> expectations. I more concerned of having a lot of them and being
>> diversified enough to avoid collusion between gateways/chain access
>> providers/miners.
>>
>> But even if you light clients is directly connected to the backbone
>> network and may be reached by miners you can implement fork anomalies
>> detection and from then you may have multiples options:
>> * halt the wallet, wait for human intervention
>> * fallback connection to a trusted server, authoritative on your chain
>> view
>> * invalidity proofs?
>>
>> Now I agree you need a wide-enough, sane backbone network to build on
>> top, and we should foster node adoption as much as we can.
>>
>> Le mar. 5 mai 2020 à 09:01, Luke Dashjr  a écrit :
>>
>>> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
>>> > Trust-minimization of Bitcoin security model has always relied first
>>> and
>>> > above on running a full-node. This current paradigm may be shifted by
>>> LN
>>> > where fast, affordable, confidential, censorship-resistant payment
>>> services
>>> > may attract a lot of adoption without users running a full-node.
>>>
>>> No, it cannot be shifted. This would compromise Bitcoin itself, which
>>> for
>>> security depends on the assumption that a supermajority of the economy
>>> is
>>> verifying their incoming transactions using their own full node.
>>>
>>> The past few years has seen severe regressions in this area, to the
>>> point
>>> where Bitcoin's future seems quite bleak. Without serious improvements
>>> to the
>>> full node ratio, Bitcoin is likely to fail.
>>>
>>> Therefore, all efforts to improve the "full node-less" experience are
>>> harmful,
>>> and should be actively avoided. BIP 157 improves privacy of fn-less
>>> usage,
>>> while providing no real benefits to full node users (compared to more
>>> efficient protocols like Stratum/Electrum).
>>>
>>> For this reason, myself and a few others oppose merging support for BIP
>>> 157 in
>>> Core.
>&g

Re: [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard
> As a result, the entire protocol could be served over something like
HTTP, taking advantage of all the established CDNs and anycast serving
infrastructure,

Yes, it moves the issue from a computation one to a distribution one. But
you still need the bandwidth capacity. What I'm concerned about is the trust
model of relying on a few established CDNs: you don't want to make a
"headers-routing" hijack easy, and thereby risk massive channel closures or
timelock interference due to LN clients not seeing the last few blocks. So
you may want to separate the control and data planes: get filters from the
CDN, and headers, as check-and-control, directly from the backbone network.
"Hybrid" models should clearly be explored.

Web-of-trust styles of deployment should also be envisioned; you may get a
huge scaling improvement, assuming clients may peer between themselves, and
the ones belonging to the same social entity should be able to share the
same chain view without too much risk.

> Piggy backing off the above idea, if the data starts being widely served
over HTTP, then LSATs[1][2] can be used to add a lightweight payment
mechanism by inserting a new proxy server in front of the filter/header
infrastructure.

Yeah, I haven't had time to read the spec yet, but something like LSATs was
clearly what I meant when speaking about monetary compensation to price
resources. I just hope it isn't tied too much to HTTP, because you may want
to read/write over other communication channels like
tx-broadcast-over-radio to solve first-hop privacy.

Le mar. 5 mai 2020 à 20:31, Olaoluwa Osuntokun  a écrit :

> Hi Antoine,
>
> > Even with cheaper, more efficient protocols like BIP 157, you may have a
> > huge discrepancy between what is asked and what is offered. Assuming 10M
> > light clients [0] each of them consuming ~100MB/month for
> filters/headers,
> > that means you're asking 1PB/month of traffic to the backbone network. If
> > you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> > signal BIP 157, that's an increase of 100GB/month for each. Which is
> > consequent with regards to the estimated cost of 350GB/month for running
> > an actual public node
>
> One really dope thing about BIP 157+158, is that the protocol makes serving
> light clients now _stateless_, since the full node doesn't need to perform
> any unique work for a given client. As a result, the entire protocol could
> be served over something like HTTP, taking advantage of all the established
> CDNs and anycast serving infrastructure, which can reduce syncing time
> (less latency to
> fetch data) and also more widely distributed the load of light clients
> using
> the existing web infrastructure. Going further, with HTTP/2's server-push
> capabilities, those serving this data can still push out notifications for
> new headers, etc.
>
> > Therefore, you may want to introduce monetary compensation in exchange of
> > servicing filters. Light client not dedicating resources to maintain the
> > network but free-riding on it, you may use their micro-payment
> > capabilities to price chain access resources [3]
>
> Piggy backing off the above idea, if the data starts being widely served
> over HTTP, then LSATs[1][2] can be used to add a lightweight payment
> mechanism by inserting a new proxy server in front of the filter/header
> infrastructure. The minted tokens themselves may allow a user to purchase
> access to a single header/filter, a range of them in the past, or N headers
> past the known chain tip, etc, etc.
>
> -- Laolu
>
> [1]: https://lsat.tech/
> [2]: https://lightning.engineering/posts/2020-03-30-lsat/
>
>
> On Tue, May 5, 2020 at 3:17 AM Antoine Riard 
> wrote:
>
>> Hi,
>>
>> (cross-posting as it's really both layers concerned)
>>
>> Ongoing advancement of BIP 157 implementation in Core maybe the
>> opportunity to reflect on the future of light client protocols and use this
>> knowledge to make better-informed decisions about what kind of
>> infrastructure is needed to support mobile clients at large scale.
>>
>> Trust-minimization of Bitcoin security model has always relied first and
>> above on running a full-node. This current paradigm may be shifted by LN
>> where fast, affordable, confidential, censorship-resistant payment services
>> may attract a lot of adoption without users running a full-node. Assuming a
>> user adoption path where a full-node is required to benefit for LN may
>> deprive a lot of users, especially those who are already denied a real
>> financial infrastructure access. It doesn't mean we shouldn't foster node
>> adoption when people are able to do so, and having a LN wallet maybe even a
>> first-step to it.
>>
>&

Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard
> The choice between whether we offer them a light client technology that
is better or worse for privacy and scalability.

And we should offer them a solution which will scale in the long term.

Again, it's not an argument against the BIP 157 protocol in itself; the
problem I'm interested in is how implementing BIP 157 in Core will address
this issue.

Le mar. 5 mai 2020 à 13:36, John Newbery via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> a écrit :

> There doesn't seem to be anything in the original email that's specific to
> BIP 157. It's a restatement of the arguments against light clients:
>
> - light clients are a burden on the full nodes that serve them
> - if light clients become more popular, there won't be enough full nodes
> to serve them
> - people might build products that depend on altruistic nodes serving
> data, which is unsustainable
> - maybe at some point in the future, light clients will need to pay for
> services
>
> The choice isn't between people using light clients or not. People already
> use light clients. The choice between whether we offer them a light client
> technology that is better or worse for privacy and scalability.
>
> The arguments for why BIP 157 is better than the existing light client
> technologies are available elsewhere, but to summarize:
>
> - they're unique for a block, which means they can easily be cached.
> Serving a filter requires no computation, just i/o (or memory access for
> cached filter/header data) and bandwidth. There are plenty of other
> services that a full node offers that use i/o and bandwidth, such as
> serving blocks.
> - unique-for-block means clients can download from multiple sources
> - the linked-headers/filters model allows hybrid approaches, where headers
> checkpoints can be fetched from trusted/signed nodes, with intermediate
> headers and filters fetched from untrusted sources
> - less possibilities to DoS/waste resources on the serving node
> - better for privacy
>
> > The intention, as I understood it, of putting BIP157 directly into
> bitcoind was to essentially force all `bitcoind` users to possibly service
> BIP157 clients
>
> Please. No-one is forcing anyone to do anything. To serve filters, a node
> user needs to download the latest version, set `-blockfilterindex=basic` to
> build the compact filters index, and set `-peercfilters` to serve them over
> P2P. This is an optional, off-by-default feature.
>
> Regards,
> John
>
>
> On Tue, May 5, 2020 at 9:50 AM ZmnSCPxj via bitcoin-dev <
> bitcoin-...@lists.linuxfoundation.org> wrote:
>
>> Good morning ariard and luke-jr
>>
>>
>> > > Trust-minimization of Bitcoin security model has always relied first
>> and
>> > > above on running a full-node. This current paradigm may be shifted by
>> LN
>> > > where fast, affordable, confidential, censorship-resistant payment
>> services
>> > > may attract a lot of adoption without users running a full-node.
>> >
>> > No, it cannot be shifted. This would compromise Bitcoin itself, which
>> for
>> > security depends on the assumption that a supermajority of the economy
>> is
>> > verifying their incoming transactions using their own full node.
>> >
>> > The past few years has seen severe regressions in this area, to the
>> point
>> > where Bitcoin's future seems quite bleak. Without serious improvements
>> to the
>> > full node ratio, Bitcoin is likely to fail.
>> >
>> > Therefore, all efforts to improve the "full node-less" experience are
>> harmful,
>> > and should be actively avoided. BIP 157 improves privacy of fn-less
>> usage,
>> > while providing no real benefits to full node users (compared to more
>> > efficient protocols like Stratum/Electrum).
>> >
>> > For this reason, myself and a few others oppose merging support for BIP
>> 157 in
>> > Core.
>>
>> BIP 157 can be implemented as a separate daemon that processes the blocks
>> downloaded by an attached `bitcoind`, i.e. what Wasabi does.
>>
>> The intention, as I understood it, of putting BIP157 directly into
>> bitcoind was to essentially force all `bitcoind` users to possibly service
>> BIP157 clients, in the hope that a BIP157 client can contact any arbitrary
>> fullnode to get BIP157 service.
>> This is supposed to improve to the situation relative to e.g. Electrum,
>> where there are far fewer Electrum servers than fullnodes.
>>
>> Of course, as ariard computes, deploying BIP157 could lead to an
>> effective DDoS on the fullnode network if a large number of BIP157 clients
>> arise.
>> Though maybe this will not occur very fast?  We hope?
>>
>> It seems to me that the thing that *could* be done would be to have
>> watchtowers provide light-client services, since that seems to be the major
>> business model of watchtowers, as suggested by ariard as well.
>> This is still less than ideal, but maybe is better than nothing.
>>
>> Regards,
>> ZmnSCPxj

Re: [Lightning-dev] [bitcoin-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard
I do see the consensus capture argument by miners, but in reality doesn't
this attack scenario rest on a lot of assumptions about topology and
deployment?

For such an attack to succeed, you need miners' nodes to be connected to
clients to feed them the invalid headers directly, and if those clients are
connected to headers/filters gateways which themselves do full-node
validation, the invalid chain is going to be sanitized out?

Sure, now you trust these gateways, but if you have multiple connections to
them and can guarantee they aren't run by the same entity, that may be an
acceptable security model, depending on the staked amount and your
expectations. I'm more concerned about having a lot of them and being
diversified enough to avoid collusion between gateways/chain access
providers/miners.

But even if your light client is directly connected to the backbone network
and may be reached by miners, you can implement fork-anomaly detection, and
from there you have multiple options:
* halt the wallet, wait for human intervention
* fall back to a connection to a trusted server, authoritative on your chain
view
* invalidity proofs?

Now I agree you need a wide-enough, sane backbone network to build on top,
and we should foster node adoption as much as we can.

Le mar. 5 mai 2020 à 09:01, Luke Dashjr  a écrit :

> On Tuesday 05 May 2020 10:17:37 Antoine Riard via bitcoin-dev wrote:
> > Trust-minimization of Bitcoin security model has always relied first and
> > above on running a full-node. This current paradigm may be shifted by LN
> > where fast, affordable, confidential, censorship-resistant payment
> services
> > may attract a lot of adoption without users running a full-node.
>
> No, it cannot be shifted. This would compromise Bitcoin itself, which for
> security depends on the assumption that a supermajority of the economy is
> verifying their incoming transactions using their own full node.
>
> The past few years has seen severe regressions in this area, to the point
> where Bitcoin's future seems quite bleak. Without serious improvements to
> the
> full node ratio, Bitcoin is likely to fail.
>
> Therefore, all efforts to improve the "full node-less" experience are
> harmful,
> and should be actively avoided. BIP 157 improves privacy of fn-less usage,
> while providing no real benefits to full node users (compared to more
> efficient protocols like Stratum/Electrum).
>
> For this reason, myself and a few others oppose merging support for BIP
> 157 in
> Core.
>
> > Assuming a user adoption path where a full-node is required to benefit
> for
> > LN may deprive a lot of users, especially those who are already denied a
> > real financial infrastructure access.
>
> If Bitcoin can't do it, then Bitcoin can't do it.
> Bitcoin can't solve *any* problem if it becomes insecure itself.
>
> Luke
>
> P.S. See also
>
> https://medium.com/@nicolasdorier/why-i-dont-celebrate-neutrino-206bafa5fda0
>
> https://medium.com/@nicolasdorier/neutrino-is-dangerous-for-my-self-sovereignty-18fac5bcdc25
>


Re: [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-06 Thread Antoine Riard
I didn't trust myself and verify. In fact the [3] is the real [2].

Le mar. 5 mai 2020 à 06:28, Andrés G. Aragoneses  a
écrit :

> Hey Antoine, just a small note, [3] is missing in your footnotes, can you
> add it? Thanks
>
> On Tue, 5 May 2020 at 18:17, Antoine Riard 
> wrote:
>
>> Hi,
>>
>> (cross-posting as it's really both layers concerned)
>>
>> Ongoing advancement of BIP 157 implementation in Core maybe the
>> opportunity to reflect on the future of light client protocols and use this
>> knowledge to make better-informed decisions about what kind of
>> infrastructure is needed to support mobile clients at large scale.
>>
>> Trust-minimization of Bitcoin security model has always relied first and
>> above on running a full-node. This current paradigm may be shifted by LN
>> where fast, affordable, confidential, censorship-resistant payment services
>> may attract a lot of adoption without users running a full-node. Assuming a
>> user adoption path where a full-node is required to benefit for LN may
>> deprive a lot of users, especially those who are already denied a real
>> financial infrastructure access. It doesn't mean we shouldn't foster node
>> adoption when people are able to do so, and having a LN wallet maybe even a
>> first-step to it.
>>
>> Designing a mobile-first LN experience opens its own gap of challenges
>> especially in terms of security and privacy. The problem can be scoped as
>> how to build a scalable, secure, private chain access backend for millions
>> of LN clients ?
>>
>> Light client protocols for LN exist (either BIP157 or Electrum are used),
>> although their privacy and security guarantees with regards to
>> implementation on the client-side may still be an object of concern
>> (aggressive tx-rebroadcast, sybillable outbound peer selection, trusted fee
>> estimation). That said, one of the bottlenecks is likely the number of
>> full-nodes being willingly to dedicate resources to serve those clients.
>> It's not about _which_ protocol is deployed but more about _incentives_ for
>> node operators to dedicate long-term resources to client they have lower
>> reasons to care about otherwise.
>>
>> Even with cheaper, more efficient protocols like BIP 157, you may have a
>> huge discrepancy between what is asked and what is offered. Assuming 10M
>> light clients [0] each of them consuming ~100MB/month for filters/headers,
>> that means you're asking 1PB/month of traffic to the backbone network. If
>> you assume 10K public nodes, like today, assuming _all_ of them opt-in to
>> signal BIP 157, that's an increase of 100GB/month for each. Which is
>> consequent with regards to the estimated cost of 350GB/month for running an
>> actual public node. Widening full-node adoption, specially in term of
>> geographic distribution means as much as we can to bound its operational
>> cost.
>>
>> Obviously,  deployment of more efficient tx-relay protocol like Erlay
>> will free up some resources but it maybe wiser to dedicate them to increase
>> health and security of the backbone network like deploying more outbound
>> connections.
>>
>> Unless your light client protocol is so ridiculous cheap to rely on
>> niceness of a subset of node operators offering free resources, it won't
>> scale. And it's likely you will always have a ratio disequilibrium between
>> numbers of clients and numbers of full-node, even worst their growth rate
>> won't be the same, first ones are so much easier to setup.
>>
>> It doesn't mean servicing filters for free won't work for now, numbers of
>> BIP157 clients is still pretty low, but what is worrying is  wallet vendors
>> building such chain access backend, hitting a bandwidth scalability wall
>> few years from now instead of pursuing better solutions. And if this
>> happen, maybe suddenly, isn't the quick fix going to be to rely on
>> centralized services, so much easier to deploy ?
>>
>> Of course, it may be brought that actually current full-node operators
>> don't get anything back from servicing blocks, transactions, addresses...
>> It may be replied that you have an indirect incentive to participate in
>> network relay and therefore guarantee censorship-resistance, instead of
>> directly connecting to miners. You do have today ways to select your
>> resources exposure like pruning, block-only or being private but the wider
>> point is the current (non?)-incentives model seems to work for the base
>> layer. For light clients data, are node operators going to be satisfied to
>> serve this new *class* of traffic en m

[Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-05 Thread Antoine Riard
Hi,

(cross-posting as it's really both layers concerned)

The ongoing advancement of the BIP 157 implementation in Core may be the
opportunity to reflect on the future of light client protocols and use this
knowledge to make better-informed decisions about what kind of
infrastructure is needed to support mobile clients at large scale.

Trust-minimization of the Bitcoin security model has always relied first and
foremost on running a full-node. This current paradigm may be shifted by LN,
where fast, affordable, confidential, censorship-resistant payment services
may attract a lot of adoption without users running a full-node. Assuming a
user adoption path where a full-node is required to benefit from LN may
deprive a lot of users, especially those who are already denied access to
real financial infrastructure. It doesn't mean we shouldn't foster node
adoption when people are able to do so, and having an LN wallet may even be
a first step toward it.

Designing a mobile-first LN experience opens its own set of challenges,
especially in terms of security and privacy. The problem can be scoped as:
how to build a scalable, secure, private chain-access backend for millions
of LN clients?

Light client protocols for LN exist (either BIP 157 or Electrum are used),
although their privacy and security guarantees with regards to client-side
implementation may still be an object of concern (aggressive
tx-rebroadcast, sybillable outbound peer selection, trusted fee estimation).
That said, one of the bottlenecks is likely the number of full-nodes willing
to dedicate resources to serve those clients. It's not about _which_
protocol is deployed but more about _incentives_ for node operators to
dedicate long-term resources to clients they otherwise have little reason
to care about.

Even with cheaper, more efficient protocols like BIP 157, you may have a
huge discrepancy between what is asked and what is offered. Assuming 10M
light clients [0], each of them consuming ~100MB/month for filters/headers,
that means you're asking the backbone network for 1PB/month of traffic. If
you assume 10K public nodes, like today, and assume _all_ of them opt in to
signal BIP 157, that's an increase of 100GB/month for each. Which is
significant with regards to the estimated cost of 350GB/month for running an
actual public node. Widening full-node adoption, especially in terms of
geographic distribution, means doing as much as we can to bound its
operational cost.
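
For reference, the arithmetic behind those numbers (the inputs are the same
assumptions as above: 10M clients, ~100MB/month each, 10K public nodes,
~350GB/month baseline):

```python
light_clients = 10_000_000
per_client_mb_month = 100        # filters + headers
public_nodes = 10_000
baseline_gb_month = 350          # estimated cost of a public node today

total_tb_month = light_clients * per_client_mb_month / 1e6       # MB -> TB
per_node_gb_month = light_clients * per_client_mb_month / public_nodes / 1e3

print(f"aggregate: ~{total_tb_month:.0f} TB/month (~1 PB)")
print(f"per public node: ~{per_node_gb_month:.0f} GB/month on top of "
      f"~{baseline_gb_month} GB/month")
```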

Obviously, deployment of a more efficient tx-relay protocol like Erlay will
free up some resources, but it may be wiser to dedicate them to increasing
the health and security of the backbone network, like deploying more
outbound connections.

Unless your light client protocol is ridiculously cheap, relying on the
niceness of a subset of node operators offering free resources won't scale.
And it's likely you will always have a ratio disequilibrium between the
number of clients and the number of full-nodes; even worse, their growth
rates won't be the same, as the former are so much easier to set up.

That doesn't mean servicing filters for free won't work for now, as the
number of BIP 157 clients is still pretty low, but what is worrying is
wallet vendors building such a chain-access backend and hitting a bandwidth
scalability wall a few years from now instead of pursuing better solutions.
And if this happens, maybe suddenly, isn't the quick fix going to be to rely
on centralized services, so much easier to deploy?

Of course, it may be brought up that current full-node operators don't
actually get anything back from servicing blocks, transactions,
addresses... It may be replied that you have an indirect incentive to
participate in network relay and therefore guarantee censorship-resistance,
instead of directly connecting to miners. You do have ways today to select
your resource exposure, like pruning, blocks-only mode or being private, but
the wider point is that the current (non?)-incentive model seems to work for
the base layer. For light client data, are node operators going to be
satisfied to serve this new *class* of traffic en masse?

This doesn't mean you won't find BIP 157 servers ready to serve you with
unlimited credit, but it's more likely their intentions may not be aligned,
like spying on your transaction broadcasts or the blocks you fetch. And you
do want peer diversity, to avoid every BIP 157 server sitting on a few ASNs,
for fault-tolerance. Do people expect a scenario a la Cloudflare, where
everyone's connections go to more or less the same set of entities?

Moreover, the LN security model diverges hugely from basic on-chain
transactions. The worst-case attack on-chain is a malicious light client
server showing a longer, invalid, PoW-backed chain to double-spend the user.
On LN, the *liveness* requirement means the entity owning your view of the
chain can lie to you about whether your channel has been spent by a revoked
commitment, about the real tip of the blockchain, or can even dry up block
announcements to trigger unexpected behavior in the client logic. A
malicious 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Antoine Riard
Personally, I would have waited a bit before going public on this, like
letting some implementations increase their CLTV deltas, but anyway, it's
here now.

Mempool-pinning attacks were already discussed on this list [0], but what
we found is that you can _reverse_ the scenario: it's not the malicious
party delaying confirmation of the honest party's transactions, but the
malicious party deliberately sticking its own transactions in the mempool to
avoid confirmation of the timeout, and therefore gaming the inter-link
timelocks to provoke an unbalanced settlement for the victim ("you pay
forward, but don't get paid backward").

How practical the attacks are depends on how you can leverage mempool rules
to pin your own transaction. What you're looking for is a
_mempool-obstruction_ trick, i.e. a way to get the honest party's
transaction bounced off because your transaction is already there.

Beyond disabling RBF on your transaction (with the current protocol, not
the anchor proposal), there are two likely candidates:
* BIP 125 rule 3: "The replacement transaction pays an absolute fee of at
least the sum paid by the original transactions."
* BIP 125 rule 5: "The number of original transactions to be replaced and
their descendant transactions which will be evicted from the mempool must
not exceed a total of 100 transactions."

Let's go through the whole scenario (a toy illustration of the rule 3
rejection follows after the list):
* Mallory and Eve are colluding
* Eve and Mallory are opening channels with Alice; Mallory does a bit of
rebalancing to get full incoming capacity, like receiving funds on an
onchain address through another Alice link
* Eve sends an HTLC #1 to Mallory through Alice, expiring at block 100
* Eve sends a second HTLC #2 to Mallory through Alice, expiring at block
110 on the outgoing link (A<->M), 120 on the incoming link (E<->A)
* Before block 100, without cancellation from Mallory, Alice will
force-close the channel and broadcast her local commitment and HTLC-timeout
to get back HTLC #1
* Alice can't broadcast the HTLC-timeout for HTLC #2 as it only expires at
block 110
* Mallory can broadcast her Pinning Preimage Tx spending the offered HTLC #2
output on Alice's transaction; its feerate is maliciously chosen to get into
network mempools but never to confirm. Its absolute fee must be higher than
HTLC-timeout #2's, a fact known to Mallory. There is no p2p race.
* As Alice doesn't watch the mempool, she is never going to learn the
preimage to redeem the incoming HTLC #2
* At block 110, Alice is going to broadcast HTLC-timeout #2; its feerate may
be higher, but as its absolute fee is lower, it's going to be rejected by
network mempools as a replacement for the Pinning Preimage Tx (BIP 125
rule 3)
* At block 120, Eve closes the channel and claims HTLC #2 back via
HTLC-timeout on the incoming link
* Mallory can RBF her Pinning Preimage Tx with a high-feerate one and get it
confirmed
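
Here is the toy illustration of the rule 3 rejection; the sizes and fees are
made up, only the absolute-fee comparison reflects BIP 125 rule 3:

```python
def rule3_allows_replacement(replacement_fee_sat: int, original_fees_sat: int) -> bool:
    # BIP 125 rule 3: the replacement must pay an absolute fee at least equal
    # to the sum paid by the transactions it replaces.
    return replacement_fee_sat >= original_fees_sat

# Mallory's pinning preimage tx: large, low feerate, but big absolute fee.
pinning_tx = {"vsize": 50_000, "fee": 50_000}   # 1 sat/vB, 50k sat total
# Alice's HTLC-timeout: small, higher feerate, but lower absolute fee.
htlc_timeout = {"vsize": 700, "fee": 7_000}     # 10 sat/vB, 7k sat total

assert htlc_timeout["fee"] / htlc_timeout["vsize"] > pinning_tx["fee"] / pinning_tx["vsize"]
# ...yet the replacement is rejected on absolute fee alone:
assert not rule3_allows_replacement(htlc_timeout["fee"], pinning_tx["fee"])
```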

The new anchor_output proposal, by disabling RBF, forces the attacker to bid
on the absolute fee. There is now a risk of losing the fee if the Pinning Tx
confirms. You may extend your "pinning lease" by ejecting your malicious tx,
for instance by conflicting or trimming one of its parents out of the
mempool, and then reannouncing your preimage tx with a
lower-feerate-but-still-high-fee version before a new block and an honest
HTLC-timeout rebroadcast.

AFAICT, even with anchor_output deployed, and even assuming empty mempools,
the success rate and economic rationality of the attack come down to finding
such a cheap, reliable "pinning lease extension" trick.

I think any mempool-watching mitigation is at best a cat-and-mouse hack.
Contrary to nodes advancing towards a global blockchain view thanks to PoW,
network mempools don't have a convergence guarantee. This means that, in a
distributed system like bitcoin, nodes don't see events in the same order:
Alice may observe tx X, tx Y, tx Z and Bob may observe tx Z, tx X, tx Y. And
the order of events affects whether a future event is going to be rejected
or not. For example, if tx Z disables RBF and tx X conflicts with it, Bob,
who saw Z first, rejects X, while Alice, who saw X first, keeps it. And this
divergence may persist until a new block.
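
A toy model of that order-dependence, reduced to a first-seen rule plus the
replaceability flag (no fees, no descendants, so a deliberate
over-simplification):

```python
def mempool_after(events):
    # events: list of (txid, conflicting_txid, signals_rbf), processed in the
    # order this node happened to receive them.
    mempool = {}                     # txid -> signals_rbf
    for txid, conflict, rbf in events:
        if conflict in mempool:
            if mempool[conflict]:    # the earlier tx signalled replaceability
                del mempool[conflict]
                mempool[txid] = rbf
            # else: first-seen non-replaceable tx wins, new tx rejected
        else:
            mempool[txid] = rbf
    return set(mempool)

# tx Z disables RBF and tx X conflicts with it (X non-replaceable too).
alice = mempool_after([("X", "Z", False), ("Z", "X", False)])  # saw X first
bob   = mempool_after([("Z", "X", False), ("X", "Z", False)])  # saw Z first
assert alice == {"X"} and bob == {"Z"}   # divergent views until a block
```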

Practically, it means an attacker can provoke a local conflict to bounce the
HTLC-preimage tx out of your mempool while broadcasting the preimage tx
without conflict to the rest of the network, by tweaking the tx-relay
protocol and thus easily manipulating the order of events for every node. A
local conflict is easy to provoke: just make an output of tx A double-spent
by both the HTLC-preimage tx and a non-RBF tx B. Announce txA+txB to the
victim's mempool and txA+HTLC-preimage-tx to the rest of the network. When
the rest of the network announces the HTLC-preimage tx, it's going to be
rejected by the victim's mempool.

Provoking a local conflict of course assumes _interlayer_ mapping by an
attacker, i.e. mapping your LN node to your full-node(s). Last time we
checked, there were 982 matches by IP for ~4,500 LN nodes / ~52,000
full-nodes. Mapping heuristics are an ongoing research subject and sadly
seem affordable.

Yes, a) you can enable full-RBF on your local node, but the blinding
conflict may still come with a higher feerate as everything is
attacker-malleable; b) you may want to catch the tx and extract the preimage
on the p2p wire, but 

Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-01-30 Thread Antoine Riard
> The funding transaction sig would actually fail verification if tip
differs between funder and fundee

Yes, that's the reason I wrote that the initiator can just announce its own
nLockTime and the receiver uses it to sign the funding tx, even if the
receiver's tip is behind. The funding tx won't propagate from the receiver's
mempool, but that's fine as long as it does from the initiator's.

Or are you talking about the commitment tx (a different issue, and there are
broader privacy leaks there)?

> Darosior ( i'll stick with my pseudo, first names definitely don't have
enough entropy :-) )

Ahaha yeah this pseudo-random-name-generator is definitely not trustworthy
:p


Le jeu. 30 janv. 2020 à 13:19, darosior  a écrit :

> Sorry I wasn't clear enough in the `(cdecker)` paragraph.
>
>
> The funding transaction sig would actually fail verification if tip
> differs between funder and fundee.
>
>
> Darosior ( i'll stick with my pseudo, first names definitely don't have
> enough entropy :-) )
>  Original Message ----
> On Jan 30, 2020, 19:09, Antoine Riard < antoine.ri...@gmail.com> wrote:
>
>
> Hey Darosior,
>
> You don't need a strict synchronization between both peers,
> just let nLocktime picked up by initiator and announce it at
> same time than feerate or at `tx_complete`. Worst-case,
> a slow-block-processing receiver may not be able to get
> the transaction accepted by its local mempool, but IMO that's
> fine if at least the initiator is able to do so. We are requiring peers
> to be weakly in sync before operating channel anyway (`funding_locked`
> exchange).
>
> Funding_tx can already be drop from mempool for others
> reasons like mempool shrinks or expiry so broadcaster
> should always be ready to re-send it or bump feerate.
>
> Or are you describing another issue ?
>
> Le jeu. 30 janv. 2020 à 04:06, darosior  a
> écrit :
>
>> Hi Antoine and all,
>>
>>
>> About nLockTime fun thing is Lisa, Cdecker and I had this conversation to
>> integrate it to C-lightning just yesterday.
>>
>>
>> Unfortunately you need to add a "My tip is " to the openchannel msg,
>> otherwise if you set nLockTime to tip. (cdecker)
>>
>>
>> Moreover in case of reorg the funding tx (now non-final) would be dropped
>> from mempool ? But you could set nLockTime to, say, tip - 6. (niftynei)
>>
>>
>> Antoine
>>
>>
>>  Original Message 
>> On Jan 30, 2020, 01:21, Antoine Riard < antoine.ri...@gmail.com> wrote:
>>
>>
>> Hey thanks for this proposal!
>>
>> 2 high-level questions:
>>
>> What about multi-party tx construction ? By multi-party, let's define
>> Alice initiate a tx construction to Bob and then Bob announce a
>> construction to Caroll and "bridge" all inputs/outputs
>> additions/substractions
>> in both directions. I think the current proposal hold, if you are a bit
>> more
>> tolerant and bridge peer don't send a tx_complete before receiving ones
>> from all its peers.
>>
>> What about transactions format ? I think we should coordinate with
>> Coinjoin
>> people to converge to a common one to avoid leaking protocol usage when
>> we can hinder under Taproot. Like setting the nLocktime or sorting inputs
>> in some protocol-specific fashion. Ideally we should have a BIP for format
>> but every layer 2 protocols its own set of messages concerning the
>> construction.
>>
>> > nLocktime is always set to 0x00
>> Maybe we can implement anti-fee sniping and mask among wallet core
>> txn set:
>> https://github.com/bitcoin/bitcoin/blob/aabec94541e23a67a9f30dc2c80dab3383a01737/src/wallet/wallet.cpp#L2519
>> ?
>>
>> > In the case of a close, a failed collaborative close would result in an
>> error and a uninlateral close"
>> Or can we do first a mutual closing tx, hold tx broadcast for a bit if
>> "opt_dual_fund"
>> is signaled to see if a tx_construction + add_funding_input for the
>> channel is received
>> soon ? At least that would be a dual opt-in to know than one party can
>> submit a funding-outpoint
>> as part of a composed tx ?
>>
>> Antoine
>>
>> Le lun. 27 janv. 2020 à 20:51, lisa neigut  a écrit :
>>
>>> Some of the feedback I received from the check-in for the dual-funding
>>> proposal this past Monday was along the lines that we look at simplifying
>>> for breaking it into smaller, more manageable chunks.
>>>
>>> The biggest piece of the dual-funding protocol update is definitely the
>>> move from a single peer constructing a transaction to two participa

Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-01-30 Thread Antoine Riard
Hey Darosior,

You don't need a strict synchronization between both peers,
just let the nLocktime be picked by the initiator and announced at the
same time as the feerate or at `tx_complete`. Worst case,
a slow-block-processing receiver may not be able to get
the transaction accepted by its local mempool, but IMO that's
fine if at least the initiator is able to do so. We are requiring peers
to be weakly in sync before operating a channel anyway (`funding_locked`
exchange).

The funding_tx can already be dropped from the mempool for other
reasons like mempool shrinking or expiry, so the broadcaster
should always be ready to re-send it or bump the feerate.

Or are you describing another issue?

Le jeu. 30 janv. 2020 à 04:06, darosior  a écrit :

> Hi Antoine and all,
>
>
> About nLockTime fun thing is Lisa, Cdecker and I had this conversation to
> integrate it to C-lightning just yesterday.
>
>
> Unfortunately you need to add a "My tip is " to the openchannel msg,
> otherwise if you set nLockTime to tip. (cdecker)
>
>
> Moreover in case of reorg the funding tx (now non-final) would be dropped
> from mempool ? But you could set nLockTime to, say, tip - 6. (niftynei)
>
>
> Antoine
>
>
> ---- Original Message 
> On Jan 30, 2020, 01:21, Antoine Riard < antoine.ri...@gmail.com> wrote:
>
>
> Hey thanks for this proposal!
>
> 2 high-level questions:
>
> What about multi-party tx construction ? By multi-party, let's define
> Alice initiate a tx construction to Bob and then Bob announce a
> construction to Caroll and "bridge" all inputs/outputs
> additions/substractions
> in both directions. I think the current proposal hold, if you are a bit
> more
> tolerant and bridge peer don't send a tx_complete before receiving ones
> from all its peers.
>
> What about transactions format ? I think we should coordinate with Coinjoin
> people to converge to a common one to avoid leaking protocol usage when
> we can hinder under Taproot. Like setting the nLocktime or sorting inputs
> in some protocol-specific fashion. Ideally we should have a BIP for format
> but every layer 2 protocols its own set of messages concerning the
> construction.
>
> > nLocktime is always set to 0x00
> Maybe we can implement anti-fee sniping and mask among wallet core
> txn set:
> https://github.com/bitcoin/bitcoin/blob/aabec94541e23a67a9f30dc2c80dab3383a01737/src/wallet/wallet.cpp#L2519
> ?
>
> > In the case of a close, a failed collaborative close would result in an
> error and a uninlateral close"
> Or can we do first a mutual closing tx, hold tx broadcast for a bit if
> "opt_dual_fund"
> is signaled to see if a tx_construction + add_funding_input for the
> channel is received
> soon ? At least that would be a dual opt-in to know than one party can
> submit a funding-outpoint
> as part of a composed tx ?
>
> Antoine
>
> Le lun. 27 janv. 2020 à 20:51, lisa neigut  a écrit :
>
>> Some of the feedback I received from the check-in for the dual-funding
>> proposal this past Monday was along the lines that we look at simplifying
>> for breaking it into smaller, more manageable chunks.
>>
>> The biggest piece of the dual-funding protocol update is definitely the
>> move from a single peer constructing a transaction to two participants.
>> We're also going to likely want to reuse this portion of the protocol
>> for batched closings and splicing. To that extent, it seemed useful to
>> highlight it in a separate email.
>>
>> This is a change from the existing proposal in the dual-funding PR #524
>> <https://github.com/lightningnetwork/lightning-rfc/pull/524> -- it
>> allows for the removal of inputs and outputs.
>>
>> The set of messages are as follows.
>>
>>
>> Note that the 'initiation' of this protocol will be different depending
>> on the case of the transaction (open, close or splice):
>>
>> 1. type:   440 `tx_add_input`
>>
>> 2. data:
>>
>> * [`32*byte`:`channel_identifier`]
>>
>> * [`u64`:`sats`]
>>
>> * [`sha256`:`prevtx_txid`]
>>
>> * [`u32`:`prevtx_vout`]
>>
>> * [`u16`:`prevtx_scriptpubkey_len`]
>>
>> * [`prevtx_scriptpubkey_len*byte`:`prevtx_scriptpubkey`]
>>
>> * [`u16`:`max_witness_len`]
>>
>> * [`u16`:`scriptlen`]
>>
>> * [`scriptlen*byte`:`script`]
>>
>> * [`byte`:`signal_rbf`]
>>
>> 1. type: 442 `tx_add_output`
>>
>> 2. data:
>>
>> * [`32*byte`:`channel_identifier`]
>>
>> * [`u64`:`sats`]
>>
>> * [`u16`:`scriptlen`]
>>
>> * [`scriptlen*

Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-01-29 Thread Antoine Riard
Hi Max,

Sorry, by transaction format I didn't mean a binary transaction format,
but a format like we use in BOLT3:
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md

My concern is, e.g., LN implementations setting nLocktime to 0x,
Coinjoin wallets always disabling nSequence and Core wallet transactions
doing anti-fee-sniping. Now even if all of them are using Taproot outputs,
you're still leaking what protocol/tooling you're using to an external
observer
due to discrepancies in transaction fields. So we should obfuscate or use
standard values as much as protocol semantics let us, to break chain
analysis heuristics.
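
To make the fingerprinting point concrete, here is a rough Python sketch of
the kind of anti-fee-sniping nLocktime selection Core-style wallets do
(illustrative only, not any wallet's actual code), which collaboratively
built transactions could mimic instead of a fixed value:

import random

def anti_fee_sniping_locktime(tip_height: int) -> int:
    # Usually lock to the current tip, discouraging fee-sniping reorgs.
    locktime = tip_height
    # Occasionally go further back, so transactions delayed before
    # broadcast are not trivially fingerprintable.
    if random.randint(0, 9) == 0:
        locktime = max(0, locktime - random.randint(0, 99))
    return locktime

print(anti_fee_sniping_locktime(614_000))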

Le mer. 29 janv. 2020 à 21:00, lisa neigut  a écrit :

> hi max — great question. PSBT is a great protocol for wallet interop but a
> bit overweight for tx collaboration between two peers
>
> On Wed, Jan 29, 2020 at 17:29 Max Dignan  wrote:
>
>> Hey Antoine,
>>
>> Would PSBT (BIP 174 -
>> https://github.com/bitcoin/bips/blob/master/bip-0174.mediawiki) be a
>> good solution to this?
>>
>> -Max
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] DRAFT: interactive tx construction protocol

2020-01-29 Thread Antoine Riard
Hey thanks for this proposal!

2 high-level questions:

What about multi-party tx construction? By multi-party, let's say
Alice initiates a tx construction to Bob and then Bob announces a
construction to Caroll and "bridges" all input/output
additions/subtractions
in both directions. I think the current proposal holds, if you are a bit more
tolerant and the bridge peer doesn't send a tx_complete before receiving one
from all its peers.
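
To illustrate the bridging rule, a rough Python sketch (names invented, not
part of the draft spec): the bridge relays additions/removals both ways and
only sends its own tx_complete once every bridged peer has sent one.

class BridgePeer:
    def __init__(self, peers):
        self.peers = set(peers)      # e.g. {"alice", "caroll"}
        self.completed = set()       # peers that already sent tx_complete

    def on_add(self, sender, msg):
        # Relay input/output additions/removals to every other peer.
        return [(p, msg) for p in self.peers if p != sender]

    def on_tx_complete(self, sender):
        self.completed.add(sender)
        return self.completed == self.peers   # True: bridge may complete

bridge = BridgePeer(["alice", "caroll"])
bridge.on_add("alice", "tx_add_input ...")
assert not bridge.on_tx_complete("alice")
assert bridge.on_tx_complete("caroll")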

What about transaction format? I think we should coordinate with Coinjoin
people to converge on a common one to avoid leaking protocol usage when
we can hide under Taproot. Like setting the nLocktime or sorting inputs
in some protocol-specific fashion. Ideally we should have a BIP for the format,
but every layer-2 protocol keeps its own set of messages concerning the
construction.

> nLocktime is always set to 0x00
Maybe we can implement anti-fee-sniping and blend in among the Core wallet
txn set:
https://github.com/bitcoin/bitcoin/blob/aabec94541e23a67a9f30dc2c80dab3383a01737/src/wallet/wallet.cpp#L2519
?

> In the case of a close, a failed collaborative close would result in an
error and a uninlateral close"
Or can we first do a mutual closing tx, and hold the tx broadcast for a bit
if "opt_dual_fund"
is signaled, to see if a tx_construction + add_funding_input for the channel
is received
soon? At least that would be a dual opt-in to know that one party can
submit a funding-outpoint
as part of a composed tx?

Antoine

Le lun. 27 janv. 2020 à 20:51, lisa neigut  a écrit :

> Some of the feedback I received from the check-in for the dual-funding
> proposal this past Monday was along the lines that we look at simplifying
> for breaking it into smaller, more manageable chunks.
>
> The biggest piece of the dual-funding protocol update is definitely the
> move from a single peer constructing a transaction to two participants.
> We're also going to likely want to reuse this portion of the protocol for
> batched closings and splicing. To that extent, it seemed useful to
> highlight it in a separate email.
>
> This is a change from the existing proposal in the dual-funding PR #524
>  -- it allows
> for the removal of inputs and outputs.
>
> The set of messages are as follows.
>
>
> Note that the 'initiation' of this protocol will be different depending
> on the case of the transaction (open, close or splice):
>
> 1. type:   440 `tx_add_input`
>
> 2. data:
>
> * [`32*byte`:`channel_identifier`]
>
> * [`u64`:`sats`]
>
> * [`sha256`:`prevtx_txid`]
>
> * [`u32`:`prevtx_vout`]
>
> * [`u16`:`prevtx_scriptpubkey_len`]
>
> * [`prevtx_scriptpubkey_len*byte`:`prevtx_scriptpubkey`]
>
> * [`u16`:`max_witness_len`]
>
> * [`u16`:`scriptlen`]
>
> * [`scriptlen*byte`:`script`]
>
> * [`byte`:`signal_rbf`]
>
> 1. type: 442 `tx_add_output`
>
> 2. data:
>
> * [`32*byte`:`channel_identifier`]
>
> * [`u64`:`sats`]
>
> * [`u16`:`scriptlen`]
>
> * [`scriptlen*byte`:`script`]
>
> 1. type: 444 `tx_remove_input`
>
> 2. data:
>
> * [`32*byte`:`channel_identifier`]
>
> * [`sha256`:`prevtx_txid`]
>
> * [`u32`:`prevtx_vout`]
>
> 1. type: 446 `tx_remove_output`
>
> 2. data:
>
> * [`32*byte`:`channel_identifier`]
>
> * [`u64`:`sats`]
>
> * [`u16`:`scriptlen`]
>
> * [`scriptlen*byte`:`script`]
>
> 1. type: 448 `tx_complete`
>
> 2. data:
>
> * [`32*byte`:`channel_identifier`]
>
> * [`u16`:`num_inputs`]
>
> * [`u16`:`num_outputs`]
>
> 1. type:  448 `tx_sigs`
>
> 2. data:
>
> * [`channel_id`:`channel_identifier`]
>
> * [`u16`:`num_witnesses`]
>
> * [`num_witnesses*witness_stack`:`witness_stack`]
>
> 1. subtype: `witness_stack`
>
> 2. data:
>
> * [`sha256`:`prevtx_txid`]
>
> * [`u32`:`prevtx_vout`]
>
> * [`u16`:`num_input_witness`]
>
> * [`num_input_witness*witness_element`:`witness_element`]
>
> 1. subtype: `witness_element`
>
> 2. data:
>
> * [`u16`:`len`]
>
> * [`len*byte`:`witness`]
>
>
>
> ## General Notes
>
> - Validity of inputs/outputs is not checked until both peers have sent
> consecutive `tx_complete`  messages.
>
> - Duplicate inputs or outputs is a protocol error.
>
> - Feerate is set by the initiator, or in the case of a closing
> transaction, negotiated before the transaction construction is initiated.
>
> - Every peer pays fees for the inputs + outputs they contribute, plus
> enough to cover the maximum estimate of their witnesses. Overpayment of
> fees is permissible.
>
> - Initiator is responsible for contributing the output/input in question,
> i.e. the
>
>   funding output in the case of an opening, or the funding input in the
> case of a close.
>
>   (This means that the opener will pay for the opening output). In the
> case of a splice,
>
>   the initiator of the splice pays for the funding tx's inclusion as an
> input and the
>
>   new 'funding tx' output.
>
> - Any contributor may signal that their input is RBF'able. The 

Re: [Lightning-dev] Speculations on hardware wallet support for Lightning

2020-01-16 Thread Antoine Riard
Hey Zeeman,

tl;dr: A LN node paired with an external signer can be distrusted and LN
funds stay safe in any case,
if the signer is connected to a set of N watchtowers and at least one of them
is non-compromised.

Thanks for this interesting post, I was thinking about LN hardware wallet
support for a while too.
I do think the LN model has its own pitfalls compared to the base layer, but
that doesn't mean we can't
substantially improve the current
monolithic-LN-node-with-unsafe-key-material-in-RAM deployment and
still have automatic processing of HTLCs.

The LN security model differs from the base layer by requiring onchain
monitoring and
reaction to keep your funds safe. That's quite contrary to how HWs have been
designed
until now, where secure access to the UTXO state isn't assumed. That said,
the cool
thing is you may have multiple monitoring backends/watchtowers running on
different
infrastructures; if one of them stays non-compromised and enforces protocol
rules you
should be fine [0].

So let's go through the whole LN operation with a deployment where Alice is
the LN processing
node, Bob and Caroll the channel peers, Sally the external signer, and Will a
set of N watchtowers.
Will is part of the same entity as Alice, runs on different infra, is
seeded with the
same secret (to derive the same local keys), and has authenticated
communication with Sally.

Attack scenarios are: 1) node compromise, where an attacker would leverage
secret keys
to sign closing/justice/sweep transactions to an attacker-controlled
address; 2) node
compromise + peer collusion, where an attacker broadcasts a revoked
commitment/a non-revoked
commitment with an HTLC to timeout/obtains a commitment with an outgoing HTLC
but not the incoming one.
From the perspective of the signer, you can't detect peer collusion, so it
should always
be assumed and the stricter policy enforced.

Alice inits Sally with 2
funding_transaction+remote_sig+per_commitment_point+balance states,
one for each of links AB and AC. Balance states have to be user-input
validated (or
can be deduced if Sally
is also an onchain wallet and the fundings are spent from controlled keys,
modulo a hardcoded push_msat max).

Sally sends these requests to Will through Alice. When Will sees the fundings
confirmed, it replies with
the computed channel_id to Sally and stores the sigs; Sally can then safely
consider these channels
activated.

Bob initiates a payment to Caroll through links AB+AC. Alice receives the
messages and asks Sally to
sign Bob's remote commitment; as it's a balance increase, no proof is asked.
Sally sends Will the vector
of HTLCs (hash+locktime) + the previous-state revocation
secret + per_commitment_point. Will can return
Sally an error if a locktime is in the past or the channel has been closed, so
Sally won't accept a later
outgoing HTLC paired with such an incoming channel. If a revoked commitment is
broadcast on AB and Alice
is compromised, Will generates justice transactions and takes the funds back.

Alice builds the outgoing payment and asks Sally to sign the outgoing
commitment decreasing the local
balance, with the incoming HTLC as a proof. Sally can send Will the new
outgoing commitment
tied with the HTLC proof; if the HTLC proof can't be tied to a valid incoming
chan, an error should
be returned to Sally and the signature aborted.

At HTLC fulfillment by Caroll, Alice should pass the preimage received from
Caroll on to Will.
If Alice is compromised, Will is still able to claim the incoming HTLC. Will
should also be able to parse
onchain HTLCs and extract the preimage to claim backward in case of Alice
withholding the preimage.

At any block reception, Will should be able to broadcast the commitment and
timeout HTLCs in
case of Alice being unreliable/compromised.

To avoid a burning-to-fees attack, if Alice asks Sally to sign a
decreasing-balance commitment
without an HTLC proof, which is credible for update_fee, fees can be bounded
to some value.
Bounds should cover dust HTLCs too.

At channel closing, Will can observe it; if Sally is then asked to validate
any HTLC
proof for that channel, it should return an error.

The external signer should store commitment numbers and balances for each
channel and do key derivation
locally (at least for local keys; for remote ones you can't trust the
provided per_commitment_point anyway).
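
A rough sketch of the kind of policy Sally could enforce (hypothetical
interface, bounds made up for illustration):

MAX_FEE_SAT = 50_000   # assumed bound covering update_fee and dust HTLC exposure

def should_sign(prev_local_balance, new_local_balance, fee_sat,
                new_outgoing_htlc_hashes, incoming_htlc_hashes):
    if fee_sat > MAX_FEE_SAT:
        return False              # bound the burning-to-fees exposure
    if new_local_balance >= prev_local_balance:
        return True               # balance increase: no proof required
    # Balance decrease: every new outgoing HTLC must be backed by an
    # incoming HTLC proof on another channel.
    return all(h in incoming_htlc_hashes for h in new_outgoing_htlc_hashes)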


Voila, I think this describes a master-slave scheme, where the external signer
is coupled with
watchtowers serving as UTXO oracles, while mitigating node compromise. It
would be
fairly complex to design and implement right, but on the long term it's
worth it if you
assume a world of wumbo channels and multi-BTC HTLCs [1].

Devrandom's work is a pretty good headstart towards such a safe API and we
should keep experimenting
with it to patch unthought-of corner cases. The future alternative is every
HW vendor designing
its own LN-implementation-specific support, with a lot of them flawed by
missing LN security
model oddities. The lack of such a standard programming interface on
base-layer HW, and the number
of vulns due to mishandled change or derivation, is something to meditate on.


> It would have to be a sophisticated threshold: a 

Re: [Lightning-dev] Pay-to-Open and UX improvements

2019-12-17 Thread Antoine Riard
Hi Bastien,

The use case you're describing strikes me as similar to a slashing protocol
for a LN node and a watchtower, i.e. punishing
a lazy watchtower for not broadcasting a penalty tx on a remote revoked
state. In both cases you want "if A doesn't do X,
unlock some funds for B".

Here is a rough slashing protocol I've sketched out to someone else off-list;
it may work for your use case if you replace the penalty tx
by the funding transaction, as a way for the trusted channel funder to clear
his liability. You will need onchain interactivity
beforehand, but you may be able to reuse the slashing outpoint for multiple
channel fundings.

Slashing Protocol
--

Alice and Bob lock funds in channel outpoint X. They issue commitment tx N.
Will, the accountable watchtower, locks funds
in a 2-of-2 slashing outpoint Y with Bob the client.

When Alice and Bob update the channel to N', Bob and Will use some output from
commitment N (like the upcoming anchor output)
to create an accountable tx M. M pays to Bob after a timelock + Bob's sig, or
pays to the success_penalty transaction P
with Will's sig + Bob's sig. Success_penalty P has 2 inputs, one from M
and one from J, the justice tx that Bob has given
to Will. J spends Alice's revoked commitment N.

So this slashing protocol should prevent Bob from making false claims, because
you need a revoked broadcast to enable the claim,
and at the same time we use a justice tx output as proof that Will has done
its monitoring+punishment job. Will shouldn't
learn the commitment balance if there is no channel breach, and Alice and Bob
wouldn't be able to collude against Will; if the
watchtower has a penalty tx on Alice's non-revoked commitment tx, that's her
concern.

So the topology would be:

                      to_Bob
                     /
    X  <--  N  <--  J
            ^        ^
             \        \
    Y  <--  M  <--  P --- to_Will
             \
              to_Bob


The main idea of the protocol is to use the transaction topology of a first
contract as proof for a subsidiary contract.

I'm quite sure it's insecure, these are just quick ideas; any thoughts?

(but it would be really cool to have one accountability protocol covering both
the watchtower and pay-to-open use cases, to save
engineering costs)

Cheers,

Antoine


Le mar. 17 déc. 2019 à 16:08, Ethan Heilman  a écrit :

> From where I'm sitting the fact that OP_CAT allows people to build
> more powerful constructions in Bitcoin without introducing additional
> complexity at the consensus layer is a positive not a negative. Using
> OP_CAT or OP_SUBSTRING to enforce ECDSA nonce reuse is a very powerful
> protocol tool for enforcing fairness in layer two protocols.
>
> On Tue, Dec 17, 2019 at 11:27 AM ZmnSCPxj via Lightning-dev
>  wrote:
> >
> > Good morning t-bast,
> >
> > Further, we can enforce that RBF is signalled for every spend of the
> output by:
> >
> > <0> OP_CHECKSEQUENCEVERIFY OP_DROP  OP_SWAP OP_CAT 
> OP_CHECKSIG
> >
> > Requiring that RBF is signalled gives a little more assurance.
> > Suppose ACINQ becomes evil and double-spends the output.
> > The transaction that is posted in the mempool must be marked by RBF due
> to the `OP_CHECKSEQUENCEVERIFY` opcode, since `nSequence` also doubles as
> RBF opt-in.
> > Then anyone who notices the double-spend can RBF the double-spending
> transaction to themselves rather than ACINQ.
> > This also further publishes ACINQ private key, until the winning
> transaction has an `OP_RETURN` output that pays the entire value as fees
> and nobody can RBF it further.
> >
> > This is a minor increase in the assurability of the construction, by
> making any output that is double-spent directly revocable in favor of the
> miners.
> > Again, it requires `OP_CAT`, which is a very dangerous opcode, allowing
> such powerful constructions.
> >
> > Regards,
> > ZmnSCPxj
> >
> >
> > > Thanks a lot David for the suggestion and pointers, that's a really
> interesting solution.
> > > I will dive into that in-depth, it could be very useful for many
> layer-2 constructions.
> > >
> > > Thanks ZmnSCPxj as well for the quick feedback and the `OP_CAT`
> construction,
> > > a lot of cool tricks coming up once (if?) we have such tools in the
> future ;)
> > >
> > > Le mar. 17 déc. 2019 à 16:14, ZmnSCPxj  a
> écrit :
> > >
> > > > Good morning David, t-bast, and all,
> > > >
> > > > > I'm not aware of any way to currently force single-show signatures
> in
> > > > > Bitcoin, so this is pretty theoretical. Also, single-show
> signatures
> > > > > add a lot of fragility to any setup and make useful features like
> RBF
> > > > > fee bumping unavailable.
> > > >
> > > > With `OP_CAT`, we can enforce that a particular `R` is used, which
> allows to implement single-show signatures.
> > > >
> > > > # Assuming 

Re: [Lightning-dev] Time-Dilation Attacks on Offchain Protocols

2019-12-16 Thread Antoine Riard
> It seems pretty easy to me to detect the difference between the normal
> case (Alice's chaintip is old but she's still successfully downloading>
> blocks) and the pathological case (Alice's chaintip is old and she's
> unable to obtain more recent blocks).

I think if the alarm is implemented at the validation level it's not going to
be reliable due to IBD. While connecting and validating headers, it's okay
to
process header timestamps that are hours or days old. Current
IsInitialBlockDownload
logic returns false only once the tip is less than a day old. By slowing block
announcements I
could pin you indefinitely in IBD and the alarms are not going to be triggered.
The issue
being that the comparison point can be manipulated by the attacker.

Now, if the alarm is implemented at the net_processing level, I think
something like
CheckForStaleTipAndEvictPeers is doable but tricky. If you're past
headers-sync with one
peer and the best block header announced by a peer is too far in the past,
disconnect it. Still,
you can't be sure, because maybe this node was buggy or its connection was
faulty, so you need to
repeat this a few times, and if all these peers announce blocks in the past
then something is wrong
and you raise an alarm. But it seems hard to have detection without doing
active peer rotation, and this
may have bad side effects on connectivity.
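
To make the idea concrete, here is a toy Python sketch of the detection side
only (thresholds and names are made up, this is not Bitcoin Core code):

import time

STALE_THRESHOLD = 6 * 3600      # assumed: ~6 hours without a fresh header
MIN_SUSPECT_PEERS = 3           # don't alarm on a single buggy/faulty peer

def stale_tip_alarm(in_ibd, best_header_time_per_peer, now=None):
    if in_ibd:
        return False            # comparison point is attacker-manipulable in IBD
    now = now or time.time()
    stale = [t for t in best_header_time_per_peer.values()
             if now - t > STALE_THRESHOLD]
    return len(stale) >= MIN_SUSPECT_PEERS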

You want a reliable detection mechanism, because if it's cheaply triggered
you now have DoS attack
vectors on the LN layer, like delaying blocks knowing it's going to trigger
the alarm and then a LN processing
node will close its channels. You want to scope the issue beyond "something
is wrong" (and like you mention
there is also the edge case of a legit several-hours delay); that's why
fetching headers through some
redundant communication channel seems better to me.

> To a point, transaction censorship just looks a failure to pay a
> sufficient feerate---so a node will probably fee bump a
> commitment/penalty transaction a few times before it starts to panic.

I don't make the assumption of hashrate attackers, but yes, it's interesting
that you may combine this
with some mempool tricks to optimize the attack.

Antoine

Le lun. 16 déc. 2019 à 10:29, Matt Corallo  a
écrit :

> Right, I kinda agree here in the sense that there are things that very
> significantly help mitigate the issue, but a) I'm not aware of any
> clients implementing it (and the equivalent mitigations in Bitcoin Core
> are targeted at a different class of issue, and are not sufficient
> here), and b) its somewhat unclear what the "emergency action" would be.
> Even if you implement detection, figuring out how to do a fallback is
> nontrivial, especially if you are concerned with user privacy.
>
> Matt
>
> On 12/16/19 9:10 AM, David A. Harding wrote:
> > On Mon, Dec 16, 2019 at 02:17:31AM -0500, Antoine Riard wrote:
> >> If well executed, attacks described stay stealth until it's too late
> >> to react.
> >
> > I suspect the attacks you described are pretty easy to detect (assuming
> > block relay is significantly delayed) by simply comparing the time of
> > the latest block header to real time.  If the difference is too large,
> > then an emergency action should be taken.[1]
> >
> > You mention IBD as confounding this strategy, but I don't think that's
> > the case.  Compare the normal case to the pathological case:
> >
> > - Normal: when Alice is requesting blocks from honest nodes because
> >   she's far behind, those nodes are telling her their current block
> >   height and are promptly serving any blocks she requests.
> >
> > - Pathological: when Alice is requesting blocks from a eclipse attacker,
> >   those dishonest nodes are telling her she's already at the chain tip
> >   even though the latest block they serve her has a header timestamp
> >   that's hours or days old.  (Alternatively, they're reporting the
> >   latest block height but refusing to serve her blocks/headers past a
> >   certain point in the past.)
> >
> > It seems pretty easy to me to detect the difference between the normal
> > case (Alice's chaintip is old but she's still successfully downloading
> > blocks) and the pathological case (Alice's chaintip is old and she's
> > unable to obtain more recent blocks).
> >
> > A possibly optimal attack strategy would be to combine
> > commitment/penalty transaction censorship with plausible block delays.
> > To a point, transaction censorship just looks a failure to pay a
> > sufficient feerate---so a node will probably fee bump a
> > commitment/penalty transaction a few times before it starts to panic.
> > Also to a point, a delay of up to several hours[2] just looks like
> > regular stochastic block production.  By using 

Re: [Lightning-dev] Time-Dilation Attacks on Offchain Protocols

2019-12-15 Thread Antoine Riard
> The proposed countermeasure here of "raising alarms" in case the
> best-block nTime field is too far behind is compelling in a
> SPV-assumption world, though it is far from sufficient if the time delay
> is small

Not that simple without interfering with IBD. While IBDing, alarms should be
off to avoid raising false positives, so an attacker
who succeeds in eclipsing you before you've synced to the tip won't raise it.
And
your validation software needs to remember that you're out of IBD, to avoid
being pinned back in the past, falling back to IBD and disabling alarms.

> This is useful if and only if the Bitcoin fullnode we use is differently
eclisable from the Lightning node we use, e.g. the Bitcoin fullnode is
openly on an IPv4 address while the Lightning node is on a Tor hidden
service.

I don't consider here the case where your Lightning node is eclipsed. It
needs further
research, but IMO it's harder to eclipse an LN node without
detection due to node pubkeys. And given that connectivity is cheaper
than on the base layer (no per-peer inventories to maintain),
if we have such a header protocol you should open connections to well-known
LN pubkeys. Even if you assume an infrastructure attacker,
given encrypted transport it's hard to drop 80-byte headers without
tampering with other messages and so being easily detected.
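
(As an aside, headers fetched over such a redundant channel are cheap to
sanity-check locally; a rough Python sketch of a proof-of-work check on a raw
80-byte header, assuming nothing beyond the standard header layout:)

import hashlib, struct

def check_header_pow(header80: bytes) -> bool:
    assert len(header80) == 80
    nbits = struct.unpack_from("<I", header80, 72)[0]   # compact target
    exponent, mantissa = nbits >> 24, nbits & 0x007fffff
    target = mantissa * (1 << (8 * (exponent - 3)))
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    return int.from_bytes(h, "little") <= target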

Now how are you sure that the LN pubkeys you get are the ones you intended to
connect to? That's an authenticity problem, and I'm not
sure that copy-pasting from LN search engines is the best practice..

> I guess the sophisticated attacks try to trick the victim into believing
that no attack is underway, but I may be wrong.

Yes, a basic eclipse attack where you halt block updates would be easily
detectable. Eclipsing by discarding
commitment/penalty txn still leaves CLTV/CSV time for the victim to react, and
you can't be sure that the victim
doesn't have a fallback way to broadcast the tx. If well executed, the attacks
described stay stealthy
until it's too late to react. I think for analysis we should split
detection from reaction, even if in practice we
use the same communication channel for both.



Le sam. 14 déc. 2019 à 19:07, Orfeas Stefanos Thyfronitis Litos <
o.thyfroni...@ed.ac.uk> a écrit :

> I guess the sophisticated attacks try to trick the victim into believing
> that no attack is underway, but I may be wrong.
>
> Orfeas
>
> On 14 December 2019 19:54:19 CET, "David A. Harding" 
> wrote:
>>
>> On Mon, Dec 09, 2019 at 01:04:07PM -0500, Antoine Riard wrote:
>>
>>> Time-Dilation Attacks on Offchain Protocols
>>> --
>>>
>>
>> What is the advantage of these sophisticated attacks over the eclipse
>> attacker simply not relaying the honest user's commitment or penality
>> transactions to miners?
>>
>> -Dave
>>
>> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Time-Dilation Attacks on Offchain Protocols

2019-12-09 Thread Antoine Riard
Time-Dilation Attacks on Offchain Protocols
===

Lightning works by moving the double-spend problem to a private
state between parties instead of being a public issue verified by every
network peer. The security model is based on revocation of previous
states and, in case of broadcast of any of them, being aware of it to
generate justice transactions claiming the misbehaving peer's onchain outputs
before contest period expiration. This period is driven by the blockchain,
which is here the system clock.

An eclipse attack's end-goal is to monopolize a victim's incoming and
outgoing connections, by this way isolating a node from the rest of its
peers in the network. A successful eclipse attack lets the attacker
filter the victim's view of the blockchain, i.e. he controls transaction
and block announcements [0].

Every LN node must be tied to a bitcoin full-node or light client to
independently verify channel opening/closing, HTLC expiration and
previous/latest state broadcasts. To operate securely, the view of the
blockchain must be up-to-date with the one shared with the rest of the
network. By considering eclipse attacks on the base layer, this assumption
can be broken.

First scenario : Targeting the CSV security delay
--

Alice and Mallory are LN peers with a channel opened at state N. They
use a CSV of 144 blocks as the security parameter for the contestation period.
Mallory is able to identify Alice's full-node and starts to eclipse it.
Once done, he keeps announcing blocks to Alice's node but delays each of them
by 2 min. Given a 10 min average block interval, Alice effectively sees one
block every 12 min: after 6 network blocks Mallory has a height lead of 1 over
Alice, after 24 blocks a lead of 4, after 144 a lead of 24, after 1008 a lead
of 168.
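
(As a quick Python sketch of the arithmetic, assuming a constant 10 min
average interblock time:)

def victim_lag(n_network_blocks: int, delay_min: float = 2.0) -> int:
    # Delaying every block by delay_min makes the victim see one block
    # every 10 + delay_min minutes instead of every 10 minutes.
    elapsed = n_network_blocks * 10.0
    victim_height = int(elapsed // (10.0 + delay_min))
    return n_network_blocks - victim_height

for n in (6, 24, 144, 1008):
    print(n, victim_lag(n))     # -> 1, 4, 24, 168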

After maintaining the eclipse for a week, Mallory will have more than a
day of height advance over Alice's view of the blockchain. The difference being
greater than the CSV timelock, Mallory can broadcast a previous
commitment transaction at state N - 10 with a balance far more favorable
than the legit one at height H, the synchronized height with the rest
of the network.

At revoked commitment transaction broadcast, Alice is slowed down at
height H - 168. At H+144, Mallory can unlock his funds out of the
commitment transaction outputs and thereby close the contestation period
while Alice is still stuck at H-24. When Alice learns about the revoked
broadcast at H, it's already too late. Mallory may have stopped the
eclipse attack after H+144, or he may pursue the attack on Alice because
he is targeting multiple of her channels in parallel.

Second scenario : Targeting the per-hop CLTV-delta
---

Alice, Bob and Caroll are LN peers with channels Alice-Bob and Bob-Caroll.
Bob enforces a cltv_delta of 12 blocks on incoming HTLCs. Alice and Caroll
are both malicious and start to eclipse Bob's full-node until they gain a
lead of 15 blocks on Bob.

At height H, Alice routes a payment E to Caroll through Bob with a base
delta of 24 blocks. On channel AB, the HTLC will expire at height H+24, on
channel BC it will expire at height H+12, while Bob is ticking
at height H-15.

When the real-network blockchain height reaches H+24, Caroll provides
preimage P to Bob and, following protocol rules, gets a new
commitment transaction with her balance increased by HTLC E. At the same time,
Alice unilaterally broadcasts the commitment transaction for AB and claims
back HTLC E with an HTLC-timeout transaction, as its nLockTime is already
final. Bob, wrongly clocking at height H+9, won't have timed out the HTLC on
channel BC, and the provided preimage is now useless as the HTLC has
already been claimed back on the mainnet blockchain.

Attack difficulty
---

Following the Eclipse attack paper publication, multiple counter-measures
have been implemented [1], [2]. Even if it's far harder, this kind of
attack may still be possible for an off-path attacker. Including recent
changes, research should be done to ascertain it. Beyond that, it is still
widely feasible for attackers controlling key infrastructure between
the victim and the network [3]. A simple Tier 3 ISP doing deep packet
inspection and delaying your blocks may be able to practice this class
of attacks.

The case of LN nodes using light clients, like BIP157 or Electrum, as a
backend should be specially considered. Given the low number of servers
providing these services today, it should be trivial for an attacker to run a
swarm of them and from there control the blockchain view of this class of LN
nodes.

Another difficulty of the attack is to map the LN node to its full-node.
This can be trivial if both run in the clear with the same IPv4
address. It would be interesting to scan both networks and see how many
node pairs are in this case. Learning the mapping can still be doable if
they are using an anonymous network like Tor with the same exit node, based on
message timing.

Further inter-layer 

Re: [Lightning-dev] [DRAFT] BOLT 13(?): WatchTower protocol

2019-11-28 Thread Antoine Riard
Thanks for working on this, a bunch of interesting ideas!

I think it could be noted in the motivation that having an interoperable
watchtower protocol is really cool, because every watchtower you add is
a liveness/reliability increase (modulo privacy loss), especially if these
watchtowers are from different implementations, in case of a vuln affecting
the breach
monitoring code of your LN node.

Some generic remarks: you should define another TCP port than the LN one
(9735),
because this is a client-server relationship and you want to avoid leaking p2p
messages to your watchtower.

Messages should also use the TLV format, which would remove a lot of *_len
fields, and each
QoS could be a tlv_record in `appointment`.
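
As a toy illustration of such TLV framing (record types and layout invented
for the example, not taken from any BOLT):

import struct

def tlv_record(rec_type: int, value: bytes) -> bytes:
    # Simplified 1-byte type and length; real BOLT TLV streams use BigSize.
    assert len(value) < 256
    return struct.pack("BB", rec_type, len(value)) + value

appointment = b"".join([
    tlv_record(0, bytes(16)),                  # locator (placeholder)
    tlv_record(2, struct.pack(">I", 614000)),  # start_block
    tlv_record(4, b"\x01"),                    # QoS: `accountability`
])
print(appointment.hex())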

For the init protocol, I was considering the following scheme.


    Alice (client)                        Bob (watchtower)
      |---------------- init ---------------->|
      |<-------------- version ---------------|
      |--------- payment protocol ----------->|
      |                  ...                  |
      |-------- appointment hiring ---------->|
      |                  ...                  |
      |-------- appointment firing ---------->|

The `init` message would contain a method to establish a secure connection
between client and server. The watchtower shouldn't learn the LN pubkey of the
client,
as it may be a conflict of interest and be leveraged to build more
sophisticated
attacks. So the client should implement identity separation properly and use
the `init` message to start a Noise session or something like BIP324.

After secure connection establishment, `version` would be the reply, with
a features field wider than only QoS, e.g. also the payment protocols
supported,
and maybe an invoice for the preferred payment protocol. In the future,
features may
extend beyond channel watching, like timing out client HTLCs or a
synchronization
server for multi-party channels...

The client would then execute the one or multiple steps of the payment
protocol.
This one may be complex, i.e. include parameter negotiation like update
rate-limiting, feerate for the encrypted blob, storage throttling after time X,
...
I do think this kind of parameter belongs there compared to
`appointment_hiring`,
as they may cover watching operations of one or more channels and, secondly,
they
are DoS protections; payment scheme and DoS are going to be really tied
together in the watchtower protocol.

Then `appointment_hiring` with the QoS and their parameters; are there reasons
for
not having them be stable for the lifetime of the client-server interaction?

Finally, some `appointment_firing` to let the client cut its subscription
and
authorize the server to clean storage.


> * `start_block` is further than one block behind the current chain tip.
> * `start_block` is further than one block ahead the current chain tip.

Is a 3-block window enough if the client is a mobile with a lot of latency
and weak
processing compared to a watchtower's competitive full-node? I think it's
only
a block issuance edge case, but maybe it would be easier if the client set
start_block to
current_seen_block_height+6 and the server rejected if that height is already
past.

> minimum_viable_transaction_size and maximum_viable_transaction_size refer
to the minimum/maximum size required to create a valid transaction.

Couldn't these limits implicitly be MAX_STANDARD_TX_WEIGHT and
MAX_STANDARD_TX_NONWITNESS_SIZE, the current mempool policy limits?

Also, nothing is specified on disconnection/reconnection; you want to be
sure that the
watchtower has ACKed every justice tx sent, as every one of them may be
critical. A client
doesn't want to assume its channel is covered when it finally isn't, due to
its network
connection being rotten.


> * MUST set `dispute_delta` to the CLTV value specified in the
> `commitment_transaction`.

What's a dispute delta? Do you mean the justice-CSV locktime encumbering
outputs?
Given this one is fixed at channel opening, it should also be fixed for the
channel
hiring lifetime. And the server should announce a min_dispute_delta in the QoS
`accountability`
announcement.

> * MUST set `transaction_size` to the size of the serialized
> `justice_transaction`, in bytes.

I would remove the transaction size: given that all outputs are
standardized in LN, it would
be a leak on how much payment traffic is going through the client, without
any channel breach.

> * MUST set `transaction_fee` to the fee set in the `justice_transaction`,
> in satoshis.

Generally, the idea of providing a justice tx with pre-signed fees to a
watchtower and expecting
it to do its job reliably somewhere in the future seems a weak
assumption... Every watchtower
following this protocol should handle dynamic fees; that should be a
basic service, not even
a QoS. It may be done through CPFP (which won't be reliable until package
relay) or by RBF'ing the justice
tx through usage of SIGHASH_ANYONECANPAY, with no need for interactivity with
the user at broadcast,
but you 

Re: [Lightning-dev] Rendez-vous on a Trampoline

2019-10-27 Thread Antoine Riard
Hi,

The design reason for trampoline routing was to avoid lite nodes having
to store the whole network graph and compute long-hop routes. The trick
lies in getting away from source-based routing, which has the nice
property of hiding hop position along the payment route (if we forget
payment hash correlation), by enabling a mechanism for route
computation delegation.

This delegation trades hardware requirements against privacy leaks
and higher fees. And we also now have to re-design privacy mechanisms
to constitute an anonymous network on top of the base one. Rendez-vous
is one of them, multiple trampoline hops another one. We may also want to
be inspired by I2P and its concept of outbound/inbound tunnels, like the payer
concatenating a second trampoline onion to the rendez-vous onion acquired
from
the payee. The tricks are known but hard and complex to get right in practice.

That said, the current trampoline proposal, which enables legacy payee doxing
without
any opt-in from its side, is a bit gross. Yes, rendez-vous routing by the
receiver solves
it (beyond being cool in itself)! But it is stuck on the same requirement to
update payee nodes.
If so, implementing trampoline routing on the receiver could be easier and
let it hide behind the
feature flag.

If Eclair goes forward with trampoline, are you going to enforce that
trampoline
routing is only done with payee flagging support?

That's a slowdown, but if not, people are going to be upset learning that a
chunk of their
incoming payments is potentially logged by some intermediate node.

Also, I'm a bit worried about how AMP is going to interact with trampoline
routing.
It depends on topology, but a naive implementation only using public channels
and a one-hop
trampoline node would let the trampoline learn who the payer is by doing an
intersection
of the multiple payment paths.

Long-term we may be pleased to have these flexible tools to enable wide-scale
networking without requiring huge routing tables for everyone, but I think we
should be really careful about how we design and deploy this stuff to avoid
another
false promise of privacy like we have known on the base layer, e.g.
bloom filters.

Antoine

Le ven. 25 oct. 2019 à 03:20, Corné Plooy via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Cool: non-source rendez-vous routing. Getting closer to 2013 Amiko Pay,
> with the added experience of 2019 Lightning with Sphinx routing and AMP.
>
> https://cornwarecjp.github.io/amiko-pay/doc/amiko_draft_2.pdf
>
> (esp. section 2.1.3)
>
> Please forgive the use of the term "Ripple". 2013 was a different time.
>
>
> CJP
>
>
> On 22-10-19 14:01, Bastien TEINTURIER wrote:
> > Good morning everyone,
> >
> > Since I'm a one-trick pony, I'd like to talk to you about...guess
> > what? Trampoline!
> > If you watched my talk at LNConf2019, I mentioned at the end that
> > Trampoline enables high AMP very easily.
> > Every Trampoline node in the route may aggregate an incoming
> > multi-part payment and then decide on how
> > to split the outgoing aggregated payment. It looks like this:
> >
> >  . 1mBTC ..--- 2mBTC ---.
> > /\ /
> > \
> > Alice - 3mBTC --> Ted -- 4mBTC > Terry - 6mBTC
> > > Bob
> >\ /
> > `--- 2mBTC --'
> >
> > In this example, Alice only has small-ish channels to Ted so she has
> > to split in 3 parts. Ted has good outgoing
> > capacity to Terry so he's able to split in only two parts. And Terry
> > has a big channel to Bob so he doesn't need
> > to split at all.
> > This is interesting because each intermediate Trampoline node has
> > knowledge of his local channels balances,
> > thus can make more informed decisions than Alice on how to efficiently
> > split to reach the next node.
> >
> > But it doesn't stop there. Trampoline also enables a better
> > rendez-vous routing than normal payments.
> > Christian has done most of the hard work to figure out how we could do
> > rendez-vous on top of Sphinx [1]
> > (thanks Christian!), so I won't detail that here (but I do plan on
> > submitting a detailed spec proposal with all
> > the crypto equations and nice diagrams someday, unless Christian does
> > it first).
> >
> > One of the issues with rendez-vous routing is that once Alice (the
> > recipient) has created her part of the onion,
> > she needs to communicate that to Bob (the sender). If we use a Bolt 11
> > invoice for that, it means we need to
> > put 1366 additional bytes to the invoice (plus some additional
> > information for the ephemeral key switch).
> > If the amount Alice wants to receive is big and may require
> > multi-part, Alice has to decide upfront on how to split
> > and provide multiple pre-encrypted onions (so we need 1366 bytes /per
> > partial payment/, which kinda sucks).
> >
> > But guess what? Bitcoin Trampoline fixes that*™*. Instead of doing the
> > pre-encryption on a normal onion, Alice
> > would 

Re: [Lightning-dev] Using Per-Update Credential to enable Eltoo-Penalty

2019-07-16 Thread Antoine Riard
Hi ZmnSCPxj,

Awesome summary, it's laid out better than I did myself!

> Thus, I would like to thank you for your tolerance and continued
attention.

Personally, it's a pleasure to read your weird but always thoughtful
proposals in other threads :)

"We have identified two requirements:

1.  We must identify which participant initiated the unilateral close
onchain.
We do so that if later, we find that the unilateral close was to an
older state, we can punish the participant that initiated the unilateral
close.
2.  We must identify that a unilateral close was, in fact, to an older
state."

I think you have well scoped the assignment problem. But I would add
another requirement:

3. The identity commitment must not be replayable or counterfeitable by another
participant.

I thought of using unique preimages in a previous version of my proposal, but
it seems really unsafe due to reorgs and mempool snooping. If another
channel participant is able to take your identity preimage and use it
to spend with a lower state update tx, you are now flagged as the
cheater. So we want the preimage to be tied to a state number, and the best
scheme I've thought of is not using preimages but signatures.

Can we build a commitment to a preimage and state number without signatures?

> * comment: we use the common key, and the requirement to provide the
Alice fingerprint preimage, *and* the requirement to enable RBF, to force
the output to be revoked to miners as fees: when the entire output is given
as fee, no higher RBF is possible.
  * comment: outputs may become too tiny to care about if we
split up a tiny reserve between dozens of honest participants.

On the other side, giving back funds to participants lets them cover the
expenses of paying onchain fees for last-state enforcement.

> * Add branches for revocation, where proof that one side attempted to
steal allows the other side to control the coin immediately.

Hmm, that's the point we argue about a lot, but by broadcasting a previous
update,
if the HTLC is Bob->Caroll, Bob didn't only try to rob Caroll
but potentially every other channel participant. Why should they get a
part of Bob's funds as compensation?

> It is safe to publish the revocation key if you publish only the latest
Update Transaction, as the latest Update Transaction cannot enable any
Litigation Transaction.

Exactly, even if you're an honest participant you have to commit to a secret
or revocation key, because the blockchain can
only know after the nSequence expiration delay of the Friendly Settlement
transaction that you are honest.

Le lun. 15 juil. 2019 à 05:58, ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> a écrit :

> Good morning list,
>
> As usual, I am spamming the list for my amusement.
>
> Thus, I would like to thank you for your tolerance and continued attention.
>
> 
>
> We have identified two requirements:
>
> 1.  We must identify which participant initiated the unilateral close
> onchain.
> We do so that if later, we find that the unilateral close was to an
> older state, we can punish the participant that initiated the unilateral
> close.
> 2.  We must identify that a unilateral close was, in fact, to an older
> state.
>
> Thus, I will counterpropose a construction similar to that originally
> proposed here, but with the weaknesses fixed and key details filled in.
> (while part of it is similar to the Decker-Russell-Osuntokun "eltoo", it
> is different enough that I would not suggest calling it "eltoo-penalty")
>
> 
>
> On initiation, Alice, Bob, and Charlie indicate:
>
> * Alice/Bob/Charlie "fingerprint" hash/preimage.
>   Alice/Bob/Charlie publish the fingerprint hashes, but keep the
> fingerprint preimages secret.
> * Alice/Bob/Charlie "normal" pubkey.
> * Alice/Bob/Charlie "lawyer" pubkey.
> * All participants indicate a `delay`, a number of blocks.
>   Funds may be locked, in worst case, up to `2 * delay`.
>
> We also introduce a "common" key whose private key is known to all
> participants.
> For example, we can use a key whose private key is `SHA256("ZmnSCPxj is a
> human being and not any kind of AI")` as a consensus-accepted fact.
>
> We have the below transactions:
>
> * Funding transaction
>   * inputs: unspecified
>   * outputs:
> * change output(s): unspecified
> * funding output:
>   * Internal Taproot Key: `P = MuSig(Alice normal pubkey, Bob normal
> pubkey, Charlie normal pubkey)`
>   * Scripts:
> * `OP_1 OP_CHECKSIGVERIFY OP_HASH160 
> OP_EQUALVERIFY`
> * `OP_1 OP_CHECKSIGVERIFY OP_HASH160 
> OP_EQUALVERIFY`
> * `OP_1 OP_CHECKSIGVERIFY OP_HASH160 
> OP_EQUALVERIFY`
>
> * Update Transaction
>   * comment: this transaction initiates a unilateral close attempt.
>   * comment: Updates have a "hidden" `n`, which is an "update" number
> incrementing from 0.
> This number could be encoded as `nLockTime` by using `500e6 + n`, but
> in principle does not need to be encoded there (we could use the current

Re: [Lightning-dev] Using Per-Update Credential to enable Eltoo-Penalty

2019-07-16 Thread Antoine Riard
Hi ZmnSCPxj,

> Just a minor correction here: your own commitment transactions are not
> being signed until we want to release them. Therefore having access to
> your DB doesn't give an attacker the ability to frame the user with an
> old version, since that'd still require access to the keys to add our
> own signature.

Okay, do you plan to hand over keys to the watchtower so it is able to sign
and broadcast local commitment
transactions, or are you thinking about another design like giving it a
signed commitment? That's an area
we still need to think more about for rust-lightning too.

> And this signature is intended to be used to identify Alice as the
culprit, by also being used to sign for , but `SIGHASH_ALL`
would strongly bind the signature to this particular transaction.

Okay, you maybe also need a special pubkey type to force signatures to use
a given SIGHASH. That's a lot of tricks, but
it could be a use-case for taproot introducing key types.

> Now, an economically-maximizing thief would prefer to steal as much as
possible, thus such a thief would initiate a channel, then send out funds
until only 1% is left to the thief, then "freeze" the channel (fail all
incoming HTLCs from this channel without bothering to check if they would
succeed) until the participant is offline, then perform the theft attempt
by using their initial commitment (the one where they own all the funds in
the channel).

I think it's quite dubious to base our reasoning on the current network
situation and from there say that on-chain fees in the eltoo model are enough
to deter attackers (or even lazy parties). Channels are going to increase
in size and people are learning how they are structured.
With more knowledge of timeout locks and deltas, attackers may try to game
them, especially by exploiting other elements
like mempool congestion or eclipsing your onchain node. I see the penalty as
the
price the attacker has to pay to test your onchain monitoring setup.
If it's cheap and the probability of winning is high, given he is able to
trouble the confirmation of your update tx, a rational attacker will try.

"Still, a node could refuse incoming channel open requests for
Decker-Russell-Osuntokun that are larger than 100 times the typical fee for
a 1-input 1-output transaction, and still get similar protection to
Poon-Dryja using the de facto standard 1% reserve"

And maybe people are going to limit the size of their channels to stay within
a multiple
of onchain fees, but they may also refuse to open
channels with unknown or "untrusted" parties. Relying on economic
incentives is better than relying on social ones.

> Such outputs have shared ownership, as the offerer of the HTLC will be
able to reclaim the money after the timelock, and the accepter of the HTLC
will be able to reclaim the money before the timelock.

Yes, I think a contract design principle is that we should enforce an order
of claims between different channel participants. In a
multiparty channel between Alice, Bob, Caroll and Dave, if Alice offered an
HTLC to Bob and then cheats, and Bob comes with a
preimage to unlock the HTLC, his claim shouldn't be canceled by the
punitive one raised by Caroll or Dave.

"I would argue that channel factories are better used than multiparticipant
channels, as channel factories allow *some* limited transport of funds even
if one participant is offline, whereas multiparticipant channels prevent
*all* transport of funds as soon as any one participant is offline"

I agree too; with my current proposal I was just thinking of a multiparty
setup and not a 2-party one because it's far more
adversarial. IMO, channel factories are a better design because I would say
the "valuespace" is well isolated between
participants. I.e. if Alice has 1, Bob has 1, Caroll has 3, Alice shouldn't
be able to offer an HTLC worth 3 to Bob. It's more
tricky to enforce on a multiparty channel than on a channel factory.

> Let us consider what happens if Alice the thief performs the theft
attempt during various states:
> * Suppose the current state is that Charlie owns the entire funds of the
channel right now.
> Alice steals by publishing old state, but the old-state Alice->Bob HTLC
is revocable only by Bob.
> Thus the money (that rightfully belongs to Charlie) goes to Bob instead.
> * Alice and Bob could be in cahoots, with Bob as the mastermind and Alice
as the fall guy.
> * Suppose we decide that the Alice->Bob HTLC is revocable split by Bob
and Charlie.
> Suppose the current state is that Bob owns the entire funds of the
channel right now.
>  Alice steals by publishing old state, but the old-state Alice->Bob HTLC
is revocable split by Bob and Charlie.
>  Thus the money (that rightfully belongs only to Bob) goes partly to
Charlie instead.
> * Alice and Charlie could be in cahoots, with Charlie as the mastermind
and Alice as the fall guy.

If an HTLC output is claimable either by a preimage + sig from a channel
participant or by a MuSig from all
channel participants to a punitive tx, the punitive tx may be 

[Lightning-dev] Using Per-Update Credential to enable Eltoo-Penalty

2019-07-12 Thread Antoine Riard
Hi all,

Eltoo has been criticized for lowering the cost for a malicious party to
test your monitoring of the chain. If we're able to reintroduce some
form of punishment without breaking transaction symmetry, that would be
great.

Transaction symmetry implies that we can't deduce, from observing a
txid, which party broadcast a previous state. How do we assign the
faulty broadcast to the right party, to punish it in consequence?
Thanks to taproot we have cheap witness asymmetry.
Witness asymmetry can be used as a way to force the broadcaster to reveal
a secret, and so commit that the transaction is the latest one.

If the party misbehaves, we wish to use the revealed secret to punish
him in a second-stage transaction. Doing so would be really insecure
in case of a reorg or even mempool monitoring, by enabling a replay attack
of your committed secret on a lower state update tx, i.e. Mallory
could counterfeit being Alice, and so enable the use of a punishment tx
against an honest peer.

To solve the assignment problem, we need per-update credentials,
a secret committed to a state number. You need a scheme where your
highest credential can't be used against you, while at the same time, if some
attacker broadcasts a transaction with a lower credential, you are able to
punish him.

How do we make Bitcoin Script aware of a secret committed to
a lower state number? To do so, we may use some SIGHASH magic: if you sign
two messages with the same key and we can be sure that the only difference
between
them is the nLocktime (encoding the state number in eltoo), that means you
tried to breach the contract.

Without access to arbitrary messages on the stack, the only messages we can
enforce signatures on are Bitcoin transactions. We force a party
broadcasting an Update tx to sign it with
SIGHASH_ANYPREVOUTSCRIPT|SIGHASH_NONE|SIGHASH_SINGLE. If someone can show
a
Litigation Tx with a higher state than the Update, we know that this one
has
been revoked, and someone is cheating among the channel parties. We enter a
Litigation phase; the Settlement Tx will be encumbered by a Challenge Tx,
against
which you will need to produce a signature with the same SIGHASH flags as
the Update Tx.
The only difference will be the nLocktime inherited from the Litigation.

Assume Alice is trying to cheat. Now Bob can take the signature from her
broadcast Update tx
and Alice’s signature on the Challenge tx, and pass them as witness to a
script verifying their validity
and identity. If their validity is true and identity is false, you can
spend with a Justice tx,
splitting Alice’s funds between the other parties. If validity is true and
identity is true, then the script should fail. After timelock expiration,
if no one has proven Alice misbehaved,
she can redeem her funds.
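
(A small Python sketch of the state-number comparison this relies on, with n
carried in nLocktime as 500e6 + n, as in the scripts below:)

STATE_BASE = 500_000_000

def state_of(nlocktime: int) -> int:
    assert nlocktime >= STATE_BASE
    return nlocktime - STATE_BASE

def is_revoked(update_nlocktime: int, litigation_nlocktime: int) -> bool:
    # A Litigation tx with a higher state than the broadcast Update tx
    # proves the Update was to an already-revoked state.
    return state_of(litigation_nlocktime) > state_of(update_nlocktime)

assert is_revoked(STATE_BASE + 10, STATE_BASE + 42)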

Eltoo-Penalty Transaction Tree
==




                             Friendly Settlement Tx
                            /
                           /                           Challenge Tx -- Justice Tx
                          /                           /
Funding-Output -- Update Tx -- Litigation Tx -- .. -- Hostile Settlement Tx -- Challenge Tx -- Justice Tx
                                                      \
                                                       Challenge Tx -- Justice Tx


Eltoo-Penalty Scripts


(I've omitted chaperon signatures)

FUNDING_OUTPUT:
output 0:
Q = P + tG
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY" (Alice script path)
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY" (Bob script path)
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY" (Caroll script path)
]

UPDATE TX:
nLocktime: 500e6 + n
output 0:
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY" (friendly settlement script path)
"OP_1 CHECKSIGVERIFY 500e6+n OP_CLTV OP_DROP" (litigation script path)
witness:
"sig(A, hash_type=SINGLE|ANYPREVOUTANYSCRIPT|NONE) sig(P,
hash_type=SINGLE)"  (Alice commitment signature)
"sig(B, hash_type=SINGLE|ANYPREVOUTANYSCRIPT|NONE) sig(P,
hash_type=SINGLE)"  (Bob commitment signature)
"sig(C, hash_type=SINGLE|ANYPREVOUTANYSCRIPT|NONE) sig(P,
hash_type=SINGLE)"  (Caroll commitment signature)

LITIGATION TX:
nLocktime: 500e6 + n
nSequence: [delay]
output 0:
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIG" (litigation script path)
"OP_1 CHECKSIGVERIFY" (hostile settlement script path)
witness:
"sig(P, hash_type=SINGLE|ANYPREVOUTANYSCRIPT)


HOSTILE SETTLEMENT TX:
nLocktime: 0
nSequence: [delay]
output 0: (to_Alice)
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY 500e6+n OP_CLTV OP_DROP"
(Alice challenge script path)
]
output 1: (to_Bob)
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY 500e6+n OP_CLTV OP_DROP" (Bob
challenge script path)
]
output 2: (to_Caroll)
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY  CHECKSIGVERIFY 500e6+n OP_CLTV OP_DROP"
(Caroll challenge script path)
]
output N (pending HTLCs)
witness:
"sig(P, hash_type=ALL)

CHALLENGE TX: (Alice case)
nLocktime: 500e6 + n
nSequence: 0
output 0:
P = muSig(A,B,C)
scripts = [
"OP_1 CHECKSIGVERIFY