Re: [Lightning-dev] [bitcoin-dev] Batch exchange withdrawal to lightning requires covenants

2023-10-18 Thread Greg Sanders
> I do not know if existing splice implementations actually perform such a
> check.
> Unless all splice implementations do this, then any kind of batched
> splicing is risky.

As long as the implementation decides to splice again at some point when a
prior splice isn't confirming, it will self-resolve once any subsequent
splice is confirmed.

Cheers,
Greg

On Tue, Oct 17, 2023 at 1:04 PM ZmnSCPxj via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> Good morning Bastien,
>
> I have not gotten around to posting it yet, but I have a write-up in my
> computer with the title:
>
> > Batched Splicing Considered Risky
>
> The core of the risk is that if:
>
> * I have no funds right now in a channel (e.g. the LSP allowed me to have
> 0 reserve, or this is a newly single-funded channel from the LSP to me).
> * I have an old state (e.g. for a newly single-funded channel, it could
> have been `update_fee`d, so that the initial transaction is old state).
>
> Then if I participate in a batched splice, I can disrupt the batched
> splice by broadcasting the old state and somehow convincing miners to
> confirm it before the batched splice.
>
> Thus, it is important for *any* batched splicing mechanism to have a
> backout, where if the batched splice transaction can no longer be confirmed
> due to some participant disrupting it by posting an old commitment
> transaction, either a subset of the splice is re-created or the channels
> revert back to pre-splice state (with knowledge that the post-splice state
> can no longer be confirmed).
>
> I know that current splicing tech is to run both the pre-splice and
> post-splice state simultaneously until the splicing transaction is
> confirmed.
> However we need to *also* check if the splicing transaction *cannot* be
> confirmed --- by checking if the other inputs to the splice transaction
> were already consumed by transactions that have deeply confirmed, and in
> that case, to drop the post-splice state and revert to the pre-splice state.
> I do not know if existing splice implementations actually perform such a
> check.
> Unless all splice implementations do this, then any kind of batched
> splicing is risky.
>
> Regards,
> ZmnSCPxj
>
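The backout check described above can be sketched as follows (a toy, with hypothetical helper names and data shapes; no implementation's actual API):

```python
from dataclasses import dataclass

@dataclass
class Spender:
    txid: str
    confirmations: int

def splice_can_never_confirm(splice_txid, splice_inputs, confirmed_spenders,
                             min_depth=6):
    """confirmed_spenders maps outpoint -> Spender for spends already in the
    chain. Returns True when some splice input was consumed by a *conflicting*
    transaction buried at least min_depth blocks: the splice transaction can
    no longer confirm, so the post-splice state should be dropped and the
    channel reverted to the pre-splice state."""
    for outpoint in splice_inputs:
        spender = confirmed_spenders.get(outpoint)
        if spender is None:
            continue  # input still unspent (or spent only in mempool)
        if spender.txid != splice_txid and spender.confirmations >= min_depth:
            return True  # conflicting spend is deeply confirmed
    return False
```

The `min_depth` threshold is an assumption; the point is only that the check is cheap given a view of confirmed spends of the splice's inputs.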
> ___
> bitcoin-dev mailing list
> bitcoin-...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Practical PTLCs, a little more concretely

2023-09-06 Thread Greg Sanders
Hi devs,

Since taproot channels are deploying Soon(TM), I think it behooves us to
turn some attention to PTLCs as a practical matter, drilling down a bit
deeper. I've found it helpful to recap technical topics that have been
going on for over half a decade (à la ln-symmetry), and PTLCs fall into that
bucket. In that spirit I made an attempt:

https://gist.github.com/instagibbs/1d02d0251640c250ceea1c5ec163

I spent some time drilling down exactly what the messages would look like,
varying:

1) single-sig adaptors vs MuSig2
2) async updates vs sync aka "simplified updates"
3) amount of message re-ordering
4) futuristic updates to mempool/consensus, including ANYPREVOUT like
updates

Hopefully all these choices are compatible/orthogonal to schemes like
overpayment, stuckless, and other exciting jargon to increase reliability
of payments. The messages detailed are pedantic and could be packaged any
which way; I just couldn't keep track of correctness without detailing what
was being sent.
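As a refresher on the primitive behind single-sig adaptors (option 1 above), here is a toy Schnorr-style adaptor signature worked in a tiny multiplicative group (p=23, q=11), purely to show the algebra. Real PTLCs would use secp256k1 / BIP340-style signatures; all names and the hash construction here are illustrative:

```python
import hashlib

# Toy group: g = 4 generates the order-11 subgroup of Z_23*. NOT secure.
p, q, g = 23, 11, 4

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def pre_sign(x, k, t, msg):
    """Signer with secret key x produces an adaptor 'pre-signature' that is
    only completable by whoever knows the adaptor secret t (T = g^t)."""
    R, T = pow(g, k, p), pow(g, t, p)
    e = H(R * T % p, pow(g, x, p), msg)
    return R, T, (k + e * x) % q  # s' = k + e*x

def complete(s_pre, t):
    return (s_pre + t) % q        # full signature s = s' + t

def extract(s, s_pre):
    return (s - s_pre) % q        # seeing both s and s' reveals t

x, k, t, msg = 7, 5, 3, "htlc->ptlc"
R, T, s_pre = pre_sign(x, k, t, msg)
P = pow(g, x, p)
e = H(R * T % p, P, msg)
# Pre-signature verifies against R alone: g^s' == R * P^e
assert pow(g, s_pre, p) == R * pow(P, e, p) % p
s = complete(s_pre, t)
# Completed signature verifies against R*T: g^s == (R*T) * P^e
assert pow(g, s, p) == (R * T % p) * pow(P, e, p) % p
# Claiming the payment (publishing s) leaks the payment secret t
assert extract(s, s_pre) == t
```

The same algebra is what makes the MuSig2 variant harder: the nonce R becomes an interactive aggregate, which is where most of the extra message rounds come from.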

I didn't specify "fast-forward" as previously envisioned since that was
deemed "future work", and once there I think the engineering choices
balloon up quite a bit.

Hopefully this is a useful refresher and can perhaps start the discussion
of where on the performance/engineering lift curve we want to end up, maybe
even leading up to a standardization effort. Or maybe all this is wrong,
let me know!

Best,
Greg


Re: [Lightning-dev] "Updates Overflow" Attacks against Two-Party Eltoo ?

2022-12-13 Thread Greg Sanders
Hi Antoine,

Nothing you say here (about vanilla eltoo) sounds absurd.

> Therefore, transaction RN.0 should fail to punish update transaction 0 as
it's double-spent by update transaction 1, transaction RN.1 should fail to
punish update transaction 1 as it's double-spent by update transaction 2,
transaction RN.2 should fail to punish update transaction 2 as it's
double-spent by update transaction 3...

>While there is a RBF-race, I think this can be easily won by Malicia by
mass-connecting on the transaction-relay network and ignoring the Core
transaction-relay delay timers (here for privacy purposes iirc).

Right, there are some network-level games that can be played. However,
honest participants can be given a leg up through, as AJ notes, alternative
relays, or even a "rebinder" widget, which means only the single
highest-fee-bidding copy of the final update transaction has to make it to
the miners' mempools. So if the honest party bids X% of the HTLC value in
fees, the attacker will be paying more than that every single block,
constantly racing, until it loses either a mempool race or a bidding round.

I'm starting simple and assuming we don't need all this machinery, and hope
the risk of the counterparty either losing the race a single time, or being
outbid a single time, is enough to dissuade an attack at all.

> A mitigation could be for a fee-bumping strategy to adopt a scorched-earth
approach when the HTLC-timeout is approaching and there is a corresponding
incoming HTLC. When the HTLC-timeout is near expiration (e.g. X blocks from
incoming HTLC expiry), probably 100% of the HTLC value should be burnt in
update transaction fees.

I kind of always thought that's how HTLCs would have worked in theory and
practice eventually. As the clock runs down you're willing to spend more to
take less of the full value.
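A sketch of what such a deadline-driven bump budget might look like; the linear ramp, the 10-block scorched window, and all names are assumptions for illustration, not a proposal:

```python
def fee_budget(htlc_value_sat, blocks_left, scorched_window=10):
    """How much of the HTLC value we're willing to burn in fees, as a
    function of blocks remaining until the incoming HTLC expiry."""
    if blocks_left <= scorched_window:
        return htlc_value_sat  # burn it all rather than lose the HTLC
    # linear ramp: ~1% when expiry is far away, approaching 100% at the window
    frac = max(0.01, 1.0 - (blocks_left - scorched_window) / 100.0)
    return int(htlc_value_sat * min(frac, 1.0))
```

With 110 blocks left this budgets 1% of the value; at 60 blocks, 50%; inside the final 10 blocks, everything.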

> Assuming the attack holds, and scorched-earth approaches are adopted by
default to mitigate this concern, there is a second-order concern: we might
open Lightning channels to miner-harvesting attacks, where the confirmation
of the update transactions is deferred to kick out the scorched-earth
reaction of the fee-bumping engine. In my opinion, this would still be an
improvement, as we're moving a (plausible) security risk triggerable by a
Lightning counterparty to a (hypothetical) one triggerable by a wide
coalition of miners.

As I said before, I think this is already the case. We're assuming
liveness of the blockchain for these contracts; if a unilateral close gets
targeted by a large fraction of miners, I don't think eltoo is the risk,
it's the HTLC contract that's the risk.

> There is another caveat: it sounds as if the update transaction can be
malleable (i.e. SIGHASH_SINGLE|ANYONECANPAY), update transactions across
Lightning channels could be aggregated by the attacker, changing the
economics there in a way unfavorable to the victims. I.e. the attacker can
select the targeted channels, but the victims cannot coordinate with each
other to respond with a collective fee-bumping.

In my current eltoo design, I'm assuming *by policy* APO (V3?) transactions
can only have one input, and each transaction is only allowed a single
ephemeral anchor which is attached but not committed to by the
SIGHASH_SINGLE|APOAS signature. This results in a 1-input-2-output
transaction that isn't malleable. If and when we figure out how to un-pin
these kinds of transactions, this policy can be relaxed, and we can get the
benefits of aggregated transactions.

Cheers,
Greg

On Mon, Dec 12, 2022 at 8:39 PM Antoine Riard 
wrote:

> Hi list,
>
> The following post describes a potential attack vector against eltoo-based
> Lightning channels, from my understanding also including the recent
> two-party eltoo w/ punishment construction. While I think this concern has
> been known for a while among devs, and I believe it's mitigable by adopting
> an adequate fee-bumping strategy, I still wonder how exactly it affects
> eltoo-based constructions.
>
> AFAICT, the eltoo 2-stage proposal relies on a series of pre-signed update
> transactions, of which in the optimistic case only one of them confirms.
> There is a script-spend path, where an update transaction N can spend an
> update transaction K, assuming K checksigverify.
>
> The attack purpose is to delay the confirmation of the final settlement
> transaction S, to double-spend a HTLC forwarded by a routing hop. I.e you
> have Ned the routing hop receiving the HTLC from Mallory upstream and
> sending the HTLC to Malicia downstream. Thanks to the cltv_expiry_delta,
> the HTLC forward should be safe as Ned can timeout the HTLC on the
> Ned-Malicia link before it is timed-out by Mallory on the Mallory-Ned link.
> In case of timeout failure, Malicia can claim the HTLC forward with the
> corresponding preimage, at the same block height as Mallory times out the
> HTLC, effectively double-spending Ned.
>
> The cltv_expiry_delta requested by Ned is equal to N=144.
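The forwarding-safety condition Antoine describes for Ned reduces to a one-line check (illustrative names, heights in blocks):

```python
def forward_is_safe(incoming_expiry, outgoing_expiry, cltv_expiry_delta=144):
    """Ned may forward only if he can time out the downstream (Malicia) HTLC
    at least cltv_expiry_delta blocks before the upstream (Mallory) HTLC
    expires, leaving room to confirm the timeout on-chain."""
    return incoming_expiry - outgoing_expiry >= cltv_expiry_delta
```

The attack below is precisely about eating into that 144-block buffer by delaying confirmation of the settlement transaction.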
>
> The attack scenario works in the following way: Ma

Re: [Lightning-dev] Two-party eltoo w/ punishment

2022-12-08 Thread Greg Sanders
Antoine,

> While the 2*to_self_delay sounds like the maximum time delay in the state
publication scenario where the cheating counterparty publishes an old state
then the honest counterparty publishes the latest one, there could be the
case where the cheating counterparty broadcasts a chain of old states, up to
the mempool's `limitancestorcount`. However, this chain of eltoo transactions
could be replaced by the honest party paying a higher feerate (assuming
something like nversion=3). I think there might still be an attack
triggerable under certain economic conditions, where the attacker overbids
with the higher-feerate transaction until the HTLC cltv expires. If this
attack is plausible, it could even be opportune when batching against
multiple channels, where the victims are not able to coordinate a response.

Feel free to assume that we've worked around mempool pinning for all of
these discussions, otherwise we're pretty hosed regardless. I'm implicitly
assuming V3+ephemeral anchors, which disallows batched bumps, for example.
You'll need to give some room for "slippage", but I think
shared_delay/2*shared_delay is going to end up dominating UX in any
non-layered scheme.

> I wonder if the introduction of watchtower specific transactions doesn't
break the 2*to_self_delay assumption

This architecture doesn't suffer from 2*to_self_delay, and each transition
aside from Slow/Settle/SX.y has no relative timelock, so that single relative
timelock is all that matters. It does introduce a watchtower cycle, so it's
no longer a one-shot architecture, or even exactly k-shot; it ends up
looking like vanilla eltoo for that single path.

Cheers,
Greg

On Thu, Dec 8, 2022 at 2:14 PM Antoine Riard 
wrote:

> Hi AJ,
>
> The eltoo irc channel is ##eltoo on Libera chat.
>
> >  - 2022-10-21, eltoo/chia:
> https://twitter.com/bramcohen/status/1583122833932099585
>
> On the eltoo/chia variant, from my (quick) understanding, the main
> innovation aimed for is limiting a counterparty to publishing eltoo states
> at most once, by introducing a cryptographic puzzle
> where the witness can be produced once and only once? I would say you
> might need the inheritance of the updated scriptpubkey across the chain of
> eltoo states, with a TLUV-like mechanism.
>
> > The basic idea is "if it's a two party channel with just Alice and Bob,
> > then if Alice starts a unilateral close, then she's already had her say,
> > so it's only Bob's opinion that matters from now on, and he should be
> > able to act immediately", and once it's only Bob's opinion that matters,
> > you can simplify a bunch of things.
>
> From my understanding, assuming Eltoo paper terminology, Alice can publish
> an update transaction K, and then Bob can publish a later update
> transaction N spending it. The main advantage of this
> construction I can see is a strict bound on the shared_delay encumbered in
> the on-chain publication of the channel?
>
> > fast forwards: we might want to allow our channel partner
> > to immediately rely on a new state we propose without needing a
> > round-trip delay -- this potentially makes forwarding payments much
> > faster (though with some risk of locking the funds up, if you do a
> > fast forward to someone who's gone offline)
>
> IIRC, there has already been a "fast-forward" protocol upgrade proposal
> based on update-turn in the LN-penalty paradigm [0]. I think reducing the
> latency of HTLC propagation across payment paths would constitute a UX
> improvement, especially a link-level update mechanism upgrade deployment
> might be incentivized by routing algorithms starting to penalize routing
> hops HTLC relay latency. What is unclear is the additional risk of locking
> the funds up. If you don't receive acknowledgement the fast forward state
> has been received, you should still be able to exit with the state N-1 ?
> However, the fast-forward trade-off might sound acceptable, with time you
> might expect reliable routing hops in the core of the graph, and flappy
> spokes at the edge.
>
> > doubled delays: once we publish the latest state we can, we want to
> > be able to claim the funds immediately after to_self_delay expires;
> > however if our counterparty has signatures for a newer state than we
> > do (which will happen if it was fast forwarded), they could post that
> > state shortly before to_self_delay expires, potentially increasing
> > the total delay to 2*to_self_delay.
>
> While the 2*to_self_delay sounds like the maximum time delay in the state
> publication scenario where the cheating counterparty publishes an old state
> then the honest counterparty publishes the latest one, there could be the
> case where the cheating counterparty broadcasts a chain of old states, up to
> the mempool's `limitancestorcount`. However, this chain of eltoo transactions
> could be replaced by the honest party paying a higher feerate (assuming
> something like nversion=3). I think there might still be an attack
> trigger

Re: [Lightning-dev] Splice Pinning Prevention w/o Anchors

2022-09-26 Thread Greg Sanders
> I think this mitigation requires reliable access to the UTXO set

In this case, how about just setting nSequence to the value 1? The UTXO may
not exist, but maybe that's ok since it means it cannot pin the commitment tx.

> If this concern is correct, I'm not sure we have a current good solution,
the WIP package RBF proposal would be limited to only 2 descendants [1],
and here we might have 3 generations: the splice, a commitment, a CPFP.

I may have misunderstood the point, but if we're assuming some future V3
transaction update, you could certainly add anchors to the splice and CPFP
it from there. I think the effort here was to avoid waiting for such
an update.

Best,
Greg

On Mon, Sep 26, 2022 at 3:51 PM Antoine Riard 
wrote:

> Hi Dustin,
>
> From my understanding, splice pinning is problematic for channel funds
> safety. In the sense that once you have a splice floating in network mempools
> and your latest valid commitment transaction's pre-signed fees aren't enough
> to replace the splice, lack of confirmation might damage the claim of HTLCs.
>
> I don't know if the current splice proposal discourages pending HTLCs
> during the splice lifetime; this would at least downgrade the pinning
> severity in the splicing case to a simple liquidity timevalue loss.
>
> W.r.t, about the mitigation proposed.
>
> > For “ancestor bulking”, every `tx_add_input` proposed by a peer must be
> > included in the UTXO set. A node MUST verify the presence of a proposed
> > input before adding it to the splicing transaction.
>
> I think this mitigation requires reliable access to the UTXO set, a
> significant constraint for LN mobile clients relying on lightweight
> validation backends. While this requirement already exists in matters of
> routing to authenticate channel announcements, on the LDK-side we have
> alternative infrastructure to offer source-based routing to such a class of
> clients, without them having to care about the UTXO set [0]. I don't
> exclude there would be infrastructure in the future to access a subset of
> the UTXO set (e.g. if utreexo is deployed on the p2p network) for
> resource-constrained clients, however as of today this is still pure
> speculation and vaporware.
>
> In the meantime, mobile clients might not be able to partake in splicing
> operations with their LSPs, or without a decrease in trust-minimization
> (e.g assuming your LSP doesn't initiate malicious pinnings against you).
>
> > 1) You cannot CPFP a splice transaction. All splices must be RBF’d to be
> > fee-bumped. The interactive tx protocol already provides a protocol for
> > initiating an RBF, which we re-use for splicing.
>
> The issue with RBF, it assumes interactivity with your counterparties. As
> splicing is built on top of the interactive transaction construction
> protocol, from my understanding you could have a high order of participants
> to coordinate with, without knowledge of their signing policies (e.g. if
> they're time-constrained), therefore any re-signing operation might have
> odds of failing. Moreover, one of these participants could be malicious and
> refuse outright to sign, so the already-broadcast splicing transaction
> stays as a pin in the network mempools.
>
> If this concern is correct, I'm not sure we have a current good solution,
> the WIP package RBF proposal would be limited to only 2 descendants [1],
> and here we might have 3 generations: the splice, a commitment, a CPFP.
>
> [0] https://github.com/lightningdevkit/rapid-gossip-sync-server
> [1]
> https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
>
> Le mar. 9 août 2022 à 16:15, Dustin Dettmer  a
> écrit :
>
>> As raised by @crypto-iq and @roasbeef, splices which permit arbitrary
>> script and input inclusion are at risk of being mempool pinned. Here we
>> present a solution to this splice pinning problem.
>>
>>
>> ## Background
>>
>> Pinning can be done by building a very large “junk” transaction that
>> spends from an important pending one. There are two known pinning vectors:
>> ancestor bulking thru addition of new inputs and junk pinning via the
>> spending of outputs.
>>
>>
>> Pinning pushes transactions to the bottom of the priority list without a
>> practical way of bumping it up. It is in effect a griefing attack, but in
>> the case of lightning can risk funds loss for HTLCs that have timed out for
>> a pinned commitment transaction.
>>
>>
>> Anchor outputs were introduced to lightning to mitigate the junk pinning
>> vector; they work by adding a minimum of a `1 CSV` lock to all outputs on
>> the commitment transaction except for two “anchor” outputs, one for each
>> channel peer. (These take advantage of a 1-tx carve-out exception to enable
>> propagation of anchors despite any junk attached to the peer’s anchor).
>>
>>
>> ## Mitigation
>>
>> Splice transactions are susceptible to both junk and bulk pinning
>> attacks. Here’s how we propose mitigating these for splice.
>>
>>
>> [https://i.imgur.com/ayiO1Qt.png]
>>

Re: [Lightning-dev] Splice Pinning Prevention w/o Anchors

2022-08-10 Thread Greg Sanders
In this vector, I'm pretty sure feerate/total fee pinning is the same
issue. Even if you don't have to increase the feerate, you have to pay
100*1000=100,000 sats due to rule#3.

There's been some work trying to document the exact replacement behavior
implemented in core since it has drifted so much, and the feerate one would
be "rule#6" in the doc:
https://github.com/bitcoin/bitcoin/blob/master/doc/policy/mempool-replacements.md

Making sure inputs are confirmed completely mitigates this, so I'm guessing
it's not much of an issue.
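Working through the TxB numbers discussed in this thread (sizes in vB, feerates in sat/vB) makes both effects concrete:

```python
# Child transaction and its low-feerate junk ancestor.
txb_size, txb_feerate = 100, 1000      # TxB: 100 vB at 1000 sat/vB
anc_size, anc_feerate = 100_000, 1     # junk ancestor: 100 kvB at 1 sat/vB

txb_fee = txb_size * txb_feerate       # 100,000 sat
anc_fee = anc_size * anc_feerate       # 100,000 sat

# Miners evaluate the package rate, which the ancestor drags down to ~2 sat/vB.
package_feerate = (txb_fee + anc_fee) / (txb_size + anc_size)
assert round(package_feerate, 1) == 2.0  # stuck below a 4 sat/vB prevailing rate

# Rule#3: a replacement must pay at least TxB's absolute fee. For a
# same-size (100 vB) replacement, that means 1000 sat/vB.
prevailing = 4
replacement_feerate = txb_fee / txb_size
assert replacement_feerate / prevailing == 250  # 250x the going rate to evict TxB
```

So TxB both fails to confirm (package rate ~2 sat/vB) and costs 250x the prevailing feerate to replace, which is the pinning wrinkle; confirmed-inputs-only sidesteps it entirely.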


On Wed, Aug 10, 2022 at 2:03 PM Eugene Siegel  wrote:

> I quickly looked it up and it seems that bitcoind has a function
> PaysMoreThanConflicts which checks that the tx pays a higher feerate than
> the replaced tx. This isn't a BIP125 rule AFAICT so I think that's what
> tripped me up. That means I'm wrong about the ancestor bulking variant as a
> malicious counterparty can put a high feerate splice tx at the bottom of
> the mempool, requiring a higher feerate to replace it.
>
> On Wed, Aug 10, 2022 at 12:31 PM Greg Sanders 
> wrote:
>
>> Your reading is correct.
>>
>> My example was that if TxB, size 100vB with feerate 1000 sat/vbyte, has
>> a 100kvB ancestor paying 1 sat/vbyte, the effective package rate for those
>> two transactions will be (100*1,000 + 100,000*1)/(100,000 + 100) = ~2
>> sat/vbyte.
>>
>> This means TxB will not be picked up if the prevailing rate is > 2
>> sat/vbyte. Let's say it's a 4 sat/vbyte prevailing rate. To replace it with
>> TxB', one still has to pay to evict TxB, at roughly 1000/4=250 times the
>> normal feerate.
>>
>> Sorry if I got the math wrong here, but at least trying to get the idea
>> across.
>>
>> On Wed, Aug 10, 2022 at 12:20 PM Eugene Siegel 
>> wrote:
>>
>>> Looking it up, rule 3 is "The replacement transaction pays an absolute
>>> fee of at least the sum paid by the original transactions." but here the
>>> ancestors aren't getting replaced so I don't think the replacement has to
>>> pay for them? Or maybe your comment was just generally about how it can
>>> matter in certain cases
>>>
>>> On Wed, Aug 10, 2022 at 12:06 PM Greg Sanders 
>>> wrote:
>>>
>>>> > I think the ancestor bulking variant of pinning only matters if you
>>>> are trying to add a new descendant and can't due to the ancestor/descendant
>>>> limits.
>>>>
>>>> Not quite. It also matters if you want to RBF that transaction, since
>>>> low feerate ancestor junk puts the transaction at the bottom of the
>>>> mempool, so to speak, even if it has a high feerate itself. You are forced
>>>> to pay "full freight" to replace it via bip125 rule#3 even though it's not
>>>> going to be mined.
>>>>
>>>> (I don't know if that applies here, just noting the wrinkle)
>>>>
>>>> On Wed, Aug 10, 2022 at 11:37 AM Eugene Siegel 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I think the ancestor bulking variant of pinning only matters if you
>>>>> are trying to add a new descendant and can't due to the 
>>>>> ancestor/descendant
>>>>> limits. In this  example, since all of the outputs are locked with `1
>>>>> OP_CSV`, you can't add a descendant to the splice tx. The ancestor bulking
>>>>> also shouldn't matter for RBF since you wouldn't be replacing any of the
>>>>> ancestors, only the splice tx. I think it might matter if the new funding
>>>>> output isn't encumbered.
>>>>>
>>>>> The new funding output can't have `1 OP_CSV` unless we also change the
>>>>> commit tx format, and I'm not sure if it would work. The commit tx has the
>>>>> disable bit set in nSequence so it isn't compatible with the sequence 
>>>>> lock.
>>>>> Enabling the bit might be tricky since then the commit tx may have a
>>>>> time-based or block-based locktime based on the lower bits of the obscured
>>>>> commitment number, and it must be block-based (and non-zero) for the
>>>>> sequence lock to work. That means if it's not encumbered, pinning exists
>>>>> since an attacker can make a junk tree using the anchor output. It is
>>>>> replaceable using RBF since you have your own commit tx (with anchor) to
>>>>> broadcast.
>>>>>
>>>>> Eugene



Re: [Lightning-dev] Gossip Propagation, Anti-spam, and Set Reconciliation

2022-04-21 Thread Greg Sanders
I think I mentioned this out of band to Alex, but (b) is what Erlay
proposes for Bitcoin gossip, so it's worth studying up on.

On Thu, Apr 21, 2022 at 9:18 AM Matt Corallo 
wrote:

> Instead of trying to make sure everyone's gossip acceptance matches
> exactly, which as you point out seems like a quagmire, why not (a) do a sync
> on startup and (b) do syncs of the *new* things? This way you aren't stuck
> staring at the same channels every time you do a sync. Sure, if you're
> rejecting a large % of channel updates in total you're gonna end up hitting
> degenerate cases, but we can consider tuning the sync frequency if that
> becomes an issue.
>
> Like eclair, we don’t bother to rate limit and don’t see any issues with
> it, though we will skip relaying outbound updates if we’re saturating
> outbound connections.
>
> On Apr 14, 2022, at 17:06, Alex Myers  wrote:
>
> 
>
> Hello lightning developers,
>
>
> I’ve been investigating set reconciliation as a means to reduce bandwidth
> and redundancy of gossip message propagation. This builds on some earlier work
> from Rusty using the minisketch library [1]. The idea is that each node
> will build a sketch representing its own gossip set. Alice's node will
> encode and transmit this sketch to Bob’s node, where it will be merged with
> his own sketch, and the differences produced. These differences should
> ideally be exactly the latest missing gossip of both nodes. Due to size
> constraints, the set differences will necessarily be encoded, but Bob’s
> node will be able to identify which gossip Alice is missing, and may then
> transmit exactly those messages.
>
>
> This process is relatively straightforward, with the caveat that the sets
> must otherwise match very closely (each sketch has a maximum capacity for
> differences.) The difficulty here is that each node and lightning
> implementation may have its own rules for gossip acceptance and
> propagation. Depending on their gossip partners, not all gossip may
> propagate to the entire network.
>
>
> Core-lightning implements rate limiting for incoming channel updates and
> node announcements. The default rate limit is 1 per day, with a burst of
> 4. I analyzed my node’s gossip over a 14 day period, and found that, of
> all publicly broadcasting half-channels, 18% of them fell afoul of our
> spam-limiting rules at least once. [2]
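The 1-per-day-with-burst-of-4 limit described above is essentially a token bucket. A minimal sketch (the class and parameter names are mine, not core-lightning's actual implementation):

```python
class UpdateLimiter:
    """Token bucket: accept `rate_per_day` channel updates on average,
    with a short-term burst allowance of `burst`."""

    def __init__(self, rate_per_day=1.0, burst=4):
        self.capacity = float(burst)
        self.tokens = float(burst)          # start full: allow an initial burst
        self.rate = rate_per_day / 86_400.0  # tokens refilled per second
        self.last = 0.0

    def accept(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: treat as spam, don't relay
```

With these defaults, four rapid updates are accepted, a fifth the same day is dropped, and a full day's wait earns one more token; a flapping Tor node easily exceeds this, which matches the 18% figure above.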
>
>
> Picking several offending channel ids, and digging further, the majority
> of these appear to be flapping due to Tor or otherwise intermittent
> connections. Well connected nodes may be more susceptible to this due to more
> frequent routing attempts, and failures resulting in a returned channel
> update (which otherwise might not have been broadcast.) A slight
> relaxation of the rate limit resolves the majority of these cases.
>
>
> A smaller subset of channels broadcast frequent channel updates with minor
> adjustments to htlc_maximum_msat and fee_proportional_millionths
> parameters. These nodes appear to be power users, with many channels and
> large balances. I assume this is automated channel management at work.
>
>
> Core-Lightning has updated rate-limiting in the upcoming release to
> achieve a higher acceptance of incoming gossip; however, it seems that a
> broader discussion of rate limits may now be worthwhile. A few immediate
> ideas:
>
> - A common listing of current default rate limits across lightning
> network implementations.
>
> - Internal checks of RPC input to limit or warn of network propagation
> issues if certain rates are exceeded.
>
> - A commonly adopted rate-limit standard.
>
>
> My aim is a set reconciliation gossip type, which will use a common,
> simple heuristic to accept or reject a gossip message. (Think one channel
> update per block, or perhaps one per block_height << 5.) See my github
> for my current draft. [3] This solution allows tighter consensus, yet suffers
> from the same problem as the original anti-spam measures – it remains
> somewhat arbitrary. I would like to start a conversation regarding gossip
> propagation, channel_update and node_announcement usage, and perhaps even
> bandwidth goals for syncing gossip in the future (how about a million
> channels?) This would aid in the development of gossip set
> reconciliation, but could also benefit current node connection and
> routing reliability more generally.
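One reading of the block-height-based heuristic mentioned above is "at most one channel_update per 32-block bucket of the current height." The exact bucketing in the draft may differ (note the `<< 5` in the text); this is only a hedged sketch of the idea that acceptance becomes a deterministic function both peers can evaluate identically:

```python
def accept_update(last_update_height, new_update_height, bucket_bits=5):
    """Sketch of a deterministic gossip-acceptance heuristic: allow at most
    one channel_update per 2**bucket_bits-block window. One possible reading
    of the proposal; see the draft spec for the actual rule."""
    return (new_update_height >> bucket_bits) > (last_update_height >> bucket_bits)

assert accept_update(800_000, 800_010) is False  # same 32-block bucket
assert accept_update(800_000, 800_032) is True   # next bucket
```

Because both sides apply the same pure function of block height, their gossip sets stay closely aligned, which is exactly what keeps the set differences within a sketch's capacity.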
>
>
> Thanks,
>
> Alex
>
>
> [1] https://github.com/sipa/minisketch
>
> [2]
> https://github.com/endothermicdev/lnspammityspam/blob/main/sampleoutput.txt
>
> [3]
> https://github.com/endothermicdev/lightning-rfc/blob/gossip-minisketch/07-routing-gossip.md#set-reconciliation
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>

Re: [Lightning-dev] Scriptless Scripts with ECDSA

2018-05-08 Thread Greg Sanders
From what I understand talking to folks, the linear properties of these
signature tricks are maintained under a number of post-quantum schemes.

On Tue, May 8, 2018 at 8:44 AM, Benjamin Mord  wrote:

>
> If I'm not mistaken, the scriptless scripts concept (as currently
> formulated) falls to Shor's algorithm, and at present there is no
> alternative implementation of the concept to fall back on. Correct? Lest we
> build a house of cards, I'd strongly urge everyone to not depend on
> functional concepts whose underlying cryptographic primitives cannot be
> swapped in an emergency.
>
> Sure, we use ECDSA for example (which is also vulnerable to Shor's
> algorithm), but in contrast to scriptless scripts we have a variety of
> backup primitives at our disposal that fulfill the same functional
> objective.
>
> If scriptless scripts are found possible under lattice-based cryptography
> for example, that would be something I suppose. The functional concept of
> scriptless scripts is indeed very awesome - we just need to add some
> cryptographic conservatism before we build on it.
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] lightning operation during / following a chain fork (e.g. BIP 50)

2018-01-30 Thread Greg Sanders
Not sure there is much to be done in simple consensus failures. Agreed it's
a bit floaty unless there's an actual proposal.

On Tue, Jan 30, 2018 at 11:42 AM, Benjamin Mord  wrote:

>
> Ugh, correction - BOLTs as presently written explicitly require segwit
> (not segwit2x! need more coffee...). Sorry for the 'typo'
>
> On Tue, Jan 30, 2018 at 11:41 AM, Benjamin Mord  wrote:
>
>>
>> Greg, I think you are confusing two different topics: adversarial forks,
>> versus segwit as fix to transaction malleability.
>>
>> If you remove segwit, i.e. if you reintroduce the txid malleability bug,
>> then lightning becomes unsafe - any nodes which attempt to follow such a
>> fork would suffer. Incentives strongly motivate maintenance of consensus,
>> so that scenario (I think?) is automatically covered and of no concern. (So
>> actually, BCH is presently of no concern.) BOLTs as presently written
>> explicitly require segwit2x anyhow, and for this reason.
>>
>> I understand an "adversarial fork" is one which lacks replay protection.
>> This is very much something worth addressing, as that is the case by
>> default with a BIP 50-style accidental fork, and also appeared likely with
>> the failed (and poorly named) "segwit2x". But I'm thinking out loud, will
>> stop spamming people on the list unless / until I have a usefully concrete
>> solution to offer. (Or until someone else comes up with something.)
>>
>>
>> On Tue, Jan 30, 2018 at 11:31 AM, Greg Sanders 
>> wrote:
>>
>>> "Adversarial" forks that rip out segwit, or maliciously do not change
>>> their signature algorithm, are basically impossible to defend against. May
>>> be best to focus energies on forks that use strong replay protection in the
>>> form of FORKID.
>>>
>>> On Tue, Jan 30, 2018 at 11:26 AM, Benjamin Mord  wrote:
>>>
>>>>
>>>> Thank you, ZmnSCPxj. BCH is a warmup question for several reasons, I
>>>> believe they don't even support segwit (!) so lightning would be unsafe due
>>>> to their txid mutability bug. I agree altcoin support should be lower
>>>> priority, whenever it is obvious which is the altcoin (as indeed, is
>>>> abundantly clear wrt BTC vs BCH). But it might one day become unclear.
>>>>
>>>> I remain concerned about safety despite BIP 50 scenarios, forks with
>>>> more legitimate contention than so far seen, and also system stability in
>>>> face of increasingly unsophisticated / gullible user base. As a
>>>> cryptocurrency is little more than a trustless consensus mechanism, it
>>>> seems circular to assume consensus in its design, especially if there are
>>>> entities financially motivated to fracture that consensus. Resilience
>>>> against forks would seem core to safety. If I think of a concrete solution,
>>>> I'll send it first to this list for discussion - as I believe that is the
>>>> preferred process?
>>>>
>>>> Thanks,
>>>> Ben
>>>>
>>>>
>>>> On Tue, Jan 30, 2018 at 1:16 AM, ZmnSCPxj 
>>>> wrote:
>>>>
>>>>> Good morning Ben,
>>>>>
>>>>> Hi,
>>>>>
>>>>> One topic I can't seem to find in the BOLTs is how lightning nodes
>>>>> maintain consensus during or after a fork of the underlying blockchain(s).
>>>>> For example, channel_announcement messages use a chain_hash, defined as
>>>>> hash of underlying block chain's genesis block, to identify the currency 
>>>>> in
>>>>> use. Today, one might ask which hash identifies BTC as opposed to BCH?
>>>>>
>>>>>
>>>>> I believe the rough consensus among most Lightning developers is that
>>>>> BTC is "the real BTC" and gets the Satoshi genesis hash, while BCH is an
>>>>> altcoin that was forked off BTC and gets as hash the branching-off point.
>>>>> You could try to convince people developing and using Lightning software 
>>>>> to
>>>>> do the reverse, but I think it is unlikely that many people would agree to
>>>>> that.
>>>>>
>>>>>
>>>>> A more difficult question arises in how existing channels handle
>>>>> intentional forks which arise after funding of a payment channel.
>>>>>
>>>>> An even more difficult question arises in the handling of

Re: [Lightning-dev] lightning operation during / following a chain fork (e.g. BIP 50)

2018-01-30 Thread Greg Sanders
"Adversarial" forks that rip out segwit, or maliciously do not change their
signature algorithm, are basically impossible to defend against. May be
best to focus energies on forks that use strong replay protection in the
form of FORKID.

On Tue, Jan 30, 2018 at 11:26 AM, Benjamin Mord  wrote:

>
> Thank you, ZmnSCPxj. BCH is a warmup question for several reasons, I
> believe they don't even support segwit (!) so lightning would be unsafe due
> to their txid mutability bug. I agree altcoin support should be lower
> priority, whenever it is obvious which is the altcoin (as indeed, is
> abundantly clear wrt BTC vs BCH). But it might one day become unclear.
>
> I remain concerned about safety despite BIP 50 scenarios, forks with more
> legitimate contention than so far seen, and also system stability in face
> of increasingly unsophisticated / gullible user base. As a cryptocurrency
> is little more than a trustless consensus mechanism, it seems circular to
> assume consensus in its design, especially if there are entities
> financially motivated to fracture that consensus. Resilience against forks
> would seem core to safety. If I think of a concrete solution, I'll send it
> first to this list for discussion - as I believe that is the preferred
> process?
>
> Thanks,
> Ben
>
>
> On Tue, Jan 30, 2018 at 1:16 AM, ZmnSCPxj  wrote:
>
>> Good morning Ben,
>>
>> Hi,
>>
>> One topic I can't seem to find in the BOLTs is how lightning nodes
>> maintain consensus during or after a fork of the underlying blockchain(s).
>> For example, channel_announcement messages use a chain_hash, defined as
>> hash of underlying block chain's genesis block, to identify the currency in
>> use. Today, one might ask which hash identifies BTC as opposed to BCH?
>>
>>
>> I believe the rough consensus among most Lightning developers is that BTC
>> is "the real BTC" and gets the Satoshi genesis hash, while BCH is an
>> altcoin that was forked off BTC and gets as hash the branching-off point.
>> You could try to convince people developing and using Lightning software to
>> do the reverse, but I think it is unlikely that many people would agree to
>> that.
>>
>>
>> A more difficult question arises in how existing channels handle
>> intentional forks which arise after funding of a payment channel.
>>
>> An even more difficult question arises in the handling of unintentional
>> forks, as documented for example in BIP 50.
>>
>> Have these scenarios been analyzed / designed yet, or does that work
>> remain?
>>
>>
>> The work remains.  For the most part, the priority is to get
>> implementations to a state, where we can safely deploy on Bitcoin Mainnet.
>> Then optimize further by adding RBF and multi-channel funding, then
>> integrate Burchert-Decker-Wattenhofer channel factories, splicing, and so
>> on.  Greater support for altcoins can be done later.
>>
>> For forked altcoins, short channel IDs contain the block height at which
>> the funding transaction confirmed.  This might be used to judge if a
>> channel contains forked coins or not.
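The height-based judgement described above follows directly from the BOLT 7 short_channel_id layout (3 bytes block height, 3 bytes transaction index, 2 bytes output index); a minimal sketch:

```python
def scid_block_height(short_channel_id):
    """Extract the funding-confirmation block height from a BOLT 7
    short_channel_id (3B height | 3B tx index | 2B output index)."""
    return short_channel_id >> 40

# Hypothetical channel funded at height 500_000, tx index 42, output 1:
scid = (500_000 << 40) | (42 << 16) | 1
assert scid_block_height(scid) == 500_000

# A channel funded before a fork's branching-off height existed on both
# chains, so it may contain forked coins; one funded after did not.
fork_height = 478_558
assert scid_block_height(scid) > fork_height  # funded post-fork
```

Comparing this height against a fork's branching-off point is the judgement suggested: it tells you whether the funding output predates the split, though not on which branch a post-fork funding confirmed.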
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Thanks!
>> Ben
>>
>>
>>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev