Re: [Lightning-dev] Unjamming lightning (new research paper)

2022-11-07 Thread Clara Shikhelman
Hi Antoine,

Thank you for the detailed response!



> On the framework for mitigation evaluation, there are a few other
> dimensions we have considered in the past with Gleb for our research that
> could be relevant to integrate. One is "centralization": the solution
> shouldn't significantly centralize the LN ecosystem around any of its
> actors: LSPs, Lightning Wallet Providers (e.g. watchtower or
> Greenlight-style infra), or routing nodes, where centralization could be
> defined in terms of "market entry cost" for new entrants. "Protocol
> evolvability" could be another one, as we don't want a solution rendering
> the design and operation of things like offline-receive, trampoline,
> negative fees, etc. harder. "Ecosystem impact" was one more category we
> thought about, e.g. introducing a mass mempool congestion vector (one of
> the versions of Stake Certificates did it...).
>

These are indeed important dimensions. I think that our solution gets “good
marks” in all of them, but this should definitely be stated explicitly in
the future.


> For the dimensions of your evaluation framework, "effectiveness" sounds
> to be understood as attacker-centric. However, a few times later in the
> paper, the viewpoint of the routing nodes sounds to be adopted
> ("compensating them for the financial damage of jamming", "breakeven point
> n"). If this distinction is real, the first approach would be more about
> searching for a game-theoretic equilibrium where much damage is inflicted
> on the attacker. The second would be more about ensuring compensation for
> the lost income of the routing nodes. I believe the first approach is
> limited, as the attacker's resources could overwhelm the victim's economic
> sustainability, and rationality might be uneconomic. Maybe those two
> approaches could be combined, in the sense that lost-income compensation
> should only be borne by "identified" attackers; however, this doesn't
> sound like the direction taken by unconditional fees.
>

The effectiveness evaluation does have a few facets. From the attacker's
viewpoint, it might be that mitigation makes the attack impossible,
difficult, or expensive. From the victim's point of view, we can talk about
protection, compensation, or any combination of the two. Of course, the
best outcome is when the attack is impossible. As this is oftentimes not
something we can do, we have to choose one of the other outcomes.


> About "incentive compatibility", one element valuable to integrate is how
> much the existence of scoring algorithms allows the routing nodes to adopt
> "honest behaviors" and high-level of service availability. I don't know if
> a jamming solution can be devised without considerations of the inner
> workings of routing/scoring algorithms, and so far every LN implementation
> has its own cooking.
>

We focused on the most basic incentive of not failing transactions that
could have been forwarded. I'll be happy to discuss other potential
pitfalls if you have something in mind.


>
> On the structure of the monetary strategy, I think there could be a
> solution to implement a proof-of-burn, where the fee is captured in a
> commitment output sending to a provably unspendable output. Theoretically,
> it's interesting as "unburning" the fee is dependent on counterparty
> cooperation, the counterparty potentially bearing the jamming risk.
> A proof-of-work "fee" has been discussed in the past by LN devs, however
> it was quickly dismissed, as it would give an edge to an attacker able
> to gather ASIC farms while completely draining the batteries of LN mobile
> clients. It has also been widely discussed to make the fees conditional on
> either the outgoing HTLC CLTV value or the effective duration. For
> effective duration, an upfront fee shard could be paid after each clock
> tick (either epoch or block-based).
>

The main problem with proof of burn or PoW is that it does not compensate
the victim; we state this explicitly in the newer version of the paper.
Thanks for this comment, we will add further details on previous
discussions.
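For concreteness, the block-tick upfront fee Antoine describes could be sketched as follows. This is only an illustrative model, not from any spec or the paper; the names and parameters (`shard_msat`, `max_shards`) are hypothetical:

```python
# Hypothetical sketch of a duration-based upfront fee: one fee shard
# accrues per block the HTLC remains unresolved, so the total upfront
# fee scales with effective hold time rather than being flat.

def accrued_upfront_fee(add_height: int, current_height: int,
                        shard_msat: int, max_shards: int) -> int:
    """Fee owed so far: one shard per elapsed block, capped at max_shards."""
    elapsed = max(0, current_height - add_height)
    return min(elapsed, max_shards) * shard_msat

# An HTLC resolved within one block pays a single shard...
fast = accrued_upfront_fee(add_height=800_000, current_height=800_001,
                           shard_msat=50, max_shards=144)   # 50 msat
# ...while a jamming HTLC held for a full day pays the cap.
slow = accrued_upfront_fee(add_height=800_000, current_height=800_144,
                           shard_msat=50, max_shards=144)   # 7200 msat
```

Under such a scheme a fast-resolving honest payment pays almost nothing, while a slow jam pays in proportion to the capital it locks up.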


> On the structure of the reputation strategy, I think one interesting
> missing point is the blurring of the local/global reputation definition in
> Lightning, at least as traditionally defined in the P2P literature.
> Reputation could be enforced on the HTLC sender, as we aimed to do with
> Stake Certificates. The upstream peer's reputation is not accounted for at
> all. I think it's an open question whether the reputation score of a
> routing node could be exported across nodes (a behavior one could expect
> if you assume a web-of-trust, which the current LN network topology is
> heavily based on). On the statement that attaching reputation to payments
> contradicts LN's privacy-focused goal, I would say it's a weak one given
> the state of cryptographic tools like blind signatures, known since the
> 80s.
>

I think that further research on assigning the blame to the sender is of
interest, but as we 

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-11-07 Thread Johan Torås Halseth
Hi Laolu,

Yeah, that is definitely the main downside, as Ruben also mentioned:
tokens are "burned" if they get sent to an already spent UTXO, and
there is no way to block those transfers.

And I do agree with your concern about losing the blockchain as the
main synchronization point, that seems indeed to be a prerequisite for
making the scheme safe in terms of re-orgs and asynchronicity.

I do think the scheme itself is sound though (maybe not off-chain, see
below): it prevents double spending, and as long as the clients adhere
to the "rule" of not sending to a spent UTXO you'll be fine (if not,
your tokens will be burned, the same way as if you don't satisfy the
Taro script when spending).
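The client-side rule above could be sketched roughly like this (the `utxo_set` interface is hypothetical, standing in for whatever chain view the client has):

```python
# Hedged sketch of the sender-side check: before committing a "teleport"
# transfer, verify the destination UTXO is still unspent, since tokens
# sent to an already-spent UTXO are unrecoverable (burned).

def safe_to_teleport(utxo_set: set, dest_txid: str, dest_vout: int) -> bool:
    """Return True only if the destination outpoint is currently unspent."""
    return (dest_txid, dest_vout) in utxo_set

utxos = {("aa" * 32, 0), ("bb" * 32, 1)}
safe_to_teleport(utxos, "aa" * 32, 0)   # unspent: ok to send
safe_to_teleport(utxos, "cc" * 32, 0)   # spent/nonexistent: would burn
```

Note that this check is inherently racy: the UTXO can be spent between the check and the transfer confirming, which is exactly the ordering problem discussed below.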

Thinking more about the examples you gave, I think you are right it
won't easily be compatible with LN channels though:
If you want to refill an existing channel with tokens, you need the
channel counterparties to start signing new commitments that include
spending the newly sent tokens. A problem arises however, if the
channel is force-closed with a pre-existing commitment from before the
token transfer took place. Since this commitment will be spending the
funding UTXO, but not the new tokens, the tokens will be burned. And
that seems to be harder to deal with (Eltoo style channels could be an
avenue to explore, if one could override the broadcasted commitment).

Tl;dr: I think you're right, the scheme is not compatible with LN.

- Johan


On Sat, Nov 5, 2022 at 1:36 AM Olaoluwa Osuntokun  wrote:
>
> Hi Johan,
>
> I haven't really been able to find a precise technical explanation of the
> "utxo teleport" scheme, but after thinking about your example use cases a
> bit, I don't think the scheme is actually sound. Consider that the scheme
> attempts to transmit "ownership" to a UTXO. However, by the time that
> transaction hits the chain, the UTXO may no longer exist. At that point,
> what happens to the asset? Is it burned? Can you retry? Does it go back
> to the sender?
>
> As a concrete example, imagine I have a channel open, and give you an
> address to "teleport" some additional assets to it. You take that addr, then
> make a transaction to commit to the transfer. However, the block before you
> commit to the transfer, my channel closes for w/e reason. As a result, when
> the transaction committing to the UTXO (blinded or not), hits the chain, the
> UTXO no longer exists. Alternatively, imagine things happen in the
> expected order, but then a re-org occurs, and my channel close is mined in a
> block before the transfer. Ultimately, as a normal Bitcoin transaction isn't
> used as a serialization point, the scheme seems to lack a necessary total
> ordering to ensure safety.
>
> If we look at Taro's state transition model in contrast, everything is fully
> bound to a single synchronization point: a normal Bitcoin transaction with
> inputs consumed and outputs created. All transfers, just like Bitcoin
> transactions, end up consuming assets from the set of inputs, and
> re-creating them with a different distribution with the set of outputs. As a
> result, Taro transfers inherit the same re-org safety traits as regular
> Bitcoin transactions. It also isn't possible to send to something that won't
> ultimately exist, as sends create new outputs just like Bitcoin
> transactions.
>
> Taro's state transition model also means anything you can do today with
> Bitcoin/LN also applies. As an example, it would be possible for you to
> withdraw from your exchange into a Loop In address (an on-chain to
> off-chain swap), and have everything work as expected, with you topping
> off your channel. Stuff like splicing and other interactive transaction
> construction schemes (atomic swaps, MIMO swaps, on-chain auctions, etc.)
> also just work.
>
> Ignoring the ordering issue I mentioned above, I don't think this is a great
> model for anchoring assets in channels either. With Taro, when you make the
> channel, you know how many assets are committed since they're all committed
> to in the funding output when the channel is created. However, let's say we
> do teleporting instead: at which point would we recognize the new asset
> "deposits"? What if we close before a pending deposits confirms, how can one
> regain those funds? Once again you lose the serialization of events/actions
> the blockchain provides. I think you'd also run into similar issues when you
> start to think about how these would even be advertised on a hypothetical
> gossip network.
>
> I think one other drawback of the teleport model, iiuc, is that it either
> requires an OP_RETURN or additional out-of-band synchronization to complete
> the transfer. Since it needs to commit to w/e hash description of the
> teleport, it either needs to use an OP_RETURN (so the receiver can see the
> on chain action), or the sender needs to contact the receiver to initiate
> the resolution of the transfer (details committed to in a change addr or
> w/e).
>
> With 

Re: [Lightning-dev] A pragmatic, unsatisfying work-around for anchor outputs fee-bumping reserve requirements

2022-11-07 Thread Bastien TEINTURIER
Hey laolu,

Thanks for your feedback.

> However, I can imagine that if an implementation doesn't have its own
> wallet, then things can be a bit more difficult, as stuff like the
> bitcoind wallet may not expose the APIs one needs to do things like CPFP
> properly.

I don't think this is only an issue of wallet management though. I would
bet that most node operators today don't have a good enough utxo reserve
to actually provide safety against a malicious attacker once mempool
backlog starts filling up. Eclair isn't really limited by bitcoind APIs
here, but rather by BIP 125 rules, which create the same issues for
lnd's internal wallet.

The reason we're not seeing issues today is only because there is no
malicious behavior on the network, but when that happens, there will
be issues (remember, the turkey thinks everything is fine and really
likes that guy who comes every day to feed it, until Thanksgiving comes).

The issue I'm most concerned with is an attacker filling your outgoing
HTLC slots with HTLCs that time out at blocks N+1, N+2, ..., N+483. It's
not trivial to batch those HTLCs because they all have different cltv
expiries. If you're low on utxo count, you only have two choices:

1. Re-create a batched transaction at every block to include more utxos
2. Create trees of unconfirmed transactions where the change output of
one HTLC tx is the funding input of another HTLC tx

With option 1, you'll need to increase the overall feerate at every
block to match RBF rules, which means you'll end up greatly overpaying
the fees once you've RBF-ed a hundred times or more...
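To put rough numbers on that: under BIP 125 rule 4, each replacement must pay at least the old absolute fee plus the incremental relay feerate (1 sat/vB by default in bitcoind) times the replacement's vsize, so rebroadcasting every block ratchets the fee up linearly. The transaction size below is a placeholder, not a real HTLC-batch size:

```python
# Illustrative lower bound on fee growth when RBF-ing the same batched
# transaction once per block. BIP 125 rule 4: the replacement's fee must
# exceed the replaced fee by at least incremental_relay_feerate * vsize.

INCREMENTAL_RELAY_SAT_PER_VB = 1  # bitcoind default

def fee_after_rbf_rounds(initial_fee: int, tx_vsize: int, rounds: int) -> int:
    """Minimum absolute fee after `rounds` successive replacements."""
    return initial_fee + rounds * INCREMENTAL_RELAY_SAT_PER_VB * tx_vsize

# A hypothetical 2,000 vB batched HTLC-timeout tx starting at 2 sat/vB,
# rebumped for 100 consecutive blocks:
start = 2 * 2000                                      # 4,000 sats
after_100 = fee_after_rbf_rounds(start, 2000, 100)    # 204,000 sats
```

Even with these toy numbers, a hundred rebumps multiplies the fee by roughly 50x, regardless of what the market feerate actually is.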

With option 2, it's even worse, because HTLCs that time out earlier end
up closer to the root of the tree. It will be prohibitively costly to RBF
them because BIP 125 will force you to pay a huge fee for replacing a
high number of unconfirmed descendants. Your only option is to CPFP at
one of the leaves of the tree, which is going to be very costly as well
because it requires setting a high feerate to the *whole* tree, even
though it could contain many HTLCs that still have time available before
the cltv-expiry-delta ends.

The attacker may also regularly claim HTLCs one-by-one by revealing the
preimage just to invalidate your whole batched transaction or tree of
unconfirmed transactions, requiring you to rebuild it at every block and
wait one more block before getting transactions confirmed (until you
reach the point where the attacker can also claim all HTLC outputs and
your only solution is a scorched-earth strategy).

Both options end up with the node operator greatly overpaying fees when
fighting a relatively smart attacker (which means the node operator is
actually losing money).

Also, this is a fairly complex bit of logic to implement, which depends
on the bitcoin node's relay policies and is really hard to test against
enough malicious scenarios. It's thus very likely to have subtle bugs.

This class of attacks is why eclair's defaults are very conservative:

* 30 accepted HTLCs instead of 483
* 144 blocks of cltv-expiry-delta

I'm afraid this is the only way to tip the odds in favor of the
honest node operators with the current protocol. It's very frustrating
though!

As for Taproot compatibility, this is perfectly doable: if we're able
to share one nonce, we can just share multiple nonces every time. If
we're able to watch one transaction, we can watch multiple. It's more
costly, but it's not a fundamental limitation.

I'm more concerned about the limitations of this proposal in tackling
dust-HTLC kinds of attacks, and the complexity it introduces, as was
raised in the comments of the spec PR [1]. Granted, this isn't a very
*good* proposal, but it's an attempt at raising awareness that we
probably need to do *something* to get slightly better funds safety.

Thanks,
Bastien

[1] https://github.com/lightning/bolts/pull/1036

On Sat, Nov 5, 2022 at 1:52 AM Olaoluwa Osuntokun  wrote:
>
> Hi tbast,
>
> FWIW, we haven't had _too_ many issues with the additional constraints
> anchor channels bring. Initially users had to deal w/ the UTXO reserve,
> but then sort of accepted the trade-off for the safety of actually being
> able to dynamically bump the fee on your commitment transaction and HTLCs.
> We're also able to re-target the fee level of second-level spends on the
> fly, and even aggregate them into distinct fee buckets.
>
> However, I can imagine that if an implementation doesn't have its own
> wallet, then things can be a bit more difficult, as stuff like the
> bitcoind wallet may not expose the APIs one needs to do things like CPFP
> properly. lnd has its own wallet (btcwallet), which is what has allowed
> us to adopt default P2TR addresses everywhere so quickly (tho ofc we
> inherit additional maintenance costs).
>
> > Correctly managing this fee-bumping reserve involves a lot of complex
> > decisions and dynamic risk assessment, because in worst-case scenarios,
> > a node may need to fee-bump thousands of HTLC transactions in a