Re: [Lightning-dev] A pragmatic, unsatisfying work-around for anchor outputs fee-bumping reserve requirements

2022-11-04 Thread Olaoluwa Osuntokun
Hi tbast,

FWIW, we haven't had _too_ many issues with the additional constraints
anchor channels bring. Initially users had to deal w/ the UTXO reserve, but
then sort of accepted the trade-off for the safety of actually being able
to dynamically bump the fee on your commitment transaction and HTLCs. We're
also able to re-target the fee level of second level spends on the fly, and
even aggregate them into distinct fee buckets.

However, I can imagine that if an implementation doesn't have its own
wallet, then things can be a bit more difficult, as stuff like the bitcoind
wallet may not expose the APIs one needs to do things like CPFP properly.
lnd has its own wallet (btcwallet), which is what has allowed us to adopt
default P2TR addresses everywhere so quickly (tho ofc we inherit additional
maintenance costs).

> Correctly managing this fee-bumping reserve involves a lot of complex
> decisions and dynamic risk assessment, because in worst-case scenarios, a
> node may need to fee-bump thousands of HTLC transactions in a short period
> of time.

IMO these new considerations aren't any worse than needing to predict the
future fee schedule of the chain to ensure that you can force close in a
timely manner when you need to. Re fee bumping thousands of HTLCs: anchor
lets them all be batched in the same transaction, which reduces fees and
also the worst-case on-chain force close footprint.
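
To make that batching concrete, here's a minimal Go sketch of the kind of
deadline-based fee bucketing described above (the bucket names and
thresholds are illustrative assumptions, not anything any implementation
actually ships):

    package sweep

    // bucketByDeadline groups second-level HTLC spends (keyed by outpoint)
    // by how many blocks remain until their CLTV deadline. Each bucket can
    // then be swept in one batched transaction at a shared feerate, which
    // is what shrinks both fees and the on-chain footprint.
    func bucketByDeadline(expiryHeights map[string]uint32, tip uint32) map[string][]string {
        buckets := make(map[string][]string)
        for outpoint, expiry := range expiryHeights {
            blocksLeft := int64(expiry) - int64(tip)
            switch {
            case blocksLeft <= 10:
                // Must confirm soon: most aggressive feerate bucket.
                buckets["urgent"] = append(buckets["urgent"], outpoint)
            case blocksLeft <= 100:
                buckets["normal"] = append(buckets["normal"], outpoint)
            default:
                buckets["lax"] = append(buckets["lax"], outpoint)
            }
        }
        return buckets
    }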

> each node can simply sign multiple versions of the HTLC transactions at
> various feerates

I'm not sure this can be mapped super cleanly to taproot channels that use
musig2. Today in the spec draft/impl, both sides maintain a pair of nonces
(one for each commitment transaction). If they need to sign N different
versions, then they also need to exchange N nonces, both during the initial
funding process, and also each time a new commitment transaction is created.
Mo signatures means mo transaction latency. Also how would retransmitting be
handled? By sending distinct valid signatures for a given fee rate, you're
effectively creating _even more_ commitments one needs to watch to be able
to play once they potentially hit the chain.
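
As a rough sketch of that nonce bookkeeping blow-up (the constants are made
up for illustration, not spec values):

    package nonces

    // noncePairsPerState counts the musig2 nonce pairs each side must
    // track and exchange per channel state. Today that's one pair per
    // commitment transaction (so 2); pre-signing N feerate versions
    // multiplies that by N, and every pair must be re-exchanged at funding
    // time and again for each new commitment transaction.
    func noncePairsPerState(commitments, feerateVersions int) int {
        return commitments * feerateVersions
    }

    // e.g. noncePairsPerState(2, 1) == 2 today, vs
    //      noncePairsPerState(2, 5) == 10 with five pre-signed feerates.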

Ultimately, I'm not sure why implementations that have already rolled out
anchors by default, and have a satisfactory policy for ensuring fee bumping
UTXOs are available at all times would implement this. It's just yet another
option defined in the spec, and prescribes a more restrictive solution to
what's already possible: being able to dynamically fee bump commitment
transactions, and aggregate second level spends.

-- Laolu

On Thu, Oct 27, 2022 at 6:51 AM Bastien TEINTURIER  wrote:

> Good morning list,
>
> The lightning network transaction format was updated to leverage CPFP
> carve-out and allow nodes to set fees at broadcast time, using a feature
> called anchor outputs [1].
>
> While desirable, this change brought a whole new set of challenges, by
> requiring nodes to maintain a reserve of available utxos for fee-bumping.
> Correctly managing this fee-bumping reserve involves a lot of complex
> decisions and dynamic risk assessment, because in worst-case scenarios,
> a node may need to fee-bump thousands of HTLC transactions in a short
> period of time.
>
> This is especially frustrating because HTLC transactions should not need
> external inputs, as the whole value of the HTLC is already provided in
> its input, which means we could in theory "simply" decrease the amount of
> the corresponding output to set the fees to any desired value. However,
> we can't do this safely because it doesn't work well with the revocation
> mechanism, unless we find fancy new sighash flags to add to bitcoin.
> See [2] for a longer rant on this issue.
>
> A very low tech and unsatisfying solution exists, which is what I'm
> proposing today: each node can simply sign multiple versions of the
> HTLC transactions at various feerates, and at broadcast time if you're
> lucky you'll have a pre-signed transaction that approximately matches
> the feerate you want, so you don't need to add inputs from your fee
> bumping reserve. This reduces the requirements on your on-chain wallet
> and simplifies transaction management logic. I believe that it's a
> pragmatic approach, even though not very elegant, to increase funds
> safety for existing node operators and wallets. I opened a spec PR
> that is currently chasing concept ACKs before I refine it [3].
>
> Please let me know what you think, and if this is something that you
> would like your implementation to provide.
>
> Thanks,
> Bastien
>
> [1] https://github.com/lightning/bolts/pull/688
> [2] https://github.com/lightning/bolts/issues/845
> [3] https://github.com/lightning/bolts/pull/1036

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-11-04 Thread Olaoluwa Osuntokun
Hi Johan,

I haven't really been able to find a precise technical explanation of the
"utxo teleport" scheme, but after thinking about your example use cases a
bit, I don't think the scheme is actually sound. Consider that the scheme
attempts to target transmitting "ownership" to a UTXO. However, by the time
that transaction hits the chain, the UTXO may no longer exist. At that
point, what happens to the asset? Is it burned? Can you retry it again? Does
it go back to the sender?

As a concrete example, imagine I have a channel open, and give you an
address to "teleport" some additional assets to it. You take that addr, then
make a transaction to commit to the transfer. However, the block before you
commit to the transfer, my channel closes for w/e reason. As a result, when
the transaction committing to the UTXO (blinded or not), hits the chain, the
UTXO no longer exists. Alternatively, imagine things happen in the
expected order, but then a re-org occurs, and my channel close is mined in a
block before the transfer. Ultimately, as a normal Bitcoin transaction isn't
used as a serialization point, the scheme seems to lack a necessary total
ordering to ensure safety.

If we look at Taro's state transition model in contrast, everything is fully
bound to a single synchronization point: a normal Bitcoin transaction with
inputs consumed and outputs created. All transfers, just like Bitcoin
transactions, end up consuming assets from the set of inputs, and
re-creating them with a different distribution with the set of outputs. As a
result, Taro transfers inherit the same re-org safety traits as regular
Bitcoin transactions. It also isn't possible to send to something that won't
ultimately exist, as sends create new outputs just like Bitcoin
transactions.
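
As a sketch of that conservation rule (the types here are hypothetical
simplifications, not the actual Taro structures):

    package taro

    import "fmt"

    // assetLeaf is a simplified stand-in for an asset input or output
    // committed to in a taproot output: an asset ID plus an amount.
    type assetLeaf struct {
        assetID string
        amount  int64
    }

    // checkTransition enforces the rule described above: for every asset
    // ID, the amount consumed by the anchor transaction's inputs must be
    // fully re-created across its outputs. Because the whole transition
    // rides on one Bitcoin transaction, it confirms (or re-orgs)
    // atomically with that transaction.
    func checkTransition(ins, outs []assetLeaf) error {
        deltas := make(map[string]int64)
        for _, in := range ins {
            deltas[in.assetID] += in.amount
        }
        for _, out := range outs {
            deltas[out.assetID] -= out.amount
        }
        for id, delta := range deltas {
            if delta != 0 {
                return fmt.Errorf("asset %s unbalanced by %d units", id, delta)
            }
        }
        return nil
    }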

Taro's state transition model also means anything you can do today with
Bitcoin/LN also applies. As an example, it would be possible for you to
withdraw from your exchange into a Loop In address (on chain to off chain
swap), and have everything work as expected, with you topping off your
channel. Stuff like splicing, and other interactive transaction construction
schemes (atomic swaps, MIMO swaps, on chain auctions, etc) also just work.

Ignoring the ordering issue I mentioned above, I don't think this is a great
model for anchoring assets in channels either. With Taro, when you make the
channel, you know how many assets are committed since they're all committed
to in the funding output when the channel is created. However, let's say we
do teleporting instead: at which point would we recognize the new asset
"deposits"? What if we close before a pending deposits confirms, how can one
regain those funds? Once again you lose the serialization of events/actions
the blockchain provides. I think you'd also run into similar issues when you
start to think about how these would even be advertised on a hypothetical
gossip network.

I think one other drawback of the teleport model (iiuc) is that it either
requires an OP_RETURN, or additional out-of-band synchronization to complete
the transfer. Since it needs to commit to w/e hash description of the
teleport, it either needs to use an OP_RETURN (so the receiver can see the
on chain action), or the sender needs to contact the receiver to initiate
the resolution of the transfer (details committed to in a change addr or
w/e).

With Taro, sending to an address creates an on-chain taproot output just
like sending to a P2TR address. The creation of the output directly creates
the new asset anchor/output as well, which allows the receiver to look for
that address on chain just like a normal on chain transaction. To 3rd party
observers, it just looks like a normal P2TR transfer. In order to finalize
the receipt of the asset, the receiver needs to obtain the relevant
provenance proofs, which can be obtained from a multi-verse gRPC/HTTP
service keyed by the input outpoint and output index. In short, the send
process is fully async, with the sender and receiver using the blockchain
itself as a synchronization point like a normal Bitcoin wallet.

-- Laolu


Re: [Lightning-dev] Proposal: Add support for proxying p2p connections to/from LND

2022-09-01 Thread Olaoluwa Osuntokun
Hi Alex,

This is a super cool project! I've shared some thoughts here in a comment on
the draft PR:
https://github.com/lightningnetwork/lnd/pull/6843#issuecomment-1234933319

Also I cc'd the lnd mailing list on this reply, perhaps we can move the
discussion over there (or in the issue) since this is more of an lnd
specific thing. In the future, the lnd mailing list is also probably a
better place for lnd architecture specific proposals/discussions.

-- Laolu


On Thu, Sep 1, 2022 at 10:56 AM Alex Akselrod via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> At NYDIG, we're considering ways to harden large LND deployments. Joost
> and I discussed that currently, when external untrusted peers make inbound
> connections, LND must verify the identity of the peer during the noise
> handshake, and it must do this before enforcing any potential key-based
> allow lists. This is done in the same process as the node's other critical
> tasks, such as monitoring the chain.
>
> To reduce the attack area of the main node process, we'd like to propose a
> means to optionally separate the peer communication into a separate
> process: something like CLN's connectd, running separately, and the
> connections would be multiplexed over a single network connection initiated
> from the node to the proxy. The core of our current idea is demonstrated in
> a draft PR: https://github.com/lightningnetwork/lnd/pull/6843
>
> I'd love some early feedback on the general direction of this. If this
> would be interesting, I'll build it out into a fully working feature.
>
> Thanks,
>
> Alex Akselrod


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-01 Thread Olaoluwa Osuntokun
Hi Matt,

> Ultimately, paying suffers from the standard PoW-for-spam issue - you
> cannot assign a reasonable cost that an attacker cares about without
> impacting the system's usability due to said cost.

Applying this statement to a related area, would you also agree that
proposals to introduce pre-payments for HTLCs to mitigate jamming attacks
are similarly a dead end? Personally, this has been my opinion for some
time now. Which
is why I advocate for the forwarding pass approach (gracefully degrade to
stratified topology), which in theory would allow the major flows of the
network to continue in the face of disruption.

-- Laolu


Re: [Lightning-dev] Onion messages rate-limiting

2022-07-01 Thread Olaoluwa Osuntokun
Hi Val,

> Another huge win of backpressure is that it only needs to happen in DoS
> situations, meaning it doesn’t have to impact users in the normal case.

I agree, I think the same would apply to prepayments as well (0 or 1 msat in
calm times). My main concern with relying _only_ on backpressure rate
limiting is that we'd end up w/ your first scenario more often than not,
which means routine (and more important to the network) things like fetching
invoices becomes unreliable.

I'm not saying we should 100% compare onion messages to Tor, but that we
might be able to learn from what works and what isn't working for them. The
systems aren't identical, but have some similarities.

On the topic of parameters across the network: could we end up in a scenario
where someone is doing like streaming payments for a live stream (or w/e),
ends up fetching a ton of invoices (actual traffic leading to payments), but
then ends up being erroneously rate limited by their peers? Assuming they
have 1 or 2 channels that have now all been clamped down, is waiting N
minutes (or w/e) their only option? If so then this might lead to their
livestream (data being transmitted elsewhere) being shut off. Oops, they
just missed the greatest World Cup goal in history! You had to be there,
you had to be there, you had to *be* there...

Another question on my mind is: if this works really well for rate limiting
of onion messages, then why can't we use it for HTLCs as well?

-- Laolu


Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-07-01 Thread Olaoluwa Osuntokun
> That's not 100% reliable at all. How long do you want for the new
> gossip?

So you know it's a new channel, with a new capacity (look at the on-chain
output), between the same parties (assuming ppl use that multi-sig signal).
If you attempt to route over it and have a stale policy, you'll get the
latest policy. Therefore, it doesn't really matter how long you wait, as you
aren't removing the channel from your graph, as you know it didn't really
close.

If you don't see a message after 2 weeks or w/e, then you mark it as a
zombie just like any other channel.

-- Laolu


On Wed, Jun 29, 2022 at 5:35 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > Hi Rusty,
> >
> > Thanks for the feedback!
> >
> >> This is over-design: if you fail to get reliable gossip, your routing
> >> will suffer anyway.  Nothing new here.
> >
> > Idk, it's pretty simple: you're already watching for closes, so if a
> > close looks a certain way, it's a splice. When you see that, you can
> > even take note of the _new_ channel size (funds added/removed) and
> > update your pathfinding/blindedpaths/hophints accordingly.
>
> Why spam the chain?
>
> > If this is an over-designed solution, then I'd categorize _only_
> > waiting N blocks as wishful thinking, given we have effectively no
> > guarantees w.r.t how long it'll take a message to propagate.
>
> Sure, it's a simplification on "wait 6 blocks plus 30 minutes".
>
> > If by routing you mean a sender, then imo still no: you don't
> > necessarily need _all_ gossip, just the latest policies of the nodes you
> > route most frequently to. On top of that, since you can get the latest
> > policy each time you incur a routing failure, as you make payments,
> > you'll get the latest policies of the nodes you care about over time.
> > Also consider that you might fail to get "reliable" gossip, simply just
> > due to your peer neighborhood aggressively rate limiting gossip (they
> > only allow 1 update a day for a node, you updated your fee, oops, no
> > splice msg for you).
>
> There's no ratelimiting on new channel announcements?
>
> > So it appears you don't agree that the "wait N blocks before you close
> > your channels" approach is a foolproof solution? Why 12 blocks, why not
> > 15? Or 144?
>
> Because it's simple.
>
> > From my PoV, the whole point of even signalling that a splice is
> > ongoing, is for the senders/receivers: they can continue to send/recv
> > payments over the channel while the splice is in process. It isn't that
> > a node isn't getting any gossip, it's that if the node fails to obtain
> > the gossip message within the N block period of time, then the channel
> > has effectively closed from their PoV, and it may be an hour+ until it's
> > seen as a usable (new) channel again.
>
> Sure.  If you want to not forget channels at all on close, that works too.
>
> > If there isn't a 100% reliable way to signal that a splice is in
> > progress, then this disincentivizes its usage, as routers can lose out
> > on potential fee revenue, and senders/receivers may grow to favor only
> > very long lived channels. IMO _only_ having a gossip message simply
> > isn't enough: there're no real guarantees w.r.t _when_ all relevant
> > parties will get your gossip message. So why not give them a 100%
> > reliable on chain signal that: something is in progress here, stay
> > tuned for the gossip message, whenever you receive that.
>
> That's not 100% reliable at all. How long do you want for the new
> gossip?
>
> Just treat every close as signalling "stay tuned for the gossip
> message".  That's reliable.  And simple.
>
> Cheers,
> Rusty.
>


Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-07-01 Thread Olaoluwa Osuntokun
Hi Lisa,

> Adding a noticeable on-chain signal runs counter to the goal of the move
> to taproot / gossip v2, which is to make lightning's onchain footprint
> indistinguishable from any other onchain usage

My model of gossip v2 is something like:

  * there's no longer a 1:1 mapping of channels and UTXOs
  * verifiers don't actually care if the advertised UTXO is actually a
    channel or not
  * verifiers aren't watching the chain for spends, as channel
    advertisements expire after 2 weeks or w/e
  * there might be a degree of "leverage" allowing someone to advertise a 1
    BTC UTXO as having 10 BTC capacity (or w/e)

So in this model, splicing on the gossip network wouldn't really be an
explicit event. Since I'm free to advertise a series of channels that might
not actually exist, I can just say: ok, this set of 5 channels is now
actually 2 channels, and you can route a bit more over them. In this world,
re-organizing a little corner of the channel graph isn't necessarily tied
to making a series of on-chain transactions.

In the realm of the gossip network as it's defined today, the act of
splicing is already itself a noticeable chain signal: I see a channel close,
then another one advertised that uses that old channel as inputs, and the
closing and opening transactions are the same. As a result, for _public_
channels any of the chain signals I listed above don't actually give away
any additional information: splices are already identifiable (in theory).

I don't disagree that waiting N blocks is probably "good enough" for most
cases (ignoring block storms, rare long intervals between blocks, etc, etc).
Instead this is suggested in the spirit of a belt-and-suspenders approach:
if I can do something to make the signal 100% reliable, that doesn't add
extra bytes to the chain, and doesn't leak additional information for public
channels (the only case where the message even matters), then why not?

-- Laolu


On Wed, Jun 29, 2022 at 5:43 PM lisa neigut  wrote:

> Adding a noticeable on-chain signal runs counter to the goal of the move
> to taproot / gossip v2, which is to make lightning's onchain footprint
> indistinguishable from any other onchain usage.
>
> I'm admittedly a bit confused as to why onchain signals are even being
> seriously proposed. Aside from "infallibility", is there another reason
> for suggesting we add an onchain detectable signal for this? Seems heavy
> handed imo, given that the severity of a comms failure is pretty minimal
> (*potential* for lost routing fees).
>
> > So it appears you don't agree that the "wait N blocks before you close
> > your channels" approach is a foolproof solution? Why 12 blocks, why not
> > 15? Or 144?
>
> fwiw I seem to remember seeing that it takes ~an hour for gossip to
> propagate (no link sorry). Given that, 2x an hour or 12 blocks is a
> reasonable first estimate. I trust we'll have time to tune this after
> we've had some real-world experience with them.
>
> Further, we can always add more robust signaling later, if lost routing
> fees turn out to be a huge issue.
>
> Finally, worth noting that Alex Myers' minisketch project may well
> help/improve gossip reconciliation efficiency to the point where gossip
> reliability is less of an issue.
>
> ~nifty
>
>


[Lightning-dev] Using BOLT 8 to Send Wumbo Messages

2022-07-01 Thread Olaoluwa Osuntokun
Hi y'all,

Quick post...

A few weeks ago, some of the dlcspecs developers reached out to ask for
feedback on this PR [1] that attempts to specify a way to send messages
larger than 65 KB using BOLT 8 (Noise based encrypted transport). After
taking a glance at the PR, I realized that it isn't totally obvious from
reading BOLT 8 that it's actually possible to do this w/o adding any new
application layer messages (as the PR proposes).

As I explained in my comment [2], all the sender needs to do is chunk their
messages, and the receiver reads out messages into a read buffer exposed
over a stream-like interface. This is no different than using TCP/IP to send
a 65 KB message over the wire: a series of messages below the Maximum
Transmission Unit at each hop are sent, w/ the receiver
collecting/re-ordering them all before delivering up the API stack.
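
A minimal Go sketch of that chunk-and-reassemble flow (the 65535-byte cap
is the BOLT 8 payload limit; how the receiver learns the total length, e.g.
via a length prefix, is left as an assumption here):

    package wumbo

    import "io"

    // maxFrameLen is the maximum payload of a single BOLT 8 encrypted
    // message.
    const maxFrameLen = 65535

    // chunk splits an application-level message into transport-sized
    // frames; each frame is sent as an ordinary encrypted message.
    func chunk(msg []byte) [][]byte {
        var frames [][]byte
        for len(msg) > 0 {
            n := len(msg)
            if n > maxFrameLen {
                n = maxFrameLen
            }
            frames = append(frames, msg[:n])
            msg = msg[n:]
        }
        return frames
    }

    // reassemble reads the full message back out of a stream-like reader
    // on the receive side; the reader never needs to know how the bytes
    // were framed on the wire, just like a TCP read buffer.
    func reassemble(r io.Reader, totalLen int) ([]byte, error) {
        buf := make([]byte, totalLen)
        if _, err := io.ReadFull(r, buf); err != nil {
            return nil, err
        }
        return buf, nil
    }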

This was actually in the OG spec, but then was removed to make things a bit
simpler. Here's my commit from way back when implementing this behavior [3].
If we wanted to re-introduce this behavior (so we can do things like
increase the max HTLC limit w/o having to worry about messages being too
large due to all the extra sigs), afaict, we could just add a new wumbo
message feature bit. This bit indicates that a peer knows how to properly
chunk and aggregate larger messages.


[1]: https://github.com/discreetlogcontracts/dlcspecs/pull/192
[2]:
https://github.com/discreetlogcontracts/dlcspecs/pull/192#issuecomment-1171569378
[3]:
https://github.com/lightningnetwork/lnd/commit/767c550d65ef97a765eabe09c97941d91e05f054

-- Laolu


Re: [Lightning-dev] Onion messages rate-limiting

2022-06-29 Thread Olaoluwa Osuntokun
Hi t-bast,

Happy to see this finally written up! With this, we have two classes of
proposals for rate limiting onion messaging:

  1. Back propagation based rate limiting as described here.

  2. Allowing nodes to express a per-message cost for their forwarding
  services, which is described here [1].

I still need to digest everything proposed here, but personally I'm more
optimistic about the 2nd category than the 1st.

One issue I see w/ the first category is that a single party can flood the
network and cause nodes to trigger their rate limits, which then affects the
usability of the onion messages for all other well-behaving parties. An
example: this might mean I can't fetch invoices, give up after a period of
time (how long?), then resort to a direct connection (with perceived payment
latency accumulating along the way).

With the 2nd route, if an attacker floods the network, they need to directly
pay for the forwarding usage themselves, though they may also directly cause
nodes to adjust their forwarding rate accordingly. However in this case, the
attacker has incurred a concrete cost, and even if the rates rise, then
those that really need the service (notifying an LSP that a user is online
or w/e) can continue to pay that new rate. In other words, by _pricing_ the
resource utilization, demand preferences can be exchanged, leading to more
efficient long term resource allocation.

W.r.t this topic, one event that imo is worth pointing out is that a very
popular onion routing system, Tor, has been facing a severe DDoS attack that
has lasted weeks, and isn't yet fully resolved [2]. The on going flooding
attack on Tor has actually started to affect LN (iirc over half of all
public routing nodes w/ an advertised address are tor-only), and other
related systems like Umbrel that 100% rely on tor for networking traversal.
Funnily enough, Tor developers have actually suggested adding some PoW to
attempt to mitigate DDoS attacks [3]. In that same post they throw around
the idea of using anonymous tokens to allow nodes to give them to "good"
clients, which is pretty similar to my lofty Forwarding Pass idea as relates
to onion messaging, and also general HTLC jamming mitigation.

In summary, we're not the first to attempt to tackle the problem of rate
limiting relayed message spam in an anonymous/pseudonymous network, and we
can probably learn a lot from what is and isn't working w.r.t how Tor
handles things. As you note near the end of your post, this might just be
the first avenue in a long line of research to best figure out how to handle
the spam concerns introduced by onion messaging. From my PoV, it still seems
to be an open question if the same network can be _both_ a reliable
micro-payment system _and_ also a reliable arbitrary message transport
layer. I guess only time will tell...

> The `shared_secret_hash` field contains a BIP 340 tagged hash

Any reason to use the tagged hash here vs just a plain ol HMAC? Under the
hood, they have a pretty similar construction [4].
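
For reference, the two constructions being compared, side by side in Go
(both are just a couple of SHA256 invocations under the hood):

    package hashing

    import (
        "crypto/hmac"
        "crypto/sha256"
    )

    // taggedHash is the BIP 340 construction:
    // SHA256(SHA256(tag) || SHA256(tag) || msg).
    func taggedHash(tag string, msg []byte) []byte {
        tagHash := sha256.Sum256([]byte(tag))
        h := sha256.New()
        h.Write(tagHash[:])
        h.Write(tagHash[:])
        h.Write(msg)
        return h.Sum(nil)
    }

    // hmacSHA256 is the RFC 2104 alternative: the tag plays the role of
    // the HMAC key, with the usual ipad/opad double hash underneath.
    func hmacSHA256(key, msg []byte) []byte {
        mac := hmac.New(sha256.New, key)
        mac.Write(msg)
        return mac.Sum(nil)
    }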

[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-February/003498.html
[2]: https://status.torproject.org/issues/2022-06-09-network-ddos/
[3]: https://blog.torproject.org/stop-the-onion-denial/
[4]: https://datatracker.ietf.org/doc/html/rfc2104

-- Laolu



On Wed, Jun 29, 2022 at 1:28 AM Bastien TEINTURIER  wrote:

> During the recent Oakland Dev Summit, some lightning engineers got
> together to discuss DoS protection for onion messages. Rusty proposed a
> very simple rate-limiting scheme that statistically propagates back to
> the correct sender, which we describe in details below.
>
> You can also read this in gist format if that works better for you [1].
>
> Nodes apply per-peer rate limits on _incoming_ onion messages that should
> be relayed (e.g. N/seconds with some burst tolerance). It is recommended
> to allow more onion messages from peers with whom you have channels, for
> example 10/seconds when you have a channel and 1/second when you don't.
>
> When relaying an onion message, nodes keep track of where it came from
> (by using the `node_id` of the peer who sent that message). Nodes only
> need the last such `node_id` per outgoing connection, which ensures the
> memory footprint is very small. Also, this data doesn't need to be
> persisted.
>
> Let's walk through an example to illustrate this mechanism:
>
> * Bob receives an onion message from Alice that should be relayed to Carol
> * After relaying that message, Bob stores Alice's `node_id` in its
>   per-connection state with Carol
> * Bob receives an onion message from Eve that should be relayed to Carol
> * After relaying that message, Bob replaces Alice's `node_id` with Eve's
>   `node_id` in its per-connection state with Carol
> * Bob receives an onion message from Alice that should be relayed to Dave
> * After relaying that message, Bob stores Alice's `node_id` in its
>   per-connection state with Dave
> * ...
>
> We introduce a new

Re: [Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-06-29 Thread Olaoluwa Osuntokun
Hi Rusty,

Thanks for the feedback!

> This is over-design: if you fail to get reliable gossip, your routing will
> suffer anyway.  Nothing new here.

Idk, it's pretty simple: you're already watching for closes, so if a close
looks a certain way, it's a splice. When you see that, you can even take
note of the _new_ channel size (funds added/removed) and update your
pathfinding/blindedpaths/hophints accordingly.

If this is an over-designed solution, then I'd categorize _only_ waiting N
blocks as wishful thinking, given we have effectively no guarantees w.r.t
how long it'll take a message to propagate.

If by routing you mean a routing node then: no, a routing node doesn't even
really need the graph at all to do their job.

If by routing you mean a sender, then imo still no: you don't necessarily
need _all_ gossip, just the latest policies of the nodes you route most
frequently to. On top of that, since you can get the latest policy each time
you incur a routing failure, as you make payments, you'll get the latest
policies of the nodes you care about over time. Also consider that you might
fail to get "reliable" gossip, simply just due to your peer neighborhood
aggressively rate limiting gossip (they only allow 1 update a day for a
node, you updated your fee, oops, no splice msg for you).

So it appears you don't agree that the "wait N blocks before you close your
channels" approach is a foolproof solution? Why 12 blocks, why not 15? Or
144?

From my PoV, the whole point of even signalling that a splice is ongoing,
is for the senders/receivers: they can continue to send/recv payments over
the channel while the splice is in process. It isn't that a node isn't
getting any gossip, it's that if the node fails to obtain the gossip message
within the N block period of time, then the channel has effectively closed
from their PoV, and it may be an hour+ until it's seen as a usable (new)
channel again.

If there isn't a 100% reliable way to signal that a splice is in progress,
then this disincentivizes its usage, as routers can lose out on potential
fee revenue, and senders/receivers may grow to favor only very long lived
channels. IMO _only_ having a gossip message simply isn't enough: there're
no real guarantees w.r.t _when_ all relevant parties will get your gossip
message. So why not give them a 100% reliable on chain signal that:
something is in progress here, stay tuned for the gossip message, whenever
you receive that.

-- Laolu


On Tue, Jun 28, 2022 at 6:40 PM Rusty Russell  wrote:

> Hi Roasbeef,
>
> This is over-design: if you fail to get reliable gossip, your routing
> will suffer anyway.  Nothing new here.
>
> And if you *know* you're missing gossip, you can simply delay onchain
> closures for longer: since nodes should respect the old channel ids for
> a while anyway.
>
> Matt's proposal to simply defer treating onchain closes is elegant and
> minimal.  We could go further and relax requirements to detect onchain
> closes at all, and optionally add a perm close message.
>
> Cheers,
> Rusty.
>
> Olaoluwa Osuntokun  writes:
> > Hi y'all,
> >
> > This mail was inspired by this [1] spec PR from Lisa. At a high level,
> > it proposes the nodes add a delay between the time they see a channel
> > closed on chain, to when they remove it from their local channel graph.
> > The motive here is to give the gossip message that indicates a splice is
> > in process, "enough" time to propagate through the network. If a node
> > can see this message before/during the splicing operation, then they'll
> > be able to relate the old and the new channels, meaning it's usable
> > again by senders/receivers _before_ the entire chain of transactions
> > confirms on chain.
> >
> > IMO, this sort of arbitrary delay (expressed in blocks) won't actually
> > address the issue in practice. The proposal suffers from the following
> > issues:
> >
> >   1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement
> >   takes longer than 2 hours to reach the "economic majority" of
> >   senders/receivers, then the channel won't be able to mask the splicing
> >   downtime.
> >
> >   2. Gossip propagation delay and offline peers. These days most nodes
> >   throttle gossip pretty aggressively. As a result, a pair of nodes doing
> >   several in-flight splices (inputs become double spent or something, so
> >   they need to try a bunch) might end up being rate limited within the
> >   network, causing the splice update msg to be lost or delayed
> >   significantly (IIRC CLN resets these values after 24 hours). On top of
> >   that, if a peer is offline for too long (think mobile senders), then
> >   they may miss the update altogether as most nodes don't do a

[Lightning-dev] Achieving Zero Downtime Splicing in Practice via Chain Signals

2022-06-27 Thread Olaoluwa Osuntokun
Hi y'all,

This mail was inspired by this [1] spec PR from Lisa. At a high level, it
proposes the nodes add a delay between the time they see a channel closed on
chain, to when they remove it from their local channel graph. The motive
here is to give the gossip message that indicates a splice is in process,
"enough" time to propagate through the network. If a node can see this
message before/during the splicing operation, then they'll be able to
relate the old and the new channels, meaning it's usable again by
senders/receivers _before_ the entire chain of transactions confirms on
chain.

IMO, this sort of arbitrary delay (expressed in blocks) won't actually
address the issue in practice. The proposal suffers from the following
issues:

  1. 12 blocks is chosen arbitrarily. If for w/e reason an announcement
  takes longer than 2 hours to reach the "economic majority" of
  senders/receivers, then the channel won't be able to mask the splicing
  downtime.

  2. Gossip propagation delay and offline peers. These days most nodes
  throttle gossip pretty aggressively. As a result, a pair of nodes doing
  several in-flight splices (inputs become double spent or something, so
  they need to try a bunch) might end up being rate limited within the
  network, causing the splice update msg to be lost or delayed significantly
  (IIRC CLN resets these values after 24 hours). On top of that, if a peer
  is offline for too long (think mobile senders), then they may miss the
  update altogether as most nodes don't do a full historical
  _channel_update_ dump anymore.

In order to resolve these issues, I think instead we need to rely on the
primary splicing signal being sourced from the chain itself. In other words,
if I see a channel close, and a closing transaction "looks" a certain way,
then I know it's a splice. This would be used in concert w/ any new gossip
messages, as the chain signal is a 100% foolproof way of letting an aware
peer know that a splice is actually happening (not a normal close). A chain
signal doesn't suffer from any of the gossip/time related issues above, as
the signal is revealed at the same time a peer learns of a channel
close/splice.

Assuming, we agree that a chain signal has some sort of role in the ultimate
plans for splicing, we'd need to decide on exactly _what_ such a signal
looks like. Off the top, a few options are:

  1. Stuff something in the annex. Works in theory, but not in practice, as
  bitcoind (being the dominant full node implementation on the p2p network,
  as well as what all the miners use) treats annexes as non-standard. Also
  the annex itself might have some fundamental issues that get in the way of
  its use all together [2].

  2. Re-use the anchors for this purpose. Anchors are nice as they allow for
  1st/2nd/3rd party CPFP. As a splice might have several inputs and outputs,
  both sides will want to make sure it gets confirmed in a timely manner.
  Ofc, RBF can be used here, but that requires both sides to be online to
  make adjustments. Pre-signing can work too, but the effectiveness
  (minimizing chain cost while expediting confirmation) would be dependent
  on the fee step size.

  In this case, we'd use a different multi-sig output (both sides can rotate
  keys if they want to), and then roll the anchors into this splicing
  transaction. Given that all nodes on the network know what the anchor size
  is (assuming feature bit understanding), they're able to realize that it's
  actually a splice, and they don't need to remove it from the channel graph
  (yet).

  3. Related to the above: just re-use the same multi-sig output. If nodes
  don't care all that much about rotating these keys, then they can just use
  the same output. This is trivially recognizable by nodes, as they already
  know the funding keys used, as they're in the channel_announcement.

  4. OP_RETURN (yeh, I had to list it). Self explanatory, push some bytes in
  an OP_RETURN and use that as the marker.

  5. Fiddle w/ the locktime+sequence somehow to make it identifiable to
  verifiers. This might run into some unintended interactions if the inputs
  provided have either relative or absolute lock times. There might also be
  some interaction w/ the main construction for eltoo (uses the locktime).

Of all the options, I think #2 makes the most sense: we already use anchors
to be able to do fee bumping after-the-fact for closing transactions, so why
not inherit them here. They make the splicing transaction slightly larger,
so maybe #3 (or something else) is a better choice.
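
As a sketch of what option #2's verifier side could look like (assuming the
standardized 330-sat anchor amount; a real check would also verify the
anchor scripts themselves, and the heuristic is a proposal, not implemented
behavior):

    package splicesignal

    import "github.com/btcsuite/btcd/wire"

    // anchorValue is the fixed anchor output amount, in satoshis.
    const anchorValue = 330

    // hasAnchorPair scans a transaction spending a channel's funding
    // output for the two anchor-sized outputs. If found, a verifier could
    // treat the close as a probable splice and hold off on removing the
    // channel from its graph until the gossip message arrives.
    func hasAnchorPair(tx *wire.MsgTx) bool {
        anchors := 0
        for _, txOut := range tx.TxOut {
            if txOut.Value == anchorValue {
                anchors++
            }
        }
        return anchors >= 2
    }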

The design space for splicing is preeetty large, so I figure the most
productive route might be discussing isolated aspects of it at a time.
Personally, I'm not suuuper caught up w/ what the latest design drafts are
(aside from convos at the recent LN Dev Summit), but from my PoV, how to
communicate the splice to other peers has been an outstanding design
question.

[1]: https://github.com/lightning/bolts/pull/1004
[2]:

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-23 Thread Olaoluwa Osuntokun
Hi Michael,

> A minor point but terminology can get frustratingly sticky if it isn't
> agreed on early. Can we refer to it as nested MuSig2 going
> forward rather than recursive MuSig2?

No strong feelings on my end, the modifier _nested_ is certainly a bit less
loaded and conceptually simpler, so I'm fine w/ using that going forward if
others are as well.

> Rene Pickhardt brought up the issue of latency with regards to
> nested/recursive MuSig2 (or nested FROST for threshold) on Bitcoin
> StackExchange

Not explicitly, but that strikes me as more of an implementation level
concern. As an example, today more nodes are starting to use replicated
database backends instead of a local embedded database. Using such a
database means that _network latency_ is now also a factor, as committing
new states requires round trips between the DBMS that'll increase the
perceived latency of payments in practice. The benefit ofc is better support
for backups/replication.

I think in the multi-signature setting for LN, system designers will also
need to factor in the added latency due to adding more signers into the mix.
Also any system that starts to break up the logical portions of a node
(signing, hosting, etc -- like Blockstream's Greenlight project), will need
to wrangle with this as well (such is the nature of distributed systems).

> MuSig2 obviously generates an aggregated Schnorr signature and so even
> nested MuSig2 require the Lightning protocol to recognize and verify
> Schnorr signatures which it currently doesn't right?

Correct.

> So is the current thinking that Schnorr signatures will be supported first
> with a Schnorr 2-of-2 on the funding output (using OP_CHECKSIGADD and
> enabling the nested schemes) before potentially supporting non-nested
> MuSig2 between the channel counterparties on the funding output later? Or
> is this still in the process of being discussed?

The current plan is to jump straight to using musig2 in the funding output,
so: a single aggregated 2-of-2 key, with a single multi-signature being used
to close the channel (co-op or force close).

Re nested vs non-nested: to my knowledge, if Alice uses the new protocol
extensions to open a taproot channel w/ Bob, then she wouldn't necessarily
be aware that Bob is actually Barol (Bob+Carol). She sees Bob's key (which
might actually be an aggregated key) and his public nonce (which might
actually also be composed of two nonces), and just runs the protocol as
normal. Sure there might be some added latency depending on Barol's system
architecture, but from Alice's PoV that might just be normal network latency
(eg: Barol is connecting over Tor which already adds some additional
latency).
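
A sketch of why the nesting is invisible to Alice (keyAgg here is a
hypothetical stand-in for the musig2 KeyAgg algorithm, i.e. the sum of
H(L, X_i)*X_i terms; the function bodies are elided on purpose):

    package nested

    import "github.com/btcsuite/btcd/btcec/v2"

    // keyAgg stands in for the musig2 KeyAgg routine; elided here.
    func keyAgg(keys ...*btcec.PublicKey) *btcec.PublicKey {
        panic("sketch only")
    }

    // From Alice's point of view the channel is a plain 2-of-2 musig2
    // aggregate; whether bobKey was itself produced by an inner
    // aggregation is invisible at the protocol level.
    func fundingKey(aliceKey, bobKey *btcec.PublicKey) *btcec.PublicKey {
        return keyAgg(aliceKey, bobKey)
    }

    // On "Barol"'s side, the single key Bob presents to Alice is already
    // an aggregate of Bob's and Carol's keys.
    func barolKey(bobKey, carolKey *btcec.PublicKey) *btcec.PublicKey {
        return keyAgg(bobKey, carolKey)
    }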

-- Laolu


[Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-07 Thread Olaoluwa Osuntokun
Hi y'all,

Last week nearly 30 (!) Lightning developers and researchers gathered in
Oakland, California for three days to discuss a number of matters related to
the current state and evolution of the protocol.  This time around, we had
much better representation for all the major Lightning Node implementations
compared to the last LN Dev Summit (Zurich, Oct 2021).

Similar to the prior LN Dev Summit, notes were kept throughout the day that
attempted on a best effort basis to capture the relevant discussions,
decisions, and new relevant research or follow up areas to circle back on.
Last time around, I sent out an email that summarized some key takeaways
(from my PoV) of the last multi-day dev summit [1]. What follows in this
email is a similar summary/recap of the three day summit. Just like last
time: if you attended and felt I missed out on a key point, or inadvertently
misrepresented a statement/idea, please feel free to reply, correcting or
adding additional detail.

The meeting notes in full can be found here:
https://docs.google.com/document/d/1KHocBjlvg-XOFH5oG_HwWdvNBIvQgxwAok3ZQ6bnCW0/edit?usp=sharing

# Simple Taproot Channels

During the last summit, Taproot was a major discussion topic: though the
soft fork had been deployed, we were all still watching the signaling
blocks stack up on the road to ultimate activation. Fast forward several
months later and Taproot has now been fully activated, with the ecosystem
starting to progressively deploy more and more advanced
systems/applications that take advantage of the new features.

One key deployment model that came out of the last LN Dev summit was the
concept of an iterative roadmap that progressively revamped the system to
use more taprooty features, instead of a "big bang" approach that would
attempt to package up as many things as possible into one larger update. At
a high level the iterative roadmap proposed that we unroll an existing
larger proposal [2] into more bite sized pieces that can be incrementally
reviewed, implemented, and ultimately deployed (see my post on the LN Dev
Summit 2021 for more details).

## Extension BOLTs

Riiight before we started on the first day, I wrote up a minimal proposal
that attempted to tackle the first two items of the Taproot iterative
deployment schedule (musig2 funding outputs and simple tapscript mapping)
[3]. I called the proposal "Simple Taproot Channels" as it set out to do a
mechanical mapping of the current commitment and script structure to a more
taprooty domain. Rather than edit 4 or 5 different BOLTs with a series of
"if this feature bit applies" nested clauses, I instead opted to create a
new standalone "extension bolt" that defines _new_ behavior on top of the
existing BOLTs, referring to the BOLTs when necessary. The style of the
document was inspired by the "proposals" proposal (very meta), which was
popularized by cdecker and adopted by t-bast with his documents on
Trampoline and Blinded Paths.

If the concept catches on, extension BOLTs provide us with a new way to
extend the spec: rather than insert everything in-line, we could instead
create new standalone documents for larger features. Having a single self
contained document makes the proposal easier to review, and also gives the
author more room to provide any background knowledge, primers, and also
rationale. Over time, as the new extensions become widespread (eg: taproot is
the default channel type), we can fold in the extensions back to the main
set of "core" BOLTs (or make new ones as relevant).

Smaller changes to the spec like deprecating an old field or tightening up
some language will likely still follow the old approach of mutating the
existing BOLTs, but larger overhauls like the planned PTLC update may find
the extension BOLTs to be a better tool.

## Tapscript, Musig2, and Lightning

As mentioned above the Simple Taproot Channels proposal does two main
things:
  1. Move the existing 2-of-2 p2wsh segwit v0 funding output to a _single
  key_ p2tr output, with the single key actually being an aggregated musig2
  key.

  2. Map all our existing scripts to the tapscript domain, using the
  internal key (keyspend path) for things like revocations, which can
  potentially allow nodes to store less state for HTLCs.

Of the two components #1 is by far the trickiest. Musig2 is a very elegant
protocol (not to mention the spec which y'all should totally check out) but
as the signatures aren't deterministic (like RFC 6979 [5]), both signers
need to "protect themselves at all times" to ensure they don't ever re-use
nonces, which can lead to a private key leak (!!).
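
The algebra behind that warning is the standard Schnorr one: two signatures
that share a nonce k over different messages immediately reveal the private
key x:

    s1 = k + e1*x        where e_i = H(R || P || m_i)
    s2 = k + e2*x
    s1 - s2 = (e1 - e2)*x  =>  x = (s1 - s2) / (e1 - e2)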

Rather than try to create some sort of pseudo-deterministic nonce scheme
(which maaybe works until the Blockstream Research team squints vaguely in
its direction), I opted to just make all nonces 100% ephemeral and tied to
the lifetime of a connection. Musig2 defines something called a public
nonce, which is actually two individual 33-byte nonces. This value needs to
be exchanged 

Re: [Lightning-dev] Taro - Separating Taro concerns from LN token concerns

2022-05-02 Thread Olaoluwa Osuntokun
Hi John,

> That said, I believe that the correct approach to supporting "tokens on
> Lightning" is to make it a separate concern from Taro, and that LL should
> create a separate BOLT proposal from the current Taro BIPs to ensure LN
> standards have a genericized protocol that all LN implementations would be
> interested in supporting.

The current Taro BIPs describe just about everything needed in order to
create, validate, and interact with assets on chain. Naturally, the system
needs to exist on-chain before any off-chain constructs can be built on top
of it.

On the topic of a BOLT, I don't think something like Taro (particularly our
vision for the deployment path) should exist at the _BOLT level_. Instead,
we aim to create a bLIP that fully specifies the _optional_ series of TLV
extensions needed to open channels using Taro assets, and send them
off-chain. IMO this isn't something that needs to be a BOLT as: it isn't
intended to be 100% universal (most LN routing nodes and users will only
know of the core bitcoin backbone), isn't critical to the operation of the
core LN network, and it's something that will only initially be deployed at
the edges (sender+receiver).

On the BOLT side, there're a number of important upgrades/extensions being
proposed, and imo it doesn't make sense to attempt to soak up the already
scarce review bandwidth into something like Taro that will live purely at
the edges of the network. I also don't want to speak for the other LN devs,
but I think most would prefer to just focus on the core LN protocol and
ignore anything non-bitcoin on the sides. The implementations/developers
that think this is something worth implementing will be able to contribute
to and review the bLIPs as they wish.

A few implementations support LTC today, but that was mainly an exercise in
helping to build consensus for segwit so we could ultimately deploy LN on
Bitcoin's mainnet (iirc some implementations are in the process of even
removing support).  A prior version of the onion payload (now called the
legacy payload) had a "realm" field that was intended to be used for
multi-chain stuff. The newer modern TLV payload dropped that field as it
wasn't being used anywhere.  IMO that was the right move as it allows us to
keep the core protocol simple and let other ppl be concerned w/ building
multi-asset stuff on top of the base protocol.

> but instead the requirement to add several feature concepts to LN that
> would allow tokens to interact with LN nodes and LN routing:

From this list of items, I gather that your vision is actually pretty
different from ours. Rather than update the core network to understand the
existence of the various Taro assets, instead we plan on leaving the core
protocol essentially unchanged, with the addition of new TLV extensions to
allow the edges to be aware of and interact w/ the Taro assets. As an
example, we wouldn't need to do anything like advertise exchange rates in
the core network over the existing gossip protocol (which doesn't seem like
the best idea in any case given how quickly they can change and the existing
challenges we have today in ensuring speedy update propagation).

> So, I ask that Lightning Labs coordinate with the LN community to ensure
> such support for other networks and other assets not be dedicated only to
> Taro, and instead genericized enough so that other networks may compete
> fairly in the market,

If you're eager to create a generalized series of extensions to enable your
vision, then of course you're welcome to pursue that. However, I don't think
the other LN developers will really care much about building some
generalized multi-chain/multi-asset system given all the existing work we
still need to do to make sure the bitcoin backbone works properly and can
scale up sufficiently. I'd also caution you against making the same mistakes
that Interledger did: they set out to build a generalized off-chain system
which abstracts over the assets/chains entirely, but years later, and
several hundred w3c mailing list posts later, virtually nothing uses it.
Why? IMO, because it was overly generalized and they assumed that if they
built it, the entities that actually needed it would magically pop up
(spoiler alert -- *SpongeBob narrator voice*: several years later, they
didn't).

> Otherwise, we will be left with LL's advantage being that LND supports
> Taro, and weird narratives that Taro is somehow superior because LND
> specifically added support for it, without creating a generic spec or BOLT
> that all nodes could adopt for multi-network, multi-asset LN-as-rails use
> cases.

Given that all the specs so far are in the open, and we opted to first build
out the specifications before releasing our own implementation, I don't
foresee Taro being something that only LL or lnd implements. All the BIPs
are public, and the bLIP will be soon as well, so any motivated individual
or set of individuals will also be able to implement and adopt the protocol.

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-04-11 Thread Olaoluwa Osuntokun
Hi Ruben,

> Also, the people that are responsible for the current shape of RGB aren't
> the people who originated the idea, so it would not be fair to the
> originators either (Peter Todd, Alekos Filini, Giacomo Zucco).

Sure I have no problems acknowledging them in the current BIP draft. Both
the protocols build off of ideas re client-side-validation, but then end up
exploring different parts of the large design space.  Peter Todd is already
there, but I can add the others you've listed. I might even just expand that
section into a longer "Related Work" section along the way.

> What I tried to say was that it does not make sense to build scripting
> support into Taro, because you can't actually do anything interesting with
> it due to this limitation.  can do with their own Taro tokens, or else he
> will burn them – not very useful

I agree that the usage will be somewhat context specific, and dependent on
the security properties one is after. In the current purposefully simplified
version, it's correct that ignoring the rules leads to assets being burnt,
but in most cases imo that's a sufficient incentive to maintain and
validate the relevant set of witnesses.

I was thinking about the scripting layer a bit over the weekend, and came up
with a "issuance covenant" design sketch that may or may not be useful. At a
high level, lets say we extend the system to allow a specified (so a new
asset type) or generalized script to be validated when an asset issuance
transaction is being validated. If we add some new domain specific covenant
op codes at the Taro level, then we'd be able to validate issuance events
like:

  * "Issuing N units of this assets can only be done if 1.5*N units of BTC
are present in the nth output of the minting transaction. In addition,
the output created must commit to a NUMs point for the internal key,
meaning that only a script path is possible. The script paths must be
revealed, with the only acceptable unlocking leaf being a time lock of 9
months".

I don't fully have a concrete protocol that would use something like that,
but that was an attempt to express certain collateralization requirements
for issuing certain assets. Verifiers would only recognize that asset if the
issuance covenant script passes, and (perhaps) the absolute timelock on
those coins hasn't expired yet. This seems like a useful primitive for
creating assets that are somehow backed by on-chain BTC collateralization.
However this is just a design sketch that needs to answer questions like:

  * are the assets still valid after that timeout period, or are they
considered to be burnt?

  * assuming that the "asset key family" (used to authorize issuance of
related assets) is jointly owned, and maintained in a canonical
Universe, then would it be possible for 3rd parties to verify the level
of collateralization on-chain, with the joint parties maintaining the
pool of collateralized assets accordingly?

  * continuing with the above, is it feasible to use a DLC script within one
of these fixed tapscript leaves to allow more collateral to be
added/removed from the pool backing those assets?
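
Just to pin down the arithmetic part of the covenant rule sketched above, a
minimal Go sketch (all types are hypothetical; the NUMs-internal-key and
timelock-leaf checks are elided):

    package issuance

    import "errors"

    // mintingTx is a stand-in for whatever proof context a Taro issuance
    // validator would carry around; only the BTC output values matter
    // here.
    type mintingTx struct {
        btcOutValues []int64 // sats per output
    }

    // checkCollateral enforces the quoted rule: issuing n units requires
    // at least 1.5*n units of BTC locked in the designated output of the
    // minting transaction.
    func checkCollateral(tx *mintingTx, n int64, outputIdx int) error {
        if outputIdx < 0 || outputIdx >= len(tx.btcOutValues) {
            return errors.New("no such output")
        }
        required := n + n/2 // 1.5 * n, integer sats
        if tx.btcOutValues[outputIdx] < required {
            return errors.New("insufficient BTC collateral for issuance")
        }
        return nil
    }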

I think it's too early to conclude that the scripting layer isn't useful.
Over time I plan to add more concrete ideas like the above to the section
tracking the types of applications that can be built on Taro.

> So theoretically you could get Bitcoin covenants to enforce certain
> spending conditions on Taro assets. Not sure how practical that ends up
> being, but intriguing to consider.

Exactly! Exactly how practical it ends up being would depend on the types of
covenants deployed in the future. With something like a TLUV and OP_CAT (as
they're sufficiently generalized vs adding op codes to verify the proofs) a
Script would be able to re-create the set of commitments to restrict the set
of outputs that can be created after spending. One would use OP_CAT to
handle re-creating the taro asset root, and TLUV (or something similar) to
handle the Bitcoin tapscript part (swap out leaf index 0 where the taro
commitment is, etc).

> The above also reminds me of another potential issue which you need to be
> aware of, if you're not already. Similar to my comment about how the
> location of the Taro tree inside the taproot tree needs to be
> deterministic for the verifier, the output in which you place the Taro
> tree also needs to be

Yep, the location needs to be fully specified which includes factoring the
output index as well. A simple way to restrict this would be just to say
it's always the first output. Otherwise, you could lift the output index
into the
asset ID calculation.

-- Laolu


Re: [Lightning-dev] Taro: A Taproot Asset Representation Overlay

2022-04-11 Thread Olaoluwa Osuntokun
Hi Harding,

Great questions!

> anything about Taro or the way you plan to implement support for
> transferring fungible assets via asset-aware LN endpoints[1] will address
> the "free call option" problem, which I think was first discussed on this
> list by Corné Plooy[2] and was later extended by ZmnSCPxj[3], with Tamas
> Blummer[4] providing the following summary

I agree w/ Tamas' quote there in that the problem doesn't exist for
transfers using the same asset. Consider a case of Alice sending to Bob,
with both of them using a hypothetical asset, USD-beef: if the final/last
hop withholds the HTLC, then they risk Bob not accepting the HTLC, either
due to the payment timing out, or exchange rate fluctuations resulting in
an insufficient amount delivered to the destination (Bob wanted 10
USD-beef, but the BTC bound in the onion route is now only worth 9
USD-beef). In either case the payment would be cancelled.

> I know several attempts at mitigation have previously been discussed on
> this list, such as barrier escrows[5], so I'm curious whether it's your
> plan to use one of those existing mitigations, suggest a new mitigation,
> or just not worry about it at this point (as Blummer mentioned, it
> probably doesn't matter for swaps where price volatility is lower than fee
> income).

I'd say our current plan is a combination of: not worrying about it at
this point, relying on proper pricing of the spread/fee-rate that exists
at the first/last mile, and potentially introducing an upfront payment as
well if issues pop up (precise option pricing would need to be worked out
still). One side benefit of introducing this upfront payment at the edges
(the idea is that the asset channels are all private chans from the LN
perspective, so a hop hint/blinded path is needed to route to them) is
that it presents a controlled experiment where we can toy with the
mechanics of such upfront payment proposals (which are a lot simpler
since there's just one hop to factor in).

Another difference here vs past attempts/proposals is that since all the
assets are at the edges, identifying a party creating long lived HTLCs that
cross an asset boundary is much simpler: the origin party is likely the one
sending those payments. This makes it easier to detect abuse and stop
forwarding those HTLCs (or close the channel), as unlike the prior
generalized LN-DEX ideas, the origin will always be that first hop.

I think another open question was exactly how a nuisance party would take
advantage of this opportunity:

 * Do they close out the channel and instead go to a traditional exchange
   to make that arbitrage trade? What guarantee do they have that their
   deposit gets there in time and that they're able to profit?

 * Do they instead attempt to re-route the swap to use some other market
   maker elsewhere in the network? In this case, won't things just recurse
   with each party in the chain attempting to exploit the arbitrage trade?

IMO as long as the spread/fees make sense at the last/first mile, then
the parties are incentivized to carry out the transfers, as they have
more certainty w.r.t revenues from the fees vs needing to rely on an
arbitrage trade that may or may not exist when they go to actually
exploit it.

> I'd also be curious to learn what you and others on this list think will
> be different about using Taro versus other attempts to get LN channels to
> cross assets, e.g. allowing payments to be routed from a BTC-based channel
> to a Liquid-BTC-based channel through an LN bridge node.  I believe a fair
> amount of work in LN's early design and implementation was aimed at
> allowing cross-chain transfers but either because of the complexity, the
> free call option problem, or some other problem, it seemed to me that work
> on the problem was largely abandoned.

I think the main difference with our approach is that the LN Bitcoin
Backbone won't explicitly be aware of the existence of any of the assets.
As a result, we won't need core changes to the channel_update message,
nor a global value carved out in the "realm" field, which was meant to be
used to identify public LN routes that crossed chains (instead, the scid
alias feature can be used to identify which channel should be used to
complete the route).

One other difference with our approach is that given all the assets are
represented on Bitcoin itself, we don't need to worry about things like
the other chain being down, translating time lock values, navigating
forks across several chains, etc. As a result, the software can be a lot
simpler, as everything is anchored in the Bitcoin chain, and we don't
actually need to build in N different wallets, which would really blow up
the complexity. I think most of the other attempts were also focused on
being able to emulate DEX-like functionality over the network. In
contrast, we're concerned mainly with payments, though I can see others
attempting to tackle building out off-chain DEX systems on this new
protocol base.

-- Laolu

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-04-08 Thread Olaoluwa Osuntokun
the owner of the
> commitment to show different histories to different people.
>
> Finally, let me conclude with two questions. Could you clarify the purpose
> of the sparse merkle tree in your design? I suppose you want to be able to
> open a commitment and show it contains a certain asset without having to
> reveal any of the other assets and simultaneously guarantee that you
> haven't committed to the same asset twice (i.e. the SMT guarantees each
> asset gets a specific location in the tree)? Or is there another reason?
>
> And the second question – when transferring Taro token ownership from one
> Bitcoin UTXO to another, do you generate a new UTXO for the recipient or do
> you support the ability to "teleport" the tokens to an existing UTXO like
> how RGB does it? If the latter, have you given consideration to timing
> issues that might occur when someone sends tokens to an existing UTXO that
> simultaneously happens to get spent by the owner?
>
> In any case, I hope this email was useful. Feel free to reach out if I can
> clarify anything.
>
> Good luck with the protocol.
>
> Best regards,
> Ruben
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Taro: A Taproot Asset Representation Overlay

2022-04-05 Thread Olaoluwa Osuntokun
Hi y'all,

I'm excited to publicly publish a new protocol I've been working on over
the past few months: Taro. Taro is a Taproot Asset Representation Overlay
which allows the issuance of normal and also collectible assets on the
main Bitcoin chain. Taro uses the Taproot script tree to commit to extra
structured asset metadata based on a hybrid merkle tree I call a Merkle
Sum Sparse Merkle Tree, or MS-SMT. An MS-SMT combines the properties of a
merkle sum tree with a sparse merkle tree, enabling things like easily
verifiable asset supply proofs and also efficient proofs of non-existence
(eg: you prove to me you're no longer committing to the 1-of-1
holographic beefzard card during our swap). Taro asset transfers are then
embedded in a virtual/overlay transaction graph which uses a chain of
asset witnesses to provably track the transfer of assets across taproot
outputs. Taro also has a scripting system, which allows for programmatic
unlocking/transfer of assets. In the first version, the scripting system
is actually a recursive instance of the Bitcoin Script Taproot VM,
meaning anything that can be expressed in the latest version of Script
can be expressed in the Taro scripting system. Future versions of the
scripting system can introduce new functionality on the Taro layer, like
covenants or other updates.
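
To give a feel for the merkle-sum half of the structure, here's a toy
sketch in Go; the hashing layout is illustrative only and is not the
encoding from the bip-taro-ms-smt document:

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    type node struct {
        hash [32]byte
        sum  uint64 // total asset units committed below this node
    }

    // branch commits to both children's hashes *and* the sum of units
    // below them, which is what makes supply proofs verifiable from just
    // the root.
    func branch(left, right node) node {
        var buf [72]byte
        copy(buf[0:32], left.hash[:])
        copy(buf[32:64], right.hash[:])
        binary.BigEndian.PutUint64(buf[64:], left.sum+right.sum)
        return node{hash: sha256.Sum256(buf[:]), sum: left.sum + right.sum}
    }

    func main() {
        a := node{hash: sha256.Sum256([]byte("asset-a-leaf")), sum: 100}
        b := node{hash: sha256.Sum256([]byte("asset-b-leaf")), sum: 42}
        root := branch(a, b)
        fmt.Printf("root: %x..., total supply: %d\n", root.hash[:8], root.sum)
    }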

The Taro design also supports integration with the Lightning Network
(BOLTs), as the scripting system can be used to emulate the existing HTLC
structure, which allows for multi-hop transfers of Taro assets. Rather
than modify the internal network, the protocol proposes to instead only
recognize "assets at the edges", which means that only the
sender+receiver actually need to know about and validate the assets. This
deployment route means that we don't need to build up an entirely new
network and liquidity for each asset. Instead, all asset transfers will
utilize the Bitcoin backbone of the Lightning Network, which means that
the internal routers just see Bitcoin transfers as normal, and don't even
know about assets at the edges. As a result, increased demand for
transfers of these assets at the edges (say, a USD stablecoin) will in
turn generate increased demand for LN capacity, resulting in more
transfers and also more routing revenue for the Bitcoin backbone nodes.

The set of BIPs are a multi-part suite, with the following breakdown:
 * The main Taro protocol:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro.mediawiki
 * The MS-SMT structure:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-ms-smt.mediawiki
 * The Taro VM:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-vm.mediawiki
 * The Taro address format:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-addr.mediawiki
 * The Taro Universe concept:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-universe.mediawiki
 * The Taro flat file proof format:
https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-proof-file.mediawiki

Rather than post them all in line (as the text wouldn't fit in the
allowed size limit), all the BIPs can be found above.

-- Laolu
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-03-24 Thread Olaoluwa Osuntokun
Oh, one other thing I forgot to mention: switching over the _existing_
channels using whatever update protocol we land on presents an
accelerated path to taproot/PTLCs for the existing channels, as they
don't necessarily need any new gossip announcement messages. Instead,
after making the switch, they can start to advertise the new relevant
set of feature bits for PTLC-like stuff.

This gives us a bit more freedom, as moving the existing channels over
to taproot isn't necessarily blocked on deciding which path we want to
take re channel announcement evolution.

-- Laolu


[Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-03-24 Thread Olaoluwa Osuntokun
Hi y'all,

## Dynamic Commitments Retrospective

Two years-ish ago I made a mailing list post on some ideas re dynamic
commitments [1], and how the concept can be used to allow us to upgrade
channel types on the fly, and also remove pesky hard coded limits like the
483 HTLC in-flight limit that's present today. Back then my main target was
upgrading all the existing channels over to the anchor output commitment
variant, so the core internal routing network would be more resilient in a
persistent high fee environment (which hasn't really happened over the past
2 years for various reasons tbh). Fast forward to today, and with taproot
now active on mainnet, and some initial design work/sketches for
taproot-native channels underway, I figure it would be good to bump this
concept as it gives us a way to upgrade all 80k+ public channels to taproot
without any on chain transactions.

## Updating Across Witness Versions w/ Adapter Commitments

In my original mail, I incorrectly concluded that the dynamic commitments
concept would only really work within the confines of a "static"
multi-sig output, meaning that it couldn't be used to help channels
upgrade to future segwit witness versions. Thankfully this reply [2] by
ZmnSCPxj outlined a way to achieve this in practice. At a high level, he
proposes an "adapter commitment" (similar to the kickoff transaction in
eltoo/duplex), which is basically an upgrade transaction that spends one
witness version type and produces an output with the next (upgraded)
type. In the context of converting from segwit v0 to v1 (taproot), two
peers would collaboratively create a new adapter commitment that spends
the old v0 multi-sig output and produces a _new_ v1 multi-sig output. The
new commitment transaction would then be anchored using this new output.

Here's a rough sequence diagram of the before and after state to better
convey the concept:

  * Before: fundingOutputV0 -> commitmentTransaction

  * After: fundingOutputV0 -> fundingOutputV1 (the adapter) ->
commitmentTransaction

It *is* still the case that _ultimately_ the two transactions to close
the old segwit v0 funding output and re-open the channel with a new
segwit v1 funding output are unavoidable. However, this adapter
commitment lets peers _defer_ these two transactions until closing time.
When force closing, two transactions need to be confirmed before the
commitment outputs can be resolved. However, for co-op close, you can
just spend the v0 output and deliver to the relevant P2TR outputs. The
adapter commitment can leverage sighash anyonecanpay to let both parties
(assuming it's symmetric) attach additional inputs for fees (to avoid
re-introducing the old update_fee related static fee issues), or
alternatively inherit the anchor output pattern at this level.

## Existing Dynamic Commitments Proposals

Assuming this concept holds up, then we need an actual concrete protocol
to allow for dynamic commitment updates. Last year, Rusty made a spec PR
outlining a way to upgrade the commitment type (leveraging the new
commitment type feature bits) upon channel re-establish [3]. The proposal
relies on another message that both sides send (`stfu`) to clear the
commitment (similar to the shutdown semantics) before the switch-over
happens. However, as this is tied to the channel re-establish flow, it
doesn't allow both sides to do things like only allow your peer to attach
N HTLCs to start with, slowly increasing their allotted slots and
possibly reducing them (TCP AIMD style).

## A Two-Phase Dynamic Commitment Update Protocol

IMO if we're adding in a way to do commitment/channel upgrades, then it may
be worthwhile to go with a more generalized, but slightly more involved
route instead. In the remainder of this mail, I'll describe an alternative
approach that would allow upgrading nearly all channel/commitment related
values (dust limit, max in flight, etc), which is inspired by the way the
Raft consensus protocol handles configuration/member changes.

For those that aren't aware, Raft is a consensus protocol analogous to
Paxos (but isn't byzantine fault tolerant out of the box) that was
designed as a more understandable alternative to Paxos for a pedagogical
environment. Typically the algorithm is run in the context of a fixed
cluster with N machines, but it supports adding/removing machines from
the cluster with a configuration update protocol. At a high level, the
way this works is that a new config is sent to the leader, with the
leader synchronizing the config change with the other members of the
cluster. Once a majority threshold is reached, the leader then commits
the config change, with the acknowledged parties using the new config
(basically a two-phase commit). I'm skipping over some edge cases here
that can arise if the new nodes participate in consensus too early, which
can cause a split majority leading to two leaders being elected.
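
As a toy illustration of how that two-phase pattern might map onto a
channel parameter update (all type and field names here are
hypothetical, not from any spec draft):

    package main

    import "fmt"

    // ChanParams is a hypothetical subset of the upgradable values.
    type ChanParams struct {
        DustLimitSats   uint64
        MaxInFlightHTLC uint16
    }

    // paramState mirrors Raft's two-phase config change: a proposed
    // config is buffered first, and only on acknowledgement do both
    // sides atomically switch over.
    type paramState struct {
        active   ChanParams
        proposed *ChanParams
    }

    // Phase 1: buffer the proposed parameters.
    func (s *paramState) propose(p ChanParams) { s.proposed = &p }

    // Phase 2: the commit point, after both sides have acknowledged.
    func (s *paramState) commit() {
        if s.proposed != nil {
            s.active, s.proposed = *s.proposed, nil
        }
    }

    func main() {
        s := paramState{active: ChanParams{546, 483}}
        s.propose(ChanParams{354, 30})
        s.commit()
        fmt.Printf("active params: %+v\n", s.active)
    }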

Applying this to the LN context is a bit simpler than a generalized
protocol, as we 

Re: [Lightning-dev] Taproot-aware Channel Announcements + Proof Verification

2022-03-23 Thread Olaoluwa Osuntokun
Hi Harding,

That's a really good point: the false signal is more costly with witness v0
outputs, as they need to pay for the extra bytes in the witness.

I agree we can't 100% maintain the same level of binding for pure taproot
channels. However, by having validators verify the final key derivation,
we still effectively restrict the _type_ of outputs that can be used to
advertise channels, as it means that someone can't use "normal" P2TR
wallet outputs for channel proofs (barring the existence of some new
threshold schnorr wallet, but that would use a different key aggregation
scheme (FROST?) altogether).

-- Laolu

On Wed, Mar 23, 2022 at 2:02 PM David A. Harding  wrote:

> On 22.03.2022 15:10, Olaoluwa Osuntokun wrote:
> > ### Should the proof+verification strongly bind to the LN context?
> > Today, nodes use the two bitcoin keys and construct a p2wsh
> > multi-sig script and then verify that the script matches what has
> > been confirmed on chain. With taproot, the output is actually just a
> > single key. So if we want to maintain the same level of binding
> > (which makes it harder to advertise fake channels using just a
> > change output you have or something), then we'd specify that nodes
> > reconstruct the aggregated bitcoin public key
>
> I think there's a significant difference between P2WSH and P2TR that's
> not being considered here.  With P2WSH, if I want to fake the creation
> of a channel by making my change outputs OP_CMS(2-of-2) with myself, I
> pay for that deception by incurring extra fee costs at spend time due to
> the need to provide more witness data over plain single-sig.  With
> P2TR/MuSig2,
> I can make my change outputs MuSig2(2-of-2) with myself without
> incurring
> any additional onchain spend costs.
>
> In short, I don't believe that you can maintain the same level of
> binding-to-LN-usage currently provided by P2WSH when P2TR keypath
> spends are allowed.
>
> Thanks,
>
> -Dave
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-03-22 Thread Olaoluwa Osuntokun
Hi Rusty,

> Timestamps are replaced with block heights.

This is a conceptually small change, but would actually make things like
rate limiting updates easier and more uniform for implementations. A
simple rule would be only allowing one update per block, which cuts down
a lot on potential chatter traffic, but maybe it's _too_ restrictive?
Based on my network observations, these days some power user nodes
aggressively update their fee schedules in response to liquidity
imbalances, or as an attempt to incentivize usage of some channels vs
others.
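
As a quick sketch of how uniform that rule could be (illustrative Go,
not taken from any implementation), a height-based limiter is only a few
lines:

    package main

    import "fmt"

    // rateLimiter accepts at most one channel_update per channel per
    // block height.
    type rateLimiter struct {
        lastHeight map[uint64]uint32 // scid -> last accepted height
    }

    func (r *rateLimiter) allow(scid uint64, height uint32) bool {
        if last, ok := r.lastHeight[scid]; ok && last >= height {
            return false // already accepted an update at this height
        }
        r.lastHeight[scid] = height
        return true
    }

    func main() {
        r := rateLimiter{lastHeight: make(map[uint64]uint32)}
        fmt.Println(r.allow(1234, 730_000)) // true
        fmt.Println(r.allow(1234, 730_000)) // false: same block
        fmt.Println(r.allow(1234, 730_001)) // true: next block
    }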

> 1. Nodes send out weekly node_announcement_v2, proving they own some
> UTXOs.

If a node doesn't send out this announcement, then will others start to
ignore their "channels"?

> 3. This uses UTXOs for anti-spam, but doesn't tie them to channels
> directly.

As I hinted a bit in prior discussion, and also in my recent ML post [1]
outlining a possible "do most of the same things" stop gap, this has the
potentially undesirable effect of allowing parties on the network to
utilize _existing_ outputs to advertise false channels and inflate the
"total network capacity" metric. We'd effectively be moving away from
"Alice and Bob have N BTC of bound capacity between them" to "Bob has N
BTC he can use for signing these proofs".

Also while we're at it, why not add a merkle proof here (independent of
which direction we go), which would make it possible for
constrained/mobile nodes to more easily verify gossip data (could be an
optional query flag)?

> FIXME: what about tapscripts?

Yeah, one side effect of moving to nodes advertising how much BTC they
can sign with vs the _actual_ BTC they have in "channels" is that to
extend validation here, the verifiers would need to fully verify possible
script path spends (assuming a scenario where a NUMs point is used as the
internal key).

> Legacy proofs are two signatures, similar to the existing
> channel_announcement.

Why not musig2? We'd be able to get away with just a single sig with this
modified `node_announcement_v2` and if we go the `channel_announcement2`
route, we'd be able to compress those 4 sigs into one.

> - If two node_announcement_v2 claim the same UTXO, use the first seen,
> discard any others.

So then this would mean that nodes that _actually_ have a channel between
them can't _individually_ claim the capacity?

> - node_announcement_v2 are discarded after a week (1000 blocks).  - Nodes
> do not need to monitor existence of UTXOs after initial check (since they
> will automatically prune them after a week).

A side effect of this would be _continual_ gossip churn in order to keep
channels alive. Today we do have the 2-week `channel_update` heartbeat,
but channel updates are relatively small compared to this
`node_announcement_v2` message.

> - The total proved utxo value (not counting any utxos which are spent) is
> multiplied by 10 to give the "announcable_channel_capacity" for that node.

I don't see how this is at all useful in practice. We'd end up with inflated
numbers for the total node capacity, and path finding would be more
difficult as it isn't clear exactly how large an HTLC I can send over the
"channel". Sure there's the existence of max_htlc, but in that case why add
this "leverage" factor in the first place?

> 1. type: 273 (`channel_update_v2`)

This seems to allow the advertisement of channels which aren't actually
anchored in the chain, which I *think* is a cool thing to have? On the
other hand, the graph at the _edge_ level would be far more dynamic than
it is today (Bob can advertise an entirely distinct topology from one day
to another). Would need to think about the implications here for path
finding algorithms and nodes that want to maintain an up-to-date view of
the network...

> - `channel_id_and_claimant` is a 31-bit per-node channel_id which can be
> used in onion_messages, and a one bit stolen for the `claim` flag.

Where would this `channel_id` be derived from? FWIW, using this value in the
onion means we get a form of pubkey based routing [3] depending on how these
are derived.

> This simplifies gossip, requiring only two messages instead of three,
> and reducing the UTXO validation requirements to per-node instead of
> per-channel.

I'm not sure this would actually _simplify_ gossip in practice, given that
we'd be moving to a channel graph that isn't entirely based in the reality
of what's routable, and would be far more dynamic than it is today.

> We can use a convention that a channel_update_v2 with no `capacity` is a
> permanent close.

On the neutrino side, we tried to do something where if we saw both
channel directions disabled, then we'd mark the channel as closed. But in
practice, if you're not syncing _every_ channel update ever transmitted,
then you'll end up missing them.

> It also allows "leasing" of UTXOs: you could pay someone to sign their
> UTXO for your node_announcement, with some level of trust.

I'm not sure this is entirely a *good* thing, as 

[Lightning-dev] Taproot-aware Channel Announcements + Proof Verification

2022-03-22 Thread Olaoluwa Osuntokun
Hi y'all,

On the lnd side we've nearly finished fully integrating taproot into the
codebase [1] (which includes the btcsuite set of libraries and also full
btcwallet support), scheduled to ship in 0.15 (~April), which will enable
existing users of lnd's on-chain wallet and APIs to start getting taprooty
wit it. As any flavor of taproot will mean a different on-chain funding
output, the _existing_ gossip layer needs some sort of modification, as the
BOLTs today don't define how to validate that a given output is actually a
taproot channel. Discussions during the prior spec meetings seem to have
centered in on two possible paths:

  1. Use this as an opportunity to entirely redesign the channel validation
  portion of the gossip layer (ring sigs? zkps? less validation? better
  privacy?).

  2. Defer the above, and instead go with a more minimal mostly the same
  channel_announcement-like message for taproot channels.

In this mail, I want to explore the second option in detail, as Rusty has
already started a thread on what option #1 may look like [2].

## A new taproot-aware `channel_announcement2` message

A simple `channel_announcement2` message that was taproot aware could look
something like:

1. type: xxx (`channel_announcement2`)
2. data:
* [`signature`:`announcement_sig`]
* [`u16`:`len`]
* [`len*byte`:`features`]
* [`chain_hash`:`chain_hash`]
* [`short_channel_id`:`short_channel_id`]
* [`point`:`node_id_1`]
* [`point`:`node_id_2`]
* [`point`:`bitcoin_key_1`]
* [`point`:`bitcoin_key_2`]

(we can probably assume it'll be native TLV as well)

This is pretty much the same as the _existing_ `channel_announcement`
message but instead of us carrying around 4 signatures, we'd use musig2 to
generate a _single_ signature under the aggregate public key
(`node_id_1`+`node_id_2`+`bitcoin_key_1`+`bitcoin_key_2`).

While we're here, similar to what's been proposed in [2], it likely makes
sense to add an optional (?) merkle proof here to make channel validation
more feasible for constrained/mobile clients (they don't need to fetch
all those blocks any longer). The tradeoff here is that the merkle proof
would potentially be the largest part of the message, which means more
data needed to store the full channel graph. Alternatively, we could make
this into another gossip query option, with nodes only fetching the
proofs if they actually need them (full nodes with a txindex can just
fetch the transaction).

### Should the proof+verification strongly bind to the LN context?

As far as on-chain output validation, the main difference would be how
nodes actually validate the Bitcoin output (referenced by the
`short_channel_id`) on chain. Today, nodes use the two bitcoin keys to
construct a p2wsh multi-sig script and then verify that the script
matches what has been confirmed on chain. With taproot, the output is
actually just a single key. So if we want to maintain the same level of
binding (which makes it harder to advertise fake channels using just a
change output you have or something), then we'd specify that nodes
reconstruct the aggregated bitcoin public key (Q = a_1*B_1 + a_2*B_2,
where a_i is a blinding factor derived using the target key and every
other key in the signing set) and assert that this matches the pkScript
on chain.
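
As a rough sketch of that check (Go, using btcd's musig2 package; treat
the exact API and option names as indicative rather than authoritative),
assuming a BIP 86 style output with no committed script root:

    package chanvalidate

    import (
        "bytes"

        "github.com/btcsuite/btcd/btcec/v2"
        "github.com/btcsuite/btcd/btcec/v2/schnorr"
        "github.com/btcsuite/btcd/btcec/v2/schnorr/musig2"
    )

    // verifyFundingOutput recomputes Q = a_1*B_1 + a_2*B_2 and checks it
    // against the on-chain v1 witness program (OP_1 <32-byte x-only key>).
    func verifyFundingOutput(bitcoinKey1, bitcoinKey2 *btcec.PublicKey,
        pkScript []byte) (bool, error) {

        aggKey, err := musig2.AggregateKeys(
            []*btcec.PublicKey{bitcoinKey1, bitcoinKey2}, true,
            // Assumed option: apply the BIP 86 tweak, so a plain
            // aggregated key with no script root is what lands on chain.
            musig2.WithBIP86KeyTweak(),
        )
        if err != nil {
            return false, err
        }

        expected := append(
            []byte{0x51, 0x20}, schnorr.SerializePubKey(aggKey.FinalKey)...,
        )
        return bytes.Equal(expected, pkScript), nil
    }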

By verifying that this output key is just an aggregated key, we can also
ensure that there's no actual committed script root (a la BIP 86 [3]),
which binds the output to our context further. However, maybe we want to
also include a `tapscript_root` field as well (so use the musig2 key as
the internal key, then apply the tweaking operations and verify things
match up), which would enable more elaborate unlocking/multi-sig
functionality for the normal funding output.

Alternatively, if we decided that this strong binding isn't as desirable
(starting to get into option 1 territory), then we'd specify just a single
Bitcoin key and look for that directly in the on chain script. IMO, if we're
going the route of something that very closely resembles what we have today,
then we shouldn't drop the strong binding, and fully verify that the key is
indeed a musig2 aggregated public key.

## `announcement_signatures2` and musig2 awareness

The `announcement_signatures` message would also need to change as we'd only
be sending a single signature across the wire: the musig2 _partial_
signature.

1. type: xxx (`announcement_signatures2`)
2. data:
* [`channel_id`:`channel_id`]
* [`short_channel_id`:`short_channel_id`]
* [`signature`:`partial_chan_proof_sig`]

Once both sides exchange this, as normal, either party can generate the
`channel_announcement2` message to broadcast to the network.
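
For intuition, here's a compressed sketch of that flow using btcd's
musig2 package, simplified to one key per side rather than the four keys
above (the exact API is indicative, and the transport helpers standing
in for the wire messages are hypothetical):

    package annsig

    import (
        "github.com/btcsuite/btcd/btcec/v2"
        "github.com/btcsuite/btcd/btcec/v2/schnorr/musig2"
    )

    // Hypothetical transport helpers standing in for the actual nonce
    // exchange and announcement_signatures2 messages.
    var (
        sendNonce      func([66]byte)
        recvNonce      func() [66]byte
        sendPartialSig func(*musig2.PartialSignature)
        recvPartialSig func() *musig2.PartialSignature
    )

    // signAnnouncement runs one side of the exchange: trade nonces, emit
    // our partial sig, then combine the remote one into the single
    // schnorr signature carried in channel_announcement2.
    func signAnnouncement(localKey *btcec.PrivateKey,
        signers []*btcec.PublicKey, msg [32]byte) error {

        ctx, err := musig2.NewContext(
            localKey, true, musig2.WithKnownSigners(signers),
        )
        if err != nil {
            return err
        }
        session, err := ctx.NewSession()
        if err != nil {
            return err
        }

        // The extra round trip musig2 adds: swap public nonces first.
        sendNonce(session.PublicNonce())
        if _, err := session.RegisterPubNonce(recvNonce()); err != nil {
            return err
        }

        // Our partial signature, i.e. the payload of
        // announcement_signatures2.
        partialSig, err := session.Sign(msg)
        if err != nil {
            return err
        }
        sendPartialSig(partialSig)

        // Combining both partial sigs yields the final signature.
        if _, err := session.CombineSig(recvPartialSig()); err != nil {
            return err
        }
        _ = session.FinalSig()
        return nil
    }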

The addition of musig2 carries with it an additional dependency: before
these signatures can be generated, both sides need to exchange their
public nonces (in practice it's two nonce points, R_1 and R_2), which are
then used to generate the aggregated nonce used for 

Re: [Lightning-dev] A Proposal for Adding Bandwidth Metered Payment to Onion Messages

2022-03-22 Thread Olaoluwa Osuntokun
completed. This is correct, and imo can be mitigated mainly via a "tit
for tat" approach, so paying for a smol session and seeing if they
actually cooperate. Incentive-wise, if the node continues to operate as
expected, then they can expect more messaging sessions to be created,
which means mo sats as revenue. I think we're seeing a similar dynamic
play out today in the network, as path finding algorithms continue to
evolve to factor in "reliability" information (usually some derived
probability of success), which can tend to favor well managed and
responsive nodes.

[1]: https://github.com/lightning/blips/blob/master/blip-0003.md
[2]: https://github.com/lightning/bolts/pull/658

-- Laolu


On Wed, Feb 23, 2022 at 8:37 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > Hi y'all,
> >
> > (TL;DR: a way for nodes to get paid to forward onion messages by
> > adding an upfront session creation phase that uses AMP to tender a
> > messaging session to a receiver, with nodes being paid upfront for
> > purchase of forwarding bandwidth, and a session identifier being
> > transmitted alongside onion messages to identify paid sessions)
>
> AMP seems to be a Lightning Labs proprietary extension.  You mean
> keysend, which at least has a draft spec?
>
> > Onion messaging has been proposed as a way to do things like fetch
> > invoices directly from a potential receiver _directly_ over the
> > existing LN. The current proposal (packaged under the BOLT 12
> > umbrella) uses a new message (`onion_message`) that inherits the
> > design of the existing Sphinx-based onion blob included in htlc_add
> > messages as a way to propagate arbitrary messages across the
> > network. Blinded paths, which are effectively an unrolled Sphinx
> > SURB (single use reply block), are used to support reply messages in
> > a more private manner. Compared to SURBs, blinded paths are more
> > flexible as they don't lock in things like fees or CLTV values.
> >
> > A direct outcome of widespread adoption of the proposal is that the
> > scope of LN is expanded beyond "just" a decentralized p2p payment
> > system, with the
>
> Sure, let's keep encouraging people to use HTLCs for free to send data?
> I can certainly implement that if you prefer!
>
> >  1. As there's no explicit session creation/acceptance, a node can
> >  be spammed with unsolicited messages with no way to deny unwanted
> >  messages nor explicitly allow messages from certain senders.
> >
> >  2. Nodes that forward these messages (up to 32 KB per message)
> >  receive no compensation for the network bandwidth they expend,
> >  effectively shuffling around messages for free.
> >
> >  3. Rate limiting isn't concretely addressed, which may result in
> >  heterogeneous rate limiting policies enforced around the network,
> >  which can degrade the developer/user experience (why are my packets
> >  being randomly dropped?).
>
> Sure, this is a fun one!  I can post separately on ratelimiting; I
> suggest naively limiting to 10/sec for peers with channels, and 1/sec
> for peers without for now.
>
> (In practice, spamming with HTLCs is infinitely more satisfying...)
>
> > In this email I propose a way to address the issues mentioned above
> > by adding explicit onion messaging session creation as well as a way
> > for nodes to be (optionally) paid for any onion messages they
> > forward. In short, an explicit session creation phase is introduced,
> > with the receiver being able to accept/deny the session. If the
> > session is accepted, then all nodes that comprise the session route
> > are compensated for allotting a certain amount of bandwidth to the
> > session (which is ephemeral by nature).
>
> It's an interesting layer on top (esp if you want to stream movies), but
> I never proposed this because it seems to add a source-identifying
> session id, which is a huge privacy step backwards.
>
> You really *do not want* to use this for independent transmissions.
>
> I flirted with using blinded tokens, but it gets complex fast; ideas
> welcome!
>
> > ## Node Announcement TLV Extension
> >
> > In order to allow nodes to signal that they want to be paid to
> > forward onion messages and also specify their pricing, we add two
> > new TLVs to the node_ann message:
> >
> >   * type: 1 (`sats_per_byte`)
> >* data:
> >   * [`uint64`:`forwarding_rate`]
> >   * type: 2 (`sats_per_block`)
> >* data:
> >   * [`uint64`:`per_block_rate`]
>
> You mean:
>
>* type: 1 (`sats_per_byte`)
>* data:
>

[Lightning-dev] A Proposal for Adding Bandwidth Metered Payment to Onion Messages

2022-02-23 Thread Olaoluwa Osuntokun
Hi y'all,

(TL;DR: a way for nodes to get paid to forward onion messages by adding
an upfront session creation phase that uses AMP to tender a messaging
session to a receiver, with nodes being paid upfront for purchase of
forwarding bandwidth, and a session identifier being transmitted
alongside onion messages to identify paid sessions)

Onion messaging has been proposed as a way to do things like fetch
invoices directly from a potential receiver _directly_ over the existing
LN. The current proposal (packaged under the BOLT 12 umbrella) uses a new
message (`onion_message`) that inherits the design of the existing
Sphinx-based onion blob included in htlc_add messages as a way to
propagate arbitrary messages across the network. Blinded paths, which are
effectively an unrolled Sphinx SURB (single use reply block), are used to
support reply messages in a more private manner. Compared to SURBs,
blinded paths are more flexible, as they don't lock in things like fees
or CLTV values.

A direct outcome of widespread adoption of the proposal is that the scope
of LN is expanded beyond "just" a decentralized p2p payment system, with
the protocol evolving to also support pseudonymous messaging and
arbitrary data transfer across the network. This expanded network
(payments + arbitrary data transfer) enables use cases like streaming
video transfer, network tunneled VPNs, large file downloads, popcorn
time, etc. Depending on one's view, the existence of such a combined
protocol/network may either elicit feelings of dread (can we really do
_both_ payments _and_ data properly in the same network?) or excitement
(I finally have a censorship resistant way to watch unboxing videos of
all my favorite gadgets!).

Putting aside the discussion w.r.t whether such an expanded network is
desirable, and also whether the combined functionality fundamentally
_needs_ to exist in the confines of a single protocol stack (eg: if LN
impls packaged tor clients, would that be enough?), IMO onion messaging
as currently proposed has a few issues:

 1. As there's no explicit session creation/acceptance, a node can be
 spammed with unsolicited messages with no way to deny unwanted messages nor
 explicitly allow messages from certain senders.

 2. Nodes that forward these messages (up to 32 KB per message) receive
 no compensation for the network bandwidth they expend, effectively
 shuffling around messages for free.

 3. Rate limiting isn't concretely addressed, which may result in
 heterogeneous rate limiting policies enforced around the network, which can
 degrade the developer/user experience (why are my packets being randomly
 dropped?).

In this email I propose a way to address the issues mentioned above by
adding explicit onion messaging session creation as well as a way for nodes
to be (optionally) paid for any onion messages they forward. In short, an
explicit session creation phase is introduced, with the receiver being able
to accept/deny the session. If the session is accepted, then all nodes that
comprise the session route are compensated for allotting a certain amount of
bandwidth to the session (which is ephemeral by nature).

# High-Level Overview

Inspired by HORNET's two-phase session creation (the first phase builds
the circuit, the second allows data transfers), I propose we break up
onion messaging session creation into two phases. In the first phase, a
sender purchases _forwarding bandwidth_ from a series of intermediate
nodes and also requests creation of a messaging session to the receiver
in a single _atomic_ step.
In the second phase, assuming the session creation was successful, the
sender is able to use the purchased forwarding bandwidth to send messages to
the receiver. The session stays open until either it expires, or the
sender runs out of cumulative forwarding bandwidth and needs to repeat
the first step.
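
To make the purchase concrete, here's a sketch of the arithmetic a
sender might run during phase one against the per-byte/per-block pricing
proposed later in this mail (rates and amounts are purely illustrative):

    package main

    import "fmt"

    // sessionCost prices a bandwidth purchase at a single hop using the
    // advertised sats_per_byte and sats_per_block rates.
    func sessionCost(numBytes, numBlocks, satsPerByte, satsPerBlock uint64) uint64 {
        return numBytes*satsPerByte + numBlocks*satsPerBlock
    }

    func main() {
        // 32 KB of forwarding bandwidth held open for ~one day (144
        // blocks) at 1 sat/byte and 10 sats/block.
        perHop := sessionCost(32*1024, 144, 1, 10)
        fmt.Printf("per hop: %d sats, 3-hop route: %d sats\n", perHop, 3*perHop)
    }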

As we'll see shortly, the created onion messaging sessions aren't tightly
coupled to the nodes that are a part of the initial session creation.
Instead, session creation creates a sort of overlay network, from the PoV
of the sender, that can be used to transmit messages. The same route
doesn't need to be used by subsequent onion message transmissions, as the
sending node may already have existing bandwidth sessions it can put
together to send a new/existing message.

One trade-off of the current approach is that a small amount of
per-session state is added to nodes that want to be paid to forward onion
messages. The current onion messaging proposal takes care to _not_
introduce any extra state to nodes in an onion messaging path: they just
decrypt/unblind and forward to the next hop. This proposal as it stands
adds just 40-bytes-ish of storage overhead per session (sessions are
ephemeral, so this state can be forgotten over time). In practice, as
nodes are being paid to forward, they can ensure their pricing (more on
that later) properly compensates them for this added storage per session.

# AMP + Onion Messaging == Paid Onion Messaging

What 

Re: [Lightning-dev] Future of Atomic Multi-path Payments (AMP)?

2022-02-21 Thread Olaoluwa Osuntokun
Hi Jozef,

> I'm working on a project that uses LND with Atomic Multi-path Payments
> (AMP) invoices. It seems there isn't any mobile lightning wallet that is
> able to send sats to AMP invoices.

Any mobile wallet built on lnd v0.14 should be able to send to the
reusable invoices (which have the required feature bit set for AMP).
Wallets built on lnd v0.13 will be able to send to the invoices, but only
once, unless they manually set the `set_id` flag to a random value for
any subsequent sends.

We have some new-ish higher level API docs here:
https://docs.lightning.engineering/lightning-network-tools/lnd/amp.
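
As a rough sketch of what a repeated send to one of these reusable
invoices looks like against lnd's router RPC (method and field names per
my recollection of the v0.14-era protos; treat them as indicative):

    package payments

    import (
        "context"

        "github.com/lightningnetwork/lnd/lnrpc/routerrpc"
    )

    // payAMPInvoice re-pays a reusable AMP invoice. With the Amp flag
    // set, lnd derives a fresh set_id per attempt, so the same invoice
    // can be paid more than once.
    func payAMPInvoice(ctx context.Context, client routerrpc.RouterClient,
        ampInvoice string) error {

        stream, err := client.SendPaymentV2(ctx, &routerrpc.SendPaymentRequest{
            PaymentRequest: ampInvoice,
            Amp:            true,
            TimeoutSeconds: 60,
            FeeLimitSat:    100,
        })
        if err != nil {
            return err
        }

        // Block until the first payment state update arrives.
        _, err = stream.Recv()
        return err
    }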

> What is the future of AMP? Is it a dead-end or can it make it to standard?
> Is there any other lightning implementation that supports it?

Great questions! I view it as the successor to keysend (same thing, but
with support for payment splitting and a re-usable invoice format),
though it isn't as widely propagated yet. The next step here is for us to
finalize the current spec draft [1] and propose it as either a BOLT or a
bLIP (the spec doc linked uses a more bLIP-like format, but this OG PR
adds things to BOLT 4: https://github.com/lightning/bolts/pull/658). It's
been on my TODO list for some time now, but I should be able to get to it
over the next few weeks!

When PTLCs are eventually rolled out across the network, AMP can be updated
to use discrete-logs instead of payment hashes (Discrete Log Atomic
Multi-Path, so DAMP? lol..), and also support receiver secret-reveal (which
some consider to be a necessary component for "proof of payments") by
combining each share with a receiver specified public key.

[1]:
https://github.com/cfromknecht/lightning-rfc/blob/bolt-amp/21-atomic-multi-path-payments.md

-- Laolu

On Fri, Feb 18, 2022 at 5:10 AM Jozef Hutko  wrote:

> Hello,
>
> I'm working on a project that uses LND with Atomic Multi-path Payments
> (AMP) invoices. It seems there isn't any mobile lightning wallet that is
> able to send sats to AMP invoices. What is the future of AMP? Is it a
> dead-end or can it make it to standard? Is there any other lightning
> implementation that supports it?
>
> Thanks and Regards
> Jozef
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Normal operation questions

2022-02-16 Thread Olaoluwa Osuntokun
Hi Benjamin,

Glad you found it helpful, always happy to help clarify stuff like this. I
hope to eventually be able to leverage some recent research [1] in this area
to improve the specification, as well as general understanding of the update
protocol.

> 1) Why would concurrent signatures generate additional messages? My
> understanding is that by the time the signatures are sent, the HTLCs are
> already locked in.

The commitment state for the type of revocation channels we use today is
_asymmetric_: we both have our own copy of the latest channel state
(though symmetric state revocation designs do exist [2][3]). When I send
an add, then a sig to you, and you revoke, then only _you_ have the HTLC
on your latest commitment. Another round is required for _me_ (the one
that proposed the new HTLC in the first place) to obtain a commitment
with this new HTLC.

When a party sends a new signature, that new signature only commits to
any _remote_ updates included _before_ my last revocation message. As an
example, let's say Alice and Bob both send new HTLCs, htlc_a and htlc_b,
then concurrently send new signatures. Alice's initial signature to Bob
_does not include_ htlc_b, only htlc_a. The opposite is true for Bob. At
the end of this initial exchange, Alice's commitment contains htlc_b and
Bob's has htlc_a.
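
Here's a toy model of that exchange (purely illustrative, and not lnd's
actual channel state machine):

    package main

    import "fmt"

    type commitment struct {
        htlcs []string
    }

    func main() {
        // Pending update logs before anyone signs: Alice proposed
        // htlc_a, Bob proposed htlc_b.
        aliceAdds := []string{"htlc_a"}
        bobAdds := []string{"htlc_b"}

        // Concurrent commit_sig exchange: each signature covers only the
        // signer's own adds, since neither side has revoked yet.
        bobCommit := commitment{htlcs: aliceAdds} // holds htlc_a
        aliceCommit := commitment{htlcs: bobAdds} // holds htlc_b
        fmt.Println("after round 1:", aliceCommit, bobCommit)

        // A second signature exchange is needed before both commitments
        // contain both HTLCs, at which point they're "locked in".
        aliceCommit.htlcs = append(aliceCommit.htlcs, "htlc_a")
        bobCommit.htlcs = append(bobCommit.htlcs, "htlc_b")
        fmt.Println("after round 2:", aliceCommit, bobCommit)
    }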

This type of interaction is mentioned in passing in the spec:

> Counter-intuitively, these updates apply to the other node's commitment
> transaction; the node only adds those updates to its own commitment
> transaction when the remote node acknowledges it has applied them via
> revoke_and_ack.

Another signature exchange is required to synchronize both commitments.
Depending on the processing order of the concurrent messages, additional
states may be created. However, stopping to synchronize commitments isn't
strictly required, as the protocol is non-blocking: as soon as the HTLC
is included in _both_ commitments (developers usually refer to this as
the HTLC being _locked in_), it's safe to forward. The spec calls out
this interaction in this fragment:

> As the two nodes' updates are independent, the two commitment transactions
> may be out of sync indefinitely. This is not concerning: what matters is
> whether both sides have irrevocably committed to a particular update or
> not (the final state, above).


> 2) Perhaps I just didn't understand your explanation, but I still don't
> get why the additional `commitment_signed` and `revoke_and_ack` messages
> are necessary. The initial pair of `commitment_signed` and
> `revoke_and_ack` messages establish a new state _conditioned_ on
> possessing the pre-image, right?

Putting it another way: that extra round is needed to _remove_ the HTLC
from _both_ commitment transactions. You're correct that since they have
the pre-image they have the option of going to chain whenever, but then
that means they need to hold onto that HTLC in the commitment transaction
"forever". Today there's a limited number of slots for HTLCs, so keeping
that extra HTLC around reduces the available throughput of a channel.

Reading the initial message I'm not sure I fully understand the
question/ambiguity, but I _think_ the above answers it? Happy to carry on so
we can sync our mental models.

-- Laolu

[1]: https://github.com/kit-dsn/payment-channel-tla
[2]: https://eprint.iacr.org/2020/476
[3]:
https://stanford2017.scalingbitcoin.org/files/Day1/SB2017_script_2_0.pdf

On Wed, Feb 16, 2022 at 1:01 PM Benjamin Weintraub <
weintrau...@northeastern.edu> wrote:

> Hi Laolu!
>
> Thanks for the helpful reply. A couple follow up questions:
>
> 1) Why would concurrent signatures generate additional messages? My
> understanding is that by the time the signatures are sent, the HTLCs are
> already locked in.
>
> 2) Perhaps I just didn't understand your explanation, but I still don't
> get why the additional `commitment_signed` and `revoke_and_ack` messages
> are necessary. The initial pair of `commitment_signed` and `revoke_and_ack`
> messages establish a new state _conditioned_ on possessing the pre-image,
> right? So after the pre-image is shared, then all parties have assurance of
> the new state and therefore _could_ go to the chain (even if they don't
> want to, because they want to keep the channel open). Since the new state
> is already guaranteed by the previous commitments and revocations, what
> purpose do the additional commitments and revocations provide?
>
>
> Thanks again!
> Ben
>
> --
> Ben Weintraub
> PhD Student
> Khoury College of Computer Sciences
> Northeastern University
> https://ben-weintraub.com/
>
> --
> *From:* Olaoluwa Osuntokun 
> *Sent:* Tuesday, February 15, 2022 18:13
> *To:* Benjamin Weintraub 
> *Cc:* Lightning-dev@lists.linuxfoundation.org <
> lig

Re: [Lightning-dev] Normal operation questions

2022-02-15 Thread Olaoluwa Osuntokun
Hi Benjamin,

> 1) Multiple sources indicate that after Alice sends the `update_add_htlc`,
> she should then send the `commitment_signed`, but why is it important that
> she sends it first (before Bob)? As far as I understand, as long as she
> doesn't revoke the old state before Bob commits to the new state, there
> shouldn't be a problem. In that case, the order wouldn't matter---they
could
> even send their commitments concurrently. So does the order matter?

You're correct that it isn't absolutely necessary that she sends a new
signature after adding a new HTLC to the pending set of HTLCs. Alice may
want to delay her signature if she has other HTLCs she wants to add to the
commitment transaction, which allows her to batch/pipeline updates to the
channel.

If Alice is forwarding that HTLC, and Bob's side of the channel has been
dormant (not making many updates), then it's in her best interest to
propose a new state immediately, as she may generate some routing fees
from a successful forward.

Concurrent signatures aren't an issue, but they will end up generating
additional state transitions before both sides have the exact same set of
locked-in HTLCs.

> 2) After Bob sends the `update_fulfill_htlc`, both he and Alice exchange
> `commitment_signed` and `revoke_and_ack` messages again. Why is this
> necessary? After Alice receives the preimage, doesn't she have enough
> information to claim her funds (with the new state)?

If Bob is sending the pre-image, then _he_ is the one that is claiming
the funds. Once Bob learns of the pre-image, he can go to chain if he
wants to in order to claim the HTLC. However, that'll be a lot slower and
also cost more in chain fees than doing an update off-chain to settle the
HTLC from the PoV of both parties' commitment transactions. Both sides
exchange those messages in order to update their commitment state _off
chain_.

Once Alice receives the pre-image (assuming a multi-hop scenario), she can
opt to not wait for the full exchange, and instead _pipeline_ the pre-image
back upstream in the route. In practice, this can reduce perceived user
latency for payments, as you can side step the 1.5 RTTs at each hop in the
route, and simply sling the pre-image all the way back to the original
sender.

-- Laolu

On Tue, Feb 15, 2022 at 7:32 AM Benjamin Weintraub <
weintrau...@northeastern.edu> wrote:

> Hi all,
>
> I have a couple questions about the Normal Operation protocol. For the
> following, consider a single-hop payment between Alice and Bob over a
> single channel.
>
> 1) Multiple sources indicate that after Alice sends the `update_add_htlc`,
> she should then send the `commitment_signed`, but why is it important that
> she sends it first (before Bob)? As far as I understand, as long as she
> doesn't revoke the old state before Bob commits to the new state, there
> shouldn't be a problem. In that case, the order wouldn't matter---they
> could even send their commitments concurrently. So does the order matter?
>
> 2) After Bob sends the `update_fulfill_htlc`, both he and Alice exchange
> `commitment_signed` and `revoke_and_ack` messages again. Why is this
> necessary? After Alice receives the preimage, doesn't she have enough
> information to claim her funds (with the new state)?
>
>
> Thanks!
> Ben
>
> --
> Ben Weintraub
> PhD Student
> Khoury College of Computer Sciences
> Northeastern University
> https://ben-weintraub.com/
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-11-02 Thread Olaoluwa Osuntokun
Oh, also there's currently this sort of placeholder logo from waaay back
that's used as the org's avatar/image. Perhaps it's time we roll an
"official" logo/avatar? Otherwise we can just switch over to the randomly
generated blocks thingy that Github uses when an account/org has no
avatar.

-- Laolu

On Tue, Nov 2, 2021 at 7:34 PM Olaoluwa Osuntokun  wrote:

> Circling back to close the loop here:
>
>   * The new Github org (https://github.com/lightning) now exists, and
>     all the major implementation maintainers have been added to the
>     organization as admins.
>
>   * A new blips repo (https://github.com/lightning/blips) has been
>     created to continue the PR that was originally started in the
>     lightning-rfc repo.
>
>   * The old lightning-rfc repo has been moved over and renamed to
>     "bolts" (https://github.com/lightning/bolts -- should it be all
>     caps?)
>
> Thanks to all that participated in the discussion (particularly in
> meatspace
> during the recent protocol dev meetup!), happy we were able to resolve
> things
> and begin the next chapter in the evolution of the Lightning protocol!
>
> -- Laolu
>
>
> On Fri, Oct 15, 2021 at 1:49 AM Fabrice Drouin 
> wrote:
>
>> On Tue, 12 Oct 2021 at 21:57, Olaoluwa Osuntokun 
>> wrote:
>> > Also note that lnd has _never_ referred to itself as the "reference"
>> > implementation.  A few years ago some other implementations adopted that
>> > title themselves, but have since adopted softer language.
>>
>> I don't remember that but if you're referring to c-lightning it was
>> the first lightning implementation, and the only one for a while, so
>> in a way it was a "reference" at the time?
>> Or it could have been a reference to their policy of "implementing the
>> spec, all the spec and nothing but the spec"?
>>
>> > I think it's worth briefly revisiting a bit of history here w.r.t the
>> > github org in question. In the beginning, the lightningnetwork github
>> > org was created by Joseph, and the lightningnetwork/paper repo was
>> > added, the manuscript that kicked off this entire thing. Later
>> > lightningnetwork/lnd was created where we started to work on an initial
>> > implementation (before the BOLTs in their current form existed), and we
>> > were added as owners. Eventually we (devs of current impls) all met up
>> > in Milan and decided to converge on a single specification, thus we
>> > added the BOLTs to the same repo, despite it being used for lnd and
>> > knowingly so.
>>
>> Yes, work on c-lightning then eclair then lnd all began a long time
>> before the BOLTs process was implemented, and we all set up repos,
>> accounts...
>> I agree that we all inherited things  from the "pre-BOLTS" era and
>> changing them will create some friction but I still believe it should
>> be done. You also mentioned potential admin rights issues on the
>> current specs repos which would be solved by moving them to a new
>> clean repo.
>>
>> > As it seems the primary grievance here is collocating an implementation
>> > of Lightning along with the _specification_ of the protocol, and given
>> > that the spec was added last, how about we move the spec to an
>> > independent repo owned by the community? I currently have
>> > github.com/lightning, and would be happy to donate it to the community,
>> > or we could create a new org like "lightning-specs" or something
>> > similar.
>>
>> Sounds great! github.com/lightning is nice (and I like Damian's idea
>> of using github.com/lightning/bolts) and seems to please everyone so
>> it looks that we have a plan!
>>
>> Fabrice
>>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] LN Summit 2021 Notes & Summary/Commentary

2021-11-02 Thread Olaoluwa Osuntokun
Hi y'all,

A few weeks ago over a dozen Lightning developers met up in Zurich for two
days to discuss a number of matters related to the current state and
evolution of the protocol. All major implementations were represented to
some degree, and we even had a number of "independent"
developers/researchers join (in person or via a video call) as well.

What follows is my best attempt at summarizing (and inserting any relevant
details I forgot to write down, along with a bit of commentary) the set of
notes I took during the sessions. I'm no Kanzure, so I wasn't able to fully
capture all the relevant conversations/topics. If you attended and felt I
missed out on a key point, or inadvertently misrepresented a statement/idea,
please feel free to reply correcting or adding additional detail.

The meeting notes in full can be found here:
https://docs.google.com/document/d/1fPyjIUNkc9W1DPyMQ81isiv1fKDW9n7b0H6sObQ-QfU/edit?usp=sharing

# Taproot x PTLC x LN

One of the first larger sessions that kicked off was dedicated to discussing
various paths/proposals to upgrading the entire network to taproot based
channels and upgrading the current hash based e2e payment mechanisms (HTLCs)
to instead be based on EC keys (PTLCs utilizing adaptor sigs, etc). Taproot
is desirable as it allows most operations (funding, co-op close, certain
sweep types, etc) to blend in with normal transactions (once uptake is high
enough). The addition of schnorr sigs into the protocol alongside taproot
significantly expands the design space of off-chain protocols as it allows
the deployment of certain types of scriptless-script based adaptor
signatures. Namely, the PTLC concept that does away with the current
hash-based HTLCs, and gives us better privacy + composability.

## Big Bang vs Iterative Deployment

Recently aj published [4] a proposal for a taproot upgrade that incorporates
several ideas into a single package, with the goal of doing a single large
update to upgrade the network in one swoop. The proposal includes: an
upgrade to taproot, revising all the scripts we use to be more taprooty and
scriptless scripty, a brand new revocation mechanism for punishment based
channels, an instantiation of PTLCs, a new commitment state update machine
(as it uses symmetric state like eltoo), a new layered commitment structure
meant to alleviate a trade-off of symmetric state/eltoo related to CLTV+CSV
dependencies, and ultimately eltoo itself as well.

A core matter discussed was if we should try to do things in a sort of "big
bang" manner (get everything all at once), or try to roll things out more
iteratively.

Pros for the big bang roll out are that we get things all at once, and
wouldn't need to "throw away" any intermediate steps as future ones are
rolled out. Cons are that it would be, well, a rather big update to the
entire network, and the core code of all implementations. This would
potentially be difficult to get right all at once, and could take a ton of
time to properly review, implement, and finally deploy.

An alternative considered was a more iterative approach. Pros for the
iterative approach would be that we'd be able to get some easy wins early on
(mu-sig based funding outputs as an example) for elements with a smaller
design space. We'd also be able to review the components one at a time,
while making further background progress on the more amorphous aspects of a
greater proposal. Cons of the iterative approach are that it would
_potentially_ take longer than a big bang approach (from start to "finish",
finish being everything we ever wanted). We'd also potentially need to
"abandon" components as newer ones are proposed to take their place,
possibly creating technical/code debt. A system for dynamically updating the
channel params/format would make this process more streamlined, as with the
new channel_type funding feature, we'd be able to incrementally upgrade a
channel (to a degree).

### A Potential Iterative Roadmap

The question of what an iterative roadmap would even look like was brought
up. A lofty proposal was the following ordering (not binding at all fwiw);
note that some of these items can be carried out concurrently:

  1. Mu-Sig

* The root component everything else relies on. We'd first all implement
  Mu-Sig 2 (more on that below) and port the current 2-of-2 multi-sig
  output into an aggregate-key schnorr 2-of-2 output. Along the way
  we'd do a naive port of the current script set into the tapscript
  tree. This would mean just porting the scripts as is, with minimal
  modifications. The main thing we'd need to do is port over all the CMS
  (OP_CHECKMULTISIG) instances to use CSA (OP_CHECKSIGADD) instead.

  Note that we'd also likely need to change/modify the current
  sig/revocation dance to account for proper exchange of nonces, partial
  signatures, etc. Much of this still needs to be 

Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-10-12 Thread Olaoluwa Osuntokun
Hi Fabrice,

> I believe that was a mistake: a few days ago, Arcane Research published a
> fairly detailed report on the state of the Lightning Network:
> https://twitter.com/ArcaneResearch/status/1445442967582302213.  They
> obviously did some real work there, and seem to imply that their report
> was vetted by Open Node and Lightning Labs.

Appreciate the hard work from Arcane on putting together this report. That
said, our role wasn't to review the entire report, but instead to provide
feedback on questions they had. Had we reviewed the section in question, we
would have spotted those errors and told the authors to fix them. Mistakes
happen, and we're glad it got corrected.

Also note that lnd has _never_ referred to itself as the "reference"
implementation.  A few years ago some other implementations adopted that
title themselves, but have since adopted softer language.

> So I'm proposing that lnd's source code be removed from
> https://github.com/lightningnetwork/ (and moved to
> https://github.com/lightninglabs for example, with the rest of their
> Lightning tools, but it's up to Lightning Labs).

I think it's worth briefly revisiting a bit of history here w.r.t the github
org in question. In the beginning, the lightningnetwork github org was
created by Joseph, and the lightningnetwork/paper repo was added, the
manuscript that kicked off this entire thing. Later lightningnetwork/lnd was
created where we started to work on an initial implementation (before the
BOLTs in their current form existed), and we were added as owners.
Eventually we (devs of current impls) all met up in Milan and decided to
converge on a single specification, thus we added the BOLTs to the same
repo, despite it being used for lnd and knowingly so.

We purposefully made a _new_ lightninglabs github org as we wanted to keep
lnd, the implementation distinct from any of our future commercial
products/services. To this day, we've architected all our paid products to
be built _on top_ of lnd, rather than within it. As a result, users always
opt into these services.

As it seems the primary grievance here is collocating an implementation of
Lightning along with the _specification_ of the protocol, and given that the
spec was added last, how about we move the spec to an independent repo owned
by the community? I currently have github.com/lightning, and would be happy
to donate it to the community, or we could create a new org like
"lightning-specs" or something similar. We could then move the spec (the
BOLTs and also potentially the bLIPs since some devs want it to be within
its own repo) there, and have it be the home for any other
community-backed/owned projects.  I think the creation of a new github
organization would also be a good opportunity to further formalize the set
of stakeholders and the general process related to the evolution of
Lightning the protocol.

Thoughts?

-- Laolu

On Fri, Oct 8, 2021 at 5:25 PM Fabrice Drouin 
wrote:

> Hello,
>
> When you navigate to https://github.com/lightningnetwork/ you find
> - the Lightning Network white paper
> - the Lightning Network specifications
> - and ... the source code for lnd!
>
> This has been an anomaly for years, which has created some confusion
> between Lightning the open-source protocol and Lightning Labs, one of
> the companies specifying and implementing this protocol, but we didn't
> do anything about it.
>
> I believe that was a mistake: a few days ago, Arcane Research
> published a fairly detailed report on the state of the Lightning
> Network: https://twitter.com/ArcaneResearch/status/1445442967582302213.
> They obviously did some real work there, and seem to imply that their
> report was vetted by Open Node and Lightning Labs.
>
> Yet in the first version that they published you’ll find this:
>
> "Lightning Labs, founded in 2016, has developed the reference client
> for the Lightning Network called Lightning Network Daemon (LND)
> They also maintain the network standards documents (BOLTs)
> repository."
>
> They changed it because we told them that it was wrong, but the fact
> that in 2021 people who took time to do proper research, interviews,
> ... can still misunderstand that badly how the Lightning developers
> community works means that we ourselves badly underestimated how
> confusing mixing the open-source specs for Lightning and the source
> code for one of its implementations can be.
>
> To be clear, I'm not blaming Arcane Research that much for thinking
> that an implementation of an open-source protocol that is hosted with
> the white paper and specs for that protocol is a "reference"
> implementation, and thinking that since Lightning Labs maintains lnd
> then they probably maintain the other stuff too. The problem is how
> that information is published.
>
> So I'm proposing that lnd's source code be removed from
> https://github.com/lightningnetwork/ (and moved to
> https://github.com/lightninglabs for example, with the rest of their
> Lightning 

Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-22 Thread Olaoluwa Osuntokun
Hi Joost,

> The conventional approach is to create a lightning invoice on a node and
> store the invoice together with order details in a database. If the order
> then goes unfulfilled, cleaning processes remove the data from the node
> and database again.

> The problem with this setup is that it needs protection against unbounded
> generation of payment requests. There are solutions for that such as rate
> limiting, but wouldn't it be nice if invoices can be generated without the
> need to keep any state at all?

Isn't this ultimately an engineering issue? How much exactly is "too much"
in this case? Invoices are relatively small, and also don't even necessarily
need to be ever written to disk assuming a slim expiration window. It's
likely the case that a service can just throw everything in Redis and call
it a day. In terms of rate limiting a service would likely already need to
implement that on the API/service level to mitigate app level DoS attacks.

As far as pre-images go, this can already be "stateless" by generating a
single random seed (storing that somewhere w/ a counter likely) and then
using shachain or elkrem to deterministically generate payment hashes. You
can then either use the payment_addr/secret to index into the hash chain, or
have the user send some counter extracted from the invoice as a custom
record. Similar schemes have been proposed in the past to support "offline"
vending machine payments.
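
As a toy illustration of that stateless flow (using a flat counter with
HMAC-SHA256 as the PRF instead of shachain/elkrem proper, so treat the
exact construction here as an assumption made for illustration):

```
import hashlib
import hmac

NODE_SEED = hashlib.sha256(b"node secret seed").digest()  # the only stored state

def payment_preimage(counter: int) -> bytes:
    # Derive the pre-image for invoice number `counter` from the single
    # seed. Shachain/elkrem have more structure (log-sized storage of
    # revealed elements), but the stateless idea is the same.
    return hmac.new(NODE_SEED, counter.to_bytes(8, "big"), hashlib.sha256).digest()

def payment_hash(counter: int) -> bytes:
    return hashlib.sha256(payment_preimage(counter)).digest()

# Invoice generation: embed the counter in the invoice (payment_addr or a
# custom record), hand out payment_hash(counter), and store nothing else.
inv_hash = payment_hash(42)

# Settlement: the counter comes back with the HTLC, so the pre-image can
# be re-derived on the fly and checked against the offered payment hash.
assert hashlib.sha256(payment_preimage(42)).digest() == inv_hash
```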

Taking it one step further, the service could maintain a unique
elkrem/shachain state for each unique user, allowing them to also collapse
the pre-image into the hash chain. This lets them save space and reproduce a
given "proof that someone in the world paid" statement dynamically (a
statement that no service/wallet seems to accept/generate in an
automated/standardized manner today).

-- Laolu


On Tue, Sep 21, 2021 at 3:08 AM Joost Jager  wrote:

> Problem
>
> One of the qualities of lightning is that it can provide light-weight,
> no-login payments with minimal friction. Games, paywalls, podcasts, etc can
> immediately present a QR code that is ready for scan and pay.
>
> Optimistically presenting payment requests does lead to many of those
> payment requests going unused. A user visits a news site and decides not to
> buy the article. The conventional approach is to create a lightning invoice
> on a node and store the invoice together with order details in a database.
> If the order then goes unfulfilled, cleaning processes remove the data from
> the node and database again.
>
> The problem with this setup is that it needs protection against unbounded
> generation of payment requests. There are solutions for that such as rate
> limiting, but wouldn't it be nice if invoices can be generated without the
> need to keep any state at all?
>
> Stateless invoices
>
> What would happen if a lightning invoice is only generated and stored
> nowhere on the recipient side? To the user, it won't make a difference.
> They would still scan and pay the invoice. When the payment arrives at the
> recipient though, two problems arise:
>
> 1. Recipient doesn't know whom or what the payment is for.
>
> This can be solved by attaching additional custom tlv records to the htlc.
> On the wire, this is all arranged for. The only missing piece is the
> ability to specify additional data for that custom tlv record in a bolt11
> invoice. One way would be to define a new tagged field for this in which
> the recipient can encode the order details.
>
> An alternative is to use the existing invoice description field and simply
> always pass that along with the htlc as a custom tlv record.
>
> A second alternative that already works today is to use part (for example
> 16 out of 32 bytes) of the payment_secret (aka payment address) to encode
> the order details in. This assumes that the secret is still secret enough
> with reduced entropy. Also there may not be enough space for every
> application.
>
> 2. Recipient doesn't know the preimage that is needed to settle the
> htlc(s).
>
> One option is to use a keysend payment or AMP payment. In that case, the
> sender includes the preimage with the htlc. Unfortunately this doesn't
> provide the sender with a proof of payment that they'd get with a regular
> lightning payment.
>
> An alternative solution is to use a deterministic preimage based on a
> (recipient node key-derived) secret, the payment secret and other relevant
> properties. This allows the recipient to derive the same preimage twice:
> Once when the lightning invoice is generated and again when a payment
> arrives.
>
> It could be something like this:
>
> payment_secret = random
> preimage = H(node_secret | payment_secret | payment_amount |
> encoded_order_details)
> invoice_hash = H(preimage)
>
> The sender sends an htlc locked to invoice_hash for payment_amount and
> passes along payment_secret and encoded_order_details in a custom tlv
> record.
>
> When the recipient receives the htlc, they 

Re: [Lightning-dev] Opening balanced channels using PSBT

2021-09-22 Thread Olaoluwa Osuntokun
Hi Ole,

It's generally known that one can use out of band transaction construction,
and the push_amt feature in the base funding protocol to simulate dual
funded channels.

The popular 'balanceofsatoshis' tool has a command that packages up the
interaction (`open-balanced-channel`) into an easier to use format; IIRC it
uses keysend to ask a peer if they'll accept one and negotiate some of the
params.

The one thing you need to be mindful of when doing this manually is that by
default lnd will only lock the UTXOs allocated for the funding attempt for a
few minutes. As a result, you need to make sure the process is finalized
during that interval or the UTXOs will be unlocked and you risk accidentally
double spending yourself.

Lightning Pool also uses this little trick to allow users to purchase
channels that are 50/50 balanced, and also purchase channel leases _for_ a
third party (called sidecar channels) to help on board them onto Lightning:
https://lightning.engineering/posts/2021-05-26-sidecar-channels/. Compared
to the above approaches, the process can be automatically batched w/ other
channels created in that epoch, and uses traits of the Pool account system
to make things atomic.

Ultimately, the balanced-ness of a channel is a transitory state (for
routing nodes; it's great for on-boarding end-users), and opening channels
like these only serves to allow the channel to _start_ in that state. If
your fees and channel policies aren't set accordingly, then it's possible
that a normal payment or balance flow shifts the channel away from
equilibrium shortly after the channel is open.

-- Laolu

On Tue, Sep 21, 2021 at 10:30 PM Ole Henrik Skogstrøm <
oleskogst...@gmail.com> wrote:

> Hi
>
> I have found a way of opening balanced channels using LND's psbt option
> when opening channels. What I'm doing is essentially just joining funded
> PSBTs before signing and submitting them. This makes it possible to open a
> balanced channel between two nodes or open a ring of balanced channels
> between multiple nodes (ROF).
>
> I found this interesting, however I don't know if this is somehow unsafe
> or for some other reason a bad idea. If not, then it could be an
> interesting alternative to only being able to open unbalanced channels.
>
> To do this efficiently, nodes need to collaborate by sending PSBTs back
> and forth to each other and doing this manually is a pain, so if this makes
> sense to do, it would be best to automate it through a client.
>
> --
> --- Here is an example of the complete flow for a single channel:
> --
>
> * Node A generates a new address and sends it to Node B (lncli
> newaddress p2wkh)
>
> * Node A starts an interactive channel open to Node B using psbt
> (lncli openchannel --psbt  200 100)
>
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B funds the refund transaction to Node A and sends the PSBT back to
> Node A (bitcoin-cli walletcreatefundedpsbt []
> '[{"":0.01}]')
>
> * Node A joins the two PSBTs and sends the result back to Node B
> (bitcoin-cli joinpsbts '["", ""]')
>
> * Node B verifies the content and signs the joined PSBT before sending it
> back to Node A (bitcoin-cli walletprocesspsbt )
>
> * Node A verifies the content and signs the joined PSBT (bitcoin-cli
> walletprocesspsbt )
>
> * Node A completes the channel open by publishing the fully signed PSBT
>
>
> --
> --- Here is an example of the complete flow for a ring of channels between
> multiple nodes:
> --
>
> * Node A starts an interactive channel open to Node B using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B starts an interactive channel open to Node C using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node B funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node C starts an interactive channel open to Node A using psbt (lncli
> openchannel --psbt  200 100)
> * Node C funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B and Node C send Node A their PSBTs
>
> * Node A joins all the PSBTs (bitcoin-cli joinpsbts
> '["", "",
> ""]')
>
> Using (bitcoin-cli walletprocesspsbt ):
>
> * Node A verifies and signs the PSBT and sends it to Node B (1/3
> signatures)
> * Node B verifies and signs the PSBT and sends it to Node C (2/3
> signatures)
> * Node C verifies and signs the PSBT (3/3 signatures) before sending it
> to Node A and B
>
> * Node A completes the channel open (no_publish)
> * Node B completes the channel open (no_publish)
> * Node C completes the channel open and publishes the transaction
>
> --
> Ole Henrik Skogstrøm
>

Re: [Lightning-dev] Dropping Tor v2 onion services from node_announcement

2021-09-22 Thread Olaoluwa Osuntokun
Earlier this week I was helping a user debug a Tor related issue, and
realized (from the logs) that some newer Tor clients are already refusing to
connect out to v2 onion services.

On the lnd side, I think we'll move to disallow users from creating a v2
onion service in our next major release (0.14), and also possibly "upgrade"
them to a v3 onion service if their node supports it. I've made a tracking
issue
here: https://github.com/lightningnetwork/lnd/issues/5771

I ran a naive script to gauge how much of the network is using Tor
generally, and v2 onion services specifically, extracting the following
stats:
```
num nodes:  12844
num tor:  8793
num v2:  66
num v3:  8777
```

This counts advertised addresses in total, so it likely overestimates; you
can treat this as an upper bound. Thankfully only 60 or so v2 addresses seem
to be even _advertised_ on the network, so I don't think this'll cause much
disruption.
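
The script itself isn't included here, but a rough equivalent over `lncli
describegraph` output might look like the following (the JSON field names
are based on lnd's output as I recall it, so treat the details as
assumptions):

```
import json
import subprocess

# Pull the current graph from a local lnd node.
graph = json.loads(subprocess.run(
    ["lncli", "describegraph"], capture_output=True, check=True).stdout)

num_nodes, num_tor, num_v2, num_v3 = 0, 0, 0, 0
for node in graph["nodes"]:
    num_nodes += 1
    for addr in node.get("addresses", []):
        host = addr["addr"].rsplit(":", 1)[0]
        if host.endswith(".onion"):
            num_tor += 1
            # v2 onion hostnames are 16 base32 chars, v3 are 56.
            label = host[:-len(".onion")]
            if len(label) == 16:
                num_v2 += 1
            elif len(label) == 56:
                num_v3 += 1

print("num nodes: ", num_nodes)
print("num tor: ", num_tor)
print("num v2: ", num_v2)
print("num v3: ", num_v3)
```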

Another interesting tidbit here is that: _over half_ of all advertised
addresses on the network today are onion services. I wonder how the rise of
onion service usage (many nodes being tor-only) has affected: e2e payment
latency, general connection stability, and gossip announcement propagation.

In terms of actions we need to take at the spec level, it's likely enough to
amend the section on addrs in the node_announcement message to advise
implementations to _ignore_ the v2 addr type.

-- Laolu

On Tue, Jun 1, 2021 at 3:19 PM darosior via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> It's been almost 9 months since Tor v2 hidden services have been
> deprecated.
> The Tor project will drop v2 support in about a month in the latest
> release. It will then be entirely dropped from all supported releases by
> October.
> More at https://blog.torproject.org/v2-deprecation-timeline .
>
> Bitcoin Core defaults to v3 since 0.21.0 (
> https://bitcoincore.org/en/releases/0.21.0/) and is planning to drop the
> v2 support for 0.22 (https://github.com/bitcoin/bitcoin/pull/22050),
> which means that v2 onions will gradually stop being gossiped on the
> Bitcoin network.
>
> I think we should do the same for the Lightning network, and the timeline
> is rather tight. Also, the configuration is user-facing (as opposed to
> Bitcoin Core, which generates ephemeral services) which i expect to make
> the transition trickier.
> C-lightning is deprecating the configuration of Tor v2 services starting
> next release, according to our deprecation policy we should be able to
> entirely drop its support 3 releases after this one, which should be not so
> far from the October deadline.
>
> Opinions? What is the state of other implementations with regard to Tor v2
> support?
>
> Antoine
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] #zerobasefee

2021-08-16 Thread Olaoluwa Osuntokun
Matt wrote:
> I'm frankly still very confused why we're having these conversations now

1000% this!!

This entire conversation strikes me as extremely premature and backwards
tbh.  Someone experimenting with a new approach shouldn't prompt us to
immediately modify the protocol to better "fit" the approach, particularly
before any sort of comparative analysis has even been published. At this
point, to my knowledge we don't even have an independent implementation of
the algorithm that has been tightly integrated into an existing LN
implementation. We don't know in which conditions the algorithm excels, and
in which conditions this improvement is maybe negligible (likely when payAmt
<< chanCapacity).

I think part of the difficulty here lies in the current lack of a robust
framework to use in comparing the efficacy of different approaches. Even in
this domain, there're a number of end traits to optimize for, including:
path finding length, total CLTV delay across all shards, the amount of
resulting splits (the goal being to consume less commitment space), attempt
iteration
latency, amount/path randomization, path finding memory, etc, etc.

This also isn't the first time someone has attempted to adapt typical
flow-based algorithms to path finding in the LN setting. T-bast from ACINQ
initially attempted to adapt a greedy flow-based algorithm [1], but found
that a number of implementation-related edge cases (he cites the min+max
constraints, in addition to the fee limit most implementations enforce, as
barriers to adapting the algorithm) led him to go with a simpler approach
to then iterate off of. I'd be curious to hear from T-bast w.r.t how this
new approach differs from his initial approach, and if he spots any
yet-to-be-recognized implementation level complexities to properly
integrating flow based algorithms into path finding.

> a) to my knowledge, no one has (yet) done any follow-on work to
> investigate pulling many of the same heuristics Rene et al use in a
> Dijkstras/A* algorithm with multiple passes or generating multiple routes
> in the same pass to see whether you can emulate the results in a faster
> algorithm without the drawbacks here,

lnd's current approach (very far from perfect, derived via satisficing)
has some similarities to the flow-based approach in its use of probabilities
to attempt to quantify the level of uncertainty of internal network channel
balances.

We start by assuming a config-level a priori probability of any given route
working. We then take that, along with the fee to route across a given
link, and convert the two values into a scalar "distance/weight" (mapping
to an expected cost) we can plug into vanilla Dijkstra's [2]. A fresh node
uses this value to compare routes instead of the typical hop count distance
metric. With a cold cache this doesn't really do much, but then we'll update
all the a priori probabilities with observations we gain in the wild.
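
As a rough sketch of that expected-cost conversion (illustrative only: this
isn't lnd's exact formula, and the attempt-cost knob is a made-up
parameter):

```
# Fold routing fee and estimated success probability into one scalar
# "distance" that a vanilla Dijkstra's can minimize.

ATTEMPT_COST_MSAT = 10_000  # virtual cost of a failed attempt (made up)
MIN_PROBABILITY = 0.01

def edge_distance(fee_msat: int, success_probability: float) -> float:
    if success_probability < MIN_PROBABILITY:
        return float("inf")  # prune near-hopeless edges outright
    # Low-probability edges are penalized by amortizing the virtual
    # attempt cost over the chance of success, so a cheap but unreliable
    # edge can lose to a pricier, more reliable one.
    return fee_msat + ATTEMPT_COST_MSAT / success_probability

print(edge_distance(fee_msat=100, success_probability=0.2))    # 50100.0
print(edge_distance(fee_msat=1000, success_probability=0.95))  # ~11526.3
```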

If a node is able to forward an HTLC to the next hop, we boost their
probability (conditional on the amount forwarded/failed, so there's a
bayesian aspect). Each failure results in the probabilities of nodes being
affected differently (temp chan failure vs no next node, etc). For example,
if we're able to route through the first 3 hops of the route, but the final
hop fails with a temp chan failure, we'll reward all the prior nodes with a
success probability amount (default rn is 95%) that applies when the amount
being carried is < that prior attempt.

As we assume balances are always changing, we then apply a half life decay
that slowly increases a penalized probability back to the baseline. The
resulting algorithm starts with no information/memory, but then gains
information with each attempt (increasing and decreasing probabilities as a
function of the amount attempted and the time that has passed since the
last attempt). The APIs also let you mark certain nodes as having a higher
a priori probability, which can reduce the amount of bogus path exploration.
This API can be used "at scale" to create a sort of active learning system
that learns from the attempts of a fleet of nodes, wallets, trampoline
nodes, etc (some privacy caveats apply, though there're ways to fuzz things
a bit, differential-privacy style).
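
A minimal sketch of the decay behavior, with made-up constants (the real
logic also conditions on amounts and per-pair observations):

```
import math
import time

APRIORI_P = 0.6             # illustrative a priori hop success probability
HALF_LIFE_SECONDS = 3600.0  # illustrative decay constant

def hop_probability(last_fail_time, now):
    # Right after a failure the hop's probability drops to zero, then
    # decays back toward the a priori baseline with a half life: old
    # observations go stale as channel balances drift.
    if last_fail_time is None:
        return APRIORI_P
    age = now - last_fail_time
    penalty = math.exp(-age * math.log(2) / HALF_LIFE_SECONDS)
    return APRIORI_P * (1.0 - penalty)

now = time.time()
print(hop_probability(None, now))        # never failed: 0.6
print(hop_probability(now, now))         # just failed: 0.0
print(hop_probability(now - 3600, now))  # one half life later: 0.3
```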

Other knobs exist such as the min probability setting, which controls how
high a success probability a candidate edge needs to have before it is
explored. If the algo is exploring too many lackluster paths (and there're a
lot of these on mainnet due to normal balance imbalance), this value can be
increased which will let it shed a large number of edges to be explored.
When comparing this to the discussed approach that doesn't use any prior
memory, there may be certain cases that allow this algorithm to "jump" to
the correct route and skip all the pre-processing and optimization that
may result in the same route, just with added latency before the initial
attempt. I'm skipping some other details like how we handle repeated
failures 

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Olaoluwa Osuntokun
ferent concerns.
>
> bLIPs/SPARKS/BIPs clearly address the second point, which is good.
> But they don't address the first point at all, they instead work around it.
> To be fair, I don't think we can completely address that first point:
> properly
> reviewing spec proposals takes a lot of effort and accepting complex
> changes
> to the BOLTs shouldn't be done lightly.
>
> I am mostly in favor of this solution, but I want to highlight that it
> isn't
> only rainbows and unicorns: it will add fragmentation to the network, it
> will
> add maintenance costs and backwards-compatibility issues, many bLIPs will
> be
> sub-optimal solutions to the problem they try to solve and some bLIPs will
> be
> simply insecure and may put users' funds at risk (L2 protocols are hard
> and have
> subtle issues that can be easily missed). On the other hand, it allows for
> real
> world experimentation and iteration, and it's easier to amend a bLIP than
> the
> BOLTs.
>
> On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
> bazaar
> style of evolution. Most of them will need:
>
> - to assign feature bit(s)
> - to insert new tlv fields in existing messages
> - to create new messages
>
> We can't have collisions on any of these three things. bLIP XXX cannot use
> the
> same tlv types as bLIP YYY otherwise we're creating network
> incompatibilities.
> So they really need to be centralized, and we need a process to assign
> these
> and ensure they don't collide. It's not a hard problem, but we need to be
> clear
> about the process around those.
>
> Regarding the details of where they live, I don't have a strong opinion,
> but I
> think they must be easy to find and browse, and I think it's easier for
> readers
> if they're inside the spec repository. We already have PRs that use a
> dedicated
> "proposals" folder (e.g. [1], [2]).
>
> Cheers,
> Bastien
>
> [1] https://github.com/lightningnetwork/lightning-rfc/pull/829
> [2] https://github.com/lightningnetwork/lightning-rfc/pull/854
>
> On Thu, Jul 1, 2021 at 02:31, Ariel Luaces  wrote:
>
>> BIPs are already the Bazaar style of evolution that simultaneously
>> allows flexibility and coordination/interoperability (since anyone can
>> create a BIP and they create an environment of discussion).
>>
>> BOLTs are essentially one big BIP in the sense that they started as a
>> place for discussion but are now more rigid. BOLTs must be followed
>> strictly to ensure a node is interoperable with the network. And BOLTs
>> should be rigid, as rigid as any widely used BIP like 32 for example.
>> Even though BOLTs were flexible when being drafted their purpose has
>> changed from descriptive to prescriptive.
>> Any alternatives, or optional features should be extracted out of
>> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
>> required flags set, messages sent, or alterations made to signal that
>> the BIP's feature is enabled.
>>
>> A BOLT may at some point organically change to reference a BIP. For
>> example if a BIP was drafted as an optional feature but then becomes
>> more widespread and then turns out to be crucial for the proper
>> operation of the network then a BOLT can be changed to just reference
>> the BIP as mandatory. There isn't anything wrong with this.
>>
>> All of the above would work exactly the same if there was a bLIP
>> repository instead. I don't see the value in having both bLIPs and
>> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
>> not restricted to exclude lightning, and never have been.
>>
>> I believe the reason this move to BIPs hasn't happened organically is
>> because many still perceive the BOLTs available for editing, so
>> changes continue to be made. If instead BOLTs were perceived as more
>> "consensus critical", not subject to change, and more people were
>> strongly encouraged to write specs for new lightning features
>> elsewhere (like the BIP repo) then you would see this issue of growing
>> BOLTs resolved.
>>
>> Cheers
>> Ariel Lorenzo-Luaces
>>
>> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
>> wrote:
>> >
>> > > That being said I think all the points that are addressed in Ryan's
>> mail
>> > > could very well be formalized into BOLTs but maybe we just need to
>> rethink
>> > > the current process of the BOLTs to make it more accessible for new
>> ideas
>> > > to find their way into the BOLTs?
>> >
>> > I think part of what bLIPs are trying to solve he

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Olaoluwa Osuntokun
> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
create a
> BIP and they create an environment of discussion).

The answer to why not BIPs here applies to BOLTs as well, as bLIPs are
intended to effectively be nested under the BOLT umbrella (same repo, etc).
It's also the case that any document can be mirrored as a BIP; this has
been suggested before, but the BIP editors have decided not to do so.

bLIPs have a slightly different process than BIPs, as well as a different
set of editors/maintainers (more widely distributed). As we saw with the
Speedy Trial saga (fingers crossed), the sole (?) maintainer of the BIP
process was able to effectively stonewall the progression of an author
document, with no sound technical objection (they had a competing proposal
that could've been a distinct document). bLIPs sidestep shenanigans like
this by having the primary maintainers/editors be more widely distributed
and closer to the target domain (LN).

The other thing bLIPs do is do away with the whole "a human picks the
document number" and "don't assign your own number, you must wait" dance.
Borrowing
from EIPs, the number of a document is simply the number of the PR that
proposes the document. This reduces friction, and eliminates a possible
bikeshedding vector.

-- Laolu


On Wed, Jun 30, 2021 at 5:31 PM Ariel Luaces  wrote:

> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
> create a BIP and they create an environment of discussion).
>
> BOLTs are essentially one big BIP in the sense that they started as a
> place for discussion but are now more rigid. BOLTs must be followed
> strictly to ensure a node is interoperable with the network. And BOLTs
> should be rigid, as rigid as any widely used BIP like 32 for example.
> Even though BOLTs were flexible when being drafted their purpose has
> changed from descriptive to prescriptive.
> Any alternatives, or optional features should be extracted out of
> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
> required flags set, messages sent, or alterations made to signal that
> the BIP's feature is enabled.
>
> A BOLT may at some point organically change to reference a BIP. For
> example if a BIP was drafted as an optional feature but then becomes
> more widespread and then turns out to be crucial for the proper
> operation of the network then a BOLT can be changed to just reference
> the BIP as mandatory. There isn't anything wrong with this.
>
> All of the above would work exactly the same if there was a bLIP
> repository instead. I don't see the value in having both bLIPs and
> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
> not restricted to exclude lightning, and never have been.
>
> I believe the reason this move to BIPs hasn't happened organically is
> because many still perceive the BOLTs available for editing, so
> changes continue to be made. If instead BOLTs were perceived as more
> "consensus critical", not subject to change, and more people were
> strongly encouraged to write specs for new lightning features
> elsewhere (like the BIP repo) then you would see this issue of growing
> BOLTs resolved.
>
> Cheers
> Ariel Lorenzo-Luaces
>
> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
> wrote:
> >
> > > That being said I think all the points that are addressed in Ryan's
> mail
> > > could very well be formalized into BOLTs but maybe we just need to
> rethink
> > > the current process of the BOLTs to make it more accessible for new
> ideas
> > > to find their way into the BOLTs?
> >
> > I think part of what bLIPs are trying to solve here is promoting more
> loosely
> > coupled evolution of the network. I think the BOLTs do a good job
> currently of
> > specifying what _base_ functionality is required for a routing node in a
> > prescriptive manner (you must forward an HTLC like this, etc). However
> there's
> > a rather large gap in describing functionality that has emerged over
> time due
> > to progressive evolution, and aren't absolutely necessary, but enhance
> > node/wallet operation.
> >
> > Examples of  include things like: path finding heuristics (BOLTs just
> say you
> > should get from Alice to Bob, but provides no recommendations w.r.t
> _how_ to do
> > so), fee bumping heuristics, breach retribution handling, channel
> management,
> > rebalancing, custom records usage (like the podcast index meta-data,
> messaging,
> > etc), JIT channel opening, hosted channels, randomized channel IDs, fee
> > optimiz

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-06-30 Thread Olaoluwa Osuntokun
> That being said I think all the points that are addressed in Ryan's mail
> could very well be formalized into BOLTs but maybe we just need to rethink
> the current process of the BOLTs to make it more accessible for new ideas
> to find their way into the BOLTs?

I think part of what bLIPs are trying to solve here is promoting more
loosely coupled evolution of the network. I think the BOLTs do a good job
currently of specifying what _base_ functionality is required for a routing
node in a prescriptive manner (you must forward an HTLC like this, etc).
However there's a rather large gap in describing functionality that has
emerged over time due to progressive evolution, and isn't absolutely
necessary, but enhances node/wallet operation.

Examples of these include things like: path finding heuristics (BOLTs just
say you should get from Alice to Bob, but provide no recommendations w.r.t
_how_ to do so), fee bumping heuristics, breach retribution handling,
channel management, rebalancing, custom records usage (like the podcast
index meta-data, messaging, etc), JIT channel opening, hosted channels,
randomized channel IDs, fee optimization, initial channel bootstrapping,
etc.

All these examples are effectively optional as they aren't required for base
node operation, but they've organically evolved over time as node
implementations and wallets seek to solve UX and operational problems for
their users. bLIPs can be a _descriptive_ (this is how things can be done)
home for these types of standards, while BOLTs can be reserved for
_prescriptive_ measures (an HTLC looks like this, etc).

The protocol as implemented today has a number of extensions (TLVs, message
types, feature bits, etc) that allow implementations to spin out their own
sub-protocols, many of which won't be considered absolutely necessary for
node operation. IMO we should embrace more of a "bazaar" style of
evolution, and acknowledge that loosely coupled evolution allows
participants to more broadly explore the design space, without the
constraints of "it isn't a thing until N of us start to do it".

Historically, BOLTs have also had a rather monolithic structure. We've used
the same 11 or so documents for the past few years with the size of the
documents swelling over time with new exceptions, features, requirements,
etc. If you were hired to work on a new codebase and saw that everything is
defined in 11 "functions" that have been growing linearly over time, you'd
probably declare the codebase as being unmaintainable. By having distinct
documents for proposals/standards, bLIPs (author documents really), each
new standard/proposal is able to be more effectively explained, motivated,
versioned, etc.

-- Laolu


On Wed, Jun 30, 2021 at 7:35 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> just for reference when I was new here (and did not understand the
> processes well enough) I proposed a similar idea (called LIP) in 2018 c.f.:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-July/001367.html
>
>
> I wonder what exactly has changed in the reasoning by roasbeef which I
> will repeat here:
>
> > We already have the equiv of improvement proposals: BOLTs. Historically
> > new standardization documents are proposed initially as issues or PR's
> > when ultimately accepted. Why do we need another repo?
>
>
> As far as I can tell there was always some form of (invisible?) barrier to
> participate in the BOLTs but there are also new BOLTs being offered:
> * BOLT 12: https://github.com/lightningnetwork/lightning-rfc/pull/798
> * BOLT 14: https://github.com/lightningnetwork/lightning-rfc/pull/780
> and topics to be included like:
> * dual funding
> * splicing
> * the examples given by Ryan
>
> I don't see how a new repo would reduce that barrier - Actually I think it
> would even create more confusion as I for example would not know where
> something belongs. That being said I think all the points that are
> addressed in Ryan's mail could very well be formalized into BOLTs but maybe
> we just need to rethink the current process of the BOLTs to make it more
> accessible for new ideas to find their way into the BOLTs? One thing that I
> can say from answering lightning-network questions on stackexchange is that
> it would certainly help if the BOLTs were referenced on the
> lightning.network web page and in the whitepaper as the place to be if one
> wants to learn
> about the Lightning Network
>
> with kind regards Rene
>
> On Wed, Jun 30, 2021 at 4:10 PM Ryan Gentry via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Hi all,
>>
>> The recent thread around zero-conf channels [1] provides an opportunity
>> to discuss how the BOLT process handles features and best practices that
>> arise in the wild vs. originating within the process itself. Zero-conf
>> channels are one of many LN innovations on the app layer that have
>> struggled to make their way into the 

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-22 Thread Olaoluwa Osuntokun
> I think the problem of accidental channel closure is getting ignored by
> devs.
>
> If we think any anti-DoS fee will be "insignificant" compared to the cost
> of closing and reopening a channel, maybe dev attention should be on
> fixing accidental channel closure costs than any anti-DoS fee mechanism.
>
> Any deterrence of the channel jamming problem is economic so if the
> anti-DoS fee is tiny, then its deterrence will be tiny as well.

This struck me as an extremely salient point. One thing that has been
noticeably missing from these discussions is any sort of threat model or
attacker profile. Given this is primarily a griefing attack, and the
attacker doesn't
stand any direct gain, how high a fee is considered "adequate" deterrence
without also dramatically increasing the cost of node operation in the
average case?

If an attacker has say a budget of 20 BTC to blow as they just want to see
the world burn, then most parametrizations of attempt fees are likely
insufficient. In addition, if the HTLC attempt/hold fees rise well above
routing fees, then costs are also increased for senders in addition to
routing nodes.

Also IMO, it's important to re-state that if channels are parametrized
properly (dust values, max/min HTLC, private channels, micropayment specific
channels, etc), then there is an inherent existing cost re the opportunity
cost of committing funds in channels and the chain fee cost of making the
series of channels in the first place.

Based on the discussion above, it appears that the decaying fee idea needs
closer examination to ensure it doesn't increase the day to day operational
cost of a routing node in order to defend against threats at the edges.
Nodes go down all the time for various reasons: need to allocate more disk,
software upgrade, infrastructure migrations, power outages, etc, etc. By
adding a steady decay cost, we introduce an idle penalty for lack of uptime
when holding an HTLC, similar to the availability slashing in PoS systems.
It would be unfortunate if an end result of such a solution is increasing
node operation costs as a whole, (which has other trickle down effects: less
nodes, higher routing fees, strain of dev-ops teams to ensure higher uptime
or loss of funds, etc), while having negligible effects on the "success"
profile of such an attack in practice.

If nodes wish to be compensated for committing capital to Lightning itself,
then markets such as Lightning Pool, which reward them for allocating the
capital (independent of use) for a period of time can help them scratch that
itch.

Returning to the original point, it may very well be the case that the
very first solution proposed (circa 2015) for this issue, closing out the
channel and sending back a proof of closure, is in fact more desirable from
the PoV of enforcing tangible costs, given it requires the attacker to
forfeit on-chain fees in the case of an unsuccessful attack. Services that
require long lived HTLCs (HTLC mailboxes, etc) can flag the HTLCs as such in
the onion payload allowing nodes to preferentially forward or reject them.

Zooming out, I have a new idea in this domain that attempts to tackle things
from a different angle. Assuming that any efforts to add further off-chain
costs are insignificant in the face of an attacker with few constraints
w.r.t budget, perhaps some efforts should be focused on instead ensuring
that if there's "turbulence" in the network, it can gracefully degraded to a
slightly more restricted operating mode until the storm passes. If an
attacker spends coins/time/utxos, etc to be in position to distrust things,
but then finds that things are working as normal, such a solution may serve
as a low cost deterrence mechanism that won't tangibly increase
operation/forwarding/payment costs within the network. Working out some of
the kinks re the idea, but I hope to publish it sometime over the next few
days.

-- Laolu


On Fri, Feb 12, 2021 at 8:24 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Joost,
>
> > > Not quite up-to-speed back into this, but, I believe an issue with
> using feerates rather than fixed fees is "what happens if a channel is
> forced onchain"?
> > >
> > > Suppose after C offers the HTLC to D, the C-D channel, for any reason,
> is forced onchain, and the blockchain is bloated and the transaction
> remains floating in mempools until very close to the timeout of C-D.
> > > C is now liable for a large time the payment is held, and because the
> C-D channel was dropped onchain, presumably any parameters of the HTLC
> (including penalties D owes to C) have gotten fixed at the time the channel
> was dropped onchain.
> >
> > > The simplicity of the fixed fee is that it bounds the amount of risk
> that C has in case its outgoing channel is dropped onchain.
> >
> > The risk is bound in both cases. If you want you can cap the variable
> fee at a level that isn't considered risky, but it will then not fully
> cover 

Re: [Lightning-dev] Lightning Pool: A Non-Custodial Channel Lease Marketplace

2020-11-05 Thread Olaoluwa Osuntokun
Hi Z,

Thanks for such kind words!

> Is there a documentation for the client/server intercommunications
> protocol?

Long form documentation on the client/server protocol hasn't yet been
written. However, just like Loop, the Pool client uses a fully-featured gRPC
protocol to communicate with the server. The set of protobufs describing the
current client <-> server protocol can be found here [1].

> How stable is this protocol?

I'd say it isn't yet to be considered "stable". We've branded the current
release as an "alpha" release, as we want to leave open the possibility of
breaking changes in the API itself (in addition to the usual disclaimers),
though it's also possible to use proper upgrade mechanisms to never really
_have_ to break the current protocol as is.

> A random, possibly-dumb idea is that a leased channel should charge 0
> fees initially.
> Enforcing that is a problem, however, since channel updates are
> unilateral, and of course the lessee cannot afford to close the channel it
> leased in case the lessor sets a nonzero feerate ahead of time.

Agreed that the purchaser of a lease should be able to also receive a fee
rate guarantee along with the channel lifetime enforcement. As you point
out, in order to be able to express something like this, the protocol may
need to be extended to allow nodes to advertise certain pair-wise channel
updates that are only valid if _both_ sides sign off on each other's
advertisements, similar to the initial announcement signatures message.
Onlookers in the network would possibly be able to recognize these new
modified channel update requirements via interpreting the bits in the
channel announcement itself, which requires both sides cooperating to
produce. It's also possible to dictate in the order of the channel lease
itself that the channel be unadvertised, though I know how you feel about
unadvertised channels :).

In the context of Lighting Pool itself, the employed node rating system can
be used to protect lease buyers from nodes that ramp up their fees after
selling a lease, using a punitive mechanism. From the PoV of the incentives
though, they should find the "smoothed" out revenue attractive enough to set
reasonable fees within sold channel leases.

One other thing that the purchaser of a lease needs to consider is effective
utilization of the leased capital. As an example, they should ensure they're
able to fully utilize the purchased bandwidth by using "htlc acceptor" type
hooks to disallow forwards out of the channel (as those could be used to
rebalance away the funds), clamping down on "lease leak".
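
As a sketch of the kind of hook that accomplishes this (purely
illustrative; lnd's actual HTLC interceptor API differs):

    package lease

    // forwardReq is a simplified view of an HTLC forward awaiting a
    // policy decision.
    type forwardReq struct {
        outgoingChanID uint64
    }

    // leaseGuard returns an acceptor that rejects any forward exiting
    // via a leased channel, since such forwards would rebalance away
    // the purchased inbound bandwidth (the "lease leak"). Receives into
    // the leased channel remain unaffected.
    func leaseGuard(leasedChans map[uint64]bool) func(forwardReq) bool {
        return func(req forwardReq) bool {
            return !leasedChans[req.outgoingChanID]
        }
    }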

I plan to significantly extend the current "security analysis" section to
cover these aspects, as well as some other considerations w.r.t the
interaction of Lifted UTXO timeouts and batch confirmation/proposal in the
context of Shadowchains. There'll also eventually be a more fleshed out
implementation section once we ship some features like adding additional
duration buckets. The git repo of the LaTeX itself (which is embedded in the
rendered PDF) can be found here [2].

> Secondarily to the Shadowchain discussion, it should be noted that if all
> onchain UTXOs were signed with n-of-n, there would not be a need for a
> fixed orchestrator; all n participants would cooperatively act as an
> orchestrator.

This is correct, and as you point out, moving to an n-of-n structure between
all participants runs into a number of scalability/coordination/availability
issues. The existence of the orchestrator also serves to reduce the
availability requirements of the participants, as they only need to be
online to accept/validate a shadowchain block that contains any of their
Lifted UTXOs. With the addition of a merkle-tree/MMR/SMT over all chain
state that's committed to in each block (say P2CH-style within the
orchestrator's output), an offline participant would still be able to "fully
validate" all operations that happened while they were away. This structure
could also be used to allow _new_ participants to audit the past history of
the chain, and can also be used to _authenticate_ lease rate data in the
context of CLM/Pool (so an authenticated+verifiable price feed of sorts).
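
As a rough illustration of that audit step, here's a sketch of the proof
check an offline participant might run, assuming a plain sha256 merkle tree
(the real construction might instead use an MMR or SMT):

    package shadowchain

    import (
        "bytes"
        "crypto/sha256"
    )

    // verifyInclusion checks a merkle proof that a leaf (say, the hash
    // of a participant's Lifted UTXO) is included under the state root
    // committed to in a shadowchain block.
    func verifyInclusion(leaf [32]byte, proof [][32]byte, index uint64,
        root [32]byte) bool {

        h := leaf
        for _, sibling := range proof {
            if index%2 == 0 {
                h = sha256.Sum256(append(h[:], sibling[:]...))
            } else {
                h = sha256.Sum256(append(sibling[:], h[:]...))
            }
            index /= 2
        }
        return bytes.Equal(h[:], root[:])
    }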

In the context of the Pool shadowchain, the existence of the orchestrator
allows the participants to make other tradeoffs given its slightly elevated
signing position. Consider that it may be "safe" for participants to
instantly (zero-conf chans) start using any channels created via a lease,
since double spending the channel output itself requires coordination of
_all_ the participants as well as the orchestrator, because all accounts are
time lock encumbered. Examining the dynamic more closely: since the
auctioneer's account in the context of Pool/CLM isn't encumbered, they'd be
the only one able to spend their output unilaterally. However, they have an
incentive not to do so, as they'd forfeit any paid execution fees in the
chain. If we want to strengthen the incentives to make "safe zero 

[Lightning-dev] Lightning Pool: A Non-Custodial Channel Lease Marketplace

2020-11-02 Thread Olaoluwa Osuntokun
Hi y'all,

We've recently released a new system which may be of interest to this list,
Lightning Pool [1]. Alongside a working client [2], we've also released a
white paper [3] which goes deeper into the architecture of the system.

Pool builds on some earlier ideas that were tossed around the ML concerning
creating a market for dual-funded channels for the network (though the
concept itself pre-dates those posts). Rather than target dual-funded
channels, we focus on the current uni-directional channels, and allow users
to buy+sell what we call a "channel lease": a package of inbound (and also
potentially outbound, via side-car channels!) liquidity that pays out a
premium for a fixed duration.

Live testnet+mainnet markets were also released today, giving routing nodes
a new stable revenue source, and giving those that need inbound liquidity to
bootstrap their new Lightning service a new, automated way to do so.

This is just our first alpha release, which contains some
limits/simplifications in the system itself. We plan to continue to iterate
on the system to implement new things like streaming interest payments, and
the version of side-car channels (buying a channel for a 3rd party)
described in the paper, amongst many other things.

[1]: https://lightning.engineering/posts/2020-11-02-pool-deep-dive/
[2]: https://github.com/lightninglabs/pool
[3]: https://lightning.engineering/lightning-pool-whitepaper.pdf


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-12 Thread Olaoluwa Osuntokun
> I suggest adding tlv records in `commitment_signed` to tell our channel
> peer that we're changing the values of these fields.

I think this fits in nicely with the "parameter re-negotiation" portion of
my loose Dynamic Commitments proposal. Note that in that paradigm, something
like this would be a distinct message, and also only be allowed with a
"clean commitment" (as otherwise what if I reduce the number of slots to a
value that is lower than the number of active slots?). With this, both sides
would be able to propose/accept/deny updates to the flow control parameters
that can be used to either increase the security of a channel, or implement
a sort of "slow start" protocol for any new peers that connect to you.

Similar to congestion window expansion/contraction in TCP, when a new peer
connects to you, you likely don't want to allow them to be able to consume
all the newly allocated bandwidth in an outgoing direction. Instead, you may
want to only allow them to utilize say 10% of the available HTLC bandwidth,
slowly increasing based on successful payments, and drastically
(multiplicatively) decreasing when you encounter very long lived HTLCs, or
an excessive number of failures.
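
To make the analogy concrete, here's a minimal sketch of such an AIMD-style
controller (names and constants are illustrative, not lnd code):

    package flowcontrol

    // slotController tracks how many HTLC slots a newly connected peer
    // may use, out of the negotiated channel maximum.
    type slotController struct {
        negotiatedMax int // hard limit from the channel parameters
        allowed       int // current dynamic allowance
    }

    // newSlotController starts a peer at ~10% of the negotiated limit.
    func newSlotController(negotiatedMax int) *slotController {
        return &slotController{
            negotiatedMax: negotiatedMax,
            allowed:       negotiatedMax / 10,
        }
    }

    // onSuccess additively grows the allowance after a successful
    // payment, up to the negotiated maximum.
    func (s *slotController) onSuccess() {
        if s.allowed < s.negotiatedMax {
            s.allowed++
        }
    }

    // onMisbehavior multiplicatively shrinks the allowance when we see
    // a very long lived HTLC or an excessive failure rate.
    func (s *slotController) onMisbehavior() {
        s.allowed /= 2
        if s.allowed < 1 {
            s.allowed = 1
        }
    }

Starting from a negotiated max of 483, a peer would begin with 48 slots,
climb to 68 after 20 successful payments, and fall back to 34 after a
single strike.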

A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
several classes of attacks (supplementing any mitigations by "channel
acceptor" hooks), and also give forwarding nodes more _control_ of exactly
how their allocated bandwidth is utilized by all connected peers. This is
possible to some degree today (by using an implicit value lower than the
negotiated values), but the implicit route doesn't give the other party any
information, and may end up in weird re-send loops, as the _why_ of an HTLC
rejection wasn't communicated. Also, if you end up in a half-signed state,
since we don't have any sort of "unadd", the channel may end up borked if
the violating party keeps retransmitting the same update upon reconnection.

> Are there other fields you think would need to become dynamic as well?

One other value that IMO should be dynamic, to protect against future
unexpected events, is the dust limit. "It Is Known" that this value "doesn't
really change", but we should be able to upgrade _all_ channels on the fly
if it does for w/e reason.

-- Laolu


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-12 Thread Olaoluwa Osuntokun
> It seems to me that the "funder pays all the commit tx fees" rule exists
> solely for simplicity (which was totally reasonable).

At this stage, I've learned that simplicity (when doing anything that
involves multi-party on-chain fee negotiation/verification/enforcement) can
really go a long way. Just think about all the edge cases w.r.t _allocating
enough funds to pay for fees_ we've discovered over the past few years in
the state machine. I fear adding a more elaborate fee splitting mechanism
would only blow up the number of obscure edge cases that may lead to a
channel temporarily or permanently being "borked".

If we're going to add a "fairer" way of splitting fees, we'll really need to
dig down pre-deployment to ensure that we've explored any resulting edge
cases within our solution space, as we'll only be _adding_ complexity to fee
splitting.

IMO, anchor commitments in their "final form" (fixed fee rate on the
commitment transaction, only "emergency" use of update_fee) significantly
simplify things, as they shift from "funder pays fees" to
"broadcaster/confirmer pays fees". However, as you note, this doesn't fully
distribute the worst-case cost of needing to go to chain with a "fully
loaded" commitment transaction. Even the HTLCs could be signed at just
1 sat/byte from the funder's perspective, once again putting the burden on
the broadcaster/confirmer to make up the difference.

-- Laolu


On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> It seems to me that the "funder pays all the commit tx fees" rule exists
> solely for simplicity
> (which was totally reasonable). I haven't been able to find much
> discussion about this decision
> on the mailing list nor in the spec commits.
>
> At first glance, it's true that at the beginning of the channel lifetime,
> the funder should be
> responsible for the fee (it's his decision to open a channel after all).
> But as time goes by and
> both peers earn value from this channel, this rule becomes questionable.
> We've discovered since
> then that there is some risk associated with having pending HTLCs
> (flood-and-loot type of attacks,
> pinning, channel jamming, etc).
>
> I think that *in some cases*, fundees should be paying a portion of the
> commit-tx on-chain fees,
> otherwise we may end up with a web-of-trust network where channels would
> only exist between peers
> that trust each other, which is quite limiting (I'm hoping we can do
> better).
>
> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
> that steal funds come from
> the fact that a routing node has paid downstream but cannot claim the
> upstream HTLCs (correct me
> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
> the HTLCs they offer
> while they're pending in the commit-tx, regardless of whether they're
> funder or fundee.
>
> The simplest way to do this would be to deduct the HTLC cost (172 *
> feerate) from the offerer's main output (instead of the funder's main
> output, while keeping the base commit tx weight paid by the funder).
>
> A more extreme proposal would be to tie the *total* commit-tx fee to the
> channel usage:
>
> * if there are no pending HTLCs, the funder pays all the fee
> * if there are pending HTLCs, each node pays a proportion of the fee
> proportional to the number of
> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
> pays 75% of the
> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
> redistributed.
>
> This model uses the on-chain fee as collateral for usage of the channel.
> If Alice wants to forward
> HTLCs through this channel (because she has something to gain - routing
> fees), she should be taking
> on some of the associated risk, not Bob. Bob will be taking the same risk
> downstream if he chooses
> to forward.
>
> I believe it also forces the fundee to care about on-chain feerates, which
> is a healthy incentive.
> It may create a feedback loop between on-chain feerates and routing fees,
> which I believe is also
> a good long-term thing (but it's hard to predict as there may be negative
> side-effects as well).
>
> What do you all think? Is this a terrible idea? Is it okay-ish, but not
> worth the additional
> complexity? Is it an amazing idea worth a lightning nobel? Please don't
> take any of my claims
> for granted and challenge them, there may be negative side-effects I'm
> completely missing, this is
> a fragile game of incentives...
>
> Side-note: don't forget to take into account that the fees for HTLC
> transactions (second-level txs)
> are always paid by the party that broadcasts them (which makes sense). I
> still think this is not
> enough and can even be abused by fundees in some setups.
>
> Thanks,
> Bastien

Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
Hi Z,

> Probably arguably off-topic, but this post triggered me into thinking
> about an insane idea: offchain update from existing Poon-Dryja to newer
> Decker-Russell-Osuntokun ("eltoo") mechanism.

Ooo, yeh I don't see why this wouldn't be possible, assuming that at that
point no_input has been deployed...

However, switching between commitment types that have distinct commitment
invalidation mechanisms appears to make things a bit more complex. Consider
that since the earlier lifetime of my channel used _revocation_ based
invalidation, I'd need to be able to handle two types of invalid commitment
broadcasts: broadcast of a revoked commitment, and broadcast of a _replaced_
commitment.

As a result, implementations may want to limit the types of transitions to
only a commitment type with the same invalidation mechanism. On the other
hand, I don't think that additional complexity (being able to handle both
types
of contract violations) is insurmountable.

For those that wish to retain a revocation based commitment invalidation
model, they may instead opt to upgrade to something like this [1], which I
consider to be the current best successor to the OG Poon-Dryja revocation
mechanism (it has some other cool traits too). The commitment format still
needs a sexy name though... "el tres"? ;)

> We can create an upgrade transaction that is a cut-through of a mutual
> close of the Poon-Dryja, and a funding open of a Decker-Russell-Osuntokun.

Splicing reborn!

> The channel retains its short-channel-id, which may be useful, since a
> provably-long-lived channel implies both channel participants have high
> reliability (else one or the other would have closed the channel at some
> point), and a pathfinding algorithm may bias towards such long-lived
> channels.

Indeed, I think some implementations (eclair?) factor in the age of the
channel they're attempting to traverse during path finding.

[1]: https://eprint.iacr.org/2020/476

-- Laolu

On Tue, Jul 21, 2020 at 7:50 AM ZmnSCPxj  wrote:

> Good morning Laolu, and list,
>
> Probably arguably off-topic, but this post triggered me into thinking
> about an insane idea: offchain update from existing Poon-Dryja to newer
> Decker-Russell-Osuntokun ("eltoo") mechanism.
>
> Due to the way `SIGHASH_ANYPREVOUT` will be deployed --- requires a new
> pubkey type and works only inside the Taproot construction --- we cannot
> seamlessly upgrade from a Poon-Dryja channel to a Decker-Russell-Osuntokun.
> The funding outpoint itself has to be changed.
>
> We can create an upgrade transaction that is a cut-through of a mutual
> close of the Poon-Dryja, and a funding open of a Decker-Russell-Osuntokun.
> This transaction spends the funding outpoint of an existing Poon-Dryja
> channel, and creates a Decker-Russell-Osuntokun funding outpoint.
>
> However, once such an upgrade transaction has been created and signed by
> both parties (after the necessary initial state is signed in the
> Decker-Russell-Osuntokun mechanism), nothing prevents the participants
> from, say, just keeping the upgrade transaction offchain as well.
>
> The participants can simply, after the upgrade transaction has been
> signed, revoke the latest Poon-Dryja state (which has been copied into the
> initial Decker-Russell-Osuntokun state).
> Then they can keep the upgrade transaction offchain, and treat the funding
> outpoint of the upgrade transaction as the "internal funding outpoint" for
> future Decker-Russell-Osuntokun updates.
>
> Now, of course, since the onchain funding outpoint remains a Poon-Dryja,
> it can still be spent using a revoked state.
> Thus, we do not gain anything much, since the entire HTLC history of the
> Poon-Dryja channel needs to be retained as protection against theft
> attempts.
>
> However:
>
> * Future HTLCs in the Decker-Russell-Osuntokun domain need not be recorded
> permanently, thus at least bounding the information liability of the
> upgraded channel.
> * The channel retains its short-channel-id, which may be useful, since a
> provably-long-lived channel implies both channel participants have high
> reliability (else one or the other would have closed the channel at some
> point), and a pathfinding algorithm may bias towards such long-lived
> channels.
>
> Of note, is that if the channel is later mutually closed, the upgrade
> transaction, being offchain, never need appear onchain, so this potentially
> saves blockchain space.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
> alternative. I think it's better to explicitly signal that we
> want to pause the channel while we upgrade the commitment format (and stop
> accepting HTLCs while we're updating, like we do once we've exchanged the
> `shutdown` message). Otherwise the asynchronicity of the protocol is
> likely to create months (years?) of tracking unwanted force-closes because
> of races between `commit_sig`s with the new and old commitment format.
>
> Updating the commitment format should be a rare enough operation that we
> can afford to synchronize with a two-way `update_commitment_format`
> handshake, then temporarily freeze the channel.
>
> The tricky part will be how we handle "dangling" operations that were sent
> by the remote peer *after* we sent our `update_commitment_format` but
> *before* they received it. The simplest choice is probably to have the
> initiator just ignore these messages, and the non-initiator enqueue these
> un-acked messages and replay them after the commitment format update
> completes (or just drop them and cancel corresponding upstream HTLCs if
> needed).
>
> Regarding initiating the commitment format update, how do you see this
> happening? The funder activates a new feature on his node (e.g.
> `option_anchor_outputs`), and broadcasts it in `init` and
> `node_announcement`, then waits until the remote also activates it in its
> `init` message, and then reacts to this by triggering the update process?
>
> Thanks,
> Bastien
>
> On Tue, Jul 21, 2020 at 3:18 AM Olaoluwa Osuntokun wrote:
>
>> Hi y'all,
>>
>> In this post, I'd like to share an early version of an extension to the
>> spec
>> and channel state machine that would allow for on-the-fly commitment
>> _format/type_ changes. Notably, this would allow for us to _upgrade_
>> commitment types without any on-chain activity, executed in a
>> de-synchronized and distributed manner. The core realization these
>> proposal
>> is based on the fact that the funding output is the _only_ component of a
>> channel that's actually set in stone (requires an on-chain transaction to
>> modify).
>>
>>
>> # Motivation
>>
>> (you can skip this section if you already know why something like this is
>> important)
>>
>> First, some motivation. As y'all are likely aware, the current deployed
>> commitment format has changed once so far: to introduce the
>> `static_remote_key` variant which makes channels safer by sending the
>> funds
>> of the party that was force closed on to a plain pubkey w/o any extra
>> tweaks
>> or derivation. This makes channel recovery safer, as the party that may
>> have
>> lost data (or can't continue the channel), no longer needs to learn of a
>> secret value sent to them by the other party to be able to claim their
>> funds. However, as this new format was introduced sometime after the
>> initial
>> bootstrapping phase of the network, most channels in the wild today _are
>> not_ using this safer format.  Transitioning _all_ the existing channels
>> to
>> this new format as is, would require closing them _all_, generating tens
>> of
>> thousands of on-chain transactions (to close, then re-open), not to
>> mention
>> chain fees.
>>
>> With dynamic commitments, users will be able to upgrade their _existing_
>> channels to new safer types, without any new on-chain transactions!
>>
>> Anchor output based commitments represent another step forward in making
>> channels safer as they allow users/software to no longer have to predict
>> chain fees ahead of time, and also bump up the fee of a
>> commitment/2nd-level-htlc-transaction, which is extremely important when
>> it
>> comes to timely on-chain resolution of HTLC contracts. This upgrade
>> process
>> (as touched on below) can either be manually triggered, or automatically
>> triggered once the software updates and finds a new preferable default
>> commitment format is available.
>>
>> As many of us are aware, the addition of schnorr and taproot to the
>> Bitcoin
>> protocol dramatically increases the design space for channels as a whole.
>> It
>> may take some time to explore this design space, particularly as entirely
>> new channel/commitment formats [1] continue to be discovered. The roll out
>> of dynamic commitments allows us to defer the concrete design of the
>> future
>> commitment formats, yet still benefit from the immediate improvement that
>> comes with morphing the funding output to be a single-key (non-p2wsh,
>> though
>> the line starts

Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
After getting some feedback from the Lightning Labs squad, we're thinking
that it may be better to make the initial switch over double-opt-in, similar
to the current `shutdown` message flow. So with this variant, we'd add two
new messages: `commit_switch` and `commit_switch_reply` (placeholder
names). We may want to retain the "initiator" only etiquette for simplicity,
but if we want to allow both sides to initiate then we'll need to handle
collisions (with a randomized back off possibly).

The `commit_switch` message would contain the new target `channel_type` and
the opaque TLV blob of the re-negotiation parameters. The
`commit_switch_reply` message would then give the receiver the ability to
_reject_ the switch (say it doesn't want to increase `max_allowed_htlcs`),
or accept it, and specify its own set of parameters. Similar to the
`shutdown` message, both parties can only proceed with the switch over _once
all HTLCs_ have been cleared. As a result, they should reject any HTLC
forwarding attempts through the target channel once they receive the initial
message. From there, they'd carry out the modified commitment dance outlined
in my prior mail.
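
For illustration, the two placeholder messages might carry something like
the following (field names are mine; the real encoding would be TLV-based
per the usual spec conventions):

    package wire

    // CommitSwitch proposes moving the channel to a new commitment
    // type once all HTLCs have cleared.
    type CommitSwitch struct {
        ChannelID   [32]byte
        ChannelType uint64 // target channel_type, simplified here
        Params      []byte // opaque TLV blob of re-negotiation params
    }

    // CommitSwitchReply either rejects the proposal outright, or
    // accepts it while supplying the responder's own parameters.
    type CommitSwitchReply struct {
        ChannelID [32]byte
        Accept    bool
        Params    []byte
    }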

Thoughts?

-- Laolu

On Mon, Jul 20, 2020 at 6:18 PM Olaoluwa Osuntokun 
wrote:

> Hi y'all,
>
> In this post, I'd like to share an early version of an extension to the
> spec
> and channel state machine that would allow for on-the-fly commitment
> _format/type_ changes. Notably, this would allow for us to _upgrade_
> commitment types without any on-chain activity, executed in a
> de-synchronized and distributed manner. The core realization this proposal
> is based on is the fact that the funding output is the _only_ component of a
> channel that's actually set in stone (requires an on-chain transaction to
> modify).
>
>
> # Motivation
>
> (you can skip this section if you already know why something like this is
> important)
>
> First, some motivation. As y'all are likely aware, the current deployed
> commitment format has changed once so far: to introduce the
> `static_remote_key` variant which makes channels safer by sending the funds
> of the party that was force closed on to a plain pubkey w/o any extra
> tweaks
> or derivation. This makes channel recovery safer, as the party that may
> have
> lost data (or can't continue the channel), no longer needs to learn of a
> secret value sent to them by the other party to be able to claim their
> funds. However, as this new format was introduced sometime after the
> initial
> bootstrapping phase of the network, most channels in the wild today _are
> not_ using this safer format.  Transitioning _all_ the existing channels to
> this new format as is, would require closing them _all_, generating tens of
> thousands of on-chain transactions (to close, then re-open), not to mention
> chain fees.
>
> With dynamic commitments, users will be able to upgrade their _existing_
> channels to new safer types, without any new on-chain transactions!
>
> Anchor output based commitments represent another step forward in making
> channels safer as they allow users/software to no longer have to predict
> chain fees ahead of time, and also bump up the fee of a
> commitment/2nd-level-htlc-transaction, which is extremely important when it
> comes to timely on-chain resolution of HTLC contracts. This upgrade process
> (as touched on below) can either be manually triggered, or automatically
> triggered once the software updates and finds a new preferable default
> commitment format is available.
>
> As many of us are aware, the addition of schnorr and taproot to the Bitcoin
> protocol dramatically increases the design space for channels as a whole.
> It
> may take some time to explore this design space, particularly as entirely
> new channel/commitment formats [1] continue to be discovered. The roll out
> of dynamic commitments allows us to defer the concrete design of the future
> commitment formats, yet still benefit from the immediate improvement that
> comes with morphing the funding output to be a single-key (non-p2wsh,
> though
> the line starts to blur w/ taproot) output. With this new funding output
> format in place, users/software will then be able to update to the latest
> and greatest commitment format that starts to utilize all the new tools
> available (scriptless script based htlcs, etc) at a later date.
>
> Finally, the ability to update the commitment format itself will also allow
> us to re-parametrize portions of the channels which are currently set in
> stone. As an example, right now the # of max allowed outstanding HTLCs is
> set in stone once the channel has opened. With the ability to also swap out
> commitment _parameters_, we can start to experiment with flow-control like
> ideas such as limiting

[Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-20 Thread Olaoluwa Osuntokun
Hi y'all,

In this post, I'd like to share an early version of an extension to the spec
and channel state machine that would allow for on-the-fly commitment
_format/type_ changes. Notably, this would allow for us to _upgrade_
commitment types without any on-chain activity, executed in a
de-synchronized and distributed manner. The core realization this proposal
is based on is the fact that the funding output is the _only_ component of a
channel that's actually set in stone (requires an on-chain transaction to
modify).


# Motivation

(you can skip this section if you already know why something like this is
important)

First, some motivation. As y'all are likely aware, the current deployed
commitment format has changed once so far: to introduce the
`static_remote_key` variant which makes channels safer by sending the funds
of the party that was force closed on to a plain pubkey w/o any extra tweaks
or derivation. This makes channel recovery safer, as the party that may have
lost data (or can't continue the channel), no longer needs to learn of a
secret value sent to them by the other party to be able to claim their
funds. However, as this new format was introduced sometime after the initial
bootstrapping phase of the network, most channels in the wild today _are
not_ using this safer format.  Transitioning _all_ the existing channels to
this new format as is, would require closing them _all_, generating tens of
thousands of on-chain transactions (to close, then re-open), not to mention
chain fees.

With dynamic commitments, users will be able to upgrade their _existing_
channels to new safer types, without any new on-chain transactions!

Anchor output based commitments represent another step forward in making
channels safer as they allow users/software to no longer have to predict
chain fees ahead of time, and also bump up the fee of a
commitment/2nd-level-htlc-transaction, which is extremely important when it
comes to timely on-chain resolution of HTLC contracts. This upgrade process
(as touched on below) can either be manually triggered, or automatically
triggered once the software updates and finds a new preferable default
commitment format is available.

As many of us are aware, the addition of schnorr and taproot to the Bitcoin
protocol dramatically increases the design space for channels as a whole. It
may take some time to explore this design space, particularly as entirely
new channel/commitment formats [1] continue to be discovered. The roll out
of dynamic commitments allows us to defer the concrete design of the future
commitment formats, yet still benefit from the immediate improvement that
comes with morphing the funding output to be a single-key (non-p2wsh, though
the line starts to blur w/ taproot) output. With this new funding output
format in place, users/software will then be able to update to the latest
and greatest commitment format that starts to utilize all the new tools
available (scriptless script based htlcs, etc) at a later date.

Finally, the ability to update the commitment format itself will also allow
us to re-parametrize portions of the channels which are currently set in
stone. As an example, right now the # of max allowed outstanding HTLCs is
set in stone once the channel has opened. With the ability to also swap out
commitment _parameters_, we can start to experiment with flow-control like
ideas such as limiting a new channel peer to only a handful of HTLC slots,
which is then progressively increased based on "good behavior" (or the other
way around as well). Beyond just updating the channel parameters, it's also
possible to "change the rules" of a channel on the fly. An example of this
variant would be creating a new pseudo-type that implements a fee policy
other than "the initiator pays all fees".


# Protocol Changes

With the motivation/background set up, let's dig into some potential ways
the protocol can be modified to support this new meta-feature. As this
change is more of a meta-change, AFAICT, the amount of protocol changes
doesn't appear to be _too_ invasive ;). Most of the heavy lifting is done by
the wondrous TLV message field extensions.

## Explicit Channel Type Negotiation

Right now in the protocol, as new channel types are introduced (static key,
and now anchors) we add a new feature bit. If both nodes have the feature
bit set, then that new channel type is to be used. Notice how this is an
_implicit_ upgrade: there's no explicit signalling during the _funding_
process that a new channel type is to be used. This works OK, if there's one
major accepted "official" channel type, but not as new types are introduced
for specific use cases or applications. The implicit negotiation also makes
things a bit ambiguous at times. As an example, if both nodes have the
`static_remote_key` _and_ anchor outputs feature bit set, which channel type
should they open?

To resolve this existing ambiguity in the channel type negotiation, we'll
need to make the channel type used 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-21 Thread Olaoluwa Osuntokun
Hi Jeremy,

The up-front costs can be further mitigated even without something like CTV
(which makes things more efficient) by adding a layer of indirection w.r.t
how HTLCs are manifested within the commitment transactions. To do this, we
add a new 2-of-2 multi-sig output (an HTLC indirect block) to the commitment
transactions. This is then spent by a new transaction (the HTLC block) that
actually manifests the HTLCs (i.e., creates the HTLC outputs).

With this change, the cost to have a commitment be mined in the chain is now
_independent of the number of HTLCs in the channel_. In the past I've called
this construction "coupe commitments" (lol).
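
To sketch the shape of the construction (simplified, dependency-free
structs rather than real wire types):

    package coupe

    type txOut struct {
        value    int64
        pkScript []byte
    }

    type tx struct {
        inputs  []string // outpoint references, simplified
        outputs []txOut
    }

    // buildCoupeCommitment collapses N HTLCs into a single 2-of-2
    // "indirect block" output on the commitment, plus a pre-signed HTLC
    // block transaction that fans the HTLCs back out if any of them
    // ever needs to be resolved on-chain.
    func buildCoupeCommitment(balanceOuts, htlcs []txOut,
        multiSigScript []byte) (commit, htlcBlock tx) {

        var htlcTotal int64
        for _, h := range htlcs {
            htlcTotal += h.value
        }

        // The commitment carries only the balance outputs plus one
        // aggregate output, so its weight (and thus the cost to get it
        // mined) no longer scales with the number of HTLCs.
        commit.outputs = append(balanceOuts, txOut{
            value:    htlcTotal,
            pkScript: multiSigScript,
        })

        // The HTLC block spends the aggregate output and manifests the
        // individual HTLC outputs.
        htlcBlock.inputs = []string{"commitment:indirect-output"}
        htlcBlock.outputs = htlcs
        return commit, htlcBlock
    }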

Other flavors of this technique are possible as well, allowing both sides
to craft varying HTLC indirection trees (double layers of indirection are
possible, etc) which may factor in traits like HTLC expiration time (HTLCs
that expire later are further down in the tree).

Something like CTV does indeed make this technique more powerful+efficient,
as it allows one to succinctly commit to all the relevant desirable
combinations of HTLC indirect blocks and HTLC fan-out transactions.

-- Laolu


On Sat, Jun 20, 2020 at 4:14 PM Jeremy  wrote:

> I am not steeped enough in Lightning Protocol issues to get the full
> design space, but I'm fairly certain BIP-119 Congestion Control trees would
> help with this issue.
>
> You can bucket a tree by doing a histogram of HTLC size, so that all small
> HTLCs live in a common CTV subtree and don't interfere with higher value
> HTLCs. You can also play with sequencing to prevent those HTLCs from
> getting longchains in the mempool until they're above a certain value.
> --
> @JeremyRubin
>
>
> On Thu, Jun 18, 2020 at 1:41 AM Antoine Riard 
> wrote:
>
>> Hi Rene,
>>
>> Thanks for disclosing this vulnerability,
>>
>> I think this blackmail scenario holds but sadly there is a lower scenario.
>>
>> Both "Flood & Loot" and your blackmail attack rely on the `update_fee`
>> mechanism and unbounded commitment transaction size inflation, though the
>> first provokes block congestion while yours locks down in-flight fees in
>> a funds hostage situation.
>>
>> > 1. The current solution is to just not use up the max value of
>> htlc's. Eclair and c-lightning by default only use up to 30 htlcs.
>>
>> As of today, yes I would recommend capping commitment size both for
>> ensuring competitive propagation/block selection and limiting HTLC exposure.
>>
>> > 2. Probably the best fix (not sure if I understand the consequences
>> correctly) is coming from this PR to bitcoin core (c.f.
>> https://github.com/bitcoin/bitcoin/pull/15681 by @TheBlueMatt). If I get
>> it correctly with that we could always have low fees and ask the person who
>> want to claim their outputs to pay fees. This excludes overpayment and
>> could happen at a later stage when fees are not spiked. Still the victim
>> who offered the htlcs would have to spend those outputs at some time.
>>
>> It's a bit more complex: the carve-out output, even combined with anchor
>> output support on the LN-side, won't protect against different flavors of
>> pinning. I invite you to go through the logs of the past 2 LN dev meetings.
>>
>> > 3. Don't overpay fees in commitment transactions. We can't foresee the
>> future anyway
>>
>> Once 2. is well-addressed we may deprecate `update_fee`.
>>
>> > 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
>> value (like we do with sub dust amounts and sub satoshi amounts. This would
>> at least make the attack expensive as the attacker would have to bind a lot
>> of liquidity.
>>
>> Ideally we want the dust_limit to be dynamic: the dust cap should be
>> based on the HTLC's economic value, the feerate of its output, the
>> feerate of the HTLC-transaction, and the feerate estimate of any CPFP to
>> bump it. I think that's worth doing once we've solved 3. and 4.
>>
>> > 5. Somehow be able to aggregate htlc's. In a world where we use payment
>> points instead of preimages we might be able to do so. It would be really
>> cool if separate HTLC's could be combined to 1 single output. I played
>> around a little bit but I have not come up with a scheme that is more
>> compact in all cases. Thus I just threw in the idea.
>>
>> Yes we may encode all HTLC in some Taproot tree in the future. There are
>> some wrinkles but for a high-level theoretical construction see my post on
>> CoinPool.
>>
>> > 6. Split onchain fees differently (now the attacker would also lose
>> fees by conducting this attack) - No I don't want to start yet another fee
>> bikeshadding debate. (In particular I believe that a different split of
>> fees might make the Flood & Loot attack economically more viable which
>> relies on the same principle)
>>
>> Likely a bit more fee bikeshedding is something we have to do to make
>> LN secure... switching fees from pre-committed ones to single-party,
>> dynamic ones.
>>
>> > Independently I think we should 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-21 Thread Olaoluwa Osuntokun
Hi Rene,

IMO this is mostly mitigated by anchor commitments.  The impact of this
attack is predicated on the "victim" paying 5x on-chain fees (for their
confirmation target) to sweep all their HTLCs.  Anchor commitments let the
initiator of the channel select a very low starting fee (just enough to get
into the mempool), and also let them actually bump the fees of second-level
HTLC transactions.

In addition to being able to pay much lower fees ("just enough" to get into
the chain), anchor commitments allow second-level HTLC _aggregation_, This
means that for HTLCs with the same expiry height, a peer is able to _batch_
them all into a single transaction, further saving on fees.
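
A sketch of that aggregation step, assuming a simplified HTLC
representation:

    package sweep

    // htlc is a simplified view of a second-level HTLC to be swept.
    type htlc struct {
        expiryHeight uint32
        amountSat    int64
    }

    // batchByExpiry groups HTLCs sharing an expiry height, so each
    // group can be swept by one aggregated second-level transaction
    // (many inputs, few outputs), amortizing the fee that would
    // otherwise be paid once per HTLC.
    func batchByExpiry(htlcs []htlc) map[uint32][]htlc {
        batches := make(map[uint32][]htlc)
        for _, h := range htlcs {
            batches[h.expiryHeight] = append(batches[h.expiryHeight], h)
        }
        return batches
    }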

lnd shipped with a form of anchor commitments in our past major release
(v0.10.0-beta). In that release the format is opt in, and is enabled with a
startup command-line flag. For 0.11, we're planning on making this the
default commitment type, giving all users that update the ability to
_finally_ have proper fee control of their commitments, and second-level
HTLC transactions.

> The direction of HTLCs is chosen so that the amount is taken from the
> `to_remote` output of the attacker (obviously on the victims side it will
> be the `to_local` output)

One relevant detail here is that if the attacker is to attempt this with
minimal setup, then they'll need to be the ones that open the channel.
Since they're the initiator, they'll actually be the ones paying the fees
rendering this attempt moot.

Alternatively, they could use something like Lightning Loop to gain the
_outbound_ bandwidth (Loop In) needed to attempt this attack (using inbound
opened channels), but they'll need to pay for that bandwidth, adding a
further cost to the attack. Not to mention that they'll need to pay on-chain
fees to sweep the HTLCs they created themselves. In short, this attack isn't
costless, as they'll need to acquire outbound liquidity for an incoming
channel, and also need to pay fees independent of the "success" of their
attack.

> I quote from BOLT 02 which suggests a buffer of a factor of 5

I'm not sure how many implementations actually follow this in practice.
FWIW, lnd doesn't.

> Additionally the victim will also have to swipe all offered HTLCs (which
> will be additional costs but could be done once the fees came down) so we
> neglect them.

No, the attacker is the one that needs to sweep these HTLCs, since they
offered them. This adds to their costs.

> Knowing that this will happen and that the victim has to spend those funds
> (publishing old state obviously does not work!) the attacker has a time
> window to blackmail the victim outside of the lightning network protocol

I don't think this is always the case. Depending on the minimum HTLC
settings in the channel (another mitigation), and the distribution of funds
in the channel, it may be the case that the victim doesn't have any funds in
the channel at all (everything was on the attacker's side). In that case,
the "victim" doesn't really care if this channel is clogged up as they
really have no stake in this channel.

> Also you might say that an attacker needs many incoming channels to
> execute this attack. This can be achieved by gaming the autopilot.

As mentioned above, gaining purely incoming channels doesn't allow the
attacker to launch this attack, as they'll be unable to _send out_ from any
of those channels.

> 1. The current solution is to just not use up the max value of htlc's.
> Eclair and c-lightning by default only use up to 30 htlcs.

IMO, this isn't a solution. Lowering the max number of HTLCs in-flight just
makes it easier (lowers the capital costs) to jam a channel. The authors of
the paper you linked have another paper exploring these types of attacks
[1], and cite the _hard coded_ limit of 483 HTLCS as an enabling factor.

> 2. Probably the best fix (not sure if I understand the consequences
> correctly) is coming from this PR to bitcoin core

I think you're misinterpreting this PR, but see my first paragraph about
anchor commitments which that PR enables.

> 3. Don't overpay fees in commitment transactions. We can't foresee the
> future anyway

Anchors let you do this ;)

> 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
> value (like we do with sub dust amounts and sub satoshi amounts.

This is already how "dust HTLCs" are calculated. The amount remaining from
the HTLC after it pays for its second-level transaction needs to be above
dust. This policy can be asymmetric across commitments in the channel.
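
A one-function sketch of that rule (the numbers in the comment are
illustrative, not spec constants):

    package commitment

    // isTrimmedHTLC reports whether an HTLC is treated as dust on a
    // commitment: the value left after paying for its own second-level
    // transaction must still clear the dust limit.
    func isTrimmedHTLC(amountSat, secondLevelFeeSat, dustLimitSat int64) bool {
        return amountSat-secondLevelFeeSat < dustLimitSat
    }

    // e.g. a 1,000 sat HTLC whose second-level transaction costs 700
    // sats is trimmed against a 546 sat dust limit: 300 < 546.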

> 5. Somehow be able to aggregate htlc's.

Anchors let you do this on the transaction level (MIMO 2nd level HTLC
transactions).

I hope other implementations join lnd in deploying anchor commitments to
mitigate nuisance attacks like this, and _finally_ give users better fee
control for channels and any off-chain contracts within those channels.

BTW, the "Flood & Loot" paper you linked mentions anchor commitments as a
solution towards the 

Re: [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-05 Thread Olaoluwa Osuntokun
Hi Antoine,

> Even with cheaper, more efficient protocols like BIP 157, you may have a
> huge discrepancy between what is asked and what is offered. Assuming 10M
> light clients [0] each of them consuming ~100MB/month for filters/headers,
> that means you're asking 1PB/month of traffic to the backbone network. If
> you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> signal BIP 157, that's an increase of 100GB/month for each. Which is
> consequent with regards to the estimated cost of 350GB/month for running
> an actual public node

One really dope thing about BIP 157+158 is that the protocol makes serving
light clients now _stateless_, since the full node doesn't need to perform
any unique work for a given client. As a result, the entire protocol could
be served over something like HTTP, taking advantage of all the established
CDNs and anycast serving infrastructure, which can reduce syncing time (less
latency to fetch data) and also more widely distribute the load of light
clients across the existing web infrastructure. Going further, with HTTP/2's
server-push capabilities, those serving this data can still push out
notifications for new headers, etc.
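
A minimal sketch of that stateless access pattern (the URL scheme here is
hypothetical, not an existing service):

    package lightclient

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetchFilter grabs the BIP 158 compact filter for a block over
    // plain HTTP. Because the response is identical for every client,
    // any CDN edge can serve (and cache) it.
    func fetchFilter(baseURL, blockHash string) ([]byte, error) {
        resp, err := http.Get(fmt.Sprintf("%s/filter/%s", baseURL, blockHash))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }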

> Therefore, you may want to introduce monetary compensation in exchange of
> servicing filters. Light client not dedicating resources to maintain the
> network but free-riding on it, you may use their micro-payment
> capabilities to price chain access resources [3]

Piggybacking off the above idea, if the data starts being widely served
over HTTP, then LSATs [1][2] can be used to add a lightweight payment
mechanism by inserting a new proxy server in front of the filter/header
infrastructure. The minted tokens themselves may allow a user to purchase
access to a single header/filter, a range of them in the past, or N headers
past the known chain tip, etc, etc.

-- Laolu

[1]: https://lsat.tech/
[2]: https://lightning.engineering/posts/2020-03-30-lsat/


On Tue, May 5, 2020 at 3:17 AM Antoine Riard 
wrote:

> Hi,
>
> (cross-posting as it's really both layers concerned)
>
> Ongoing advancement of BIP 157 implementation in Core may be the
> opportunity to reflect on the future of light client protocols and use this
> knowledge to make better-informed decisions about what kind of
> infrastructure is needed to support mobile clients at large scale.
>
> Trust-minimization of Bitcoin security model has always relied first and
> above on running a full-node. This current paradigm may be shifted by LN
> where fast, affordable, confidential, censorship-resistant payment services
> may attract a lot of adoption without users running a full-node. Assuming a
> user adoption path where a full-node is required to benefit from LN may
> deprive a lot of users, especially those who are already denied access to
> real financial infrastructure. It doesn't mean we shouldn't foster node
> adoption when people are able to do so, and having an LN wallet may even
> be a first step towards it.
>
> Designing a mobile-first LN experience opens its own gap of challenges
> especially in terms of security and privacy. The problem can be scoped as
> how to build a scalable, secure, private chain access backend for millions
> of LN clients ?
>
> Light client protocols for LN exist (either BIP157 or Electrum are used),
> although their privacy and security guarantees with regards to
> implementation on the client-side may still be an object of concern
> (aggressive tx-rebroadcast, sybillable outbound peer selection, trusted fee
> estimation). That said, one of the bottlenecks is likely the number of
> full-nodes being willingly to dedicate resources to serve those clients.
> It's not about _which_ protocol is deployed but more about _incentives_ for
> node operators to dedicate long-term resources to client they have lower
> reasons to care about otherwise.
>
> Even with cheaper, more efficient protocols like BIP 157, you may have a
> huge discrepancy between what is asked and what is offered. Assuming 10M
> light clients [0] each of them consuming ~100MB/month for filters/headers,
> that means you're asking 1PB/month of traffic to the backbone network. If
> you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> signal BIP 157, that's an increase of 100GB/month for each. Which is
> consequent with regards to the estimated cost of 350GB/month for running an
> actual public node. Widening full-node adoption, especially in terms of
> geographic distribution, means we should do as much as we can to bound its
> operational cost.
>
> Obviously, deployment of a more efficient tx-relay protocol like Erlay
> will free up some resources, but it may be wiser to dedicate them to
> increasing the health and security of the backbone network, like deploying
> more outbound connections.
>
> Unless your light client protocol is so ridiculously cheap that it can
> rely on the niceness of a subset of node operators offering free resources,
> it won't scale. And it's likely you will always have a ratio 

Re: [Lightning-dev] An update on PTLCs

2020-04-23 Thread Olaoluwa Osuntokun
(this may be kind of off-topic, more about DLC deployment than PTLCs
themselves)

From my PoV, new technologies aren't what has held back DLC deployment
since the paper was originally released. Tadge has had working code that can
be deployed today for some time now, and other parties like DG-Lab have
created full-fledged demos with the system working end to end. Instead, the
real impediment has been the bootstrapping of the oracles which the scheme
critically depends upon.

Without oracles, none of it really works. Although there're measures to
prevent the oracles from equivocating (reporting two conflicting
prices/events for a particular instance), bootstrapping a new oracle still
requires a very high degree of trust, as they can lie or report incorrect
data. As a result, actually deploying an oracle for a system like this is
tricky business: it's a trusted centralized entity, so it will run into all
the normal meatspace/legal/operational risk that any trusted centralized
service would encounter.

Earlier today, Coinbase announced that they were releasing a new price
oracle for the ETH ecosystem [1]. This caught my attention as one can
imagine, that it would be even simpler for them to deploy a DLC oracle which
exports an API to obtain signed prices/events. As an existing large company
in the space (depending on who you talk to), they're a trusted entity, which
has earned a good reputation over the years (solving this
bootstrapping/trust issue). If they do eventually grow the service to also
encompass this use case, then it enables a number of possibilities, as
there's still a ton of value in just base DLC-specific channels (or one off
contracts), without all the fancy barrier escrow scriptless scripts swappy
swap swap stuff.
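
As a sketch of the trust model, the client-side check boils down to
verifying the oracle's signature over an agreed outcome encoding (plain
ed25519 here for brevity, with a made-up outcome format; a real DLC oracle
would sign with a pre-committed R point so the attestation doubles as an
adaptor secret):

    package oracle

    import "crypto/ed25519"

    // verifyAttestation checks that the oracle signed a given outcome
    // string, e.g. "BTC-USD:2020-04-23:7500".
    func verifyAttestation(pub ed25519.PublicKey, outcome string,
        sig []byte) bool {

        return ed25519.Verify(pub, []byte(outcome), sig)
    }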

-- Laolu

[1]:
https://blog.coinbase.com/introducing-the-coinbase-price-oracle-6d1ee22c7068


On Thu, Apr 23, 2020 at 7:52 AM Nadav Kohen  wrote:

> Hi Laolu,
>
> Thanks for the response :)
>
> I agree that some more framing probably would have been good to have in my
> update.
>
> First, I want to clarify that my intention is not to implement a
> PTLC-based lightning network on top of ECDSA adaptor signatures, as I do
> believe that using Schnorr will be superior, but rather I wish to get some
> PoC sandbox with which to start implementing and testing out the long list
> of currently theoretical proposals surrounding PTLCs, most of which are
> implementation agnostic (to a degree anyway). I think it would be super
> beneficial to have more fleshed out with respect to what some challenges of
> a Payment Point LN are going to be than we understand now, before Schnorr
> is implemented and it is time to commit to some PTLC scheme for real.
>
> Second, I agree that I've probably understated somewhat the changes that
> will be needed in most implementations as I was mostly thinking about what
> would need to change in the BOLTs, which does actually seem relatively
> minimal (although as you mention, these minimal changes to the BOLTs do
> trigger large changes in many implementations). Also, good point on how
> BOLT 11 (invoicing) will have to be altered as well, must've slipped my
> mind.
>
> Best,
> Nadav
>
> On Wed, Apr 22, 2020 at 8:17 PM Olaoluwa Osuntokun 
> wrote:
>
>> Hi Nadav,
>>
>> Thanks for the updates! Super cool to see this concept continue to evolve
>> and integrate new technologies as they pop up.
>>
>> > I believe this would only require a few changes to existing nodes:
>>
>> Rather than a "few changes", this would to date be the largest
>> network-level update undertaken to the Lightning Network thus far. In the
>> past, we rolled out the new onion blob format (which enables changes like
>> this), but none of the intermediate nodes actually needed to modify their
>> behavior. New payment types like MPP+AMP only needed the _end points_ to
>> update, making this an end-to-end update that has been rolled out so far
>> in a de-synchronized manner.
>>
>> Re-phrasing: deploying this requires changes to the core channel state
>> machine (the protocol we use to make commitment updates), HTLC scripts,
>> on-chain HTLC handling and resolution, path finding algorithms (to only
>> seek out the new PTLC-enabled nodes), invoice changes and onion blob
>> processing.
>> I'd caution against underestimating how long all of this will take in
>> practice, and the degree of synchronization required to pull it all off
>> properly.
>>
>> For a few years now the question we've all been pondering is: do we wait
>> for
>> schnorr to roll out multi-hop locks, or just use the latest ECDSA based
>> technique? As dual deployment is compatible (we can mak

Re: [Lightning-dev] An update on PTLCs

2020-04-22 Thread Olaoluwa Osuntokun
Hi Nadav,

Thanks for the updates! Super cool to see this concept continue to evolve
and integrate new technologies as they pop up.

> I believe this would only require a few changes to existing nodes:

Rather than a "few changes", this would to date be the largest network-level
update undertaken to the Lightning Network thus far. In the past, we rolled
out the new onion blob format (which enables changes like this), but none of
the intermediate nodes actually needed to modify their behavior. New payment
types like MPP+AMP only needed the _end points_ to update, making this an
end-to-end update that has been rolled out so far in a de-synchronized
manner.

Re-phrasing: deploying this requires changes to the core channel state
machine (the protocol we use to make commitment updates), HTLC scripts,
on-chain HTLC handling and resolution, path finding algorithms (to only seek
out the new PTLC-enabled nodes), invoice changes and onion blob processing.
I'd caution against underestimating how long all of this will take in
practice, and the degree of synchronization required to pull it all off
properly.

For a few years now the question we've all been pondering is: do we wait for
schnorr to roll out multi-hop locks, or just use the latest ECDSA based
technique? As dual deployment is compatible (we can make the onion blobs for
both types the same), a path has always existed to first roll out with the
latest ECDSA based technique then follow up later to roll out the schnorr
version as well. However there's also a risk here as depending on how
quickly things can be rolled out, schnorr may become available
mid-development, which would possibly cause us to reconsider the ECDSA path
and have the network purely use schnorr to make things nice and uniform.

Zooming out for a bit, the solution space of "how channels can look post
scriptless-scripts + taproot" is rather large [1], and the addition of this
new technique allows for an even larger set of deployment possibilities.
This latest ECDSA variant is much simpler than the prior ones (which had a
few rounds of more involved ZKPs), but since it still uses OP_CMS, it can't
be used to modify the funding output.

[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

-- Laolu


On Wed, Apr 22, 2020 at 8:13 AM Nadav Kohen  wrote:

> Hello all,
>
> I'd like to give an update on the current state of thinking and coding
> surrounding replacing Hash-TimeLock Contracts (HTLCs) with Point-TimeLock
> Contracts (PTLCs) (aka Payment Hashes -> Payment Points) in hopes of
> sparking interest, discussion, development, etc.
>
>
> We Want Payment Points!
> ---
>
> Using point-locks (in PTLCs) instead of hash-locks (in HTLCs) for
> lightning payments is an all around improvement. HTLCs require the use of
> the same hash across payment routes (barring fancy ZKPs which are inferior
> to PTLCs) while PTLCs allow for payment de-correlation along routes. For an
> introduction to the topic, see
> https://suredbits.com/payment-points-part-1/.
>
> In addition to improving privacy in this way and protecting against
> wormhole attacks, PTLC-based lightning channels open the door to a large
> variety of interesting applications that cannot be accomplished with HTLCs:
>
> Stuckless (retry-able) Payments with proof of payment (
> https://suredbits.com/payment-points-part-2-stuckless-payments/)
>
> Escrow contracts over Lightning (
> https://suredbits.com/payment-points-part-3-escrow-contracts/)
>
> High/DLOG AMP (
> https://docs.google.com/presentation/d/15l4h2_zEY4zXC6n1NqsImcjgA0fovl_lkgkKu1O3QT0/edit#slide=id.g64c15419e7_0_40
> )
>
> Stuckless + AMP (an improvement on Boomerang) (
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002239.html
> )
>
> Pay-for-signature (
> https://suredbits.com/payment-points-part-4-selling-signatures/)
>
> Pay-for-commitment (
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-September/002166.html
> )
>
> Monotonic access structures on payment completion (
> https://suredbits.com/payment-points-monotone-access-structures/)
>
> Ideal Barrier Escrow Implementation (
> https://suredbits.com/payment-points-implementing-barrier-escrows/)
>
> And allowing for Barrier Escrows, we can even have
>
> Atomic multi-payment setup (
> https://suredbits.com/payment-points-and-barrier-escrows/)
>
> Lightning Discreet Log Contract (
> https://suredbits.com/discreet-log-contracts-on-lightning-network/)
>
> Atomic multi-payment update (
> https://suredbits.com/updating-and-transferring-lightning-payments/)
>
> Lightning Discreet Log Contract Novation/Transfer (
> https://suredbits.com/transferring-lightning-dlcs/)
>
> There are likely even more things that can be done with Payment Points so
> make sure to respond if I've missed any known ones.
>
>
> How Do We Get Payment Points?
> -
>
> Eventually, once we have Taproot, we can use 2p-Schnorr adaptor signatures
> in 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
> Indeed, that is what I’m suggesting

Gotcha, if this is indeed what you're suggesting (all HTLC spends are now
2-of-2 multi-sig), then I think the modifications to the state machine I
sketched out in an earlier email are required. An exact construction which
achieves the requirement of "you can't broadcast until you have a secret
which I can obtain from the HTLC sig for your commitment transaction, and my
secret is revealed with another swap" appears to be an open problem, atm.

Even if they're restricted in this fashion (must be 1-input-1-output,
SIGHASH_ALL, fees pre-agreed upon), they can still spend that with a CPFP
(while still unconfirmed in the mempool) and create another heavy tree,
which puts us right back at the same bidding war scenario?

> There are a bunch of ways of doing pinning - just opting into RBF isn’t
> even close to enough.

Mhmm, there're other ways of doing pinning. But with anchors as defined
in that spec PR, they're forced to spend with an RBF-replaceable
transaction, which means the party wishing to time things out can enter into
a bidding war. If the party trying to impede things participates in this
progressive absolute fee increase, it's likely that the war terminates
with _one_ of them getting into the block, which seems to resolve
everything?
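
To make the replacement mechanics concrete, here's a rough sketch (Python;
the constant and rule subset are assumed simplifications of BIP 125, which
also has rules about unconfirmed inputs and eviction counts):

    MIN_RELAY_FEE_RATE = 1.0  # sat/vbyte, assumed incremental relay feerate

    def replacement_ok(old_fee: int, old_vsize: int,
                       new_fee: int, new_vsize: int) -> bool:
        """Simplified RBF check: the replacement must pay a strictly higher
        feerate AND cover the old absolute fee plus the cost of relaying
        the replacement's own bandwidth."""
        if new_fee / new_vsize <= old_fee / old_vsize:
            return False  # feerate must improve
        return new_fee >= old_fee + MIN_RELAY_FEE_RATE * new_vsize

    # Each round of the bidding war must strictly raise the absolute fee,
    # so the war terminates with one side's spend confirming.
    assert replacement_ok(1_000, 200, 1_500, 200)
    assert not replacement_ok(1_000, 200, 1_100, 400)  # feerate went down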

-- Laolu


On Wed, Apr 22, 2020 at 4:20 PM Matt Corallo 
wrote:

>
>
> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun  wrote:
>
>
> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> > broadcasted transactions, but instead to CPFP a maybe-broadcasted
> > transaction by sending a transaction which spends it and seeing if it is
> > accepted
>
> Sorry, I still don't follow. By "we clearly need to go the other direction -
> all HTLC output spends need to be pre-signed.", you don't mean that the HTLC
> spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
> covenant? If the other party isn't restricted w.r.t _how_ they can spend the
> output (non-RBF'd, etc), then I don't see how that addresses anything.
>
>
> Indeed, that is what I’m suggesting. Anchor output and all. One thing we
> could think about is only turning it on over a certain threshold, and
> having a separate “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.
>
> Also see my mail elsewhere in the thread that the other party is actually
> forced to spend their HTLC output using an RBF-replaceable transaction.
> With
> that, I think we're all good here? In the end both sides have the ability
> to
> raise the fee rate of their spending transactions with the highest winning.
> As long as one of them confirms within the CLTV-delta, then everyone is
> made whole.
>
>
> It does seem like my cached recollection of RBF opt-in was incorrect but
> please re-read the intro email. There are a bunch of ways of doing pinning
> - just opting into RBF isn’t even close to enough.
>
> [1]: https://github.com/bitcoin/bitcoin/pull/18191
>
>
> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo 
> wrote:
>
>> A few replies inline.
>>
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> >
>> >
>> >> While this is somewhat unintuitive, there are any number of good
>> >> anti-DoS reasons for this, eg:
>> >
>> > None of these really strikes me as "good" reasons for this limitation,
>> > which is at the root of this issue, and will also plague any more
>> > complex Bitcoin contracts which rely on nested trees of transactions to
>> > confirm (CTV, Duplex, channel factories, etc). Regarding the various
>> > (seemingly arbitrary) package limits it's likely the case that any
>> > issues w.r.t computational complexity that may arise when trying to
>> > calculate evictions can be ameliorated with better choice of internal
>> > data structures.
>> >
>> > In the end, the simplest heuristic (accept the higher fee rate package)
>> > side steps all these issues and is also the most economically rational
>> > from a miner's perspective. Why would one prefer a higher absolute fee
>> > package (which could be very large) over another package with a higher
>> > total _fee rate_?
>>
>> This seems like a somewhat unnecessary drive-by insult of a project you
>> don't contribute to, but feel free to start with
>> a concrete suggestion here :).
>>
>> >> You'll note that B would be just fine if they had a way to safely
>> >> monitor the global mempool, and while this seems like a prudent
>> >> mitigation for lightning

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
> This seems like a somewhat unnecessary drive-by insult of a project you
> don't contribute to, but feel free to start with a concrete suggestion
> here :).

This wasn't intended as an insult at all. I'm simply saying if there's
concern about worst case eviction/replacement, optimizations likely exist.
Other developers that are interested in more complex multi-transaction
contracts have realized this as well, and there're various open PRs that
attempt to propose such optimizations [1].

> Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> broadcasted transactions, but instead to CPFP a maybe-broadcasted
> transaction by sending a transaction which spends it and seeing if it is
> accepted

Sorry, I still don't follow. By "we clearly need to go the other direction -
all HTLC output spends need to be pre-signed.", you don't mean that the HTLC
spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
covenant? If the other party isn't restricted w.r.t _how_ they can spend the
output (non-RBF'd, etc), then I don't see how that addresses anything.

Also see my mail elsewhere in the thread that the other party is actually
forced to spend their HTLC output using an RBF-replaceable transaction. With
that, I think we're all good here? In the end both sides have the ability to
raise the fee rate of their spending transactions with the highest winning.
As long as one of them confirms within the CLTV-delta, then everyone is
made whole.


[1]: https://github.com/bitcoin/bitcoin/pull/18191


On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo 
wrote:

> A few replies inline.
>
> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
> > Hi Matt,
> >
> >
> >> While this is somewhat unintuitive, there are any number of good
> >> anti-DoS reasons for this, eg:
> >
> > None of these really strikes me as "good" reasons for this limitation,
> > which is at the root of this issue, and will also plague any more complex
> > Bitcoin contracts which rely on nested trees of transactions to confirm
> > (CTV, Duplex, channel factories, etc). Regarding the various (seemingly
> > arbitrary) package limits it's likely the case that any issues w.r.t
> > computational complexity that may arise when trying to calculate
> > evictions can be ameliorated with better choice of internal data
> > structures.
> >
> > In the end, the simplest heuristic (accept the higher fee rate package)
> > side steps all these issues and is also the most economically rational
> > from a miner's perspective. Why would one prefer a higher absolute fee
> > package (which could be very large) over another package with a higher
> > total _fee rate_?
>
> This seems like a somewhat unnecessary drive-by insult of a project you
> don't contribute to, but feel free to start with
> a concrete suggestion here :).
>
> >> You'll note that B would be just fine if they had a way to safely
> >> monitor the global mempool, and while this seems like a prudent
> >> mitigation for lightning implementations to deploy today, it is itself
> >> a quagmire of complexity
> >
> > Is it really all that complex? Assuming we're talking about just watching
> > for a certain script template (the HTLC script) in the mempool to be able
> > to pull a pre-image as soon as possible. Early versions of lnd used the
> > mempool for commitment broadcast detection (which turned out to be a bad
> > idea so we removed it), but at a glance I don't see why watching the
> > mempool is so complex.
>
> Because watching your own mempool is not guaranteed to work, and during
> upgrade cycles that include changes to the policy rules an attacker could
> exploit your upgraded/non-upgraded status to perform the same attack.
>
> >> Further, this is a really obnoxious assumption to hoist onto lightning
> >> nodes - having an active full node with an in-sync mempool is a lot more
> >> CPU, bandwidth, and complexity than most lightning users were expecting
> >> to face.
> >
> > This would only be a requirement for Lightning nodes that seek to be a
> > part of the public routing network with a desire to _forward_ HTLCs. This
> > doesn't affect laptops or mobile phones which likely mostly have private
> > channels and don't participate in HTLC forwarding. I think it's pretty
> > reasonable to expect a "proper" routing node on the network to be backed
> > by a full-node. The bandwidth concern is valid, but we'd need concrete
> > numbers that compare the bandwidth overhead of mempool awareness
> > (assuming the latest and greatest me

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
Hi z,

Actually, the current anchors proposal already does this, since it enforces
a CSV of 1 block before the HTLCs can be spent (the block after
confirmation). So I think we already do this, meaning the malicious node is
already forced to use an RBF-replaceable transaction.
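
For intuition, a minimal sketch (Python; height-based CSV branch only,
constants from BIP 68/125) of why any spend that must satisfy a CSV of 1 is
always signalling replaceability:

    SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # BIP 68
    MAX_BIP125_RBF_SEQUENCE = 0xFFFFFFFD      # BIP 125: signal if <= this

    def satisfies_csv(n_sequence: int, csv_blocks: int) -> bool:
        """BIP 68/112, height-based branch: the relative locktime is only
        enforced when the disable flag is unset, and the low 16 bits must
        be >= the CSV value."""
        if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
            return False  # locktime disabled -> OP_CSV fails
        return (n_sequence & 0xFFFF) >= csv_blocks

    def signals_rbf(n_sequence: int) -> bool:
        return n_sequence <= MAX_BIP125_RBF_SEQUENCE

    # Any nSequence that satisfies the CSV has bit 31 clear, so it's always
    # below 0xfffffffe and therefore BIP 125-replaceable.
    for seq in (1, 2, 0xFFFF):
        if satisfies_csv(seq, 1):
            assert signals_rbf(seq)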

-- Laolu


On Wed, Apr 22, 2020 at 4:05 PM Olaoluwa Osuntokun 
wrote:

> Hi Z,
>
> > It seems to me that, if my cached understanding that `<0>
> > OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then
> > adding that to the hashlock branch (2 witness bytes, 0.5 weight) would
> > be a pretty low-weight mitigation against this attack.
>
> I think this works...so they're forced to spend the output with a non-final
> sequence number, meaning it *must* signal RBF. In this case, now it's the
> timeout-er vs the success-er racing based on fee rate. If the honest party
> (the one trying to time out the HTLC) bids a fee rate higher (need to also
> account for the whole absolute fee replacement thing), then things should
> generally work out in their favor.
>
> -- Laolu
>
>
> On Tue, Apr 21, 2020 at 11:08 PM ZmnSCPxj  wrote:
>
>> Good morning Laolu, Matt, and list,
>>
>>
>> > >  * With `SIGHASH_NOINPUT` we can make the C-side signature
>> > >  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
>> > >  signature for a higher-fee version of HTLC-Timeout (assuming my
>> > >  cached understanding of `SIGHASH_NOINPUT` still holds).
>> >
>> > no_input isn't needed. With simply single+anyone can pay, then B can
>> > attach a new input+output pair to increase the fees on their HTLC
>> > redemption transaction. As you mention, they now enter into a race
>> > against this malicious node to bump up their fees in order to win over
>> > the other party.
>>
>> Right, right, that works as well.
>>
>> >
>> > If the malicious node uses a non-RBF signalled transaction to sweep
>> > their HTLC, then we enter into another level of race, but this time on
>> > the mempool propagation level. However, if there exists a relay path
>> > to a miner running full RBF, then B's higher fee rate spend will win
>> > over.
>>
>> Hmm.
>>
>> So basically:
>>
>> * B has no mempool, because it wants to reduce its costs and etc.
>> * C broadcasts a non-RBF claim tx with low fee before A->B locktime (L+1).
>> * B does not notice this tx because:
>>   1.  The tx is too low fee to be put in a block.
>>   2.  B has no mempool so it cannot see the tx being propagated over the
>> P2P network.
>> * B tries to broadcast higher-fee HTLC-timeout, but fails because it
>> cannot replace a non-RBF tx.
>> * After L+1, C contacts the miners off-band and offers fee payment by
>> other means.
>>
>> It seems to me that, if my cached understanding that `<0>
>> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
>> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a pretty
>> low-weight mitigation against this attack.
>>
>> So I think the combination below gives us good size:
>>
>> * The HTLC-Timeout signature from C is flagged with
>> `OP_SINGLE|OP_ANYONECANPAY`.
>>   * Normally, the HTLC-Timeout still deducts the fee from the value of
>> the UTXO being spent.
>>   * However, if B notices that the L+1 timeout is approaching, it can
>> fee-bump HTLC-Timeout with some onchain funds, recreating its own signature
>> but reusing the (still valid) C signature.
>> * The hashlock branch in this case includes `<0> OP_CHECKSEQUENCEVERIFY`,
>> preventing C from broadcasting a low-fee claim tx.
>>
>> This has the advantages:
>>
>> * B does not need a mempool still and can run in `blocksonly`.
>> * The normal path is still the same as current behavior, we "only" add a
>> new path where if the L+1 timeout is approaching we fee-bump the
>> HTLC-Timeout.
>> * Costs are pretty low:
>>   * No need for extra RBF carve-out txo.
>>   * Just two additional witness bytes in the hashlock branch.
>> * No mempool rule changes needed, can be done with the P2P network of
>> today.
>>   * Probably still resilient even with future changes in mempool rules,
>> as long as typical RBF behaviors still remain.
>>
>> Is my understanding correct?
>>
>> Regards,
>> ZmnSCPxj
>>
>> >
>> > -- Laolu
>> >
>> > On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
>> bitcoin-...@lists.l

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
Hi Z,

> It seems to me that, if my cached understanding that `<0>
> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a
> pretty low-weight mitigation against this attack.

I think this works...so they're forced to spend the output with a non-final
sequence number, meaning it *must* signal RBF. In this case, now it's the
timeout-er vs the success-er racing based on fee rate. If the honest party
(the one trying to time out the HTLC) bids a fee rate higher (need to also
account for the whole absolute fee replacement thing), then things should
generally work out in their favor.

-- Laolu


On Tue, Apr 21, 2020 at 11:08 PM ZmnSCPxj  wrote:

> Good morning Laolu, Matt, and list,
>
>
> > >  * With `SIGHASH_NOINPUT` we can make the C-side signature
> > >  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
> > >  signature for a higher-fee version of HTLC-Timeout (assuming my cached
> > >  understanding of `SIGHASH_NOINPUT` still holds).
> >
> > no_input isn't needed. With simply single+anyone can pay, then B can
> > attach a new input+output pair to increase the fees on their HTLC
> > redemption transaction. As you mention, they now enter into a race
> > against this malicious node to bump up their fees in order to win over
> > the other party.
>
> Right, right, that works as well.
>
> >
> > If the malicious node uses a non-RBF signalled transaction to sweep their
> > HTLC, then we enter into another level of race, but this time on the
> > mempool propagation level. However, if there exists a relay path to a
> > miner running full RBF, then B's higher fee rate spend will win over.
>
> Hmm.
>
> So basically:
>
> * B has no mempool, because it wants to reduce its costs and etc.
> * C broadcasts a non-RBF claim tx with low fee before A->B locktime (L+1).
> * B does not notice this tx because:
>   1.  The tx is too low fee to be put in a block.
>   2.  B has no mempool so it cannot see the tx being propagated over the
> P2P network.
> * B tries to broadcast higher-fee HTLC-timeout, but fails because it
> cannot replace a non-RBF tx.
> * After L+1, C contacts the miners off-band and offers fee payment by
> other means.
>
> It seems to me that, if my cached understanding that `<0>
> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a pretty
> low-weight mitigation against this attack.
>
> So I think the combination below gives us good size:
>
> * The HTLC-Timeout signature from C is flagged with
> `OP_SINGLE|OP_ANYONECANPAY`.
>   * Normally, the HTLC-Timeout still deducts the fee from the value of the
> UTXO being spent.
>   * However, if B notices that the L+1 timeout is approaching, it can
> fee-bump HTLC-Timeout with some onchain funds, recreating its own signature
> but reusing the (still valid) C signature.
> * The hashlock branch in this case includes `<0> OP_CHECKSEQUENCEVERIFY`,
> preventing C from broadcasting a low-fee claim tx.
>
> This has the advantages:
>
> * B does not need a mempool still and can run in `blocksonly`.
> * The normal path is still the same as current behavior, we "only" add a
> new path where if the L+1 timeout is approaching we fee-bump the
> HTLC-Timeout.
> * Costs are pretty low:
>   * No need for extra RBF carve-out txo.
>   * Just two additional witness bytes in the hashlock branch.
> * No mempool rule changes needed, can be done with the P2P network of
> today.
>   * Probably still resilient even with future changes in mempool rules, as
> long as typical RBF behaviors still remain.
>
> Is my understanding correct?
>
> Regards,
> ZmnSCPxj
>
> >
> > -- Laolu
> >
> > On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
> bitcoin-...@lists.linuxfoundation.org> wrote:
> >
> > > Good morning Matt, and list,
> > >
> > > > RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds,
> how, now?")
> > > > =
> > > >
> > > > You'll note that in the discussion of RBF pinning we were pretty
> broad, and that that discussion seems to in fact cover
> > > > our HTLC outputs, at least when spent via (3) or (4). It does,
> and in fact this is a pretty severe issue in today's
> > > > lightning protocol [2]. A lightning counterparty (C, who
> received the HTLC from B, who received it from A) today could,
> > > > if B broadcasts the commitment transaction, spend an HTLC using
> the preimage with a low-fee, RBF-disabled transaction.
> > > > After a few blocks, A could claim the HTLC from B via the
> timeout mechanism, and then after a few days, C could get the
> > > > HTLC-claiming transaction mined via some out-of-band agreement
> with a small miner. This leaves B short the HTLC value.
> > >
> > > My (cached) understanding is that, since RBF is signalled using
> `nSequence`, any `OP_CHECKSEQUENCEVERIFY` also automatically 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-21 Thread Olaoluwa Osuntokun
> So what is needed is to allow B to add fees to HTLC-Timeout:

Indeed, anchors as defined in #lightning-rfc/688 allows this.

>  * With `SIGHASH_NOINPUT` we can make the C-side signature
>  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
>  signature for a higher-fee version of HTLC-Timeout (assuming my cached
>  understanding of `SIGHASH_NOINPUT` still holds).

no_input isn't needed. With simply single+anyone can pay, then B can attach
a new input+output pair to increase the fees on their HTLC redemption
transaction. As you mention, they now enter into a race against this
malicious node to bump up their fees in order to win over the other party.

If the malicious node uses a non-RBF signalled transaction to sweep their
HTLC, then we enter into another level of race, but this time on the mempool
propagation level. However, if there exists a relay path to a miner running
full RBF, then B's higher fee rate spend will win over.

-- Laolu

On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> Good morning Matt, and list,
>
>
>
> > RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds, how,
> > now?")
> > =
> >
> > You'll note that in the discussion of RBF pinning we were pretty broad,
> > and that that discussion seems to in fact cover our HTLC outputs, at
> > least when spent via (3) or (4). It does, and in fact this is a pretty
> > severe issue in today's lightning protocol [2]. A lightning counterparty
> > (C, who received the HTLC from B, who received it from A) today could,
> > if B broadcasts the commitment transaction, spend an HTLC using the
> > preimage with a low-fee, RBF-disabled transaction. After a few blocks, A
> > could claim the HTLC from B via the timeout mechanism, and then after a
> > few days, C could get the HTLC-claiming transaction mined via some
> > out-of-band agreement with a small miner. This leaves B short the HTLC
> > value.
>
> My (cached) understanding is that, since RBF is signalled using
> `nSequence`, any `OP_CHECKSEQUENCEVERIFY` also automatically imposes the
> requirement "must be RBF-enabled", including `<0> OP_CHECKSEQUENCEVERIFY`.
> Adding that clause (2 bytes in witness if my math is correct) to the
> hashlock branch may be sufficient to prevent C from making an RBF-disabled
> transaction.
>
> But then you mention out-of-band agreements with miners, which basically
> means the transaction might not be in the mempool at all, in which case the
> vulnerability is not really about RBF or relay, but sheer economics.
>
> The payment is A->B->C, and the HTLC A->B must have a larger timeout (L +
> 1) than the HTLC B->C (L), in abstract non-block units.
> The vulnerability you are describing means that the current time must now
> be L + 1 or greater ("A could claim the HTLC from B via the timeout
> mechanism", meaning the A->B HTLC has timed out already).
>
> If so, then the B->C transaction has already timed out in the past and can
> be claimed in two ways, either via B timeout branch or C hashlock branch.
> This sets up a game where B and C bid to miners to get their version of
> reality committed onchain.
> (We can neglect out-of-band agreements here; miners have the incentive to
> publicly leak such agreements so that other potential bidders can offer
> even higher fees for their versions of that transaction.)
>
> Before L+1, C has no incentive to bid, since placing any bid at all will
> leak the preimage, which B can then turn around and use to spend from A,
> and A and C cannot steal from B.
>
> Thus, B should ensure that *before* L+1, the HTLC-Timeout has been
> committed onchain, which outright prevents this bidding war from even
> starting.
>
> The issue then is that B is using a pre-signed HTLC-timeout, which is
> needed since it is its commitment tx that was broadcast.
> This prevents B from RBF-ing the HTLC-Timeout transaction.
>
> So what is needed is to allow B to add fees to HTLC-Timeout:
>
> * We can add an RBF carve-out output to HTLC-Timeout, at the cost of more
> blockspace.
> * With `SIGHASH_NOINPUT` we can make the C-side signature
> `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
> signature for a higher-fee version of HTLC-Timeout (assuming my cached
> understanding of `SIGHASH_NOINPUT` still holds).
>
> With this, B can exponentially increase the fee as L+1 approaches.
> If B can get HTLC-Timeout confirmed before L+1, then C cannot steal the
> HTLC value at all, since the UTXO it could steal from has already been
> spent.
>
> In particular, it does not seem to me that it is necessary to change the
> hashlock-branch transaction of C at all, since this mechanism is enough to
> sidestep the issue (as I understand it).
> But it does point to a need to make HTLC-Timeout (and possibly
> symmetrically, HTLC-Success) also fee-bumpable.
>
> Note as well that this does not require a mempool: B can run in
> 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-21 Thread Olaoluwa Osuntokun
Hi Matt,


> While this is somewhat unintuitive, there are any number of good anti-DoS
> reasons for this, eg:

None of these really strikes me as "good" reasons for this limitation, which
is at the root of this issue, and will also plague any more complex Bitcoin
contracts which rely on nested trees of transactions to confirm (CTV, Duplex,
channel factories, etc). Regarding the various (seemingly arbitrary) package
limits it's likely the case that any issues w.r.t computational complexity
that may arise when trying to calculate evictions can be ameliorated with
better choice of internal data structures.

In the end, the simplest heuristic (accept the higher fee rate package) side
steps all these issues and is also the most economically rational from a
miner's perspective. Why would one prefer a higher absolute fee package
(which could be very large) over another package with a higher total _fee
rate_?

> You'll note that B would be just fine if they had a way to safely monitor
> the global mempool, and while this seems like a prudent mitigation for
> lightning implementations to deploy today, it is itself a quagmire of
> complexity

Is it really all that complex? Assuming we're talking about just watching
for a certain script template (the HTLC script) in the mempool to be able to
pull a pre-image as soon as possible. Early versions of lnd used the mempool
for commitment broadcast detection (which turned out to be a bad idea so we
removed it), but at a glance I don't see why watching the mempool is so
complex.

> Further, this is a really obnoxious assumption to hoist onto lightning
> nodes - having an active full node with an in-sync mempool is a lot more
> CPU, bandwidth, and complexity than most lightning users were expecting to
> face.

This would only be a requirement for Lightning nodes that seek to be a part
of the public routing network with a desire to _forward_ HTLCs. This
doesn't affect laptops or mobile phones which likely mostly have private
channels and don't participate in HTLC forwarding. I think it's pretty
reasonable to expect a "proper" routing node on the network to be backed by
a full-node. The bandwidth concern is valid, but we'd need concrete numbers
that compare the bandwidth overhead of mempool awareness (assuming the
latest and greatest mempool syncing) with the overhead of the
channel update gossip and gossip queries which LN nodes face today
as is to see how much worse off they really would be.

As detailed a bit below, if nodes watch the mempool, then this class of
attack assuming the anchor output format as described in the open
lightning-rfc PR is mitigated. At a glance, watching the mempool seems like
a far less involved process compared to modifying the state machine as its
defined today. By watching the mempool and implementing the changes in
#lightning-rfc/688, then this issue can be mitigated _today_. lnd 0.10
doesn't yet watch the mempool (but does include anchors [1]), but unless I'm
missing something it should be pretty straightforward to add, which more or
less resolves this issue altogether.

> not fixing this issue seems to render the whole exercise somewhat useless

Depends on if one considers watching the mempool a fix. But even with that a
base version of anchors still resolves a number of issues including:
eliminating the commitment fee guessing game, allowing users to pay less on
force close, being able to coalesce 2nd level HTLC transactions with the
same CLTV expiry, and actually being able to reliably enforce multi-hop HTLC
resolution.

> Instead of making the HTLC output spending more free-form with
> SIGHASH_ANYONECAN_PAY|SIGHASH_SINGLE, we clearly need to go the other
> direction - all HTLC output spends need to be pre-signed.

I'm not sure this is actually immediately workable (need to think about it
more). To see why, remember that the commit_sig message includes HTLC
signatures for the _remote_ party's commitment transaction, so they can
spend the HTLCs if they broadcast their version of the commitment (force
close). If we don't somehow also _gain_ signatures (our new HTLC signatures)
allowing us to spend HTLCs on _their_ version of the commitment, then if
they broadcast that commitment (without revoking), then we're unable to
redeem any of those HTLCs at all, possibly losing money.

In an attempt to counteract this, we might say ok, the revoke message also
now includes HTLC signatures for their new commitment allowing us to spend
our HTLCs. This resolves things in a weaker security model, but doesn't
address the issue generally, as after they receive the commit_sig, they can
broadcast immediately, again leaving us without a way to redeem our HTLCs.

I'd need to think about it more, but it seems that following this path would
require an overhaul in the channel state machine to make presenting a new
commitment actually take at least _two phases_ (at least a full round trip).
The first phase would tender the commitment, but 

[Lightning-dev] Anchor Outputs Spec & Implementation Progress

2020-03-30 Thread Olaoluwa Osuntokun
Hi y'all,

We've been discussing the current state of the spec and implementation
readiness of anchor outputs for a few weeks now on IRC. As detailed
conversations are at times difficult to have on IRC, and there's no true
history, I figured I'd start a new discussion thread where we can hammer out
the final details.

First, on the current state of implementation. Anchor outputs are now fully
supported in the master branch of lnd. A user can opt into this new format
by specifying a new command line parameter: --protocol.anchors (off by
default).  Nodes running with this flag will use the feature bit 1337 for
negotiation. We didn't use the range above 65k, as we realized that would
result in rather large init messages. This feature will be included in our
upcoming 0.10 release, which will be entering the release mandate phase in
the next week or two. We also plan to add an entry in the wiki declaring our
usage of this feature bit.

Anchors in lnd implement the spec as is currently defined: two anchors at
all times, with each anchor utilizing 330 satoshis.

During the last spec meeting, the following concerns were raised about
having two anchors at all times (compared to one anchor plus re-using the
to_remote output):

  1. two anchors add extra bytes to the commitment transaction, increasing
     the fee burden for force closing
  2. two anchors pollute the UTXO set, so instead one anchor (for the force
     closing party) should be present, while the other party re-uses their
     to_remote output for this purpose

In response to the first concern: it is indeed the case that these new
commitments are more expensive, but they're only _slightly_ so. The new
default commitment weight is as if there're two HTLCs at all times on the
commitment transaction. Adding in the extra anchor cost (660 satoshis) is a
false equivalence as both parties are able to recover these funds if they
choose. It's also the case that force closes in the ideal case are only due to
nodes needing to go on-chain to sweep HTLCs, so the extra bytes may be
dwarfed by several HTLCs, particularly in a post MPP/AMP world. The extra
cost may seem large (relatively) when looking at a 1 sat/byte commitment
transaction. However, fees today in the system are on the rise, and if one
is actually in a situation where they need to resolve HTLCs on chain,
they'll likely require a fee rate higher than 1 sat/byte to have their
commitment confirm in a timely manner.
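
To put rough numbers on this (an illustrative sketch only; the per-anchor
weight is an assumption based on a 43-byte P2WSH output, not a spec-derived
figure):

    # Illustrative, assumed numbers: each anchor is a 330-sat P2WSH output,
    # roughly 43 bytes (172 weight units) on the commitment transaction.
    ANCHOR_VALUE_SAT = 330
    ANCHOR_WEIGHT_WU = 172

    def extra_anchor_fee(fee_rate_sat_per_vbyte: float) -> float:
        """Extra *fee* paid to carry two anchors, excluding the 660 sats
        of anchor value itself, which the parties (or, after the spending
        delay, anyone) can recover."""
        extra_vbytes = 2 * ANCHOR_WEIGHT_WU / 4.0
        return extra_vbytes * fee_rate_sat_per_vbyte

    print(f"anchor value locked: {2 * ANCHOR_VALUE_SAT} sats (recoverable)")
    for rate in (1, 10, 50):
        print(f"{rate:>2} sat/vbyte -> {extra_anchor_fee(rate):.0f} sats extra")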

On the topic of UTXO bloat, IMO re-purposing the to_remote output as an
anchor is arguably _worse_, as only a single party in the channel is able to
spend that output in order to remove its impact on the UTXO set. On the
other hand, using two anchors (with their special scripts) allows _anyone_
to sweep these outputs several blocks after the commitment transaction has
confirmed. In order to cover the case where the remote party has no balance,
but a single incoming HTLC, the channel initiator must either create a new
anchor output for this special case (creating a new type of ad-hoc reserve),
or always create a to_remote output for the other party (donating the 330
satoshis).  The first option reduces down to having two anchors once again,
while the second option creates an output which is likely uneconomical to
sweep in isolation (compared to anchors which can be swept globally in the
system taking advantage of the input aggregation savings).

The final factor to consider is if we wish to properly re-introduce a CSV
delay to the to_remote party in an attempt to remedy some game theoretical
issues w.r.t forcing one party to close early without a cost to the
instigator. In the past we made some headway in this direction, but then
reverted our changes as we discoverers some previously unknown gaming
vectors even with a symmetrical delay. If we keep two anchor as is, then we
leave this thread open to a comprehensive solution, as the dual anchor
format is fully decoupled from the rest of the commitment.

Circling back to our implementation, we're ready to deploy what we have as
is.  In the future, if the scheme changes, then we'll be able to easily
update all our users, as we're also concurrently working on a dynamic
commitment update protocol. By dynamic I mean that users will be able to
update their commitment type on the fly, compared to being locked into a
commitment type when the channel opens as is today.

Would love to hear y'alls thoughts on the two primary concerns laid out
above, and my response to them, thanks!

-- Laolu


[Lightning-dev] Potential Minor Sphinx Privacy Leak and Patch

2019-11-05 Thread Olaoluwa Osuntokun
Hi y'all,

A new paper analyzing the security of the Sphinx mix-net packet format [1]
(and also HORNET) has recently caught my attention. The paper is rather long
and very theory heavy, but the TL;DR is this:

* The OG Sphinx paper proved various aspects of its security using a
  model for onion routing originally put forth by Camenisch and
  Lysyanskaya [2].
* This new paper discovered that certain security notions put forth in
  [2] weren't actually practically achievable by real-world onion routing
  implementations (in this case Onion-Correctness), or weren't entirely
  correct or additive. New stronger security notions are put forth in
  response, along with extensions to the original Sphinx mix-net packet
  format that achieve these notions.
* A flaw they discovered in the original Sphinx paper [3] can allow an
  exit node to deduce a lower bound of the length of the path used to
  reach it. The issue is that the original paper constructs the
  _starting packet_ (what the exit hop will receive) by adding extra
  padding zeroes after the destination and identifier (we've more or
  less revamped this with our new onion format, but it still stands).
  An adversarial exit node can then locate the first set bit after the
  identifier (our payload in this case), then use that to compute the
  lower bound.
* One of the (?) reference Sphinx implementations recognizes that this
  was/is an issue in the paper and implements the mitigation [4].
* The fix on our end is easy: we need to replace those zero bytes with
  random bytes when constructing the starting packet.

I've created a PR to lnd's lightning-onion repo implementing this mitigation
[5].  As this changes the starting packet format, we also need to either
update the test vectors or we can keep them as is, and note that we use
zeroes so the test vectors are fully deterministic. My PR to the spec
patching the privacy leak leaves the test vectors untouched as is [6].
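
To illustrate the sender-side change (a sketch only; the 1300-byte
routing-info size is an assumption matching our onion format, and the real
patch lives in the PRs above):

    import os

    ROUTING_INFO_LEN = 1300  # assumed routing-info size from our onion format

    def build_starting_routing_info(final_payload: bytes) -> bytes:
        """Construct the sender's starting routing-info blob. Zero-padding
        after the final payload (the old behavior) lets an adversarial
        exit node find the first set bit and bound the path length; random
        padding removes that signal. Hop processing is unchanged, since
        intermediate nodes never interpret these bytes."""
        if len(final_payload) > ROUTING_INFO_LEN:
            raise ValueError("payload too large")
        pad_len = ROUTING_INFO_LEN - len(final_payload)
        return final_payload + os.urandom(pad_len)  # was: b"\x00" * pad_len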

With all that said, IMO we have larger existing privacy leaks just due to
our unique application of the packet format. As an example, a receiver can
use the CLTV of the final HTLC to deduce bounds on the path length as we
have a restricted topology and CLTV values for public channels are all
known. Another leak is our usage of the variable length onion payloads which
a node can use to ascertain path length, since the space they consume counts
towards the max hop count of 20-something.

In any case, we can patch this with just a few lines of code (fill out with
random bytes) at _senders_, and don't need any intermediate nodes to update.
The new and old packet construction algos are compatible as packet
_processing_ isn't changing, instead just the starting set of bytes are.

As always, please double-check my interpretation of the paper, as it's
possible I'm missing something. If my interpretation stands, then it's a
relatively minor privacy leak, and an easy low-hanging fruit that can be
patched without widespread network coordination.

-- Laolu

[1]: https://arxiv.org/abs/1910.13772
[2]: https://www.iacr.org/cryptodb/archive/2005/CRYPTO/1091/1091.pdf
[3]: https://cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf
[4]:
https://github.com/UCL-InfoSec/sphinx/blob/c05b7034eaffd8f98454e0619b0b1548a9fa0f42/SphinxClient.py#L67
[5]: https://github.com/lightningnetwork/lightning-onion/pull/40
[6]: https://github.com/lightningnetwork/lightning-rfc/pull/697


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-05 Thread Olaoluwa Osuntokun
Hi Rusty,

Agreed w.r.t the need for prepaid HTLCs, I've been mulling over other
alternatives for a few years now, and none of them seems to resolve the
series of routing related incentive issues that prepaid HTLCs would.

> Since both Offers and Joost's WhatSat are looking at sending messages,
> it's time to float actual proposals.

IMO both should just be done over HORNET, so we don't need to introduce a new
set of internal protocol level messages whenever we have some new
control/signalling need. Instead, we'd have a control/signal channel (give
me routes, invoices, sign this, etc), and a payment channel (HTLCs as used
today).

> 2. Adding an HTLC causes a *push* of a number of msat on commitment_signed
> (new field), and a hash.

The prepay amount should be signalled in the update add message instead.
This lets HTLCs carry a heterogeneous set of prepay amounts. In addition, we
need a new onion field as well to signal the incoming amount the node
_should_ have received (allows them to detect deviations in the sender's
intended route).

> 3. Failing/succeeding an HTLC returns some of those msat, and a count and
> preimage (new fields).

Failing shouldn't return the prepay amount, otherwise extending long lived
HTLCs then cancelling them at the last minute is still costless. This
costlessness of _adding_ an HTLC to a _remote_ commitment is IMO, the
biggest incentive flaw that exists today in the greater routing network.

>  You get to keep 50 msat[1] per preimage you present[2].

We should avoid introducing any new constants to the protocol, as they're
typically dreamed up independent of any empirical lessons learned from
deployment.

On the topic of the prepay cost, the channel update message should be
extended to allow nodes to signal prepay costs similar to the way we handle
regular payment success fees. In order to eliminate a number of costless
attacks possible today on the routing network, nodes should also be able to
signal a new coefficient used to _scale_ the prepay fee as a function of the
CLTV value of the incoming HTLC. With this addition, senders need to pay to
_add_ an HTLC to a remote commitment transaction (fixed base cost), then
also need to pay a variable rate that scales with the duration of the
proposed outgoing CLTV value (senders ofc don't prepay to themselves).  Once
we introduce this, loop attacks and the like are no longer free to launch,
and nodes can dynamically respond to congestion in the network by raising
their prepay prices.
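
As a sketch of that fee schedule (all names and numbers here are
hypothetical, just illustrating the shape of the pricing function, not a
spec proposal):

    def prepay_msat(base_prepay_msat: int,
                    cltv_scale_msat_per_block: int,
                    outgoing_cltv_delta: int) -> int:
        """Hypothetical prepay schedule: a fixed base cost to *add* an HTLC
        to a remote commitment, plus a component that scales with how long
        the HTLC can be held open (its outgoing CLTV delta). Both values
        would be advertised via an extended channel_update."""
        return base_prepay_msat + cltv_scale_msat_per_block * outgoing_cltv_delta

    # Holding an HTLC for ~1 day (144 blocks) costs more than ~40 blocks,
    # so loop attacks and long-held-then-cancelled HTLCs are no longer free.
    assert prepay_msat(50, 2, 144) > prepay_msat(50, 2, 40)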

-- Laolu

On Mon, Nov 4, 2019 at 6:25 PM Rusty Russell  wrote:

> Hi all,
>
> It's been widely known that we're going to have to have up-front
> payments for msgs eventually, to avoid Type 2 spam (I think of Type 1
> link-local, Type 2 though multiple nodes, and Type 3 liquidity-using
> spam).
>
> Since both Offers and Joost's WhatSat are looking at sending
> messages, it's time to float actual proposals.  I've been trying to come
> up with something for several years now, so thought I'd present the best
> I've got in the hope that others can improve on it.
>
> 1. New feature bit, extended messages, etc.
> 2. Adding an HTLC causes a *push* of a number of msat on
>commitment_signed (new field), and a hash.
> 3. Failing/succeeding an HTLC returns some of those msat, and a count
>and preimage (new fields).
>
> How many msat can you take for forwarding?  That depends on you
> presenting a series of preimages (which chain into a final hash given in
> the HTLC add), which you get by decoding the onion.  You get to keep 50
> msat[1] per preimage you present[2].
>
> So, how many preimages does the user have to give to have you forward
> the payment?  That depends.  The base rate is 16 preimages, but subtract
> one for each leading 4 zero bits of the SHA256(blockhash | hmac) of the
> onion.  The blockhash is the hash of the block specified in the onion:
> reject if it's not in the last 3 blocks[3].
>
> This simply adds some payment noise, while allowing a hashcash style
> tradeoff of sats for work.
>
> The final node gets some variable number of preimages, which adds noise.
> It should take all and subtract from the minimum required invoice amount
> on success, or take some random number on failure.
>
> This leaks some forward information, and makes an explicit tradeoff for
> the sender between amount spent and privacy, but it's the best I've been
> able to come up with.
>
> Thoughts?
> Rusty.
>
> [1] If we assume $1 per GB, $10k per BTC and 64k messages, we get about
> 655msat per message.  Flat pricing for simplicity; we're trying to
> prevent spam, not create a spam market.
> [2] Actually, a number and a single preimage; you can check this is
> indeed the n'th preimage.
> [3] This reduces incentive to grind the damn things in advance, though
> maybe that's dumb?  We can also use a shorter hash (siphash?), or
> even truncated SHA256 (128 bits).

Re: [Lightning-dev] Rendez-vous on a Trampoline

2019-11-05 Thread Olaoluwa Osuntokun
Hi t-bast,

> She creates a Bolt 11 invoice containing that pre-encrypted onion.

This seems insufficient, as if the prescribed route that Alice selects fails,
then the sender has no further information to go off of (let's say Teddy is
offline, but there're other paths). cdecker's rendezvous sketch using Sphinx
you linked above also suffers from the same issue: you need some other
bi-directional communication medium between the sender and receiver in order
to account for payment failures. Beyond that, if any failures occur in the
latter half of the route (the part that's opaque to the sender), then the
sender isn't able to incorporate the failure information into their path
finding. As a result, the payer would need to send the error back to the
receiver for decrypting, possibly ping-ponging several times in a payment
attempt.

On the other hand, using HORNET for rendezvous routing as was originally
intended gives the sender+receiver a communication channel they can use to
exchange further payment information, and also a channel to use for
decryption of the opaque errors. Amongst many other things, it would also
give us a payment-level ACK [1], which may be a key component for payment
splitting (otherwise you have no idea if _any_ shards have even arrived at
the other side).


[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001524.html

-- Laolu

On Tue, Oct 22, 2019 at 5:02 AM Bastien TEINTURIER  wrote:

> Good morning everyone,
>
> Since I'm a one-trick pony, I'd like to talk to you about...guess what?
> Trampoline!
> If you watched my talk at LNConf2019, I mentioned at the end that
> Trampoline enables high AMP very easily.
> Every Trampoline node in the route may aggregate an incoming multi-part
> payment and then decide on how
> to split the outgoing aggregated payment. It looks like this:
>
>          .- 1mBTC -.   .--- 2mBTC ---.
>         /           \ /               \
> Alice - 3mBTC --> Ted --- 4mBTC ---> Terry - 6mBTC --> Bob
>         \           /
>          `- 2mBTC -'
>
> In this example, Alice only has small-ish channels to Ted so she has to
> split in 3 parts. Ted has good outgoing
> capacity to Terry so he's able to split in only two parts. And Terry has a
> big channel to Bob so he doesn't need
> to split at all.
> This is interesting because each intermediate Trampoline node has
> knowledge of his local channels balances,
> thus can make more informed decisions than Alice on how to efficiently
> split to reach the next node.
>
> But it doesn't stop there. Trampoline also enables a better rendez-vous
> routing than normal payments.
> Christian has done most of the hard work to figure out how we could do
> rendez-vous on top of Sphinx [1]
> (thanks Christian!), so I won't detail that here (but I do plan on
> submitting a detailed spec proposal with all
> the crypto equations and nice diagrams someday, unless Christian does it
> first).
>
> One of the issues with rendez-vous routing is that once Alice (the
> recipient) has created her part of the onion,
> she needs to communicate that to Bob (the sender). If we use a Bolt 11
> invoice for that, it means we need to
> put 1366 additional bytes to the invoice (plus some additional information
> for the ephemeral key switch).
> If the amount Alice wants to receive is big and may require multi-part,
> Alice has to decide upfront on how to split
> and provide multiple pre-encrypted onions (so we need 1366 bytes *per
> partial payment*, which kinda sucks).
>
> But guess what? Bitcoin Trampoline fixes that™. Instead of doing the
> pre-encryption on a normal onion, Alice
> would do the pre-encryption on a Trampoline onion (which is much smaller,
> in my prototype it's 466 bytes).
> And that allows rendez-vous routing to benefit from Trampoline's ability
> to do multi-part at each hop.
> Obviously since the onion is smaller, that limits the number of trampoline
> hops that can be used, but don't
> forget that there are additional "normal" hops between each Trampoline
> node (and the final Trampoline spec
> can choose the size of the Trampoline onion to enable a good enough
> rendez-vous).
>
> Here is what it would look like. Alice chooses to rendez-vous at Terry.
> Alice wants the payment to go through Terry
> and Teddy so she pre-encrypts a Trampoline onion with that route:
>
> Alice <--- Teddy <--- Terry
>
> She creates a Bolt 11 invoice containing that pre-encrypted onion. Bob
> picks up that invoice and can either reach
> Terry directly (via a normal payment route) or via another Trampoline node
> (Toad?). Bob finalizes the encryption of
> the Trampoline onion and sends it onward. Bob can use multi-part and split
> the payment however he wishes,
> because every Trampoline node in the route will be free to aggregate and
> re-split differently.
> Terry is the only intermediate node to know that rendez-vous routing was
> used. Terry 

Re: [Lightning-dev] Increasing fee defaults to 5000+500 for a healthier network?

2019-10-11 Thread Olaoluwa Osuntokun
Hi Rusty,

I think this change may be a bit misguided, and we should be careful about
making sweeping changes to default values like this such as fees. I'm
worried that this post (and the subsequent LGTMs by some developers)
promotes the notion that somehow in Lightning, developers decide on fees
(fees are too low, let's raise them!).

IMO, there're a number of flaws in the reasoning behind this proposal:

> defaults actually indicate lower reliability, and routing gets tarpitted
> trying them all

Defaults don't necessarily indicate higher/lower reliability. Issuing a
single CLI command to raise/lower the fees on one's node doesn't magically
make the owner of said node a _better_ routing node operator. If a node has
many channels, with all of them poorly managed, then path finding algorithms
can extrapolate the overall reliability of a node based on failures of
a sample of channels connected to that node. We've started to experiment with
such an approach here; so far the results are promising [1].

> There's no meaningful market signal in fees, since you can't drop much
> below 1ppm.

The market signal one should be extracting from the current state is: a true
market hasn't yet emerged as routing node operators are mostly hands off (as
they're used to being with their existing bitcoin node) and have yet to begin
to factor in the various costs of operating a node into their fee schedule.
Only a handful of routing node operators have started to experiment with
distinct fee settings in an attempt to feel out the level of elasticity in
the forwarding market today (if I double my fees, by how much do my daily
forwards and fee revenue drop off?).

Ken Sedgwick had a pretty good talk on this topic at the most recent SF
Lightning Devs meetup [2]. The talk itself unfortunately wasn't recorded,
but there're a ton of cool graphs really digging into the various parameters
in the current market. He draws a similar conclusion stating that: "Many
current lightning channels are not charging enough fees to cover on-chain
replacement".

Developers raising the default fees (on their various implementations) won't
address this as it shows that the majority of participants today (routing
node operators) aren't yet thinking about their break even costs. IMO
generally this is due to a lack of education, which we're working to address
with our blog post series (eventually to be a fully fledged standalone
guide) on routing node operation[3]. Tooling also needs to improve to give
routing node operators better insight into their current level of
performance and efficacy of their capital allocation decisions.

> Compare lightningpowerusers.com which charges (1 msat + 5000 ppm),
> and seems to have significant usage, so there clearly is market tolerance
> for higher fees.

IIRC, the fees on that node are only that high due to user error by the
operator when setting their fees. `lnd` exposes fees on the command line
using the fixed point numerator which some find confusing. We'll likely add
another argument that allows users to specify their fees using their basis
points (bps) or a plain old percentage.
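
For reference, the forwarding fee itself is computed per BOLT 7 from a base
and a proportional part in millionths; the example below just illustrates
the fixed-point pitfall:

    def forwarding_fee_msat(amount_msat: int, fee_base_msat: int,
                            fee_proportional_millionths: int) -> int:
        """BOLT 7 forwarding fee: a flat base plus a proportional component
        expressed in parts-per-million (ppm) of the forwarded amount."""
        return fee_base_msat + amount_msat * fee_proportional_millionths // 1_000_000

    # 5000 ppm is 0.5%: it's easy to misread the fixed-point numerator as a
    # percentage and end up charging far more than the intended rate.
    assert forwarding_fee_msat(1_000_000_000, 1, 5_000) == 5_000_001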

Independent of that, I don't think you can draw the conclusion that they
have "significant" usage, based on solely the number of channels they have.
That node has many channels due to the operator campaigning for users to
open channels with them on Twitter, as they provided an easy way to package
lnd for desktop users. A node having numerous channels doesn't necessarily
mean that they have significant usage, as it's easy to "paint the tape" with
on-chain transactions. What really matters is how effectively the node is
managed.

In terms of market signals, IMO the gradual rise of fees _above_ the current
widely used default is a strong signal as it will indicate a level of
maturation in the market. Preemptively raising defaults only adds noise as
then the advertised fees are less indicative of the actual market
conditions. Instead, we should (to promote a healthier network) educate
prospective routing node operators on best practices, provide analysis
tools they can use to make channel management and liquidity allocation
decisions, and leave it up to the market participants to converge on steady
state economically rational fees!

[1]: https://github.com/lightningnetwork/lnd/pull/3462
[2]:
https://github.com/ksedgwic/lndtool/blob/graphstats/lightning-fee-market.pdf
[3]:
https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html


On Thu, Oct 10, 2019 at 7:50 PM Rusty Russell  wrote:

> Hi all,
>
> I've been looking at the current lightning network fees, and it
> shows that 2/3 are sitting on the default (1000 msat + 1 ppm).
>
> This has two problems:
> 1. Low fees are now a negative signal: defaults actually indicate
>lower reliability, and routing gets tarpitted trying them all.
> 2. There's no meaningful market signal in fees, since you can't
>drop much below 1ppm.
>
> Compare 

Re: [Lightning-dev] CVEs assigned for lightning projects: please upgrade!

2019-09-10 Thread Olaoluwa Osuntokun

We've confirmed instances of the CVE being exploited in the wild. If you're
not on the following versions of any of these implementations (these
versions are fully patched), then you need to upgrade now to avoid risk of
funds loss:
* lnd v0.7.1 -- anything 0.7 and below is vulnerable
* c-lightning v0.7.1 -- anything 0.7 and below is vulnerable
* eclair v0.3.1 -- anything 0.3 and below is vulnerable

We'd also like to remind the community that we still have limits in place on
the network to mitigate widespread funds loss, and please keep that in mind
when putting funds onto the network at this early stage.

If you have trouble updating for whatever reason, feel free to reach out to
the developers of the respective implementations referenced above.


On Fri, Aug 30, 2019 at 2:34 AM Rusty Russell  wrote:

>
> Security issues have been found in various lightning projects which
> could cause loss of funds.
>
> Full details will be released in 4 weeks (2019-09-27), please upgrade
> well before then.
>
> Affected releases:
>
> CVE-2019-12998 c-lightning < 0.7.1
> CVE-2019-12999 lnd < 0.7
> CVE-2019-13000 eclair <= 0.3
>
> Cheers,
> Rusty.




Re: [Lightning-dev] Improving Lightning Network Pathfinding Latency by Path Splicing and Other Real-Time Strategy Game Techniques

2019-08-02 Thread Olaoluwa Osuntokun
> I found out recently (mid-2019) that mainnet Lightning nodes take an
> inordinate amount of time to find a route between themselves and an
> arbitrary payee node.
> Typical quotes suggested that commodity hardware would take 2 seconds to
> find a route

Can you provide a reproducible benchmark or further qualify this number (2
seconds)? Not commenting on the rest of this email as I haven't read the
rest of it yet, but this sounds like just an issue of engineering
optimization. AFAIK, most implementations are using unoptimized on-disk
representations of the graph, do minimal caching, and really haven't made
any sort of push to optimize these hot spots. There's no reason that finding
a path in a graph of a 10s of thousands of edges should take _2 seconds_.

Beyond that, to my knowledge, all implementations other than lnd implement a
very rudimentary time based edge/node pruning in response to failures. I
call it rudimentary, as it just waits a small period of time, then forgets
all its past path finding history. As a result, each attempt will include
nodes that have been known to be offline, or nonoperational channels,
effectively doing redundant work each attempt.

The latest version of our software has moved beyond this [1], and will
factor in past path finding attempts into its central "mission control",
allowing it to learn from each attempt, and even import existing state into
its path finding memory (essentially a confidence factor that takes into
account the cost of a failed attempt mapped into a scalar weight we can use
for comparison purposes). This is just an initial first step, but we've seen
a significant improvement with just a _little_ bit more intelligence in our
path finding heuristics. We should take care to not get distracted by more
distant "pie in the sky" like ideas (since many of them are half-baked),
lest we ignore these low hanging engineering fruits and incremental
algorithmic updates.
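
To illustrate the kind of "confidence factor" scalar weight I mean, here's a
rough sketch in Go (the decay shape and parameters are made up for
illustration, not lnd's actual mission control code):

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // edgePenalty maps a past payment failure on an edge into a scalar
    // weight: a recent failure makes the edge look expensive, and the
    // penalty decays as the failure ages.
    func edgePenalty(lastFailure time.Time, basePenaltyMsat int64, halfLife time.Duration) int64 {
        if lastFailure.IsZero() {
            return 0 // never failed: no penalty
        }
        age := time.Since(lastFailure)
        decay := math.Pow(0.5, float64(age)/float64(halfLife))
        return int64(float64(basePenaltyMsat) * decay)
    }

    func main() {
        // An edge that failed 30 minutes ago, with a 1 hour half-life.
        failedAt := time.Now().Add(-30 * time.Minute)
        fmt.Println("penalty (msat):", edgePenalty(failedAt, 100_000, time.Hour)) // ~70,710
    }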

> This is concerning, of course, since we would like the public Lightning
> Network to grow a few more orders of magnitude.

I'd say exactly _how large_ the _public_ graph needs to be is an open
question. Most of the public channels in the network today are extremely
underutilized, with capital largely being over-allocated. Based on
our active network analysis, only a few hundred nodes are actively
managing their channels effectively, allowing them to be effective routing
nodes.

moar channels != better

As a result, clients today are able to ignore a _majority_ of the known
graph, and still have their payment attempts be successful, as they'll
ignore all the routing nodes that aren't actually walking the walk (proper
channel management).

-- Laolu

[1]: https://github.com/lightningnetwork/lnd/pull/2802


On Wed, Jul 31, 2019 at 6:52 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Introduction
> 
>
> I found out recently (mid-2019) that mainnet Lightning nodes take an
> inordinate amount of time to find a route between themselves and an
> arbitrary payee node.
> Typical quotes suggested that commodity hardware would take 2 seconds to
> find a route, then take a few hundred milliseconds for the actual payment
> attempt.
> With the help of Rene Pickhardt I was able to confirm that indeed, much of
> payment latency actually arises from the pathfinding algorithm and not the
> actual payment-over-routes.
>
> This is concerning, of course, since we would like the public Lightning
> Network to grow a few more orders of magnitude.
> Given that the best pathfinding search algorithms will be O(n log n) on
> the size of the network, we need to consider how to speed up the finding of
> routes.
>
> `permuteroute` and Faster Next Pathfinding Attempts
> ===
>
> As I was collaborating with Rene, JIT-Routing was activated in my core
> processing hardware.
>
> As I was contemplating this problem, I considered, that JIT-Routing would
> (ignoring fees) effectively "reroute" the original route around the failing
> channel.
>
> In particular, JIT-Routing is advantageous for these reasons:
>
> 1.  There is no need to report back the failure to the original payer.
> 2.  The node performing JIT-Routing has accurate information about its
> channel balances and which of its outgoing channels would be most effective
> to route through instead of that indicated by the original payer.
> It also knows of non-published channels it has.
> 3.  Searching for a circular rebalancing route could be done much quicker
> since the JIT-Routing node could restrict itself to looking only in its
> friend-of-friend network, and simply fail if it could not find a circular
> rebalancing route quickly in the reduced search space.
>
> The first two advantages cannot be emulated by the original payer.
>
> However, I realized that the third advantage *could* be emulated by the
> original payer.
> This is advantageous as the payer node can implement 

[Lightning-dev] Extending Associated Data in the Sphinx Packet to Cover All Payment Details

2019-02-07 Thread Olaoluwa Osuntokun
Hi y'all,

I'm not sure how good defenses are on implementations other than lnd, but
all implementations *should* be keeping a Sphinx reply cache of the past
shared secrets they know of [1]. If a node comes across an identical shared
secret of that in the cache, then they should reject that packet. Otherwise,
it's possible for an adversary to inject a stale packet back into the
network in order to observe the propagation of the packet through the
network. This is referred to as a "replay" attack, and is a de-anonymization
vector.

Typically mix nets enforce some sort of session lifetime identifier to allow
nodes to garbage collect their old shared secrets state, otherwise it grows
indefinitely. As our messages are actually payments with a clear expiration
date (the absolute CLTV), we can use this as the lifetime of a payment
circuit session. The Sphinx packet construction allows some optional
plaintext data to be authenticated alongside the packet. In the current
protocol we use this to bind the payment hash along with the packet. The
rationale is that in order for me to accept the packet, the attacker must
use the _same_ payment hash. If the pre-image has already been revealed,
then the "victim" can instantly pull the payment, attaching a cost to a
replay attempt.
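
A toy version of such a CLTV-scoped replay cache (field names and API are
illustrative, not the lightning-onion replay log linked in [1]):

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // A toy Sphinx replay cache: shared secrets are remembered until the
    // absolute CLTV of the payment circuit expires, at which point they
    // can be garbage collected.
    type replayCache struct {
        entries map[[32]byte]uint32 // hash(sharedSecret) -> CLTV expiry height
    }

    func newReplayCache() *replayCache {
        return &replayCache{entries: make(map[[32]byte]uint32)}
    }

    // observe returns false if the shared secret was already seen (a replay).
    func (c *replayCache) observe(sharedSecret []byte, cltvExpiry uint32) bool {
        key := sha256.Sum256(sharedSecret)
        if _, seen := c.entries[key]; seen {
            return false
        }
        c.entries[key] = cltvExpiry
        return true
    }

    // gc drops entries whose circuit lifetime (absolute CLTV) has passed,
    // keeping the cache from growing indefinitely.
    func (c *replayCache) gc(bestHeight uint32) {
        for key, expiry := range c.entries {
            if expiry <= bestHeight {
                delete(c.entries, key)
            }
        }
    }

    func main() {
        cache := newReplayCache()
        secret := []byte("example shared secret")
        fmt.Println(cache.observe(secret, 600_000)) // true: first sighting
        fmt.Println(cache.observe(secret, 600_000)) // false: replay rejected
        cache.gc(600_001)                           // circuit expired: entry dropped
    }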

However, since the CLTV isn't also authenticated, it's possible to attempt
to inject a new HTLC with a fresher CLTV. If the node isn't keeping
around all pre-images, then they might forward this since it passes the
regular expiry tests. If we instead extend the associated data payload to
cover the CLTV as well, then this binds the adversary to using the same CLTV
details. As a result, the "victim" node will reject the HTLC since it has
already expired. Continuing down this line, if we progressively add more
payment details, for example the HTLC amount, then this forces the adversary
to commit the same amount as the original HTLC, potentially making the
probing vector more expensive (as they're likely to lose the funds on
attempt).
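
Concretely, the MAC check with an extended associated data payload might
look something like this (the field layout here is illustrative, not the
BOLT 4 wire encoding):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // calcMAC computes the per-hop MAC over the packet plus an associated
    // data payload covering the payment hash, CLTV, amount, and packet
    // version, so a replayed packet with any of these swapped fails the
    // MAC check.
    func calcMAC(muKey, packet, paymentHash []byte, cltv uint32, amtMsat uint64, version byte) []byte {
        ad := make([]byte, 0, len(paymentHash)+13)
        ad = append(ad, paymentHash...)
        ad = binary.BigEndian.AppendUint32(ad, cltv)
        ad = binary.BigEndian.AppendUint64(ad, amtMsat)
        ad = append(ad, version)

        mac := hmac.New(sha256.New, muKey)
        mac.Write(packet)
        mac.Write(ad)
        return mac.Sum(nil)
    }

    func main() {
        key := []byte("per-hop mu key")
        pkt := []byte("onion packet bytes")
        payHash := make([]byte, 32)

        m1 := calcMAC(key, pkt, payHash, 600_000, 1_000_000, 0)
        m2 := calcMAC(key, pkt, payHash, 600_144, 1_000_000, 0) // fresher CLTV
        fmt.Println(hmac.Equal(m1, m2))                         // false: replay caught
    }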

If this were to be deployed, then we can do it by using a new packet version
in the Sphinx packet. Nodes that come across this new version (signalled by
a global feature bit) would then know to include the extra information in
the AD for their MAC check. While we're at it, we should also actually
*commit* to the packet version. Right now nodes can swap out the version to
anything they want, potentially causing another node to reject the packet.
This should also be added to the AD to ensure the packet can't be modified
without another node detecting it.

Longer term, we may end up with _all_ payment details in the Sphinx packet.
The only thing outside in the update_add_htlc message would be link level
details such as the HTLC ID.

Thoughts?

[1]:
https://github.com/lightningnetwork/lightning-onion/blob/master/replaylog.go

-- Laolu
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] SURBs as a Solution for Protocol-Level Payment ACKs

2019-02-07 Thread Olaoluwa Osuntokun
Hi y'all,

Recently we've started to do more design work related to the Sphinx packet
(EOB format, rendezvous protocol). This prompted me to re-visit the original
Sphinx paper to refresh my memory w.r.t some of the finer details of the
protocol. While I was re-reading the paper, I realized that we may be able
to use SURBs (single-use-reply-blocks) to implement a "payment ACK" for
each sent HTLC.

(it's worth mentioning that switching to HORNET down the line would solve
this problem as well since the receiver already gets a multi-use backwards
route that they can use to send information back to the sender)

Right now HTLC routing is mainly a game of "send and hope it arrives", as
you have no clear indication of the _arrival_ of an HTLC at the destination.
Instead, you only receive a protocol level message if the HTLC failed for
w/e reason, or if it was successfully redeemed.  As part of BOLT 1.1, it was
agreed upon that we should implement some sort of "payment ACK" feature. A
payment ACK scheme is strongly desired as it:

  * Allows the sender to actually know when a payment has reached the
receiver which is useful for many higher level protocols. Atm, the
sender is unable to distinguish an HTLC being "black holed" from one
that's actually reached the receiver, and they're just holding on to it.
  * AMP implementations would be aided by being able to receive feedback on
successfully routed splits. If we're able to have the receiver ACK each
partial payment, then implementations can more aggressively split
payments as they're able to gain assurance that the first 2 BTC of 5
total have actually reached the receiver, and weren't black holed.
  * Enforcing and relying on ACKs may also thwart silly games receivers
might play, claiming that the HTLC "didn't actually arrive".

Some also call this feature a "soft error", as a possible implementation
might be to re-use the existing onion error protocol we've deployed today.
For reference, in order to send errors back along the route in a way that
doesn't reveal the sender of the HTLC to the receiver (or any of the
intermediate nodes), we re-use the shared secret each hop has derived, and
onion wrap a MAC'd error to the sender.
they've received a well formed error, but the sender is able to attribute an
error to a node in the route based on which shared secret they're able to
check the MAC with.

The original Sphinx packet format has a way for the receiver to send a
message back to the sender. This was originally envisioned to allow the
receiver to send a reply email/message back to the sender without knowing
who they were, and also in a manner that was bit-wise indistinguishable from
a regular forwarded packet. This is called a SURB or "single use reply
block". A SURB is composed of: a pre-crafted sphinx packet for the
"backwards route" (which can be distinct from the forwards route), the first
hop of the backwards route, and finally a symmetric key to use when
encrypting the reply.
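
For concreteness, a sketch of what a SURB bundles together (illustrative
field names, not the lightning-onion API):

    package main

    import "fmt"

    // A single-use reply block bundles everything the receiver needs to
    // send one reply without learning who the sender is.
    type SURB struct {
        // FirstHop identifies the first node of the (sender-chosen)
        // backwards route to hand the reply packet to.
        FirstHop [33]byte

        // ReplyPacket is the pre-crafted Sphinx packet for the backwards
        // route, opaque to the replier.
        ReplyPacket []byte

        // ReplyKey is the symmetric key used to encrypt the reply payload
        // before attaching it to ReplyPacket.
        ReplyKey [32]byte
    }

    func main() {
        // The payment ACK flow: the SURB rides in the forward onion
        // payload (the HTLC add); the receiver simply returns it, and its
        // arrival back at the sender is the ACK.
        var ack SURB
        fmt.Println("backwards route starts at:", ack.FirstHop[:4])
    }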

When we more or less settled on using Sphinx, we started to remove things
that we didn't have a clear use for at the time. Two things that were
removed were the original end-to-end payload, and also the SURB. Removing
the payload made the packet size smaller, and it didn't seem realistic to
give _each_ hop a SURB to send a reply back.

In order to implement payment ACKs, we can have the sender craft a SURB (for
the ACK), and mark the receipt of the SURB as the payment ACK itself.
Creating and processing a SURB is identical to the regular HTLC packets we
use today. As a result, the code impact to the sphinx packet logic is
minimal. We'd then also re-introduce the e2e payload so we can carry the
SURB in the forward direction (HTLC add). The backwards packet would also
have a payload of random bytes with the same size as a regular packet to
make them look identical on the wire.

This payload can further be put to use in order to implement streaming or
subscription payments in a way. Since we must add a payload in order to
make send/reply look the same, we can also piggyback some useful additional
data. Each time a payment is sent, the receiver can use the extra payload to
stack on details such as:
  * A new invoice to pay for the metered service being paid for.
  * An invoice along with a deadline for when this must be paid, lest the
subscription service expire.
  * Details of lightning-native API
  * etc, etc

IMO, this approach is better than a potential client-server payment
negotiation protocol as it doesn't require any additional servers alongside
the node, maintains sender anonymity, and doesn't rely on any sort of
PKI.

From the perspective of packet analysis, errors today are identifiable due
to the packet size (though we do pad them out to avoid being able to
distinguish some errors from others on the wire). SURBs, on the other hand,
have the same profile as regular HTLC adds since they use the 

Re: [Lightning-dev] Network probes

2019-01-18 Thread Olaoluwa Osuntokun
Hi Andrea,

> This saves the receiving node from doing a database lookup

Nodes can and eventually should start using bloom filters to avoid most
database lookups for incoming payment hashes. The false positive rate can be
set to a very low value, as the bloom filter doesn't need to be
transmitted, and
can even be stored persistently. As an optimization, nodes may opt to
maintain a series of hierarchical bloom filters, with the highest tier
filter containing only payment hashes for non-expired invoices. Clever bloom
filter usage by nodes would allow them to avoid almost all database lookups
for incoming unknown payment hashes (probes or not).
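
A minimal sketch of the idea (a toy bloom filter with illustrative
parameters, not a production design):

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // A minimal bloom filter for incoming payment hashes: since it's never
    // transmitted, we can size it for a very low false positive rate and
    // only hit the invoice database on a filter match.
    type bloomFilter struct {
        bits  []byte
        k     uint32 // number of hash functions
        nbits uint32
    }

    func newBloomFilter(nbits, k uint32) *bloomFilter {
        return &bloomFilter{bits: make([]byte, (nbits+7)/8), k: k, nbits: nbits}
    }

    // indexes derives k bit positions from a payment hash by re-hashing it
    // with a one-byte counter (fine for a sketch, not the fastest scheme).
    func (f *bloomFilter) indexes(payHash []byte) []uint32 {
        out := make([]uint32, f.k)
        for i := uint32(0); i < f.k; i++ {
            h := sha256.Sum256(append([]byte{byte(i)}, payHash...))
            out[i] = binary.BigEndian.Uint32(h[:4]) % f.nbits
        }
        return out
    }

    func (f *bloomFilter) add(payHash []byte) {
        for _, idx := range f.indexes(payHash) {
            f.bits[idx/8] |= 1 << (idx % 8)
        }
    }

    // mayContain returning false means we can skip the database entirely.
    func (f *bloomFilter) mayContain(payHash []byte) bool {
        for _, idx := range f.indexes(payHash) {
            if f.bits[idx/8]&(1<<(idx%8)) == 0 {
                return false
            }
        }
        return true
    }

    func main() {
        known := newBloomFilter(1<<20, 7) // top tier: non-expired invoices
        invoiceHash := make([]byte, 32)
        invoiceHash[0] = 0x42
        known.add(invoiceHash)

        probe := make([]byte, 32) // unknown hash, e.g. a network probe
        fmt.Println(known.mayContain(invoiceHash)) // true: check the database
        fmt.Println(known.mayContain(probe))       // almost surely false: no lookup
    }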

> we can improve it by using the `padding` of the `per_hop` field of the
> onion;

I recently implemented a type of spontaneous payment [1] that works today in
the wild (gotta love dat End to End Principle). A requirement for this was
fully functional EOB packing logic at the sender, and multi-packet
unwrapping at the receiver, the modified packet construction/processing can
be found here [2]. Using the terminology of the current draft code, all that
would need to be done is specify an EOB type for this special probe type of
HTLC. As it doesn't need any additional data, it only consumes a single
pivot hop and doesn't require the route to be extended.

Have you seen aj's prior post [3] on this front (making probe HTLCs
identifiable to the receiver, and allowing intermediate nodes to drop them)?
Allowing intermediate nodes to identify probe HTLCs has privacy
implications, as all of a sudden we've created two path-level classes of
HTLCs. On the other hand, this may help with QoS scheduling on the
forwarding plane for nodes, they may want to prioritize actual payments over
probes, with some nodes opting to not forward probes all together.

[1]: https://github.com/lightningnetwork/lnd/pull/2455
[2]: https://github.com/lightningnetwork/lightning-onion/pull/31
[3]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001554.html

-- Laolu


On Fri, Jan 18, 2019 at 8:47 AM Andrea RASPITZU 
wrote:

> Good morning list,
>
> I know there have been discussion around how (and if) we should probe the
> network to check for the liveliness of a path before sending out the
> payment. Currently we issue a payment with a random payment_hash that is
> not redeemable by anyone, if the destination (and the path) is `lively` it
> will respond Error. Assuming we do want to probe, and it make sense to
> assume so because it can't be prevented, we can improve it by using the
> `padding` of the `per_hop` field of the onion; with a single bit of the
> padding we can tell the final node that this is a probe and not an actual
> payment. This saves the receiving node from doing a database lookup
> (checking if it has the preimage for such a payment_hash) and it does not
> reveal anything to intermediate nodes, we don't want them to change the
> behavior if they know it's a probe and not an actual payment. I believe
> probing can help reduce the error rate of payments (and even detect stale
> channels?) and I'm looking forward to getting some feedback, and submitting a
> draft.
>
> Cheers, Andrea.


Re: [Lightning-dev] Mandatory "d" or "h" UX issues

2019-01-14 Thread Olaoluwa Osuntokun
It isn't mandatory. It can be left blank; none of the existing wallets
require users to input a description when they make an invoice.

On Mon, Jan 14, 2019, 3:28 PM Francis Pouliot wrote:

> I'm currently in the process of building the Lightning Network payout
> feature which will allow users to purchase bitcoins with fiat and have the
> coins sent to them via LN rather than on-chain. The main issue I'm facing is
> ensuring that recipients generate the correct Bolt11 invoice.
>
> Having the "d" and "h" fields mandatory creates a UX issue for Bitcoin
> services that are performing payouts/withdrawals (in the absence of a
> widely adopted payment protocol).
>
> It seems to me that the design of Bolt11 may have been biased towards
> merchants, because normally merchants, as recipients, decide on what the
> invoice is going to be and the sender doesn't have a choice but to conform
> (since the recipient is the service provider).
>
> But for LN payouts (e.g. withdrawal from an exchange or a poker site), the
> Sender is the services provider, and it is the Sender who will be creating
> (most likely programatically) the terms of the payment. However, this means
> that the Sender must be able to communicate to his end-user exactly what
> type of Bolt11 invoice he wants the user to create. This means, in most
> cases, that the user will have to manually enter some fields in his wallet.
> And if the content doesn't match, there will be a payment failure.
>
> Here is how I picture the ux issues taking place.
>
>1. User goes on my app to buy Bitcoin with fiat, and opts to be paid
>out via LN rather than on-chain BTC.
>2. My app will tell him: "make an invoice with the following:
>msatoshi, description.
>3. He will go in his wallet and type msatoshi, description.
>4. It's likely he won't pay too much attention, make a typo in
>description, leave it blank, write his own description, etc.
>5. When my app tries to pay, we of course have to decode his bolt11
>first.
>6. We have to have some logic that will compare the "h" or "d" that we
>instructed him to create and the "h" or "d" that we got from the decoded
>bolt 11 (which is an extra hassle for devs)
>7. In the cases where they are not the same, we need to instruct the
>user to create a new bolt 11 invoice because the one he created was not
>correct.
>
> What this ends up doing is create a situation where the service provider
> depends on his user to create a correct invoice before sending him the
> funds, and creates an added (unnecessary) requirement for communication,
> lower payment success rates, and likely a higher abandonment rate.
>
> Question: what is the logic behind making "d" and "h" mandatory? I think
> business logic should be left to Bitcoin businesses.
>
> Can we simply not make "d" or "h" mandatory without breaking anything?
>
> TL;DR users already have trouble entering the correct amount of BTC when
> paying invoices that aren't BIP21, so I am afraid that there will be tons
> of issues with them writing down the correct description.
>
> P.s. I'm using c-lightning right now and would like to not have to switch
>
> P.s.s. this will likely be fixed with a standardised payment protocol but
> addressing this issue seems a lower hanging fruit.
>
> Issue: https://github.com/lightningnetwork/lightning-rfc/issues/541
>
> Thanks for your time,
>
> Francis


Re: [Lightning-dev] Approximate assignment of option names: please fix!

2018-11-16 Thread Olaoluwa Osuntokun
> OG AMP is inherently spontaneous in nature, therefore an invoice might not
> exist to put the feature on.

That is incorrect. One can use an invoice along with AMP as is, in order to
tag a payment. As an example, I want to deposit to an exchange, so I get an
invoice from them. I note that the invoice has a special (new) field that
indicates they accept AMP payments, and includes an 8-byte identifier. Each
of the payment shards I send over to the exchange will carry this 8-byte
identifier. Inclusion of this identifier signals to them to credit my
account with the deposit once all the payments arrive. This generalizes to
any case where a service or good is to be dispersed once a payment is
received.
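
A sketch of what those tagged shards could look like (illustrative field
names, not a BOLT wire format):

    package main

    import "fmt"

    // ampShard tags each partial payment with the 8-byte identifier taken
    // from the invoice, so the receiver can correlate the shards.
    type ampShard struct {
        PaymentID [8]byte // identifier copied from the invoice's (new) field
        AmtMsat   uint64
    }

    func main() {
        var id [8]byte
        copy(id[:], "deposit1") // identifier the exchange put in the invoice

        // Every shard of the split payment carries the same identifier, so
        // the exchange credits the account once the full amount arrives.
        shards := []ampShard{{id, 2_000_000}, {id, 3_000_000}}
        var total uint64
        for _, s := range shards {
            total += s.AmtMsat
        }
        fmt.Println("credit once total reaches invoice amount:", total)
    }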

-- Laolu


On Mon, Nov 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good Morning Rusty,
>
> OG AMP is inherently spontaneous in nature, therefore an invoice might not
> exist to put the feature on.
> Thus it should be a global feature.
>
> Do we tie spontaneous payment to OG AMP or do we support one which is
> payable by base AMP or normal singlepath?
>
> Given that both `option_switch_ephkey` and `option_og_amp` require
> understanding extended onion packet types, would it not be better to merge
> them into `option_extra_onion_packet_types`?
>
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, November 13, 2018 7:49 AM, Rusty Russell <
> ru...@blockstream.com> wrote:
>
> > Hi all,
> >
> > I went through the wiki and made up option names (not yet
> > numbers, that comes next). I re-read our description of global vs local
> > bits:
> >
> > The feature masks are split into local features (which only
> > affect the protocol between these two nodes) and global features
> > (which can affect HTLCs and are thus also advertised to other
> > nodes).
> >
> > You might want to promote your local bit to a global bit so you can
> > advertize them (wumbo?)? But if it's expected that every node will
> > eventually support a bit, then it should probably stay local.
> >
> > Please edit your bits as appropriate, so I can assign bit numbers soon:
> >
> >
> https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
> >
> > Thanks!
> > Rusty.
> >


Re: [Lightning-dev] Wumbological local AND global features

2018-11-15 Thread Olaoluwa Osuntokun
I realized the other day that the wumbo bit should also likely encompass
wumbo payments. What good is a wumbo channel that doesn't also allow wumbo
payments? Naturally, if the bit is signalled globally, then this should also
signal the willingness of the node to forward larger payments up to their
max_htlc limit within the channel_update for that link.

On a similar note, I was reviewing the newer-ish section of the spec
concerning the optional max_htlc value. I noticed an inconsistency: it
states the value should be below the max capacity of the channel, but makes
no reference to the current (pre-wumbo) _max HTLC limit_. As a result, as it
reads now, one may interpret signalling of the optional field as eligibility
to route wumbo payments in a pre-wumbo channel world.
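
A sketch of the stricter check I'm suggesting, using the well-known
pre-wumbo limits (2^24 sat channel cap, 2^32 - 1 msat HTLC cap):

    package main

    import "fmt"

    const (
        maxChannelSat  = 1 << 24   // 16,777,216 sat: pre-wumbo channel cap
        maxHTLCMsatPre = 1<<32 - 1 // 4,294,967,295 msat: pre-wumbo HTLC cap
    )

    // validMaxHTLC checks an advertised max_htlc against both the channel
    // capacity and, absent wumbo signalling, the protocol HTLC limit.
    func validMaxHTLC(maxHTLCMsat, capacitySat uint64, wumbo bool) bool {
        if maxHTLCMsat > capacitySat*1000 {
            return false
        }
        if !wumbo && maxHTLCMsat > maxHTLCMsatPre {
            return false
        }
        return true
    }

    func main() {
        // 0.1 BTC max_htlc on a max-size channel: under capacity, but over
        // the pre-wumbo HTLC limit, so only valid if wumbo is signalled.
        fmt.Println(validMaxHTLC(10_000_000_000, maxChannelSat, false)) // false
        fmt.Println(validMaxHTLC(10_000_000_000, maxChannelSat, true))  // true
    }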

-- Laolu


On Tue, Nov 13, 2018 at 3:34 PM Rusty Russell  wrote:

> ZmnSCPxj via Lightning-dev 
> writes:
> > Thus, I propose:
> >
> > * The local feature bit `option_i_wumbo_you_wumbo`, which indicates that
> the node is willing to wumbo with its counterparty in the connection.
> > * The global feature bit `option_anyone_can_wumbo`, which indicates that
> the node is willing to wumbo with any node.
>
> I think we need to name `option_anyone_can_wumbo` to `option_wumborama`?
>
> Otherwise, this looks excellent.
>
> Thanks,
> Rusty.


Re: [Lightning-dev] Packet switching via intermediary rendezvous node

2018-11-15 Thread Olaoluwa Osuntokun
> If I'm not mistaken it'll not be possible for us to have spontaneous
> ephemeral key switches while forwarding a payment

If this _were_ possible, then it seems that it would allow nodes to create
unbounded path lengths (looking to other nodes like a normal packet),
possibly by controlling multiple nodes in a route, thereby sidestepping the
20 hop limit altogether. This would be undesirable for many reasons, the
most dire of which being the ability to further amplify null-routing
attacks.

-- Laolu

On Mon, Nov 12, 2018 at 8:06 PM Christian Decker 
wrote:

> Hi ZmnSCPxj,
>
> like I mentioned in the other mailing thread we have a minor
> complication in order get rendez-vous working.
>
> If I'm not mistaken it'll not be possible for us to have spontaneous
> ephemeral key switches while forwarding a payment. Specifically either
> the sender or the recipient have to know the switchover points in their
> respective parts of the onion. Otherwise it'll not be possible to cover
> the padding in the HMAC, for the same reason that we couldn't meet up
> with the same ephemeral key at the rendez-vous point.
>
> Sorry about not noticing this before.
>
> Cheers,
> Christian
>
> ZmnSCPxj via Lightning-dev 
> writes:
> > Good morning list,
> >
> > Although, packet switching was part of the agenda, we decided, that we
> would defer this to some later version of BOLT spec.
> >
> > Interestingly, some sort of packet switching becomes possible, due to
> the below features we did not defer:
> >
> > 1.  Multi-hop onion packets (i.e. s/realm/packettype/)
> > 2.  Identify "next" by node-id instead of short-channel-id (actually, we
> solved this by "short-channel-id is not binding" and next hop is identified
> by short-channel-id still).
> > 3.  Onion ephemeral key switching (required by rendez-vous routing).
> >
> > ---
> >
> > Suppose we define the below packettype (notice below type number is
> even, but I am uncertain how "is OK to be odd", is appropriate for this):
> >
> > packettype 0: same as current realm 0
> > packettype 2: ephemeral key switch (use ephemeral key in succeeding
> 65-byte packet)
> > packettype 4: identify next node by node-id on succeeding 65-byte packet
> >
> > Suppose I were to receive a packettype 0 in an onion.  It identifies a
> short-channel-id.  Now suppose this particular channel has no capacity.  As
> I showed in thread " Link-level payment splitting via intermediary
> rendezvous nodes"
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001547.html,
> it is possible, that I can route it via some other route *composed of
> multiple channels*, by using packettype 4 at the end of this route to
> connect it to the rest of the onion I receive.
> >
> > However, in this case, in effect, the short-channel-id simply identifies
> the "next" node along this route.
> >
> > Suppose we also identify a new packettype (packettype 4) where the
> > "next" node is identified by its node-id.
> >
> > Let us make the below scenarios.
> >
> > 1.  Suppose the node-id so identified, I have a channel with this node.
> And suppose this channel has capacity.  I can send the payment directly to
> that node.  This is no different from today.
> > 2.  Suppose the node-id so identified, I have a channel with this node.
> But this channel has not capacity.  However, I can look for alternate
> route.  And by using rendez-vous feature "switch ephemeral key" I can
> generate a route that is multiple hops, in order to reach the identified
> node-id, and connect the rest of the onion to this.  This case is same as
> if the node is identified by short-channel-id.
> > 3.  Suppose the node-id so identified, I have not a channel with this
> node.  However, I can again look for alternate route.  Again, by using
> "switch ephemeral key" feature, I can generate a route that is multiple
> hops, in order to reach the identified node-id, and again connect the rest
> of the onion to this.
> >
> > Now, the case 3 above, can build up packet switching.  I might have a
> routemap that contains the destination node-id and have an accurate route
> through the network, and identify the path directly to the next node.  If
> not, I could guess/use statistics that one of my peers is likely to know
> how to route to that node, and also forward a packettype 4 to the same
> node-id to my peer.
> >
> > This particular packet switching, also allows some uncertainty about the
> destination.  For instance, even if I wish to pay CJP, actually I make an
> onion with packettype 4 Rene, packettype 4 CJP. packettype 0 HMAC=0.  Then
> I send the above onion (appropriately layered-encrypted) to my direct peer
> cdecker, who attempts to make an effort to route to Rene.  When Rene
> receives it, it sees packettype 4 CJP, and then makes an effort to route to
> CJP, who sees packettype 0 HMAC=0 meaning CJP is the recipient.
> >
> > Further, this is yet another use of the switch-ephemeral-key packettype.
> >
> > Thus:
> >
> > 1.  It allows packet 

Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-08 Thread Olaoluwa Osuntokun
Was approaching this more so from the angle of a new node with no existing
channels seeking to bootstrap connections to the network.

-- Sent from my Spaceship

On Fri, Nov 9, 2018, 9:10 AM Anthony Towns wrote:

> On Thu, Nov 08, 2018 at 05:32:01PM +1030, Olaoluwa Osuntokun wrote:
> > > A node, via their node_announcement,
> > Most implementations today will ignore node announcements from nodes that
> > don't have any channels, in order to maintain the smallest routing set
> > possible (no zombies, etc). It seems for this to work, we would need to
> undo
> > this at a global scale to ensure these announcements propagate?
>
> Having incoming capacity from a random node with no other channels doesn't
> seem useful though? (It's not useful for nodes that don't have incoming
> capacity of their own, either)
>
> Cheers,
> aj
>


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-07 Thread Olaoluwa Osuntokun
> A node, via their node_announcement,

Most implementations today will ignore node announcements from nodes that
don't have any channels, in order to maintain the smallest routing set
possible (no zombies, etc). It seems for this to work, we would need to undo
this at a global scale to ensure these announcements propagate?

Aside from the incentives for leeches to arise that accept the fee then
insta close (they just drain the network and then no one uses this), I think
this is a dope idea in general! In the past, I've mulled over similar
constructions under a general umbrella of "Channel Liquidity Markets" (CLM),
though via extra-protocol negotiation.
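
For concreteness, a sketch of the fee a liquidity buyer would owe via
`push_msat` under the rates advertised in the proposal below (assuming the
proportional rate is millionths per satoshi of requested liquidity):

    package main

    import "fmt"

    // liquidityFeeMsat sketches the fee a buyer would owe via push_msat
    // for liquidity purchased at the advertised rates. The unit
    // interpretation mirrors how routing fees are quoted and is my
    // assumption, not part of the proposal text.
    func liquidityFeeMsat(baseMsat, propMillionths, liquiditySat uint64) uint64 {
        // liquiditySat * prop / 1e6 sats of fee, expressed in msat (x1000).
        return baseMsat + liquiditySat*propMillionths/1000
    }

    func main() {
        // Buying 1,000,000 sats of inbound capacity at 1000 ppm + 1000 msat base:
        fmt.Println(liquidityFeeMsat(1000, 1000, 1_000_000), "msat") // 1001000 msat
    }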

-- Laolu


On Wed, Nov 7, 2018 at 2:38 PM lisa neigut  wrote:

> Problem
> 
> Currently it’s difficult to reliably source inbound capacity for your
> node. This is incredibly problematic for vendors and nodes hoping to setup
> shop as a route facilitator. Most solutions at the moment require an
> element of out of band negotiation in order to find other nodes that can
> help with your capacity needs.
>
> While splicing and dual funding mechanisms will give some relief by
> allowing for the initial negotiation to give the other node an opportunity
> to put funds in either at channel open or after the fact, the problem of
> finding channel liquidity is still left as an offline problem.
>
> Proposal
> =
> To solve the liquidity discovery problem, I'd like to propose allowing
> nodes to advertise initial liquidity matching. The goal of this proposal
> would be to allow nodes to independently source inbound capacity from a
> 'market' of advertised liquidity rates, as set by other nodes.
>
> A node, via their node_announcement, can advertise that they will match
> liquidity, and the fee rate that they will charge, for any incoming
> open_channel request that requests it.
>
> `node_announcement`:
> new feature flag: option_liquidity_provider
> data:
>  [4 liquidity_fee_proportional_millionths] (option_liquidity_provider) fee
> charged per satoshi of liquidity added at channel open
>  [4 liquidity_fee_base_msat] (option_liquidity_provider) base fee charged
> for providing liquidity at channel open
>
> `open_channel`:
> new feature flag (channel_flags): option_liquidity_buy [2nd least
> significant bit]
> push_msat: set to fee payment for requested liquidity
> [8 liquidity_msat_request]: (option_liquidity_buy) amount of dual funding
> requested at channel open
>
> `accept_channel`:
> tbd. hinges on a dual funding proposal for how second node would send
> information about their funding input.
>
> If a node cannot provide the liquidity requested in `open_channel`, it
> must return an error.
> If the amount listed in `push_msat` does not cover the amount of liquidity
> provided, the liquidity provider node must return an error.
>
> Errata
> ==
> It's an open question as to whether or not a liquidity advertising node
> should also include a maximum amount of liquidity that they will
> match/provide. As currently proposed, the only way to discover if a node
> can meet your liquidity requirement is by sending an open channel request.
>
> This proposal depends on dual funding being possible.
>
> Should a node be able to request more liquidity than they put into the
> channel on their half? In the case of a vendor who wants inbound capacity,
> capping the liquidity request allowed seems unnecessary.
>
> Conclusion
> ===
> Allowing nodes to advertise liquidity paves the way for automated node
> re-balancing. Advertised liquidity creates a market of inbound capacity
> that any node can take advantage of, reducing the amount of out-of-band
> negotiation needed to get the inbound capacity that you need.
>
>
> Credit to Casey Rodamor for the initial idea.


Re: [Lightning-dev] Improving payment UX with low-latency route probing

2018-11-06 Thread Olaoluwa Osuntokun
Hi Fabrice,

I think HORNET would address this rather nicely!

During the set up phase (which uses Sphinx), the sender is able to get a
sense of whether the route is actually "lively" or not, as the circuit can't
be finalized if all the nodes aren't available. Additionally, during the set
up phase, the sender can drop a unique payload to each node. In this
scenario, it may be the amount range the node is looking to send over this
circuit. The intermediate nodes then package up a "Forwarding Segment" (FS)
which includes a symmetric key to use for their portion of the hop, and can
also be extended to include fee information. If this set up phase is payment
value aware, then each node can use a private "fee function" that may take
into account the level of congestion in their channels, or other factors.
This would differ from the current approach in that this fee schedule need
not be communicated to the wider network, only to those wishing to route
across that link.
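
A sketch of what such an FS might carry (illustrative fields, not HORNET's
actual wire format):

    package main

    import "fmt"

    // A per-hop Forwarding Segment: collected by the sender during the
    // Sphinx-based setup phase and carried in later data packets, so hops
    // keep no per-circuit state.
    type ForwardingSegment struct {
        // SymKey is the symmetric key this hop uses during the data
        // forwarding phase (key-wrapped by the hop for itself in practice).
        SymKey [32]byte

        // ExpiryHeight is the circuit lifetime as a block height, after
        // which the hop rejects data forwarding attempts.
        ExpiryHeight uint32

        // FeeRateMsat could carry a private, circuit-specific fee schedule
        // that never needs to be gossiped to the wider network.
        FeeRateMsat uint64
    }

    func main() {
        fs := ForwardingSegment{ExpiryHeight: 600_144, FeeRateMsat: 1_000}
        fmt.Println("circuit usable until height", fs.ExpiryHeight)
    }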

Another cool thing that it would allow is the ability to receive a
protocol-level payment ACK. This may be useful when implementing AMP, as it
would allow the sender to know exactly how many satoshis have arrived at the
other side, adjusting their payment sharding accordingly. Nodes on either
side of the circuit can then also use the data forwarding phase to exchange
payment hashes, perform cool zkcp set up protocols, etc, etc.

The created circuits can actually be re-used across several distinct
payments. In the paper, they use a TTL for each circuit; in our case, we can
use a block height, after which all nodes should reject attempted data
forwarding. A notable change is that each node no longer needs to maintain
per-circuit state as we do now with Sphinx. Instead, the packets that come
across contain all the information required for forwarding (our current
per-hop payload). As a result, we can eliminate the asymmetric crypto from
the critical forwarding path!

Finally, this would let nodes easily rotate their onion keys to achieve
forward secrecy during the data phase (but not the set up phase), as in the
FS, they essentially key-wrap a symmetric key (using the derived shared
secret for that hop) that should be used for that data forwarding phase.

There're a number of other cool things integrating HORNET would allow;
perhaps a distinct thread would be a more appropriate place to extol the
many virtues of HORNET ;)

-- Laolu

On Thu, Nov 1, 2018 at 3:05 PM Fabrice Drouin 
wrote:

> Context
> ==
>
> Sent payments that remain pending, i.e. payments which have not yet
> been failed or fulfilled, are currently a major UX challenge for LN
> and a common source of complaints from end-users.
> Why payments are not fulfilled quickly is not always easy to
> investigate, but we've seen problems caused by intermediate nodes
> which were stuck waiting for a revocation, and recipients who could
> take a very long time to reply with a payment preimage.
> It is already possible to partially mitigate this by disconnecting
> from a node that is taking too long to send a revocation (after 30
> seconds for example) and reconnecting immediately to the same node.
> This way pending downstream HTLCs can be forgotten and the
> corresponding upstream HTLCs failed.
>
> Proposed changes
> ===
>
> It should be possible to provide a faster "proceed/try another route"
> answer to the sending node using probing with short timeout
> requirements: before sending the actual payment it would first send a
> "blank" probe request, along the same route. This request would be
> similar to a payment request, with the same onion packet formatting
> and processing, with the additional requirements that if the next node
> in the route has not replied within the timeout period (typically a
> few hundred milliseconds) then the current node will immediately send
> back an error message.
>
> There could be several options for the probe request:
> - include the same amounts and fee constraints as the actual payment
> request.
> - include no amount information, in which case we're just trying to
> "ping" every node on the route.
>
> Implementation
> 
>
> I would like to discuss the possibility of implementing this with a "0
> satoshi" payment request that the receiving node would generate along
> with the real one. The sender would first try to "pay" the "0 satoshi"
> request using the route it computed with the actual payment
> parameters. I think that it would not require many changes to the
> existing protocol and implementations.
> Not using the actual amount and fees means that the actual payment
> could fail because of capacity issues but as long as this happens
> quickly, and it should since we checked first that all nodes on the
> route are alive and responsive, it still is much better than “stuck”
> payments.
> And it would not help if a node decides to misbehave, but would not
> make things worse than they are now (?)
>
> Cheers,
> Fabrice
> 

Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-05 Thread Olaoluwa Osuntokun
> Mainly limitations of our descriptor language, TBH.

I don't follow...so it's a size issue? Or wanting to avoid "repeated"
fields?

> I thought about restarting the revocation sequence, but it seems like that
> only saves a tiny amount since we only store log(N) entries

Yeah that makes sense, forgetting the HTLC state is a big enough win in and
of itself.

>>> Splice Signing
>>
>> It seems that we're missing some fields here if we're to allow the
>> splicing of inputs to be done in a non-blocking manner. We'll need to
>> send two revocation points for the new commitment: one to allow it to be
>> created, and another to allow updates to proceed right after the signing
>> is completed. In this case we'll also need to update both commitments in
>> tandem until the splicing transaction has been sufficiently confirmed.
>
>I think we can use the existing revocation points for both.

Yep, if we retain the existing shachain trees, then we just continue to
extend the leaves!

> We're basically co-generating a tx here, just like shutdown, except it's
> funding a new replacement channel.  Do we want to CPFP this one too?

It'd be nice to be able to also anchor down this splicing transaction given
that we may only allow a single outstanding splicing operation to begin
with. Being able to CPFP it (and later on provide arbitrary fee inputs)
allows me to speed up the process if I want to queue another operation up
right afterwards.

-- Laolu


On Wed, Oct 17, 2018 at 9:31 AM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > Hi Rusty,
> >
> > Happy to get the splicing train rolling!
> >
> >> We've had increasing numbers of c-lightning users get upset they can't
> >> open multiple channels, so I guess we're most motivated to allow
> splicing
> > of
> >> existing channels
> >
> > Splicing isn't a substitute for allowing multiple channels. Multiple
> > channels allow nodes to:
> >
> >   * create distinct channels with distinct acceptance policies.
> >   * create a mix of public and non-advertised channels with a node.
> >   * be able to send more than the (current) max HTLC amount
> > using various flavors of AMP.
> >   * get past the (current) max channel size value
> >   * allow a link to carry more HTLCs (due to the current super low max
> HTLC
> > values) given the additional HTLC pressure that
> > AMP may produce (alternative is a commitment fan out)
>
> These all seem marginal to me.  I think if we start hitting max values,
> we should discuss increasing them.
>
> > Is there a fundamental reason that CL will never allow nodes to create
> > multiple channels? It seems unnecessarily limiting.
>
> Yeah, we have a daemon per peer.  It's really simple with 1 daemon, 1
> channel.  My own fault: I was the one who insisted we mux multiple
> connections over the same transport; if we'd gone for independent
> connections our implementation would have been trivial.
>
> >> Splice Negotiation:
> >
> > Any reason to now make the splicing_add_* messages allow one to add
> several
> > inputs in a single message? Given "acceptable" constraints for how large
> the
> > witness and pkScripts can be, we can easily enforce an upper limit on the
> > number of inputs/outputs to add.
>
> Mainly limitations of our descriptor language, TBH.
>
> > I like that the intro messages have already been designed with the
> > concurrent case in mind beyond a simpler propose/accept flow. However is
> > there any reason why it doesn't also allow either side to fully
> re-negotiate
> > _all_ the funding details? Splicing is a good opportunity to garbage
> collect
> > the prior revocation state, and also free up obsolete space in watch
> towers.
>
> I thought about restarting the revocation sequence, but it seems like
> that only saves a tiny amount since we only store log(N) entries.  We
> can drop old HTLC info post-splice though, and (after some delay for
> obscurity) tell watchtowers to drop old entries I think.
>
> > Additionally, as the size of the channel is either expanding or
> contracting,
> > both sides should be allowed to modify things like the CSV param,
> reserve,
> > max accepted htlc's, max htlc size, etc. Many of these parameters like
> the
> > CSV value should scale with the size of the channel, not allowing these
> > parameters to be re-negotiated could result in odd scenarios like still
> > maintain a 1 week CSV when the channel size has dipped from 1 BTC to 100k
> > satoshis.
>
> Yep, good idea!  I missed that.
>
> Brings up a side point about these values, which deserves i

Re: [Lightning-dev] Wireshark plug-in for Lightning Network(BOLT) protocol

2018-11-05 Thread Olaoluwa Osuntokun
Hi tomokio,

This is so dope! We've long discussed creating canned protocol transcripts
for other implementations to assert their responses against, and I think
this is a great first step towards that.

> Our proposal:
> Every implementation has a compile option which enables output of a key
> information file.

So is this request to add an option which will write out the _plaintext_
messages to disk, or an option that writes out the final derived read/write
secrets to disk? For the latter path, the tools that read these transcripts
would need to be aware of key rotations, so they'd be able to continue to
decrypt the transcript post rotation.

-- Laolu


On Sat, Oct 27, 2018 at 2:37 AM  wrote:

> Hello lightning network developers.
> Nayuta team is developing Wireshark plug-in for Lightning Network(BOLT)
> protocol.
> https://github.com/nayutaco/lightning-dissector
>
> It’s an alpha version, but it can decode some BOLT messages.
> Currently, this software works for Nayuta’s implementation (ptarmigan) and
> Éclair.
> When ptarmigan is compiled with a certain option, it writes out a key
> information file. This Wireshark plug-in decodes packets using that file.
> When you use Éclair, this software parses the log file.
>
> Through our development experience, interoperability testing is a
> time-consuming task.
> If people can see communication logs of BOLT messages in the same format
> (.pcap), it will be useful for interoperability testing.
>
> Our proposal:
> Every implementation has a compile option which enables output of a key
> information file.
>
> We are glad if this project is useful for lightning network eco-system.


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-05 Thread Olaoluwa Osuntokun
> However personally I do not really see the need to create multiple
> channels to a single peer, or increase the capacity with a specific peer
> (via splice or dual-funding).  As Christian says in the other mail, this
> consideration, is that it becomes less a network and more of some channels
> to specific big businesses you transact with regularly.

I made no reference to any "big businesses", only the utility that arises
when one has multiple channels to a given peer. Consider an easier example:
given the max channel size, I can only ever send 0.16 or so BTC to that
peer. If I have two channels, then I can send 0.32 and so on. Consider the
case post-AMP where we maintain the current limit on the number of in-flight
HTLCs. If AMP causes most HTLCs to generally be in flight within the
network, then all of a sudden this "queue" size (outstanding HTLCs in a
commitment) becomes more scarce (assume a global MTU of say 100k sat for
simplicity). This may then prompt nodes to open additional channels to
other nodes (1+) in order to accommodate the increased HTLC bandwidth load
due to the sharded multi-path payments.

Independent of bolstering the bandwidth capabilities of your links to other
nodes, you would still want to maintain a diverse set of channels for fault
tolerance, path diversity, and redundancy reasons.

In the splicing case, if only a single in-flight splice is permitted, and I
as a user want to keep all my funds in channels, then the more channels I
have, the more concurrent on-chain withdrawals/deposits I'll be able to
service.

> I worry about doing away with initiator distinction

Can you re-phrase this sentence? I'm having trouble parsing it, thanks.

> which puzzles me, and I wonder if I am incorrect in my prioritization.

Consider that not all work items are created equal, and they have varying
levels of implementation effort and network-wide synchronization. For
example, I think we all consider multi-hop decor to be a high priority.
However, it has the longest and hardest road to deployment, as it
effectively forces us to perform a "slow motion hard-fork" within the
network. On the other hand, if lnd wanted to deploy a flavor of
non-interactive (no invoice) payments *today*, we could do that without
*any* synchronization at the implementation or network level, as it's
purely an end-to-end change.

> I am uncertain what this means in particular, but let me try to restate
> what you are talking about in other terms:

Thought about it a bit more (like way ago) and this is really no different
than having a donation page where people use public derivation to derive
addresses to deposit directly to your channel. All the Lightning node needs
to do is recognize that any coins sent to these derived addresses should be
immediately spliced into an available channel (one that doesn't have any
other outstanding splices).

> It seems to me naively that the above can be done by the client software
> without any modifications to the Lightning Network BOLT protocol

Sticking with that prior version, yes, this could be seamlessly included in
the async splice proposal. The one requirement is a link-level
protocol that allows both sides to collaboratively create and recognize
these outputs.

> Or is my above restatement different from what you are talking about?

You're missing the CLTV timeout clause. It isn't a plain p2wkh, it's a p2wsh
script. Either they splice this in before the timeout, or it times out and
it goes back to one party. In this case, it's no different than the async
concurrent commitment splice in double spend case.

-- Laolu


On Tue, Oct 16, 2018 at 10:16 PM ZmnSCPxj  wrote:

> Good morning Laolu,
>
> Is there a fundamental reason that CL will never allow nodes to create
> multiple channels? It seems unnecessarily limiting.
>
>
> The architecture of c-lightning assigns a separate process to each peer.
> For simplicity this peer process handles only a single channel.  Some of
> the channel initiation and shutdown protocols are written "directly", i.e.
> if the BOLT spec says this must happen before that, we literally write in
> the C code this_function(); that_function();.  It would be possible  to
> change this architecture with significant difficulty.
>
> However personally I do not really see the need to create multiple
> channels to a single peer, or increase the capacity with a specific peer
> (via splice or dual-funding).  As Christian says in the other mail, this
> consideration, is that it becomes less a network and more of some channels
> to specific big businesses you transact with regularly.  But I suppose,
> that we will have to see how the network evolves eventually; perhaps the
> goal of decentralization is simply doomed regardless, and Lightning will
> indeed evolve into a set of channels you maintain to specific big
> businesses you regularly work with.
>
>
> >* [`4`:`feerate_per_kw`]
>
> What fee rate is this? IMO we should do commitmentv2 before splicing as
> then
> we can 

Re: [Lightning-dev] Commitment Transaction Format Update Proposals?

2018-11-05 Thread Olaoluwa Osuntokun
> This seems at odds with the goal of "if the remote party force closes,
> then I get my funds back immediately without requiring knowledge of any
> secret data"

Scratch that: the static backups just need to include this CSV value!

-- Laolu




Re: [Lightning-dev] Commitment Transaction Format Update Proposals?

2018-11-05 Thread Olaoluwa Osuntokun
Hi Rusty,

I'm a big fan in general of most of this! Amongst many other things, it'll:
simplify the whole static channel backup + recovery workflow, and avoid all
the fee related headaches we've run into over the past few months.

> - HTLC-timeout and HTLC-success txs sigs are
> SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.

Would this mean that we no longer extend fees to the second-level
transactions as well? If so, then a dusty HTLC would be determined solely by
looking at the direct output, rather than the resulting output in the second
layer.

>  - `localpubkey`, `remotepubkey`, `local_htlcpubkey`, `remote_htlcpubkey`,
> `local_delayedpubkey`, and `remote_delayedpubkey` derivation now uses a
> two-stage unhardened BIP-32 derivation based on the commitment number.

It seems enough to _only_ modify the derivation for local+remote pubkey (so
the direct "settle" keys). This constrains the change to only what's
necessary to simplify the backup+recovery workflow with the current
commitment design. By restricting the change to these two keys, we minimize
the code impact to the existing implementations, and avoid unnecessary
changes that don't make strides towards the immediate goal.
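
To make the two-stage derivation concrete, here is a minimal Go sketch of
the index arithmetic it implies (the function name and bounds check are
illustrative; the derivation itself would go through whatever BIP-32
library an implementation already uses):

    package sketch

    import "fmt"

    // commitmentKeyIndexes splits a 48-bit commitment number into the two
    // unhardened BIP-32 child indexes the proposal sketches: the top 17
    // bits select an intermediate parent key, the bottom 31 bits the leaf.
    func commitmentKeyIndexes(commitNum uint64) (parent, child uint32, err error) {
        if commitNum >= 1<<48 {
            return 0, 0, fmt.Errorf("commitment number %d exceeds 2^48-1", commitNum)
        }
        parent = uint32(commitNum >> 31)        // top 17 bits
        child = uint32(commitNum & (1<<31 - 1)) // bottom 31 bits
        return parent, child, nil
    }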

> - `to_remote` is now a P2WSH of:
> `to_self_delay` OP_CSV OP_DROP <remote_pubkey> OP_CHECKSIG

This seems at odds with the goal of "if the remote party force closes, then
I get my funds back immediately without requiring knowledge of any secret
data". If it was just a plain p2wkh, then during a routine seed import and
rescan (assuming ample look ahead as we know this is a "special" key), I
would pick up outputs of channels that were force closed while I was down
due to my dog eating my hard drive.

Alternatively, since the range of CSV values can be known ahead of time, I
can brute force a set of scripts to look for in the chain. However, this
results in a potentially very large number of scripts (depending on how many
channels one has, and the bounds on acceptable CSV values) that I need to
scan for.
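
As a rough illustration of that brute-force approach, the sketch below (Go,
using btcd's txscript; `settleKey` and the `maxCSV` bound are assumptions)
enumerates one P2WSH witness program per candidate CSV value:

    package sketch

    import (
        "crypto/sha256"

        "github.com/btcsuite/btcd/txscript"
    )

    // candidateToRemoteScripts enumerates the P2WSH witness programs to
    // scan for: one `to_self_delay` OP_CSV OP_DROP <key> OP_CHECKSIG
    // script per candidate CSV value up to an assumed bound maxCSV.
    func candidateToRemoteScripts(settleKey []byte, maxCSV int64) ([][32]byte, error) {
        programs := make([][32]byte, 0, maxCSV)
        for csv := int64(1); csv <= maxCSV; csv++ {
            script, err := txscript.NewScriptBuilder().
                AddInt64(csv).
                AddOp(txscript.OP_CHECKSEQUENCEVERIFY).
                AddOp(txscript.OP_DROP).
                AddData(settleKey).
                AddOp(txscript.OP_CHECKSIG).
                Script()
            if err != nil {
                return nil, err
            }
            // The rescan target is the witness program: sha256(script).
            programs = append(programs, sha256.Sum256(script))
        }
        return programs, nil
    }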

-- Laolu


On Fri, Oct 12, 2018 at 3:57 PM Rusty Russell  wrote:

> Hi all,
>
> There have been a number of suggested changes to the commitment
> transaction format:
>
> 1. Rather than trying to agree on what fees will be in the future, we
>should use an OP_TRUE-style output to allow CPFP (Roasbeef)
> 2. The `remotepubkey` should be a BIP-32-style, to avoid the
>option_data_loss_protect "please tell me your current
>per_commitment_point" problem[1]
> 3. The CLTV timeout should be symmetrical to avoid trying to game the
>peer into closing. (Connor IIRC?).
>
> It makes sense to combine these into a single `commitment_style2`
> feature, rather than having a testing matrix of all these disabled and
> enabled.
>
> BOLT #2:
>
> - If `commitment_style2` negotiated, update_fee is a protocol error.
>
> This mainly changes BOLT #3:
>
> - The feerate for commitment transactions is always 253 satoshi/Sipa.
> - Commitment tx always has a P2WSH OP_TRUE output of 1000 satoshi.
> - Fees, OP_TRUE are always paid by the initial funder, because it's simple,
>   unless they don't have funds (eg. push_msat can do this, unless we
> remove it?)
> - HTLC-timeout and HTLC-success txs sigs are
>   SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.
> - `localpubkey`, `remotepubkey`, `local_htlcpubkey`,
>   `remote_htlcpubkey`, `local_delayedpubkey`, and `remote_delayedpubkey`
>   derivation now uses a two-stage unhardened BIP-32 derivation based on
>   the commitment number.  Two-stage because we can have 2^48 txs and
>   BIP-32 only supports 2^31: the first 17 bits are used to derive the
>   parent for the next 31 bits?
> - `to_self_delay` for both sides is the maximum of either the
>   `open_channel` or `accept_channel`.
> - `to_remote` is now a P2WSH of:
> `to_self_delay` OP_CSV OP_DROP <remote_pubkey> OP_CHECKSIG
>
> Cheers,
> Rusty.
>
> [1] I recently removed checking this field from c-lightning, as I
> couldn't get it to reliably work under stress-test.  I may just have
> a bug, but we could just fix the spec instead, then we can get our
> funds back even if we never talk to the peer.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-15 Thread Olaoluwa Osuntokun
> I would suggest more to consider the simpler method, despite its larger
> onchain footprint (which is galling),

The on-chain footprint is a shame, and it also gets worse if we start to
allow multiple pending splices. Also, the lack of a non-blocking splice-in is
a big drawback IMO.

> but mostly because I do not see splicing as being as important as AMP or
> watchtowers (and payment decorrelation seems to affect how AMP can be
> implemented, so its priority also goes up).

Most of what you mention here have _very_ different deployment timelines and
synchronization requirements across clients. For example, splicing is a link
level change and can start to be rolled out immediately. Decorrelation on
the other hand, is a _network_ level change, and would take a considerable
amount of time to reach widespread deployment as it essentially splits the
routable paths in the network until all/most are upgraded.

If you think any of these items is a higher priority than splicing then you
can simply start working on them! There's no agency that prescribes what
should and shouldn't be pursued or developed, just your willingness to
write some code.

One thing that I think we should lift from the multiple funding output
approach is the "pre-seating of inputs". This is cool as it would allow
clients to generate addresses that others could deposit to, and then have
those funds spliced directly into the channel. Public derivation can be
used, along with a script template to do it non-interactively, with the
clients picking up these deposits and initiating a splice-in as needed.

-- Laolu



On Thu, Oct 11, 2018 at 11:14 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Rusty,
>
> >
> > > It may be good to start brainstorming possible failure modes during
> splice, and how to recover, and also to indicate the expected behavior in
> the proposal, as I believe these will be the points where splicing must be
> designed most precisely. What happens when a splice is ongoing and the
> communication gets disconnected? What happens when some channel failure
> occurs during splicing and we are forced to drop onchain? And so on.
> >
> > Agreed, but we're now debating two fairly different methods for
> > splicing. Once we've decided on that, we can try to design the
> > proposals themselves.
>
> I would suggest more to consider the simpler method, despite its larger
> onchain footprint (which is galling), but mostly because I do not see
> splicing as being as important as AMP or watchtowers (and payment
> decorrelation seems to affect how AMP can be implemented, so its priority
> also goes up).  So I think getting *some* splicing design out would be
> better even if imperfect.  Others may disagree on priority.
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] RouteBoost: Adding 'r=' fields to BOLT 11 invoices to flag capacity

2018-09-28 Thread Olaoluwa Osuntokun
> This is orthogonal.  There's no point probing your own channel, you're
> presumably probing someone else's.

In my scenario, you're receiving a new HTLC, from some remote party
unbeknownst to you. I was replying to cdecker's reply to johan that one
wouldn't always add this new type of routing hint for all channels since it
leaks available bandwidth information. Without something like an "unadd" you
can't do anything against an individual attempting to probe you other than
drop packets (drop as in don't even add to your commit, resulting in an HTLC
timeout), as if you cancel back, then they know that you had enough
bandwidth to _accept_ the HTLC in the first place.
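
To illustrate why accept-then-cancel leaks information, a prober can
binary-search a hop's available bandwidth with HTLCs that can never settle.
A minimal sketch, where `sendProbe` is a purely hypothetical hook:

    // probeBandwidth binary-searches a hop's available bandwidth. sendProbe
    // returns true if the hop accepted (and then had to cancel back) an
    // HTLC for amtSat with an unknown payment hash, false if the HTLC
    // failed outright for lack of bandwidth.
    func probeBandwidth(sendProbe func(amtSat int64) bool, maxSat int64) int64 {
        lo, hi := int64(0), maxSat
        for lo < hi {
            mid := (lo + hi + 1) / 2
            if sendProbe(mid) {
                lo = mid // accepted: at least mid sats were available
            } else {
                hi = mid - 1 // failed: fewer than mid sats available
            }
        }
        return lo
    }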

-- Laolu


On Wed, Sep 26, 2018 at 5:54 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> >> That might not be so desirable, since it leaks the current channel
> >> capacity to the user
> >
> >>From my PoV, the only way a node can protect the _instantaneous_
> available
> > bandwidth in their channel is to randomly reject packets, even if they
> have
> > the bandwidth to actually accept and forward them.
> >
> > Observe that if a "prober" learns that you've _accepted_  a packet, then
> > they know you have at least that amount as available bandwidth. As a
> result,
> > I can simply send varying sat packet sizes over to you, either with the
> > wrong timelock, or an unknown payment hash.
>
> Yes.  You have to have a false capacity floor, which must vary
> periodically, to protect against this kind of probing (randomly failing
> attempts as you get close to zero capacity is also subject to probing,
> AFAICT).
>
> > Since we don't yet have the
> > "unadd" feature in the protocol, you _must_ accept the HTLC before you
> can
> > cancel it. This is mitigated a bit by the max_htlc value in the channel
> > update (basically our version of an MTU), but I can still just send
> > _multiple_ HTLC's rather than one big one to attempt to ascertain your
> > available bandwidth.
>
> This is orthogonal.  There's no point probing your own channel, you're
> presumably probing someone else's.
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-23 Thread Olaoluwa Osuntokun
> to be assigned numbers, we need to standardize our feature
> through the BOLT process,
> but we might not wish to attempt standardization until our experimental
> features have been tested.
> Without standardization, different teams working on different experimental
> features may cause conflicts if different clients are treating feature
> bits or
> message types differently.
>
> By moving all experimental features to a new message where they are
> wrapped in a unique feature name, this eradicates the chance of conflicting
> implementations.
>
> Additionally, this message can serve as a generic transport mechanism
> between any two lightning nodes who have agreed to support the
> experiment_name_hash, as there is no restriction on the format of the
> payload. This may make it possible to serve e.g. HTTP over Lightning.
>
>
> * General experiment messages:
>
> If `experiment_name_hash` in the experiment message is 0, treat its
> payload as one of the following messages:
>
> ** init_experiments message
>
> Informs a peer of features supported by the client.
>
>   1. experiment_type: 16
>   2. data:
>   * [2: eflen]
>   * [eflen*32: experiment_name_hashes]
>
> A sending node:
>* MUST send the `init_experiments` message before any other
> `experiment`
> message for each connection.
>* SHOULD send the `experiment_name_hash` for any features supported and
> set
> to enabled in their software client.
>
> A receiving node:
>* For each experiment_name_hash:
>   * If the hash is unknown or 0: Ignore the feature
>   * If the hash is known: SHOULD enable the feature for communication
> with
> this peer.
>
> ** experiment_error message
>
>  experiment_type: 17
>  data:
> [32: channel_id]
> [32: experiment_name_hash]
> [2: len]
> [len: data]
>
> For all messages before funding_created: Must use temporary_channel_id in
> lieu
> of channel_id.
>
> A sending node:
>* If error is critical, should also send the regular lightning `error`
> message from BOLT #1
>* If the error is not specific to any channel: set channel_id to 0.
>
> A receiving node
>* If experiment_name_hash is unknown:
>   - MUST fail the channel.
>* If channel_id is 0
>   - MUST fail all the channels
>
> Rationale
>
> This message is not intended to replace `error` for critical errors, but
> is provided for additional debugging information related to the
> experimental feature being used.
> A client may decide whether or not it can recover from such errors
> individually per experimental feature, which may include aborting channels
> and
> the connection.
>
> TODO: Define gossip/query messages related to nodes/channels which support
> features by experiment_hash_name.
>
> ---EOF
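
To make the quoted draft's `init_experiments` layout concrete, here is a
minimal encoding sketch (Go). The field widths beyond the draft's stated
`[2: eflen][eflen*32: experiment_name_hashes]` layout, the big-endian
choice, and the helper name are all assumptions; per the draft, this
payload would itself be wrapped in the generic `experiment` message with
experiment_name_hash set to 0:

    package sketch

    import (
        "bytes"
        "encoding/binary"
    )

    // encodeInitExperiments serializes the draft's init_experiments
    // payload: the experiment_type (16), a 2-byte hash count (eflen), then
    // the 32-byte experiment_name_hashes themselves.
    func encodeInitExperiments(hashes [][32]byte) ([]byte, error) {
        var buf bytes.Buffer
        if err := binary.Write(&buf, binary.BigEndian, uint16(16)); err != nil {
            return nil, err
        }
        if err := binary.Write(&buf, binary.BigEndian, uint16(len(hashes))); err != nil {
            return nil, err
        }
        for _, h := range hashes {
            buf.Write(h[:])
        }
        return buf.Bytes(), nil
    }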
>
>
> On Sunday, 22 July 2018 13:32:02 BST Olaoluwa Osuntokun wrote:
> > No need to apologize! Perhaps this confusion shows that the current
> process
> > surrounding creating/modifying/drafting BOLT documents does indeed need
> to
> > be better documented. We've more or less been rolling along with a pretty
> > minimal process among the various implementations which I'd say has
> worked
> > pretty well so far. However as more contributors get involved we may need
> > to add a bit more structure to ensure things are transparent for
> newcomers.
> >
> > On Sun, Jul 22, 2018, 12:57 PM René Pickhardt <r.pickha...@googlemail.com>
> > wrote:
> > > Sorry, I did not realize that BOLTs are the equivalent - and
> > > apparently many people I spoke to also didn't realize that.
> > >
> > > I thought BOLT is the protocol specification and the bolts are just the
> > > sections. And the BOLT should be updated to a new version.
> > >
> > > Also I suggested that this should take place for example within the
> > > lightning rfc repo. So my suggestion was not about creating another
> place
> > > but more about making the process more transparent or kind of filling
> the
> > > gap that I felt was there.
> > >
> > > I am sorry for spamming mailboxes with my suggestion just because I
> > > didn't understand the current process.
> > >
> > >
> > > Olaoluwa Osuntokun wrote on Sun., 22 Jul 2018, 20:59:
> > >> We already have the equiv of improvement proposals: BOLTs.
> Historically
> > >> new standardization documents are proposed initially as issues or PR's
> > >> when
> > >> ultimately accepted. Why do we need another repo?
> > 

Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread Olaoluwa Osuntokun
No need to apologize! Perhaps this confusion shows that the current process
surrounding creating/modifying/drafting BOLT documents does indeed need to
be better documented. We've more or less been rolling along with a pretty
minimal process among the various implementations which I'd say has worked
pretty well so far. However as more contributors get involved we may need
to add a bit more structure to ensure things are transparent for newcomers.

On Sun, Jul 22, 2018, 12:57 PM René Pickhardt 
wrote:

> Sorry, I did not realize that BOLTs are the equivalent - and apparently
> many people I spoke to also didn't realize that.
>
> I thought BOLT is the protocol specification and the bolts are just the
> sections. And the BOLT should be updated to a new version.
>
> Also I suggested that this should take place for example within the
> lightning rfc repo. So my suggestion was not about creating another place
> but more about making the process more transparent or kind of filling the
> gap that I felt was there.
>
> I am sorry for spamming mailboxes with my suggestion just because I didn't
> understand the current process.
>
>
> Olaoluwa Osuntokun wrote on Sun., 22 Jul 2018, 20:59:
>
>> We already have the equiv of improvement proposals: BOLTs. Historically
>> new standardization documents are proposed initially as issues or PR's when
>> ultimately accepted. Why do we need another repo?
>>
>> On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
>> lightning-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hey everyone,
>>>
>>> in the grand tradition of BIPs I propose that we also start to have our
>>> own LIPs (Lightning Network Improvement proposals)
>>>
>>> I think they should be placed on the github.com/lightning account in a
>>> repo called lips (or within the lightning rfc repo) until that will happen
>>> I created a draft for LIP-0001 (which is describing the process and is 95%
>>> influenced by BIP-0002) in my github repo:
>>>
>>> https://github.com/renepickhardt/lips  (There are some open Todos and
>>> Questions in this LIP)
>>>
>>> The background for this Idea: I just came home from the bitcoin munich
>>> meetup where I held a talk examining BOLT. As I was asked to also talk
>>> about the future plans of the developers for BOLT 1.1 I realized while
>>> preparing the talk that many ideas are distributed within the community but
>>> it seems we don't have a central place where we collect future enhancements
>>> for BOLT1.1. Having this in mind I think also for the meeting in Australia
>>> it would be nice if already a list of LIPs would be in place so that the
>>> discussion can be more focused.
>>> potential LIPs could include:
>>> * Watchtowers
>>> * Autopilot
>>> * AMP
>>> * Splicing
>>> * Routing Protocols
>>> * Broadcasting past Routing statistics
>>> * eltoo
>>> * ...
>>>
>>> As said before I would volunteer to work on a LIP for Splicing (actually
>>> I already started)
>>>
>>> best Rene
>>>
>>>
>>> --
>>> https://www.rene-pickhardt.de
>>>
>>> Skype: rene.pickhardt
>>>
>>> mobile: +49 (0)176 5762 3618
>>> ___
>>> Lightning-dev mailing list
>>> Lightning-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>
>>

Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread Olaoluwa Osuntokun
We already have the equiv of improvement proposals: BOLTs. Historically new
standardization documents are proposed initially as issues or PR's when
ultimately accepted. Why do we need another repo?

On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> in the grand tradition of BIPs I propose that we also start to have our
> own LIPs (Lightning Network Improvement proposals)
>
> I think they should be placed on the github.com/lightning account in a
> repo called lips (or within the lightning rfc repo) until that will happen
> I created a draft for LIP-0001 (which is describing the process and is 95%
> influenced by BIP-0002) in my github repo:
>
> https://github.com/renepickhardt/lips  (There are some open Todos and
> Questions in this LIP)
>
> The background for this Idea: I just came home from the bitcoin munich
> meetup where I held a talk examining BOLT. As I was asked to also talk
> about the future plans of the developers for BOLT 1.1 I realized while
> preparing the talk that many ideas are distributed within the community but
> it seems we don't have a central place where we collect future enhancements
> for BOLT1.1. Having this in mind I think also for the meeting in Australia
> it would be nice if already a list of LIPs would be in place so that the
> discussion can be more focused.
> potential LIPs could include:
> * Watchtowers
> * Autopilot
> * AMP
> * Splicing
> * Routing Protocols
> * Broadcasting past Routing statistics
> * eltoo
> * ...
>
> As said before I would volunteer to work on a LIP for Splicing (actually I
> already started)
>
> best Rene
>
>
> --
> https://www.rene-pickhardt.de
>
> Skype: rene.pickhardt
>
> mobile: +49 (0)176 5762 3618
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
> #1 lets us leave out double-funded channels.  #2 and #3 lets us leave out
> splice.

> For myself, I would rather leave out AMP and double-funding and splicing
> than remove ZKCP.

It isn't one or the other. ZKCPs are compatible with various flavors of AMP.
All of these technologies can be rolled out, some with less coordination
than others. Please stop presenting these upgrades as if they're in
opposition, and as if fundamental constraints only allow a handful of them
to be deployed.

Dual funded channels allow for immediate bi-directional transfers between
endpoints. Splicing allows channels to contract or grow, as well as: pay out
to on-chain addresses, fund new channels on the fly, close into old channels,
consolidate change addresses, create fee inputs for eltoo, orchestrate
closing/opening coin-joins, etc, etc.

-- Laolu

On Wed, Jul 4, 2018 at 10:36 PM ZmnSCPxj  wrote:

> Good morning all,
>
> > > What's the nasty compromise?
> > >
> > > Let's also not underestimate how big of an update switching to dlog
> > > based HTLCs will be.
> >
> > Was referring to losing proof-of-payment; that's vital in a system
> >
> > without intermediaries. We have to decide what the lesser evil is.
>
> Without the inherent ZKCP, it becomes impossible to build a trustless
> off-to-on/on-to-offchain bridge, since a trustless swap outside of
> Lightning becomes impossible.  To my mind, ZKCP is an important building
> block in cryptocurrency: it is what we use in Lightning for routing.
> Further, ZKCP can be composed together to form a larger ZKCP, which again
> is what we use in Lightning for routing.
>
> The ZKCP here is what lets LN endpoint to interact with the chain and lets
> off-to-on/on-to-offchain bridges to be trustless.
>
> off/onchain bridges are important as they provide:
>
> 1.  Incoming channels: Get some onchain funds from cold storage (or
> borrowed), create an outgoing channel (likely to the bridge for best chance
> of working), then ask bridge for an invoice to send money to an address you
> control onchain. The outgoing channel capacity becomes incoming capacity,
> you get (most of) your money back (minus fees) onchain.
> 2.  Reloading spent channels.  Give bridge an invoice and pay to the
> bridge to trigger it reloading your channel.
> 3.  Unloading full channels. If you earn so much money (minus what you
> spend on expenses, subcontractors, employees, suppliers, etc.) you can use
> the bridge to send directly to your cold storage.
>
> #1 lets us leave out double-funded channels.  #2 and #3 lets us leave out
> splice.
>
> The interaction between bridge and Lightning is simply via BOLT11
> invoices.  Those provide the ZKCP necessary to make the bridge trustless.
>
> AMP enhances such a Lightning+bridge network, since the importance of
> maximum channel capacity is reduced if a ZKCP-providing AMP is available.
> For myself, I would rather leave out AMP and double-funding and splicing
> than remove ZKCP.
>
> One could imagine a semi-trusted ZKCP service for real-world items.  Some
> semi-trusted institution provides special safeboxes for rent that can be
> unlocked either by seller private key after 1008 blocks, or by the
> recipient key and a proof-of-payment preimage (and records the preimage in
> some publicly-accessible website).  To sell a real-world item, make a
> BOLT11 invoice, bring item to a safebox, lock it with appropriate keys and
> the invoice payment hash, give BOLT11 invoice to buyer.  Buyer pays and
> gets proof-of-payment preimage, goes to safebox and gets item.  Multi-way
> trades (A wants item from B, B wants item from C, C wants item from A) are
> just compositions of ZKCP.
>
> >
> > And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> >
> > pretty, but because actually capturing it will be a saga.
>
> Bards shall sing about The Hunt for Schnorr-Eltoonicorn for ages, until
> Satoshi himself is but a vague memory of a myth long forgotten.
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
> Was referring to losing proof-of-payment; that's vital in a system without
> intermediaries.  We have to decide what the lesser evil is.

As it is now, we don't have a proper proof of payment. We have a "proof that
someone paid". A proper proof of payment would be: "proof that bob paid
carol". Aside from that, spontaneous payments are amongst the most requested
features I get from users and developers.

There're a few ways to achieve this with dlog based AMPs.

As far as hash-based AMPs go, with a bit more interaction and something like
zkboo, one can achieve stronger binding. However, we'd lose the nice "one
shot" property that dlog based AMPs allow.

> And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> pretty, but because actually capturing it will be a saga.

eltoo won't be the end-all-be-all as it comes along with several tradeoffs,
like everything else does.

Also, everything we can do with Schnorr, we can also do with ECDSA, but
today.

-- Laolu


On Wed, Jul 4, 2018 at 7:12 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > What's the nasty compromise?
> >
> > Let's also not underestimate how big of an update switching to dlog based
> > HTLCs will be.
>
> Was referring to losing proof-of-payment; that's vital in a system
> without intermediaries.  We have to decide what the lesser evil is.
>
> And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> pretty, but because actually capturing it will be a saga.
>
> Cheers,
> Rusty.
>
> > On Wed, Jul 4, 2018, 4:21 PM Rusty Russell 
> wrote:
> >
> >> Christian Decker  writes:
> >>
> >> > ZmnSCPxj via Lightning-dev 
> >> writes:
> >> >> For myself, I think splice is less priority than AMP. But I prefer an
> >> >> AMP which retains proper ZKCP (i.e. receipt of preimage at payer
> >> >> implies receipt of payment at payee, to facilitate trustless
> >> >> on-to-offchain and off-to-onchain bridges).
> >> >
> >> > Agreed, multipath routing is a priority, but I think splicing is just
> as
> >> > much a key piece to a better UX, since it allows to ignore differences
> >> > between on-chain and off-chain funds, showing just a single balance
> for
> >> > all use-cases.
> >>
> >> Agreed, we need both.  Multi-channel was a hack because splicing doesn't
> >> exist, and I'd rather not ever have to implement multi-channel :)
> >>
> >> AMP is important, but it's a nasty compromise with the current
> >> limitations.  I want to have my cake and eat it too, and I'm pretty sure
> >> it's possible once the Scnorr-Eltoonicorn arrives.
> >>
> >> Cheers,
> >> Rusty.
> >> ___
> >> Lightning-dev mailing list
> >> Lightning-dev@lists.linuxfoundation.org
> >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> >>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
What's the nasty compromise?

Let's also not underestimate how big of an update switching to dlog based
HTLCs will be.

On Wed, Jul 4, 2018, 4:21 PM Rusty Russell  wrote:

> Christian Decker  writes:
>
> > ZmnSCPxj via Lightning-dev 
> writes:
> >> For myself, I think splice is less priority than AMP. But I prefer an
> >> AMP which retains proper ZKCP (i.e. receipt of preimage at payer
> >> implies receipt of payment at payee, to facilitate trustless
> >> on-to-offchain and off-to-onchain bridges).
> >
> > Agreed, multipath routing is a priority, but I think splicing is just as
> > much a key piece to a better UX, since it allows to ignore differences
> > between on-chain and off-chain funds, showing just a single balance for
> > all use-cases.
>
> Agreed, we need both.  Multi-channel was a hack because splicing doesn't
> exist, and I'd rather not ever have to implement multi-channel :)
>
> AMP is important, but it's a nasty compromise with the current
> limitations.  I want to have my cake and eat it too, and I'm pretty sure
> it's possible once the Scnorr-Eltoonicorn arrives.
>
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Rebalancing argument

2018-07-03 Thread Olaoluwa Osuntokun
Dmytro wrote:
> Yet the question remains — are there obvious advantages of cycle
> transactions over a smart fee/routing system?

That's a good question. It may be the case that modifying the fee structure
to punish flows that further unbalance channels can simplify the problem, as
the heuristics can then target solely the fee rate. The earliest suggestion
of this I can recall was from Tadge way back in like 2015 or so. The goal
here is for a node to ideally maintain relatively balanced channels, but
charge a payment an amount that scales super-linearly when flows consume
most of their available balance.

The current fee schedule is essentially:

  base_fee + amt*rate

c-lightning and lnd (which borrowed it from c-lightning) currently use a
"risk factor" to factor in the impact of the time lock on the "weight" of an
edge when path finding. With this, the fee schedule looks like:

  (base_fee + amt*rate) + (cltv_delta * risk_factor / 1,000,000)

In the future, we may want to allow nodes to also signal how they wish the
fee to scale with the absolute CLTV of the HTLC extended to them. This would
allow them to more naturally factor in their conception of the time value of
their BTC.

Finally, if we factor in an "balance disruption" factor, the fee schedule
may look something like this:

  (base_fee + amt*rate) + (cltv_delta * risk_factor / 1,000,000) +
  gamma*f(capacity, amt)

Here f is a function whose output is proportional to the distance the
payment flow (assuming full capacity at that instance) puts the channel away
from being fully balanced, and gamma is a coefficient that allows nodes to
express the degree of penalty for unbalancing a channel. f itself is either
agreed upon by the network completely, or resembles a certain polynomial,
allowing nodes to select coefficients as they wish.
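
A worked sketch of the full schedule (Go; every parameter name is
illustrative, the 1,000,000 divisor follows the formula above, and f is
assumed quadratic purely for this example):

    // routingFee sketches the three-term schedule above. f is assumed
    // (for this example only) to be quadratic in the normalized distance
    // from a 50/50 balance after forwarding amt, so the penalty grows
    // super-linearly as a flow drains one side of the channel.
    func routingFee(baseFee, rate, amt, cltvDelta, riskFactor, gamma,
        capacity, localBalance float64) float64 {

        proportional := baseFee + amt*rate
        timeLock := cltvDelta * riskFactor / 1_000_000

        // Normalized imbalance in [-0.5, 0.5] once amt has left the channel.
        imbalance := ((localBalance - amt) - capacity/2) / capacity
        disruption := gamma * imbalance * imbalance

        return proportional + timeLock + disruption
    }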

We may want to consider moving to something like this for BOLT 1.x+ as it
allows nodes to quantify their apprehension to time locks and also
channel balance equilibrium affinity.

Alternatively, if we move to something like HORNET, then during the set up
phase, nodes can ask for an initial "quote" for a set of payment ranges,
then just use that quote for all payments sent. This allows nodes to keep
their fee schedules private (for w/e reason) and also change them at a whim
if they wish.

-- Laolu


On Sun, Jul 1, 2018 at 8:39 AM Dmytro Piatkivskyi <
dmytro.piatkivs...@ntnu.no> wrote:

> Hi Rene,
>
> Thanks for your answer!
>
> 1. The Lightning network operates under different assumptions, that is
> true. However, the difference is minor in terms of the issue posed. The
> premise for the quoted statement is that taking fees changes the nodes’
> balances, therefore selected paths affect the liquidity. In the Lightning
> network fees are very small, so the change in liquidity may be negligible.
> Moreover, intermediate nodes gain in fees, which only increases the
> liquidity.
>
> 2.A. It is too early to speculate where the privacy requirements will
> settle down. Flare suggests a mechanism of sharing the infrastructure view
> between nodes, possibly sharing weights. As the network grows routing will
> become more difficult, however we don’t know yet to which extent. It may
> organise itself in ‘domains’, so when we send a payment we know to which
> domain we are sending to, knowing the path to it beforehand. The point is
> we don’t know yet, so we can’t speculate.
>
> 2.B. That is surely an interesting aspect. HTLC locks
> temporarily downgrade the network liquidity. Now the question is how it
> changes the order of transactions and how that order change affects the
> transaction feasibility. Does it render some transactions infeasible or
> just defers them? It definitely needs a closer look.
>
> Yet the question remains — are there obvious advantages of cycle
> transactions over a smart fee/routing system? In any sense. Path lengths,
> for example. To answer that I am going to run a simulation, but also would
> appreciate your opinions.
>
> Best,
> Dmytro
>
> From: René Pickhardt 
> Date: Sunday, 1 July 2018 at 13:59
> To: Dmytro Piatkivskyi 
> Cc: lightning-dev 
> Subject: Re: [Lightning-dev] Rebalancing argument
>
> Hey Dmytro,
>
> thank your for your solid input to the discussion. I think we need to
> consider that the setting in the lightning network is not exactly
> comparable to the one described in the 2010 paper.
>
> 1st: the paper states in section 5.2: "It appears that a mathematical
> analysis of a transaction routing model where intermediate nodes charged a
> routing fee would require an entirely new approach since it would
> invalidate the cycle-reachability relation that forms the basis of our
> results."
> Since we have routing fees in the lightning network to my understanding
> the theorem and lemma you quoted in your medium post won't hold.
>
> 2nd: Even if we neglect the routing fees and assume the theorem still
> holds true we have two conditions that make the problem way more dynamic:
>  A) In the 

Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-06-25 Thread Olaoluwa Osuntokun
Hi René,

Speaking at a high level, the main difference between modifying autopilot
strategies (channel bootstrapping and maintenance) vs something like
splicing, is that the former is purely policy while the latter is actually a
protocol modification. With respect to difficulty, the first option (in lnd
at least) requires a dev to work solely on a high level (implementing a
series of "pure" interfaces); on the other hand, something like splicing
requires a bit more low-level knowledge of Bitcoin, the protocol, and also
specific details of an implementation (funding channels, signing, sync,
etc).

Splicing is likely something to be included (along with many other things on
our various wish lists) within BOLT 1.1, which will start to be "officially"
drafted late fall of this year. However of course it's possible for
implementations to start to draft up working versions, reserving a temporary
feature bit.

> The people from lightning labs told me that they have currently started
> working on splicing

Yep, I have a branch locally that has started a full async version of
splicing. Mostly started to see if any implementation level details would be
a surprise, compared to how we think it all should work in our heads.

> but even though it seems technically straight forward the protocols should
> also be formalized

Well the full async implementation may be a bit involved, depending on the
architecture of the implementation (see the second point below).

In the abstract, I'd say a splicing proposal should include the following:

  * a generic message for both splice in/out (a rough sketch follows this
    list)
 * this allows both sides to schedule/queue up possible changes,
   opportunistically piggy-backing them on top of the other side's
   initiation
 * most of the channel params can also be re-negotiated at this point,
   another upside is this effectively allows garbage collecting old
   revocation state
  * fully async splice in/out
 * async is ideal as we don't need to block channel operation for
   confirmations; this greatly improves the UX of the process
 * async splice in should use a technique similar to what Conner has
   suggested in the past [0], otherwise it would need to block :(
  * a sort of pre-announcement gossip message
 * purpose of this is to signal to other nodes "hey this channel is
   about to change outpoints, but you can keep routing through it"
 * otherwise, if this doesn't exist, then nodes would interpret the
   action as a close then open of a channel, rather than a re-allocation
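
As a rough sketch of the generic splice message from the first bullet
above, every field name below is an illustrative assumption rather than a
spec proposal:

    // spliceRequest sketches one message type covering both splice-in and
    // splice-out, letting either side queue several changes at once and
    // re-negotiate channel params (garbage collecting old revocation state).
    type spliceRequest struct {
        ChannelID      [32]byte
        FundingFeeRate uint32 // sat/kw for the replacement funding transaction

        // Positive deltas splice in, negative deltas splice out; batching
        // several queued deltas lets both sides piggy-back on one splice.
        BalanceDeltas []int64

        // Destination scripts, one per negative (splice-out) delta.
        OutputScripts [][]byte

        // Optionally re-negotiated channel parameters.
        NewToSelfDelay  *uint16
        NewDustLimitSat *uint64
    }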

Jumping down to a lower level detail, the inclusion of a sort of "scheduler"
for splicing can also allow implementations to greatly batch all their
operations. One example is using a splicing session initiated by the remote
party to open channels, send regular on-chain payments, CPFP pending
sweeps/commitments, etc.

[0]:
https://github.com/lightningnetwork/lightning-rfc/issues/280#issuecomment-388269599

-- Laolu


On Mon, Jun 25, 2018 at 3:10 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> I found a mail from 6 months ago on this list ( c.f.:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-December/000865.html
>  )
> in which it was stated that there was a plan to include a splicing protocol
> as BOLT 1.1 (On a side note I wonder whether it would make more sense to
> include splicing to BOLT 3?) I checked out the git repo and issues and
> don't see that anyone is currently working on that topic and that it hasn't
> been included yet. Am I correct?
> If no one works on this at the moment and the spec is still needed I might
> take the initiative on that one over the next weeks. If someone is working
> on this I would kindly offer my support.
>
> The background for my question: Last weekend I have been attending the 2nd
> lightninghackday in Berlin and we had quite some intensive discussions
> about the autopilot feature and splicing. (c.f. a summary can be found on
> my blog:
> https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin
> )
>
> The people from lightning labs told me that they have currently started
> working on splicing but even though it seems technically straight forward
> the protocols should also be formalized. Previously I planned working on
> improving the intelligence of the autopilot feature of the lightning
> network however on the weekend I got convinced that splicing should be much
> higher priority and the process should be specified in the lightning rfc.
>
> Also it would be nice if someone would be willing to help out improving
> the quality of the spec that I would create since it will be my first time
> adding work to such a formal rfc.
>
> best Rene
>
>
> --
> www.rene-pickhardt.de
> 
>
> Skype: rene.pickhardt
>
> mobile: +49 (0)176 5762 3618 <+49%20176%2057623618>
> 

Re: [Lightning-dev] Scriptless Scripts with ECDSA

2018-05-07 Thread Olaoluwa Osuntokun
FWIW, Conner pointed out that the initial ZK proof for the correctness of
the Paillier params (even w/ usage of bulletproofs) has multiple rounds of
interaction, iirc up to 5+ rounds (with additional pipelining).

-- Laolu

On Mon, May 7, 2018 at 5:14 PM Olaoluwa Osuntokun <laol...@gmail.com> wrote:

> Actually, just thought about this a bit more and I think it's possible to
> deploy this in unison with (or after) any sort of SS based on schnorr
> becoming possible in Bitcoin. My observation is that since both techniques
> are based on the same underlying technique (revealing a secret value in a
> signature) and they center around leveraging the onion payload to drop off
> a payment point (G*a, or G*a_1*a_2*a_3, etc), then the disclosure within
> the _links_ can be heterogeneous, as the same secret is still revealed in
> an end-to-end manner.
>
> As an illustration, consider: A <-> B <-> C. The A <-> B link could use
> the 2pc Paillier technique, while the B <-> C link could use the OG SS
> technique based on schnorr. If I'm correct, then this would mean that we
> can deploy both techniques, without worrying about fragmenting the network
> due to the existence of two similar but incompatible e2e payment routing
> schemes!
>
> -- Laolu
>
>
> On Mon, May 7, 2018 at 4:57 PM Olaoluwa Osuntokun <laol...@gmail.com>
> wrote:
>
>> Hi Pedro,
>>
>> Very cool stuff! When I originally discovered Lindell's technique, my
>> immediate thought was that we could phase this in as a way to
>> _immediately_ (no additional Script upgrades required) replace the
>> regular 2-of-2 multi-sig with a single p2wkh. The immediate advantages of
>> this would be: lower fees for opening/closing channels (as the public key
>> script and witness are smaller), openings and cooperative close
>> transactions would blend in with the anonymity set of regular p2wkh
>> transactions, and finally the htlc timeout+success transactions can be
>> made smaller as we can remove the multi-sig. The second benefit is nerfed
>> a bit if the channels are advertised, but non-advertised channels would
>> be able to take advantage of this "stealth" feature.
>>
>> The upside of the original application I had in mind is that it wouldn't
>> require any end-to-end changes, as it would only be a link level change
>> (diff output for the funding transaction). If we wanted to allow these
>> styles of channels to be used outside of non-advertised channels, then we
>> would need to update the way channels are verified in the gossip layer.
>>
>> Applying this to the realm of allowing us to use randomized payment
>> identifiers across the route is obviously much, much doper. So then the
>> question would be what the process of integrating the scheme into the
>> existing protocol would look like. The primary thing we'd need to account
>> for is the additional cryptographic overhead this scheme would add if
>> integrated. Re-reviewing the paper, there's an initial setup and
>> verification phase (which was omitted from y'alls note for brevity) which
>> both parties need to complete before the actual signing process can take
>> place. Ideally, we can piggy-back this setup on top of the existing
>> accept_channel/open_channel dance both sides need to go through in order
>> to advance the channel negotiation process today.
>>
>> Conner actually started to implement this when we first discovered the
>> scheme, so we have a pretty good feel w.r.t. the implementation of the
>> initial set of proofs. The three proofs required for the setup phase are:
>>
>>   1. A proof that the Paillier public key is well formed. In the paper
>>   they only execute this step for the party that wishes to _obtain_ the
>>   signature. In our case, since we'll need to sign for HTLCs in both
>>   directions, both parties will need to execute this step.
>>
>>   2. A dlog proof for the signing keys themselves. We already do this
>>   more or less, as if the remote party isn't able to sign with their
>>   target key, then we won't be able to update the channel, or even create
>>   a valid commitment in the first place.
>>
>>   3. A proof that the value encrypted (the Paillier ciphertext) is
>>   actually the dlog of the public key to be used for signing. (As an
>>   aside, this is the part of the protocol that made me do a double take
>>   when first reading it: using one cryptosystem to encrypt the private
>>   key of another cryptosystem in order to construct a 2pc to allow
>>   signing in the latter cryptosystem! soo clever!)

Re: [Lightning-dev] Scriptless Scripts with ECDSA

2018-05-07 Thread Olaoluwa Osuntokun
Hi Pedro,

Very cool stuff! When I originally discovered Lindell's technique, my
immediate thought was that we could phase this in as a way to _immediately_
(no additional Script upgrades required) replace the regular 2-of-2
multi-sig with a single p2wkh. The immediate advantages of this would be:
lower fees for opening/closing channels (as the public key script and
witness are smaller), openings and cooperative close transactions would
blend in with the anonymity set of regular p2wkh transactions, and finally
the htlc timeout+success transactions can be made smaller as we can remove
the multi-sig. The second benefit is nerfed a bit if the channels are
advertised, but non-advertised channels would be able to take advantage of
this "stealth" feature.

The upside of the original application I had in mind is that it wouldn't
require any end-to-end changes, as it would only be a link level change
(diff output for the funding transaction). If we wanted to allow these
styles of channels to be used outside of non-advertised channels, then we
would need to update the way channels are verified in the gossip layer.

Applying this to the realm of allowing us to use randomized payment
identifiers across the route is obviously much, much doper. So then the
question would be what the process of integrating the scheme into the
existing protocol would look like. The primary thing we'd need to account
for is the additional cryptographic overhead this scheme would add if
integrated. Re-reviewing the paper, there's an initial setup and
verification phase (which was omitted from y'alls note for brevity) which
both parties need to complete before the actual signing process can take
place. Ideally, we can piggy-back this setup on top of the existing
accept_channel/open_channel dance both sides need to go through in order to
advance the channel negotiation process today.

Conner actually started to implement this when we first discovered the
scheme, so we have a pretty good feel w.r.t. the implementation of the
initial set of proofs. The three proofs required for the setup phase are:

  1. A proof that the Paillier public key is well formed. In the paper
  they only execute this step for the party that wishes to _obtain_ the
  signature. In our case, since we'll need to sign for HTLCs in both
  directions, both parties will need to execute this step.

  2. A dlog proof for the signing keys themselves. We already do this more
  or less, as if the remote party isn't able to sign with their target key,
  then we won't be able to update the channel, or even create a valid
  commitment in the first place.

  3. A proof that the value encrypted (the Paillier ciphertext) is actually
  the dlog of the public key to be used for signing. (As an aside, this is
  the part of the protocol that made me do a double take when first reading
  it: using one cryptosystem to encrypt the private key of another
  cryptosystem in order to construct a 2pc to allow signing in the latter
  cryptosystem! soo clever!)
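
For flavor, here is a toy, textbook Paillier encryption (Go, g = n+1
variant) of a message such as a secp256k1 private-key scalar. This sketches
only the "encrypt one cryptosystem's key under another" step; a real
deployment would also need the three proofs above, plus a coprimality check
on the blinding factor:

    package sketch

    import (
        "crypto/rand"
        "math/big"
    )

    // paillierEncrypt computes c = g^m * r^n mod n^2 for public key n,
    // message m < n, using the common g = n+1 choice.
    func paillierEncrypt(n, m *big.Int) (*big.Int, error) {
        nSquared := new(big.Int).Mul(n, n)

        // Random blinding factor r in [0, n); a real implementation would
        // retry until gcd(r, n) == 1.
        r, err := rand.Int(rand.Reader, n)
        if err != nil {
            return nil, err
        }

        // With g = n+1, g^m mod n^2 simplifies to 1 + m*n mod n^2.
        gm := new(big.Int).Add(big.NewInt(1), new(big.Int).Mul(m, n))
        gm.Mod(gm, nSquared)

        rn := new(big.Int).Exp(r, n, nSquared)
        c := new(big.Int).Mul(gm, rn)
        return c.Mod(c, nSquared), nil
    }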

First, we'll examine the initial proof. This only needs to be done once by
both parties AFAICT. As a result, we may be able to piggyback this onto the
initial channel funding steps. Reviewing the paper cited in the Lindell
paper [1], it appears this would take 1 RTT, so this shouldn't result in any
additional round trips during the funding process. We should be able to use
a Paillier modulus of 2048 bits, so nothing too crazy. This would just
result in a slightly bigger opening message.

Skipping the second proofs as it's pretty standard.

The third proof as described (Section 6 of the Lindell paper) is
interactive. It also contains a ZK range proof as a sub-protocol which, as
described in Appendix A, is also interactive. However, it was pointed out to
us by Omer Shlomovits on the lnd slack that we can actually replace their
custom range proofs with Bulletproofs. This would make this section
non-interactive, allowing the proof itself to take 1.5 RTT AFAICT.
Additionally, this would only need to be done once at the start, as AFAICT
we can re-use the encryption of the secp256k1 private key of both parties.

The current channel opening process requires 2 RTT, so it seems that we'd be
able to easily piggy-back all the opening proofs on top of the existing
funding protocol. The main cost would be the increased size of these opening
messages, and also the additional computational cost of operations within
the Paillier modulus and the new range proof.

The additional components that would need to be modified are the process of
adding+settling an HTLC, and also the onion payload that drops off the point
whose dlog is r_1*alpha. Within the current protocol, adding and settling an
HTLC are more or less non-interactive: we have a single message for each,
which is then staged to be committed in new commitments for both parties.
With this new scheme (if I follow it correctly), adding an HTLC now requires
N RTT:
  1. Alice sends A = G*alpha to Bob. 

Re: [Lightning-dev] Receiving via unpublished channels

2018-05-07 Thread Olaoluwa Osuntokun
AFAIK, all the other implementations already do this (lnd does at least
[1]). Otherwise, it wouldn't be possible to properly utilize routing
hints.

> I want to ask the other LN implementations (lnd, eclair, ucoin, lit)

As an aside, what's "ucoin"? I searched for a bit and didn't find anything
notable.

[1]:
https://github.com/lightningnetwork/lnd/blob/master/discovery/gossiper.go#L1747

On Thu, Apr 26, 2018 at 4:35 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> While implementing support for `r` field in invoices, I stumbled upon some
> issues regarding *creating* invoices with `r` fields.
>
> In order to receive via an unpublished channel, we need to know what
> onLightning fees the other side of that channel wants to charge.  We cannot
> use our own onLightning fees because our fees apply if we were forwarding
> to the other side.
>
> However, in case of an unpublished channel, we do not send
> channel_announcement, and in that case we do not send channel_update.  So
> the other side of the channel never informs us of the onLightning fees they
> want to charge if we would receive funds by this channel.
>
> An idea we want to consider is to simply send `channel_update` as soon as
> we lock in the channel:
> https://github.com/ElementsProject/lightning/pull/1330#issuecomment-383931817
>
> I want to ask the other LN implementations (lnd, eclair, ucoin, lit) if we
> should consider standardizing this behavior (i.e. send `channel_update`
> after lockin  regardless of published/unpublished state).  It seems
> back-compatible: software which does not expect this behavior will simply
> drop the `channel_update` (as they do not follow a `channel_announcement`).
>
> In any case, what was the intended way to get the onLightning fee rates to
> put into invoice `r` fields for private routes?
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Trustless WatchTowers?

2018-04-16 Thread Olaoluwa Osuntokun
Hi ZmnSCPxj,

> It seems to me, that the only safe way to implement a trustless
> WatchTower, is for the node to generate a fully-signed justice
> transaction, IMMEDIATELY after every commitment transaction is revoked,
> and transmit it to the WatchTower.

No, one doesn't need to transmit the entire justice transaction. Instead,
the client simply sends out the latest items in the script template, and a
series of _signatures_ for the various breach outputs. The pre-generated
signature means that the server is *forced* to reproduce the justice
transaction that satisfies the latest template and signature. Upfront, free
parameters such as breach bonus (or w/e else) can be negotiated.

> The WatchTower would have to store each and every justice transaction it
> received, and would not be able to compress it or use various techniques
> to store data efficiently.

In our current implementation, we've abandoned the "savings" from
compressing the shachain/elkrem tree. When one factors in the space
complexity due to *just* the commitment signatures, the savings from
compression become less attractive. Going a step farther, once you factor in
the space complexity of the 2-stage HTLC claims, the savings from
compressing the revocation tree become insignificant.

It's also worth pointing out that if the server is able to compress the
revocation tree, then they're necessarily linking new breach payloads with a
particular channel. Another downside is that if you use revocation tree
compression, then all updates *must* be sent in order, and updates cannot be
*skipped*.

As a result of these downside, our current implementation goes back to the
ol' "encrypted blob" approach. One immediate benefit with this approach is
that the outsourcing protocol isn't so coupled with the current _commitment
protocol_. Instead, the internal payload can be typed, allowing the server
to dispatch the proper breach protocol based on the commitment type. The
blob approach can also support a "swap" protocol which is required for
commitment designs that allow for O(1) outsourcer state per-client, like the
scheme I presented at the last Scaling Bitcoin.
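
A minimal sketch of that encrypted-blob payload (Go; the 16-byte hint and
txid-derived key follow the general idea described here, while the exact
construction and payload layout are assumptions):

    package sketch

    import (
        "github.com/btcsuite/btcd/chaincfg/chainhash"
        "golang.org/x/crypto/chacha20poly1305"
    )

    // encryptBlob builds an outsourcing payload: the tower indexes sessions
    // by the first half of the breach txid, and can only decrypt the blob
    // once the full txid is observed on-chain, since the 32-byte txid
    // doubles as the symmetric key.
    func encryptBlob(breachTxid chainhash.Hash, payload []byte) (hint [16]byte, blob []byte, err error) {
        copy(hint[:], breachTxid[:16])

        aead, err := chacha20poly1305.New(breachTxid[:])
        if err != nil {
            return hint, nil, err
        }
        nonce := make([]byte, aead.NonceSize()) // zero nonce: each key is single-use
        return hint, aead.Seal(nil, nonce, payload, hint[:]), nil
    }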

-- Laolu


On Sun, Apr 15, 2018 at 8:32 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Nicolas Dorier was requesting additional hooks in c-lightning for a simple
> WatchTower system:
> https://github.com/ElementsProject/lightning/issues/1353
>
> Unfortunately I was only able to provide an interface which requires a
> *trusted* WatchTower.  Trust is of course a five-letter word and should not
> be used in polite company.
>
> My key problem is that I provide enough information to the WatchTower for
> the WatchTower to be able to create the justice transaction by itself.  If
> so, the WatchTower could just make the justice transaction output to itself
> and the counterparty, so that the WatchTower and the counterparty can
> cooperate to steal the channel funds: the counterparty publishes a revoked
> transaction, the WatchTower writes a justice transaction on it that splits
> the earnings between itself and the counterparty.
>
> It seems to me, that the only safe way to implement a trustless
> WatchTower, is for the node to generate a fully-signed justice transaction,
> IMMEDIATELY after every commitment transaction is revoked, and transmit it
> to the WatchTower.  The WatchTower would have to store each and every
> justice transaction it received, and would not be able to compress it or
> use various techniques to store data efficiently.  The WatchTower would not
> have enough information to regenerate justice transactions (and in
> particular would not be able to create a travesty-of-justice transaction
> that pays out to itself rather than the protected party).  In practice this
> would require that node software also keep around those transactions until
> some process has ensured that the WatchTower has received the justice
> transactions.
>
> Is there a good way to make trustless WatchTowers currently or did this
> simply not reach BOLT v1.0?
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Improving the initial gossip sync

2018-02-25 Thread Olaoluwa Osuntokun
> With that said, this should instead be a distinct `chan_update_horizon`
> message (or w/e name). If a particular bit is set in the `init` message,
> then the next message *both* sides send *must* be `chan_update_horizon`.

Tweaking this a bit: if we make it so that *no* channel updates are sent at
all unless the other side sends this message, then both parties can
precisely control their initial load, and even whether they *want*
channel_update messages at all.

Purely routing nodes don't need any updates at all. In the case they wish to
send a payment (assumed to be infrequent in this model), they'll get the
latest update after their first failure.

Similarly, leaf/edge nodes can opt to receive the latest updates if they
wish to minimize payment latency due to routing failures that are the result
of dated information.

IMO, the only case where a node would want the most up to date link policy
state is for optimization/analysis, or to minimize payment latency at the
cost of additional load.
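
A sketch of what such a horizon/filter could look like on the wire and in
forwarding logic (Go; the message and field names are illustrative,
incorporating the low+range timestamp idea from the discussion below):

    // chanUpdateHorizon sketches the proposed filter message: a peer only
    // forwards channel_updates whose timestamps fall within
    // [FirstTimestamp, FirstTimestamp+TimestampRange).
    type chanUpdateHorizon struct {
        ChainHash      [32]byte
        FirstTimestamp uint32
        TimestampRange uint32
    }

    // shouldForward returns false when no horizon was ever received,
    // matching the "send nothing unless asked" behavior described above.
    func (h *chanUpdateHorizon) shouldForward(updateTimestamp uint32) bool {
        if h == nil {
            return false
        }
        return updateTimestamp >= h.FirstTimestamp &&
            updateTimestamp-h.FirstTimestamp < h.TimestampRange
    }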

--Laolu

On Fri, Feb 23, 2018 at 4:45 PM Olaoluwa Osuntokun <laol...@gmail.com>
wrote:

> Hi Rusty,
>
> > 1. query_short_channel_id
> > IMPLEMENTATION: trivial
>
> *thumbs up*
>
> > 2. query_channel_range/reply_channel_range
> > IMPLEMENTATION: requires channel index by block number, zlib
>
> For the sake of expediency of deployment, if we add a byte (or two) to
> denote the encoding/compression scheme, we can immediately roll out the
> vanilla (just list the ID's), then progressively roll out more
> context-specific optimized schemes.
>
> > 3. A gossip_timestamp field in `init`
> > This is a new field appended to `init`: the negotiation of this feature
> > bit overrides `initial_routing_sync`
>
> As I've brought up before, from my PoV, we can't append any additional
> fields to the init message as it already contains *two* variable sized
> fields (and no fixed size fields). Aside from this, it seems that the
> `init` message should be simply for exchanging versioning information,
> which may govern exactly *which* messages are sent after it. Otherwise, by
> adding _additional_ fields to the `init` message, we paint ourselves in a
> corner and can never remove it. Compare that to using the `init` message
> to set up the initial session context, where we can safely add other bits
> to nullify or remove certain expected messages.
>
> With that said, this should instead be a distinct `chan_update_horizon`
> message (or w/e name). If a particular bit is set in the `init` message,
> then the next message *both* sides send *must* be `chan_update_horizon`.
>
> Another advantage of making this a distinct message is that either party
> can at any time update this horizon/filter to ensure that they only
> receive the *freshest* updates. Otherwise, one can imagine a very
> long-lived connection (say weeks) where the remote party keeps sending me
> very dated updates (wasting bandwidth) when I only really want the
> *latest*.
>
> This can incorporate decker's idea about having a high+low timestamp. I
> think this is desirable as then for the initial sync case, the receiver can
> *precisely* control their "verification load" to ensure they only process a
> particular chunk at a time.
>
>
> Fabrice wrote:
> > We could add a `data` field which contains zipped ids like in
> > `reply_channel_range` so we can query several items with a single
> > message?
>
> I think this is an excellent idea! It would allow batched requests in
> response to a channel range message. I'm not so sure we need to jump
> *straight* to compressing everything however.
>
> > We could add an additional `encoding_type` field before `data` (or it
> > could be the first byte of `data`)
>
> Great minds think alike :-)
>
>
> If we're in rough agreement generally about this initial "kick can"
> approach, I'll start implementing some of this in a prototype branch for
> lnd. I'm very eager to solve the zombie churn, and initial burst that can
> be very hard on light clients.
>
> -- Laolu
>
>
> On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin <fabrice.dro...@acinq.fr>
> wrote:
>
>> On 20 February 2018 at 02:08, Rusty Russell <ru...@rustcorp.com.au>
>> wrote:
>> > Hi all,
>> >
>> > This consumed much of our lightning dev interop call today!  But
>> > I think we have a way forward, which is in three parts, gated by a new
>> > feature bitpair:
>>
>> We've built a prototype with a new feature bit `channel_range_queries`
>> and the following logic:
>> When you receive their init message and check their local features
>> - if they set `initial_routing_sync` and `channe

Re: [Lightning-dev] Improving the initial gossip sync

2018-02-23 Thread Olaoluwa Osuntokun
Hi Rusty,

> 1. query_short_channel_id
> IMPLEMENTATION: trivial

*thumbs up*

> 2. query_channel_range/reply_channel_range
> IMPLEMENTATION: requires channel index by block number, zlib

For the sake of expediency of deployment, if we add a byte (or two) to
denote the encoding/compression scheme, we can immediately roll out the
vanilla (just list the IDs), then progressively roll out more
context-specific optimized schemes.
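
A sketch of what that byte buys us, in Go (the constants mirror the
encodings the spec eventually adopted for short channel ID lists, 0x00 =
raw and 0x01 = zlib; everything else here is illustrative):

    package gossip

    import (
        "bytes"
        "compress/zlib"
        "encoding/binary"
        "fmt"
    )

    const (
        EncodingRaw  byte = 0x00 // plain array of 8-byte IDs
        EncodingZlib byte = 0x01 // zlib-compressed array
    )

    // EncodeShortChanIDs serializes short channel IDs under the requested
    // scheme, prefixing the payload with the encoding byte so new schemes
    // can roll out later without a new message type.
    func EncodeShortChanIDs(ids []uint64, enc byte) ([]byte, error) {
        var raw bytes.Buffer
        for _, id := range ids {
            binary.Write(&raw, binary.BigEndian, id)
        }
        switch enc {
        case EncodingRaw:
            return append([]byte{EncodingRaw}, raw.Bytes()...), nil
        case EncodingZlib:
            var buf bytes.Buffer
            buf.WriteByte(EncodingZlib)
            zw := zlib.NewWriter(&buf)
            if _, err := zw.Write(raw.Bytes()); err != nil {
                return nil, err
            }
            if err := zw.Close(); err != nil {
                return nil, err
            }
            return buf.Bytes(), nil
        default:
            return nil, fmt.Errorf("unknown encoding: %x", enc)
        }
    }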

> 3. A gossip_timestamp field in `init`
> This is a new field appended to `init`: the negotiation of this feature
> bit overrides `initial_routing_sync`

As I've brought up before, from my PoV, we can't append any additional
fields to the `init` message as it already contains *two* variable sized
fields (and no fixed size fields). Aside from this, it seems that the
`init` message should be simply for exchanging versioning information, which
may govern exactly *which* messages are sent after it. Otherwise, by adding
_additional_ fields to the `init` message, we paint ourselves into a corner
and can never remove them. Compare this to using the `init` message to set
up the initial session context, where we can safely add other bits to
nullify or remove certain expected messages.

With that said, this should instead be a distinct `chan_update_horizon`
message (or w/e name). If a particular bit is set in the `init` message,
then the next message *both* sides send *must* be `chan_update_horizon`.

Another advantage of making this a distinct message is that either party
can at any time update this horizon/filter to ensure that they only receive
the *freshest* updates. Otherwise, one can imagine a very long-lived
connection (say weeks) where the remote party keeps sending me very dated
updates (wasting bandwidth) when I only really want the *latest*.

This can incorporate decker's idea about having a high+low timestamp. I
think this is desirable as then for the initial sync case, the receiver can
*precisely* control their "verification load" to ensure they only process a
particular chunk at a time.
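
A sketch of that high+low idea in Go (field names are illustrative; the
spec later standardized this shape as `gossip_timestamp_filter`, with a
first timestamp plus a range):

    package gossip

    // TimestampFilter bounds which updates a peer wants, as a [low, high)
    // window expressed as a start time plus a range.
    type TimestampFilter struct {
        FirstTimestamp uint32
        TimestampRange uint32
    }

    // Matches reports whether an update falls inside the requested window.
    func (f TimestampFilter) Matches(ts uint32) bool {
        return ts >= f.FirstTimestamp && ts-f.FirstTimestamp < f.TimestampRange
    }

    // NextChunk advances the window, letting a light client pull history
    // one bounded slice at a time to control its verification load.
    func (f TimestampFilter) NextChunk() TimestampFilter {
        return TimestampFilter{
            FirstTimestamp: f.FirstTimestamp + f.TimestampRange,
            TimestampRange: f.TimestampRange,
        }
    }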


Fabrice wrote:
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single message?

I think this is an excellent idea! It would allow batched requests in
response to a channel range message. I'm not so sure we need to jump
*straight* to compressing everything however.

> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)

Great minds think alike :-)


If we're in rough agreement generally about this initial "kick can"
approach, I'll start implementing some of this in a prototype branch for
lnd. I'm very eager to solve the zombie churn, and initial burst that can be
very hard on light clients.

-- Laolu


On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin 
wrote:

> On 20 February 2018 at 02:08, Rusty Russell  wrote:
> > Hi all,
> >
> > This consumed much of our lightning dev interop call today!  But
> > I think we have a way forward, which is in three parts, gated by a new
> > feature bitpair:
>
> We've built a prototype with a new feature bit `channel_range_queries`
> and the following logic:
> When you receive their init message and check their local features
> - if they set `initial_routing_sync` and `channel_range_queries` then
> do nothing (they will send you a `query_channel_range`)
> - if they set `initial_routing_sync` and not `channel_range_queries`
> then send your routing table (as before)
> - if you support `channel_range_queries` then send a
> `query_channel_range` message
>
> This way new and old nodes should be able to understand each other
>
> > 1. query_short_channel_id
> > =
> >
> > 1. type: 260 (`query_short_channel_id`)
> > 2. data:
> >* [`32`:`chain_hash`]
> >* [`8`:`short_channel_id`]
>
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single
> message ?
>
> > 1. type: 262 (`reply_channel_range`)
> > 2. data:
> >* [`32`:`chain_hash`]
> >* [`4`:`first_blocknum`]
> >* [`4`:`number_of_blocks`]
> >* [`2`:`len`]
> >* [`len`:`data`]
>
> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)
>
> > Appendix A: Encoding Sizes
> > ==
> >
> > I tried various obvious compression schemes, in increasing complexity
> > order (see source below, which takes stdin and spits out stdout):
> >
> > Raw = raw 8-byte stream of ordered channels.
> > gzip -9: gzip -9 of raw.
> > splitgz: all blocknums first, then all txnums, then all outnums,
> then gzip -9
> > delta: CVarInt encoding:
> blocknum_delta,num,num*txnum_delta,num*outnum.
> > deltagz: delta, with gzip -9
> >
> > Corpus 1: LN mainnet dump, 1830 channels.[1]
> >
> > Raw: 14640 bytes
> > gzip -9: 6717 bytes
> >   
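
The `channel_range_queries` negotiation logic quoted above reduces to a
small decision table; here is a Go rendering (illustrative names, with the
conditions paraphrased from the prototype description):

    package gossip

    // RemoteFeatures are the relevant bits from the peer's init message.
    type RemoteFeatures struct {
        InitialRoutingSync  bool
        ChannelRangeQueries bool
    }

    type Action int

    const (
        DoNothing             Action = iota // they will query us instead
        DumpRoutingTable                    // legacy full dump, as before
        SendQueryChannelRange               // new range-based sync
    )

    // OnInit picks the initial-sync behavior once features are exchanged,
    // so that new and old nodes understand each other.
    func OnInit(remote RemoteFeatures, localRangeQueries bool) Action {
        switch {
        case remote.InitialRoutingSync && remote.ChannelRangeQueries:
            return DoNothing
        case remote.InitialRoutingSync:
            return DumpRoutingTable
        case localRangeQueries && remote.ChannelRangeQueries:
            return SendQueryChannelRange
        default:
            return DoNothing
        }
    }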

Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning

2018-02-06 Thread Olaoluwa Osuntokun
Hi ZmnSCPxj,

> This is excellent work!

Thanks!

> I think, a `globalfeatures` odd bit could be used for this.  As it is
> end-to-end, `localfeatures` is not appropriate.

Yep, it would need to be a global feature bit. In the case that we're
sending to a destination which isn't publicly advertised, then perhaps an
extension to BOLT-11 could be made to signal receiver support.

> I believe, currently, fees do not have this super-linear component

Yep they don't. Arguably, we should also have a component that scales
according to the proposed CLTV value of the outgoing HTLC. At Scaling
Bitcoin Stanford, Aviv Zohar gave a talk titled "How to Charge Lightning"
where the authors analyzed the possible evolution of fees on the network
(and also suggested adding this super-linear component to extend the
lifetime of channels).  However, the talk itself focused on a very simple
"mega super duper hub" topology. Towards the end he alluded to a forthcoming
paper that had more comprehensive analysis of more complex topologies. I
look forward to the publication of their finalized work.

> Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting
> the payment over multiple flows will be, very likely, more expensive,
> compared to using a single flow.

Well, it remains to be seen how the fee structure on mainnet emerges once
the network is fully bootstrapped. AFAIK, most nodes running on mainnet atm
are using the default fee schedules for their respective implementations. For
example, the default fee_base_msat for lnd is 1000 msat (1 satoshi).
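
To make the base-fee penalty concrete, a toy calculation in Go (route
length, amounts, and the proportional rate are illustrative; the 1000 msat
base fee is lnd's default mentioned above):

    package main

    import "fmt"

    // routeFee sums per-hop fees: fee_base_msat plus the proportional
    // component (fee_proportional_millionths) at every hop.
    func routeFee(amtMsat, hops, baseMsat, propMillionths uint64) uint64 {
        var fee uint64
        for i := uint64(0); i < hops; i++ {
            fee += baseMsat + amtMsat*propMillionths/1_000_000
        }
        return fee
    }

    func main() {
        const amt = 1_000_000_000 // 1,000,000 sat, in msat
        single := routeFee(amt, 3, 1000, 1)      // one 3-hop flow: 6,000 msat
        split := 4 * routeFee(amt/4, 3, 1000, 1) // four flows: 15,000 msat
        // Proportional fees are identical (3,000 msat either way); the
        // extra 9,000 msat is purely the duplicated base fees.
        fmt.Println(single, split)
    }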

> I believe the `realm` byte is intended for this.

The realm byte is meant to signal "forward this to the dogecoin channel".
ATM, we just default to 0 as "Bitcoin". However, the byte itself only really
needs significance between the sender and the intermediate node. So there
isn't necessarily pressure to have a globally synchronized set of realm
bytes.

> Thus, you can route over nodes that are unaware of AMP, and only provide
> an AMP realm byte to the destination node, who is able to reconstruct
> your AMP data as per your algorithm.

Yes, the intermediate nodes don't need to be aware of the end-to-end
protocol. For the final hop, there are actually 53 free bytes (before one
needs to signal the existence of EOBs):

  * 1 byte realm
  * 8 bytes next addr (all zeroes to signal final dest)
  * 32 bytes hmac (also all zeroes for the final dest)
  * 12 bytes padding

So any combo of these bytes can be used to signal more advanced protocols to
the final destination.
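
In Go terms, the legacy (pre-TLV) fixed-size hop payload looks roughly like
this (layout per BOLT 4; the "free" tally below is the 53 bytes from the
list above):

    package sphinx

    // hopData mirrors the fixed per-hop payload. At the final hop, NextAddr
    // and HMAC are all zeroes by definition, so together with the realm and
    // padding they carry no required information.
    type hopData struct {
        Realm    [1]byte  // free: meaning is sender/receiver defined
        NextAddr [8]byte  // free at final hop: zeroes mean "last hop"
        AmtToFwd [8]byte  // still needed: amount for the final HTLC
        OutCLTV  [4]byte  // still needed: expiry for the final HTLC
        Padding  [12]byte // free: unused filler
        HMAC     [32]byte // free at final hop: zeroes mean "no more hops"
    }

    // freeBytesAtFinalHop re-derives the figure above: 1+8+12+32 = 53.
    func freeBytesAtFinalHop(h hopData) int {
        return len(h.Realm) + len(h.NextAddr) + len(h.Padding) + len(h.HMAC)
    }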


A correction from the prior email description:

> We can further modify our usage of the per-hop payloads to send
> (H(BP), s_i) to consume most of the EOB sent from sender to receiver.

This should actually be (H(s_0 || s_1 || ...), s_i). So we still allow them
to check this fingerprint to see if they have all the final shares, but
don't allow them to preemptively pull all the payments.
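
A sketch of the corrected construction in Go (XOR share-splitting as in the
original AMP outline; helper names are illustrative): the receiver can match
partial payments by the fingerprint, but cannot reconstruct the base
preimage, and thus cannot pull any HTLC, until every share has arrived.

    package amp

    import (
        "bytes"
        "crypto/rand"
        "crypto/sha256"
    )

    // splitShares XOR-splits the base preimage bp into n shares: the first
    // n-1 are random, the last is chosen so all shares XOR back to bp.
    func splitShares(bp [32]byte, n int) ([][32]byte, error) {
        shares := make([][32]byte, n)
        last := bp
        for i := 0; i < n-1; i++ {
            if _, err := rand.Read(shares[i][:]); err != nil {
                return nil, err
            }
            for j := range last {
                last[j] ^= shares[i][j]
            }
        }
        shares[n-1] = last
        return shares, nil
    }

    // fingerprint computes H(s_0 || s_1 || ...), sent alongside each share
    // so the receiver knows when the set is complete.
    func fingerprint(shares [][32]byte) [32]byte {
        var buf bytes.Buffer
        for _, s := range shares {
            buf.Write(s[:])
        }
        return sha256.Sum256(buf.Bytes())
    }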


-- Laolu


On Mon, Feb 5, 2018 at 11:12 PM ZmnSCPxj  wrote:

> Good morning Laolu,
>
> This is excellent work!
>
> Some minor comments...
>
>
> (Atomic Multi-path Payments). It can be experimented with on Lightning
> *today* with the addition of a new feature bit to gate this new feature.
> The beauty of the scheme is that it requires no fundamental changes to the
> protocol as is now, as the negotiation is strictly *end-to-end* between
> sender and receiver.
>
>
> I think, a `globalfeatures` odd bit could be used for this.  As it is
> end-to-end, `localfeatures` is not appropriate.
>
>   - Potential fee savings for larger payments, contingent on there being a
> super-linear component to routed fees. It's possible that with
> modifications to the fee schedule, it's actually *cheaper* to send
> payments over multiple flows rather than one giant flow.
>
>
> I believe, currently, fees do not have this super-linear component.  Indeed,
> the existence of per-hop fees (`fee_base_msat`) means, splitting the
> payment over multiple flows will be, very likely, more expensive, compared
> to using a single flow.  Tiny roundoffs in computing the proportional fees
> (`fee_proportional_millionths`) may make smaller flows give a slight fee
> advantage, but I think the multiplication of per-hop fees will dominate.
>
>
>   - Using smaller payments increases the set of possible paths a partial
> payment could have taken, which reduces the effectiveness of static
> analysis techniques involving channel capacities and the plaintext
> values being forwarded.
>
>
> Strongly agree!
>
>
> In order to include the three-tuple within the per-hop payload for the
> final destination, we repurpose the _first_ byte of the un-used padding
> bytes in the payload to signal version 0x01 of the AMP protocol (note this
> is a PoC outline, we would need to standardize signalling of these 12
> bytes to support other protocols).
>
>
> I believe the `realm` byte is intended for this.  Intermediate nodes do

Re: [Lightning-dev] lnd on bitcoind

2018-01-31 Thread Olaoluwa Osuntokun
Segwit has been merged into btcd for some time now. It's also possible to
run with bitcoind. I encourage you to check out the documentation:
https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md

In lnd, the chain backend has already been abstracted[1]. This is what
allows it to run with any of the three supported chain backends (btcd,
bitcoind, neutrino). I invite you to continue this conversation on #lnd on
Freenode.

[1]: https://github.com/lightningnetwork/lnd/blob/master/lnwallet/interface.go
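
For a flavor of what that abstraction boils down to (a deliberately
trimmed, illustrative interface, not lnd's actual lnwallet API):

    package chainio

    // BlockStamp pairs a block height with its hash.
    type BlockStamp struct {
        Height int32
        Hash   [32]byte
    }

    // ChainBackend is the minimal surface the rest of a node needs; btcd,
    // bitcoind, and neutrino each sit behind an implementation of it.
    type ChainBackend interface {
        BestBlock() (BlockStamp, error)
        GetBlock(hash [32]byte) ([]byte, error)
        SendRawTransaction(rawTx []byte) error
    }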

On Wed, Jan 31, 2018 at 12:23 PM Benjamin Mord  wrote:

> Hi,
>
> I'm not finding evidence of segwit in btcd, yet choice of golang is
> appealing to me. Can one run lnd on bitcoind?
>
> More generally speaking, is there a plan for the layer 2 / layer 1
> protocol binding to be abstracted away from the implementation on either
> side, via SPI or such?
>
> Thanks,
> Ben


Re: [Lightning-dev] General question on routing difficulties

2017-11-27 Thread Olaoluwa Osuntokun
Hi Pedro,

I came across this paper a few weeks ago, skimmed it lightly, and noted a
few interesting aspects I wanted to dig into later. Your email reminded me
to re-read the paper, so thanks for that! Before reading the paper, I
wasn't aware of the concept of coordinate embedding, nor how that could be
leveraged in order to provide sender+receiver privacy in a payment network
using a distance-vector-like routing system. Very cool technique!


After reading the paper again, my current conclusion is that while the
protocol presents some novel traits in the design of a routing system for
payment channel based networks, it lends itself much better to a
closed-membership credit network, such as Ripple (which is the focus of
the paper).


In Ripple, there are only a handful of gateways, and clients that seek to
interact with the network must choose their gateways *very* carefully,
otherwise consensus faults can occur, violating safety properties of the
network. It would appear that this gateway model translates nicely to
the concept of landmarks that the protocol is strongly dependent on.
Ideally, each gateway would be a landmark, and as there are a very small
number of gateways within Ripple (as you must be admitted to be a verified
gateway in the network), then parameter L (the total number of landmarks)
is kept small which minimizes routing overhead, the average path-length,
etc.


When we compare Ripple to LN, we find that the two networks are nearly
polar opposites of each other. LN is an open-membership network that
requires zero initial configuration by central administrator(s). It more
closely resembles a *debit* network (a series of tubes of money), as the
funds within channels must be pre-committed in order to establish a link
between two nodes, and cannot be increased without an additional on-chain
control transaction (to add or remove funds). Additionally, AFAIK (I'm no
expert on Ripple of course), there's no concept of fees within the
network, while within LN the fee structure is a critical component of the
incentive for node operators to lift their coins onto this new layer to
provide payment routing services.  Finally, in LN we rely on time-locks
in order to ensure that all transactions are atomic which adds another set
of constraints. Ripple has no such constraint as transfers are based on
bi-lateral trust.


With that said, the primary difference between this protocol and ours is
that currently we utilize a source-routed system which requires the sender to
know "most" of the path to the destination. I say "most" as currently,
it's possible for the receiver of a payment to use a poor man's rendezvous
system to provide the sender with a set of suffix paths form what one can
consider ad-hoc landmarks. The sender can then concatenate these with
their own paths, and construct the Sphinx routing package which encodes
the full route. This itself only gives sender privacy, and the receiver
doesn't know the identity of the sender, but the sender learns the
identity of the receiver.


We have plans to achieve proper sender/receiver privacy by extending our
Sphinx usage to leverage HORNET, such that the payment descriptor (payment
request containing details of the payment) also includes several paths
from rendezvous nodes (Rodrigos) to the receiver. The rendezvous route
itself will be nested as a further Anonymous Header (AHDR) which includes
the information necessary to complete the onion circuit from Rodrigo to
the receiver. As onion routing is used, only Rodrigo can decrypt the
payload and finalize the route. With such a structure, the only nodes that
need to advertise their channels are nodes which seek to actively serve as
channel routers. All other nodes (phones, laptops, etc), don't need to
advertise their channels to the greater network, reducing the size of the
visible network, and also the storage and validation overhead. This serves
to extend the "scale ceiling" a bit.


My first question is: is it possible to adapt the protocol to allow each
intermediate node to communicate their time lock and fee preferences to the
sender? Currently, as the full path isn't known ahead of time, the sender
is unable to properly craft the timelocks to ensure safety+atomicity of
the payment. This would mean they don't know what the total timelock
should be on the first outgoing link. Additionally, as they don't know the
total path and the fee schedule of each intermediate node, then once
again, they don't know how much to send on the first outgoing link. It
would seem that one could extend the probing phase to allow backwards
communication by each intermediate node back to the sender, such that they
can properly craft a valid HTLC. This would increase the set up costs of
the protocol however, and may also increase routing failures as it's
possible incompatibilities arise at run-time between the preferences of
intermediate nodes. Additionally, routes may fail as an intermediate node
consumes too many funds as their fee, 
