Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-10-12 Thread Olaoluwa Osuntokun
Hi Fabrice,

> I believe that was a mistake: a few days ago, Arcane Research published a
> fairly detailed report on the state of the Lightning Network:
> https://twitter.com/ArcaneResearch/status/1445442967582302213.  They
> obviously did some real work there, and seem to imply that their report
> was vetted by Open Node and Lightning Labs.

Appreciate the hard work from Arcane on putting together this report. That
said, our role wasn't to review the entire report, but instead to provide
feedback on questions they had. Had we reviewed the section in question, we
would have spotted those errors and told the authors to fix them. Mistakes
happen, and we're glad it got corrected.

Also note that lnd has _never_ referred to itself as the "reference"
implementation. A few years ago some other implementations claimed that
title for themselves, but have since adopted softer language.

> So I'm proposing that lnd's source code be removed from
> https://github.com/lightningnetwork/ (and moved to
> https://github.com/lightninglabs for example, with the rest of their
> Lightning tools, but it's up to Lightning Labs).

I think it's worth briefly revisiting a bit of history here w.r.t the github
org in question. In the beginning, the lightningnetwork github org was
created by Joseph, and the lightningnetwork/paper repo was added, the
manuscript that kicked off this entire thing. Later lightningnetwork/lnd was
created, where we started to work on an initial implementation (before the
BOLTs in their current form existed), and we were added as owners.
Eventually we (devs of current impls) all met up in Milan and decided to
converge on a single specification, so we added the BOLTs to the same org,
knowing full well it was already home to lnd.

We purposefully made a _new_ lightninglabs github org as we wanted to keep
lnd, the implementation, distinct from any of our future commercial
products/services. To this day, we've architected all our paid products to
be built _on top_ of lnd, rather than within it. As a result, users always
opt into these services.

As it seems the primary grievance here is collocating an implementation of
Lightning along with the _specification_ of the protocol, and given that the
spec was added last, how about we move the spec to an independent repo owned
by the community? I currently have github.com/lightning, and would be happy
to donate it to the community, or we could create a new org like
"lightning-specs" or something similar. We could then move the spec (the
BOLTs and also potentially the bLIPs since some devs want it to be within
its own repo) there, and have it be the home for any other
community-backed/owned projects.  I think the creation of a new github
organization would also be a good opportunity to further formalize the set
of stakeholders and the general process related to the evolution of
Lightning the protocol.

Thoughts?

-- Laolu

On Fri, Oct 8, 2021 at 5:25 PM Fabrice Drouin 
wrote:

> Hello,
>
> When you navigate to https://github.com/lightningnetwork/ you find
> - the Lightning Network white paper
> - the Lightning Network specifications
> - and ... the source code for lnd!
>
> This has been an anomaly for years, which has created some confusion
> between Lightning the open-source protocol and Lightning Labs, one of
> the companies specifying and implementing this protocol, but we didn't
> do anything about it.
>
> I believe that was a mistake: a few days ago, Arcane Research
> published a fairly detailed report on the state of the Lightning
> Network: https://twitter.com/ArcaneResearch/status/1445442967582302213.
> They obviously did some real work there, and seem to imply that their
> report was vetted by Open Node and Lightning Labs.
>
> Yet in the first version that they published you’ll find this:
>
> "Lightning Labs, founded in 2016, has developed the reference client
> for the Lightning Network called Lightning Network Daemon (LND)
> They also maintain the network standards documents (BOLTs)
> repository."
>
> They changed it because we told them that it was wrong, but the fact
> that in 2021 people who took the time to do proper research, interviews,
> ... can still misunderstand that badly how the Lightning developers
> community works means that we ourselves badly underestimated how
> confusing mixing the open-source specs for Lightning and the source
> code for one of its implementations can be.
>
> To be clear, I'm not blaming Arcane Research that much for thinking
> that an implementation of an open-source protocol that is hosted with
> the white paper and specs for that protocol is a "reference"
> implementation, and thinking that since Lightning Labs maintains lnd
> then they probably maintain the other stuff too. The problem is how
> that information is published.
>
> So I'm proposing that lnd's source code be removed from
> https://github.com/lightningnetwork/ (and moved to
> https://github.com/lightninglabs for example, with the rest of their
> Lightning 

Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-22 Thread Olaoluwa Osuntokun
Hi Joost,

> The conventional approach is to create a lightning invoice on a node and
> store the invoice together with order details in a database. If the order
> then goes unfulfilled, cleaning processes remove the data from the node
> and database again.

> The problem with this setup is that it needs protection against unbounded
> generation of payment requests. There are solutions for that such as rate
> limiting, but wouldn't it be nice if invoices can be generated without the
> need to keep any state at all?

Isn't this ultimately an engineering issue? How much exactly is "too much"
in this case? Invoices are relatively small, and also don't necessarily ever
need to be written to disk assuming a slim expiration window. It's
likely the case that a service can just throw everything in Redis and call
it a day. In terms of rate limiting, a service would likely already need to
implement that at the API/service level to mitigate app-level DoS attacks.

As far as pre-images go, this can already be "stateless" by generating a
single random seed (storing that somewhere w/ a counter likely) and then
using shachain or elkrem to deterministically generate payment hashes. You
can then either use the payment_addr/secret to index into the hash chain, or
have the user send some counter extracted from the invoice as a custom
record. Similar schemes have been proposed in the past to support "offline"
vending machine payments.
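
To make that concrete, here's a rough sketch of the flavor of derivation I
mean, in Go. It substitutes a plain HMAC-over-counter for the real
shachain/elkrem construction, and the names/helpers are purely illustrative,
not lnd's actual scheme:

```
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// derivePreimage deterministically derives a payment preimage from a single
// node-level seed and a per-invoice counter. The only state the service keeps
// is the seed and the latest counter value (a stand-in for shachain/elkrem).
func derivePreimage(seed []byte, counter uint64) [32]byte {
	var idx [8]byte
	binary.BigEndian.PutUint64(idx[:], counter)

	mac := hmac.New(sha256.New, seed)
	mac.Write(idx[:])

	var preimage [32]byte
	copy(preimage[:], mac.Sum(nil))
	return preimage
}

func main() {
	seed := []byte("example-only-seed")

	// Invoice creation: hash the derived preimage to get the payment hash,
	// and embed the counter in the invoice (payment_addr/secret or a custom
	// record) so it comes back with the HTLC.
	counter := uint64(42)
	preimage := derivePreimage(seed, counter)
	payHash := sha256.Sum256(preimage[:])
	fmt.Printf("payment_hash: %x\n", payHash)

	// Settlement: the counter extracted from the incoming HTLC is enough to
	// re-derive the same preimage, with no per-invoice state kept.
	fmt.Println("preimages match:", derivePreimage(seed, counter) == preimage)
}
```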

Taking it one step further, the service could maintain a unique
elkrem/shachain state for each unique user, which would then allow them to
also collapse the pre-image into the hash chain. This lets them save space
and dynamically reproduce a given "proof that someone in the world paid"
statement (something no service/wallet seems to accept/generate in an
automated/standardized manner today).

-- Laolu


On Tue, Sep 21, 2021 at 3:08 AM Joost Jager  wrote:

> Problem
>
> One of the qualities of lightning is that it can provide light-weight,
> no-login payments with minimal friction. Games, paywalls, podcasts, etc can
> immediately present a QR code that is ready for scan and pay.
>
> Optimistically presenting payment requests does lead to many of those
> payment requests going unused. A user visits a news site and decides not to
> buy the article. The conventional approach is to create a lightning invoice
> on a node and store the invoice together with order details in a database.
> If the order then goes unfulfilled, cleaning processes remove the data from
> the node and database again.
>
> The problem with this setup is that it needs protection against unbounded
> generation of payment requests. There are solutions for that such as rate
> limiting, but wouldn't it be nice if invoices can be generated without the
> need to keep any state at all?
>
> Stateless invoices
>
> What would happen if a lightning invoice is only generated and stored
> nowhere on the recipient side? To the user, it won't make a difference.
> They would still scan and pay the invoice. When the payment arrives at the
> recipient though, two problems arise:
>
> 1. Recipient doesn't know whom or what the payment is for.
>
> This can be solved by attaching additional custom tlv records to the htlc.
> On the wire, this is all arranged for. The only missing piece is the
> ability to specify additional data for that custom tlv record in a bolt11
> invoice. One way would be to define a new tagged field for this in which
> the recipient can encode the order details.
>
> An alternative is to use the existing invoice description field and simply
> always pass that along with the htlc as a custom tlv record.
>
> A second alternative that already works today is to use part (for example
> 16 out of 32 bytes) of the payment_secret (aka payment address) to encode
> the order details in. This assumes that the secret is still secret enough
> with reduced entropy. Also there may not be enough space for every
> application.
>
> 2. Recipient doesn't know the preimage that is needed to settle the
> htlc(s).
>
> One option is to use a keysend payment or AMP payment. In that case, the
> sender includes the preimage with the htlc. Unfortunately this doesn't
> provide the sender with a proof of payment that they'd get with a regular
> lightning payment.
>
> An alternative solution is to use a deterministic preimage based on a
> (recipient node key-derived) secret, the payment secret and other relevant
> properties. This allows the recipient to derive the same preimage twice:
> Once when the lightning invoice is generated and again when a payment
> arrives.
>
> It could be something like this:
>
> payment_secret = random
> preimage = H(node_secret | payment_secret | payment_amount |
> encoded_order_details)
> invoice_hash = H(preimage)
>
> The sender sends an htlc locked to invoice_hash for payment_amount and
> passes along payment_secret and encoded_order_details in a custom tlv
> record.
>
> When the recipient receives the htlc, they 

Re: [Lightning-dev] Opening balanced channels using PSBT

2021-09-22 Thread Olaoluwa Osuntokun
Hi Ole,

It's generally known that one can use out of band transaction construction,
and the push_amt feature in the base funding protocol to simulate dual
funded channels.

The popular 'balanceofsatoshis' tool has a command (`open-balanced-channel`)
that packages up the interaction into an easier to use format. IIRC it uses
keysend to ask a peer if they'll accept one and to negotiate some of the
params.

The one thing you need to keep in mind when doing this manually is that by
default lnd will only lock the UTXOs allocated for the funding attempt for a
few minutes. As a result, you need to make sure the process is finalized
within that interval, or the UTXOs will be unlocked and you risk accidentally
double spending yourself.

Lightning Pool also uses this little trick to allow users to purchase
channels that are 50/50 balanced, and also to purchase channel leases _for_ a
third party (called sidecar channels) to help onboard them onto Lightning:
https://lightning.engineering/posts/2021-05-26-sidecar-channels/. Compared
to the above approaches, the process can be automatically batched w/ other
channels created in that epoch, and uses traits of the Pool account system
to make things atomic.

Ultimately, the balanced-ness of a channel is a transitory state (for
routing nodes; it's great for on-boarding end-users), and opening channels
like these only serves to allow the channel to _start_ in that state. If your
fees and channel policies aren't set accordingly, then it's possible that a
normal payment or balance flow shifts the channel away from equilibrium
shortly after the channel is open.

-- Laolu

On Tue, Sep 21, 2021 at 10:30 PM Ole Henrik Skogstrøm <
oleskogst...@gmail.com> wrote:

> Hi
>
> I have found a way of opening balanced channels using LND's psbt option
> when opening channels. What I'm doing is essentially just joining funded
> PSBTs before signing and submitting them. This makes it possible to open a
> balanced channel between two nodes or open a ring of balanced channels
> between multiple nodes (ROF).
>
> I found this interesting, however I don't know if this is somehow unsafe
> or for some other reason a bad idea. If not, then it could be an
> interesting alternative to only being able to open unbalanced channels.
>
> To do this efficiently, nodes need to collaborate by sending PSBTs back
> and forth to each other and doing this manually is a pain, so if this makes
> sense to do, it would be best to automate it through a client.
>
> --
> --- Here is an example of the complete flow for a single channel:
> --
>
> * Node A: generates a new address and sends address to Node B (lncli
> newaddress p2wkh)
>
> * Node A starts an Interactive channel open to Node B using psbt
> (lncli openchannel --psbt  200 100)
>
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B funds the refund transaction to Node A and sends PSBT back to
> Node A (bitcoin-cli walletcreatefundedpsbt []
> '[{"":0.01}]')
>
> * Node A joins the two PSBTs and sends it back to Node B (bitcoin-cli
> joinpsbts '["", ""]')
>
> * Node B verifies the content and signs the joined PSBT before sending it
> back to Node A (bitcoin-cli walletprocesspsbt )
>
> * Node A: Verifies the content and signs the joined PSBT (bitcoin-cli
> walletprocesspsbt )
>
> * Node A: Completes channel open by publishing the fully signed PSBT
>
>
> --
> --- Here is an example of the complete flow for a ring of channels between
> multiple nodes:
> --
>
> * Node A starts an Interactive open channel to Node B using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node A funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B starts an Interactive open channel to Node C using psbt (lncli
> openchannel --psbt --no_publish  200 100)
> * Node B funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node C starts an Interactive open channel to Node A using psbt (lncli
> openchannel --psbt  200 100)
> * Node C funds the channel address (bitcoin-cli walletcreatefundedpsbt
> [] '[{"":0.02}]')
>
> * Node B and C sends Node A their PSBTs
>
> * Node A joins all the PSBTs (bitcoin-cli joinpsbts
> '["", "",
> ""]')
>
> Using (bitcoin-cli walletprocesspsbt ):
>
> * Node A verifies and signs the PSBT and sends it to Node B (1/3 signatures)
> * Node B verifies and signs the PSBT and sends it to Node C (2/3 signatures)
> * Node C verifies and signs the PSBT (3/3 signatures) before sending it to
> Node A and B.
>
> * Node A completes channel open (no_publish)
> * Node B completes channel open (no_publish)
> * Node C completes channel open and publishes the transaction.
>
> --
> Ole Henrik Skogstrøm
>

Re: [Lightning-dev] Dropping Tor v2 onion services from node_announcement

2021-09-22 Thread Olaoluwa Osuntokun
Earlier this week I was helping a user debug a Tor related issue, and
realized (from the logs) that some newer Tor clients are already refusing to
connect out to v2 onion services.

On the lnd side, I think we'll move to disallow users from creating a v2
onion service in our next major release (0.14), and also possibly "upgrade"
them to a v3 onion service if their node supports it. I've made a tracking issue
here: https://github.com/lightningnetwork/lnd/issues/5771

I ran a naive script to gauge how much of the network is using Tor
generally, and also v2 onion services specifically, and extracted the
following stats:
```
num nodes:  12844
num tor:  8793
num num v2:  66
num num v3:  8777
```

This counts total advertised addresses, so it likely overestimates, and you
can treat this as an upper bound. Thankfully only 60 or so v2 addresses seem
to be even _advertised_ on the network, so I don't think this'll cause much
disruption.
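
For reference, the tally was roughly of this shape (a sketch assuming an
`lncli describegraph`-style JSON dump with a top-level `nodes` array carrying
`addresses` entries; the field names here are assumptions). v2 and v3 onions
are distinguishable by length alone: 16 base32 characters vs 56:

```
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type graph struct {
	Nodes []struct {
		Addresses []struct {
			Addr string `json:"addr"`
		} `json:"addresses"`
	} `json:"nodes"`
}

func main() {
	raw, err := os.ReadFile("graph.json")
	if err != nil {
		panic(err)
	}
	var g graph
	if err := json.Unmarshal(raw, &g); err != nil {
		panic(err)
	}

	var numTor, numV2, numV3 int
	for _, node := range g.Nodes {
		for _, a := range node.Addresses {
			host := strings.Split(a.Addr, ":")[0]
			if !strings.HasSuffix(host, ".onion") {
				continue
			}
			numTor++
			// v2 onion addresses are 16 base32 chars, v3 are 56.
			switch len(strings.TrimSuffix(host, ".onion")) {
			case 16:
				numV2++
			case 56:
				numV3++
			}
		}
	}
	fmt.Println("num nodes:", len(g.Nodes))
	fmt.Println("num tor:", numTor, "num v2:", numV2, "num v3:", numV3)
}
```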

Another interesting tidbit here is that: _over half_ of all advertised
addresses on the network today are onion services. I wonder how the rise of
onion service usage (many nodes being tor-only) has affected: e2e payment
latency, general connection stability, and gossip announcement propagation.

In terms of actions we need to take at the spec level, it's likely enough to
amend the section on addrs in the node_announcement message to advise
implementations to _ignore_ the v2 addr type.

-- Laolu

On Tue, Jun 1, 2021 at 3:19 PM darosior via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> It's been almost 9 months since Tor v2 hidden services have been
> deprecated.
> The Tor project will drop v2 support in about a month in the latest
> release. It will then be entirely dropped from all supported releases by
> October.
> More at https://blog.torproject.org/v2-deprecation-timeline .
>
> Bitcoin Core defaults to v3 since 0.21.0 (
> https://bitcoincore.org/en/releases/0.21.0/) and is planning to drop the
> v2 support for 0.22 (https://github.com/bitcoin/bitcoin/pull/22050),
> which means that v2 onions will gradually stop being gossiped on the
> Bitcoin network.
>
> I think we should do the same for the Lightning network, and the timeline
> is rather tight. Also, the configuration is user-facing (as opposed to
> Bitcoin Core, which generates ephemeral services), which I expect to make
> the transition trickier.
> C-lightning is deprecating the configuration of Tor v2 services starting
> next release, according to our deprecation policy we should be able to
> entirely drop its support 3 releases after this one, which should be not so
> far from the October deadline.
>
> Opinions? What is the state of other implementations with regard to Tor v2
> support?
>
> Antoine


Re: [Lightning-dev] #zerobasefee

2021-08-16 Thread Olaoluwa Osuntokun
Matt wrote:
> I'm frankly still very confused why we're having these conversations now

1000% this!!

This entire conversation strikes me as extremely premature and backwards
tbh.  Someone experimenting with a new approach shouldn't prompt us to
immediately modify the protocol to better "fit" the approach, particularly
before any sort of comparative analysis has even been published. At this
point, to my knowledge we don't even have an independent implementation of
the algorithm that has been tightly integrated into an existing LN
implementation. We don't know in which conditions the algorithm excels, and
in which conditions this improvement is maybe negligible (likely when payAmt
<< chanCapacity).

I think part of the difficulty here lies in the current lack of a robust
framework to use in comparing the efficacy of different approaches. Even in
this domain, there're a number of end traits to optimize for, including: path
finding length, total CLTV delay across all shards, the number of resulting
splits (the goal should be to consume less commitment space), attempt iteration
latency, amount/path randomization, path finding memory, etc, etc.

This also isn't the first time someone has attempted to adapt typical
flow-based algorithms to path finding in the LN setting. T-bast from ACINQ
initially attempted to adapt a greedy flow-based algorithm [1], but found
that a number of implementation related edge cases (he cites the min+max
constraints, in addition to the normal fee limit most implementations use, as
barriers to adapting the algorithm) led him to go with a simpler approach
to then iterate off of. I'd be curious to hear from T-bast w.r.t how this
new approach differs from his initial approach, and if he spots any
yet-to-be-recognized implementation level complexities to properly
integrating flow based algorithms into path finding.

> a) to my knowledge, no one has (yet) done any follow-on work to
> investigate pulling many of the same heuristics Rene et al use in a
> Dijkstras/A* algorithm with multiple passes or generating multiple routes
> in the same pass to see whether you can emulate the results in a faster
> algorithm without the drawbacks here,

lnd's current approach (very far from perfect, derived via satisficing)
has some similarities to the flow-based approach in its use of probabilities
to attempt to quantify the level of uncertainty of internal network channel
balances.

We start by assuming a config level a priori probability of any given route
working, we then take that, and the fee to route across a given link and
convert the two values into a scalar "distance/weight" (mapping to an
expected cost) we can plug into vanilla Dijkstras [2]. A fresh node uses
this value to compare routes instead of the typical hop count distance
metric. With a cold cache this doesn't really do much, but then we'll update
all the a priori probabilities with observations we gain in the wild.

If a node is able to forward an HTLC to the next hop, we boost their
probability (conditional on the amount forwarded/failed, so there's a bayesian
aspect). Each failure results in the probabilities of nodes being affected
differently (temp chan failure vs no next node, etc). For example, if we're
able to route through the first 3 hops of the route, but the final hop fails
with a temp chan failure, we'll reward all the successful nodes with a success
probability (the default right now is 95%) that applies when the amount being
carried is < that of the prior attempt.

As we assume balances are always changing, we then apply a half-life decay
that slowly brings a penalized probability back up to the baseline. The
resulting algorithm starts with no information/memory, but then gains
information with each attempt (increasing and decreasing probabilities as a
function of the amount attempted and the time that has passed since the last
attempt). The APIs also let you mark certain nodes as having a higher
a priori probability, which can reduce the amount of bogus path exploration.
This API can be used "at scale" to create a sort of active learning system
that learns from the attempts of a fleet of nodes, wallets, trampoline
nodes, etc (some privacy caveats apply, though there're ways to fuzz things
a bit, differential-privacy style).
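
To sketch the shape of this (the constants, names, and exact cost function
here are illustrative only, not lnd's actual ones):

```
package main

import (
	"fmt"
	"math"
	"time"
)

const (
	aprioriProb     = 0.6       // assumed baseline success probability
	penaltyHalfLife = time.Hour // assumed decay half-life after a failure
)

// edgeWeight folds the routing fee and the estimated success probability into
// a single scalar "distance" that vanilla Dijkstra can minimize: a lower
// success probability inflates the effective cost of an edge.
func edgeWeight(feeMsat int64, successProb float64) float64 {
	return float64(feeMsat) / successProb
}

// decayedProb drifts a penalized probability back toward the a priori
// baseline as time passes since the last observed failure on that hop.
func decayedProb(penalized float64, sinceFailure time.Duration) float64 {
	decay := math.Pow(2, -sinceFailure.Hours()/penaltyHalfLife.Hours())
	return aprioriProb - (aprioriProb-penalized)*decay
}

func main() {
	// Right after a failure the hop looks very expensive to explore...
	fmt.Println(edgeWeight(1000, decayedProb(0.05, 0)))
	// ...but a few half-lives later it converges back toward the baseline.
	fmt.Println(edgeWeight(1000, decayedProb(0.05, 4*time.Hour)))
}
```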

Other knobs exist such as the min probability setting, which controls how
high a success probability a candidate edge needs to have before it is
explored. If the algo is exploring too many lackluster paths (and there're a
lot of these on mainnet due to normal balance imbalance), this value can be
increased, which will let it shed a large number of edges to be explored.
When comparing this to the discussed approach that doesn't use any prior
memory, there may be certain cases that allow this algorithm to "jump"
straight to a working route and skip all the pre-processing and optimization
that may result in the same route, just with added latency before the initial
attempt. I'm skipping some other details like how we handle repeated
failures 

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Olaoluwa Osuntokun
ferent concerns.
>
> bLIPs/SPARKS/BIPs clearly address the second point, which is good.
> But they don't address the first point at all, they instead work around it.
> To be fair, I don't think we can completely address that first point:
> properly reviewing spec proposals takes a lot of effort and accepting
> complex changes to the BOLTs shouldn't be done lightly.
>
> I am mostly in favor of this solution, but I want to highlight that it
> isn't only rainbows and unicorns: it will add fragmentation to the network,
> it will add maintenance costs and backwards-compatibility issues, many
> bLIPs will be sub-optimal solutions to the problem they try to solve and
> some bLIPs will be simply insecure and may put users' funds at risk (L2
> protocols are hard and have subtle issues that can be easily missed). On
> the other hand, it allows for real world experimentation and iteration,
> and it's easier to amend a bLIP than the BOLTs.
>
> On the nuts-and-bolts (see the pun?) side, bLIPs cannot embrace a fully
> bazaar style of evolution. Most of them will need:
>
> - to assign feature bit(s)
> - to insert new tlv fields in existing messages
> - to create new messages
>
> We can't have collisions on any of these three things. bLIP XXX cannot use
> the same tlv types as bLIP YYY otherwise we're creating network
> incompatibilities. So they really need to be centralized, and we need a
> process to assign these and ensure they don't collide. It's not a hard
> problem, but we need to be clear about the process around those.
>
> Regarding the details of where they live, I don't have a strong opinion,
> but I think they must be easy to find and browse, and I think it's easier
> for readers if they're inside the spec repository. We already have PRs that
> use a dedicated "proposals" folder (e.g. [1], [2]).
>
> Cheers,
> Bastien
>
> [1] https://github.com/lightningnetwork/lightning-rfc/pull/829
> [2] https://github.com/lightningnetwork/lightning-rfc/pull/854
>
> On Thu, Jul 1, 2021 at 02:31, Ariel Luaces wrote:
>
>> BIPs are already the Bazaar style of evolution that simultaneously
>> allows flexibility and coordination/interoperability (since anyone can
>> create a BIP and they create an environment of discussion).
>>
>> BOLTs are essentially one big BIP in the sense that they started as a
>> place for discussion but are now more rigid. BOLTs must be followed
>> strictly to ensure a node is interoperable with the network. And BOLTs
>> should be rigid, as rigid as any widely used BIP like 32 for example.
>> Even though BOLTs were flexible when being drafted their purpose has
>> changed from descriptive to prescriptive.
>> Any alternatives, or optional features should be extracted out of
>> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
>> required flags set, messages sent, or alterations made to signal that
>> the BIP's feature is enabled.
>>
>> A BOLT may at some point organically change to reference a BIP. For
>> example if a BIP was drafted as an optional feature but then becomes
>> more widespread and then turns out to be crucial for the proper
>> operation of the network then a BOLT can be changed to just reference
>> the BIP as mandatory. There isn't anything wrong with this.
>>
>> All of the above would work exactly the same if there was a bLIP
>> repository instead. I don't see the value in having both bLIPs and
>> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
>> not restricted to exclude lightning, and never have been.
>>
>> I believe the reason this move to BIPs hasn't happened organically is
>> because many still perceive the BOLTs available for editing, so
>> changes continue to be made. If instead BOLTs were perceived as more
>> "consensus critical", not subject to change, and more people were
>> strongly encouraged to write specs for new lightning features
>> elsewhere (like the BIP repo) then you would see this issue of growing
>> BOLTs resolved.
>>
>> Cheers
>> Ariel Lorenzo-Luaces
>>
>> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
>> wrote:
>> >
>> > > That being said I think all the points that are addressed in Ryan's
>> mail
>> > > could very well be formalized into BOLTs but maybe we just need to
>> rethink
>> > > the current process of the BOLTs to make it more accessible for new
>> ideas
>> > > to find their way into the BOLTs?
>> >
>> > I think part of what bLIPs are trying to solve he

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-07-01 Thread Olaoluwa Osuntokun
> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
create a
> BIP and they create an environment of discussion).

The answer to "why not BIPs" here applies to BOLTs as well, as bLIPs are
intended to effectively be nested under the BOLT umbrella (same repo, etc).
It's also the case that any document can be mirrored as a BIP; this has been
suggested before, but the BIP editors have decided not to do so.

bLIPs have a slightly different process than BIPs, as well as a different
set of editors/maintainers (more widely distributed). As we saw with the
Speedy Trial saga (fingers crossed), the sole (?) maintainer of the BIP
process was able to effectively stonewall the progression of an author
document, with no sound technical objection (they had a competing proposal
that could've been a distinct document). bLIPs sidestep shenanigans like this
by having the primary maintainers/editors be more widely distributed and
closer to the target domain (LN).

The other thing bLIPs do is do away with the whole "a human picks the number
of the document" and "don't assign your own number, you must wait" dance.
Borrowing from EIPs, the number of a document is simply the number of the PR
that proposes the document. This reduces friction, and eliminates a possible
bikeshedding vector.

-- Laolu


On Wed, Jun 30, 2021 at 5:31 PM Ariel Luaces  wrote:

> BIPs are already the Bazaar style of evolution that simultaneously
> allows flexibility and coordination/interoperability (since anyone can
> create a BIP and they create an environment of discussion).
>
> BOLTs are essentially one big BIP in the sense that they started as a
> place for discussion but are now more rigid. BOLTs must be followed
> strictly to ensure a node is interoperable with the network. And BOLTs
> should be rigid, as rigid as any widely used BIP like 32 for example.
> Even though BOLTs were flexible when being drafted their purpose has
> changed from descriptive to prescriptive.
> Any alternatives, or optional features should be extracted out of
> BOLTs, written as BIPs. The BIP should then reference the BOLT and the
> required flags set, messages sent, or alterations made to signal that
> the BIP's feature is enabled.
>
> A BOLT may at some point organically change to reference a BIP. For
> example if a BIP was drafted as an optional feature but then becomes
> more widespread and then turns out to be crucial for the proper
> operation of the network then a BOLT can be changed to just reference
> the BIP as mandatory. There isn't anything wrong with this.
>
> All of the above would work exactly the same if there was a bLIP
> repository instead. I don't see the value in having both bLIPs and
> BIPs since AFAICT they seem to be functionally equivalent and BIPs are
> not restricted to exclude lightning, and never have been.
>
> I believe the reason this move to BIPs hasn't happened organically is
> because many still perceive the BOLTs available for editing, so
> changes continue to be made. If instead BOLTs were perceived as more
> "consensus critical", not subject to change, and more people were
> strongly encouraged to write specs for new lightning features
> elsewhere (like the BIP repo) then you would see this issue of growing
> BOLTs resolved.
>
> Cheers
> Ariel Lorenzo-Luaces
>
> On Wed, Jun 30, 2021 at 1:16 PM Olaoluwa Osuntokun 
> wrote:
> >
> > > That being said I think all the points that are addressed in Ryan's
> > > mail could very well be formalized into BOLTs but maybe we just need to
> > > rethink the current process of the BOLTs to make it more accessible for
> > > new ideas to find their way into the BOLTs?
> >
> > I think part of what bLIPs are trying to solve here is promoting more
> > loosely coupled evolution of the network. I think the BOLTs do a good job
> > currently of specifying what _base_ functionality is required for a
> > routing node in a prescriptive manner (you must forward an HTLC like
> > this, etc). However there's a rather large gap in describing functionality
> > that has emerged over time due to progressive evolution, and aren't
> > absolutely necessary, but enhance node/wallet operation.
> >
> > Examples of  include things like: path finding heuristics (BOLTs just say
> > you should get from Alice to Bob, but provides no recommendations w.r.t
> > _how_ to do so), fee bumping heuristics, breach retribution handling,
> > channel management, rebalancing, custom records usage (like the podcast
> > index meta-data, messaging, etc), JIT channel opening, hosted channels,
> > randomized channel IDs, fee optimiz

Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-06-30 Thread Olaoluwa Osuntokun
> That being said I think all the points that are addressed in Ryan's mail
> could very well be formalized into BOLTs but maybe we just need to rethink
> the current process of the BOLTs to make it more accessible for new ideas
> to find their way into the BOLTs?

I think part of what bLIPs are trying to solve here is promoting more
loosely coupled evolution of the network. I think the BOLTs do a good job
currently of specifying what _base_ functionality is required for a routing
node in a prescriptive manner (you must forward an HTLC like this, etc).
However, there's a rather large gap in describing functionality that has
emerged over time through progressive evolution, that isn't absolutely
necessary, but that enhances node/wallet operation.

Examples of these include things like: path finding heuristics (BOLTs just
say you should get from Alice to Bob, but provide no recommendations w.r.t
_how_ to do so), fee bumping heuristics, breach retribution handling, channel
management, rebalancing, custom records usage (like the podcast index
meta-data, messaging, etc), JIT channel opening, hosted channels, randomized
channel IDs, fee optimization, initial channel bootstrapping, etc.

All these examples are effectively optional as they aren't required for base
node operation, but they've organically evolved over time as node
implementations and wallets seek to solve UX and operational problems for
their users. bLIPs can be a _descriptive_ (this is how things can be done)
home for these types of standards, while BOLTs can be reserved for
_prescriptive_ measures (an HTLC looks like this, etc).

The protocol as implemented today has a number of extensions (TLVs, message
types, feature bits, etc) that allow implementations to spin out their own
sub-protocols, many of which won't be considered absolutely necessary for
node operation. IMO we should embrace more of a "bazaar" style of evolution,
and acknowledge that loosely coupled evolution allows participants to more
broadly explore the design space, without the constraints of "it isn't a
thing until N of us start to do it".

Historically, BOLTs have also had a rather monolithic structure. We've used
the same 11 or so documents for the past few years with the size of the
documents swelling over time with new exceptions, features, requirements,
etc. If you were hired to work on a new codebase and saw that everything is
defined in 11 "functions" that have been growing linearly over time, you'd
probably declare the codebase as being unmaintainable. By having distinct
documents for proposals/standards, bLIPs (author documents really), each new
standard/proposal is able to be more effectively explained, motivated,
versioned, etc.

-- Laolu


On Wed, Jun 30, 2021 at 7:35 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> just for reference when I was new here (and did not understand the
> processes well enough) I proposed a similar idea (called LIP) in 2018 c.f.:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-July/001367.html
>
>
> I wonder what exactly has changed in the reasoning by roasbeef which I
> will repeat here:
>
> > We already have the equiv of improvement proposals: BOLTs. Historically
> > new standardization documents are proposed initially as issues or PR's when
> > ultimately accepted. Why do we need another repo?
>
> As far as I can tell there was always some form of (invisible?) barrier to
> participate in the BOLTs but there are also new BOLTs being offered:
> * BOLT 12: https://github.com/lightningnetwork/lightning-rfc/pull/798
> * BOLT 14: https://github.com/lightningnetwork/lightning-rfc/pull/780
> and topics to be included like:
> * dual funding
> * splicing
> * the examples given by Ryan
>
> I don't see how a new repo would reduce that barrier - actually I think it
> would even create more confusion as I for example would not know where
> something belongs. That being said I think all the points that are
> addressed in Ryan's mail could very well be formalized into BOLTs but maybe
> we just need to rethink the current process of the BOLTs to make it more
> accessible for new ideas to find their way into the BOLTs? One thing that I
> can say from answering lightning-network questions on stackexchange is that
> it would certainly help if the BOLTs were referenced on the
> lightning.network web page and in the whitepaper as the place to be if one
> wants to learn about the Lightning Network.
>
> with kind regards Rene
>
> On Wed, Jun 30, 2021 at 4:10 PM Ryan Gentry via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Hi all,
>>
>> The recent thread around zero-conf channels [1] provides an opportunity
>> to discuss how the BOLT process handles features and best practices that
>> arise in the wild vs. originating within the process itself. Zero-conf
>> channels are one of many LN innovations on the app layer that have
>> struggled to make their way into the 

Re: [Lightning-dev] Hold fee rates as DoS protection (channel spamming and jamming)

2021-02-22 Thread Olaoluwa Osuntokun
> I think the problem of accidental channel closure is getting ignored by
> devs.
>
> If we think any anti-DoS fee will be "insignificant" compared to the cost
> of closing and reopening a channel, maybe dev attention should be on
> fixing accidental channel closure costs than any anti-DoS fee mechanism.
>
> Any deterrence of the channel jamming problem is economic so if the
> anti-DoS fee is tiny, then its deterrence will be tiny as well.

This struck me as an extremely salient point. One thing that has been
noticeably missing from these discussions is any sort of threat model or
attacker profile. Given this is primarily a griefing attack, and the attacker
doesn't stand to gain anything directly, how high a fee is considered
"adequate" deterrence without also dramatically increasing the cost of node
operation in the average case?

If an attacker has say a budget of 20 BTC to blow as they just want to see
the world burn, then most parametrizations of attempt fees are likely
insufficient. In addition, if the HTLC attempt/hold fees rise well above
routing fees, then costs are also increased for senders in addition to
routing nodes.

Also IMO, it's important to re-state that if channels are parametrized
properly (dust values, max/min HTLC, private channels, micropayment specific
channels, etc), then there is already an inherent cost: the opportunity
cost of committing funds in channels and the chain fee cost of making the
series of channels in the first place.

Based on the discussion above, it appears that the decaying fee idea needs
closer examination to ensure it doesn't increase the day to day operational
cost of a routing node in order to defend against threats at the edges.
Nodes go down all the time for various reasons: need to allocate more disk,
software upgrade, infrastructure migrations, power outages, etc, etc. By
adding a steady decay cost, we introduce an idle penalty for lack of uptime
when holding an HTLC, similar to the availability slashing in PoS systems.
It would be unfortunate if an end result of such a solution is increasing
node operation costs as a whole, (which has other trickle down effects: less
nodes, higher routing fees, strain of dev-ops teams to ensure higher uptime
or loss of funds, etc), while having negligible effects on the "success"
profile of such an attack in practice.

If nodes wish to be compensated for committing capital to Lightning itself,
then markets such as Lightning Pool which rewards them for allocating the
capital (independent of use) for a period of time can help them scratch that
itch.

Returning back to the original point, it may very well be the case that the
very first solution proposed (circa 2015) to this issue: close out the
channel and send back a proof of closure, may in fact be more desirable from
the PoV of enforcing tangible costs given it requires the attacker to
forfeit on-chain fees in the case of an unsuccessful attack. Services that
require long lived HTLCs (HTLC mailboxes, etc) can flag the HTLCs as such in
the onion payload allowing nodes to preferentially forward or reject them.

Zooming out, I have a new idea in this domain that attempts to tackle things
from a different angle. Assuming that any efforts to add further off-chain
costs are insignificant in the face of an attacker with few constraints
w.r.t budget, perhaps some efforts should be focused on instead ensuring
that if there's "turbulence" in the network, it can gracefully degrade to a
slightly more restricted operating mode until the storm passes. If an
attacker spends coins/time/utxos, etc to be in position to disrupt things,
but then finds that things are working as normal, such a solution may serve
as a low cost deterrence mechanism that won't tangibly increase
operation/forwarding/payment costs within the network. Working out some of
the kinks re the idea, but I hope to publish it sometime over the next few
days.

-- Laolu


On Fri, Feb 12, 2021 at 8:24 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Joost,
>
> > > Not quite up-to-speed back into this, but, I believe an issue with
> using feerates rather than fixed fees is "what happens if a channel is
> forced onchain"?
> > >
> > > Suppose after C offers the HTLC to D, the C-D channel, for any reason,
> is forced onchain, and the blockchain is bloated and the transaction
> remains floating in mempools until very close to the timeout of C-D.
> > > C is now liable for a large time the payment is held, and because the
> C-D channel was dropped onchain, presumably any parameters of the HTLC
> (including penalties D owes to C) have gotten fixed at the time the channel
> was dropped onchain.
> >
> > > The simplicity of the fixed fee is that it bounds the amount of risk
> that C has in case its outgoing channel is dropped onchain.
> >
> > The risk is bound in both cases. If you want you can cap the variable
> fee at a level that isn't considered risky, but it will then not fully
> cover 

Re: [Lightning-dev] Lightning Pool: A Non-Custodial Channel Lease Marketplace

2020-11-05 Thread Olaoluwa Osuntokun
Hi Z,

Thanks for such kind words!

> Is there a documentation for the client/server intercommunications
> protocol?

Long form documentation on the client/server protocol hasn't yet been
written. However, just like Loop, the Pool client uses a fully-featured gRPC
protocol to communicate with the server. The set of protobufs describing the
current client <-> server protocol can be found here [1].

> How stable is this protocol?

I'd say it isn't yet to be considered "stable". We've branded the current
release as an "alpha" release, as we want to leave open the possibility of
breaking changes in the API itself (in addition to the usual disclaimers),
though it's also possible to use proper upgrade mechanisms to never really
_have_ to break the current protocol as is.

> A random, possibly-dumb idea is that a leased channel should charge 0
> fees initially. Enforcing that is a problem, however, since channel updates
> are unilateral, and of course the lessee cannot afford to close the channel
> it leased in case the lessor sets a nonzero feerate ahead of time.

Agreed that the purchaser of a lease should be able to also receive a fee
rate guarantee along with the channel lifetime enforcement. As you point
out, in order to be able to express something like this, the protocol may
need to be extended to allow nodes to advertise certain pair-wise channel
updates that are only valid if _both_ sides sign off on each other's
advertisements, similar to the initial announcement signatures message.
Onlookers in the network would possibly be able to recognize these new
modified channel update requirements by interpreting the bits in the
channel announcement itself, which requires both sides cooperating to
produce. It's also possible to dictate in the order of the channel lease
itself that the channel be unadvertised, though I know how you feel about
unadvertised channels :).

In the context of Lightning Pool itself, the employed node rating system can
be used to protect lease buyers from nodes that ramp up their fees after
selling a lease, using a punitive mechanism. From the PoV of the incentives
though, they should find the "smoothed" out revenue attractive enough to set
reasonable fees within sold channel leases.

One other thing that the purchaser of a lease needs to consider is effective
utilization of the leased capital. As an example, they should ensure they're
able to fully utilize the purchased bandwidth by using "htlc acceptor" type
hooks to disallow forwards through the channel (as those could be used to
rebalance away the funds), clamping down on "lease leak".
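
As a rough sketch of the decision logic such a hook could apply (the types
and the policy here are assumptions for illustration, not an actual lnd API):

```
package leaseguard

// htlcInfo is a stand-in for whatever an "htlc acceptor" hook hands us about
// an HTLC that's about to be added to one of our channels.
type htlcInfo struct {
	IncomingChanID uint64
	OutgoingChanID uint64 // zero if we're the final recipient
	AmtMsat        int64
}

// acceptHTLC only lets the leased channel be used for payments that terminate
// at our node: any forward that would route through the leased channel is
// rejected, clamping down on "lease leak" via third-party rebalancing.
func acceptHTLC(leasedChanID uint64, h htlcInfo) bool {
	weAreFinalHop := h.OutgoingChanID == 0
	usesLeasedChan := h.IncomingChanID == leasedChanID ||
		h.OutgoingChanID == leasedChanID

	return !usesLeasedChan || weAreFinalHop
}
```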

I plan to significantly extend the current "security analysis" section to
cover these aspects as well as some other considerations w.r.t the
interaction of Lifted UTXOs timeouts and batch confirmation/proposal in the
context of Shadowchains. There'll also eventually be a more fleshed out
implementation section once we ship some features like adding additional
duration buckets. The git repo of the LaTeX itself (which is embedded in the
rendered PDF) can be found here [2].

> Secondarily to the Shadowchain discussion, it should be noted that if all
> onchain UTXOs were signed with n-of-n, there would not be a need for a
> fixed orchestrator; all n participants would cooperatively act as an
> orchestrator.

This is correct, and as you point out moving to an n-of-n structure between
all participants runs into a number of scalability/coordination/availability
issues. The existence of the orchestrator also serves to reduce the
availability requirements of the participants, as they only need to be online
to accept/validate a shadowchain block that contains any of their lifted
UTXOs. With the addition of a merkle-tree/MMR/SMT over all chain state that's
committed to in each block (say P2CH-style within the orchestrator's
output), an offline participant would still be able to "fully validate" all
operations that happened while they were away. This structure could also be
used to allow _new_ participants to audit the past history of the chain as
well, and can also be used to _authenticate_ lease rate data in the context
of CLM/Pool (so an authenticated+verifiable price feed of sorts).

In the context of the Pool shadowchain, the existence of the orchestrator
allows the participants to make other tradeoffs given its slightly elevated
signing position. Consider that it may be "safe" for participants to
instantly (zero conf chans) start using any channels created via a lease as
double spending the channel output itself requires coordination of _all_ the
participants as well as the orchestrator as all accounts are time lock
encumbered. Examining the dynamic more closely: as the auctioneer's account
in the context of Pool/CLM isn't encumbered, then they'd be the only one
able to spend their output unilaterally. However, they have an incentive to
not do so as they'd forfeit any paid execution fees in the chain. If we want
to strengthen the incentives to make "safe zero 

[Lightning-dev] Lightning Pool: A Non-Custodial Channel Lease Marketplace

2020-11-02 Thread Olaoluwa Osuntokun
Hi y'all,

We've recently released a new system which may be of interest to this list,
Lightning Pool [1]. Alongside a working client [2], we've also released a
white paper [3] which goes deeper into the architecture of the system.

Pool builds on some earlier ideas that were tossed around the ML concerning
creating a market for dual-funded channels for the network (though the
concept itself pre-dates those posts). Rather than target dual-funded
channels, we focus on the current uni-directional channels, and allow users
to buy+sell what we call a "channel lease" that packages up inbound (and also
potentially outbound via side-car channels!) liquidity, paying out a premium
for a fixed duration.

Live testnet+mainnet markets were also released today, giving routing nodes
a new stable revenue source, and giving those that need inbound liquidity to
bootstrap their new Lightning service a new automated way to do so.

This is just our first alpha release, which contains some
limits/simplifications in the system itself. We plan to continue to iterate
on the system to implement new things like streaming interest payments, and
the version of side-car channels (buy a channel for a 3rd party) described in
the paper, amongst many other things.

[1]: https://lightning.engineering/posts/2020-11-02-pool-deep-dive/
[2]: https://github.com/lightninglabs/pool
[3]: https://lightning.engineering/lightning-pool-whitepaper.pdf


Re: [Lightning-dev] Making (some) channel limits dynamic

2020-10-12 Thread Olaoluwa Osuntokun
> I suggest adding tlv records in `commitment_signed` to tell our channel
> peer that we're changing the values of these fields.

I think this fits in nicely with the "parameter re-negotiation" portion of
my
loose Dynamic commitments proposal. Note that in that paradigm, something
like this would be a distinct message, and also only be allowed with a
"clean commitment" (as otherwise what if I reduce the number of slots to a
value that is lower than the number of active slots?). With this, both sides
would be able to propose/accept/deny updates to the flow control parameters
that can be used to either increase the security of a channel, or implement
a sort of "slow start" protocol for any new peers that connect to you.

Similar to congestion window expansion/contraction in TCP, when a new peer
connects to you, you likely don't want to allow them to be able to consume
all the newly allocated bandwidth in an outgoing direction. Instead, you may
want to only allow them to utilize say 10% of the available HTLC bandwidth,
slowly increasing based on successful payments, and drastically
(multiplicatively) decreasing when you encounter very long lived HTLCs, or
an excessive number of failures.
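
A toy sketch of that slow-start idea (the 10% starting allowance, the
additive increase, and the halving are all assumptions for illustration, not
a spec'd mechanism):

```
package main

import "fmt"

// bandwidthLimiter tracks how much of a channel's negotiated HTLC bandwidth a
// freshly connected peer is currently allowed to consume, TCP-slow-start
// style: additive increase on success, multiplicative decrease on trouble.
type bandwidthLimiter struct {
	maxInFlightMsat int64 // value negotiated at channel open
	allowedMsat     int64 // portion currently granted to this peer
}

func newBandwidthLimiter(maxInFlight int64) *bandwidthLimiter {
	// Start a new peer off at 10% of the negotiated bandwidth.
	return &bandwidthLimiter{
		maxInFlightMsat: maxInFlight,
		allowedMsat:     maxInFlight / 10,
	}
}

// onSuccess slowly grows the allowance after a cleanly settled forward.
func (b *bandwidthLimiter) onSuccess(amtMsat int64) {
	b.allowedMsat += amtMsat / 4
	if b.allowedMsat > b.maxInFlightMsat {
		b.allowedMsat = b.maxInFlightMsat
	}
}

// onMisbehavior cuts the allowance in half after a long-lived held HTLC or an
// excessive run of failures.
func (b *bandwidthLimiter) onMisbehavior() {
	b.allowedMsat /= 2
}

func main() {
	l := newBandwidthLimiter(1_000_000_000) // 1,000,000 sats max in-flight
	fmt.Println("initial allowance:", l.allowedMsat)
	l.onSuccess(200_000_000)
	fmt.Println("after a settled forward:", l.allowedMsat)
	l.onMisbehavior()
	fmt.Println("after a held HTLC:", l.allowedMsat)
}
```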

A dynamic HTLC bandwidth allocation mechanism would serve to mitigate
several classes of attacks (supplementing any mitigations by "channel
acceptor" hooks), and also give forwarding nodes more _control_ of exactly
how their allocated bandwidth is utilized by all connected peers.  This is
possible to some degree today (by using an implicit value lower than
the negotiated values), but the implicit route doesn't give the other party
any information, and may end up in weird re-send loops (as they _why_ an
HTLC was rejected) wasn't communicated. Also if you end up in a half-sign
state, since we don't have any sort of "unadd", then the channel may end up
borked if the violating party keeps retransmitting the same update upon
reconnection.

> Are there other fields you think would need to become dynamic as well?

One other value that IMO should be dynamic to protect against future
unexpected events is the dust limit. "It Is Known", that this value "doesn't
really change", but we should be able to upgrade _all_ channels on the fly
if it does for w/e reason.

-- Laolu


Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-12 Thread Olaoluwa Osuntokun
> It seems to me that the "funder pays all the commit tx fees" rule exists
> solely for simplicity (which was totally reasonable).

At this stage, I've learned that simplicity (when doing anything that
involves multi-party on-chain fee negotiation/verification/enforcement) can
really go a long way. Just think about all the edge cases w.r.t _allocating
enough funds to pay for fees_ we've discovered over the past few years in
the state machine. I fear adding a more elaborate fee splitting mechanism
would only blow up the number of obscure edge cases that may lead to a
channel temporarily or permanently being "borked".

If we're going to add a "fairer" way of splitting fees, we'll really need to
dig down pre-deployment to ensure that we've explored any resulting edge
cases within our solution space, as we'll only be _adding_ complexity to fee
splitting.

IMO, anchor commitments in their "final form" (fixed fee rate on the
commitment transaction, only "emergency" use of update_fee) significantly
simplify things, as they shift from "funder pays fees" to
"broadcaster/confirmer pays fees". However, as you note, this doesn't fully
distribute the worst-case
cost of needing to go to chain with a "fully loaded" commitment transaction.
Even with HTLCs, they could only be signed at 1 sat/byte from the funder's
perspective, once again putting the burden on the broadcaster/confirmer to
make up the difference.

-- Laolu


On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> It seems to me that the "funder pays all the commit tx fees" rule exists
> solely for simplicity
> (which was totally reasonable). I haven't been able to find much
> discussion about this decision
> on the mailing list nor in the spec commits.
>
> At first glance, it's true that at the beginning of the channel lifetime,
> the funder should be
> responsible for the fee (it's his decision to open a channel after all).
> But as time goes by and
> both peers earn value from this channel, this rule becomes questionable.
> We've discovered since
> then that there is some risk associated with having pending HTLCs
> (flood-and-loot type of attacks,
> pinning, channel jamming, etc).
>
> I think that *in some cases*, fundees should be paying a portion of the
> commit-tx on-chain fees,
> otherwise we may end up with a web-of-trust network where channels would
> only exist between peers
> that trust each other, which is quite limiting (I'm hoping we can do
> better).
>
> Routing nodes may be at risk when they *receive* HTLCs. All the attacks
> that steal funds come from
> the fact that a routing node has paid downstream but cannot claim the
> upstream HTLCs (correct me
> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of
> the HTLCs they offer
> while they're pending in the commit-tx, regardless of whether they're
> funder or fundee.
>
> The simplest way to do this would be to deduct the HTLC cost (172 *
> feerate) from the offerer's main output (instead of the funder's main
> output, while keeping the base commit tx weight paid by the funder).
>
> A more extreme proposal would be to tie the *total* commit-tx fee to the
> channel usage:
>
> * if there are no pending HTLCs, the funder pays all the fee
> * if there are pending HTLCs, each node pays a proportion of the fee
> proportional to the number of
> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob
> pays 75% of the
> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is
> redistributed.
>
> This model uses the on-chain fee as collateral for usage of the channel.
> If Alice wants to forward
> HTLCs through this channel (because she has something to gain - routing
> fees), she should be taking
> on some of the associated risk, not Bob. Bob will be taking the same risk
> downstream if he chooses
> to forward.
>
> I believe it also forces the fundee to care about on-chain feerates, which
> is a healthy incentive.
> It may create a feedback loop between on-chain feerates and routing fees,
> which I believe is also
> a good long-term thing (but it's hard to predict as there may be negative
> side-effects as well).
>
> What do you all think? Is this a terrible idea? Is it okay-ish, but not
> worth the additional
> complexity? Is it an amazing idea worth a lightning nobel? Please don't
> take any of my claims
> for granted and challenge them, there may be negative side-effects I'm
> completely missing, this is
> a fragile game of incentives...
>
> Side-note: don't forget to take into account that the fees for HTLC
> transactions (second-level txs)
> are always paid by the party that broadcasts them (which makes sense). I
> still think this is not
> enough and can even be abused by fundees in some setups.
>
> Thanks,
> Bastien

Re: [Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-09-10 Thread Olaoluwa Osuntokun
Hi Antoine,

Great findings!

I think an even simpler mitigation is just for the non-initiator to _reject_
update_fee proposals that are "unreasonable". The non-initiator can run a
"fee leak calculation" to compute the worst-case leakage of fees in the
revocation case. This can be done today without any significant updates to
implementations, and some implementations may already be doing this.
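
As a rough illustration (this is not lnd's actual logic, and every constant
and parameter name below is a placeholder), such a receiver-side sanity check
could look something like:

    package feecheck

    // Approximate BOLT 3 weights: 724 WU for the base commitment plus 172 WU
    // per HTLC output (pre-anchor figures).
    const (
        commitWeight     = 724
        htlcOutputWeight = 172
    )

    // acceptUpdateFee bounds the proposed feerate by a multiple of our own
    // estimate, then bounds the worst-case fee burned on a fully loaded
    // commitment at that rate.
    func acceptUpdateFee(proposedSatPerKW, localEstimateSatPerKW,
        maxAcceptedHTLCs, maxFeeLeakSat int64) bool {

        if proposedSatPerKW > 3*localEstimateSatPerKW {
            return false
        }

        worstCaseWeight := commitWeight + maxAcceptedHTLCs*htlcOutputWeight
        worstCaseFee := proposedSatPerKW * worstCaseWeight / 1000

        return worstCaseFee <= maxFeeLeakSat
    }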

One issue is that we don't have a way to do a "soft reject" of an update_fee
as is. However, depending on the implementations, it may be possible to just
reconnect and issue a co-op close if there're no HTLCs on the commitment
transaction.

As you mentioned by setting proper values for max allowed htlcs, max in
flight, reserve, etc, nodes are able to quantify this fee leak risk ahead of
time, and set reasonable parameters based on their security model. One issue
is that these values are set in stone right now when the channel is opened, but
future iterations of dynamic commitments may allow us to update them on the
fly.

In the mid-term, implementations can start to phase out usage of update_fee
by setting a minimal commitment fee when the channel is first opened, then
relying on CPFP to bump up the commitment and any HTLCs if needed. This
discovery might very well hasten the demise of update_fee in the protocol
altogether as well.  I don't think we need to depend entirely on a
theoretical package relay Bitcoin p2p upgrade assuming implementations are
willing to make an assumption that say 20 sat/byte or w/e has a good chance
of widespread propagation into mempools.

From the perspective of channel safety, and variations of attacks like
"flood & loot", imo it's absolutely critical that nodes are able to update
the fees on their second-level HTLC transactions. As this is where the real
danger lies: if nodes aren't able to get 2nd level HTLCs in the chain in
time, then the incoming HTLC will expire, creating a race condition
across both commitments which can potentially cascade.

In lnd today, anchors is still behind a build flag, but we plan to enable
it by default for our upcoming 0.12 release. The blockers on our end were to
add support for towers, and add basic deadline aware bumping, both of which
are currently on track. We'll now also look into setting clamps on the
receiver end to just not accept unreasonable values for the fee rate of a
commitment, as this ends up eating into the true HTLC values for both sides.

-- Laolu


On Thu, Sep 10, 2020 at 9:28 AM Antoine Riard 
wrote:

> Hi,
>
> In this post, I would like to expose a potential vulnerability introduced
> by the recent anchor output spec update related to the new usage of
> SIGHASH_SINGLE for HTLC transactions. This new malleability combined with
> the currently deployed mechanism of `update_fee` is likely harmful for
> funds safety.
>
> This has been previously shared with deployed implementations devs, as
> anchor channels are flagged as experimental it's better to discuss and
> solve this publicly. That said, if you're currently running experimental
> anchor channels with non-trusted parties on mainnet, you might prefer to
> close them.
>
> # SIGHASH_SINGLE and `update_fee` (skip it if you're familiar)
>
> First, let's get started by a quick reminder of the data set committed by
> signature digest algorithm of Segwit transactions (BIP 143):
> * nVersion
> * hashPrevouts
> * hashSequence
> * outpoint
> * scriptCode of the input
> * value of the output spent by this input
> * nSequence of the input
> * hashOutputs
> * nLocktime
> * sighash type of the signature
>
> Anchor output switched the sighash type from SIGHASH_ALL to SIGHASH_SINGLE
> | SIGHASH_ANYONECANPAY for HTLC signatures sent to your counterparty. Thus
> it can spend non-cooperatively its HTLC outputs on its commitment
> transactions. I.e when Alice broadcasts her commitment transaction, every
> Bob's signatures on Alice's HTLC-Success/Timeout transactions are now
> flagging the new sighash type.
>
> Thus `hashPrevouts`, `hashSequence` (ANYONECANPAY) and `hashOutputs`
> (SINGLE) aren't committed anymore. SINGLE only enforces commitment to the
> output scriptpubkey/amount at the same index as
> the spending input. Alice is free to attach additional inputs/outputs to
> her HTLC transaction. This change aims to let a single party bump the
> feerate of 2nd-stage HTLC transactions in case of mempool-congestion,
> without counterparty cooperation and thus make HTLC funds safer.
>
> The attached outputs are _not_ encumbered by a revokeable redeemscript for
> a potential punishment.
>
> That said, the anchor output spec didn't disable the current fee
> mechanism already covering HTLC transactions. Pre/post-anchor channels are
> negotiating a feerate through `update_fee` exchange, initiated by the
> channel funder. This `update_fee` can be rejected by the receiver if it's
> deemed unreasonable compared to your local fee estimator view, but as of
> today implementations are pretty 

Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
Hi Z,

> Probably arguably off-topic, but this post triggered me into thinking
> about an insane idea: offchain update from existing Poon-Dryja to newer
> Decker-Russell-Osuntokun ("eltoo") mechanism.

Ooo, yeh I don't see why this wouldn't be possible assuming at that point
no_input has been deployed...

However, switching between commitment types that have distinct commitment
invalidation mechanisms appears to make things a bit more complex. Consider
that since the earlier lifetime of my channel used _revocation_ based
invalidation, I'd need to be able to handle two types of invalid commitment
broadcasts: broadcast of a revoked commitment, and broadcast of a _replaced_
commitment.

As a result, implementations may want to limit the types of transitions to
only a commitment type with the same invalidation mechanism. On the other
hand, I don't think that additional complexity (being able to handle both
types
of contract violations) is insurmountable.

For those that wish to retain a revocation based commitment invalidation
model, they may instead opt to upgrade to something like this [1], which I
consider to be the current best successor to the OG Poon-Dryja revocation
mechanism (has some other cool traits too). The commitment format still
needs a sexy name though... "el tres"? ;)

> We can create an upgrade transaction that is a cut-through of a mutual
> close of the Poon-Dryja, and a funding open of a Decker-Russell-Osuntokun.

Splicing reborn!

> The channel retains its short-channel-id, which may be useful, since a
> provably-long-lived channel implies both channel participants have high
> reliability (else one or the other would have closed the channel at some
> point), and a pathfinding algorithm may bias towards such long-lived
> channels.

Indeed, I think some implementations (eclair?) factor in the age of the
channel they're attempting to traverse during path finding.

[1]: https://eprint.iacr.org/2020/476

-- Laolu

On Tue, Jul 21, 2020 at 7:50 AM ZmnSCPxj  wrote:

> Good morning Laolu, and list,
>
> Probably arguably off-topic, but this post triggered me into thinking
> about an insane idea: offchain update from existing Poon-Dryja to newer
> Decker-Russell-Osuntokun ("eltoo") mechanism.
>
> Due to the way `SIGHASH_ANYPREVOUT` will be deployed --- requires a new
> pubkey type and works only inside the Taproot construction --- we cannot
> seamlessly upgrade from a Poon-Dryja channel to a Decker-Russell-Osuntokun.
> The funding outpoint itself has to be changed.
>
> We can create an upgrade transaction that is a cut-through of a mutual
> close of the Poon-Dryja, and a funding open of a Decker-Russell-Osuntokun.
> This transaction spends the funding outpoint of an existing Poon-Dryja
> channel, and creates a Decker-Russell-Osuntokun funding outpoint.
>
> However, once such an upgrade transaction has been created and signed by
> both parties (after the necessary initial state is signed in the
> Decker-Russell-Osuntokun mechanism), nothing prevents the participants
> from, say, just keeping the upgrade transaction offchain as well.
>
> The participants can simply, after the upgrade transaction has been
> signed, revoke the latest Poon-Dryja state (which has been copied into the
> initial Decker-Russell-Osuntokun state).
> Then they can keep the upgrade transaction offchain, and treat the funding
> outpoint of the upgrade transaction as the "internal funding outpoint" for
> future Decker-Russell-Osuntokun updates.
>
> Now, of course, since the onchain funding outpoint remains a Poon-Dryja,
> it can still be spent using a revoked state.
> Thus, we do not gain anything much, since the entire HTLC history of the
> Poon-Dryja channel needs to be retained as protection against theft
> attempts.
>
> However:
>
> * Future HTLCs in the Decker-Russell-Osuntokun domain need not be recorded
> permanently, thus at least bounding the information liability of the
> upgraded channel.
> * The channel retains its short-channel-id, which may be useful, since a
> provably-long-lived channel implies both channel participants have high
> reliability (else one or the other would have closed the channel at some
> point), and a pathfinding algorithm may bias towards such long-lived
> channels.
>
> Of note, is that if the channel is later mutually closed, the upgrade
> transaction, being offchain, never need appear onchain, so this potentially
> saves blockchain space.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
alternative. I think it's better to explicitly signal that we
> want to pause the channel while we upgrade the commitment format (and stop
> accepting HTLCs while we're updating, like we do once we've exchanged the
> `shutdown` message). Otherwise the asynchronicity of the protocol is
> likely to
> create months (years?) of tracking unwanted force-closes because of races
> between `commit_sig`s with the new and old commitment format.
>
> Updating the commitment format should be a rare enough operation that we
> can
> afford to synchronize with a two-way `update_commitment_format` handshake,
> then
> temporarily freeze the channel.
>
> The tricky part will be how we handle "dangling" operations that were sent
> by
> the remote peer *after* we sent our `update_commitment_format` but *before*
> they received it. The simplest choice is probably to have the initiator
> just
> ignore these messages, and the non-initiator enqueue these un-acked
> messages
> and replay them after the commitment format update completes (or just drop
> them
> and cancel corresponding upstream HTLCs if needed).
>
> Regarding initiating the commitment format update, how do you see this
> happen?
> The funder activates a new feature on his (e.g. `option_anchor_outputs`),
> and
> broadcasts it in `init` and `node_announcement`, then waits until the
> remote
> also activates it in its `init` message and then reacts to this by
> triggering
> the update process?
>
> Thanks,
> Bastien
>
> Le mar. 21 juil. 2020 à 03:18, Olaoluwa Osuntokun  a
> écrit :
>
>> Hi y'all,
>>
>> In this post, I'd like to share an early version of an extension to the
>> spec
>> and channel state machine that would allow for on-the-fly commitment
>> _format/type_ changes. Notably, this would allow for us to _upgrade_
>> commitment types without any on-chain activity, executed in a
>> de-synchronized and distributed manner. The core realization this
>> proposal
>> is based on the fact that the funding output is the _only_ component of a
>> channel that's actually set in stone (requires an on-chain transaction to
>> modify).
>>
>>
>> # Motivation
>>
>> (you can skip this section if you already know why something like this is
>> important)
>>
>> First, some motivation. As y'all are likely aware, the current deployed
>> commitment format has changed once so far: to introduce the
>> `static_remote_key` variant which makes channels safer by sending the
>> funds
>> of the party that was force closed on to a plain pubkey w/o any extra
>> tweaks
>> or derivation. This makes channel recovery safer, as the party that may
>> have
>> lost data (or can't continue the channel), no longer needs to learn of a
>> secret value sent to them by the other party to be able to claim their
>> funds. However, as this new format was introduced sometime after the
>> initial
>> bootstrapping phase of the network, most channels in the wild today _are
>> not_ using this safer format.  Transitioning _all_ the existing channels
>> to
>> this new format as is, would require closing them _all_, generating tens
>> of
>> thousands of on-chain transactions (to close, then re-open), not to
>> mention
>> chain fees.
>>
>> With dynamic commitments, users will be able to upgrade their _existing_
>> channels to new safer types, without any new on-chain transactions!
>>
>> Anchor output based commitments represent another step forward in making
>> channels safer as they allow users/software to no longer have to predict
>> chain fees ahead of time, and also bump up the fee of a
>> commitment/2nd-level-htlc-transaction, which is extremely important when
>> it
>> comes to timely on-chain resolution of HTLC contracts. This upgrade
>> process
>> (as touched on below) can either be manually triggered, or automatically
>> triggered once the software updates and finds a new preferable default
>> commitment format is available.
>>
>> As many of us are aware, the addition of schnorr and taproot to the
>> Bitcoin
>> protocol dramatically increases the design space for channels as a whole.
>> It
>> may take some time to explore this design space, particularly as entirely
>> new channel/commitment formats [1] continue to be discovered. The roll out
>> of dynamic commitments allows us to defer the concrete design of the
>> future
>> commitment formats, yet still benefit from the immediate improvement that
>> comes with morphing the funding output to be a single-key (non-p2wsh,
>> though
>> the line starts

Re: [Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-21 Thread Olaoluwa Osuntokun
After getting some feedback from the Lightning Labs squad, we're thinking
that it may be better to make the initial switch over double-opt-in, similar
to the current `shutdown` message flow. So with this variant, we'd add two
new messages: `commit_switch` and `commit_switch_reply` (placeholder
names). We may want to retain the "initiator" only etiquette for simplicity,
but if we want to allow both sides to initiate then we'll need to handle
collisions (with a randomized back off possibly).

The `commit_switch` message would contain the new target `channel_type` and
the opaque TLV blob of the re-negotiation parameters. The
`commit_switch_reply` message would then give the receiver the ability to
_reject_ the switch (say it doesn't want to increase `max_allowed_htlcs`),
or accept it, and specify its own set of parameters. Similar to the
`shutdown` message, both parties can only proceed with the switch over _once
all HTLCs_ have been cleared. As a result, they should reject any HTLC
forwarding attempts through the target channel once they receive the initial
message. From there, they'd carry out the modified commitment dance outlined
in my prior mail.
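
To make the placeholder messages a bit more concrete, here's a rough sketch of
what they might carry (field names and types are purely illustrative; the real
encoding would live in the spec as TLV streams):

    package dyncommit

    type CommitSwitch struct {
        ChannelID   [32]byte
        ChannelType []byte // feature-bit vector naming the target commitment type
        Params      []byte // opaque TLV blob of re-negotiated channel parameters
    }

    type CommitSwitchReply struct {
        ChannelID [32]byte
        Accept    bool   // false = reject the switch outright
        Params    []byte // accepter's own parameters, if accepting
    }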

Thoughts?

-- Laolu


On Mon, Jul 20, 2020 at 6:18 PM Olaoluwa Osuntokun 
wrote:

> Hi y'all,
>
> In this post, I'd like to share an early version of an extension to the
> spec
> and channel state machine that would allow for on-the-fly commitment
> _format/type_ changes. Notably, this would allow for us to _upgrade_
> commitment types without any on-chain activity, executed in a
> de-synchronized and distributed manner. The core realization this proposal
> is based on the fact that the funding output is the _only_ component of a
> channel that's actually set in stone (requires an on-chain transaction to
> modify).
>
>
> # Motivation
>
> (you can skip this section if you already know why something like this is
> important)
>
> First, some motivation. As y'all are likely aware, the current deployed
> commitment format has changed once so far: to introduce the
> `static_remote_key` variant which makes channels safer by sending the funds
> of the party that was force closed on to a plain pubkey w/o any extra
> tweaks
> or derivation. This makes channel recovery safer, as the party that may
> have
> lost data (or can't continue the channel), no longer needs to learn of a
> secret value sent to them by the other party to be able to claim their
> funds. However, as this new format was introduced sometime after the
> initial
> bootstrapping phase of the network, most channels in the wild today _are
> not_ using this safer format.  Transitioning _all_ the existing channels to
> this new format as is, would require closing them _all_, generating tens of
> thousands of on-chain transactions (to close, then re-open), not to mention
> chain fees.
>
> With dynamic commitments, users will be able to upgrade their _existing_
> channels to new safer types, without any new on-chain transactions!
>
> Anchor output based commitments represent another step forward in making
> channels safer as they allow users/software to no longer have to predict
> chain fees ahead of time, and also bump up the fee of a
> commitment/2nd-level-htlc-transaction, which is extremely important when it
> comes to timely on-chain resolution of HTLC contracts. This upgrade process
> (as touched on below) can either be manually triggered, or automatically
> triggered once the software updates and finds a new preferable default
> commitment format is available.
>
> As many of us are aware, the addition of schnorr and taproot to the Bitcoin
> protocol dramatically increases the design space for channels as a whole.
> It
> may take some time to explore this design space, particularly as entirely
> new channel/commitment formats [1] continue to be discovered. The roll out
> of dynamic commitments allows us to defer the concrete design of the future
> commitment formats, yet still benefit from the immediate improvement that
> comes with morphing the funding output to be a single-key (non-p2wsh,
> though
> the line starts to blur w/ taproot) output. With this new funding output
> format in place, users/software will then be able to update to the latest
> and greatest commitment format that starts to utilize all the new tools
> available (scriptless script based htlcs, etc) at a later date.
>
> Finally, the ability to update the commitment format itself will also allow
> us to re-parametrize portions of the channels which are currently set in
> stone. As an example, right now the # of max allowed outstanding HTLCs is
> set in stone once the channel has opened. With the ability to also swap out
> commitment _parameters_, we can start to experiment with flow-control like
> ideas such as limiting

[Lightning-dev] Dynamic Commitments: Upgrading Channels Without On-Chain Transactions

2020-07-20 Thread Olaoluwa Osuntokun
Hi y'all,

In this post, I'd like to share an early version of an extension to the spec
and channel state machine that would allow for on-the-fly commitment
_format/type_ changes. Notably, this would allow for us to _upgrade_
commitment types without any on-chain activity, executed in a
de-synchronized and distributed manner. The core realization this proposal
is based on the fact that the funding output is the _only_ component of a
channel that's actually set in stone (requires an on-chain transaction to
modify).


# Motivation

(you can skip this section if you already know why something like this is
important)

First, some motivation. As y'all are likely aware, the current deployed
commitment format has changed once so far: to introduce the
`static_remote_key` variant which makes channels safer by sending the funds
of the party that was force closed on to a plain pubkey w/o any extra tweaks
or derivation. This makes channel recovery safer, as the party that may have
lost data (or can't continue the channel), no longer needs to learn of a
secret value sent to them by the other party to be able to claim their
funds. However, as this new format was introduced sometime after the initial
bootstrapping phase of the network, most channels in the wild today _are
not_ using this safer format.  Transitioning _all_ the existing channels to
this new format as is, would require closing them _all_, generating tens of
thousands of on-chain transactions (to close, then re-open), not to mention
chain fees.

With dynamic commitments, users will be able to upgrade their _existing_
channels to new safer types, without any new on-chain transactions!

Anchor output based commitments represent another step forward in making
channels safer as they allow users/software to no longer have to predict
chain fees ahead of time, and also bump up the fee of a
commitment/2nd-level-htlc-transaction, which is extremely important when it
comes to timely on-chain resolution of HTLC contracts. This upgrade process
(as touched on below) can either be manually triggered, or automatically
triggered once the software updates and finds a new preferable default
commitment format is available.

As many of us are aware, the addition of schnorr and taproot to the Bitcoin
protocol dramatically increases the design space for channels as a whole. It
may take some time to explore this design space, particularly as entirely
new channel/commitment formats [1] continue to be discovered. The roll out
of dynamic commitments allows us to defer the concrete design of the future
commitment formats, yet still benefit from the immediate improvement that
comes with morphing the funding output to be a single-key (non-p2wsh, though
the line starts to blur w/ taproot) output. With this new funding output
format in place, users/software will then be able to update to the latest
and greatest commitment format that starts to utilize all the new tools
available (scriptless script based htlcs, etc) at a later date.

Finally, the ability to update the commitment format itself will also allow
us to re-parametrize portions of the channels which are currently set in
stone. As an example, right now the # of max allowed outstanding HTLCs is
set in stone once the channel has opened. With the ability to also swap out
commitment _parameters_, we can start to experiment with flow-control like
ideas such as limiting a new channel peer to only a handful of HTLC slots,
which is then progressively increased based on "good behavior" (or the other
way around as well). Beyond just updating the channel parameters, it's also
possible to "change the rules" of a channel on the fly. An example of this
variant would be creating a new pseudo-type that implements a fee policy
other than "the initiator pays all fees".


# Protocol Changes

With the motivation/background set up, let's dig into some potential ways
the protocol can be modified to support this new meta-feature. As this
change is more of a meta-change, AFAICT, the amount of protocol changes
doesn't appear to be _too_ invasive ;). Most of the heavy lifting is done by
the wondrous TLV message field extensions.

## Explicit Channel Type Negotiation

Right now in the protocol, as new channel types are introduced (static key,
and now anchors) we add a new feature bit. If both nodes have the feature
bit set, then that new channel type is to be used. Notice how this is an
_implicit_ upgrade: there's no explicit signalling during the _funding_
process that a new channel type is to be used. This works OK, if there's one
major accepted "official" channel type, but not as new types are introduced
for specific use cases or applications. The implicit negotiation also makes
things a bit ambiguous at times. As an example, if both nodes have the
`static_remote_key` _and_ anchor outputs feature bit set, which channel type
should they open?
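
To illustrate the guesswork involved with the implicit approach (a sketch
only, not spec text, and the names are made up):

    package chantype

    type ChannelType int

    const (
        Legacy ChannelType = iota
        StaticRemoteKey
        AnchorOutputs
    )

    // implicitType picks a commitment type from the shared feature bits, with
    // the "newest" shared type winning. Nothing in the funding messages
    // themselves pins this choice down.
    func implicitType(localStatic, remoteStatic, localAnchors, remoteAnchors bool) ChannelType {
        switch {
        case localAnchors && remoteAnchors:
            return AnchorOutputs
        case localStatic && remoteStatic:
            return StaticRemoteKey
        default:
            return Legacy
        }
    }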

To resolve this existing ambiguity in the channel type negotiation, we'll
need to make the channel type used 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-21 Thread Olaoluwa Osuntokun
Hi Jeremy,

The up-front costs can be further mitigated even without something like CTV
(which makes things more efficient) by adding a layer of in-direction w.r.t
how
HTLCs are manifested within the commitment transactions. To do this, we add
a
new 2-of-2 multi-sig output (an HTLC indirect block) to the commitment
transactions. This is then spent by a new transaction (the HTLC block) that
actually manifests (creates the HTLC outputs) the HTLCs.

With this change, the cost to have a commitment be mined in the chain is now
_independent of the number of HTLCs in the channel_. In the past I've called
this construction "coupe commitments" (lol).

Other flavors of this technique are possible as well, allowing both sides to
craft varying HTLC indirection trees (double layers of indirection are
possible, etc) which may factor in traits like HTLC expiration time (HTLCs
that
expire later are further down in the tree).

Something like CTV does indeed make this technique more powerful+efficient
as
it allows one to succinctly commit to all the relevant desirable
combinations
of HTLC indirect blocks, and HTLC fan-out transactions.
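
For clarity, a structural sketch of the two-level layout described above (the
types and fields are illustrative only):

    package indirect

    // Commitment carries a single aggregate output in place of N HTLC
    // outputs, so its weight (and the cost of getting it mined) no longer
    // depends on the number of pending HTLCs.
    type Commitment struct {
        ToLocal      int64
        ToRemote     int64
        HTLCBlockOut int64 // one 2-of-2 output covering all pending HTLCs
    }

    // HTLCBlockTx spends that aggregate output, and only at this level do
    // the individual HTLC outputs appear.
    type HTLCBlockTx struct {
        HTLCOutputs []int64
    }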

-- Laolu


On Sat, Jun 20, 2020 at 4:14 PM Jeremy  wrote:

> I am not steeped enough in Lightning Protocol issues to get the full
> design space, but I'm fairly certain BIP-119 Congestion Control trees would
> help with this issue.
>
> You can bucket a tree by doing a histogram of HTLC size, so that all small
> HTLCs live in a common CTV subtree and don't interfere with higher value
> HTLCs. You can also play with sequencing to prevent those HTLCs from
> getting longchains in the mempool until they're above a certain value.
> --
> @JeremyRubin 
> 
>
>
> On Thu, Jun 18, 2020 at 1:41 AM Antoine Riard 
> wrote:
>
>> Hi Rene,
>>
>> Thanks for disclosing this vulnerability,
>>
>> I think this blackmail scenario holds but sadly there is a lower scenario.
>>
>> Both "Flood & Loot" and your blackmail attack rely on `update_fee`
>> mechanism and unbounded commitment transaction size inflation. The first
>> provokes block congestion, while yours locks down in-flight fees in a
>> funds-hostage situation.
>>
>> > 1. The current solution is to just not use up the max value of
>> htlc's. Eclair and c-lightning by default only use up to 30 htlcs.
>>
>> As of today, yes I would recommend capping commitment size both for
>> ensuring competitive propagation/block selection and limiting HTLC exposure.
>>
>> > 2. Probably the best fix (not sure if I understand the consequences
>> correctly) is coming from this PR to bitcoin core (c.f.
>> https://github.com/bitcoin/bitcoin/pull/15681 by @TheBlueMatt). If I get
>> it correctly with that we could always have low fees and ask the person who
>> want to claim their outputs to pay fees. This excludes overpayment and
>> could happen at a later stage when fees are not spiked. Still the victim
>> who offered the htlcs would have to spend those outputs at some time.
>>
>> It's a bit more complex, carve-out output, even combined with anchor
>> output support on the LN-side won't protect against different flavors of
>> pinning. I invite you to go through logs of past 2 LN dev meetings.
>>
>> > 3. Don't overpay fees in commitment transactions. We can't foresee the
>> future anyway
>>
>> Once 2. is well-addressed we may deprecate `update_fee`.
>>
>> > 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
>> value (like we do with sub dust amounts and sub satoshi amounts). This would
>> at least make the attack expensive as the attacker would have to bind a lot
>> of liquidity.
>>
>> Ideally we want dust_limit to be dynamic, dust cap should be based on
>> HTLC economic value, feerate of its output, feerate of HTLC-transaction,
>> feerate estimation of any CPFP to bump it. I think that's worth doing
>> once we've solved 3 and 4.
>>
>> > 5. Somehow be able to aggregate htlc's. In a world where we use payment
>> points instead of preimages we might be able to do so. It would be really
>> cool if separate HTLC's could be combined to 1 single output. I played
>> around a little bit but I have not come up with a scheme that is more
>> compact in all cases. Thus I just threw in the idea.
>>
>> Yes we may encode all HTLC in some Taproot tree in the future. There are
>> some wrinkles but for a high-level theoretical construction see my post on
>> CoinPool.
>>
>> > 6. Split onchain fees differently (now the attacker would also lose
>> fees by conducting this attack) - No I don't want to start yet another fee
>> bikeshadding debate. (In particular I believe that a different split of
>> fees might make the Flood & Loot attack economically more viable which
>> relies on the same principle)
>>
>> Likely a bit more of fee bikeshedding is something we have to do to make
>> LN secure... Switching fee from pre-committed ones to a single-party,
>> dynamic one.
>>
>> > Independently I think we should 

Re: [Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-21 Thread Olaoluwa Osuntokun
Hi Rene,

IMO this is mostly mitigated by anchor commitments.  The impact of this
attack is predicated on the "victim" paying 5x on-chain fees (for their
confirmation target) to sweep all their HTLCs.  Anchor commitments let the
initiator of the channel select a very low starting fee (just enough to get
into the mempool), and also let them actually bump the fees of second-level
HTLC transactions.

In addition to being able to pay much lower fees ("just enough" to get into
the chain), anchor commitments allow second-level HTLC _aggregation_, This
means that for HTLCs with the same expiry height, a peer is able to _batch_
them all into a single transaction, further saving on fees.
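
A minimal sketch of that aggregation step (illustrative types, not lnd's
actual sweeper code):

    package aggregate

    // htlc carries the two fields we need here: the expiry height (which
    // becomes the locktime of the HTLC-timeout transaction) and the amount.
    type htlc struct {
        Expiry uint32
        Amount int64
    }

    // groupByExpiry buckets HTLCs sharing an expiry so each bucket can be
    // swept with a single batched second-level transaction.
    func groupByExpiry(htlcs []htlc) map[uint32][]htlc {
        groups := make(map[uint32][]htlc)
        for _, h := range htlcs {
            groups[h.Expiry] = append(groups[h.Expiry], h)
        }
        return groups
    }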

lnd shipped with a form of anchor commitments in our past major release
(v0.10.0-beta). In that release the format is opt in, and is enabled with a
startup command-line flag. For 0.11, we're planning on making this the
default commitment type, giving all users that update the ability to
_finally_ have proper fee control of their commitments, and second-level
HTLC transactions.

> The direction of HTLCs are chosen so that the amount is taken from the
> `to_remote` output of the attacker (obviously on the victims side it will
> be the `to_local` output)

One relevant detail here is that if the attacker is to attempt this with
minimal setup, then they'll need to be the ones that open the channel.
Since they're the initiator, they'll actually be the ones paying the fees
rendering this attempt moot.

Alternatively, they could use something like Lightning Loop to gain the
_outbound_ bandwidth (Loop In) needed to attempt this attack (using inbound
opened channels), but they'll need to pay for that bandwidth, adding a
further cost to the attack. Not to mention that they'll need to pay on-chain
fees to sweep the HTLCs they created themselves. In short, this attack isn't
costless as they'll need to acquire outbound liquidity for an incoming
channel, and also need to pay fees independent of the "success" of their
attack.

> I quote from BOLT 02 which suggests a buffer of a factor of 5

I'm not sure how many implementations actually follow this in practice.
FWIW, lnd doesn't.

> Additionally the victim will also have to sweep all offered HTLCs (which
> will be additional costs but could be done once the fees came down) so we
> neglect them.

No, the attacker is the one that needs to sweep these HTLCs, since they
offered them. This adds to their costs.

> Knowing that this will happen and that the victim has to spend those funds
> (publishing old state obviously does not work!) the attacker has a time
> window to blackmail the victim outside of the lightning network protocol

I don't think this is always the case. Depending on the minimum HTLC
settings in the channel (another mitigation), and the distribution of funds
in the channel, it may be the case that the victim doesn't have any funds in
the channel at all (everything was on the attacker's side). In that case,
the "victim" doesn't really care if this channel is clogged up as they
really have no stake in this channel.

> Also you might say that an attacker needs many incoming channels to
> execute this attack. This can be achieved by gaming the autopilot.

As mentioned above, gaining purely incoming channels doesn't allow the
attacker to launch this attack, as they'll be unable to _send out_ from any
of those channels.

> 1. The current solution is to just not use up the max value of htlc's.
> Eclair and c-lightning by default only use up to 30 htlcs.

IMO, this isn't a solution. Lowering the max number of HTLCs in-flight just
makes it easier (lowers the capital costs) to jam a channel. The authors of
the paper you linked have another paper exploring these types of attacks
[1], and cite the _hard coded_ limit of 483 HTLCS as an enabling factor.

> 2. Probably the best fix (not sure if I understand the consequences
> correctly) is coming from this PR to bitcoin core

I think you're misinterpreting this PR, but see my first paragraph about
anchor commitments which that PR enables.

> 3. Don't overpay fees in commitment transactions. We can't foresee the
> future anyway

Anchors let you do this ;)

> 4. Don't add htlcs for which the on chain fee is higher than the HTLCs
> value (like we do with sub dust amounts and sub satoshi amounts).

This is already how "dust HTLCs" are calculated. The amount remaining from
the HTLC after it pays for its second-level transaction needs to be above
dust. This policy can be asymmetric across commitments in the channel.
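
For reference, a short sketch of that trimming rule in the BOLT 3 style (the
weights below are the pre-anchor estimates):

    package dust

    const (
        htlcTimeoutWeight = 663 // offered HTLC, spent via HTLC-timeout
        htlcSuccessWeight = 703 // received HTLC, spent via HTLC-success
    )

    // offeredHTLCIsDust: an offered HTLC is trimmed (no output appears on the
    // commitment) if what remains after paying for its second-level
    // transaction falls below the dust limit.
    func offeredHTLCIsDust(amountSat, feeratePerKW, dustLimitSat int64) bool {
        secondLevelFee := feeratePerKW * htlcTimeoutWeight / 1000
        return amountSat-secondLevelFee < dustLimitSat
    }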

> 5. Somehow be able to aggregate htlc's.

Anchors let you do this on the transaction level (MIMO 2nd level HTLC
transactions).

I hope other implementations join lnd in deploying anchor commitments to
mitigate nuisance attacks like this, and _finally_ give users better fee
control for channels and any off-chain contracts within those channels.

BTW, the "Flood & Loot" paper you linked mentions anchor commitments as a
solution towards the 

Re: [Lightning-dev] On the scalability issues of onboarding millions of LN mobile clients

2020-05-05 Thread Olaoluwa Osuntokun
Hi Antoine,

> Even with cheaper, more efficient protocols like BIP 157, you may have a
> huge discrepancy between what is asked and what is offered. Assuming 10M
> light clients [0] each of them consuming ~100MB/month for filters/headers,
> that means you're asking 1PB/month of traffic to the backbone network. If
> you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> signal BIP 157, that's an increase of 100GB/month for each. Which is
> consequent with regards to the estimated cost of 350GB/month for running
> an actual public node

One really dope thing about BIP 157+158, is that the protocol makes serving
light clients now _stateless_, since the full node doesn't need to perform
any unique work for a given client. As a result, the entire protocol could
be served over something like HTTP, taking advantage of all the established
CDNs and anycast serving infrastructure, which can reduce syncing time
(less latency to
fetch data) and also more widely distribute the load of light clients using
the existing web infrastructure. Going further, with HTTP/2's server-push
capabilities, those serving this data can still push out notifications for
new headers, etc.
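
As a toy example of how thin such a client could be (the URL scheme below is
entirely hypothetical):

    package lightclient

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetchFilter grabs the BIP 158 basic filter for a block from a static
    // HTTP/CDN endpoint keyed by block hash. No per-client state is needed on
    // the serving side, so any cache or CDN in the path can answer.
    func fetchFilter(baseURL, blockHash string) ([]byte, error) {
        resp, err := http.Get(fmt.Sprintf("%s/filters/basic/%s", baseURL, blockHash))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("unexpected status: %s", resp.Status)
        }
        return io.ReadAll(resp.Body)
    }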

> Therefore, you may want to introduce monetary compensation in exchange of
> servicing filters. Light client not dedicating resources to maintain the
> network but free-riding on it, you may use their micro-payment
> capabilities to price chain access resources [3]

Piggy backing off the above idea, if the data starts being widely served
over HTTP, then LSATs[1][2] can be used to add a lightweight payment
mechanism by inserting a new proxy server in front of the filter/header
infrastructure. The minted tokens themselves may allow a user to purchase
access to a single header/filter, a range of them in the past, or N headers
past the known chain tip, etc, etc.

-- Laolu

[1]: https://lsat.tech/
[2]: https://lightning.engineering/posts/2020-03-30-lsat/


On Tue, May 5, 2020 at 3:17 AM Antoine Riard 
wrote:

> Hi,
>
> (cross-posting as it's really both layers concerned)
>
> Ongoing advancement of BIP 157 implementation in Core maybe the
> opportunity to reflect on the future of light client protocols and use this
> knowledge to make better-informed decisions about what kind of
> infrastructure is needed to support mobile clients at large scale.
>
> Trust-minimization of Bitcoin security model has always relied first and
> foremost on running a full-node. This current paradigm may be shifted by LN
> where fast, affordable, confidential, censorship-resistant payment services
> may attract a lot of adoption without users running a full-node. Assuming a
> user adoption path where a full-node is required to benefit for LN may
> deprive a lot of users, especially those who are already denied a real
> financial infrastructure access. It doesn't mean we shouldn't foster node
> adoption when people are able to do so, and having a LN wallet maybe even a
> first-step to it.
>
> Designing a mobile-first LN experience opens its own gap of challenges
> especially in terms of security and privacy. The problem can be scoped as
> how to build a scalable, secure, private chain access backend for millions
> of LN clients ?
>
> Light client protocols for LN exist (either BIP157 or Electrum are used),
> although their privacy and security guarantees with regards to
> implementation on the client-side may still be an object of concern
> (aggressive tx-rebroadcast, sybillable outbound peer selection, trusted fee
> estimation). That said, one of the bottlenecks is likely the number of
> full-nodes being willingly to dedicate resources to serve those clients.
> It's not about _which_ protocol is deployed but more about _incentives_ for
> node operators to dedicate long-term resources to client they have lower
> reasons to care about otherwise.
>
> Even with cheaper, more efficient protocols like BIP 157, you may have a
> huge discrepancy between what is asked and what is offered. Assuming 10M
> light clients [0] each of them consuming ~100MB/month for filters/headers,
> that means you're asking 1PB/month of traffic to the backbone network. If
> you assume 10K public nodes, like today, assuming _all_ of them opt-in to
> signal BIP 157, that's an increase of 100GB/month for each. Which is
> consequent with regards to the estimated cost of 350GB/month for running an
> actual public node. Widening full-node adoption, especially in terms of
> geographic distribution, means doing as much as we can to bound its
> operational cost.
>
> Obviously,  deployment of more efficient tx-relay protocol like Erlay will
> free up some resources but it maybe wiser to dedicate them to increase
> health and security of the backbone network like deploying more outbound
> connections.
>
> Unless your light client protocol is so ridiculously cheap that it can rely on the
> niceness of a subset of node operators offering free resources, it won't
> scale. And it's likely you will always have a ratio 

Re: [Lightning-dev] An update on PTLCs

2020-04-23 Thread Olaoluwa Osuntokun
(this may be kind of off-topic, more about DLC deployment than PTLCs
themselves)

From my PoV, new technologies aren't what has held back DLC deployment to
this date since the paper was originally released. Tadge has had working
code that can be deployed today for some time now, and other parties like
DG-Lab have created full-fledged demos with the system working end to end.
Instead, the real impediment has been the bootstrapping of the oracles
which the scheme critically depends upon.

Without oracles, none of it really works. Although, it's also the case that
there're measures to prevent the oracles from equivocating (reporting two
conflicting prices/events for a particular instance), bootstrapping a new
oracle still requires a very high degree of trust as they can lie or report
incorrect data. As a result, actually deploying an oracle for a system like
this is tricky business, as it's a trusted centralized entity, so it will
run into all the normal meatspace/legal/operational risk that any trusted
centralized service would encounter.

Earlier today, Coinbase announced that they were releasing a new price
oracle for the ETH ecosystem [1]. This caught my attention as one can
imagine, that it would be even simpler for them to deploy a DLC oracle which
exports an API to obtain signed prices/events. As an existing large company
in the space (depending on who you talk to), they're a trusted entity, which
has earned a good reputation over the years (solving this
bootstrapping/trust issue). If they do eventually grow the service to also
encompass this use case, then it enables a number of possibilities, as
there's still a ton of value in just base DLC-specific channels (or one off
contracts), without all the fancy barrier escrow scriptless scripts swappy
swap swap stuff.

-- Laolu

[1]:
https://blog.coinbase.com/introducing-the-coinbase-price-oracle-6d1ee22c7068


On Thu, Apr 23, 2020 at 7:52 AM Nadav Kohen  wrote:

> Hi Laolu,
>
> Thanks for the response :)
>
> I agree that some more framing probably would have been good to have in my
> update.
>
> First, I want to clarify that my intention is not to implement a
> PTLC-based lightning network on top of ECDSA adaptor signatures, as I do
> believe that using Schnorr will be superior, but rather I wish to get some
> PoC sandbox with which to start implementing and testing out the long list
> of currently theoretical proposals surrounding PTLCs, most of which are
> implementation agnostic (to a degree anyway). I think it would be super
> beneficial to have more fleshed out with respect to what some challenges of
> a Payment Point LN are going to be than we understand now, before Schnorr
> is implemented and it is time to commit to some PTLC scheme for real.
>
> Second, I agree that I've probably understated somewhat the changes that
> will be needed in most implementations as I was mostly thinking about what
> would need to change in the BOLTs, which does actually seem relatively
> minimal (although as you mention, these minimal changes to the BOLTs do
> trigger large changes in many implementations). Also, good point on how
> BOLT 11 (invoicing) will have to be altered as well, must've slipped my
> mind.
>
> Best,
> Nadav
>
> On Wed, Apr 22, 2020 at 8:17 PM Olaoluwa Osuntokun 
> wrote:
>
>> Hi Nadav,
>>
>> Thanks for the updates! Super cool to see this concept continue to evolve
>> and integrate new technologies as they pop up.
>>
>> > I believe this would only require a few changes to existing nodes:
>>
>> Rather than a "few changes", this would to date be the largest
>> network-level
>> update undertaken to the Lightning Network thus far. In the past, we
>> rolled
>> out the new onion blob format (which enables changes like this), but none
>> of
>> the intermediate nodes actually need to modify their behavior. New payment
>> types like MPP+AMP only needed the _end points_ to update making this an
>> end-to-end update that has been rolled out so far in a de-synchronized
>> manner.
>>
>> Re-phrasing deploying this requires changes to: the core channel state
>> machine (the protocol we use to make commitment updates), HTLC scripts,
>> on-chain HTLC handling and resolution, path finding algorithms (to only
>> seek
>> out the new PTLC-enabled nodes), invoice changes and onion blob
>> processing.
>> I'd caution against underestimating how long all of this will take in
>> practice, and the degree of synchronization required to pull it all off
>> properly.
>>
>> For a few years now the question we've all been pondering is: do we wait
>> for
>> schnorr to roll out multi-hop locks, or just use the latest ECDSA based
>> technique? As dual deployment is compatible (we can mak

Re: [Lightning-dev] An update on PTLCs

2020-04-22 Thread Olaoluwa Osuntokun
Hi Nadav,

Thanks for the updates! Super cool to see this concept continue to evolve
and integrate new technologies as they pop up.

> I believe this would only require a few changes to existing nodes:

Rather than a "few changes", this would to date be the largest network-level
update undertaken to the Lightning Network thus far. In the past, we rolled
out the new onion blob format (which enables changes like this), but none of
the intermediate nodes actually need to modify their behavior. New payment
types like MPP+AMP only needed the _end points_ to update making this an
end-to-end update that has been rolled out so far in a de-synchronized
manner.

Re-phrasing deploying this requires changes to: the core channel state
machine (the protocol we use to make commitment updates), HTLC scripts,
on-chain HTLC handling and resolution, path finding algorithms (to only seek
out the new PTLC-enabled nodes), invoice changes and onion blob processing.
I'd caution against underestimating how long all of this will take in
practice, and the degree of synchronization required to pull it all off
properly.

For a few years now the question we've all been pondering is: do we wait for
schnorr to roll out multi-hop locks, or just use the latest ECDSA based
technique? As dual deployment is compatible (we can make the onion blobs for
both types the same), a path has always existed to first roll out with the
latest ECDSA based technique then follow up later to roll out the schnorr
version as well. However there's also a risk here as depending on how
quickly things can be rolled out, schnorr may become available
mid-development, which would possibly cause us to reconsider the ECDSA path
and have the network purely use schnorr to make things nice and uniform.

Zooming out for a bit, the solution space of "how channels can look post
scriptless-scripts + taproot" is rather large [1], and the addition of this
new technique allows for an even larger set of deployment possibilities.
This latest ECDSA variant is much simpler than the prior ones (which had a
few rounds of more involved ZKPs), but since it still uses OP_CMS, it can't
be used to modify the funding output.

[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

-- Laolu


On Wed, Apr 22, 2020 at 8:13 AM Nadav Kohen  wrote:

> Hello all,
>
> I'd like to give an update on the current state of thinking and coding
> surrounding replacing Hash-TimeLock Contracts (HTLCs) with Point-TimeLock
> Contracts (PTLCs) (aka Payment Hashes -> Payment Points) in hopes of
> sparking interest, discussion, development, etc.
>
>
> We Want Payment Points!
> ---
>
> Using point-locks (in PTLCs) instead of hash-locks (in HTLCs) for
> lightning payments is an all around improvement. HTLCs require the use of
> the same hash across payment routes (barring fancy ZKPs which are inferior
> to PTLCs) while PTLCs allow for payment de-correlation along routes. For an
> introduction to the topic, see
> https://suredbits.com/payment-points-part-1/.
>
> In addition to improving privacy in this way and protecting against
> wormhole attacks, PTLC-based lightning channels open the door to a large
> variety of interesting applications that cannot be accomplished with HTLCs:
>
> Stuckless (retry-able) Payments with proof of payment (
> https://suredbits.com/payment-points-part-2-stuckless-payments/)
>
> Escrow contracts over Lightning (
> https://suredbits.com/payment-points-part-3-escrow-contracts/)
>
> High/DLOG AMP (
> https://docs.google.com/presentation/d/15l4h2_zEY4zXC6n1NqsImcjgA0fovl_lkgkKu1O3QT0/edit#slide=id.g64c15419e7_0_40
> )
>
> Stuckless + AMP (an improvement on Boomerang) (
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002239.html
> )
>
> Pay-for-signature (
> https://suredbits.com/payment-points-part-4-selling-signatures/)
>
> Pay-for-commitment (
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-September/002166.html
> )
>
> Monotonic access structures on payment completion (
> https://suredbits.com/payment-points-monotone-access-structures/)
>
> Ideal Barrier Escrow Implementation (
> https://suredbits.com/payment-points-implementing-barrier-escrows/)
>
> And allowing for Barrier Escrows, we can even have
>
> Atomic multi-payment setup (
> https://suredbits.com/payment-points-and-barrier-escrows/)
>
> Lightning Discreet Log Contract (
> https://suredbits.com/discreet-log-contracts-on-lightning-network/)
>
> Atomic multi-payment update (
> https://suredbits.com/updating-and-transferring-lightning-payments/)
>
> Lightning Discreet Log Contract Novation/Transfer (
> https://suredbits.com/transferring-lightning-dlcs/)
>
> There are likely even more things that can be done with Payment Points so
> make sure to respond if I've missed any known ones.
>
>
> How Do We Get Payment Points?
> -
>
> Eventually, once we have Taproot, we can use 2p-Schnorr adaptor signatures
> in 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
> Indeed, that is what I’m suggesting

Gotcha, if this is indeed what you're suggesting (all HTLC spends are now
2-of-2 multi-sig), then I think the modifications to the state machine I
sketched out in an earlier email are required. An exact construction which
achieves the requirements of "you can't broadcast until you have a secret
which I can obtain from the htlc sig for your commitment transaction, and my
secret is revealed with another swap", appears to be an open problem, atm.

Even if they're restricted in this fashion (must be a 1-in-1 out,
SIGHASH_ALL, fees are pre-agreed upon), they can still spend that with a CPFP
(while still unconfirmed in the mempool) and create another heavy tree,
which puts us right back at the same bidding war scenario?

> There are a bunch of ways of doing pinning - just opting into RBF isn’t
> even close to enough.

Mhmm, there're other ways of doing pinning. But with anchors as is defined
in that spec PR, they're forced to spend with an RBF-replaceable
transaction, which means the party wishing to time things out can enter into
a bidding war. If the party trying to impede things participates in this
progressive absolute fee increase, it's likely that the war terminates
with _one_ of them getting into the block, which seems to resolve
everything?

-- Laolu


On Wed, Apr 22, 2020 at 4:20 PM Matt Corallo 
wrote:

>
>
> On Apr 22, 2020, at 16:13, Olaoluwa Osuntokun  wrote:
>
>
> > Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> > braodcasted transactions, but instead to CPFP a maybe-broadcasted
> > transaction by sending a transaction which spends it and seeing if it is
> > accepted
>
> Sorry I still don't follow. By "we clearly need to go the other direction -
> all HTLC output spends need to be pre-signed.", you don't mean that the
> HTLC
> spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
> covenant? If the other party isn't restricted w.r.t _how_ they can spend
> the
> output (non-rbf'd, etc), then I don't see how that addresses anything.
>
>
> Indeed, that is what I’m suggesting. Anchor output and all. One thing we
> could think about is only turning it on over a certain threshold, and
> having a separate “only-kinda-enforceable-on-chain-HTLC-in-flight” limit.
>
> Also see my mail elsewhere in the thread that the other party is actually
> forced to spend their HTLC output using an RBF-replaceable transaction.
> With
> that, I think we're all good here? In the end both sides have the ability
> to
> raise the fee rate of their spending transactions with the highest winning.
> As long as one of them confirms within the CLTV-delta, then everyone is
> made whole.
>
>
> It does seem like my cached recollection of RBF opt-in was incorrect but
> please re-read the intro email. There are a bunch of ways of doing pinning
> - just opting into RBF isn’t even close to enough.
>
> [1]: https://github.com/bitcoin/bitcoin/pull/18191
>
>
> On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo 
> wrote:
>
>> A few replies inline.
>>
>> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
>> > Hi Matt,
>> >
>> >
>> >> While this is somewhat unintuitive, there are any number of good
>> anti-DoS
>> >> reasons for this, eg:
>> >
>> > None of these really strikes me as "good" reasons for this limitation,
>> which
>> > is at the root of this issue, and will also plague any more complex
>> Bitcoin
>> > contracts which rely on nested trees of transaction to confirm (CTV,
>> Duplex,
>> > channel factories, etc). Regarding the various (seemingly arbitrary)
>> package
>> > limits it's likely the case that any issues w.r.t computational
>> complexity
>> > that may arise when trying to calculate evictions can be ameliorated
>> with
>> > better choice of internal data structures.
>> >
>> > In the end, the simplest heuristic (accept the higher fee rate package)
>> side
> > steps all these issues and is also the most economically rational from
>> a
>> > miner's perspective. Why would one prefer a higher absolute fee package
>> > (which could be very large) over another package with a higher total
>> _fee
>> > rate_?
>>
>> This seems like a somewhat unnecessary drive-by insult of a project you
>> don't contribute to, but feel free to start with
>> a concrete suggestion here :).
>>
>> >> You'll note that B would be just fine if they had a way to safely
>> monitor the
>> >> global mempool, and while this seems like a prudent mitigation for
>> >> lightning

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
> This seems like a somewhat unnecessary drive-by insult of a project you
> don't contribute to, but feel free to start with a concrete suggestion
> here :).

This wasn't intended as an insult at all. I'm simply saying if there's
concern about worst case eviction/replacement, optimizations likely exist.
Other developers that are interested in more complex multi-transaction
contracts have realized this as well, and there're various open PRs that
attempt to propose such optimizations [1].

> Hmm, maybe the proposal wasn't clear. The idea isn't to add signatures to
> braodcasted transactions, but instead to CPFP a maybe-broadcasted
> transaction by sending a transaction which spends it and seeing if it is
> accepted

Sorry I still don't follow. By "we clearly need to go the other direction -
all HTLC output spends need to be pre-signed.", you don't mean that the HTLC
spends of the non-broadcaster also need to be an off-chain 2-of-2 multi-sig
covenant? If the other party isn't restricted w.r.t _how_ they can spend the
output (non-rbf'd, etc), then I don't see how that addresses anything.

Also see my mail elsewhere in the thread that the other party is actually
forced to spend their HTLC output using an RBF-replaceable transaction. With
that, I think we're all good here? In the end both sides have the ability to
raise the fee rate of their spending transactions with the highest winning.
As long as one of them confirms within the CLTV-delta, then everyone is
made whole.


[1]: https://github.com/bitcoin/bitcoin/pull/18191


On Wed, Apr 22, 2020 at 9:50 AM Matt Corallo 
wrote:

> A few replies inline.
>
> On 4/22/20 12:13 AM, Olaoluwa Osuntokun wrote:
> > Hi Matt,
> >
> >
> >> While this is somewhat unintuitive, there are any number of good
> anti-DoS
> >> reasons for this, eg:
> >
> > None of these really strikes me as "good" reasons for this limitation,
> which
> > is at the root of this issue, and will also plague any more complex
> Bitcoin
> > contracts which rely on nested trees of transaction to confirm (CTV,
> Duplex,
> > channel factories, etc). Regarding the various (seemingly arbitrary)
> package
> > limits it's likely the case that any issues w.r.t computational
> complexity
> > that may arise when trying to calculate evictions can be ameliorated with
> > better choice of internal data structures.
> >
> > In the end, the simplest heuristic (accept the higher fee rate package)
> side
> > steps all these issues and is also the most economically rational from a
> > miner's perspective. Why would one prefer a higher absolute fee package
> > (which could be very large) over another package with a higher total _fee
> > rate_?
>
> This seems like a somewhat unnecessary drive-by insult of a project you
> don't contribute to, but feel free to start with
> a concrete suggestion here :).
>
> >> You'll note that B would be just fine if they had a way to safely
> monitor the
> >> global mempool, and while this seems like a prudent mitigation for
> >> lightning implementations to deploy today, it is itself a quagmire of
> >> complexity
> >
> > Is it really all that complex? Assuming we're talking about just watching
> > for a certain script template (the HTLC script) in the mempool to be able
> to
> > pull a pre-image as soon as possible. Early versions of lnd used the
> mempool
> > for commitment broadcast detection (which turned out to be a bad idea so
> we
> > removed it), but at a glance I don't see why watching the mempool is so
> > complex.
>
> Because watching your own mempool is not guaranteed to work, and during
> upgrade cycles that include changes to the
> policy rules an attacker could exploit your upgraded/non-upgraded status
> to perform the same attack.
>
> >> Further, this is a really obnoxious assumption to hoist onto lightning
> >> nodes - having an active full node with an in-sync mempool is a lot more
> >> CPU, bandwidth, and complexity than most lightning users were expecting
> to
> >> face.
> >
> > This would only be a requirement for Lightning nodes that seek to be a
> part
> > of the public routing network with a desire to _forward_ HTLCs. This
> > doesn't affect laptops or mobile phones which likely mostly have private
> > channels and don't participate in HTLC forwarding. I think it's pretty
> > reasonable to expect a "proper" routing node on the network to be backed
> by
> > a full-node. The bandwidth concern is valid, but we'd need concrete
> numbers
> > that compare the bandwidth overhead of mempool awareness (assuming the
> > latest and greatest me

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
Hi z,

Actually, the current anchors proposal already does this, since it enforces a
CSV of 1 block before the HTLCs can be spent (the block after confirmation).
So I think we already do this, meaning the malicious node is already forced
to use an RBF-replaceable transaction.
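
For anyone who wants to convince themselves of the mechanics, here's a small
illustrative Go sketch (not lnd code; constants and helper names are mine) of
why satisfying a CSV clause necessarily implies BIP125 signalling: the spender
must leave the nSequence disable bit unset, which keeps nSequence strictly
below the BIP125 opt-out threshold.

package main

import "fmt"

const (
    // Bit 31 of nSequence disables relative-locktime semantics (BIP 68/112).
    // An input satisfying OP_CHECKSEQUENCEVERIFY must leave it unset.
    seqDisableFlag uint32 = 1 << 31

    // BIP 125: an input opts out of replacement only if its nSequence is
    // 0xffffffff-1 or higher.
    rbfOptOutThreshold uint32 = 0xffffffff - 1
)

// signalsRBF reports whether an input with this nSequence signals BIP125
// replaceability.
func signalsRBF(nSequence uint32) bool {
    return nSequence < rbfOptOutThreshold
}

// satisfiesCSV reports whether this nSequence can satisfy a block-based
// `<csvDelay> OP_CHECKSEQUENCEVERIFY` clause (tx version >= 2 assumed,
// time-based lock type flag ignored for brevity).
func satisfiesCSV(nSequence, csvDelay uint32) bool {
    if nSequence&seqDisableFlag != 0 {
        return false
    }
    return nSequence&0xffff >= csvDelay
}

func main() {
    // A sweep of a CSV-1 encumbered HTLC output must use a sequence like 1,
    // which both satisfies the CSV and signals RBF.
    fmt.Println(satisfiesCSV(1, 1), signalsRBF(1)) // true true

    // Any sequence high enough to opt out of RBF has the disable bit set,
    // so it can never satisfy the CSV clause.
    fmt.Println(satisfiesCSV(0xffffffff, 1)) // false
}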

-- Laolu


On Wed, Apr 22, 2020 at 4:05 PM Olaoluwa Osuntokun 
wrote:

> Hi Z,
>
> > It seems to me that, if my cached understanding that `<0>
> > OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then
> adding
> > that to the hashlock branch (2 witness bytes, 0.5 weight) would be a
> pretty
> > low-weight mitigation against this attack.
>
> I think this works...so they're forced to spend the output with a non-final
> sequence number, meaning it *must* signal RBF. In this case, now it's the
> timeout-er vs the success-er racing based on fee rate. If the honest party
> (the
> one trying to time out the HTLC) bids a fee rate higher (need to also
> account
> for the whole absolute fee replacement thing), then things should generally
> work out in their favor.
>
> -- Laolu
>
>
> On Tue, Apr 21, 2020 at 11:08 PM ZmnSCPxj  wrote:
>
>> Good morning Laolu, Matt, and list,
>>
>>
>> > >  * With `SIGHASH_NOINPUT` we can make the C-side signature
>> > >  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
>> > >  signature for a higher-fee version of HTLC-Timeout (assuming my
>> cached
>> > >  understanding of `SIGHASH_NOINPUT` still holds).
>> >
>> > no_input isn't needed. With simply single+anyone can pay, then B can
>> attach
>> > a new input+output pair to increase the fees on their HTLC redemption
>> > transaction. As you mention, they now enter into a race against this
> > malicious node to bump up their fees in order to win over the other
>> party.
>>
>> Right, right, that works as well.
>>
>> >
>> > If the malicious node uses a non-RBF signalled transaction to sweep
>> their
>> > HTLC, then we enter into another level of race, but this time on the
>> mempool
>> > propagation level. However, if there exists a relay path to a miner
>> running
>> > full RBF, then B's higher fee rate spend will win over.
>>
>> Hmm.
>>
>> So basically:
>>
>> * B has no mempool, because it wants to reduce its costs and etc.
>> * C broadcasts a non-RBF claim tx with low fee before A->B locktime (L+1).
>> * B does not notice this tx because:
>>   1.  The tx is too low fee to be put in a block.
>>   2.  B has no mempool so it cannot see the tx being propagated over the
>> P2P network.
>> * B tries to broadcast higher-fee HTLC-timeout, but fails because it
>> cannot replace a non-RBF tx.
>> * After L+1, C contacts the miners off-band and offers fee payment by
>> other means.
>>
>> It seems to me that, if my cached understanding that `<0>
>> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
>> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a pretty
>> low-weight mitigation against this attack.
>>
>> So I think the combination below gives us good size:
>>
>> * The HTLC-Timeout signature from C is flagged with
>> `OP_SINGLE|OP_ANYONECANPAY`.
>>   * Normally, the HTLC-Timeout still deducts the fee from the value of
>> the UTXO being spent.
>>   * However, if B notices that the L+1 timeout is approaching, it can
>> fee-bump HTLC-Timeout with some onchain funds, recreating its own signature
>> but reusing the (still valid) C signature.
>> * The hashlock branch in this case includes `<0> OP_CHECKSEQUENCEVERIFY`,
>> preventing C from broadcasting a low-fee claim tx.
>>
>> This has the advantages:
>>
>> * B does not need a mempool still and can run in `blocksonly`.
>> * The normal path is still the same as current behavior, we "only" add a
>> new path where if the L+1 timeout is approaching we fee-bump the
>> HTLC-Timeout.
>> * Costs are pretty low:
>>   * No need for extra RBF carve-out txo.
>>   * Just two additional witness bytes in the hashlock branch.
>> * No mempool rule changes needed, can be done with the P2P network of
>> today.
>>   * Probably still resilient even with future changes in mempool rules,
>> as long as typical RBF behaviors still remain.
>>
>> Is my understanding correct?
>>
>> Regards,
>> ZmnSCPxj
>>
>> >
>> > -- Laolu
>> >
>> > On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
>> bitcoin-...@lists.l

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-22 Thread Olaoluwa Osuntokun
Hi Z,

> It seems to me that, if my cached understanding that `<0>
> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a pretty
> low-weight mitigation against this attack.

I think this works...so they're forced to spend the output with a non-final
sequence number, meaning it *must* signal RBF. In this case, now it's the
timeout-er vs the success-er racing based on fee rate. If the honest party
(the one trying to time out the HTLC) bids a fee rate higher (need to also
account for the whole absolute fee replacement thing), then things should
generally work out in their favor.

-- Laolu


On Tue, Apr 21, 2020 at 11:08 PM ZmnSCPxj  wrote:

> Good morning Laolu, Matt, and list,
>
>
> > >  * With `SIGHASH_NOINPUT` we can make the C-side signature
> > >  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
> > >  signature for a higher-fee version of HTLC-Timeout (assuming my cached
> > >  understanding of `SIGHASH_NOINPUT` still holds).
> >
> > no_input isn't needed. With simply single+anyone can pay, then B can
> attach
> > a new input+output pair to increase the fees on their HTLC redemption
> > transaction. As you mention, they now enter into a race against this
> > malicious node to bump up their fees in order to win over the other
> party.
>
> Right, right, that works as well.
>
> >
> > If the malicious node uses a non-RBF signalled transaction to sweep their
> > HTLC, then we enter into another level of race, but this time on the
> mempool
> > propagation level. However, if there exists a relay path to a miner
> running
> > full RBF, then B's higher fee rate spend will win over.
>
> Hmm.
>
> So basically:
>
> * B has no mempool, because it wants to reduce its costs and etc.
> * C broadcasts a non-RBF claim tx with low fee before A->B locktime (L+1).
> * B does not notice this tx because:
>   1.  The tx is too low fee to be put in a block.
>   2.  B has no mempool so it cannot see the tx being propagated over the
> P2P network.
> * B tries to broadcast higher-fee HTLC-timeout, but fails because it
> cannot replace a non-RBF tx.
> * After L+1, C contacts the miners off-band and offers fee payment by
> other means.
>
> It seems to me that, if my cached understanding that `<0>
> OP_CHECKSEQUENCEVERIFY` is sufficient to require RBF-flagging, then adding
> that to the hashlock branch (2 witness bytes, 0.5 weight) would be a pretty
> low-weight mitigation against this attack.
>
> So I think the combination below gives us good size:
>
> * The HTLC-Timeout signature from C is flagged with
> `OP_SINGLE|OP_ANYONECANPAY`.
>   * Normally, the HTLC-Timeout still deducts the fee from the value of the
> UTXO being spent.
>   * However, if B notices that the L+1 timeout is approaching, it can
> fee-bump HTLC-Timeout with some onchain funds, recreating its own signature
> but reusing the (still valid) C signature.
> * The hashlock branch in this case includes `<0> OP_CHECKSEQUENCEVERIFY`,
> preventing C from broadcasting a low-fee claim tx.
>
> This has the advantages:
>
> * B does not need a mempool still and can run in `blocksonly`.
> * The normal path is still the same as current behavior, we "only" add a
> new path where if the L+1 timeout is approaching we fee-bump the
> HTLC-Timeout.
> * Costs are pretty low:
>   * No need for extra RBF carve-out txo.
>   * Just two additional witness bytes in the hashlock branch.
> * No mempool rule changes needed, can be done with the P2P network of
> today.
>   * Probably still resilient even with future changes in mempool rules, as
> long as typical RBF behaviors still remain.
>
> Is my understanding correct?
>
> Regards,
> ZmnSCPxj
>
> >
> > -- Laolu
> >
> > On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
> bitcoin-...@lists.linuxfoundation.org> wrote:
> >
> > > Good morning Matt, and list,
> > >
> > > > RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds,
> how, now?")
> > > > =
> > > >
> > > > You'll note that in the discussion of RBF pinning we were pretty
> broad, and that that discussion seems to in fact cover
> > > > our HTLC outputs, at least when spent via (3) or (4). It does,
> and in fact this is a pretty severe issue in today's
> > > > lightning protocol [2]. A lightning counterparty (C, who
> received the HTLC from B, who received it from A) today could,
> > > > if B broadcasts the commitment transaction, spend an HTLC using
> the preimage with a low-fee, RBF-disabled transaction.
> > > > After a few blocks, A could claim the HTLC from B via the
> timeout mechanism, and then after a few days, C could get the
> > > > HTLC-claiming transaction mined via some out-of-band agreement
> with a small miner. This leaves B short the HTLC value.
> > >
> > > My (cached) understanding is that, since RBF is signalled using
> `nSequence`, any `OP_CHECKSEQUENCEVERIFY` also automatically 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-21 Thread Olaoluwa Osuntokun
> So what is needed is to allow B to add fees to HTLC-Timeout:

Indeed, anchors as defined in #lightning-rfc/688 allows this.

>  * With `SIGHASH_NOINPUT` we can make the C-side signature
>  `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
>  signature for a higher-fee version of HTLC-Timeout (assuming my cached
>  understanding of `SIGHASH_NOINPUT` still holds).

no_input isn't needed. With simply single+anyone can pay, then B can attach
a new input+output pair to increase the fees on their HTLC redemption
transaction. As you mention, they now enter into a race against this
malicious node to bump up their fees in order to win over the other party.

If the malicious node uses a non-RBF signalled transaction to sweep their
HTLC, then we enter into another level of race, but this time on the mempool
propagation level. However, if there exists a relay path to a miner running
full RBF, then B's higher fee rate spend will win over.

-- Laolu

On Tue, Apr 21, 2020 at 9:13 PM ZmnSCPxj via bitcoin-dev <
bitcoin-...@lists.linuxfoundation.org> wrote:

> Good morning Matt, and list,
>
>
>
> > RBF Pinning HTLC Transactions (aka "Oh, wait, I can steal funds,
> how, now?")
> > =
> >
> > You'll note that in the discussion of RBF pinning we were pretty
> broad, and that that discussion seems to in fact cover
> > our HTLC outputs, at least when spent via (3) or (4). It does, and
> in fact this is a pretty severe issue in today's
> > lightning protocol [2]. A lightning counterparty (C, who received
> the HTLC from B, who received it from A) today could,
> > if B broadcasts the commitment transaction, spend an HTLC using the
> preimage with a low-fee, RBF-disabled transaction.
> > After a few blocks, A could claim the HTLC from B via the timeout
> mechanism, and then after a few days, C could get the
> > HTLC-claiming transaction mined via some out-of-band agreement with
> a small miner. This leaves B short the HTLC value.
>
> My (cached) understanding is that, since RBF is signalled using
> `nSequence`, any `OP_CHECKSEQUENCEVERIFY` also automatically imposes the
> requirement "must be RBF-enabled", including `<0> OP_CHECKSEQUENCEVERIFY`.
> Adding that clause (2 bytes in witness if my math is correct) to the
> hashlock branch may be sufficient to prevent C from making an RBF-disabled
> transaction.
>
> But then you mention out-of-band agreements with miners, which basically
> means the transaction might not be in the mempool at all, in which case the
> vulnerability is not really about RBF or relay, but sheer economics.
>
> The payment is A->B->C, and the HTLC A->B must have a larger timeout (L +
> 1) than the HTLC B->C (L), in abstract non-block units.
> The vulnerability you are describing means that the current time must now
> be L + 1 or greater ("A could claim the HTLC from B via the timeout
> mechanism", meaning the A->B HTLC has timed out already).
>
> If so, then the B->C transaction has already timed out in the past and can
> be claimed in two ways, either via B timeout branch or C hashlock branch.
> This sets up a game where B and C bid to miners to get their version of
> reality committed onchain.
> (We can neglect out-of-band agreements here; miners have the incentive to
> publicly leak such agreements so that other potential bidders can offer
> even higher fees for their versions of that transaction.)
>
> Before L+1, C has no incentive to bid, since placing any bid at all will
> leak the preimage, which B can then turn around and use to spend from A,
> and A and C cannot steal from B.
>
> Thus, B should ensure that *before* L+1, the HTLC-Timeout has been
> committed onchain, which outright prevents this bidding war from even
> starting.
>
> The issue then is that B is using a pre-signed HTLC-timeout, which is
> needed since it is its commitment tx that was broadcast.
> This prevents B from RBF-ing the HTLC-Timeout transaction.
>
> So what is needed is to allow B to add fees to HTLC-Timeout:
>
> * We can add an RBF carve-out output to HTLC-Timeout, at the cost of more
> blockspace.
> * With `SIGHASH_NOINPUT` we can make the C-side signature
> `SIGHASH_NOINPUT|SIGHASH_SINGLE` and allow B to re-sign the B-side
> signature for a higher-fee version of HTLC-Timeout (assuming my cached
> understanding of `SIGHASH_NOINPUT` still holds).
>
> With this, B can exponentially increase the fee as L+1 approaches.
> If B can get HTLC-Timeout confirmed before L+1, then C cannot steal the
> HTLC value at all, since the UTXO it could steal from has already been
> spent.
>
> In particular, it does not seem to me that it is necessary to change the
> hashlock-branch transaction of C at all, since this mechanism is enough to
> sidestep the issue (as I understand it).
> But it does point to a need to make HTLC-Timeout (and possibly
> symmetrically, HTLC-Success) also fee-bumpable.
>
> Note as well that this does not require a mempool: B can run in
> 

Re: [Lightning-dev] [bitcoin-dev] RBF Pinning with Counterparties and Competing Interest

2020-04-21 Thread Olaoluwa Osuntokun
Hi Matt,


> While this is somewhat unintuitive, there are any number of good anti-DoS
> reasons for this, eg:

None of these really strikes me as "good" reasons for this limitation, which
is at the root of this issue, and will also plague any more complex Bitcoin
contracts which rely on nested trees of transactions to confirm (CTV, Duplex,
channel factories, etc). Regarding the various (seemingly arbitrary) package
limits it's likely the case that any issues w.r.t computational complexity
that may arise when trying to calculate evictions can be ameliorated with
better choice of internal data structures.

In the end, the simplest heuristic (accept the higher fee rate package) side
steps all these issues and is also the most economically rational from a
miner's perspective. Why would one prefer a higher absolute fee package
(which could be very large) over another package with a higher total _fee
rate_?

> You'll note that B would be just fine if they had a way to safely monitor the
> global mempool, and while this seems like a prudent mitigation for
> lightning implementations to deploy today, it is itself a quagmire of
> complexity

Is it really all that complex? Assuming we're talking about just watching
for a certain script template (the HTLC script) in the mempool to be able to
pull a pre-image as soon as possible. Early versions of lnd used the mempool
for commitment broadcast detection (which turned out to be a bad idea so we
removed it), but at a glance I don't see why watching the mempool is so
complex.
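
To make "not all that complex" concrete, here's a rough Go sketch of this kind
of mempool watching, assuming a btcd-style backend reachable over RPC via the
btcsuite rpcclient package. Rather than matching the HTLC script template
itself, it simply scans witness elements for a 32-byte item whose SHA256
matches a payment hash we're watching; connection details, polling, and names
are illustrative only, not how lnd does (or would) implement it.

package main

import (
    "crypto/sha256"
    "log"
    "time"

    "github.com/btcsuite/btcd/rpcclient"
)

// watchForPreimage polls the mempool and scans witness items of any new
// transaction for a 32-byte value whose SHA256 matches a payment hash we
// care about. A real implementation would use notifications rather than
// polling and persist which txids it has already inspected.
func watchForPreimage(client *rpcclient.Client, payHash [32]byte) {
    seen := make(map[string]struct{})
    for {
        txids, err := client.GetRawMempool()
        if err != nil {
            log.Printf("mempool poll failed: %v", err)
            time.Sleep(time.Second)
            continue
        }
        for _, txid := range txids {
            if _, ok := seen[txid.String()]; ok {
                continue
            }
            seen[txid.String()] = struct{}{}

            tx, err := client.GetRawTransaction(txid)
            if err != nil {
                continue
            }
            for _, in := range tx.MsgTx().TxIn {
                for _, item := range in.Witness {
                    if len(item) != 32 {
                        continue
                    }
                    if sha256.Sum256(item) == payHash {
                        log.Printf("found preimage %x in %v", item, txid)
                        return
                    }
                }
            }
        }
        time.Sleep(time.Second)
    }
}

func main() {
    // Connection parameters are placeholders.
    client, err := rpcclient.New(&rpcclient.ConnConfig{
        Host:         "localhost:8334",
        User:         "user",
        Pass:         "pass",
        HTTPPostMode: true,
        DisableTLS:   true,
    }, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Shutdown()

    var payHash [32]byte // the HTLC payment hash we're watching for
    watchForPreimage(client, payHash)
}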

> Further, this is a really obnoxious assumption to hoist onto lightning
> nodes - having an active full node with an in-sync mempool is a lot more
> CPU, bandwidth, and complexity than most lightning users were expecting to
> face.

This would only be a requirement for Lightning nodes that seek to be a part
of the public routing network with a desire to _forward_ HTLCs. This
doesn't affect laptops or mobile phones, which likely mostly have private
channels and don't participate in HTLC forwarding. I think it's pretty
reasonable to expect a "proper" routing node on the network to be backed by
a full-node. The bandwidth concern is valid, but we'd need concrete numbers
that compare the bandwidth overhead of mempool awareness (assuming the
latest and greatest mempool syncing) with the overhead of the channel update
gossip and gossip queries which LN nodes already face today, to see how much
worse off they really would be.

As detailed a bit below, if nodes watch the mempool, then this class of
attack assuming the anchor output format as described in the open
lightning-rfc PR is mitigated. At a glance, watching the mempool seems like
a far less involved process compared to modifying the state machine as its
defined today. By watching the mempool and implementing the changes in
#lightning-rfc/688, then this issue can be mitigated _today_. lnd 0.10
doesn't yet watch the mempool (but does include anchors [1]), but unless I'm
missing something it should be pretty straightforward to add, which more or
less resolves this issue altogether.

> not fixing this issue seems to render the whole exercise somewhat useless

Depends on if one considers watching the mempool a fix. But even with that a
base version of anchors still resolves a number of issues including:
eliminating the commitment fee guessing game, allowing users to pay less on
force close, being able to coalesce 2nd level HTLC transactions with the
same CLTV expiry, and actually being able to reliably enforce multi-hop HTLC
resolution.

> Instead of making the HTLC output spending more free-form with
> SIGHASH_ANYONECAN_PAY|SIGHASH_SINGLE, we clearly need to go the other
> direction - all HTLC output spends need to be pre-signed.

I'm not sure this is actually immediately workable (need to think about it
more). To see why, remember that the commit_sig message includes HTLC
signatures for the _remote_ party's commitment transaction, so they can
spend the HTLCs if they broadcast their version of the commitment (force
close). If we don't somehow also _gain_ signatures (our new HTLC signatures)
allowing us to spend HTLCs on _their_ version of the commitment, then if
they broadcast that commitment (without revoking), then we're unable to
redeem any of those HTLCs at all, possibly losing money.

In an attempt to counteract this, we might say ok, the revoke message also
now includes HTLC signatures for their new commitment allowing us to spend
our HTLCs. This resolves things in a weaker security model, but doesn't
address the issue generally, as after they receive the commit_sig, they can
broadcast immediately, again leaving us without a way to redeem our HTLCs.

I'd need to think about it more, but it seems that following this path would
require an overhaul in the channel state machine to make presenting a new
commitment actually take at least _two phases_ (at least a full round trip).
The first phase would tender the commitment, but 

[Lightning-dev] Anchor Outputs Spec & Implementation Progress

2020-03-30 Thread Olaoluwa Osuntokun
Hi y'all,

We've been discussing the current state of the spec and implementation
readiness of anchor outputs for a few week now on IRC. As detailed
conversations are at times difficult to have on IRC, and there's no true
history, I figured I'd start a new discussion thread where we can hammer out
the final details.

First, on the current state of implementation. Anchor outputs are now fully
supported in the master branch of lnd. A user can opt into this new format
by specifying a new command line parameter: --protocol.anchors (off by
default).  Nodes running with this flag will use the feature bit 1337 for
negotiation. We didn't use the range above 65k, as we realized that would
result in rather large init messages. This feature will be included in our
upcoming 0.10 release, which will be entering the release mandate phase in
the next week or two. We also plan to add an entry in the wiki declaring our
usage of this feature bit.

Anchors in lnd implement the spec as is currently defined: two anchors at
all times, with each anchor utilizing 330 satoshis.

During the last spec meeting, the following concerns were raised about
having two anchors at all times (compared to one and re-using the to_remote)
output:

  1. two anchors add extra bytes to the commitment transaction, increasing
     the fee burden for force closing
  2. two anchors pollute the UTXO set, so instead one anchor (for the force
     closing party) should be present, while the other party re-uses their
     to_remote output for this purpose

In response to the first concern: it is indeed the case that these new
commitments are more expensive, but they're only _slightly_ so. The new
default commitment weight is as if there're two HTLCs at all times on the
commitment transaction. Adding in the extra anchor cost (660 satoshis) is a
false equivalence as both parties are able to recover these funds if they
chose. It's also the case that force closes in the ideal case are only due to
nodes needing to go on-chain to sweep HTLCs, so the extra bytes may be
dwarfed by several HTLCs, particularly in a post MPP/AMP world. The extra
cost may seem large (relatively) when looking at a 1 sat/byte commitment
transaction. However, fees today in the system are on the rise, and if one
is actually in a situation where they need to resolve HTLCs on chain,
they'll likely require a fee rate higher than 1 sat/byte to have their
commitment confirm in a timely manner.

On the topic of UTXO bloat, IMO re-purposing the to_remote output as an
anchor is arguably _worse_, as only a single party in the channel is able to
spend that output in order to remove its impact on the UTXO set. On the
other hand, using two anchors (with their special scripts) allows _anyone_
to sweep these outputs several blocks after the commitment transaction has
confirmed. In order to cover the case where the remote party has no balance,
but a single incoming HTLC, the channel initiator must either create a new
anchor output for this special case (creating a new type of ad-hoc reserve),
or always create a to_remote output for the other party (donating the 330
satoshis).  The first option reduces down to having two anchors once again,
while the second option creates an output which is likely uneconomical to
sweep in isolation (compared to anchors which can be swept globally in the
system taking advantage of the input aggregation savings).

The final factor to consider is if we wish to properly re-introduce a CSV
delay to the to_remote party in an attempt to remedy some game theoretical
issues w.r.t forcing one party to close early without a cost to the
instigator. In the past we made some headway in this direction, but then
reverted our changes as we discovered some previously unknown gaming
vectors even with a symmetrical delay. If we keep two anchors as is, then we
leave this thread open to a comprehensive solution, as the dual anchor
format is fully decoupled from the rest of the commitment.

Circling back to our implementation, we're ready to deploy what we have as
is.  In the future, if the scheme changes, then we'll be able to easily
update all our users, as we're also concurrently working on a dynamic
commitment update protocol. By dynamic I mean that users will be able to
update their commitment type on the fly, compared to being locked into a
commitment type when the channel opens as is today.

Would love to hear y'alls thoughts on the two primary concerns laid out
above, and my response to them, thanks!

-- Laolu


[Lightning-dev] Potential Minor Sphinx Privacy Leak and Patch

2019-11-05 Thread Olaoluwa Osuntokun
Hi y'all,

A new paper analyzing the security of the Sphinx mix-net packet format [1]
(and also HORNET) has recently caught my attention. The paper is rather long
and very theory heavy, but the TL;DR is this:

* The OG Sphinx paper proved various aspects of its security using a
  model for onion routing originally put forth by Camenisch and
  Lysyanskaya [2].
* This new paper discovered that certain security notions put forth in
  [2] weren't actually practically achievable by real-world onion routing
  implementations (in this case Onion-Correctness), or weren't entirely
  correct or additive.  New stronger security notions are put forth in
  response, along with extensions to the original Sphinx mix-net packet
  format that achieve these notions.
* A flaw they discovered in the original Sphinx paper [3], can allow an
  exit node to deduce a lower bound of the length of the path used to
  reach it. The issue is that the original paper constructs the
  _starting packet_ (what the exit hop will receive) by adding extra
  padding zeroes after the destination and identifier (we've more or
  less revamped this with our new onion format, but it still stands).
  An adversarial exit node can then locate the first set bit after the
  identifier (our payload in this case), then use that to compute the
  lower bound.
 * One of the (?) reference Sphinx implementations recognizes that this
   was/is an issue in the paper and implements the mitigation [4].
 * The fix on our end is easy: we need to replace those zero bytes with
   random bytes when constructing the starting packet (see the sketch below).
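
A minimal Go sketch of that fix, assuming a hypothetical helper inside the
sender's packet builder (names and sizes here are illustrative, not the actual
lightning-onion code):

package main

import (
    "crypto/rand"
    "fmt"
    "log"
)

// padStartingPacket fills everything after the real per-hop payload with
// random bytes instead of zeroes, so an adversarial exit hop can no longer
// locate the "first set bit after the payload" to bound the path length.
func padStartingPacket(packet []byte, payloadLen int) error {
    if payloadLen > len(packet) {
        return fmt.Errorf("payload (%d) larger than packet (%d)",
            payloadLen, len(packet))
    }
    _, err := rand.Read(packet[payloadLen:])
    return err
}

func main() {
    // Sizes are illustrative, not the spec's routing-info size.
    packet := make([]byte, 1300)
    payload := []byte("final hop payload")
    copy(packet, payload)

    if err := padStartingPacket(packet, len(payload)); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("first padding bytes are now random: %x...\n",
        packet[len(payload):len(payload)+8])
}

Note that only the sender's construction changes; intermediate processing is
untouched, which is why no network-wide coordination is needed.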

I've created a PR to lnd's lightning-onion repo implementing this mitigation
[5].  As this changes the starting packet format, we also need to either
update the test vectors or we can keep them as is, and note that we use
zeroes so the test vectors are fully deterministic. My PR to the spec
patching the privacy leak leaves the test vectors untouched as is [6].

With all that said, IMO we have larger existing privacy leaks just due to
our unique application of the packet format. As an example, a receiver can
use the CLTV of the final HTLC to deduce bounds on the path length as we
have a restricted topology and CLTV values for public channels are all
known. Another leak is our usage of the variable length onion payloads which
a node can use to ascertain path length since the space they consume counts
towards the max hop count of 20-something.

In any case, we can patch this with just a few lines of code (fill out with
random bytes) at _senders_, and don't need any intermediate nodes to update.
The new and old packet construction algos are compatible as packet
_processing_ isn't changing, instead just the starting set of bytes are.

As always, please double-check my interpretation of the paper, as it's
possible I'm missing something. If my interpretation stands, then it's a
relatively minor privacy leak, and an easy low-hanging fruit that can be
patched without wide-spread network coordination.

-- Laolu

[1]: https://arxiv.org/abs/1910.13772
[2]: https://www.iacr.org/cryptodb/archive/2005/CRYPTO/1091/1091.pdf
[3]: https://cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf
[4]:
https://github.com/UCL-InfoSec/sphinx/blob/c05b7034eaffd8f98454e0619b0b1548a9fa0f42/SphinxClient.py#L67
[5]: https://github.com/lightningnetwork/lightning-onion/pull/40
[6]: https://github.com/lightningnetwork/lightning-rfc/pull/697


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-05 Thread Olaoluwa Osuntokun
Hi Rusty,

Agreed w.r.t the need for prepaid HTLCs, I've been mulling over other
alternatives for a few years now, and none of them seems to resolve the
series of routing related incentive issues that prepaid HTLCs would.

> Since both Offers and Joost's WhatSat are looking at sending messages,
> it's time to float actual proposals.

IMO both should just be done over HORNET, so we don't need to introduce a new
set of internal protocol level messages whenever we have some new
control/signalling need. Instead, we'd have a control/signal channel (give me
routes, invoices, sign this, etc), and a payment channel (HTLCs as used
today).

> 2. Adding an HTLC causes a *push* of a number of msat on commitment_signed
> (new field), and a hash.

The prepay amount should be signalled in the update add message instead.
This lets HTLCs carry a heterogeneous set of prepay amounts. In addition, we
need a new onion field as well to signal the incoming amount the node
_should_ have received (allows them to detect deviations in the sender's
intended route).
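
To make the shape of that concrete, a hypothetical Go sketch of the fields
involved; the field names and types are invented for illustration and are not
a spec proposal:

package main

import "fmt"

// UpdateAddHTLC carries the prepay amount alongside the HTLC itself, so each
// HTLC can carry a distinct prepay value.
type UpdateAddHTLC struct {
    ChannelID   [32]byte
    ID          uint64
    AmountMsat  uint64
    PaymentHash [32]byte
    CLTVExpiry  uint32
    PrepayMsat  uint64 // new: prepay pushed with this HTLC
}

// HopPayload is the per-hop onion payload. ExpectedPrepayMsat lets a hop
// verify that the prepay it actually received matches what the sender
// intended, detecting deviations along the route.
type HopPayload struct {
    AmtToForwardMsat   uint64
    OutgoingCLTV       uint32
    ExpectedPrepayMsat uint64 // new: prepay this hop should have received
}

func main() {
    add := UpdateAddHTLC{AmountMsat: 100000, CLTVExpiry: 700000, PrepayMsat: 50}
    hop := HopPayload{AmtToForwardMsat: 99000, OutgoingCLTV: 699960, ExpectedPrepayMsat: 50}

    if add.PrepayMsat < hop.ExpectedPrepayMsat {
        fmt.Println("prepay deviation detected, fail the HTLC")
        return
    }
    fmt.Println("prepay matches sender's intent, continue forwarding")
}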

> 3. Failing/succeeding an HTLC returns some of those msat, and a count and
> preimage (new fields).

Failing shouldn't return the prepay amount, otherwise extending long lived
HTLCs then cancelling them at the last minute is still costless. This
costlessness of _adding_ an HTLC to a _remote_ commitment is, IMO, the
biggest incentive flaw that exists today in the greater routing network.

>  You get to keep 50 msat[1] per preimage you present[2].

We should avoid introducing any new constants to the protocol, as they're
typically dreamed up independent of any empirical lessons learned from
deployment.

On the topic of the prepay cost, the channel update message should be
extended to allow nodes to signal prepay costs similar to the way we handle
regular payment success fees. In order to eliminate a number of costless
attacks possible today on the routing network, nodes should also be able to
signal a new coefficient used to _scale_ the prepay fee as a function of the
CLTV value of the incoming HTLC. With this addition, senders need to pay to
_add_ an HTLC to a remote commitment transaction (fixed base cost), then
also need to pay a variable rate that scales with the duration of the
proposed outgoing CLTV value (senders ofc don't prepay to themselves).  Once
we introduce this, loop attacks and the like are no longer free to launch,
and nodes can dynamically respond to congestion in the network by raising
their prepay prices.
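
As a toy illustration of that fee schedule shape (coefficient names are mine,
not a spec proposal): a fixed base cost to add the HTLC at all, plus a
variable component that scales with the proposed outgoing CLTV delta.

package main

import "fmt"

// PrepayPolicy is a hypothetical per-channel prepay schedule a node could
// advertise in its channel_update: a fixed cost to add an HTLC, plus a
// per-block rate that scales with how long the HTLC can be held.
type PrepayPolicy struct {
    BaseMsat         uint64 // cost to add an HTLC at all
    MsatPerCLTVBlock uint64 // cost per block of outgoing CLTV delta
}

// PrepayFee computes the prepay owed for an HTLC whose outgoing timelock
// expires cltvDelta blocks in the future.
func (p PrepayPolicy) PrepayFee(cltvDelta uint32) uint64 {
    return p.BaseMsat + uint64(cltvDelta)*p.MsatPerCLTVBlock
}

func main() {
    policy := PrepayPolicy{BaseMsat: 100, MsatPerCLTVBlock: 2}

    // A short-lived HTLC is cheap to add...
    fmt.Println(policy.PrepayFee(40)) // 180 msat

    // ...while a long-lived one (e.g. a loop attack holding liquidity for
    // days) costs proportionally more up front.
    fmt.Println(policy.PrepayFee(2016)) // 4132 msat
}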

-- Laolu

On Mon, Nov 4, 2019 at 6:25 PM Rusty Russell  wrote:

> Hi all,
>
> It's been widely known that we're going to have to have up-front
> payments for msgs eventually, to avoid Type 2 spam (I think of Type 1
> link-local, Type 2 though multiple nodes, and Type 3 liquidity-using
> spam).
>
> Since both Offers and Joost's WhatSat are looking at sending
> messages, it's time to float actual proposals.  I've been trying to come
> up with something for several years now, so thought I'd present the best
> I've got in the hope that others can improve on it.
>
> 1. New feature bit, extended messages, etc.
> 2. Adding an HTLC causes a *push* of a number of msat on
>commitment_signed (new field), and a hash.
> 3. Failing/succeeding an HTLC returns some of those msat, and a count
>and preimage (new fields).
>
> How many msat can you take for forwarding?  That depends on you
> presenting a series of preimages (which chain into a final hash given in
> the HTLC add), which you get by decoding the onion.  You get to keep 50
> msat[1] per preimage you present[2].
>
> So, how many preimages does the user have to give to have you forward
> the payment?  That depends.  The base rate is 16 preimages, but subtract
> one for each leading 4 zero bits of the SHA256(blockhash | hmac) of the
> onion.  The blockhash is the hash of the block specified in the onion:
> reject if it's not in the last 3 blocks[3].
>
> This simply adds some payment noise, while allowing a hashcash style
> tradeoff of sats for work.
>
> The final node gets some variable number of preimages, which adds noise.
> It should take all and subtract from the minimum required invoice amount
> on success, or take some random number on failure.
>
> This leaks some forward information, and makes an explicit tradeoff for
> the sender between amount spent and privacy, but it's the best I've been
> able to come up with.
>
> Thoughts?
> Rusty.
>
> [1] If we assume $1 per GB, $10k per BTC and 64k messages, we get about
> 655msat per message.  Flat pricing for simplicity; we're trying to
> prevent spam, not create a spam market.
> [2] Actually, a number and a single preimage; you can check this is
> indeed the n'th preimage.
> [3] This reduces incentive to grind the damn things in advance, though
> maybe that's dumb?  We can also use a shorter hash (siphash?), or
> even truncated SHA256 (128 bits).

Re: [Lightning-dev] Rendez-vous on a Trampoline

2019-11-05 Thread Olaoluwa Osuntokun
Hi t-bast,

> She creates a Bolt 11 invoice containing that pre-encrypted onion.

This seems insufficient, as if the prescribed route that Alice selects fails,
then the sender has no further information to go off of (let's say Teddy is
offline, but there're other paths). cdecker's rendezvous sketch using Sphinx
you linked above also suffers from the same issue: you need some other
bi-directional communication medium between the sender and receiver in order
to account for payment failures. Beyond that, if any failures occur in the
latter half of the route (the part that's opaque to the sender), then the
sender isn't able to incorporate the failure information into their path
finding. As a result, the payer would need to send the error back to the
receiver for decrypting, possibly ping-ponging several times in a payment
attempt.

On the other hand, using HORNET for rendezvous routing as was originally
intended gives the sender+receiver a communication channel they can use to
exchange further payment information, and also a channel to use for
decryption of the opaque errors. Amongst many other things, it would also
give us a payment-level ACK [1], which may be a key component for payment
splitting (otherwise you have no idea if _any_ shards have even arrived at
the other side).


[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001524.html

-- Laolu

On Tue, Oct 22, 2019 at 5:02 AM Bastien TEINTURIER  wrote:

> Good morning everyone,
>
> Since I'm a one-trick pony, I'd like to talk to you about...guess what?
> Trampoline!
> If you watched my talk at LNConf2019, I mentioned at the end that
> Trampoline enables high AMP very easily.
> Every Trampoline node in the route may aggregate an incoming multi-part
> payment and then decide on how
> to split the outgoing aggregated payment. It looks like this:
>
>          .--- 1mBTC ---.        .--- 2mBTC ---.
>         /               \      /               \
> Alice ----- 3mBTC -----> Ted ----- 4mBTC ----> Terry ----- 6mBTC ----> Bob
>         \               /
>          `--- 2mBTC ---'
>
> In this example, Alice only has small-ish channels to Ted so she has to
> split in 3 parts. Ted has good outgoing
> capacity to Terry so he's able to split in only two parts. And Terry has a
> big channel to Bob so he doesn't need
> to split at all.
> This is interesting because each intermediate Trampoline node has
> knowledge of his local channels balances,
> thus can make more informed decisions than Alice on how to efficiently
> split to reach the next node.
>
> But it doesn't stop there. Trampoline also enables a better rendez-vous
> routing than normal payments.
> Christian has done most of the hard work to figure out how we could do
> rendez-vous on top of Sphinx [1]
> (thanks Christian!), so I won't detail that here (but I do plan on
> submitting a detailed spec proposal with all
> the crypto equations and nice diagrams someday, unless Christian does it
> first).
>
> One of the issues with rendez-vous routing is that once Alice (the
> recipient) has created her part of the onion,
> she needs to communicate that to Bob (the sender). If we use a Bolt 11
> invoice for that, it means we need to
> put 1366 additional bytes to the invoice (plus some additional information
> for the ephemeral key switch).
> If the amount Alice wants to receive is big and may require multi-part,
> Alice has to decide upfront on how to split
> and provide multiple pre-encrypted onions (so we need 1366 bytes *per
> partial payment*, which kinda sucks).
>
> But guess what? Bitcoin Trampoline fixes that*™*. Instead of doing the
> pre-encryption on a normal onion, Alice
> would do the pre-encryption on a Trampoline onion (which is much smaller,
> in my prototype it's 466 bytes).
> And that allows rendez-vous routing to benefit from Trampoline's ability
> to do multi-part at each hop.
> Obviously since the onion is smaller, that limits the number of trampoline
> hops that can be used, but don't
> forget that there are additional "normal" hops between each Trampoline
> node (and the final Trampoline spec
> can choose the size of the Trampoline onion to enable a good enough
> rendez-vous).
>
> Here is what it would look like. Alice chooses to rendez-vous at Terry.
> Alice wants the payment to go through Terry
> and Teddy so she pre-encrypts a Trampoline onion with that route:
>
> Alice <--- Teddy <--- Terry
>
> She creates a Bolt 11 invoice containing that pre-encrypted onion. Bob
> picks up that invoice and can either reach
> Terry directly (via a normal payment route) or via another Trampoline node
> (Toad?). Bob finalizes the encryption of
> the Trampoline onion and sends it onward. Bob can use multi-part and split
> the payment however he wishes,
> because every Trampoline node in the route will be free to aggregate and
> re-split differently.
> Terry is the only intermediate node to know that rendez-vous routing was
> used. Terry 

Re: [Lightning-dev] Increasing fee defaults to 5000+500 for a healthier network?

2019-10-11 Thread Olaoluwa Osuntokun
Hi Rusty,

I think this change may be a bit misguided, and we should be careful about
making sweeping changes to default values like this such as fees. I'm
worried that this post (and the subsequent LGTMs by some developers)
promotes the notion that somehow in Lightning, developers decide on fees
(fees are too low, let's raise them!).

IMO, there're a number of flaws in the reasoning behind this proposal:

> defaults actually indicate lower reliability, and routing gets tarpitted
> trying them all

Defaults don't necessarily indicate higher/lower reliability. Issuing a
single CLI command to raise/lower the fees on one's node doesn't magically
make the owner of said node a _better_ routing node operator. If a node has
many channels, with all of them poorly managed, then path finding algorithms
can extrapolate the overall reliability of a node based on failures of
a sample of channels connected to that node. We've started to experiment with
such an approach here, so far the results are promising[1].
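
As a toy illustration of what "extrapolating reliability" can mean (this is
not lnd's actual mission control logic; names, priors, and numbers are mine),
a pathfinder can keep simple per-node success statistics and fold them into
its edge weights:

package main

import "fmt"

// nodeStats tracks observed forwarding attempts through a node.
type nodeStats struct {
    attempts int
    failures int
}

// successProbability returns a smoothed estimate of the chance a payment
// through this node succeeds, starting from an optimistic prior so that
// nodes we've never tried aren't written off immediately.
func (s nodeStats) successProbability() float64 {
    const priorAttempts, priorSuccesses = 2.0, 2.0 // optimistic prior
    successes := float64(s.attempts - s.failures)
    return (successes + priorSuccesses) / (float64(s.attempts) + priorAttempts)
}

func main() {
    history := map[string]nodeStats{
        "well-managed-node":   {attempts: 50, failures: 2},
        "poorly-managed-node": {attempts: 10, failures: 9},
        "unknown-node":        {},
    }

    for name, stats := range history {
        // A pathfinder can penalize nodes whose sampled channels keep
        // failing, regardless of what fees those nodes advertise.
        fmt.Printf("%s: p(success) = %.2f\n", name, stats.successProbability())
    }
}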

> There's no meaningful market signal in fees, since you can't drop much
> below 1ppm.

The market signal one should be extracting from the current state is: a true
market hasn't yet emerged as routing node operators are mostly hands off (as
they're used to being with their existing bitcoin node) and have yet to begin
to factor in the various costs of operating a node into their fee schedule.
Only a handful of routing node operators have started to experiment with
distinct fee settings in an attempt to feel out the level of elasticity in
the forwarding market today (if I double my fees, by how much do my daily
forwards and fee revenue drop off?).

Ken Sedgwick gave a pretty good talk on this topic at the most recent SF
Lightning Devs meetup [2]. The talk itself unfortunately wasn't recorded,
but there're a ton of cool graphs really digging into the various parameters
in the current market. He draws a similar conclusion stating that: "Many
current lightning channels are not charging enough fees to cover on-chain
replacement".

Developers raising the default fees (on their various implementations) won't
address this as it shows that the majority of participants today (routing
node operators) aren't yet thinking about their break even costs. IMO
generally this is due to a lack of education, which we're working to address
with our blog post series (eventually to be a fully fledged standalone
guide) on routing node operation[3]. Tooling also needs to improve to give
routing node operators better insight into their current level of
performance and efficacy of their capital allocation decisions.

> Compare lightningpowerusers.com which charges (1 msat + 5000 ppm),
> and seems to have significant usage, so there clearly is market tolerance
> for higher fees.

IIRC, the fees on that node are only that high due to user error by the
operator when setting their fees. `lnd` exposes fees on the command line
using the fixed point numerator which some find confusing. We'll likely add
another argument that allows users to specify their fees using basis
points (bps) or a plain old percentage.
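
(To make the units concrete: a fee rate of 5000 in the fixed-point
parts-per-million form is 5000 / 1,000,000 = 0.5%, i.e. 50 bps, versus the 1
ppm default most nodes ship with, which is how a slip in the fixed-point form
can land orders of magnitude away from the intended rate.)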

Independent of that, I don't think you can draw the conclusion that they
have "significant" usage, based on solely the number of channels they have.
That node has many channels due to the operator campaigning for users to
open channels with them on Twitter, as they provided an easy way to package
lnd for desktop users. A node having numerous channels doesn't necessarily
mean that they have significant usage, as it's easy to "paint the tape" with
on-chain transactions. What really matters is how effectively the node is
managed.

In terms of market signals, IMO the gradual rise of fees _above_ the current
widely used default is a strong signal as it will indicate a level of
maturation in the market. Preemptively raising defaults only adds noise as
then the advertised fees are less indicative of the actual market
conditions. Instead, we should (to promote a healthier network) educate
prospective routing node operators on best practices, provide analysis tools
they can use to make channel management and liquidity allocation decisions,
and leave it up to the market participants to converge on steady state
economically rational fees!

[1]: https://github.com/lightningnetwork/lnd/pull/3462
[2]:
https://github.com/ksedgwic/lndtool/blob/graphstats/lightning-fee-market.pdf
[3]:
https://blog.lightning.engineering/posts/2019/08/15/routing-quide-1.html


On Thu, Oct 10, 2019 at 7:50 PM Rusty Russell  wrote:

> Hi all,
>
> I've been looking at the current lightning network fees, and it
> shows that 2/3 are sitting on the default (1000 msat + 1 ppm).
>
> This has two problems:
> 1. Low fees are now a negative signal: defaults actually indicate
>lower reliability, and routing gets tarpitted trying them all.
> 2. There's no meaningful market signal in fees, since you can't
>drop much below 1ppm.
>
> Compare 

Re: [Lightning-dev] CVEs assigned for lightning projects: please upgrade!

2019-09-10 Thread Olaoluwa Osuntokun
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

We've confirmed instances of the CVE being exploited in the wild.  If you’re
not on the following versions of either of these implementations (these
versions are fully patched), then you need to upgrade now to avoid risk of
funds loss:
* lnd v0.7.1 -- anything 0.7 and below is vulnerable
* c-lightning v0.7.1 -- anything 0.7 and below is vulnerable
* eclair v0.3.1 -- anything 0.3 and below is vulnerable

We'd also like to remind the community that we still have limits in place on
the network to mitigate widespread funds loss, and please keep that in mind
when putting funds onto the network at this early stage.

If you have trouble updating for whatever reason, feel free to reach out to
the developers of the respective implementations referenced above.


On Fri, Aug 30, 2019 at 2:34 AM Rusty Russell  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Security issues have been found in various lightning projects which
> could cause loss of funds.
>
> Full details will be released in 4 weeks (2019-09-27), please uprade
> well before then.
>
> Effected releases:
>
> CVE-2019-12998 c-lightning < 0.7.1
> CVE-2019-12999 lnd < 0.7
> CVE-2019-13000 eclair <= 0.3
>
> Cheers,
> Rusty.




Re: [Lightning-dev] Improving Lightning Network Pathfinding Latency by Path Splicing and Other Real-Time Strategy Game Techniques

2019-08-02 Thread Olaoluwa Osuntokun
> I found out recently (mid-2019) that mainnet Lightning nodes take an
> inordinate amount of time to find a route between themselves and an
> arbitrary payee node.
> Typical quotes suggested that commodity hardware would take 2 seconds to
> find a route

Can you provide a reproducible benchmark or further qualify this number (2
seconds)? Not commenting on the rest of this email as I haven't read the
rest of it yet, but this sounds like just an issue of engineering
optimization. AFAIK, most implementations are using unoptimized on-disk
representations of the graph, do minimal caching, and really haven't made
any sort of push to optimize these hot spots. There's no reason that finding
a path in a graph with tens of thousands of edges should take _2 seconds_.

Beyond that, to my knowledge, all implementations other than lnd implement a
very rudimentary time based edge/node pruning in response to failures. I
call it rudimentary, as it just waits a small period of time, then forgets
all its past path finding history. As a result, each attempt will include
nodes that have been known to be offline, or nonoperational channels,
effectively doing redundant work each attempt.

The latest version of our software has moved beyond this [1], and will
factor in past path finding attempts into its central "mission control",
allowing it to learn from each attempt, and even import existing state into
its path finding memory (essentially a confidence factor that takes into
account the cost of a failed attempt mapped into a scalar weight we can use
for comparison purposes). This is just an initial first step, but we've seen
a significant improvement with just a _little_ bit more intelligence in our
path finding heuristics. We should take care to not get distracted by more
distant "pie in the sky" like ideas (since many of them are half-baked),
lest we ignore these low hanging engineering fruits and incremental
algorithmic updates.

> This is concerning, of course, since we would like the public Lightning
> Network to grow a few more orders of magnitude.

I'd say exactly _how large_ the _public_ graph needs to be is an open
question. Most of the public channels in the network today are
extremely underutilized, with capital largely being over-allocated. Based on
our active network analysis, only a few hundred nodes are actively
managing their channels effectively, allowing them to be effective routing
nodes.

moar channels != better

As a result, clients today are able to ignore a _majority_ of the known
graph, and still have their payment attempts be successful, as they'll
ignore all the routing nodes that aren't actually walking the walk (proper
channel management).

-- Laolu

[1]: https://github.com/lightningnetwork/lnd/pull/2802


On Wed, Jul 31, 2019 at 6:52 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Introduction
> 
>
> I found out recently (mid-2019) that mainnet Lightning nodes take an
> inordinate amount of time to find a route between themselves and an
> arbitrary payee node.
> Typical quotes suggested that commodity hardware would take 2 seconds to
> find a route, then take a few hundred milliseconds for the actual payment
> attempt.
> With the help of Rene Pickhardt I was able to confirm that indeed, much of
> payment latency actually arises from the pathfinding algorithm and not the
> actual payment-over-routes.
>
> This is concerning, of course, since we would like the public Lightning
> Network to grow a few more orders of magnitude.
> Given that the best pathfinding search algorithms will be O(n log n) on
> the size of the network, we need to consider how to speed up the finding of
> routes.
>
> `permuteroute` and Faster Next Pathfinding Attempts
> ===
>
> As I was collaborating with Rene, JIT-Routing was activated in my core
> processing hardware.
>
> As I was contemplating this problem, I considered, that JIT-Routing would
> (ignoring fees) effectively "reroute" the original route around the failing
> channel.
>
> In particular, JIT-Routing is advantageous for these reasons:
>
> 1.  There is no need to report back the failure to the original payer.
> 2.  The node performing JIT-Routing has accurate information about its
> channel balances and which of its outgoing channels would be most effective
> to route through instead of that indicated by the original payer.
> It also knows of non-published channels it has.
> 3.  Searching for a circular rebalancing route could be done much quicker
> since the JIT-Routing node could restrict itself to looking only in its
> friend-of-friend network, and simply fail if it could not find a circular
> rebalancing route quickly in the reduced search space.
>
> The first two advantages cannot be emulated by the original payer.
>
> However, I realized that the third advantage *could* be emulated by the
> original payer.
> This is advantageous as the payer node can implement 

[Lightning-dev] Extending Associated Data in the Sphinx Packet to Cover All Payment Details

2019-02-07 Thread Olaoluwa Osuntokun
Hi y'all,

I'm not sure how good defenses are on implementations other than lnd, but
all implementations *should* be keeping a Sphinx reply cache of the past
shared secrets they know of [1]. If a node comes across an identical shared
secret of that in the cache, then they should reject that packet. Otherwise,
it's possible for an adversary to inject a stale packet back into the
network in order to observe the propagation of the packet through the
network. This is referred to as a "replay" attack, and is a de-anonymization
vector.

Typically mix nets enforce some sort of session lifetime identifier to allow
nodes to garbage collect their old shared secrets state, otherwise it grows
indefinitely. As our messages are actually payments with a clear expiration
date (the absolute CLTV), we can use this as the lifetime of a payment
circuit session. The sphinx packet construction allows some optional
plaintext data to be authenticated alongside the packet. In the current
protocol we use this to bind the payment hash along with the packet. The
rationale is that in order for me to accept the packet, the attacker must
use the _same_ payment hash.  If the pre-image has already been revealed,
then the "victim" can instantly pull the payment, attaching a  cost to a
replay attempt.

However, since the CLTV isn't also authenticated, then it's possible to
attempt to inject a new HTLC with a fresher CLTV. If the node isn't keeping
around all pre-images, then they might forward this since it passes the
regular expiry tests. If we instead extend the associated data payload to
cover the CLTV as well, then this binds the adversary to using the same CLTV
details. As a result, the "victim" node will reject the HTLC since it has
already expired. Continuing down this line, if we progressively add more
payment details, for example the HTLC amount, then this forces the adversary
to commit the same amount as the original HTLC, potentially making the
probing vector more expensive (as they're likely to lose the funds on
attempt).

If this were to be deployed, then we can do it by using a new packet version
in the Sphinx packet. Nodes that come across this new version (signalled by
a global feature bit) would then know to include the extra information in
the AD for their MAC check. While we're at it, we should also actually
*commit* to the packet version. Right now nodes can swap out the version to
anything they want, potentially causing another node to reject the packet.
This should also be added to the AD to ensure the packet can't be modified
without another node detecting it.
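
To sketch what extending the associated data could look like in practice
(the field layout and helper names here are purely illustrative, not a spec
proposal), both the sender and each hop would feed the same payment details,
including the packet version, into the per-hop MAC:

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/binary"
    "fmt"
)

// assocData serializes the payment details both the sender (when building
// the packet) and each hop (when checking the MAC) commit to. Field set and
// ordering are illustrative only.
func assocData(version byte, payHash [32]byte, cltv uint32, amtMsat uint64) []byte {
    buf := make([]byte, 1+32+4+8)
    buf[0] = version
    copy(buf[1:33], payHash[:])
    binary.BigEndian.PutUint32(buf[33:37], cltv)
    binary.BigEndian.PutUint64(buf[37:45], amtMsat)
    return buf
}

// packetMAC computes the per-hop MAC over the routing info plus the
// associated data, keyed by the hop's derived mu key.
func packetMAC(muKey, routingInfo, ad []byte) [32]byte {
    mac := hmac.New(sha256.New, muKey)
    mac.Write(routingInfo)
    mac.Write(ad)
    var out [32]byte
    copy(out[:], mac.Sum(nil))
    return out
}

func main() {
    var payHash [32]byte
    muKey := []byte("per-hop mu key (placeholder)")
    routingInfo := []byte("onion routing info (placeholder)")

    orig := packetMAC(muKey, routingInfo, assocData(0x00, payHash, 700123, 250000))

    // An adversary re-injecting the packet with a fresher CLTV now produces
    // a MAC mismatch, so the hop rejects the replayed HTLC.
    replayed := packetMAC(muKey, routingInfo, assocData(0x00, payHash, 700523, 250000))
    fmt.Println("MACs equal?", orig == replayed) // false
}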

Longer term, we may end up with _all_ payment details in the Sphinx packet.
The only things remaining outside, in the update_add_htlc message itself, would
be link-level details such as the HTLC ID.

Thoughts?

[1]:
https://github.com/lightningnetwork/lightning-onion/blob/master/replaylog.go

-- Laolu
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] SURBs as a Solution for Protocol-Level Payment ACKs

2019-02-07 Thread Olaoluwa Osuntokun
Hi y'all,

Recently we've started to do more design work related to the Sphinx packet
(EOB format, rendezvous protocol). This prompted me to re-visit the original
Sphinx paper to refresh my memory w.r.t some of the finer details of the
protocol.  While I was re-reading the paper, I realized that we may be able
to use SURBs (single-use-reply-blocks) to implement a "payment ACK" for
each sent HTLC.

(it's worth mentioning that switching to HORNET down the line would solve
this problem as well since the receiver already gets a multi-use backwards
route that they can use to send information back to the sender)

Right now HTLC routing is mainly a game of "send and hope it arrives", as
you have no clear indication of the _arrival_ of an HTLC at the destination.
Instead, you only receive a protocol level message if the HTLC failed for
w/e reason, or if it was successfully redeemed.  As part of BOLT 1.1, it was
agreed upon that we should implement some sort of "payment ACK" feature. A
payment ACK scheme is strongly desired as it:

  * Allows the sender to actually know when a payment has reached the
receiver which is useful for many higher level protocols. Atm, the
sender is unable to distinguish an HTLC being "black holed" from one
that's actually reached the receiver, who may just be holding on to it.
  * AMP implementations would be aided by being able to receive feedback on
successfully routed splits. If we're able to have the receiver ACK each
partial payment, then implementations can more aggressively split
payments as they're able to gain assurance that the first 2 BTC of 5
total have actually reached the receiver, and weren't black holed.
  * Enforcing and relying on ACKs may also thwart silly games receivers
might play, claiming that the HTLC "didn't actually arrive".

Some also call this feature a "soft error" as a possible implementation
might be to re-use the existing onion error protocol we've deployed today.  For
reference, in order to send errors back along the route in a way that
doesn't reveal the sender of the HTLC to the receiver (or any of the
intermediate nodes) we re-use the shared secret each hop has derived, and
onion wrap a MAC'd error to the sender. Each hop can't actually check that
they've received a well formed error, but the sender is able to attribute an
error to a node in the route based on which shared secret they're able to
check the MAC with.

The original Sphinx packet format has a way for the receiver to send a
message back to the sender. This was originally envisioned to allow the
receiver to send a reply email/message back to the sender without knowing
who they were, and also in a manner that was bit-wise indistinguishable from
a regular forwarded packet. This is called a SURB or "single use reply
block". A SURB is composed of: a pre-crafted sphinx packet for the
"backwards route" (which can be distinct from the forwards route), the first
hop of the backwards route, and finally a symmetric key to use when
encrypting the reply.
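
In code form, a SURB could be represented roughly as follows (field names are
mine, purely for illustration):

```go
package surb

// SURB is the material the sender hands to the receiver so the
// receiver can reply without learning who the sender is. Field names
// here are purely illustrative.
type SURB struct {
	// ReplyPacket is a pre-crafted Sphinx packet for the backwards
	// route, which may be entirely distinct from the forward route.
	ReplyPacket []byte

	// FirstHop identifies the first node of the backwards route, i.e.
	// where the receiver should hand the reply packet.
	FirstHop [33]byte

	// ReplyKey is the symmetric key the receiver uses to encrypt the
	// reply payload before sending it along the backwards route.
	ReplyKey [32]byte
}
```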

When we more or less settled on using Sphinx, we started to remove things
that we didn't have a clear use for at the time. Two things that were
removed were the original end-to-end payload, and also the SURB. Removing
the payload made the packet size smaller, and it didn't seem realistic to
give _each_ hop a SURB to send a reply back.

In order to implement payment ACKs, we can have the sender craft a SURB (for
the ACK), and mark the receipt of the SURB as the payment ACK itself.
Creating and processing a SURB is identical to the regular HTLC packets we
use today. As a result, the code impact to the existing sphinx packet logic
is minimal. We'd then also re-introduce the e2e payload so we can carry the
SURB in the forward direction (HTLC add). The backwards packet would also
have a payload of random bytes with the same size as a regular packet to
make them look identical on the wire.
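
A small sketch of that padding step on the receiver's side (the fixed payload
size and function shape are assumptions, and the actual encryption under the
SURB's reply key is elided):

```go
package surb

import "crypto/rand"

// fixedPayloadSize is whatever uniform payload size the protocol
// settles on; the value here is only a placeholder.
const fixedPayloadSize = 1024

// BuildAckPayload pads the (already encrypted) ACK data with random
// bytes up to the fixed payload size, so the backwards packet carrying
// the ACK is bit-wise indistinguishable from a forward packet on the
// wire.
func BuildAckPayload(encryptedAck []byte) ([]byte, error) {
	payload := make([]byte, fixedPayloadSize)
	n := copy(payload, encryptedAck)
	if _, err := rand.Read(payload[n:]); err != nil {
		return nil, err
	}
	return payload, nil
}
```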

This payload can further be put to use in order to implement streaming or
subscription payments. Since we must add a payload in order for send and
reply packets to look the same, we can also piggyback some useful additional
data. Each time a payment is sent, the receiver can use the extra payload to
stack on details such as:
  * A new invoice to pay for the metered service being paid for.
  * An invoice along with a deadline for when this must be paid, lest the
subscription service expire.
  * Details of lightning-native API
  * etc, etc

IMO, this approach is better than a potential client-server payment
negotiation protocol as it doesn't require any additional servers alongside
the node, maintains sender anonymity, and doesn't rely on any sort of
PKI.

From the perspective of packet analysis, errors today are identifiable due
to the packet size (though we do pad them out to avoid being able to
distinguish some errors from others on the wire). SURBs on the other hand,
have the same profile as regular HTLC adds since they use the 

Re: [Lightning-dev] Network probes

2019-01-18 Thread Olaoluwa Osuntokun
Hi Andrea,

> This saves the receiving node from doing a database lookup

Nodes can and eventually should start using bloom filters to avoid most
database lookups for incoming payment hashes. The false positive rate can be
set to a very low value as the bloom filter doesn't need to be transmitted, and
can even be stored persistently. As an optimization, nodes may opt to
maintain a series of hierarchical bloom filters, with the highest tier
filter containing only payment hashes for non-expired invoices. Clever bloom
filter usage by nodes would allow them to avoid almost all database lookups
for incoming unknown payment hashes (probes or not).
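
A minimal sketch of that tiered check (not lnd's implementation, just the
shape of the idea): a hit in any tier warrants a database lookup, while a miss
in every tier means the hash is certainly unknown and can be failed without
touching disk.

```go
package invoicefilter

// Filter is any set-membership structure with false positives but no
// false negatives (e.g. a bloom filter); the concrete implementation
// is left out of this sketch.
type Filter interface {
	Contains(paymentHash [32]byte) bool
}

// TieredFilter holds a series of filters, ordered highest tier first,
// with the top tier containing only the hashes of non-expired
// invoices.
type TieredFilter struct {
	tiers []Filter
}

// MightBeOurs reports whether an incoming payment hash warrants a
// database lookup at all: true on any tier hit, false only when every
// tier misses (so the hash is definitely not one of ours).
func (t *TieredFilter) MightBeOurs(paymentHash [32]byte) bool {
	for _, f := range t.tiers {
		if f.Contains(paymentHash) {
			return true
		}
	}
	return false
}
```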

> we can improve it by using the `padding` of the `per_hop` field of the
> onion;

I recently implemented a type of spontaneous payment [1] that works today in
the wild (gotta love dat End to End Principle). A requirement for this was
fully functional EOB packing logic at the sender, and multi-packet
unwrapping at the receiver, the modified packet construction/processing can
be found here [2]. Using the terminology of the current draft code, all that
would need to be done is specify an EOB type for this special probe type of
HTLC. As it doesn't need any additional data, it only consumes a single
pivot hop and doesn't require the route to be extended.

Have you seen aj's prior post [3] on this front (making probe HTLCs
identifiable to the receiver, and allowing intermediate nodes to drop them)?
Allowing intermediate nodes to identify probe HTLCs has privacy
implications, as all of a sudden we've created two path-level classes of
HTLCs. On the other hand, this may help with QoS scheduling on the
forwarding plane for nodes: they may want to prioritize actual payments over
probes, with some nodes opting to not forward probes altogether.
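
For concreteness, a hypothetical final-hop check in the spirit of Andrea's
padding-bit idea (quoted below); the encoding is invented purely for
illustration, and a real proposal would likely define a proper EOB/per-hop
record type instead:

```go
package probe

// IsProbe reports whether the final hop's per_hop padding marks this
// HTLC as a probe. In this hypothetical encoding the least significant
// bit of the first padding byte is the probe flag, letting the final
// node fail the HTLC immediately without any invoice database lookup.
func IsProbe(perHopPadding []byte) bool {
	return len(perHopPadding) > 0 && perHopPadding[0]&0x01 == 1
}
```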

[1]: https://github.com/lightningnetwork/lnd/pull/2455
[2]: https://github.com/lightningnetwork/lightning-onion/pull/31
[3]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001554.html

-- Laolu


On Fri, Jan 18, 2019 at 8:47 AM Andrea RASPITZU 
wrote:

> Good morning list,
>
> I know there have been discussion around how (and if) we should probe the
> network to check for the liveliness of a path before sending out the
> payment. Currently we issue a payment with a random payment_hash that is
> not redeemable by anyone; if the destination (and the path) is `lively` it
> will respond with an error. Assuming we do want to probe, and it makes sense to
> assume so because it can't be prevented, we can improve it by using the
> `padding` of the `per_hop` field of the onion; with a single bit of the
> padding we can tell the final node that this is a probe and not an actual
> payment. This saves the receiving node from doing a database lookup
> (checking if it has the preimage for such a payment_hash) and it does not
> reveal anything to intermediate nodes, we don't want them to change the
> behavior if they know it's a probe and not an actual payment. I believe
> probing can help reduce the error rate of payments (and even detect stale
> channels?) and I'm looking forward to getting some feedback, and submitting a
> draft.
>
> Cheers, Andrea.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Mandatory "d" or "h" UX issues

2019-01-14 Thread Olaoluwa Osuntokun
It isn't mandatory. It can be left blank; none of the existing wallets
require users to input a description when they make an invoice.

On Mon, Jan 14, 2019, 3:28 PM Francis Pouliot wrote:

> I'm currently in the process of building the Lightning Network payout
> feature which will allow users to purchase bitcoins with fiat and have the
> coins sent to them via LN rather than on-chain. The main issue I'm facing is
> ensuring that recipients generate the correct Bolt11 invoice.
>
> Having the "d" and "h" fields mandatory creates a UX issue for Bitcoin
> services that are performing payouts/withdrawals (in the absence of a
> widely adopted payment protocol).
>
> It seems to me that the design of Bolt11 may have been biased towards
> merchants, because normally merchants, as recipients, decide on what the
> invoice is going to be and the sender doesn't have a choice but to conform
> (since the recipient is the service provider).
>
> But for LN payouts (e.g. withdrawal from an exchange or a poker site), the
> Sender is the service provider, and it is the Sender who will be creating
> (most likely programmatically) the terms of the payment. However, this means
> that the Sender must be able to communicate to his end-user exactly what
> type of Bolt11 invoice he wants the user to create. This means, in most
> cases, that the user will have to manually enter some fields in his wallet.
> And if the content doesn't match, there will be a payment failure.
>
> Here is how I picture the ux issues taking place.
>
>1. User goes on my app to buy Bitcoin with fiat, and opts to be paid
>out via LN rather than on-chain BTC.
>2. My app will tell him: "make an invoice with the following:
>msatoshi, description.
>3. He will go in his wallet and type msatoshi, description.
>4. It's likey he won't pay too much attention, make a typo in
>description, leave it blank, write his own description, etc.
>5. When my app tries to pay, we of course have to decode his bolt11
>first.
>6. We have to have some logic that will compare the "h" or "d" that we
>instructed him to create and the "h" or "d" that we got from the decoded
>bolt 11 (which is an extra hassle for devs)
>7. In the cases there they are not the same, we need to instruct the
>user to create a new bolt 11 invoice because the one he created was not
>correct.
>
> What this ends up doing is create a situation where the service provider
> depends on his user to create a correct invoice before sending him the
> funds, creates an added (unnecessary) requirement for communication, and
> leads to lower payment success rates and a likely higher abandonment rate.
>
> Question: what is the logic behind making "d" and "h" mandatory? I think
> business logic should be left to Bitcoin businesses.
>
> Can we simply not make "d" or "h" mandatory without breaking anything?
>
> TL;DR users already have trouble entering the correct amount of BTC when
> paying invoices that aren't BIP21, so I am afraid that there will be tons
> of issues with them writing down the correct description.
>
> P.s. I'm using c-lightning right now and would like to not have to switch
>
> P.s.s. this will likely be fixed with a standardised payment protocol but
> addressing this issue seems a lower hanging fruit.
>
> Issue: https://github.com/lightningnetwork/lightning-rfc/issues/541
>
> Thanks for your time,
>
> Francis
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Approximate assignment of option names: please fix!

2018-11-16 Thread Olaoluwa Osuntokun
> OG AMP is inherently spontaneous in nature, therefore invoice might not
> exist to put the feature on.

That is incorrect. One can use an invoice along with AMP as is, in order to
tag a payment. As an example, I want to deposit to an exchange, so I get an
invoice from them. I note that the invoice has a special (new) field that
indicates they accept AMP payments, and include an 8-byte identifier. Each of
the payment shards I send over to the exchange will carry this 8-byte
identifier. Inclusion of this identifier signals to them to credit my account
with the deposit once all the payments arrive. This generalizes to any case
where a service or good is to be dispersed once a payment is received.
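
A sketch of the sender's side of this (the 8-byte identifier field is the
hypothetical invoice field described above; real AMP share construction and
route selection are omitted):

```go
package amp

// PaymentID is the 8-byte identifier lifted from the invoice's
// hypothetical new field; every shard of the AMP payment carries the
// same identifier so the receiver can credit the right account once
// all shards arrive.
type PaymentID [8]byte

// Shard is one partial payment of the overall AMP payment.
type Shard struct {
	AmtMsat uint64
	ID      PaymentID
}

// SplitPayment splits totalMsat into n roughly equal shards, each
// tagged with the same identifier. Real splitting logic would adapt
// shard sizes to available routes and channel limits.
func SplitPayment(totalMsat, n uint64, id PaymentID) []Shard {
	shards := make([]Shard, 0, n)
	remaining := totalMsat
	for i := uint64(0); i < n; i++ {
		amt := remaining / (n - i)
		remaining -= amt
		shards = append(shards, Shard{AmtMsat: amt, ID: id})
	}
	return shards
}
```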

-- Laolu


On Mon, Nov 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good Morning Rusty,
>
> OG AMP is inherently spontaneous in nature, therefore invoice might not
> exist to put the feature on.
> Thus it should be global feature.
>
> Do we tie spontaneous payment to OG AMP or do we support one which is
> payable by base AMP or normal singlepath?
>
> Given that both `option_switch_ephkey` and `option_og_amp` require
> understanding extended onion packet types, would it not be better to merge
> them into `option_extra_onion_packet_types`?
>
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, November 13, 2018 7:49 AM, Rusty Russell <
> ru...@blockstream.com> wrote:
>
> > Hi all,
> >
> > I went through the wiki and made up option names (not yet
> > numbers, that comes next). I re-read our description of global vs local
> > bits:
> >
> > The feature masks are split into local features (which only
> > affect the protocol between these two nodes) and global features
> > (which can affect HTLCs and are thus also advertised to other
> > nodes).
> >
> > You might want to promote your local bit to a global bit so you can
> > advertize them (wumbo?)? But if it's expected that every node will
> > eventually support a bit, then it should probably stay local.
> >
> > Please edit your bits as appropriate, so I can assign bit numbers soon:
> >
> >
> https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
> >
> > Thanks!
> > Rusty.
> >
> > Lightning-dev mailing list
> > Lightning-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Wumbological local AND global features

2018-11-15 Thread Olaoluwa Osuntokun
I realized the other day that the wumbo bit should also likely encompass
wumbo payments. What good is a wumbo channel that doesn't also allow wumbo
payments? Naturally if the bit is signalled globally, then this should also
signal the willingness of the node to forward larger payments up to their
max_htlc limit within the channel_update for that link.

On a similar note, I was reviewing the newer-ish section of the spec
concerning the optional max_htlc value. I noticed an inconsistency: it states
the value should be below the max capacity of the channel, but makes no
reference to the current (pre-wumbo) _max HTLC limit_. As a result, as it
reads now, one may interpret signalling of the optional field as eligibility
to route wumbo payments in a pre-wumbo channel world.

-- Laolu


On Tue, Nov 13, 2018 at 3:34 PM Rusty Russell  wrote:

> ZmnSCPxj via Lightning-dev 
> writes:
> > Thus, I propose:
> >
> > * The local feature bit `option_i_wumbo_you_wumbo`, which indicates that
> the node is willing to wumbo with its counterparty in the connection.
> > * The global feature bit `option_anyone_can_wumbo`, which indicates that
> the node is willing to wumbo with any node.
>
> I think we need to name `option_anyone_can_wumbo` to `option_wumborama`?
>
> Otherwise, this looks excellent.
>
> Thanks,
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Packet switching via intermediary rendezvous node

2018-11-15 Thread Olaoluwa Osuntokun
> If I'm not mistaken it'll not be possible for us to have spontaneous
> ephemeral key switches while forwarding a payment

If this _was_ possible, then it seems that it would allow nodes to create
unbounded path lengths (looks to other nodes as a normal packet), possibly
by controlling multiple nodes in a route, thereby sidestepping the 20 hop
limit altogether. This would be undesirable for many reasons, the most dire of
which being the ability to further amplify null-routing attacks.

-- Laolu

On Mon, Nov 12, 2018 at 8:06 PM Christian Decker 
wrote:

> Hi ZmnSCPxj,
>
> like I mentioned in the other mailing thread we have a minor
> complication in order get rendez-vous working.
>
> If I'm not mistaken it'll not be possible for us to have spontaneous
> ephemeral key switches while forwarding a payment. Specifically either
> the sender or the recipient have to know the switchover points in their
> respective parts of the onion. Otherwise it'll not be possible to cover
> the padding in the HMAC, for the same reason that we couldn't meet up
> with the same ephemeral key at the rendez-vous point.
>
> Sorry about not noticing this before.
>
> Cheers,
> Christian
>
> ZmnSCPxj via Lightning-dev 
> writes:
> > Good morning list,
> >
> > Although, packet switching was part of the agenda, we decided, that we
> would defer this to some later version of BOLT spec.
> >
> > Interestingly, some sort of packet switching becomes possible, due to
> the below features we did not defer:
> >
> > 1.  Multi-hop onion packets (i.e. s/realm/packettype/)
> > 2.  Identify "next" by node-id instead of short-channel-id (actually, we
> solved this by "short-channel-id is not binding" and next hop is identified
> by short-channel-id still).
> > 3.  Onion ephemeral key switching (required by rendez-vous routing).
> >
> > ---
> >
> > Suppose we define the below packettype (notice below type number is
> even, but I am uncertain how "is OK to be odd", is appropriate for this):
> >
> > packettype 0: same as current realm 0
> > packettype 2: ephemeral key switch (use ephemeral key in succeeding
> 65-byte packet)
> > packettype 4: identify next node by node-id on succeeding 65-byte packet
> >
> > Suppose I were to receive a packettype 0 in an onion.  It identifies a
> short-channel-id.  Now suppose this particular channel has no capacity.  As
> I showed in thread " Link-level payment splitting via intermediary
> rendezvous nodes"
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001547.html,
> it is possible, that I can route it via some other route *composed of
> multiple channels*, by using packettype 4 at the end of this route to
> connect it to the rest of the onion I receive.
> >
> > However, in this case, in effect, the short-channel-id simply identifies
> the "next" node along this route.
> >
> > Suppose we also identify a new packettype (packettype 4) where the
> > "next" node is identified by its node-id.
> >
> > Let us make the below scenarios.
> >
> > 1.  Suppose the node-id so identified, I have a channel with this node.
> And suppose this channel has capacity.  I can send the payment directly to
> that node.  This is no different from today.
> > 2.  Suppose the node-id so identified, I have a channel with this node.
> But this channel has not capacity.  However, I can look for alternate
> route.  And by using rendez-vous feature "switch ephemeral key" I can
> generate a route that is multiple hops, in order to reach the identified
> node-id, and connect the rest of the onion to this.  This case is same as
> if the node is identified by short-channel-id.
> > 3.  Suppose the node-id so identified, I have not a channel with this
> node.  However, I can again look for alternate route.  Again, by using
> "switch ephemeral key" feature, I can generate a route that is multiple
> hops, in order to reach the identified node-id, and again connect the rest
> of the onion to this.
> >
> > Now, the case 3 above, can build up packet switching.  I might have a
> routemap that contains the destination node-id and have an accurate route
> through the network, and identify the path directly to the next node.  If
> not, I could guess/use statistics that one of my peers is likely to know
> how to route to that node, and also forward a packettype 4 to the same
> node-id to my peer.
> >
> > This particular packet switching, also allows some uncertainty about the
> destination.  For instance, even if I wish to pay CJP, actually I make an
> onion with packettype 4 Rene, packettype 4 CJP. packettype 0 HMAC=0.  Then
> I send the above onion (appropriately layered-encrypted) to my direct peer
> cdecker, who attempts to make an effort to route to Rene.  When Rene
> receives it, it sees packettype 4 CJP, and then makes an effort to route to
> CJP, who sees packettype 0 HMAC=0 meaning CJP is the recipient.
> >
> > Further, this is yet another use of the switch-ephemeral-key packettype.
> >
> > Thus:
> >
> > 1.  It allows packet 

Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-08 Thread Olaoluwa Osuntokun
Was approaching this more so from the angle of a new node with no existing
channels seeking to bootstrap connections to the network.

-- Sent from my Spaceship

On Fri, Nov 9, 2018, 9:10 AM Anthony Towns wrote:

> On Thu, Nov 08, 2018 at 05:32:01PM +1030, Olaoluwa Osuntokun wrote:
> > > A node, via their node_announcement,
> > Most implementations today will ignore node announcements from nodes that
> > don't have any channels, in order to maintain the smallest routing set
> > possible (no zombies, etc). It seems for this to work, we would need to
> undo
> > this at a global scale to ensure these announcements propagate?
>
> Having incoming capacity from a random node with no other channels doesn't
> seem useful though? (It's not useful for nodes that don't have incoming
> capacity of their own, either)
>
> Cheers,
> aj
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-07 Thread Olaoluwa Osuntokun
> A node, via their node_announcement,

Most implementations today will ignore node announcements from nodes that
don't have any channels, in order to maintain the smallest routing set
possible (no zombies, etc). It seems for this to work, we would need to undo
this at a global scale to ensure these announcements propagate?

Aside from the incentives for leeches to arise that accept the fee then
insta close (they just drain the network and then no one uses this), I think
this is a dope idea in general! In the past, I've mulled over similar
constructions under a general umbrella of "Channel Liquidity Markets" (CLM),
though via extra-protocol negotiation.

-- Laolu


On Wed, Nov 7, 2018 at 2:38 PM lisa neigut  wrote:

> Problem
> 
> Currently it’s difficult to reliably source inbound capacity for your
> node. This is incredibly problematic for vendors and nodes hoping to setup
> shop as a route facilitator. Most solutions at the moment require an
> element of out of band negotiation in order to find other nodes that can
> help with your capacity needs.
>
> While splicing and dual funding mechanisms will give some relief by
> allowing for the initial negotiation to give the other node an opportunity
> to put funds in either at channel open or after the fact, the problem of
> finding channel liquidity is still left as an offline problem.
>
> Proposal
> =
> To solve the liquidity discovery problem, I'd like to propose allowing
> nodes to advertise initial liquidity matching. The goal of this proposal
> would be to allow nodes to independently source inbound capacity from a
> 'market' of advertised liquidity rates, as set by other nodes.
>
> A node, via their node_announcement, can advertise that they will match
> liquidity, and a fee rate that they will provide, to any incoming
> open_channel request that requests it.
>
> `node_announcement`:
> new feature flag: option_liquidity_provider
> data:
>  [4 liquidity_fee_proportional_millionths] (option_liquidity_provider) fee
> charged per satoshi of liquidity added at channel open
>  [4 liquidity_fee_base_msat] (option_liquidity_provider) base fee charged
> for providing liquidity at channel open
>
> `open_channel`:
> new feature flag (channel_flags): option_liquidity_buy [2nd least
> significant bit]
> push_msat: set to fee payment for requested liquidity
> [8 liquidity_msat_request]: (option_liquidity_buy) amount of dual funding
> requested at channel open
>
> `accept_channel`:
> tbd. hinges on a dual funding proposal for how second node would send
> information about their funding input.
>
> If a node cannot provide the liquidity requested in `open_channel`, it
> must return an error.
> If the amount listed in `push_msat` does not cover the amount of liquidity
> provided, the liquidity provider node must return an error.
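
(A sketch of how an opener might compute the `push_msat` fee from the two
advertised fields above; the msat denomination and truncating rounding are my
assumptions, since the proposal leaves units and rounding open.)

```go
package liquidity

// LiquidityFeeMsat computes the fee an opener would owe (and set as
// push_msat) for the requested inbound liquidity, mirroring the
// base + proportional structure of the advertised fields. Units and
// rounding are assumptions: fees are treated as msat-denominated and
// fractional millionths are truncated.
func LiquidityFeeMsat(liquidityMsatRequest uint64,
	feeBaseMsat, feeProportionalMillionths uint32) uint64 {

	proportional := liquidityMsatRequest *
		uint64(feeProportionalMillionths) / 1_000_000
	return uint64(feeBaseMsat) + proportional
}
```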
>
> Errata
> ==
> It's an open question as to whether or not a liquidity advertising node
> should also include a maximum amount of liquidity that they will
> match/provide. As currently proposed, the only way to discover if a node
> can meet your liquidity requirement is by sending an open channel request.
>
> This proposal depends on dual funding being possible.
>
> Should a node be able to request more liquidity than they put into the
> channel on their half? In the case of a vendor who wants inbound capacity,
> capping the liquidity request allowed seems unnecessary.
>
> Conclusion
> ===
> Allowing nodes to advertise liquidity paves the way for automated node
> re-balancing. Advertised liquidity creates a market of inbound capacity
> that any node can take advantage of, reducing the amount of out-of-band
> negotiation needed to get the inbound capacity that you need.
>
>
> Credit to Casey Rodamor for the initial idea.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Improving payment UX with low-latency route probing

2018-11-06 Thread Olaoluwa Osuntokun
Hi Fabrice,

I think HORNET would address this rather nicely!

During the set up phase (which uses Sphinx), the sender is able to get a
sense of if the route is actually "lively" or not, as the circuit can't be
finalized if all the nodes aren't available. Additionally, during the set up
phase, the sender can drop a unique payload to each node. In this scenario,
it may be the amount range the node is looking to send over this circuit.
The intermediate nodes then package up a "Forwarding Segment" (FS) which
includes a symmetric key to use for their portion of the hop, and can also
be extended to include fee information. If this set up phase is payment
value aware, then each node can use a private "fee function" that may take
into account the level of congestion in their channels, or other factors.
This would differ from the current approach in that this fee schedule need
not be communicated to the wider network, only those wishing to route across
that link.

Another cool thing that it would allow is the ability to receive a
protocol-level payment ACK. This may be useful when implementing AMP, as it
would allow the sender to know exactly how many satoshis have arrived at the
other side, adjusting their payment sharding accordingly. Nodes on either
side of the circuit can then also use the data forwarding phase to exchange
payment hashes, perform cool zkcp set up protocols, etc, etc.

The created circuits can actually be re-used across several distinct
payments. In the paper, they use a TTL for each circuit; in our case, we can
use a block height, after which all nodes should reject attempted data
forwarding attempts. A notable change is that each node no longer needs to
maintain per-circuit state as we do now with Sphinx. Instead, the packets
that come across contain all the information required for forwarding (our
current per-hop payload). As a result, we can eliminate the asymmetric
crypto from the critical forwarding path!

Finally, this would let nodes easily rotate their onion keys to achieve
forward secrecy during the data phase (but not the set up phase), as in the
FS, they essentially key-wrap a symmetric key (using the derived shared
secret for that hop) that should be used for that data forwarding phase.

There're a number of other cool things integrating HORNET would allow;
perhaps a distinct thread would be a more appropriate place to extol the
many virtues of HORNET ;)

-- Laolu

On Thu, Nov 1, 2018 at 3:05 PM Fabrice Drouin 
wrote:

> Context
> ==
>
> Sent payments that remain pending, i.e. payments which have not yet
> been failed or fulfilled, are currently a major UX challenge for LN
> and a common source of complaints from end-users.
> Why payments are not fulfilled quickly is not always easy to
> investigate, but we've seen problems caused by intermediate nodes
> which were stuck waiting for a revocation, and recipients who could
> take a very long time to reply with a payment preimage.
> It is already possible to partially mitigate this by disconnecting
> from a node that is taking too long to send a revocation (after 30
> seconds for example) and reconnecting immediately to the same node.
> This way pending downstream HTLCs can be forgotten and the
> corresponding upstream HTLCs failed.
>
> Proposed changes
> ===
>
> It should be possible to provide a faster "proceed/try another route"
> answer to the sending node using probing with short timeout
> requirements: before sending the actual payment it would first send a
> "blank" probe request, along the same route. This request would be
> similar to a payment request, with the same onion packet formatting
> and processing, with the additional requirements that if the next node
> in the route has not replied within the timeout period (typically a
> few hundred milliseconds) then the current node will immediately send
> back an error message.
>
> There could be several options for the probe request:
> - include the same amounts and fee constraints as the actual payment
> request.
> - include no amount information, in which case we're just trying to
> "ping" every node on the route.
>
> Implementation
> 
>
> I would like to discuss the possibility of implementing this with a "0
> satoshi" payment request that the receiving node would generate along
> with the real one. The sender would first try to "pay" the "0 satoshi"
> request using the route it computed with the actual payment
> parameters. I think that it would not require many changes to the
> existing protocol and implementations.
> Not using the actual amount and fees means that the actual payment
> could fail because of capacity issues but as long as this happens
> quickly, and it should since we checked first that all nodes on the
> route are alive and responsive, it still is much better than “stuck”
> payments.
> And it would not help if a node decides to misbehave, but would not
> make things worse than they are now (?)
>
> Cheers,
> Fabrice
> 

Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-05 Thread Olaoluwa Osuntokun
> Mainly limitations of our descriptor language, TBH.

I don't follow...so it's a size issue? Or wanting to avoid "repeated"
fields?

> I thought about restarting the revocation sequence, but it seems like that
> only saves a tiny amount since we only store log(N) entries

Yeah that makes sense, forgetting the HTLC state is a big enough win in and
of itself.

>>> Splice Signing
>>
>> It seems that we're missing some fields here if we're to allow the
splicing
>> of inputs to be done in a non-blocking manner. We'll need to send two
>> revocation points for the new commitment: one to allow it to be created,
and
>> another to allow updates to proceed right after the signing is
completed. In
>> this case we'll also need to update both commitments in tandem until the
>> splicing transaction has been sufficiently confirmed.
>
>I think we can use the existing revocation points for both.

Yep, if we retain the existing shachain trees, then we just continue to
extend the leaves!

> We're basically co-generating a tx here, just like shutdown, except it's
> funding a new replacement channel.  Do we want to CPFP this one too?

It'd be nice to be able to also anchor down this splicing transaction given
that we may only allow a single outstanding splicing operation to begin
with. Being able to CPFP it (and later on provide arbitrary fee inputs)
allows be to speed up the process if I want to queue another operation up
right afterwards.

-- Laolu


On Wed, Oct 17, 2018 at 9:31 AM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > Hi Rusty,
> >
> > Happy to get the splicing train rolling!
> >
> >> We've had increasing numbers of c-lightning users get upset they can't
> >> open multiple channels, so I guess we're most motivated to allow
> splicing
> > of
> >> existing channels
> >
> > Splicing isn't a substitute for allowing multiple channels. Multiple
> > channels allow nodes to:
> >
> >   * create distinct channels with distinct acceptance policies.
> >   * create a mix of public and non-advertised channels with a node.
> >   * be able to send more than the (current) max HTLC amount
> > using various flavors of AMP.
> >   * get past the (current) max channel size value
> >   * allow a link to carry more HTLCs (due to the current super low max
> HTLC
> > values) given the additional HTLC pressure that
> > AMP may produce (alternative is a commitment fan out)
>
> These all seem marginal to me.  I think if we start hitting max values,
> we should discuss increasing them.
>
> > Is there a fundamental reason that CL will never allow nodes to create
> > multiple channels? It seems unnecessarily limiting.
>
> Yeah, we have a daemon per peer.  It's really simple with 1 daemon, 1
> channel.  My own fault: I was the one who insisted we mux multiple
> connections over the same transport; if we'd gone for independent
> connections our implementation would have been trivial.
>
> >> Splice Negotiation:
> >
> > Any reason to now make the splicing_add_* messages allow one to add
> several
> > inputs in a single message? Given "acceptable" constraints for how large
> the
> > witness and pkScripts can be, we can easily enforce an upper limit on the
> > number of inputs/outputs to add.
>
> Mainly limitations of our descriptor language, TBH.
>
> > I like that the intro messages have already been designed with the
> > concurrent case in mind beyond a simpler propose/accept flow. However is
> > there any reason why it doesn't also allow either side to fully
> re-negotiate
> > _all_ the funding details? Splicing is a good opportunity to garbage
> collect
> > the prior revocation state, and also free up obsolete space in watch
> towers.
>
> I thought about restarting the revocation sequence, but it seems like
> that only saves a tiny amount since we only store log(N) entries.  We
> can drop old HTLC info post-splice though, and (after some delay for
> obscurity) tell watchtowers to drop old entries I think.
>
> > Additionally, as the size of the channel is either expanding or
> contracting,
> > both sides should be allowed to modify things like the CSV param,
> reserve,
> > max accepted htlc's, max htlc size, etc. Many of these parameters like
> the
> > CSV value should scale with the size of the channel, not allowing these
> > parameters to be re-negotiated could result in odd scenarios like still
> > maintain a 1 week CSV when the channel size has dipped from 1 BTC to 100k
> > satoshis.
>
> Yep, good idea!  I missed that.
>
> Brings up a side point about these values, which deserves i

Re: [Lightning-dev] Wireshark plug-in for Lightning Network(BOLT) protocol

2018-11-05 Thread Olaoluwa Osuntokun
Hi tomokio,

This is so dope! We've long discussed creating canned protocol transcripts
for other implementations to assert their responses against, and I think this is a
great first step towards that.

> Our proposal:
> Every implementation has compile option which enable output key
> information file.

So is this request to add an option which will write out the _plaintext_
messages to disk, or an option that writes out the final derived read/write
secrets to disk? For the latter path, it the tools that read these
transcripts
would need to be aware of key rotations, so they'd  be able to continue to
decrypt the transact pt post rotation.

-- Laolu


On Sat, Oct 27, 2018 at 2:37 AM  wrote:

> Hello lightning network developers.
> Nayuta team is developing Wireshark plug-in for Lightning Network(BOLT)
> protocol.
> https://github.com/nayutaco/lightning-dissector
>
> It’s alpha version, but it can decode some BOLT message.
> Currently, this software works for Nayuta’s implementation(ptarmigan) and
> Éclair.
> When ptarmigan is compiled with some option, it write out key information
> file. This Wireshark plug-in decode packet using that file.
> When you use Éclair, this software parse log file.
>
> Through our development experience, interoperability test is time
> consuming task.
> If people can see communication log of BOLT message on same format
> (.pcap), it will be useful for interoperability test.
>
> Our proposal:
> Every implementation has compile option which enable output key
> information file.
>
> We are glad if this project is useful for lightning network eco-system.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-05 Thread Olaoluwa Osuntokun
> However personally I do not really see the need to create multiple
channels
> to a single peer, or increase the capacity with a specific peer (via
splice
> or dual-funding).  As Christian says in the other mail, this
consideration,
> is that it becomes less a network and more of some channels to specific
big
> businesses you transact with regularly.

I made no reference to any "big businesses", only the utility that arises
when one has multiple channels to a given peer. Consider an easier example:
given the max channel size, I can only ever send 0.16 or so BTC to that
peer. If I have two channels, then I can send 0.32 and so on. Consider the
case post AMP where we maintain the current limit of the number of in flight
HTLCs. If AMP causes most HTLCs to generally be in flight within the
network, then all of a sudden, this "queue" size (outstanding HTLCs in a
commitment) becomes more scarce (assume a global MTU of say 100k sat for
simplicity). This may then prompt nodes to open additional channels to
other nodes (1+) in order to accommodate the increased HTLC bandwidth load
due to the sharded multi-path payments.

Independent on bolstering the bandwidth capabilities of your links to other
nodes, you would still want to maintain a diverse set of channels for fault
tolerance, path diversity, and redundancy reasons.

In the splicing case, if only a single in-flight splice is permitted, and I
as a user want to keep all my funds in channels, then the more channels I
have, the more concurrent on-chain withdraws/deposits I'll be able to
service.

> I worry about doing away with initiator distinction

Can you re-phrase this sentence? I'm having trouble parsing it, thanks.

> which puzzles me, and I wonder if I am incorrect in my prioritization.

Consider that not all work items are created equal, and they have varying
levels of implementation and network-wide synchronization. For example, I
think we all consider multi-hop decorrelation to be a high priority.  However, it
has the longest and hardest road to deployment as it effectively forces us
to perform a "slow motion hard-fork" within the network. On the other hand,
if lnd wanted to deploy a flavor of non-interactive (no invoice) payments
*today*, we could do that without *any* synchronization at the
implementation of network level, as it's purely an end-to-end change.

> I am uncertain what this means in particular, but let me try to restate
> what you are talking about in other terms:

Thought about it a bit more (like way ago) and this is really no different
than having a donation page where people use public derivation to derive
addresses to deposit directly to your channel. All the Lightning node needs
to do is recognize that any coins sent to these derived addresses should be
immediately spliced into an available channel (one that doesn't have any
other outstanding splices).

> It seems to me naively that the above can be done by the client software
> without any modifications to the Lightning Network BOLT protocol

Sticking with that prior version, yes, this would be able to be seamlessly
included in the async splice proposal. The one requirement is a link-level
protocol that allows both sides to collaboratively create and recognize
these outputs.

> Or is my above restatement different from what you are talking about?

You're missing the CLTV timeout clause. It isn't a plain p2wkh, it's a p2wsh
script. Either they splice this in before the timeout, or it times out and
it goes back to one party. In this case, it's no different than the async
concurrent commitment splice in double spend case.
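
To make the shape of such an output concrete, here's a hypothetical witness
script for the deposit output, my own sketch of the construction being
described rather than any agreed format: a cooperative 2-of-2 path to splice
the funds in, and a CLTV timeout path returning the funds to the depositor.

```go
package splice

import "github.com/btcsuite/btcd/txscript"

// spliceDepositScript builds a witness script with two paths: a 2-of-2
// spend by both channel parties (the splice-in path), and a CLTV
// timeout path that returns the funds to the depositor alone.
func spliceDepositScript(localKey, remoteKey, refundKey []byte,
	timeoutHeight int64) ([]byte, error) {

	b := txscript.NewScriptBuilder()

	b.AddOp(txscript.OP_IF)
	// Cooperative path: both parties sign to splice the funds in.
	b.AddOp(txscript.OP_2)
	b.AddData(localKey)
	b.AddData(remoteKey)
	b.AddOp(txscript.OP_2)
	b.AddOp(txscript.OP_CHECKMULTISIG)
	b.AddOp(txscript.OP_ELSE)
	// Timeout path: after the absolute timeout, the depositor can
	// sweep the funds back unilaterally.
	b.AddInt64(timeoutHeight)
	b.AddOp(txscript.OP_CHECKLOCKTIMEVERIFY)
	b.AddOp(txscript.OP_DROP)
	b.AddData(refundKey)
	b.AddOp(txscript.OP_CHECKSIG)
	b.AddOp(txscript.OP_ENDIF)

	return b.Script()
}
```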

-- Laolu


On Tue, Oct 16, 2018 at 10:16 PM ZmnSCPxj  wrote:

> Good morning Laolu,
>
> Is there a fundamental reason that CL will never allow nodes to create
> multiple channels? It seems unnecessarily limiting.
>
>
> The architecture of c-lightning assigns a separate process to each peer.
> For simplicity this peer process handles only a single channel.  Some of
> the channel initiation and shutdown protocols are written "directly", i.e.
> if the BOLT spec says this must happen before that, we literally write in
> the C code this_function(); that_function();.  It would be possible  to
> change this architecture with significant difficulty.
>
> However personally I do not really see the need to create multiple
> channels to a single peer, or increase the capacity with a specific peer
> (via splice or dual-funding).  As Christian says in the other mail, this
> consideration, is that it becomes less a network and more of some channels
> to specific big businesses you transact with regularly.  But I suppose,
> that we will have to see how the network evolves eventually; perhaps the
> goal of decentralization is simply doomed regardless, and Lightning will
> indeed evolve into a set of channels you maintain to specific big
> businesses you regularly work with.
>
>
> >* [`4`:`feerate_per_kw`]
>
> What fee rate is this? IMO we should do commitmentv2 before splicing as
> then
> we can 

Re: [Lightning-dev] Commitment Transaction Format Update Proposals?

2018-11-05 Thread Olaoluwa Osuntokun
> This seems at odds with the goal of "if the remote party force closes,
then
> I get my funds back immediately without requiring knowledge of any secret
> data"

Scratch that: the static backups just need to include this CSV value!

-- Laolu


On Tue, Nov 6, 2018 at 3:29 PM Olaoluwa Osuntokun  wrote:

> Hi Rusty,
>
> I'm a big fan in general of most of this! Amongst many other things, it'll:
> simplify the whole static channel backup + recovery workflow, and avoid all
> the fee related headaches we've run into over the past few months.
>
> > - HTLC-timeout and HTLC-success txs sigs are
> > SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.
>
> Would this mean that we no longer extend fees to the second-level
> transactions as well? If so, then a dusty HTLC would be determined solely
> by
> looking at the direct output, rather than the resulting output in the
> second
> layer.
>
> >  - `localpubkey`, `remotepubkey`, `local_htlcpubkey`,
> `remote_htlcpubkey`,
> > `local_delayedpubkey`, and `remote_delayedpubkey` derivation now uses a
> > two-stage unhardened BIP-32 derivation based on the commitment number.
>
> It seems enough to _only_ modify the derivation for local+remote pubkey (so
> the direct "settle" keys). This constrains the change to only what's
> necessary to simplify the backup+recovery workflow with the current
> commitment design. By restricting the change to these two keys, we minimize
> the code impact to the existing implementations, and avoid unnecessary
> changes that don't make strides towards the immediate goal.
>
> > - `to_remote` is now a P2WSH of:
> >`to_self_delay` OP_CSV OP_DROP  OP_CHECKSIG
>
> This seems at odds with the goal of "if the remote party force closes, then
> I get my funds back immediately without requiring knowledge of any secret
> data". If it was just a plain p2wkh, then during a routine seed import and
> rescan (assuming ample look ahead as we know this is a "special" key), I
> would pick up outputs of channels that were force closed while I was down
> due to my dog eating my hard drive.
>
> Alternatively, since the range of CSV values can be known ahead of time, I
> can brute force a set of scripts to look for in the chain. However, this
> results in potentially a very large number of scripts (depending on how
> many
> channels one has, and bounds on the acceptable CSV) I need to scan for.
>
> -- Laolu
>
>
> On Fri, Oct 12, 2018 at 3:57 PM Rusty Russell 
> wrote:
>
>> Hi all,
>>
>> There have been a number of suggested changes to the commitment
>> transaction format:
>>
>> 1. Rather than trying to agree on what fees will be in the future, we
>>should use an OP_TRUE-style output to allow CPFP (Roasbeef)
>> 2. The `remotepubkey` should be a BIP-32-style, to avoid the
>>option_data_loss_protect "please tell me your current
>>per_commitment_point" problem[1]
>> 3. The CLTV timeout should be symmetrical to avoid trying to game the
>>peer into closing. (Connor IIRC?).
>>
>> It makes sense to combine these into a single `commitment_style2`
>> feature, rather than having a testing matrix of all these disabled and
>> enabled.
>>
>> BOLT #2:
>>
>> - If `commitment_style2` negotiated, update_fee is a protocol error.
>>
>> This mainly changes BOLT #3:
>>
>> - The feerate for commitment transactions is always 253 satoshi/Sipa.
>> - Commitment tx always has a P2WSH OP_TRUE output of 1000 satoshi.
>> - Fees, OP_TRUE are always paid by the initial funder, because it's
>> simple,
>>   unless they don't have funds (eg. push_msat can do this, unless we
>> remove it?)
>> - HTLC-timeout and HTLC-success txs sigs are
>>   SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.
>> - `localpubkey`, `remotepubkey`, `local_htlcpubkey`,
>>   `remote_htlcpubkey`, `local_delayedpubkey`, and `remote_delayedpubkey`
>>   derivation now uses a two-stage unhardened BIP-32 derivation based on
>>   the commitment number.  Two-stage because we can have 2^48 txs and
>>   BIP-32 only supports 2^31: the first 17 bits are used to derive the
>>   parent for the next 31 bits?
>> - `to_self_delay` for both sides is the maximum of either the
>>   `open_channel` or `accept_channel`.
>> - `to_remote` is now a P2WSH of:
>> `to_self_delay` OP_CSV OP_DROP  OP_CHECKSIG
>>
>> Cheers,
>> Rusty.
>>
>> [1] I recently removed checking this field from c-lightning, as I
>> couldn't get it to reliably work under stress-test.  I may just have
>> a bug, but we could just fix the spec instead, then we can get our
>> funds back even if we never talk to the peer.
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Commitment Transaction Format Update Proposals?

2018-11-05 Thread Olaoluwa Osuntokun
Hi Rusty,

I'm a big fan in general of most of this! Amongst many other things, it'll:
simplify the whole static channel backup + recovery workflow, and avoid all
the fee related headaches we've run into over the past few months.

> - HTLC-timeout and HTLC-success txs sigs are
> SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.

Would this mean that we no longer extend fees to the second-level
transactions as well? If so, then a dusty HTLC would be determined solely by
looking at the direct output, rather than the resulting output in the second
layer.

>  - `localpubkey`, `remotepubkey`, `local_htlcpubkey`, `remote_htlcpubkey`,
> `local_delayedpubkey`, and `remote_delayedpubkey` derivation now uses a
> two-stage unhardened BIP-32 derivation based on the commitment number.

It seems enough to _only_ modify the derivation for local+remote pubkey (so
the direct "settle" keys). This constrains the change to only what's
necessary to simplify the backup+recovery workflow with the current
commitment design. By restricting the change to these two keys, we minimize
the code impact to the existing implementations, and avoid unnecessary
changes that don't make strides towards the immediate goal.

> - `to_remote` is now a P2WSH of:
>`to_self_delay` OP_CSV OP_DROP  OP_CHECKSIG

This seems at odds with the goal of "if the remote party force closes, then
I get my funds back immediately without requiring knowledge of any secret
data". If it was just a plain p2wkh, then during a routine seed import and
rescan (assuming ample look ahead as we know this is a "special" key), I
would pick up outputs of channels that were force closed while I was down
due to my dog eating my hard drive.

Alternatively, since the range of CSV values can be known ahead of time, I
can brute force a set of scripts to look for in the chain. However, this
results in potentially a very large number of scripts (depending on how many
channels one has, and bounds on the acceptable CSV) I need to scan for.
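
Concretely, the scan set in that second approach looks something like this
sketch (one script per candidate CSV value per derived key, using the
to_remote form quoted above):

```go
package rescan

import (
	"crypto/sha256"

	"github.com/btcsuite/btcd/txscript"
)

// toRemoteScript builds the proposed to_remote witness script:
//
//	<to_self_delay> OP_CSV OP_DROP <key> OP_CHECKSIG
func toRemoteScript(csvDelay int64, remoteKey []byte) ([]byte, error) {
	b := txscript.NewScriptBuilder()
	b.AddInt64(csvDelay)
	b.AddOp(txscript.OP_CHECKSEQUENCEVERIFY)
	b.AddOp(txscript.OP_DROP)
	b.AddData(remoteKey)
	b.AddOp(txscript.OP_CHECKSIG)
	return b.Script()
}

// candidateWitnessPrograms enumerates the P2WSH witness programs to
// scan for when the CSV value isn't known: one per candidate delay per
// derived key. The set grows with both ranges, which is the drawback
// described above.
func candidateWitnessPrograms(keys [][]byte, maxCSV int64) ([][32]byte, error) {
	var programs [][32]byte
	for _, key := range keys {
		for csv := int64(1); csv <= maxCSV; csv++ {
			script, err := toRemoteScript(csv, key)
			if err != nil {
				return nil, err
			}
			programs = append(programs, sha256.Sum256(script))
		}
	}
	return programs, nil
}
```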

-- Laolu


On Fri, Oct 12, 2018 at 3:57 PM Rusty Russell  wrote:

> Hi all,
>
> There have been a number of suggested changes to the commitment
> transaction format:
>
> 1. Rather than trying to agree on what fees will be in the future, we
>should use an OP_TRUE-style output to allow CPFP (Roasbeef)
> 2. The `remotepubkey` should be a BIP-32-style, to avoid the
>option_data_loss_protect "please tell me your current
>per_commitment_point" problem[1]
> 3. The CLTV timeout should be symmetrical to avoid trying to game the
>peer into closing. (Connor IIRC?).
>
> It makes sense to combine these into a single `commitment_style2`
> feature, rather than having a testing matrix of all these disabled and
> enabled.
>
> BOLT #2:
>
> - If `commitment_style2` negotiated, update_fee is a protocol error.
>
> This mainly changes BOLT #3:
>
> - The feerate for commitment transactions is always 253 satoshi/Sipa.
> - Commitment tx always has a P2WSH OP_TRUE output of 1000 satoshi.
> - Fees, OP_TRUE are always paid by the initial funder, because it's simple,
>   unless they don't have funds (eg. push_msat can do this, unless we
> remove it?)
> - HTLC-timeout and HTLC-success txs sigs are
>   SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees.
> - `localpubkey`, `remotepubkey`, `local_htlcpubkey`,
>   `remote_htlcpubkey`, `local_delayedpubkey`, and `remote_delayedpubkey`
>   derivation now uses a two-stage unhardened BIP-32 derivation based on
>   the commitment number.  Two-stage because we can have 2^48 txs and
>   BIP-32 only supports 2^31: the first 17 bits are used to derive the
>   parent for the next 31 bits?
> - `to_self_delay` for both sides is the maximum of either the
>   `open_channel` or `accept_channel`.
> - `to_remote` is now a P2WSH of:
> `to_self_delay` OP_CSV OP_DROP  OP_CHECKSIG
>
> Cheers,
> Rusty.
>
> [1] I recently removed checking this field from c-lightning, as I
> couldn't get it to reliably work under stress-test.  I may just have
> a bug, but we could just fix the spec instead, then we can get our
> funds back even if we never talk to the peer.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-15 Thread Olaoluwa Osuntokun
> I would suggest more to consider the simpler method, despite its larger
> onchain footprint (which is galling),

The on-chain footprint is a shame, and also it gets worse if we start to
allow multiple pending splices. Also the lack of a non-blocking splice-in is
a big drawback IMO.

> but mostly because I do not see splicing as being as important as AMP or
> watchtowers (and payment decorrelation seems to affect how AMP can be
> implemented, so its priority also goes up).

Most of what you mention here have _very_ different deployment timelines and
synchronization requirements across clients. For example, splicing is a link
level change and can start to be rolled out immediately. Decorrelation on
the other hand, is a _network_ level change, and would take a considerable
amount of time to reach widespread deployment as it essentially splits the
routable paths in the network until all/most are upgraded.

If you think any of these items is a higher priority than splicing then you
can simply start working on them! There's no agency that prescribes what
should and shouldn't be pursued or developed, just your willingness to
write some code.

One thing that I think we should lift from the multiple funding output
approach is the "pre seating of inputs". This is cool as it would allow
clients to generate addresses that others could deposit to, and then have
the funds spliced directly into the channel. Public derivation can be used, along
with a script template to do it non-interactively, with the clients picking
up these deposits, and initiating a splice in as needed.

-- Laolu



On Thu, Oct 11, 2018 at 11:14 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Rusty,
>
> >
> > > It may be good to start brainstorming possible failure modes during
> splice, and how to recover, and also to indicate the expected behavior in
> the proposal, as I believe these will be the points where splicing must be
> designed most precisely. What happens when a splice is ongoing and the
> communication gets disconnected? What happens when some channel failure
> occurs during splicing and we are forced to drop onchain? And so on.
> >
> > Agreed, but we're now debating two fairly different methods for
> > splicing. Once we've decided on that, we can try to design the
> > proposals themselves.
>
> I would suggest more to consider the simpler method, despite its larger
> onchain footprint (which is galling), but mostly because I do not see
> splicing as being as important as AMP or watchtowers (and payment
> decorrelation seems to affect how AMP can be implemented, so its priority
> also goes up).  So I think getting *some* splicing design out would be
> better even if imperfect.  Others may disagree on priority.
>
> Regards,
> ZmnSCPxj


Re: [Lightning-dev] RouteBoost: Adding 'r=' fields to BOLT 11 invoices to flag capacity

2018-09-28 Thread Olaoluwa Osuntokun
> This is orthogonal.  There's no point probing your own channel, you're
> presumably probing someone else's.

In my scenario, you're receiving a new HTLC from some remote party
unbeknownst to you. I was replying to cdecker's reply to johan that one
wouldn't always add this new type of routing hint for all channels, since it
leaks available bandwidth information. Without something like an "unadd",
you can't do anything against an individual attempting to probe you other
than drop packets (drop as in don't even add to your commit, resulting in an
HTLC timeout), as if you cancel back, then they know that you had enough
bandwidth to _accept_ the HTLC in the first place.
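
As a toy illustration (mine, not from the thread) of why this leaks, an
attacker can binary-search a victim's available bandwidth purely from whether
probe HTLCs with an unknown payment hash get accepted or rejected:

    def probe_balance(htlc_accepted, max_amt_msat, rounds=20):
        lo, hi = 0, max_amt_msat
        for _ in range(rounds):
            mid = (lo + hi) // 2
            if htlc_accepted(mid):   # added then cancelled back: balance >= mid
                lo = mid
            else:                    # rejected outright: balance < mid
                hi = mid
        return lo, hi

    # Simulated victim with ~3.2M msat of available bandwidth:
    available = 3_200_000
    print(probe_balance(lambda amt: amt <= available, 10_000_000))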

-- Laolu


On Wed, Sep 26, 2018 at 5:54 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> >> That might not be so desirable, since it leaks the current channel
> >> capacity to the user
> >
> > From my PoV, the only way a node can protect the _instantaneous_
> available
> > bandwidth in their channel is to randomly reject packets, even if they
> have
> > the bandwidth to actually accept and forward them.
> >
> > Observe that if a "prober" learns that you've _accepted_  a packet, then
> > they know you have at least that amount as available bandwidth. As a
> result,
> > I can simply send varying sat packet sizes over to you, either with the
> > wrong timelock, or an unknown payment hash.
>
> Yes.  You have to have a false capacity floor, which must vary
> periodically, to protect against this kind of probing (randomly failing
> attempts as you get close to zero capacity is also subject to probing,
> AFAICT).
>
> > Since we don't yet have the
> > "unadd" feature in the protocol, you _must_ accept the HTLC before you
> can
> > cancel it. This is mitigated a bit by the max_htlc value in the channel
> > update (basically our version of an MTU), but I can still just send
> > _multiple_ HTLC's rather than one big one to attempt to ascertain your
> > available bandwidth.
>
> This is orthogonal.  There's no point probing your own channel, you're
> presumably probing someone else's.
>
> Cheers,
> Rusty.
>


Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-23 Thread Olaoluwa Osuntokun
> to be assigned numbers, we need to standardize our feature through the BOLT
> process, but we might not wish to attempt standardization until our
> experimental features have been tested.
> Without standardization, different teams working on different experimental
> features may cause conflicts if different clients are treating feature bits
> or message types differently.
>
> By moving all experimental features to a new message where they are wrapped
> in a unique feature name, this eradicates the chance of conflicting
> implementations.
>
> Additionally, this message can serve as a generic transport mechanism between
> any two lightning nodes who have agreed to support the experiment_name_hash,
> as there is no restriction on the format of the payload. This may make it
> possible to serve e.g: HTTP over Lightning.
>
>
> * General experiment messages:
>
> If `experiment_name_hash` in the experiment message is 0, treat its payload
> as one of the following messages:
>
> ** init_experiments message
>
> Informs a peer of features supported by the client.
>
>   1. experiment_type: 16
>   2. data:
>   * [2: eflen]
>   * [eflen*32: experiment_name_hashes]
>
> A sending node:
>* MUST send the `init_experiments` message before any other `experiment`
>  message for each connection.
>* SHOULD send the `experiment_name_hash` for any features supported and set
>  to enabled in their software client.
>
> A receiving node:
>* For each experiment_name_hash:
>   * If the hash is unknown or 0: Ignore the feature
>   * If the hash is known: SHOULD enable the feature for communication with
>     this peer.
>
> ** experiment_error message
>
>  experiment_type: 17
>  data:
> [32: channel_id]
> [32: experiment_name_hash]
> [2: len]
> [len: data]
>
> For all messages before funding_created: Must use temporary_channel_id in
> lieu of channel_id.
>
> A sending node:
>* If error is critical, should also send the regular lightning `error`
>  message from BOLT #1
>* If the error is not specific to any channel: set channel_id to 0.
>
> A receiving node
>* If experiment_name_hash is unknown:
>   - MUST fail the channel.
>* If channel_id is 0
>   - MUST fail all the channels
>
> Rationale
>
> This message is not intended to replace `error` for critical errors, but is
> provided for additional debugging information related to the experimental
> feature being used.
> A client may decide whether or not it can recover from such errors
> individually per experimental feature, which may include aborting channels
> and the connection.
>
> TODO: Define gossip/query messages related to nodes/channels which support
> features by experiment_hash_name.
>
> ---EOF
>
>
> On Sunday, 22 July 2018 13:32:02 BST Olaoluwa Osuntokun wrote:
> > No need to apologize! Perhaps this confusion shows that the current
> process
> > surrounding creating/modifying/drafting BOLT documents does indeed need
> to
> > be better documented. We've more or less been rolling along with a pretty
> > minimal process among the various implementations which I'd say has
> worked
> > pretty well so far. However as more contributors get involved we may need
> > to add a bit more structure to ensure things are transparent for
> newcomers.
> >
> > On Sun, Jul 22, 2018, 12:57 PM René Pickhardt <
> r.pickha...@googlemail.com>
> >
> > wrote:
> > > Sorry did not realized that BOLTs are the equivalent - and aparently
> many
> > > people I spoke to also didn't realize that.
> > >
> > > I thought BOLT is the protocol specification and the bolts are just the
> > > sections. And the BOLT should be updated to a new version.
> > >
> > > Also I suggested that this should take place for example within the
> > > lightning rfc repo. So my suggestion was not about creating another
> place
> > > but more about making the process more transparent or kind of filling
> the
> > > gap that I felt was there.
> > >
> > > I am sorry for spaming mailboxes with my suggestion just because I
> didn't
> > > understand the current process.
> > >
> > >
> > > Olaoluwa Osuntokun  schrieb am So., 22. Juli 2018
> > >
> > > 20:59:
> > >> We already have the equiv of improvement proposals: BOLTs.
> Historically
> > >> new standardization documents are proposed initially as issues or PR's
> > >> when
> > >> ultimately accepted. Why do we need another repo?
> > 

Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread Olaoluwa Osuntokun
No need to apologize! Perhaps this confusion shows that the current process
surrounding creating/modifying/drafting BOLT documents does indeed need to
be better documented. We've more or less been rolling along with a pretty
minimal process among the various implementations which I'd say has worked
pretty well so far. However as more contributors get involved we may need
to add a bit more structure to ensure things are transparent for newcomers.

On Sun, Jul 22, 2018, 12:57 PM René Pickhardt 
wrote:

> Sorry, I did not realize that BOLTs are the equivalent - and apparently many
> people I spoke to also didn't realize that.
>
> I thought BOLT is the protocol specification and the bolts are just the
> sections. And the BOLT should be updated to a new version.
>
> Also I suggested that this should take place for example within the
> lightning rfc repo. So my suggestion was not about creating another place
> but more about making the process more transparent or kind of filling the
> gap that I felt was there.
>
> I am sorry for spamming mailboxes with my suggestion just because I didn't
> understand the current process.
>
>
> Olaoluwa Osuntokun  schrieb am So., 22. Juli 2018
> 20:59:
>
>> We already have the equiv of improvement proposals: BOLTs. Historically
>> new standardization documents are proposed initially as issues or PR's when
>> ultimately accepted. Why do we need another repo?
>>
>> On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
>> lightning-dev@lists.linuxfoundation.org> wrote:
>>
>>> Hey everyone,
>>>
>>> in the grand tradition of BIPs I propose that we also start to have our
>>> own LIPs (Lightning Network Improvement proposals)
>>>
>>> I think they should be placed on the github.com/lightning account in a
>>> repo called lips (or within the lightning rfc repo) until that will happen
>>> I created a draft for LIP-0001 (which is describing the process and is 95%
>>> influenced by BIP-0002) in my github repo:
>>>
>>> https://github.com/renepickhardt/lips  (There are some open Todos and
>>> Questions in this LIP)
>>>
>>> The background for this Idea: I just came home from the bitcoin munich
>>> meetup where I held a talk examining BOLT. As I was asked to also talk
>>> about the future plans of the developers for BOLT 1.1 I realized while
>>> preparing the talk that many ideas are distributed within the community but
>>> it seems we don't have a central place where we collect future enhancements
>>> for BOLT1.1. Having this in mind I think also for the meeting in Australia
>>> it would be nice if already a list of LIPs would be in place so that the
>>> discussion can be more focused.
>>> potential LIPs could include:
>>> * Watchtowers
>>> * Autopilot
>>> * AMP
>>> * Splicing
>>> * Routing Protcols
>>> * Broadcasting past Routing statistics
>>> * eltoo
>>> * ...
>>>
>>> As said before I would volunteer to work on a LIP for Splicing (actually
>>> I already started)
>>>
>>> best Rene
>>>
>>>
>>> --
>>> https://www.rene-pickhardt.de
>>>
>>> Skype: rene.pickhardt
>>>
>>> mobile: +49 (0)176 5762 3618
>>>
>>
> On 22.07.2018 at 20:59, "Olaoluwa Osuntokun"  wrote:
>
> We already have the equiv of improvement proposals: BOLTs. Historically
> new standardization documents are proposed initially as issues or PR's when
> ultimately accepted. Why do we need another repo?
>
> On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Hey everyone,
>>
>> in the grand tradition of BIPs I propose that we also start to have our
>> own LIPs (Lightning Network Improvement proposals)
>>
>> I think they should be placed on the github.com/lightning account in a
>> repo called lips (or within the lightning rfc repo) until that will happen
>> I created a draft for LIP-0001 (which is describing the process and is 95%
>> influenced by BIP-0002) in my github repo:
>>
>> https://github.com/renepickhardt/lips  (There are some open Todos and
>> Questions in this LIP)
>>
>> The background for this Idea: I just came home from the bitcoin munich
>> meetup where I held a talk examining BOLT. As I was asked t

Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread Olaoluwa Osuntokun
We already have the equiv of improvement proposals: BOLTs. Historically new
standardization documents are proposed initially as issues or PR's when
ultimately accepted. Why do we need another repo?

On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> in the grand tradition of BIPs I propose that we also start to have our
> own LIPs (Lightning Network Improvement proposals)
>
> I think they should be placed on the github.com/lightning account in a
> repo called lips (or within the lightning rfc repo) until that will happen
> I created a draft for LIP-0001 (which is describing the process and is 95%
> influenced by BIP-0002) in my github repo:
>
> https://github.com/renepickhardt/lips  (There are some open Todos and
> Questions in this LIP)
>
> The background for this Idea: I just came home from the bitcoin munich
> meetup where I held a talk examining BOLT. As I was asked to also talk
> about the future plans of the developers for BOLT 1.1 I realized while
> preparing the talk that many ideas are distributed within the community but
> it seems we don't have a central place where we collect future enhancements
> for BOLT1.1. Having this in mind I think also for the meeting in Australia
> it would be nice if already a list of LIPs would be in place so that the
> discussion can be more focused.
> potential LIPs could include:
> * Watchtowers
> * Autopilot
> * AMP
> * Splicing
> * Routing Protocols
> * Broadcasting past Routing statistics
> * eltoo
> * ...
>
> As said before I would volunteer to work on a LIP for Splicing (actually I
> already started)
>
> best Rene
>
>
> --
> https://www.rene-pickhardt.de
>
> Skype: rene.pickhardt
>
> mobile: +49 (0)176 5762 3618


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
> #1 lets us leave out double-funded channels.  #2 and #3 lets us leave out
> splice.

> For myself, I would rather leave out AMP and double-funding and splicing
> than remove ZKCP.

It isn't one or the other. ZKCPs are compatible with various flavors of AMP.
All of these technologies can be rolled out, some with less coordination
than others. Please stop presenting these upgrades as if they're opposed and
fundamental constraints only allow a handful of them to be deployed.

Dual funded channels allow for immediate bi-directional transfers between
endpoints. Splicing allows channels to contract or grow, as well as: pay out
to on-chain addresses, fund new channels on the fly, close into old channels,
consolidate change addresses, create fee inputs for eltoo, orchestrate
closing/opening coin-joins, etc, etc.

-- Laolu

On Wed, Jul 4, 2018 at 10:36 PM ZmnSCPxj  wrote:

> Good morning all,
>
> > > What's the nasty compromise?
> > >
> > > Let's also not underestimate how big of an update switching to dlog
> based
> > >
> > > HTLCs will be.
> >
> > Was referring to losing proof-of-payment; that's vital in a system
> >
> > without intermediaries. We have to decide what the lesser evil is.
>
> Without the inherent ZKCP, it becomes impossible to build a trustless
> off-to-on/on-to-offchain bridge, since a trustless swap outside of
> Lightning becomes impossible.  To my mind, ZKCP is an important building
> block in cryptocurrency: it is what we use in Lightning for routing.
> Further, ZKCP can be composed together to form a larger ZKCP, which again
> is what we use in Lightning for routing.
>
> The ZKCP here is what lets LN endpoint to interact with the chain and lets
> off-to-on/on-to-offchain bridges to be trustless.
>
> off/onchain bridges are important as they provide:
>
> 1.  Incoming channels: Get some onchain funds from cold storage (or
> borrowed), create an outgoing channel (likely to the bridge for best chance
> of working), then ask bridge for an invoice to send money to an address you
> control onchain. The outgoing channel capacity becomes incoming capacity,
> you get (most of) your money back (minus fees) onchain.
> 2.  Reloading spent channels.  Give bridge an invoice and pay to the
> bridge to trigger it reloading your channel.
> 3.  Unloading full channels. If you earn so much money (minus what you
> spend on expenses, subcontractors, employees, suppliers, etc.) you can use
> the bridge to send directly to your cold storage.
>
> #1 lets us leave out double-funded channels.  #2 and #3 lets us leave out
> splice.
>
> The interaction between bridge and Lightning is simply via BOLT11
> invoices.  Those provide the ZKCP necessary to make the bridge trustless.
>
> AMP enhances such a Lightning+bridge network, since the importance of
> maximum channel capacity is reduced if a ZKCP-providing AMP is available.
> For myself, I would rather leave out AMP and double-funding and splicing
> than remove ZKCP.
>
> One could imagine a semi-trusted ZKCP service for real-world items.  Some
> semi-trusted institution provides special safeboxes for rent that can be
> unlocked either by seller private key after 1008 blocks, or by the
> recipient key and a proof-of-payment preimage (and records the preimage in
> some publicly-accessible website).  To sell a real-world item, make a
> BOLT11 invoice, bring item to a safebox, lock it with appropriate keys and
> the invoice payment hash, give BOLT11 invoice to buyer.  Buyer pays and
> gets proof-of-payment preimage, goes to safebox and gets item.  Multi-way
> trades (A wants item from B, B wants item from C, C wants item from A) are
> just compositions of ZKCP.
>
> >
> > And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> >
> > pretty, but because actually capturing it will be a saga.
>
> Bards shall sing about The Hunt for Schnorr-Eltoonicorn for ages, until
> Satoshi himself is but a vague memory of a myth long forgotten.
>
> Regards,
> ZmnSCPxj
>


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
> Was referring to losing proof-of-payment; that's vital in a system without
> intermediaries.  We have to decide what the lesser evil is.

As it is now, we don't have a proper proof of payment. We have a "proof that
someone paid". A proper proof of payment would be: "proof that bob paid
carol".
Aside from that, spontaneous payments are amongst the most requested features
I get from users and developers.

There're a few ways to achieve this with dlog based AMPs.

As far as hash based AMPs go, with a bit more interaction and something like
zkboo, one can achieve stronger binding. However, we'd lose the nice "one
shot" property that dlog based AMPs allow.

> And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> pretty, but because actually capturing it will be a saga.

eltoo won't be the end-all-be-all as it comes along with several tradeoffs,
like everything else does.

Also, everything we can do with Schnorr, we can also do with ECDSA, and we
can do it today.

-- Laolu


On Wed, Jul 4, 2018 at 7:12 PM Rusty Russell  wrote:

> Olaoluwa Osuntokun  writes:
> > What's the nasty compromise?
> >
> > Let's also not underestimate how big of an update switching to dlog based
> > HTLCs will be.
>
> Was referring to losing proof-of-payment; that's vital in a system
> without intermediaries.  We have to decide what the lesser evil is.
>
> And yeah, I called it Schnorr-Eltoonicorn not only because it's so
> pretty, but because actually capturing it will be a saga.
>
> Cheers,
> Rusty.
>
> > On Wed, Jul 4, 2018, 4:21 PM Rusty Russell 
> wrote:
> >
> >> Christian Decker  writes:
> >>
> >> > ZmnSCPxj via Lightning-dev 
> >> writes:
> >> >> For myself, I think splice is less priority than AMP. But I prefer an
> >> >> AMP which retains proper ZKCP (i.e. receipt of preimage at payer
> >> >> implies receipt of payment at payee, to facilitate trustless
> >> >> on-to-offchain and off-to-onchain bridges).
> >> >
> >> > Agreed, multipath routing is a priority, but I think splicing is just
> as
> >> > much a key piece to a better UX, since it allows to ignore differences
> >> > between on-chain and off-chain funds, showing just a single balance
> for
> >> > all use-cases.
> >>
> >> Agreed, we need both.  Multi-channel was a hack because splicing doesn't
> >> exist, and I'd rather not ever have to implement multi-channel :)
> >>
> >> AMP is important, but it's a nasty compromise with the current
> >> limitations.  I want to have my cake and eat it too, and I'm pretty sure
> >> it's possible once the Scnorr-Eltoonicorn arrives.
> >>
> >> Cheers,
> >> Rusty.


Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-04 Thread Olaoluwa Osuntokun
What's the nasty compromise?

Let's also not underestimate how big of an update switching to dlog based
HTLCs will be.

On Wed, Jul 4, 2018, 4:21 PM Rusty Russell  wrote:

> Christian Decker  writes:
>
> > ZmnSCPxj via Lightning-dev 
> writes:
> >> For myself, I think splice is less priority than AMP. But I prefer an
> >> AMP which retains proper ZKCP (i.e. receipt of preimage at payer
> >> implies receipt of payment at payee, to facilitate trustless
> >> on-to-offchain and off-to-onchain bridges).
> >
> > Agreed, multipath routing is a priority, but I think splicing is just as
> > much a key piece to a better UX, since it allows to ignore differences
> > between on-chain and off-chain funds, showing just a single balance for
> > all use-cases.
>
> Agreed, we need both.  Multi-channel was a hack because splicing doesn't
> exist, and I'd rather not ever have to implement multi-channel :)
>
> AMP is important, but it's a nasty compromise with the current
> limitations.  I want to have my cake and eat it too, and I'm pretty sure
> it's possible once the Scnorr-Eltoonicorn arrives.
>
> Cheers,
> Rusty.


Re: [Lightning-dev] Rebalancing argument

2018-07-03 Thread Olaoluwa Osuntokun
Dmytro wrote:
> Yet the question remains — are there obvious advantages of cycle
> transactions over a smart fee/routing system?

That's a good question. It may be the case that modifying the fee structure
to punish flows that unbalance channels further simplifies the problem, as
the heuristics can then target solely the fee rate. The earliest suggestion
of this I can recall was from Tadge way back in like 2015 or so. The goal
here is for a node to ideally maintain relatively balanced channels, but then
charge a payment an amount that scales super-linearly when flows consume most
of the available balance.

The current fee schedule is essentially:

  base_fee + amt*rate

c-lightning and lnd (which borrowed it from c-lightning) currently use a
"risk factor" to factor in the impact of the time lock on the "weight" of an
edge when path finding. With this, the fee schedule looks like:

  (base_fee + amt*rate) + (cltv_delta * risk_factor / 1,000,000)

In the future, we may want to allow nodes to also signal how they wish the
fee to scale with the absolute CLTV of the HTLC extended to them. This would
allow them to more naturally factor in their conception of the time value of
their BTC.

Finally, if we factor in a "balance disruption" factor, the fee schedule
may look something like this:

  (base_fee + amt*rate) + (cltv_delta * risk_factor / 1,000,000) +
  gamma*f(capacity, amt)

Here f is a function whose output is proportional to the distance the
payment flow (assuming full capacity at that instance) puts the channel away
from being fully balanced, and gamma is a coefficient that allows nodes to
express the degree of penalty for unbalancing a channel. f itself is either
agreed upon by the network completely, or resembles a certain polynomial,
allowing nodes to select coefficients as they wish.

We may want to consider moving to something like this for BOLT 1.x+, as it
allows nodes to quantify their aversion to time locks and their affinity for
channel balance equilibrium.
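
As a rough sketch of how such a three-term schedule could be computed; the
scaling constants are illustrative and f() below is just one possible choice
(quadratic in how far the flow pushes the channel from a 50/50 balance):

    def edge_fee(amt, base_fee, rate, cltv_delta, risk_factor,
                 gamma, capacity, local_balance):
        routing = base_fee + amt * rate                      # base_fee + amt*rate
        risk = cltv_delta * risk_factor                      # time lock component
        post_balance = local_balance - amt
        imbalance = abs(post_balance / capacity - 0.5) * 2   # 0 = balanced, 1 = drained
        disruption = gamma * imbalance ** 2                  # gamma * f(capacity, amt)
        return routing + risk + disruption

    # A flow that drains most of the local balance pays a super-linear penalty
    # relative to one that leaves the channel close to balanced:
    print(edge_fee(400_000, 1_000, 0.000001, 144, 10, 5_000, 1_000_000, 500_000))
    print(edge_fee(100_000, 1_000, 0.000001, 144, 10, 5_000, 1_000_000, 500_000))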

Alternatively, if we move to something like HORNET, then during the set up
phase, nodes can ask for an initial "quote" for a set of payment ranges,
then just use that quote for all payments sent. This allows nodes to keep
their fee schedules private (for w/e reason) and also change them at a whim
if they wish.

-- Laolu


On Sun, Jul 1, 2018 at 8:39 AM Dmytro Piatkivskyi <
dmytro.piatkivs...@ntnu.no> wrote:

> Hi Rene,
>
> Thanks for your answer!
>
> 1. The Lightning network operates under different assumptions, that is
> true. However, the difference is minor in terms of the issue posed. The
> premise for the quoted statement is that taking fees changes the nodes’
> balances, therefore selected paths affect the liquidity. In the Lightning
> network fees are very small, so the change in liquidity may be negligible.
> Moreover, intermediate nodes gain in fees, which only increases the
> liquidity.
>
> 2.A. It is too early to speculate where the privacy requirements will
> settle down. Flare suggests a mechanism of sharing the infrastructure view
> between nodes, possibly sharing weights. As the network grows routing will
> become more difficult, however we don’t know yet to which extent. It may
> organise itself in ‘domains’, so when we send a payment we know to which
> domain we are sending to, knowing the path to it beforehand. The point is
> we don’t know yet, so we can’t speculate.
>
> 2.B. That is surely an interesting aspect. HTLC locks
> temporarily downgrade the network liquidity. Now the question is how it
> changes the order of transactions and how that order change affects the
> transaction feasibility. Does it render some transactions infeasible or
> just defers them? It definitely needs a closer look.
>
> Yet the question remains — are there obvious advantages of cycle
> transactions over a smart fee/routing system? In any sense. Path lengths,
> for example. To answer that I am going to run a simulation, but also would
> appreciate your opinions.
>
> Best,
> Dmytro
>
> From: René Pickhardt 
> Date: Sunday, 1 July 2018 at 13:59
> To: Dmytro Piatkivskyi 
> Cc: lightning-dev 
> Subject: Re: [Lightning-dev] Rebalancing argument
>
> Hey Dmytro,
>
> thank your for your solid input to the discussion. I think we need to
> consider that the setting in the lightning network is not exactly
> comparable to the one described in the 2010 paper.
>
> 1st: the paper states in section 5.2: "It appears that a mathematical
> analysis of a transaction routing model where intermediate nodes charged a
> routing fee would require an entirely new approach since it would
> invalidate the cycle-reachability relation that forms the basis of our
> results."
> Since we have routing fees in the lightning network to my understanding
> the theorem and lemma you quoted in your medium post won't hold.
>
> 2nd: Even if we neglect the routing fees and assume the theorem still
> holds true we have two conditions that make the problem way more dynamic:
>  A) In the 

Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-06-25 Thread Olaoluwa Osuntokun
Hi René,

Speaking at a high level, the main difference between modifying autopilot
strategies (channel bootstrapping and maintenance) vs something like
splicing, is that the former is purely policy while the latter is actually a
protocol modification. With respect to difficulty, the first option (in lnd
at least) requires a dev to work solely at a high level (implementing a
series of "pure" interfaces). Something like splicing, on the other hand,
requires a bit more low-level knowledge of Bitcoin, the protocol, and also
specific details of an implementation (funding channels, signing, sync,
etc).

Splicing is likely something to be included (along with many other things on
our various wish lists) within BOLT 1.1, which will start to be "officially"
drafted late fall of this year. However of course it's possible for
implementations to start to draft up working versions, reserving a temporary
feature bit.

> The people from lightning labs told me that they have currently started
> working on splicing

Yep, I have a branch locally that has started a full async version of
splicing. Mostly started to see if any implementation level details would be
a surprise, compared to how we think it all should work in our heads.

> but even though it seems technically straight forward t

Well the full async implementation may be a bit involved, depending on the
architecture of the implementation (see the second point below).

In the abstract, I'd say a splicing proposal should include the following:

  * a generic message for both splice in/out (a rough sketch of such a message follows this list)
* this allows both sides to schedule/queue up possible changes,
  opportunistically piggy-backing them on top of the other side's
  initiation
* most of the channel params can also be re-negotiated at this point;
  another upside is this effectively allows garbage collecting old
  revocation state
  * fully async splice in/out
 * async is ideal as we don't need to block channel operation for
   confirmations, this greatly improves the UX of the process
 * async splice in should use a technique similar to what Conner has
   suggested in the past [0], otherwise it would need to block :(
  * a sort of pre-announcement gossip messages
 * purpose of this is to signal to other nodes "hey this channel is
   about to change outpoints, but you can keep routing through it"
 * otherwise, if this doesn't exist, then nodes would interpret the
   action as a close then open of a channel, rather than a re-allocation
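
Purely as a sketch of the first bullet, one shape a combined splice in/out
message could take; the type number, field names, and layout here are made up
for illustration, not a proposed wire format:

    import struct

    def encode_splice_add(channel_id: bytes, splice_ins, splice_outs, feerate_per_kw: int) -> bytes:
        assert len(channel_id) == 32
        msg = struct.pack('>H', 40000)                  # hypothetical message type
        msg += channel_id
        msg += struct.pack('>I', feerate_per_kw)
        msg += struct.pack('>H', len(splice_ins))
        for outpoint, value_sat in splice_ins:          # inputs being spliced in
            msg += outpoint + struct.pack('>Q', value_sat)
        msg += struct.pack('>H', len(splice_outs))
        for script, value_sat in splice_outs:           # outputs being spliced out
            msg += struct.pack('>H', len(script)) + script + struct.pack('>Q', value_sat)
        return msg

    # Either side can queue several of these and piggy-back them on the other
    # side's splice initiation, batching everything into one on-chain transaction.
    example = encode_splice_add(b'\x00' * 32,
                                [(b'\x11' * 36, 250_000)],              # (outpoint, amount)
                                [(b'\x00\x14' + b'\x22' * 20, 100_000)],
                                253)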

Jumping down to a lower level detail, the inclusion of a sort of "scheduler"
for splicing can also allow implementations to greatly batch all their
operations. One example is using a splicing session initiated by the remote
party to open channels, send regular on-chain payments, CPFP pending
sweeps/commitments, etc.

[0]:
https://github.com/lightningnetwork/lightning-rfc/issues/280#issuecomment-388269599

-- Laolu


On Mon, Jun 25, 2018 at 3:10 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> I found a mail from 6 months ago on this list ( c.f.:
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-December/000865.html
>  )
> in which it was stated that there was a plan to include a splicing protocol
> as BOLT 1.1 (On a side note I wonder whether it would make more sense to
> include splicing to BOLT 3?) I checked out the git repo and issues and
> don't see that anyone is currently working on that topic and that it hasn't
> been included yet. Am I correct?
> If noone works on this at the moment and the spec is still needed I might
> take the initiative on that one over the next weeks. If someone is working
> on this I would kindly offer my support.
>
> The background for my question: Last weekend I have been attending the 2nd
> lightninghackday in Berlin and we had quite some intensive discussions
> about the autopilot feature and splicing. (c.f. a summary can be found on
> my blog:
> https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin
> )
>
> The people from lightning labs told me that they have currently started
> working on splicing, but even though it seems technically straightforward
> the protocols should also be formalized. Previously I planned working on
> improving the intelligence of the autopilot feature of the lightning
> network however on the weekend I got convinced that splicing should be much
> higher priority and the process should be specified in the lightning rfc.
>
> Also it would be nice if someone would be willing to help out improving
> the quality of the spec that I would create since it will be my first time
> adding work to such a formal rfc.
>
> best Rene
>
>
> --
> www.rene-pickhardt.de
> 
>
> Skype: rene.pickhardt
>
> mobile: +49 (0)176 5762 3618
> 

Re: [Lightning-dev] Scriptless Scripts with ECDSA

2018-05-07 Thread Olaoluwa Osuntokun
FWIW, Conner pointed out that the initial ZK proof for the correctness of the
Paillier params (even w/ usage of bulletproofs) has multiple rounds of
interaction, iirc up to 5+ rounds (with additional pipelining).

-- Laolu

On Mon, May 7, 2018 at 5:14 PM Olaoluwa Osuntokun <laol...@gmail.com> wrote:

> Actually, just thought about this a bit more and I think it's possible to
> deploy this in unison with (or after) any sort of SS based on schnorr
> becoming possible in Bitcoin. My observation is that since both techniques
> are based on the same underlying technique (revealing a secret value in a
> signature) and they center around leveraging the onion payload to drop off a
> payment point (G*a, or G*a_1*a_2*a_3, etc), then the disclosure within the
> _links_ can be heterogeneous, as the same secret is still revealed in an
> end-to-end manner.
>
> As an illustration, consider: A <-> B <-> C. The A <-> B link could use the
> 2pc Paillier technique, while the B <-> C link could use the OG SS technique
> based on schnorr. If I'm correct, then this would mean that we can deploy
> both techniques, without worrying about fragmenting the network due to the
> existence of two similar but incompatible e2e payment routing schemes!
>
> -- Laolu
>
>
> On Mon, May 7, 2018 at 4:57 PM Olaoluwa Osuntokun <laol...@gmail.com>
> wrote:
>
>> Hi Pedro,
>>
>> Very cool stuff! When I originally discovered the Lindell's technique, my
>> immediate thought was the we could phase this in as a way to
>> _immediately_ (no
>> additional Script upgrades required), replace the regular 2-of-2
>> mulit-sig with
>> a single p2wkh. The immediate advantages of this would: be lower fees for
>> opening/closing channels (as the public key script, and witness are
>> smaller),
>> openings and cooperative close transactions would blend in with the
>> anonymity
>> set of regular p2wkh transactions, and finally the htlc timeout+success
>> transactions can be made smaller as we can remove the multi-sig. The
>> second
>> benefit is nerfed a bit if the channel are advertised, but non-advertised
>> channels would be able to take advantage of this "stealth" feature.
>>
>> The upside of the original application I hand in mind is that it wouldn't
>> require any end-to-end changes, as it would only be a link level change
>> (diff
>> output for the funding transaction). If we wanted to allow these styles of
>> channels to be used outside of non-advertised channels, then we would
>> need to
>> update the way channels are verified in the gossip layer.
>>
>> Applying this to the realm of allowing us to use randomized payment
>> identifiers
>> across the route is obviously much, much doper. So then the question
>> would be
>> what the process of integrating the scheme into the existing protocol
>> would
>> look like. The primary thing we'd need to account for is the additional
>> cryptographic overhead this scheme would add if integrated. Re-reviewing
>> the
>> paper, there's an initial setup and verification phase (which was omitted
>> from
>> y'alls note for brevity) where both parties need to complete before the
>> actually signing process can take place. Ideally, we can piggy-back this
>> setup
>> on top of the existing accept_channel/open_channel dance both sides need
>> to go
>> through in order to advance the channel negotiation process today.
>>
>> Conner actually started to implement this when we first discovered the
>> scheme,
>> so we have a pretty good feel w.r.t the implementation of the initial set
>> of
>> proofs. The three proofs required for the set up phase are:
>>
>>   1. A proof that that the Paillier public key is well formed. In the
>> paper
>>   they only execute this step for the party that wishes to _obtain_ the
>>   signature. In our case, since we'll need to sign for HTLCs in both
>>   directions, but parties will need to execute this step.
>>
>>   2. A dlog proof for the signing keys themselves. We already do this
>> more or
>>   less, as if the remote party isn't able to sign with their target key,
>> then
>>   we won't be able to update the channel, or even create a valid
>> commitment in
>>   the first place.
>>
>>   3. A proof that value encrypted (the Paillier ciphertext) is actually
>> the
>>   dlog of the public key to be used for signing. (as an aside this is the
>> part
>>   of the protocol that made me do a double take when first reading it:
>> using one
>>   c

Re: [Lightning-dev] Scriptless Scripts with ECDSA

2018-05-07 Thread Olaoluwa Osuntokun
Hi Pedro,

Very cool stuff! When I originally discovered Lindell's technique, my
immediate thought was that we could phase this in as a way to _immediately_
(no additional Script upgrades required) replace the regular 2-of-2 multi-sig
with a single p2wkh. The immediate advantages of this would be: lower fees
for opening/closing channels (as the public key script and witness are
smaller), openings and cooperative close transactions would blend in with the
anonymity set of regular p2wkh transactions, and finally the htlc
timeout+success transactions can be made smaller as we can remove the
multi-sig. The second benefit is nerfed a bit if the channel is advertised,
but non-advertised channels would be able to take advantage of this "stealth"
feature.

The upside of the original application I had in mind is that it wouldn't
require any end-to-end changes, as it would only be a link level change (diff
output for the funding transaction). If we wanted to allow these styles of
channels to be used outside of non-advertised channels, then we would need to
update the way channels are verified in the gossip layer.

Applying this to the realm of allowing us to use randomized payment
identifiers across the route is obviously much, much doper. So then the
question would be what the process of integrating the scheme into the
existing protocol would look like. The primary thing we'd need to account for
is the additional cryptographic overhead this scheme would add if integrated.
Re-reviewing the paper, there's an initial setup and verification phase
(which was omitted from y'alls note for brevity) which both parties need to
complete before the actual signing process can take place. Ideally, we can
piggy-back this setup on top of the existing accept_channel/open_channel
dance both sides need to go through in order to advance the channel
negotiation process today.

Conner actually started to implement this when we first discovered the
scheme, so we have a pretty good feel w.r.t the implementation of the initial
set of proofs. The three proofs required for the set up phase are:

  1. A proof that the Paillier public key is well formed. In the paper they
  only execute this step for the party that wishes to _obtain_ the signature.
  In our case, since we'll need to sign for HTLCs in both directions, both
  parties will need to execute this step.

  2. A dlog proof for the signing keys themselves. We already do this more or
  less, as if the remote party isn't able to sign with their target key, then
  we won't be able to update the channel, or even create a valid commitment
  in the first place.

  3. A proof that the value encrypted (the Paillier ciphertext) is actually
  the dlog of the public key to be used for signing. (as an aside, this is
  the part of the protocol that made me do a double take when first reading
  it: using one cryptosystem to encrypt the private key of another
  cryptosystem in order to construct a 2pc to allow signing in the latter
  cryptosystem! soo clever! a toy sketch of this homomorphism follows below)
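
For reference, a toy sketch of the Paillier additive homomorphism that makes
this trick work; the primes below are far too small for real use and the code
is illustration only (Python 3.9+ for math.lcm and modular inverse via pow):

    import math, random

    def paillier_keygen(p, q):
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)          # valid because we use g = n + 1 below
        return (n,), (n, lam, mu)

    def encrypt(pub, m):
        (n,) = pub
        n2 = n * n
        r = random.randrange(2, n)    # toy: skip the gcd(r, n) == 1 check
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(priv, c):
        n, lam, mu = priv
        n2 = n * n
        return (((pow(c, lam, n2) - 1) // n) * mu) % n

    pub, priv = paillier_keygen(1_000_003, 999_983)   # toy primes
    c1, c2 = encrypt(pub, 41), encrypt(pub, 1)
    # Multiplying ciphertexts adds the underlying plaintexts:
    assert decrypt(priv, (c1 * c2) % (pub[0] ** 2)) == 42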

First, we'll examine the initial proof. This only needs to be done once by
both parties AFAICT. As a result, we may be able to piggyback this onto the
initial channel funding steps. Reviewing the paper cited in the Lindell paper
[1], it appears this would take 1 RTT, so this shouldn't result in any
additional round trips during the funding process. We should be able to use
a Paillier modulus of 2048 bits, so nothing too crazy. This would just result
in a slightly bigger opening message.

Skipping the second proof as it's pretty standard.

The third proof as described (Section 6 of the Lindell paper) is interactive.
It also contains a ZK range proof as a sub-protocol which, as described in
Appendix A, is also interactive. However, it was pointed out to us by Omer
Shlomovits on the lnd slack that we can actually replace their custom range
proofs with Bulletproofs. This would make this section non-interactive,
allowing the proof itself to take 1.5 RTT AFAICT. Additionally, this would
only need to be done once at the start, as AFAICT we can re-use the
encryption of the secp256k1 private key of both parties.

The current channel opening process requires 2 RTT, so it seems that we'd be
able to easily piggy back all the opening proofs on top of the existing
funding protocol. The main cost would be the increased size of these opening
messages, and also the additional computational cost of operations within the
Paillier modulus and the new range proof.

The additional components that would need to be modified are the process of
adding+settling an HTLC, and also the onion payload that drops off the point
whose dlog is r_1*alpha. Within the current protocol, adding and settling an
HTLC are more or less non-interactive: we have a single message for each,
which is then staged to be committed in new commitments for both parties.
With this new scheme (if I follow it correctly), adding an HTLC now requires
N RTT:
  1. Alice sends A = G*alpha to Bob. 

Re: [Lightning-dev] Receiving via unpublished channels

2018-05-07 Thread Olaoluwa Osuntokun
AFAIK, all the other implementations already do this (lnd does at least
[1]). Otherwise, it wouldn't be possible to properly utilize routing hints.
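
For context, a minimal sketch of the BOLT 11 `r` field entry this information
feeds into; the numbers below are hypothetical and would be copied from the
channel_update the peer sends for the unpublished channel:

    import struct

    def routing_hint(peer_pubkey: bytes, short_channel_id: int,
                     fee_base_msat: int, fee_proportional_millionths: int,
                     cltv_expiry_delta: int) -> bytes:
        # One hop of a BOLT 11 `r` tagged field.
        assert len(peer_pubkey) == 33
        return (peer_pubkey
                + struct.pack('>Q', short_channel_id)
                + struct.pack('>I', fee_base_msat)
                + struct.pack('>I', fee_proportional_millionths)
                + struct.pack('>H', cltv_expiry_delta))

    hint = routing_hint(b'\x02' + b'\x11' * 32, 0x0001_0000_0001, 1000, 1, 144)
    assert len(hint) == 51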

> I want to ask the other LN implementations (lnd, eclair, ucoin, lit)

As an aside, what's "ucoin"? Searched for a bit and didn't find anything
notable.

[1]:
https://github.com/lightningnetwork/lnd/blob/master/discovery/gossiper.go#L1747

On Thu, Apr 26, 2018 at 4:35 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> While implementing support for `r` field in invoices, I stumbled upon some
> issues regarding *creating* invoices with `r` fields.
>
> In order to receive via an unpublished channel, we need to know what
> onLightning fees the other side of that channel wants to charge.  We cannot
> use our own onLightning fees because our fees apply if we were forwarding
> to the other side.
>
> However, in case of an unpublished channel, we do not send
> channel_announcement, and in that case we do not send channel_update.  So
> the other side of the channel never informs us of the onLightning fees they
> want to charge if we would receive funds by this channel.
>
> An idea we want to consider is to simply send `channel_update` as soon as
> we lock in the channel:
> https://github.com/ElementsProject/lightning/pull/1330#issuecomment-383931817
>
> I want to ask the other LN implementations (lnd, eclair, ucoin, lit) if we
> should consider standardizing this behavior (i.e. send `channel_update`
> after lockin  regardless of published/unpublished state).  It seems
> back-compatible: software which does not expect this behavior will simply
> drop the `channel_update` (as they do not follow a `channel_announcement`).
>
> In any case, what was the intended way to get the onLightning fee rates to
> put into invoice `r` fields for private routes?
>
> Regards,
> ZmnSCPxj


Re: [Lightning-dev] Trustless WatchTowers?

2018-04-16 Thread Olaoluwa Osuntokun
Hi ZmnSCPxj,

> It seems to me, that the only safe way to implement a trustless
WatchTower,
> is for the node to generate a fully-signed justice transaction,
IMMEDIATELY
> after every commitment transaction is revoked, and transmit it to the
> WatchTower.

No, one doesn't need to transmit the entire justice transaction. Instead,
the client simply sends out the latest items in the script template, and a
series of _signatures_ for the various breach outputs. The pre-generated
signature means that the server is *forced* to reproduce the justice
transaction that satisfies the latest template and signature. Upfront, free
parameters such as breach bonus (or w/e else) can be negotiated.

> The WatchTower would have to store each and every justice transaction it
> received, and would not be able to compress it or use various techniques
to
> store data efficiently.

In our current implementation, we've abandoned the "savings" from
compressing the shachain/elkrem tree. When one factors in the space
complexity due to *just* the commitment signatures, the savings from
compression become less attractive. Going a step farther, once you factor in
the space complexity of the 2-stage HTLC claims, then the savings from
compressing the revocation tree become insignificant.

It's also worth pointing out that if the server is able to compress the
revocation tree, then they're necessarily linking new breach payloads with a
particular channel. Another downside is that if you go with revocation tree
compression, then all updates *must* be sent in order, and updates cannot be
*skipped*.

As a result of these downsides, our current implementation goes back to the
ol' "encrypted blob" approach. One immediate benefit of this approach is
that the outsourcing protocol isn't as tightly coupled to the current
_commitment protocol_. Instead, the internal payload can be typed, allowing
the server to dispatch the proper breach protocol based on the commitment
type. The blob approach can also support a "swap" protocol which is required
for commitment designs that allow for O(1) outsourcer state per-client, like
the scheme I presented at the last Scaling Bitcoin.
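
As a rough sketch of the encrypted blob flow (the exact construction below is
illustrative, not lnd's, and the toy XOR stream stands in for a real AEAD):
the tower indexes an opaque blob by a hint derived from the breach txid, and
can only decrypt it once that transaction actually appears on-chain:

    import hashlib

    def _keystream(key: bytes, length: int) -> bytes:
        out, counter = b'', 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
            counter += 1
        return out[:length]

    def make_blob(breach_txid: bytes, typed_payload: bytes):
        hint = hashlib.sha256(breach_txid).digest()[:16]
        key = hashlib.sha256(b'blob-key' + breach_txid).digest()
        blob = bytes(a ^ b for a, b in zip(typed_payload, _keystream(key, len(typed_payload))))
        return hint, blob

    def tower_try_decrypt(store, seen_txid: bytes):
        hint = hashlib.sha256(seen_txid).digest()[:16]
        if hint not in store:
            return None
        key = hashlib.sha256(b'blob-key' + seen_txid).digest()
        blob = store[hint]
        return bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))

    store = {}
    txid = b'\xab' * 32
    hint, blob = make_blob(txid, b'\x01' + b'sigs + script template params')
    store[hint] = blob
    assert tower_try_decrypt(store, txid) == b'\x01' + b'sigs + script template params'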

-- Laolu


On Sun, Apr 15, 2018 at 8:32 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> Nicolas Dorier was requesting additional hooks in c-lightning for a simple
> WatchTower system:
> https://github.com/ElementsProject/lightning/issues/1353
>
> Unfortunately I was only able to provide an interface which requires a
> *trusted* WatchTower.  Trust is of course a five-letter word and should not
> be used in polite company.
>
> My key problem is that I provide enough information to the WatchTower for
> the WatchTower to be able to create the justice transaction by itself.  If
> so, the WatchTower could just make the justice transaction output to itself
> and the counterparty, so that the WatchTower and the counterparty can
> cooperate to steal the channel funds: the counterparty publishes a revoked
> transaction, the WatchTower writes a justice transaction on it that splits
> the earnings between itself and the counterparty.
>
> It seems to me, that the only safe way to implement a trustless
> WatchTower, is for the node to generate a fully-signed justice transaction,
> IMMEDIATELY after every commitment transaction is revoked, and transmit it
> to the WatchTower.  The WatchTower would have to store each and every
> justice transaction it received, and would not be able to compress it or
> use various techniques to store data efficiently.  The WatchTower would not
> have enough information to regenerate justice transactions (and in
> particular would not be able to create a travesty-of-justice transaction
> that pays out to itself rather than the protected party).  In practice this
> would require that node software also keep around those transactions until
> some process has ensured that the WatchTower has received the justice
> transactions.
>
> Is there a good way to make trustless WatchTowers currently or did this
> simply not reach BOLT v1.0?
>
> Regards,
> ZmnSCPxj


Re: [Lightning-dev] Improving the initial gossip sync

2018-02-25 Thread Olaoluwa Osuntokun
> With that said, this should instead be a distinct `chan_update_horizon`
> message (or w/e name). If a particular bit is set in the `init` message,
> then the next message *both* sides send *must* be `chan_update_horizon`.

Tweaking this a bit: if we make it so that no channel updates at all are sent
unless the other side sends this message, then both parties can precisely
control their initial load, and even whether they *want* channel_update
messages at all.

Purely routing nodes don't need any updates at all. In the case they wish to
send (assumed to be infrequent in this model), they'll get the latest update
after their first failure.

Similarly, leaf/edge nodes can opt to receive the latest updates if they
wish to minimize payment latency due to routing failures that are the result
of dated information.

IMO, the only case where a node would want the most up to date link policy
state is for optimization/analysis, or to minimize payment latency at the
cost of additional load.
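
A rough sketch of such a horizon/filter (the message name and layout here are
made up): the receiver states the timestamp window it cares about, and the
sender relays only cached channel_updates that fall inside it:

    import struct

    def encode_update_horizon(chain_hash: bytes, first_timestamp: int, timestamp_range: int) -> bytes:
        assert len(chain_hash) == 32
        return chain_hash + struct.pack('>II', first_timestamp, timestamp_range)

    def updates_to_relay(cached_updates, first_timestamp, timestamp_range):
        return [u for u in cached_updates
                if first_timestamp <= u['timestamp'] < first_timestamp + timestamp_range]

    cached = [{'scid': 1, 'timestamp': 1_519_000_000},
              {'scid': 2, 'timestamp': 1_519_500_000}]
    print(updates_to_relay(cached, 0, 0))                     # routing node: nothing
    print(updates_to_relay(cached, 1_519_450_000, 86_400))    # edge node: freshest only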

--Laolu

On Fri, Feb 23, 2018 at 4:45 PM Olaoluwa Osuntokun <laol...@gmail.com>
wrote:

> Hi Rusty,
>
> > 1. query_short_channel_id
> > IMPLEMENTATION: trivial
>
> *thumbs up*
>
> > 2. query_channel_range/reply_channel_range
> > IMPLEMENTATION: requires channel index by block number, zlib
>
> For the sake of expediency of deployment, if we add a byte (or two) to
> denote the encoding/compression scheme, we can immediately roll out the
> vanilla (just list the ID's), then progressively roll out more
> context-specific optimized schemes.
>
> > 3. A gossip_timestamp field in `init`
> > This is a new field appended to `init`: the negotiation of this feature
> bit
> > overrides `initial_routing_sync`
>
> As I've brought up before, from my PoV, we can't append any additional
> fields to the innit message as it already contains *two* variable sized
> fields (and no fixed size fields). Aside from this, it seems that the
> `innit` message should be simply for exchange versioning information,
> which
> may govern exactly *which* messages are sent after it. Otherwise, by adding
> _additional_ fields to the `innit` message, we paint ourselves in a corner
> and can never remove it. Compared to using the `innit` message to set up
> the
> initial session context, where we can safely add other bits to nullify or
> remove certain expected messages.
>
> With that said, this should instead be a distinct `chan_update_horizon`
> message (or w/e name). If a particular bit is set in the `init` message,
> then the next message *both* sides send *must* be `chan_update_horizon`.
>
> Another advantage of making this a distinct message, is that either party
> can at any time update this horizon/filter to ensure that they only receive
> the *freshest* updates.Otherwise, one can image a very long lived
> connections (say weeks) and the remote party keeps sending me very dated
> updates (wasting bandwidth) when I only really want the *latest*.
>
> This can incorporate decker's idea about having a high+low timestamp. I
> think this is desirable as then for the initial sync case, the receiver can
> *precisely* control their "verification load" to ensure they only process a
> particular chunk at a time.
>
>
> Fabrice wrote:
> > We could add a `data` field which contains zipped ids like in
> > `reply_channel_range` so we can query several items with a single
> message ?
>
> I think this is an excellent idea! It would allow batched requests in
> response to a channel range message. I'm not so sure we need to jump
> *straight* to compressing everything however.
>
> > We could add an additional `encoding_type` field before `data` (or it
> > could be the first byte of `data`)
>
> Great minds think alike :-)
>
>
> If we're in rough agreement generally about this initial "kick can"
> approach, I'll start implementing some of this in a prototype branch for
> lnd. I'm very eager to solve the zombie churn, and initial burst that can
> be
> very hard on light clients.
>
> -- Laolu
>
>
> On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin <fabrice.dro...@acinq.fr>
> wrote:
>
>> On 20 February 2018 at 02:08, Rusty Russell <ru...@rustcorp.com.au>
>> wrote:
>> > Hi all,
>> >
>> > This consumed much of our lightning dev interop call today!  But
>> > I think we have a way forward, which is in three parts, gated by a new
>> > feature bitpair:
>>
>> We've built a prototype with a new feature bit `channel_range_queries`
>> and the following logic:
>> When you receive their init message and check their local features
>> - if they set `initial_routing_sync` and `channe

Re: [Lightning-dev] Improving the initial gossip sync

2018-02-23 Thread Olaoluwa Osuntokun
Hi Rusty,

> 1. query_short_channel_id
> IMPLEMENTATION: trivial

*thumbs up*

> 2. query_channel_range/reply_channel_range
> IMPLEMENTATION: requires channel index by block number, zlib

For the sake of expediency of deployment, if we add a byte (or two) to
denote the encoding/compression scheme, we can immediately roll out the
vanilla scheme (just listing the IDs), then progressively roll out more
context-specific optimized schemes.
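
A quick sketch of that idea: one byte selects the encoding, with 0 being a
plain array of 8-byte short channel ids and 1 the same array compressed with
zlib (the type values here are arbitrary):

    import struct, zlib

    UNCOMPRESSED, ZLIB = 0, 1

    def encode_scids(scids, encoding=ZLIB) -> bytes:
        raw = b''.join(struct.pack('>Q', s) for s in sorted(scids))
        body = zlib.compress(raw, 9) if encoding == ZLIB else raw
        return bytes([encoding]) + body

    def decode_scids(data: bytes):
        encoding, body = data[0], data[1:]
        raw = zlib.decompress(body) if encoding == ZLIB else body
        return [struct.unpack('>Q', raw[i:i + 8])[0] for i in range(0, len(raw), 8)]

    # short_channel_id = block << 40 | tx_index << 16 | output_index
    ids = [(500_000 + i) << 40 | (i % 7) << 16 | 1 for i in range(100)]
    assert decode_scids(encode_scids(ids)) == sorted(ids)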

> 3. A gossip_timestamp field in `init`
> This is a new field appended to `init`: the negotiation of this feature
bit
> overrides `initial_routing_sync`

As I've brought up before, from my PoV, we can't append any additional
fields to the `init` message as it already contains *two* variable sized
fields (and no fixed size fields). Aside from this, it seems that the
`init` message should simply be for exchanging versioning information, which
may govern exactly *which* messages are sent after it. Otherwise, by adding
_additional_ fields to the `init` message, we paint ourselves in a corner
and can never remove it. Compare that to using the `init` message to set up
the initial session context, where we can safely add other bits to nullify
or remove certain expected messages.

With that said, this should instead be a distinct `chan_update_horizon`
message (or w/e name). If a particular bit is set in the `init` message,
then the next message *both* sides send *must* be `chan_update_horizon`.

Another advantage of making this a distinct message is that either party
can at any time update this horizon/filter to ensure that they only receive
the *freshest* updates. Otherwise, one can imagine a very long lived
connection (say weeks) where the remote party keeps sending me very dated
updates (wasting bandwidth) when I only really want the *latest*.

This can incorporate decker's idea about having a high+low timestamp. I
think this is desirable as then for the initial sync case, the receiver can
*precisely* control their "verification load" to ensure they only process a
particular chunk at a time.


Fabrice wrote:
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single message
?

I think this is an excellent idea! It would allow batched requests in
response to a channel range message. I'm not so sure we need to jump
*straight* to compressing everything however.

> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)

Great minds think alike :-)


If we're in rough agreement generally about this initial "kick the can"
approach, I'll start implementing some of this in a prototype branch for
lnd. I'm very eager to solve the zombie churn and the initial burst that can
be very hard on light clients.

-- Laolu


On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin 
wrote:

> On 20 February 2018 at 02:08, Rusty Russell  wrote:
> > Hi all,
> >
> > This consumed much of our lightning dev interop call today!  But
> > I think we have a way forward, which is in three parts, gated by a new
> > feature bitpair:
>
> We've built a prototype with a new feature bit `channel_range_queries`
> and the following logic:
> When you receive their init message and check their local features
> - if they set `initial_routing_sync` and `channel_range_queries` then
> do nothing (they will send you a `query_channel_range`)
> - if they set `initial_routing_sync` and not `channel_range_queries`
> then send your routing table (as before)
> - if you support `channel_range_queries` then send a
> `query_channel_range` message
>
> This way new and old nodes should be able to understand each other
>
> > 1. query_short_channel_id
> > =
> >
> > 1. type: 260 (`query_short_channel_id`)
> > 2. data:
> >* [`32`:`chain_hash`]
> >* [`8`:`short_channel_id`]
>
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single
> message ?
>
> > 1. type: 262 (`reply_channel_range`)
> > 2. data:
> >* [`32`:`chain_hash`]
> >* [`4`:`first_blocknum`]
> >* [`4`:`number_of_blocks`]
> >* [`2`:`len`]
> >* [`len`:`data`]
>
> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)
>
> > Appendix A: Encoding Sizes
> > ==
> >
> > I tried various obvious compression schemes, in increasing complexity
> > order (see source below, which takes stdin and spits out stdout):
> >
> > Raw = raw 8-byte stream of ordered channels.
> > gzip -9: gzip -9 of raw.
> > splitgz: all blocknums first, then all txnums, then all outnums,
> then gzip -9
> > delta: CVarInt encoding:
> blocknum_delta,num,num*txnum_delta,num*outnum.
> > deltagz: delta, with gzip -9
> >
> > Corpus 1: LN mainnet dump, 1830 channels.[1]
> >
> > Raw: 14640 bytes
> > gzip -9: 6717 bytes
> >   

Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning

2018-02-06 Thread Olaoluwa Osuntokun
Hi ZmnSCPxj,

> This is excellent work!

Thanks!

> I think, a `globalfeatures` odd bit could be used for this.  As it is
> end-to-end, `localfeatures` is not appropriate.

Yep, it would need to be a global feature bit. In the case that we're
sending to a destination which isn't publicly advertised, perhaps an
extension to BOLT-11 could be made to signal receiver support.

> I believe, currently, fees have not this super-linear component

Yep they don't. Arguably, we should also have a component that scales
according to the proposed CLTV value of the outgoing HTLC. At Scaling
Bitcoin Stanford, Aviv Zohar gave a talk titled "How to Charge Lightning"
where the authors analyzed the possible evolution of fees on the network
(and also suggested adding this super-linear component to extend the
lifetime of channels).  However, the talk itself focused on a very simple
"mega super duper hub" topology. Towards the end he alluded to a forthcoming
paper that had more comprehensive analysis of more complex topologies. I
look forward to the publication of their finalized work.

> Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting
> the payment over multiple flows will be, very likely, more expensive,
> compared to using a single flow.

Well, it remains to be seen how the fee structure on mainnet emerges once
the network is fully bootstrapped. AFAIK, most nodes running on mainnet atm
are using the default fee schedules for their respective implementations.
For example, the default fee_base_msat for lnd is 1000 msat (1 satoshi).

> I believe the `realm` byte is intended for this.

The realm byte is meant to signal "forward this to the dogecoin channel".
ATM, we just default to 0 as "Bitcoin". However, the byte itself only really
needs significance between the sender and the intermediate node, so there
isn't necessarily pressure to have a globally synchronized set of realm
bytes.

> Thus, you can route over nodes that are unaware of AMP, and only provide
> an AMP realm byte to the destination node, who is able to reconstruct
> your AMP data as per your algorithm.

Yes, the intermediate nodes don't need to be aware of the end-to-end
protocol. For the final hop, there are actually 53 free bytes (before one
needs to signal the existence of EOBs):

  * 1 byte realm
  * 8 bytes next addr (all zeroes to signal final dest)
  * 32 bytes hmac (also all zeroes for the final dest)
  * 12 bytes padding

So any combo of these bytes can be used to signal more advanced protocols to
the final destination.
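For reference, a rough layout of those final-hop bytes (illustrative field
names, not spec language):

    package sketch

    // finalHopPayload sketches the 53 bytes of the last per-hop payload
    // that are free to carry end-to-end data once the next-addr and HMAC
    // are zeroed to mark the final hop.
    type finalHopPayload struct {
        Realm    byte     // 1 byte: 0x00 = Bitcoin today
        NextAddr [8]byte  // 8 bytes: all zeroes signals the final destination
        HMAC     [32]byte // 32 bytes: also all zeroes for the final destination
        Padding  [12]byte // 12 bytes: currently unused padding
    }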


A correction from the prior email description:

> We can further modify our usage of the per-hop payloads to send
> (H(BP), s_i) to consume most of the EOB sent from sender to receiver.

This should actually be (H(s_0 || s_1 || ...), s_i). So we still allow them
to check this fingerprint to see if they have all the final shares, but
don't allow them to preemptively pull all the payments.
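A minimal sketch of the receiver-side check, assuming SHA-256 for H and raw
byte-slice shares (purely illustrative):

    package sketch

    import (
        "bytes"
        "crypto/sha256"
    )

    // sharesFingerprint computes H(s_0 || s_1 || ... || s_{n-1}).
    func sharesFingerprint(shares [][]byte) []byte {
        h := sha256.New()
        for _, s := range shares {
            h.Write(s)
        }
        return h.Sum(nil)
    }

    // haveAllShares reports whether the shares collected so far match the
    // fingerprint carried in each partial payment, i.e. whether the
    // receiver can now reconstruct the base preimage and settle.
    func haveAllShares(fingerprint []byte, shares [][]byte) bool {
        return bytes.Equal(fingerprint, sharesFingerprint(shares))
    }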


-- Laolu


On Mon, Feb 5, 2018 at 11:12 PM ZmnSCPxj  wrote:

> Good morning Laolu,
>
> This is excellent work!
>
> Some minor comments...
>
>
> (Atomic Multi-path Payments). It can be experimented with on Lightning
> *today* with the addition of a new feature bit to gate this new
> feature. The beauty of the scheme is that it requires no fundamental
> changes
> to the protocol as is now, as the negotiation is strictly *end-to-end*
> between sender and receiver.
>
>
> I think, a `globalfeatures` odd bit could be used for this.  As it is
> end-to-end, `localfeatures` is not appropriate.
>
>   - Potential fee savings for larger payments, contingent on there being a
> super-linear component to routed fees. It's possible that with
> modifications to the fee schedule, it's actually *cheaper* to send
> payments over multiple flows rather than one giant flow.
>
>
> I believe, currently, fees have not this super-linear component.  Indeed,
> the existence of per-hop fees (`fee_base_msat`) means, splitting the
> payment over multiple flows will be, very likely, more expensive, compared
> to using a single flow.  Tiny roundoffs in computing the proportional fees
> (`fee_proportional_millionths`) may make smaller flows give a slight fee
> advantage, but I think the multiplication of per-hop fees will dominate.
>
>
>   - Using smaller payments increases the set of possible paths a partial
> payment could have taken, which reduces the effectiveness of static
> analysis techniques involving channel capacities and the plaintext
> values being forwarded.
>
>
> Strongly agree!
>
>
> In order to include the three tuple within the per-hop payload for the
> final
> destination, we repurpose the _first_ byte of the un-used padding bytes in
> the payload to signal version 0x01 of the AMP protocol (note this is a PoC
> outline, we would need to standardize signalling of these 12 bytes to
> support other protocols).
>
>
> I believe the `realm` byte is intended for this.  Intermediate nodes do

Re: [Lightning-dev] lnd on bitcoind

2018-01-31 Thread Olaoluwa Osuntokun
Segwit has been merged into btcd for some time now. It's also possible to
run lnd with bitcoind. I encourage you to check out the documentation:
https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md

In lnd, the chain backend has already been abstracted[1]. This is what
allows it to run with any of the three supported chain backends (btcd,
bitcoind, neutrino). I invite you to continue this conversation on #lnd on
Freenode.

[1]:
https://github.com/lightningnetwork/lnd/blob/master/lnwallet/interface.go
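The shape of that abstraction is roughly an interface along these lines (a
simplified, hypothetical sketch, not the actual lnwallet interface; see the
link above for the real definitions):

    package sketch

    // ChainBackend sketches the kind of interface lnd programs against so
    // that btcd, bitcoind, or neutrino can be swapped in underneath.
    type ChainBackend interface {
        // GetBestBlock returns the hash and height of the current chain tip.
        GetBestBlock() (hash [32]byte, height int32, err error)

        // GetBlockHash returns the hash of the block at the given height.
        GetBlockHash(height int64) ([32]byte, error)

        // GetUtxo fetches the pkScript of an unspent output, if it still
        // exists.
        GetUtxo(txid [32]byte, index uint32) ([]byte, error)
    }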

On Wed, Jan 31, 2018 at 12:23 PM Benjamin Mord  wrote:

> Hi,
>
> I'm not finding evidence of segwit in btcd, yet choice of golang is
> appealing to me. Can one run lnd on bitcoind?
>
> More generally speaking, is there a plan for the layer 2 / layer 1
> protocol binding to be abstracted away from the implementation on either
> side, via SPI or such?
>
> Thanks,
> Ben
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] General question on routing difficulties

2017-12-15 Thread Olaoluwa Osuntokun
Thanks for filling in some gaps in my knowledge of the internal workings
of Ripple. I see now my mental model of the system and how it compares to
what's being proposed in SpeedyMurmurs wasn't quite correct.

> In my opinion, it is interesting to look at tradeoffs and the
> necessary/sufficient guarantees for the routing algorithm in a
> decentralized payment network such as the LN before we stick to a
solution.

Agreed, and there can be many such solutions depending on particular use
cases. When switching to new routing algorithms, most of the existing code
dealing with the interaction in the link itself (how channel updates are
done, funding channels, resolving multi-hop HTLC's, how disputes are
settled on-chain, etc) can be completely reused.

> For what I understand, what you are asking/proposing is a mixture of the
> routing layer (route from sender to receiver) + onion layer (using
> “adapted”/”optimized” sphinx)+ payment layer (HTLCs).

Not exactly. I was more asking how, w/o the onion routing we do now, the
sender is able to construct an outgoing HTLC that satisfies the time lock
and fee preferences of all participants in the final route. Currently the
sender completely orchestrates the route, so it can select the total amount
and time locks such that all participants have their preferences upheld.
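To make that concrete, here's a small sketch (illustrative types, not lnd
code) of how a sender walks the intermediate hops backwards from the
receiver to derive the amount and timelock for the first outgoing HTLC:

    package sketch

    // hopPolicy captures the advertised preferences of one forwarding node.
    type hopPolicy struct {
        FeeBaseMsat        uint64
        FeeProportionalPPM uint64 // parts-per-million of the forwarded amount
        CLTVDelta          uint32
    }

    // sourceRouteParams returns the amount and absolute timelock the sender
    // must place on the first outgoing HTLC so that every hop's fee and
    // CLTV preferences are upheld.
    func sourceRouteParams(hops []hopPolicy, amtMsat uint64, finalCLTV uint32) (uint64, uint32) {
        amt, cltv := amtMsat, finalCLTV
        for i := len(hops) - 1; i >= 0; i-- {
            fee := hops[i].FeeBaseMsat + amt*hops[i].FeeProportionalPPM/1000000
            amt += fee
            cltv += hops[i].CLTVDelta
        }
        return amt, cltv
    }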

> Another proposal might consist on a payment operation that does not assume
> source-routing to start with.  There are many possibilities to investigate
> and think about.

Indeed. I haven't yet found a satisfactory solution to HTLC parameter
selection at the sender w/o a degree of source routing. The extremely naive
versions lead to an excessive total time lock value in the route, or to
senders losing more money to fees as the routes are no longer as precise.

> What you describe here is indeed a problem inherent to the original
> landmark routing mechanism. However, it is no longer an issue in
> SpeedyMurmurs. In particular, any node could be a landmark or two users
> could have a different view of what set of nodes constitute the set of
> landmarks.

Ah! I missed this aspect the first time around in my read through. Thanks
for resolving a major misunderstanding on my end (along with the usage of
shortcuts).

> All in all, it seems that there might be some misconceptions and/or
> aspects in the current draft of the paper that might need clarifications
> so that the approach is well understood. We are more than happy to
> further talk about it and answer questions, doubts or concerns that
> might arise.

Thanks for clearing up my initial misunderstandings of the protocol! I'll
give the paper (along with the works it derives from) a close read and
follow back up with any further questions. Based on your response to my
initial comments, it seems I mischaracterized the routing algorithm as
being an incremental advancement compared to the original landmark
protocol. Instead, it has gone far beyond that.

> I, however, do not agree that we should choose one routing
> approach or another based on how unbalanced channels are handled.

I didn't mean to imply that an approach should be *chosen* based on how
unbalanced channels are handled. Instead, I was highlighting how they're
handled using a source-routed protocol, to start a discussion on whether
passive rebalancing can be applied to others. As you stated earlier in our
conversation, we should examine the desirable properties of a routing
proposal so we can navigate the various trade-offs.

> My point is that I think we should not stick to one routing algorithm
> depending on how another algorithm/functionality at another layer is
> handled, at least not before we explore and fully understand the
> tradeoffs, benefits and impossibilities that we will have to face here.

Agreed. The intent of my initial response was to highlight how we handle
certain functionalities with the current algorithm so we can then begin to
investigate whether those features are possible/applicable to other
algorithms such as SpeedyMurmurs. I don't think any of us see the current
algorithm as the one and only algorithm we'll be sticking to for the
lifetime of the system. Instead, it's a stepping stone: something with a
degree of privacy built in by default that was simple enough to get the
ball rolling as far as deployment goes.

> I also believe that we might have more than just HTLC-based payments in
> the LN, but this is the topic for another long email :)

Definitely! HTLC's are just the start...

-- Laolu

On Thu, Nov 30, 2017 at 8:59 AM Pedro Moreno Sanchez <pmore...@purdue.edu>
wrote:

> Hi Laolu,
> Thanks for your detailed and interesting reply. Please see below some
> points I would like to make in some of your comments and the answers to
> your questions in the last email. And of course, I would be happy to
> further discuss with you.
>
>
> On 11/25/17 2:16 PM, Olaoluwa Osuntok

Re: [Lightning-dev] General question on routing difficulties

2017-11-27 Thread Olaoluwa Osuntokun
Hi Pedro,

I came across this paper a few weeks ago, skimmed it lightly, and noted a
few interesting aspects I wanted to dig into later. Your email reminded me
to re-read the paper, so thanks for that! Before reading the paper, I
wasn't aware of the concept of coordinate embedding, nor how that could be
leveraged in order to provide sender+receiver privacy in a payment network
using a distance-vector-like routing system. Very cool technique!


After reading the paper again, my current conclusion is that while the
protocol presents some novel traits in the design of a routing system for
payment channel based networks, it lends itself much better to a
closed-membership, credit network, such as Ripple (which is the focus of
the paper).


In Ripple, there are only a handful of gateways, and clients that seek to
interact with the network must choose their gateways *very* carefully,
otherwise consensus faults can occur, violating safety properties of the
network. It would appear that this gateway model translates nicely to the
concept of landmarks that the protocol is strongly dependent on. Ideally,
each gateway would be a landmark, and as there are a very small number of
gateways within Ripple (as you must be admitted to be a verified gateway in
the network), the parameter L (the total number of landmarks) is kept small,
which minimizes routing overhead, the average path length, etc.


When we compare Ripple to LN, we find that the two networks are nearly
polar opposites of each other. LN is an open-membership network that
requires zero initial configuration by central administrator(s). It more
closely resembles a *debit* network (a series of tubes of money), as the
funds within channels must be pre-committed in order to establish a link
between two nodes, and cannot be increased without an additional on-chain
control transaction (to add or remove funds). Additionally, AFAIK (I'm no
expert on Ripple of course), there's no concept of fees within the
network, while within LN the fee structure is a critical component of the
incentive for node operators to lift their coins onto this new layer to
provide payment routing services. Finally, in LN we rely on time-locks
in order to ensure that all transactions are atomic, which adds another set
of constraints. Ripple has no such constraint as transfers are based on
bi-lateral trust.


With that said, the primary difference with this protocol is that we
currently utilize a source-routed system which requires the sender to
know "most" of the path to the destination. I say "most" as currently,
it's possible for the receiver of a payment to use a poor man's rendezvous
system to provide the sender with a set of suffix paths from what one can
consider ad-hoc landmarks. The sender can then concatenate these with
their own paths, and construct the Sphinx routing packet which encodes
the full route. This itself only gives sender privacy: the receiver
doesn't know the identity of the sender, but the sender learns the
identity of the receiver.


We have plans to achieve proper sender/receiver privacy by extending our
Sphinx usage to leverage HORNET, such that the payment descriptor (payment
request containing details of the payment) also includes several paths
from rendezvous nodes (Rodrigo's) to the receiver. The rendezvous route
itself will be nested as a further Anonymous Header (AHDR) which includes
the information necessary to complete the onion circuit from Rodrigo to
the receiver. As onion routing is used, only Rodrigo can decrypt the
payload and finalize the route. With such a structure, the only nodes that
need to advertise their channels are nodes which seek to actively serve as
channel routers. All other nodes (phones, laptops, etc), don't need to
advertise their channels to the greater network, reducing the size of the
visible network, and also the storage and validation overhead. This serves
to extend the "scale ceiling" a bit.


My first question is: is it possible to adapt the protocol to allow each
intermediate node to communicate their time lock and fee preferences to the
sender? Currently, as the full path isn't known ahead of time, the sender
is unable to properly craft the timelocks to ensure safety+atomicity of
the payment. This would mean they don't know what the total timelock
should be on the first outgoing link. Additionally, as they don't know the
total path and the fee schedule of each intermediate node, then once
again, they don't know how much to send on the first outgoing link. It
would seem that one could extend the probing phase to allow backwards
communication by each intermediate node back to the sender, such that they
can properly craft a valid HTLC. This would increase the set-up costs of
the protocol however, and may also increase routing failures as it's
possible incompatibilities arise at run-time between the preferences of
intermediate nodes. Additionally, routes may fail as an intermediate node
consumes too many funds as their fee, 

Re: [Lightning-dev] Minutes of Dev Meeting 2017-07-10

2017-08-07 Thread Olaoluwa Osuntokun
> I think it does already:

Yep! An oversight on my part.

> So, you're suggesting SIGHASH_SINGLE|SIGHASH_ANYONECANPAY?

Precisely. The code modifications required to switch to this signing mode
are trivial.

> though it's a pretty obscure case where we want to close out many HTLCs at
> once; this is more for fee bumping I think.

Well, it's for both. In the case of a commitment transaction broadcast (for
whatever reason), each party is able to group together HTLCs expiring around
the same height (in the case that the pre-image for a bunch was never
revealed). This leads to fewer transactions on-chain, and lower cumulative
fees for either side to sweep all funds back into their primary wallet.

The fee bumping use case is also a bonus!
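As a rough sketch of the aggregation (signing and fee details elided; the
constants and types come from btcd's txscript/wire packages, but this is
illustrative, not lnd code):

    package sketch

    import (
        "github.com/btcsuite/btcd/txscript"
        "github.com/btcsuite/btcd/wire"
    )

    // Each second-level HTLC input would be signed with
    // SIGHASH_SINGLE|SIGHASH_ANYONECANPAY, committing only to itself and
    // the output at the same index.
    const htlcSigHash = txscript.SigHashSingle | txscript.SigHashAnyOneCanPay

    // coalesceHTLCSweeps merges independently signed (input, output) pairs
    // into a single sweep transaction. The pairs must keep matching indices
    // for the SINGLE|ANYONECANPAY signatures to remain valid; extra
    // inputs/outputs can then be appended purely to bump the fee.
    func coalesceHTLCSweeps(ins []*wire.TxIn, outs []*wire.TxOut) *wire.MsgTx {
        tx := wire.NewMsgTx(2)
        for i := range ins {
            tx.AddTxIn(ins[i])
            tx.AddTxOut(outs[i])
        }
        return tx
    }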


On Sat, Jul 29, 2017 at 10:36 PM Rusty Russell 
wrote:

> Rusty Russell  writes:
>
> >
> https://docs.google.com/document/d/1ng6FaOLGS7ZQEsv3kn6W-t2GzQShhD7eFPz-1yFQZm0/edit?usp=sharing
>
> Some feedback, since I missed what seems like a very productive
> discussion!
>
> > HTLC floor created by second-level HTLC transactions
> > Pierre points out that should choose HTLC min high enough that don’t run
> > into issues.
> > Laolu points out this means that unable to send and claim small-ish
> > amounts on chain.
> > Laolu points out that would basically CREATE a dust output in the
> > process.
> > LAOLU SUGGESTS THAT TRIM OUTPUT SPEC PORTION SHOULD ALSO SAY DON’T
> > CREATE DUST OUTPUT ON SECOND LEVEL TX
>
> I think it does already:
>
>   For every offered HTLC, if the HTLC amount minus the HTLC-timeout fee
>   would be less than `dust_limit_satoshis` set by the transaction owner,
>   the commitment transaction MUST NOT contain that output
>
> (Similarly for received HTLCs)
>
> ie. don't create HTLC outputs which would need an HTLC tx with a dust
> output.
>
> > Don’t use sighash-all on the second-level HTLC transactions
> >   Laolu points out that this would allow us to coalesce many HTLC
> >   transactions into a single one. Saves on-chain foot print, and also
> >   allows to add more fees.  Basically like “Lighthouse” (by hearn).
>
> So, you're suggesting SIGHASH_SINGLE|SIGHASH_ANYONECANPAY?
>
> I *think* this would work, though it's a pretty obscure case where we
> want to close out many HTLCs at once; this is more for fee bumping I
> think.
>
> There are two other cases where we don't rely on the TXID, and such an
> approach would be possible:
>
> 1. Commitment tx with no HTLC outputs.
> 2. The closing transaction.
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev