Re: [Lightning-dev] Onion messages rate-limiting

2022-06-30 Thread Christian Decker
Thanks Bastien for writing up the proposal, it is simple but effective I think.

>> One issue I see w/ the first category is that a single party can flood the
>> network and cause nodes to trigger their rate limits, which then affects
>> the
>> usability of the onion messages for all other well-behaving parties.
> But that's exactly what this proposal addresses? That single party can
> only flood for a very small amount of time before being rate-limited for
> a while, so it cannot disrupt other parties that much (to be properly
> quantified by research, but it seems quite intuitive).

Indeed, it creates a tiny bubble (1-2 hops) in which an attacker can
trigger the rate-limiter, but beyond which its messages simply get
dropped. In this respect it is very similar to staggered gossip, in
which a node may send updates at an arbitrary rate, but since each node
will locally buffer these changes and aggregate them, the effective rate
that is forwarded/broadcast is such that it doesn't overwhelm the
network (parametrization and network size apart ^^).
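
The local buffering described above is essentially a token bucket. A
minimal per-peer sketch (the rate and burst parameters are made up for
illustration, they are not from the proposal):

```python
import time

class TokenBucket:
    """Per-peer rate limiter: a burst of onion messages drains the
    bucket, after which forwards are dropped until tokens refill."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop the onion message instead of forwarding
```

An attacker can burst at most `burst` messages through a peer before
being limited, which is what confines the flood to the local bubble.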

This is also an argument for not allowing onion messages over
non-channel connections, since otherwise an attacker could arbitrarily
extend their bubble to encompass every channel in the network, and could
sybil its way to covering the entire network (depending on the rate
limiter, its parameters, and timing, the attacker's bubble may extend to
more than a single hop).

Going back a step, it is also questionable whether non-channel OM
forwarding is usable at all, since nodes usually do not know about the
existence of these connections (they are not gossiped). I'd therefore
not allow non-channel forwarding, with the small exception of some local
applications where local knowledge is required; in that case the OM
should signal this clearly to the forwarding node, or rely on direct
messaging with the peer (pre-channel negotiation, etc.).

>> W.r.t this topic, one event that imo is worth pointing out is that a very
>> popular onion routing system, Tor, has been facing a severe DDoS attack
>> that
>> has lasted weeks, and isn't yet fully resolved [2].
> I don't think we can compare lightning to Tor, the only common design
> is that there is onion encryption, but the networking parts are very
> different (and the attack vectors on Tor are mostly on components that
> don't exist in lightning).

Indeed, a major difference if we insist on there being a channel is that
it is no longer easy to sybil the network, and there is no way to just
connect to a node and send it data (which is pretty much the Tor circuit
construction). So we can rely on the topology of the network to keep an
attacker constrained to its local region of the network; extending the
attacker's reach would require opening channels, i.e., wouldn't be free.

Lightning-dev mailing list

Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-30 Thread Christian Decker
Matt Corallo  writes:

> On 6/28/22 9:05 AM, Christian Decker wrote:
>> It is worth mentioning here that the LN protocol is generally not very
>> latency sensitive, and from my experience can easily handle very slow
>> signers (3-5 seconds delay) without causing too many issues, aside from
>> slower forwards in case we are talking about a routing node. I'd expect
>> routing node signers to be well below the 1 second mark, even when
> implementing more complex signer logic, including MuSig2 or nested FROST.
> In general, and especially for "edge nodes", yes, but if forwarding nodes 
> start taking a full second 
> to forward a payment, we probably need to start aggressively avoiding any 
> such nodes - while I'd 
> love for all forwarding nodes to take 30 seconds to forward to improve 
> privacy, users ideally expect 
> payments to complete in 100ms, with multiple payment retries in between.
> This obviously probably isn't ever going to happen in lightning, but getting 
> 95th percentile 
> payments down to one second is probably a good goal, something that requires 
> never having to retry 
> payments and also having forwarding nodes not take more than, say, 150ms.
> Of course I don't think we should ever introduce a timeout on the peer level 
> - if your peer went 
> away for a second and isn't responding quickly to channel updates it doesn't 
> merit closing a 
> channel, but its something we will eventually want to handle in route 
> selection if it becomes more 
> of an issue going forward.
> Matt

Absolutely agreed, and I wasn't trying to say that latency is not a
concern; I was merely pointing out that the protocol as-is is very
latency-tolerant. That doesn't mean that routers shouldn't strive to be
as fast as possible, but I think the MuSig schemes, executed over local
links, are unlikely to be problematic compared to the overall network
latency that we have anyway.

For edge nodes it's rather nice to have relaxed timings, given that they
might be on slow or flaky connections, but routers are a completely
different category.


Re: [Lightning-dev] LN Summit 2022 Notes & Summary/Commentary

2022-06-28 Thread Christian Decker
Olaoluwa Osuntokun  writes:
>> Rene Pickhardt brought up the issue of latency with regards to
>> nested/recursive MuSig2 (or nested FROST for threshold) on Bitcoin
>> StackExchange
> Not explicitly, but that strikes me as more of an implementation level
> concern. As an example, today more nodes are starting to use replicated
> database backends instead of a local embedded database. Using such a
> database means that _network latency_ is now also a factor, as committing
> new states requires round trips between the DBMS that'll increase the
> perceived latency of payments in practice. The benefit ofc is better support
> for backups/replication.
> I think in the multi-signature setting for LN, system designers will also
> need to factor in the added latency due to adding more signers into the mix.
> Also any system that starts to break up the logical portions of a node
> (signing, hosting, etc -- like Blockstream's Greenlight project), will need
> to wrangle with this as well (such is the nature of distributed systems).

It is worth mentioning here that the LN protocol is generally not very
latency sensitive, and from my experience can easily handle very slow
signers (3-5 seconds delay) without causing too many issues, aside from
slower forwards in case we are talking about a routing node. I'd expect
routing node signers to be well below the 1 second mark, even when
implementing more complex signer logic, including MuSig2 or nested FROST.

In particular remember that the LN protocol implements a batch
mechanism, with changes applied to the commitment transaction as a
batch. Not every change requires a commitment and thus a signature. This
means that while a slow signer may have an impact on payment latency, it
should generally not have an impact on throughput on the routing nodes.
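
To illustrate the batching argument, here is a toy model (the names and
numbers are illustrative, not the actual BOLT-2 state machine): many
HTLC additions share a single signing operation, so a slow signer caps
commitment frequency, not HTLC throughput.

```python
class Channel:
    """Toy model of LN commitment batching: many update_add_htlc
    messages can be queued before a single commitment_signed is sent,
    so a slow signer bounds payment latency, not throughput."""

    def __init__(self, signer_delay_s: float):
        self.signer_delay_s = signer_delay_s  # e.g. a slow remote signer
        self.pending = []
        self.signatures = 0

    def add_htlc(self, amount_msat: int):
        self.pending.append(amount_msat)  # no signature required yet

    def commit(self):
        # One (possibly slow) signing operation covers the whole batch.
        self.signatures += 1
        batch, self.pending = self.pending, []
        return batch

ch = Channel(signer_delay_s=3.0)
for amt in [1000, 2000, 3000]:
    ch.add_htlc(amt)
batch = ch.commit()
```

Three changes, one signature: the per-change signing cost amortizes
across the batch.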


Re: [Lightning-dev] Lightning RPC

2022-01-19 Thread Christian Decker
Hi Will,

> I noticed you are doing RPC stuff... I'm looking to do RPC over
> lightning itself. I started a C library called lnsocket[1], scrounged
> from clightning parts, so that I can send messages from iOS to control
> my lightning node.

Sounds interesting, and similar to commando's goals. Rusty also has a
summer of bitcoin project attempting to expose a websocket directly to
browsers in order to provide another way to communicate with your node,
and of course there's commando.

> I've got to the point with lnsocket where I can send TLVs to my node,
> and now I'm starting to think about what format the RPC commands should
> be.
> I noticed the commando c-lightning plugin just uses the JSON-RPC
> payload, but perhaps something more compact and rpc-friendly like grpc
> would be better... which is why this cln-grpc PR piqued my curiosity.

Yep, JSON-RPC is rather bad with binary data, and doesn't have any
concept of streaming. I personally like grpc because it ticks a lot of
boxes: secure transport over TLS, mutual authentication via mTLS,
possibility to add metadata to calls (technically prohibited by the
JSON-RPC spec) which can help us use macaroons/runes in future,
streaming support and compact binary format.

Having an IDL to describe the interface is also rather nice, even though
for cln-grpc we actually generate that from the JSON-RPC schemas, so
it's a bit less expressive than .proto files.

> I think the end goal of an RPC bolt would be super powerful, so that
> lnsocket could talk to any lightning node, but that could be further
> down the line. Choosing the right data format seemed like an important
> step in that direction. Would love to hear your thoughts on this!

I agree. Exchanging the transport layer underneath grpc doesn't change
the semantics, but does unlock a number of potential use-cases. I think
either the JSON-RPC or grpc interface can serve as a basis for a common
RPC definition that can have any number of bindings; since we generate
conversion code to/from JSON-RPC and grpc, we can transparently map them
back and forth.

> I've cc'd clightning/lightning-dev as well to see if anyone else is
> working on this or thinking about this stuff right now.

Definitely open to suggestions, comments and criticism: the cln-grpc [1]
crate is rather new, and will see a number of rebases and fixups, but
should be reviewable as is. The cln-plugin [2] crate is a bit less
well-fleshed-out, but has the core functionality needed for
cln-grpc-plugin which was the goal of this first exploration. The
cln-rpc [4] crate is also missing many RPC commands, but that's just
grunt work that I plan to tackle separately :-)



Re: [Lightning-dev] Split payments within one LN invoice

2021-12-17 Thread Christian Decker
I was looking into the docs [1] and stumbled over `createinvoice`, which
does almost what you need. However, it requires the preimage, and it
stores the invoice in the database, which you don't want.

However if you have access to the `hsm_secret` you could sign in the plugin
itself, completely sidestepping `lightningd`. Once you have that it should
be a couple of days work to get a PoC plugin for the coordination and
testing. From there it depends on how much polish you want to apply and
what other systems you want to embed it into.

Each recipient will have to run the plugin, otherwise they'd not
understand how to handle the payment, and creating an invoice requires a
bit more work (each payee needs to coordinate to be part of the
rendez-vous), but from the sender's point of view it's all seamless.

As for whether this is better suited for the protocol itself: could be,
probably not though. We let everybody experiment and then formalize and
standardize the best ideas from the community, so it may make its way
into the spec, but it would need to be implemented, tested and popular
enough to warrant everybody having to implement yet another feature. In
this case it's more of a fit for a bLIP, which is less formal and better
matches the fact that only a small part of the network needs to
implement it (only payees need to coordinate and forward; senders and
everybody else don't care).



On Fri, 17 Dec 2021, 11:22 Ronan McGovern,  wrote:

> Hi ZmnSCPxj,
> So, are you saying there needs to be a new command "signfakeinvoice" at
> the protocol level?
> If that was there, how much work/hours would it be to build the poor man's
> rendez-vous at the application level?
> If the above were to be implemented, when the payer pays the invoice, it's
> then automatically split and sent to two (or more) recipients?
> Lastly, would it make more sense to have split payments at the protocol
> level?
> Thanks, Ronan
> On Thu, Dec 16, 2021 at 11:44 PM ZmnSCPxj  wrote:
>> Good morning William,
>> > Has anyone coded up a 'Poor man's rendez-vous' demo yet? How hard would
>> > it be, could it be done with a clightning plugin perhaps?
>> Probably not *yet*; it needs each intermediate payee (i.e. the one that
>> is not the last one) to sign an invoice for which it does not know the
>> preimage.
>> Maybe call such a command `signfakeinvoice`.
>> However, if a command to do the above is implemented (it would have to
>> generate and sign the invoice, but not insert it into the database at all),
>> then intermediate payees can use `htlc_accepted` hook for the "rendez-vous".
>> So to generate the invoice:
>> * Arrange the payees in some agreed fixed order.
>> * Last payee generates a normal invoice.
>> * From last payee to second, each one:
>>   * Passes its invoice to the previous payee.
>>   * The previous payee then creates its own signed invoice with
>> `signfakeinvoice` to itself, adding its payout plus a fee budget, as well
>> as adding its own delay budget.
>>   * The previous payee plugin stores the next-payee invoice and the
>> details of its own invoice to db, such as by `datastore` command.
>> * The first payee sends the sender the invoice.
>> On payment:
>> * The sender sends the payment to the first hop.
>> * From first payee to second-to-last:
>>   * Triggers `htlc_accepted` hook, and plugin checks if the incoming
>> payment has a hash that is in this scheme stored in the database.
>>   * The plugin gathers `htlc_accepted` hook invocations until they sum up
>> to the expected amount (this handles multipath between payees).
>>   * The plugin marks that it has gathered all `htlc_accepted` hooks for
>> that hash in durable storage a.k.a. `datastore` (this handles a race
>> condition where the plugin is able to respond to some `htlc_accepted`
>> hooks, but the node is restarted before all of them were able to be
>> recorded by C-Lightning in its own database --- this makes the plugin skip
>> the "gathering" step above, once it has already gathered them all before).
>>   * The plugin checks if there is already an outgoing payment for that
>> hash (this handles the case where our node gets restarted in the meantime
>> --- C-Lightning will reissue `htlc_accepted` on startup)
>> * If the outgoing payment exists and is pending, wait for it to
>> resolve to either success or failure.
>> * If the outgoing payment exists and succeeded, resolve all the
>> gathered `htlc_accepted` hooks.
>> * If the outgoing payment exists and failed, fail all the gathered
>> `htlc_accepted` hooks.
>> * Otherwise, perform a `pay`, giving `maxfeepercent` and `maxdelay`
>> based on its fee budget and delay budget.
>>   When the `pay` succeeds or fails, propagate it to the gathered
>> `htlc_accepted` hooks.
>> * The last payee just receives a normal payment using the normal
>> invoice-receive scheme.
>> Regards,
>> ZmnSCPxj

Re: [Lightning-dev] Split payments within one LN invoice

2021-12-16 Thread Christian Decker
This is quite a common request, and we've used a solution I like to call
the "Poor man's rendez-vous". It basically routes a payment through all
the parties that are to be paid, with the last one accepting the payment
for all participants.

The payment is atomic, once the circuit is set up no participant can
cheat the others and it's seamless from the payer's perspective.

Let's say user `A` wants to pay `B` and `C` atomically. `B` gets 10ksat
and `C` gets 90ksat out of a total of 100ksat:

 1) `C` creates an invoice with payment hash `H` for 90ksat and sends it
to `B`
 2) `B` creates an invoice with payment hash `H` (same as the first
invoice, but `B` doesn't know the preimage) for 100ksat (maybe plus
a tiny bit for routing fees between `B` and `C`).
 3) `A` receives an invoice which appears to be from `B` for the
expected total of 100ksat.
 4) `A` proceeds to pay the invoice to `B` like normal
 5) `B` receives the incoming payment, but doesn't have the preimage for
`H`, so they must forward to `C` if they want to receive their
share. `B` then proceeds to pay the 90ksat invoice from `C`, which
reveals the preimage to them, and they can turn around and claim
the incoming `100ksat` (covering both `B` and `C` share)
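
The invariant that makes steps 1-5 atomic is that both invoices commit
to the same payment hash `H`, while only `C` knows the preimage. A small
illustrative sketch (the invoice fields here are hypothetical, not a
real BOLT 11 encoding):

```python
import hashlib
import os

# C generates the preimage; initially only C can settle hash H.
preimage = os.urandom(32)
H = hashlib.sha256(preimage).hexdigest()

# Both invoices commit to the *same* payment hash H.
invoice_c = {"payee": "C", "amount_sat": 90_000, "payment_hash": H}
invoice_b = {"payee": "B", "amount_sat": 100_000, "payment_hash": H}
assert invoice_b["payment_hash"] == invoice_c["payment_hash"]

# B only learns the preimage by paying C's 90k invoice...
revealed = preimage  # ...revealed when C claims that payment...
# ...and that is exactly what B needs to claim the incoming 100k.
assert hashlib.sha256(revealed).hexdigest() == invoice_b["payment_hash"]
```

If `B` never pays `C`, `B` can never claim the incoming 100ksat either,
which is what makes the outcome all-or-nothing.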

It's a poor man's version because it requires creating two invoices and
`B` sees two payments (100ksat incoming, 90ksat outgoing), but the
overall outcome is the desired one: either both parties get paid or
noone gets paid. This can trivially be extended to any number of parties
(with reduced success probability), and will remain atomic. It also
doesn't require any changes on the sender side, and only minimal setup
between the payees. The crux here is that we somehow need to ensure `H`
is always the same along the entire chain of payments, but with a good
coordination protocol that should be feasible.


Ronan McGovern  writes:
> Hi folks, I'm Ronan - based in Dublin and building (simple
> payment links to accept Lightning).
> I'm wondering if there is a way to create an invoice that splits the
> payment to two lightning addresses?
> If not, what would be required to develop this?
> * A protocol change?
> * Could it be built with the current protocol (I see an app on LN Bits to
> split but it doesn't seem to work).
> Many thanks, Ronan
> Ronan McGovern

Re: [Lightning-dev] INTEROPERABILITY

2021-11-23 Thread Christian Decker
ZmnSCPxj via Lightning-dev writes:
> Are you proposing as well to provide the hardware and Internet
> connection for these boxes?

Having implemented and operated the lightning integration testing
framework [1,2] in the past, this is something near and dear to my
heart. However, I have since become convinced that this kind of
artificial setup is unlikely to catch any but the most egregious issues,
given its different usage patterns. Much like the bitcoin testnet is not
representative of what happens on mainnet, I don't think a separate
network would be able to reproduce all the issues that occur on the
lightning mainnet.

I agree with ZmnSCPxj that testing on the mainnet is much more likely to
catch more involved issues, and therefore a solid process with release
candidates and nightly builds, in combination with lnprototest [3] to test
the nodes for spec adherence in isolation is the way to go.

> I know of one person at least who runs a node that tracks the
> C-Lightning master (I think they do a nightly build?), and I run a
> node that I update every release of C-Lightning (and runs CLBOSS as
> well).  I do not know the actual implementations of what they connect
> to, but LND is very popular on the network and LNBIG is known to be an
> LND shop, and LNBIG is so pervasive that nearly every long-lived
> forwarding node has at least one channel with *some* LNBIG node.  I
> consider this "good enough" in practice to catch interop bugs, but
> some interop bugs are deeper than just direct node-to-node
> communications.  For example, we had bugs in our interop with LND
> `keysend` before, by my memory.

We should differentiate between spec compliance and compatibility of
extensions. `keysend` wasn't and still isn't spec'd, which resulted in
us having to reverse-engineer the logic from the first experimental
implementation, and I did get some details wrong. For example, I
expected nodes to explicitly opt into keysend via feature bit 55, but
they just yolo it...

As these primitives become more widespread and more users rely on them,
I think it is paramount that we actually spec them out (something that
the new bLIP process should cover), but until we do there is no way of
saying what's correct and what isn't.



Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-20 Thread Christian Decker
We'd likely be using an HMAC to ensure the integrity of the data returned
by peers, so we'd only have to guard against them returning an older
version, which in eltoo is far less damaging than in LN-penalty.
Furthermore, by retrieving the blobs on reconnect
regardless of whether we need them or not we can verify that peers are
behaving correctly, since they shouldn't be able to distinguish whether
we're just checking or actually need the data. In addition we can store the
same data with multiple peers, ensuring that as long as one node is
behaving we're good.
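
A minimal sketch of the integrity check described above (the
versioned-JSON layout and the 32-byte tag are assumptions for
illustration; a real implementation would also encrypt the blob):

```python
import hashlib
import hmac
import json
import os

SECRET = os.urandom(32)  # the node's local key; peers never see it

def make_blob(state: dict, version: int) -> bytes:
    """MAC a versioned snapshot so that any tampering or truncation
    by the storing peer is detectable on retrieval."""
    payload = json.dumps({"version": version, "state": state}).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return tag + payload

def check_blob(blob: bytes, min_version: int) -> dict:
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("peer tampered with our blob")
    data = json.loads(payload)
    if data["version"] < min_version:
        raise ValueError("peer returned a stale snapshot")
    return data["state"]
```

The version check is what catches a peer replaying an older (but
correctly MACed) snapshot, which is the remaining attack mentioned
above.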


On Thu, 15 Jul 2021, 12:28 Martin Habovštiak wrote:

> What would happen in 2) if the node has data but the peer returned an
> incorrect state?
> On Wed, Jul 14, 2021, 20:13 Christian Decker 
> wrote:
>> Not quite sure if this issue is unique to eltoo tbh. While in LN-penalty
>> loss-of-state equates to loss-of-funds, in eltoo this is reduced to
>> impact only funds that are in a PTLC at the time of the loss-of-state.
>> We have a couple of options here, that don't touch the blockchain, and
>> are therefore rather lightweight:
>>  1) Do nothing and keep the incentive to keep up to date backups. It
>>  still is a reduction in risk w.r.t. LN-penalty, since this is just an
>>  append only log of secrets, and old secrets don't harm you like
>>  attempting to close with an old commitment would.
>>  2) Use the peer-storage idea, where we deposit an encrypted bundle with
>>  our peers, and which we expect the peers to return. by hiding the fact
>>  that we forgot some state, until the data has been exchanged we can
>>  ensure that peers always return the latest snapshot of whatever we gave
>>  them.
>> The latter is the encrypted-blob idea that Rusty has been proposing for
>> a while now.
>> Cheers,
>> Christian
>> Anthony Towns  writes:
>> > Hello world,
>> >
>> > Suppose you have some payments going from Alice to Bob to Carol with
>> > eltoo channels. Bob's lightning node crashes, and he recovers from an
>> > old backup, and Alice and Carol end up dropping newer channel states
>> > onto the blockchain.
>> >
>> > Suppose the timeout for the payments is a few hours away, while the
>> > channels have specified a week long CSV delay to rectify any problems
>> > on-chain.
>> >
>> > Then I think that that means that:
>> >
>> >  1) Carol will reveal the point preimages on-chain via adaptor
>> > signatures, but Bob won't be able to decode those adaptor signatures
>> > because those signatures will need to change for each state
>> >
>> >  2) Even if Bob knows the point preimages, he won't be able to
>> > claim the PTLC payments on-chain, for the same reason: he needs
>> > newer adaptor signatures that he'll have lost with the state update
>> >
>> >  3) For any payments that timeout, Carol doesn't have any particular
>> > incentive to make it easy for Bob to claim the refund, and Bob won't
>> > have the adaptor signatures for the latest state to do so
>> >
>> >  4) But Alice will be able to claim refunds easily. This is working how
>> > it's meant to, at least!
>> >
>> > I think you could fix (3) by giving Carol (who does have all the adaptor
>> > signatures for the latest state) the ability to steal funds that are
>> > meant to have been refunded, provided she gives Bob the option of
>> claiming
>> > them first.
>> >
>> > However fixing (1) and (2) aren't really going against Alice or Carol's
>> > interests, so maybe you can just ask: Carol loses nothing by allowing
>> > Bob to claim funds from Alice; and Alice has already indicated that
>> > knowing P is worth more to her than the PTLC's funds -- otherwise she
>> > wouldn't have forwarded the PTLC to Bob in the first place.
>> >
>> > Likewise, everyone's probably incentivised to negotiate cooperative
>> > closes instead of going on-chain -- better privacy, less fees, and less
>> > delay before the funds can be used elsewhere.
>> >
>> > FWIW, I think a similar flaw exists even in the original eltoo spec --
>> > Alice could simply decline to publish the settlement transaction until
>> > the timeout has been reached, preventing Bob from revealing the HTLC
>> > preimage before Alice can claim the refund.
>> >
>> > So I think that adds up to:
>> >
>> >  a) Nodes should share state on reconnection; if you find a node that
>> > doesn't do this, close the channel and put the node on your enemies
>> > list.
Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-14 Thread Christian Decker
Not quite sure if this issue is unique to eltoo tbh. While in LN-penalty
loss-of-state equates to loss-of-funds, in eltoo this is reduced to
impact only funds that are in a PTLC at the time of the loss-of-state.

We have a couple of options here, that don't touch the blockchain, and
are therefore rather lightweight:

 1) Do nothing and keep the incentive to keep up to date backups. It
 still is a reduction in risk w.r.t. LN-penalty, since this is just an
 append only log of secrets, and old secrets don't harm you like
 attempting to close with an old commitment would.
 2) Use the peer-storage idea, where we deposit an encrypted bundle with
 our peers, and which we expect the peers to return. By hiding the fact
 that we forgot some state until the data has been exchanged, we can
 ensure that peers always return the latest snapshot of whatever we gave
 them.
The latter is the encrypted-blob idea that Rusty has been proposing for
a while now.


Anthony Towns  writes:
> Hello world,
> Suppose you have some payments going from Alice to Bob to Carol with
> eltoo channels. Bob's lightning node crashes, and he recovers from an
> old backup, and Alice and Carol end up dropping newer channel states
> onto the blockchain.
> Suppose the timeout for the payments is a few hours away, while the
> channels have specified a week long CSV delay to rectify any problems
> on-chain.
> Then I think that that means that:
>  1) Carol will reveal the point preimages on-chain via adaptor
> signatures, but Bob won't be able to decode those adaptor signatures
> because those signatures will need to change for each state
>  2) Even if Bob knows the point preimages, he won't be able to
> claim the PTLC payments on-chain, for the same reason: he needs
> newer adaptor signatures that he'll have lost with the state update
>  3) For any payments that timeout, Carol doesn't have any particular
> incentive to make it easy for Bob to claim the refund, and Bob won't
> have the adaptor signatures for the latest state to do so
>  4) But Alice will be able to claim refunds easily. This is working how
> it's meant to, at least!
> I think you could fix (3) by giving Carol (who does have all the adaptor
> signatures for the latest state) the ability to steal funds that are
> meant to have been refunded, provided she gives Bob the option of claiming
> them first.
> However fixing (1) and (2) aren't really going against Alice or Carol's
> interests, so maybe you can just ask: Carol loses nothing by allowing
> Bob to claim funds from Alice; and Alice has already indicated that
> knowing P is worth more to her than the PTLC's funds -- otherwise she
> wouldn't have forwarded the PTLC to Bob in the first place.
> Likewise, everyone's probably incentivised to negotiate cooperative
> closes instead of going on-chain -- better privacy, less fees, and less
> delay before the funds can be used elsewhere.
> FWIW, I think a similar flaw exists even in the original eltoo spec --
> Alice could simply decline to publish the settlement transaction until
> the timeout has been reached, preventing Bob from revealing the HTLC
> preimage before Alice can claim the refund.
> So I think that adds up to:
>  a) Nodes should share state on reconnection; if you find a node that
> doesn't do this, close the channel and put the node on your enemies
> list. If you disagree on what the current state is, share your most
> recent state, and if the other guy's state is more recent, and all
> the signatures verify, update your state to match theirs.
>  b) Always negotiate a mutual/cooperative close if possible, to avoid
> actually using the eltoo protocol on-chain.
>  c) If you want to allow continuing the channel after restoring an old
> state from backup, set the channel state index based on the real time,
> eg (real_time-start_time)*(max_updates_per_second). That way your
> first update after a restore from backup will ensure that any old
> states that your channel partner may not have told you about are
> invalidated.
>  d) Accept that if you lose connectivity to a channel partner, you will
> have to pay any PTLCs that were going to them, and won't be able
> to claim the PTLCs that were funding them. Perhaps limit the total
> value of inbound PTLCs for forwarding that you're willing to accept
> at any one itme?
> Also, layered commitments seem like they make channel factories
> complicated too. Nobody came up with a way to avoid layered commitments
> while I wasn't watching did they?
> Cheers,
> aj

[Lightning-dev] Funding Timeout Recovery proposal

2021-03-15 Thread Christian Decker
Hi All,

I just finished writing a (very) rough draft of the Funding Timeout
Recovery proposal (a.k.a. "So long, and thanks for all the sigs"). You
can find the full proposal here [1].

The proposal details how the fundee can assist the funder in quickly
recovering from a botched funding. This is an alternative to using the
pre-signed commitment transaction, which likely overestimates the
feerate, and also locks up the funder's funds with a timeout since it is
a unilateral close.

The trick is to have the fundee sign a blank check with the
funding_privkey used to set up the 2-of-2, using `sighash_none` to make
the signature independent of the outputs. The funder can then use that
signature to create a close transaction however she wants, with
adjustable feerates and any desired outputs.
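
A toy model of why `sighash_none` gives the funder this freedom (this
only mimics what the digest covers; the real BIP-143 sighash
computation is more involved):

```python
import hashlib
import json

def toy_sighash(tx: dict, flag: str) -> bytes:
    """Toy illustration: a sighash_none signature commits to the
    inputs but not the outputs, so the outputs can be rewritten
    without invalidating the fundee's signature."""
    covered = {"inputs": tx["inputs"]}
    if flag == "ALL":
        covered["outputs"] = tx["outputs"]
    # flag == "NONE": outputs deliberately excluded from the digest
    return hashlib.sha256(
        json.dumps(covered, sort_keys=True).encode()).digest()

tx = {"inputs": ["funding_outpoint"],
      "outputs": [{"addr": "funder", "sat": 50_000}]}
digest_none = toy_sighash(tx, "NONE")

# The funder rewrites the outputs (different feerate, different addr)...
tx["outputs"] = [{"addr": "funder_new", "sat": 49_000}]
# ...and the sighash_none digest -- hence the signature -- is unchanged.
assert toy_sighash(tx, "NONE") == digest_none
```

A `sighash_all` digest over the same transaction would change as soon
as the outputs do, which is exactly the rigidity the proposal avoids.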

In addition it also includes a recovery mechanism for malleated funding
transactions, which can happen from time to time, if there are
non-segwit inputs, or if the funding transaction is edited externally to
the lightning node prior to broadcasting it. This extension is however

There are a couple of open questions at the bottom, and I would be very
interested in everyone's opinion on the safety. I think we're ok due to
the funding_privkey = channel mapping, but I'm open to further analysis.

Since this is rather short-notice for today's spec meeting I'll probably
add it to the agenda for next time instead, to give everybody time to
familiarize themselves with the proposal, before delving into details



Re: [Lightning-dev] Escrow Over Lightning?

2021-02-27 Thread Christian Decker
> The `!(a && b && ...)` can be converted to a reversal of the payment.
> The individual `!BUYER` is just the buyer choosing not to claim the
> seller->buyer direction, and the individual `!ESCROW` is just the
> escrow choosing not to reveal its temporary scalar for this payment.
> And any products (i.e. `&&`) are trivially implemented in PTLCs as
> trivial scalar and point addition.
> So it may actually be possible to express *any* Boolean logic, by the
> use of reversal payments and "option not to release scalar", both of
> which implement the NOT gate needed for the above.  Boolean logic is a
> fairly powerful, non-Turing-complete, and consistent programming
> language, and if we can actually implement any kind of Boolean logic
> with a set of payments in various directions and Barrier Escrows we
> can enable some fairly complex use-cases..

This got me thinking about my first-year logic course and functional
completeness [1], and it is trivial to prove that any boolean logic can
be expressed by this construction. We can trivially build a functionally
complete set by just constructing a NAND, a NOR, or {AND, NOT}, all of
which you've already done in your prior mails.

The resulting expressions may not be particularly nice, and may result
in a multitude of payments going back and forth between the participants
to represent that logic, but it is possible. So the problem is now
reduced to finding a minimal representation for a given expression,
which then minimizes the funds committed to a particular instance of the
expression.
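
As a quick illustration of functional completeness: every gate can be
derived from NAND alone. In the escrow setting described above, NOT
corresponds to a reversal payment (or the option not to release a
scalar) and AND to scalar/point addition, so NAND is within reach:

```python
def nand(a: bool, b: bool) -> bool:
    # The one primitive gate; everything else is derived from it.
    return not (a and b)

def NOT(a):    return nand(a, a)
def AND(a, b): return NOT(nand(a, b))
def OR(a, b):  return nand(NOT(a), NOT(b))

# Truth tables recovered purely from NAND compositions:
assert [AND(a, b) for a in (0, 1) for b in (0, 1)] == \
    [False, False, False, True]
assert [OR(a, b) for a in (0, 1) for b in (0, 1)] == \
    [False, True, True, True]
```

Note how each derived gate multiplies the number of primitive NANDs
used, mirroring how the payment-based encoding multiplies the number of
payments needed per expression.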



Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-26 Thread Christian Decker
Rusty Russell  writes:
>> This is in stark contrast to the leader-based approach, where both
>> parties can just keep queuing updates without silent times to
>> transferring the token from one end to the other.
> You've swayed me, but it needs new wire msgs to indicate "these are
> your proposals I'm reflecting to you".
> OTOH they don't need to carry data, so we can probably just have:
> update_htlcs_ack:
>* [`channel_id`:`channel_id`]
>* [`u16`:`num_added`]
>* [`num_added*u64`:`added`]
>* [`u16`:`num_removed`]
>* [`num_removed*u64`:`removed`]
> update_fee can stay the same.
> Thoughts?

So this would pretty much be a batch-ack, sent after a whole series of
changes were proposed to the leader, and referenced by their `htlc_id`,
correct? This is one optimization step further than what I was thinking,
but it can work. My proposal would have been to either reflect the whole
message (nodes need to remember proposals they've sent anyway in case of
disconnects, so matching incoming changes with the pending ones should
not be too hard), or send back individual acks containing the hash of
the message if we want to save on bytes transferred. Alternatively we
could also reference the change by its htlc_id.

The latter however means that we are now tightly binding the
linearization protocol (in which order should the changes be applied)
with the internals of these changes (namely we look into the change, and
reference the htlc_id). My ultimate goal is to introduce a better
layering between the change proposal/commitment scheme and the
semantics of the individual changes ("which order" vs. "what").
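For concreteness, here is how the `update_htlcs_ack` layout quoted above might serialize; the function name is hypothetical, and the field sizes follow the quoted message definition:

```python
import struct

def encode_update_htlcs_ack(channel_id: bytes, added, removed) -> bytes:
    """Sketch of the proposed batch-ack: channel_id followed by two
    length-prefixed lists of u64 htlc_ids (big-endian, as on the wire)."""
    assert len(channel_id) == 32
    msg = channel_id
    msg += struct.pack(">H", len(added))
    msg += b"".join(struct.pack(">Q", i) for i in added)
    msg += struct.pack(">H", len(removed))
    msg += b"".join(struct.pack(">Q", i) for i in removed)
    return msg

# Acking three added and one removed HTLC: 32 + 2 + 3*8 + 2 + 8 = 68 bytes.
ack = encode_update_htlcs_ack(b"\x00" * 32, added=[1, 2, 7], removed=[3])
assert len(ack) == 68
```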

I wonder what the performance increase of the batching would be compared
to just acking each update individually. My expectation would be that in
most cases we'd be acking a batch of size 1 :-)

Personally I think just reflecting the changes as a whole, interleaving
my updates with yours is likely the simplest protocol, with the least
implied state that can get out of sync, and cause nodes to drift apart
like we had a number of times ("bad signature" anyone ^^). And looking
(much much) further it is also a feasible protocol for multiparty
channels with eltoo or similar constructions, where the leader
reflecting my own changes back to me is more of a special case than the


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-15 Thread Christian Decker

> And you don't get the benefit of the turn-taking approach, which is that
> you can have a known state for fee changes.  Even if you change it to
> have opener always the leader, it still has to handle the case where
> incoming changes are not allowed under the new fee regime (and similar
> issues for other dynamic updates).

Good point, I hadn't considered that a change from one side might become
invalid due to a change from the other side. I think however this can only
affect changes that render other changes inapplicable, e.g., changing
the number of HTLCs you'll allow on a channel, thereby invalidating the
HTLC we just added whose update_add is still in flight.

I don't think fee changes are impacted here, since the non-leader only
applies the change to its commitment once it gets back its own change.
The leader will have inserted your update_add into its stream after the
fee update, and so you'll first apply the fee update, and then use the
correct fee to add the HTLC to your commitment, resulting in the same

The remaining edge cases, where changes can become invalid while in
flight, can be addressed by bouncing the change through the non-leader,
telling it "hey, I'd like to propose this change; if you're good with
it, send it back to me and I'll add it to my stream". This can be
seen as draining the queue of in-flight changes, however the non-leader
may pipeline its own changes after it and take the updated parameters
into consideration. Think of it as a two-phase commit, alerting the peer
with a proposal, before committing it by adding it to the stream. It
adds latency (about 1/2 RTT more than the token-passing approach, which
it effectively emulates here), but these synchronization points are
rare and not on the critical path when forwarding payments.

>> The downside is that we add a constant overhead to one side's
>> operations, but since we pipeline changes, and are mostly synchronous
>> during the signing of the commitment tx today anyway, this comes out to
>> 1 RTT for each commitment.
> Yeah, it adds 1RTT to every hop on the network, vs my proposal which
> adds just over 1/2 RTT on average.

Doesn't that assume a change of turns while the HTLC was in-flight?
Adding and resolving an HTLC requires one change coming from either side
of the channel, implying that a turn change must have been performed,
which itself takes 1 RTT. Thus to add and remove an HTLC we add at
least 1 RTT for each hop.

With the leader-based approach, we add 1RTT latency to the updates from
one side, but the other never has to wait for the token, resulting in
1/2RTT per direction as well, since messages are well-balanced.

> Yes, but it alternates because that's optimal for a non-busy channel
> (since it's usually "Alice adds htlc, Bob completes the htlc").

What's bothering me more about the turn-based approach is that while the
token is in flight, neither endpoint can make any progress, since the
one relinquishing the token promised not to say anything and the other
one hasn't gotten the token yet. This might result in rather a lot of
dead-air if both sides have a constant stream of changes to add. So we'd
likely have to add a timeout to defer giving up the token, to counter
dead-air, further adding delay to the changes from the other end, and
adding yet another parameter.

This is in stark contrast to the leader-based approach, where both
parties can just keep queuing updates without silent times to
transferring the token from one end to the other.


Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-13 Thread Christian Decker
Joost Jager  writes:
>> The LOW-REP node being out of pocket is the clue here: if one party
>> loses funds, even a tiny bit, another party gains some funds. In this
>> case the HIGH-REP node collaborating with the ATTACKER can extract some
>> funds from the intermediate node, allowing them to dime their way to all
>> of LOW-REP's funds. If an attack results in even a tiny loss for an
>> intermediary and can be repeated, the intermediary's funds can be
>> syphoned by an attacker.
> The assumption is that HIGH-REP nodes won't do this :) LOW-REP will see all
> those failed payments and small losses and start to realize that something
> strange is happening. I know the proposal isn't fully trustless, but I
> think it can work in practice.
>> Another attack that is a spin on ZmnSCPxj's waiting to backpropagate the
>> preimage is even worse:
>>  - Attacker node `A` charging hold fees receives HTLC from victim `V`
>>  - `A` does not forward the HTLC, but starts charging hold fees
>>  - Just before the timeout for the HTLC would force us to settle onchain
>>`A` just removes the HTLC without forwarding it or he can try to
>>forward at the last moment, potentially blaming someone else for its
>>failure to complete
>> This results in `A` extracting the maximum hold fee from `V`, without
>> the downstream hold fees cutting into their profits. By forwarding as
>> late as possible `A` can cause a downstream failure and look innocent,
>> and the overall payment has the worst possible outcome: we waited an
>> eternity for what turns out to be a failed attempt.
> The idea is that an attacker node is untrusted and won't be able to charge
> hold fees.

The attacker controls both the sender and the HIGH-REP node. The sender
doesn't need to be trusted, it just initiates a payment that is used to
extract hold fees from a forwarding node. The HIGH-REP node doesn't
lose reputation because from what we can witness externally the payment
failed somewhere downstream. It does require the attacker to run a
hold-fee-charging HIGH-REP node, yes, but the attacker does not
jeopardize that node's reputation by having the payment fail downstream.


Re: [Lightning-dev] [RFC] Simplified (but less optimal) HTLC Negotiation

2020-10-13 Thread Christian Decker
I wonder if we should just go with the tried-and-tested leader-based
approach:

 1. The node with the lexicographically lower node_id is determined to
be the leader.
 2. The leader receives proposals for changes from itself and the peer
and orders them into a logical sequence of changes
 3. The leader applies the changes locally and streams them to the peer.
 4. Either node can initiate a commitment by proposing a `flush` change.
 5. Upon receiving a `flush` the nodes compute the commitment
transaction and exchange signatures.
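A minimal sketch of steps 1-4 above (names and message strings are illustrative, not spec code):

```python
def choose_leader(node_id_a: bytes, node_id_b: bytes) -> bytes:
    # Step 1: the lexicographically lower node_id is the leader.
    return min(node_id_a, node_id_b)

class Leader:
    """Steps 2-4: collect proposals from both sides, fix a total order,
    and mark commitment points with a `flush` change."""

    def __init__(self):
        self.stream = []  # the single logical sequence of changes

    def propose(self, origin: str, change: str) -> int:
        # Apply locally and stream to the peer; return the position in
        # the logical sequence.
        self.stream.append((origin, change))
        return len(self.stream) - 1

    def flush(self) -> list:
        # A `flush` ends a batch: both sides now compute the commitment
        # transaction over everything streamed so far.
        self.stream.append(("flush", "flush"))
        return list(self.stream)

assert choose_leader(b"\x02aa", b"\x03bb") == b"\x02aa"
l = Leader()
l.propose("peer", "update_add_htlc #0")
l.propose("self", "update_fee 253")
assert [c for _, c in l.flush()] == [
    "update_add_htlc #0", "update_fee 253", "flush"]
```

Since it's always the leader's turn, there is no turn-transfer state to re-negotiate on reconnect; only the stream position needs to be agreed upon.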

This is similar to your proposal, but does away with turn changes (it's
always the leader's turn), and therefore reduces the state we need to
keep track of (and re-negotiate on reconnect).

The downside is that we add a constant overhead to one side's
operations, but since we pipeline changes, and are mostly synchronous
during the signing of the commitment tx today anyway, this comes out to
1 RTT for each commitment.

On the other hand a token-passing approach (which I think is what you
propose) requires a synchronous token handover whenever the direction
of the updates changes. This is assuming I didn't misunderstand the
turn mechanics of your proposal :-)


Rusty Russell  writes:
> Hi all,
> Our HTLC state machine is optimal, but complex[1]; the Lightning
> Labs team recently did some excellent work finding another place the spec
> is insufficient[2].  Also, the suggestion for more dynamic changes makes it
> more difficult, usually requiring forced quiescence.
> The following protocol returns to my earlier thoughts, with cost of
> latency in some cases.
> 1. The protocol is half-duplex, with each side taking turns; opener first.
> 2. It's still the same form, but it's always one-direction so both sides
>stay in sync.
> update+->
>commitsig->
><-revocation
><-commitsig
>revocation->
> 3. A new message pair "turn_request" and "turn_reply" let you request
>when it's not your turn.
> 4. If you get an update in reply to your turn_request, you lost the race
>and have to defer your own updates until after peer is finished.
> 5. On reconnect, you send two flags: send-in-progress (if you have
>sent the initial commitsig but not the final revocation) and
>receive-in-progress (if you have received the initial commitsig
>but not received the final revocation).  If either is set,
>the sender (as indicated by the flags) retransmits the entire
>Otherwise, (arbitrarily) opener goes first again.
> Pros:
> 1. Way simpler.  There is only ever one pair of commitment txs for any
>given commitment index.
> 2. Fee changes are now deterministic.  No worrying about the case where
>the peer's changes are also in flight.
> 3. Dynamic changes can probably happen more simply, since we always
>negotiate both sides at once.
> Cons:
> 1. If it's not your turn, it adds 1 RTT latency.
> Unchanged:
> 1. Database accesses are unchanged; you need to commit when you send or
>receive a commitsig.
> 2. You can use the same state machine as before, but one day (when
>this would be compulsory) you'll be able to significantly simplify;
>you'll need to record the index at which HTLCs were changed
>(added/removed) in case peer wants you to rexmit though.
> Cheers,
> Rusty.
> [1] This is my fault; I was persuaded early on that optimality was more
> important than simplicity in a classic nerd-snipe.
> [2]

Re: [Lightning-dev] Hold fees: 402 Payment Required for Lightning itself

2020-10-13 Thread Christian Decker
I think the mechanism can indeed create interesting dynamics, but not in
a good sense :-)

>> I can still establish channels to various low-reputation nodes, and
>> then use them to grief a high-reputation node.  Not only do I get to
>> jam up the high-reputation channels, as a bonus I get the
>> low-reputation nodes to pay for it!
> So you're saying:
> ATTACKER --(no hold fee)--> LOW-REP --(hold fee)--> HIGH-REP
> If I were LOW-REP, I'd still charge an unknown node a hold fee. I
> would only waive the hold fee for high-reputation nodes. In that case,
> the attacker is still paying for the attack. I may be forced to take a
> small loss on the difference, but at least the larger part of the pain
> is felt by the attacker. The assumption is that this is sufficient
> enough to deter the attacker from even trying.

The LOW-REP node being out of pocket is the clue here: if one party
loses funds, even a tiny bit, another party gains some funds. In this
case the HIGH-REP node collaborating with the ATTACKER can extract some
funds from the intermediate node, allowing them to dime their way to all
of LOW-REP's funds. If an attack results in even a tiny loss for an
intermediary and can be repeated, the intermediary's funds can be
syphoned by an attacker.

Another attack that is a spin on ZmnSCPxj's waiting to backpropagate the
preimage is even worse:

 - Attacker node `A` charging hold fees receives HTLC from victim `V`
 - `A` does not forward the HTLC, but starts charging hold fees
 - Just before the timeout for the HTLC would force us to settle onchain
   `A` just removes the HTLC without forwarding it or he can try to
   forward at the last moment, potentially blaming someone else for its
   failure to complete

This results in `A` extracting the maximum hold fee from `V`, without
the downstream hold fees cutting into their profits. By forwarding as
late as possible `A` can cause a downstream failure and look innocent,
and the overall payment has the worst possible outcome: we waited an
eternity for what turns out to be a failed attempt.


Re: [Lightning-dev] Simulating Eltoo Factories using SCU Escrows (aka SCUE'd Eltoo)

2020-09-22 Thread Christian Decker
ZmnSCPxj  writes:
> I am almost certain that a Smart Contract Unchained Escrowed
> Decker-Russell-Osuntokun channel factory can merge the watchtower and
> escrow functionality as well, using the above basic sketch, with
> additional overlay network to allow for federated escrows.  The issue
> is really the increased complexity of the `(halftxid, encrypted_blob)`
> scheme with Decker-Russell-Osuntokun.
> (To my knowledge, Decker-Russell-Osuntokun only simplifies watchtowers
> if the watchtower knows the funding outpoint, which is information we
> should really prefer the watchtower to not know unless an attack
> occurs; with an unknown-funding-outpoint, `(halftxid, encrypted_blob)`
> scheme, Decker-Russell-Osuntokun is actually more complicated, since
> hiding the funding outpoint prevents having a simple key for the
> watchtower to replace.)

Just a minor comment on this: for eltoo the watchtower does not need to
know the funding outpoint, instead any information that'd allow a
watchtower to collate (encrypted) updates would be sufficient for it to
be able to discard earlier ones. I'm thinking in particular of the
session-based collation that the lnd watchtower protocol uses, which
could serve as one such collation key. Alternatively we can still use
the Poon-Dryja-style
encryption with the trigger transaction hash (which admittedly isn't
very prominently described in the eltoo paper) as the encryption
key. That transaction, being the first step towards closing a channel
unilaterally, forces any cheating party to reveal the decryption key
for the update txs that'll override its actions.

Furthermore, while encrypting all the reactions with the same encryption
key may appear to leak information, it is only the update transaction
that is passed to the watchtower, not the actual state (direct outputs
and HTLCs) which is attached to the settlement transaction, kept by the
endpoint. So all the watchtower gets from decrypting all prior update
transactions is a set of semantically identical 1-input-1-output update
transactions from which it can at most learn how many updates were
performed. This last leak can also be addressed by simply randomizing
the increment step for state numbers and not passing every state update
to the watchtower (since the watchtower will only ever need the last one
we can coalesce multiple updates and flush them to the watchtower after
some time).
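A minimal sketch of the trigger-hash encryption idea, using a SHA256 counter keystream as a stand-in for a real stream cipher such as ChaCha20 (all names are illustrative):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """SHA256 counter keystream; stand-in for a proper stream cipher."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_for_watchtower(trigger_txid: bytes, update_tx: bytes) -> bytes:
    # The encryption key is derived from the trigger transaction hash:
    # broadcasting the trigger (the first step of a unilateral close)
    # necessarily hands the watchtower the key to decrypt the update
    # txs that override the cheater's actions.
    key = hashlib.sha256(trigger_txid).digest()
    ks = keystream(key, len(update_tx))
    return bytes(a ^ b for a, b in zip(update_tx, ks))

txid = hashlib.sha256(b"trigger tx").digest()
blob = encrypt_for_watchtower(txid, b"update tx #42")
# XOR stream: decryption is the same operation with the same key.
assert encrypt_for_watchtower(txid, blob) == b"update tx #42"
```

Note that only the 1-input-1-output update transactions are encrypted this way; the settlement transactions carrying the actual state stay with the endpoints.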



Re: [Lightning-dev] Simulating Eltoo Factories using SCU Escrows (aka SCUE'd Eltoo)

2020-09-01 Thread Christian Decker
Hi Nadav,

thanks for writing up this proposal. I think I can add a bit of details,
which might simplify the proposal.

## Ordering of updates

The way we ensure that an update tx (as the commitment txs are called in
the paper) can be attached only to prior updates is done by comparing
the state-number committed to in the prevout script with the current
timelock through CLTV. This functionality exists today already, and does
not have to be implemented by the Escrow at all: it can just sign off on
any update transaction and the monotonicity of the sequence of updates
is guaranteed through the script.

This should simplify the escrow considerably and allow us to disclose
less to it.
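A simplified model of that ordering rule: an update transaction can only attach to outputs committing to a strictly lower state number, enforced by the CLTV comparison (constants and names here are illustrative; real eltoo scripts use OP_CHECKLOCKTIMEVERIFY with state numbers encoded in the locktime range):

```python
# State numbers live in the locktime range, offset so they read as
# timestamps in the past (as in the eltoo paper).
STATE_OFFSET = 500_000_000

def can_attach(prev_committed_state: int, update_state: int) -> bool:
    # CLTV succeeds only if the spending tx's locktime (encoding the
    # new state) exceeds the value committed in the prevout script, so
    # monotonicity of the update sequence is enforced by the script
    # itself, without the escrow's involvement.
    update_locktime = STATE_OFFSET + update_state
    return update_locktime > STATE_OFFSET + prev_committed_state

assert can_attach(prev_committed_state=5, update_state=6)      # newer: ok
assert not can_attach(prev_committed_state=6, update_state=6)  # replay: no
assert not can_attach(prev_committed_state=7, update_state=6)  # older: no
```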

## Emulating `sighash_anyprevout`

We can emulate `sighash_anyprevout` and variants today already, if we
know all the transactions we'd eventually want to bind to at the time we
create the transaction that'd use `anyprevout`: simply iterate through
all the transactions it might bind to, update the transaction we're
signing with the prevout details of that potential binding, and sign
it. There are two downsides to this, namely processing overhead to
generate `n` signatures for `n` potential bindings, and communication
overhead, since we're now exchanging `n` signatures, instead of a single
`anyprevout` signature, but it can work.
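The enumerate-and-sign emulation can be sketched as follows, with an HMAC standing in for a real ECDSA/Schnorr signer (everything here is illustrative, not consensus code):

```python
import hashlib
import hmac

def sign(privkey: bytes, sighash: bytes) -> bytes:
    # Stand-in (HMAC) for a real ECDSA/Schnorr signer.
    return hmac.new(privkey, sighash, hashlib.sha256).digest()

def sighash(tx_template: bytes, prevout: bytes) -> bytes:
    # An ordinary sighash commits to the specific prevout being spent.
    return hashlib.sha256(tx_template + prevout).digest()

def emulate_anyprevout(privkey: bytes, tx_template: bytes, prevouts):
    # One ordinary signature per candidate binding: O(n) signing and
    # O(n) communication instead of a single `anyprevout` signature.
    return {p: sign(privkey, sighash(tx_template, p)) for p in prevouts}

candidates = [b"update_tx_%d:0" % k for k in range(3)]
sigs = emulate_anyprevout(b"k" * 32, b"update_tx_3", candidates)
assert len(sigs) == 3                # one signature per potential binding
assert len(set(sigs.values())) == 3  # each binding needs its own signature
```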

I think with the escrow we can defer the creation of the signature to
the time we need it, and externalize the anyprevout logic: at each
update all parties sign a statement that they are ok with state
`k`. Should one party become unresponsive, or broadcast an intermediate
TX k'

> Hi all,
> # Simulating Eltoo / ANYPREVOUT Factories Using SCU Escrows
> In this write-up I hope to convince you that it is possible to create some
> weak version of Eltoo channels and channel factories today without
> SIGHASH_ANYPREVOUT (although the version using this sighash is clearly
> superior) using ZmnSCPxj's proposal Smart Contracts Unchained (SCU) which
> Ben Carman has cleverly given the name SCUE'd Eltoo.
> ## Introduction
> ### Eltoo / ANYPREVOUT
> Eltoo is a proposal for a new (and generally improved) way of doing
> Lightning channels which also allows for multi-party channels (and channel
> factories). I am by no means fluent in the going's on of eltoo and
> anyprevout so I will link and
> My understanding is that
> at a high level, rather than using a penalty mechanism to update channel
> states, sighash_anyprevout is used to make any old commitment transaction
> spendable by any newer commitment transaction so that old revoked states
> can be updated on-chain instead of relying on a punishment mechanism.
> Benefits of this scheme include but are not limited to easier watchtower
> implementations, static partial backups, and multi-party channels.
> ### Smart Contracts Unchained (SCU)
> I strongly recommend the reader read this write up by ZmnSCPxj before
> continuing
> At a high level the idea is to use a participant-chosen "federation" of
> "escrows" which can be thought of as virtual machines which understand
> contracts written in some language and which enforce said contracts by
> giving users signatures of transactions that are produced by these
> contracts. A general goal of SCU is to be trust-minimizing and as private
> as possible. For example, escrows should not be able to see that they are
> being used if there are no disputes, among other considerations that can be
> made to make SCU Escrows as oblivious as possible (discussed further below).
> ## Proposal (Un-Optimized)
> At a high level, this proposal is to replace the use of ANYPREVOUT with a
> federation of SCU Escrows which will enforce state updates by only
> generating signatures to spend older states with newer ones.
> I will work in the general context of multi-party channels but all of this
> works just as well in two-party (Lightning) channels.
> Say that we have N parties who wish to enter into a multi-party channel
> (aka channel factory). Each participant has a public key P_i and together
> they do a distributed key generation (DKG) of some kind to reach some
> shared secret x (for example, each party contributes a commitment to a
> random number and then that random number, MuSig style, and the sum of
> these random numbers constitutes the shared secret). This x will be used to
> derive a sequence of (shared) key pairs (x_k, X_k) (for example this can be
> done by having x_k = PRNG(x, k)).
> Let State(k) be some agreed upon commitment of the channel state at update
> k (for example, HMAC(k, kth State Tx outputs)). State(0) is a commitment to
> 0 and the initial channel balances.
> Let Delta be some CSV timelock.
> For the sake of simplicity, let us consider the case where only a single
> SCU escrow is used which has public key E, but note that all of the
> following 

Re: [Lightning-dev] Collaborated stealing. What happens when the final recipient discloses the pre-image

2020-07-29 Thread Christian Decker
It might be worth mentioning here that the wormhole attack can also just
be considered a more efficient way of routing a payment over fewer hops,
freeing funds in channels that have been skipped by failing them even
though the overall payment has not been completed.

This is why I hesitate to even call it an attack in the first place: if
the skipped hops free their HTLCs, which the skipping entity that
controls both endpoints of the shortcut is encouraged to do in order to
free its own reserved funds, we are increasing the efficiency of the
network.

As ZmnSCPxj correctly points out this requires the attacker to be able
to collate HTLCs, which goes away with PTLCs. However even today we're
not worse off by nodes exploiting this.


ZmnSCPxj via Lightning-dev  writes:
> Good morning Ankit,
> I believe what you describe is a specific form of what is called the Wormhole 
> attack.
> In the general form, the Wormhole attack has two forwarding nodes in a path 
> that are coordinating with each other, and they can "teleport" the preimage 
> from one to the other, skipping intermediate forwarding nodes.
> The case you describe is the specific case where one of the nodes performing 
> this attack on a path is the payee itself.
> What is stolen here is not the payment amount, but the fees that the 
> "skipped" forwarding nodes should have earned for honestly forwarding.
> On the other hand, in that case, it is simply a form of the griefing attack: 
> C and E are able to  cause D to lock its funds into HTLCs without earning 
> fees, but C and E can mount that attack at any time regardless of A or B 
> anyway, so it is not an additional attack surface on D.
> At a high level, this attack is not a concern.
> As long as A is able to acquire the preimage, it has proof of payment, and it 
> is immaterial *how* A managed to get the preimage, as Rene describes.
> Even if E claims that it did not deliberately give the preimage and that it 
> was hacked by C, then it is C who is liable, in which case C and E, being a 
> cooperating team, have gained nothing at all (and just made C angry at E for 
> throwing C under the bus).
> Basically, the preimage *is* the proof.
> There are only two things you need to do:
> * Ensure that invoices are signed by E (meaning E agreed to perform some 
> service if the preimage is revealed by anyone).
>   BOLT11 already requires this.
> * Ensure that invoices indicate *who exactly* is going to get the service or 
> product.
>   Since the preimage is learned by every intermediate hop, it cannot be a 
> bearer certificate, so it must indicate specifically that the beneficiary of 
> the product or service will be A.
> With the above, A can be sure that paying in exchange for getting the 
> preimage, is a binding contract on the service indicated by the invoice.
> The preimage and the invoice (that has a signature from E), are sufficient to 
> show that E has an obligation to provide a service or product to A.
> The wormhole attack (which steals fees from D) is fixed by using PTLCs and 
> blinding factors.
> E learns the total of all blinding factors, and knows the final scalar, but 
> does not know the blinding factor delta from C to E, and thus cannot give C 
> any information on how to claim the funds.
> Regards,
> ZmnSCPxj
>> Hey Ankit, 
>> The lightning network sees the possession of a preimage as a proof of 
>> payment. And I believe everyone agrees that a court should rule in favor of 
>> A forcing E to deliver the good or reimburse A. The reason is that 
>> possession of the preimage matching the signed payment hash from E is a much 
>> stronger evidence of A actually having paid than E claiming to not have 
>> received anything. 
>> This is also due to the fact that guessing the preimage can practically be 
>> considered impossible (though there is a tiny likelihood) 
>> If E breaches the protocol by giving the preimage to C (for free) instead of 
>> claiming the money from D (and thus settling the Htlc) it will be considered 
>> E's problem, that E did not get reimbursed but just gave out the preimage 
>> for free. (actually E's so called "partner in crime" did get reimbursed). 
>> Even if D would testify that E never settled the Htlc one would wonder why E 
>> never settled the incoming htlc as they should only have created a payment 
>> hash for which they know the preimage. Since A can actually provide one it 
>> is again unlikely if E for example claims they just used a random hash for 
>> which they didn't know the preimage because they wanted to just see if A has 
>> enough liquidity. 
>> With kind regards Rene
>> Ankit Gangwal  schrieb am Fr., 17. Juli 2020, 08:43:
>> > Consider A wants to send some funds to E.
>> >
>> > They don’t have a direct payment channel among them. So, they use a 
>> > following path A-B-C-D-E. A is the sender of payment and E is final 
>> > recipient.
>> >
>> > E sends the hash of a secret r to A, A passes 

[Lightning-dev] Specification Meeting 2020/05/25

2020-05-23 Thread Christian Decker
Dear Fellow Bolters,

the next Lightning Network specification meeting will be this Monday at
20:00 UTC [1]. The current agenda [2] is still a bit light on issues and
PRs, which gives us some time to spend on longer term goals, and
extended discussions. If there are issues that need to be discussed
feel free to add them to the agenda (comment and I'll add them to the
list), otherwise it'd be good to read up on the longer term topics in
the agenda.

The current agenda looks like this at the moment:

# Pull Request Review
 - Update BOLT 3 transaction test vectors to use static_remotekey #758

# Long Term Updates
 - Trampoline routing #654 (@t-bast)
 - Mempool tx pinning attack (@ariard @TheBlueMatt)
 - Anchor outputs #688 (@joostjager)
 - Blinded paths #765 (@t-bast)

# Backlog
The following are topics that we should discuss at some point, so if we
have time to discuss them, great; otherwise they slip to the next
meeting:

 - Upfront payments / DoS protection
 - Hornet (@cfromknecht)

See you all on Monday, and have a great weekend ^^



Re: [Lightning-dev] Sphinx Rendezvous Update

2020-03-02 Thread Christian Decker
Hi Bastien,

thanks for verifying my proposal, and I do share your concerns regarding
privacy leaks (how many hops are encoded in the onion) and success ratio
if a payment is based on a fixed (partial) path.

> I believe this makes it quite usable in Bolt 11 invoices, without blowing up
> the size of the QR code (but more experimentation is needed on that).

It becomes a tradeoff of how small you want your onion to be, and how
many hops the partial onion can have. For longer partial onions we're
getting close to the current full onion size, but I expect most partial
onion to be close to the network diameter of ~6 (excluding degerenate
chains). So the example below with 5 hops seemed realistic, and dropping
the legacy format in favor of TLVs we can get a couple of bytes back as

>> As an example such an onion, with 5 legacy hops (65 byte each) results
>> in a 325 + 66 bytes onion, and we save 975 bytes.
> While having flexibility when choosing the length of the prefill
> stream feels nice, wouldn't it be safer to impose a fixed size to
> avoid any kind of heuristic at `RV` to try to guess how many hops
> there are between him and the recipient?

I'm currently just using the maximum size, which is an obvious privacy
leak, but I'm also planning on exposing the size to be prefilled (and
hence cropped out when compressing) at generation time. Ideally we'd
have a couple of presets, i.e., 1/4, 2/4, and 3/4, and adhere to them,
randomizing which one we pick.

Having smaller partial onions would enable my stretch goal of being able
to chain multiple partial onions, though that might be a useless
achievement to unlock xD

>> Compute a shared secret using a random ephemeral private key and
>> `RV`s public key, and then generate a prefill-key
> While implementing, I felt that the part about the shared secret used
> to generate the prefill stream is a bit blurry (your proposal on
> Github doesn't phrase it the same way). I think it's important to
> stress that this secret is derived from both `ephkey` and `RV`'s
> private key, so that `RV+1` can't compute the same stream.

I noticed the same while implementing the decompress stage, which
requires the node ID from `RV` during generation, and performs ECDH +
HKDF with the `RV` node private and the ephemeral key in the *next*
onion, i.e., the one extracted from the payload itself. This is
necessary since the ephemeral key on the incoming onion, which delivered
the partial onion in its payload is not controlled by the partial onion
creator, while the one in the partial onion is.

This means that the ephemeral key in the partial onion is used twice:

 - Once by `RV` to generate the obfuscation stream to fill in the gap
 - As part of the reconstructed onion, processed by `RV+1` to decode the

I'm convinced this is secure and doesn't leak information since
otherwise transporting the ephemeral key publicly would be insecure
(`RV+1` can't generate the obfuscation secret used to fill in the gap
without access to `RV`s private key), and the ephemeral key is only
transmitted in cleartext once (from `RV` to `RV+1`), otherwise it is
hidden in the outer onion.

> Another thing that may be worth mentioning is error forwarding. Since
> the recipient generated the onion, `RV` won't have the shared secrets
> (that's by design). So it's expected that payment errors won't be
> readable by `RV`, but it's probably a good idea if `RV` returns an
> indication to the sender that the payment failed *after* the
> rendezvous point.

Indeed, this is pretty much by design, since otherwise the sender could
provoke errors, e.g., consuming all of `RV`s outgoing capacity with
probes to get back temporary channel failure errors for the channel that
was encoded in the partial onion, and then do that iteratively until we
have identified the real destination which we weren't supposed to learn.

So any error beyond `RV` should be treated by the sender as "rendez-vous
failed, discard partial onion".

> An important side-note is that your proposal is really quick and
> simple to implement from the existing Sphinx code. I have made ASCII
> diagrams of the scheme (see [1]).  This may help readers visualize it
> more easily.

I quickly skimmed the drawings and they're very helpful for
understanding how the regions overlap; that was my main problem with the
whole Sphinx construction, so thanks for taking the time :+1:

> It still has the issue that each hop's amount/cltv is fixed at invoice
> generation time by the recipient. That means MPP cannot be used, and
> if any channel along the path updates their fee the partial onion
> becomes invalid (unless you overpay the fees).
> Trampoline should be able to address that since it provides more
> freedom to each trampoline node to find an efficient way to forward to
> the next trampoline.  It's not yet obvious to me how I can mix these
> two proposals to make it work though.  I'll spend more time
> experimenting with that.

True, I think rendez-vous routing 

Re: [Lightning-dev] Sphinx Rendezvous Update

2020-02-24 Thread Christian Decker
Hi Bastien,

It seems you were a bit quicker than I was with the writeup of my
proposal. I came up with a scheme that allows us to drop a large part of
the partial onion, so that it can indeed fit into an outer onion, and
the rendez-vous node RV can re-construct the original packet from the
included data [1].

The construction comes down to initializing the part of the routing
info string that is not going to be used in such a way that the
incremental unwrappings at the nodes in the partial onion cancel out.
As you mentioned in your mail, it comes down to extending the filler
generation to also cover the unused part, and then applying the XOR of
all the encryption streams to the unused space. By doing this the
middle part of the onion ends up consisting of only 0x00 bytes.

I then decided to apply an additional ChaCha20 stream to this prefill,
such that the onion will not consist of mostly 0x00 bytes which would be
a dead giveaway to `RV+1` that `RV` was a rendez-vous node.

The process for the partial onion creator boils down to:

 - Compute a path from `RV` of its choice to recipient `R`.
 - Compute a shared secret using a random ephemeral private key and
   `RV`'s public key, and then generate a prefill-key
 - Compute the prefill by combining the correct substrings of the
   encryption streams for the nodes along the path, then add the
   ChaCha20 stream keyed with the prefill-key.
 - Wrap the onion, including payloads for each of the nodes along the
   path from `RV` to `R`
 - Trim out the unused space, which now will match the obfuscation
   stream generated with the prefill-key
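The prefill computation above can be sketched roughly as follows (the keystream here is a SHA256-based stand-in for the rho-keyed ChaCha20 streams of the actual Sphinx construction; function names and parameters are illustrative):

```python
# Hedged sketch of the prefill: XOR together the substrings of each
# hop's encryption stream that fall into the unused region, then add the
# prefill-keyed stream so the trimmed region isn't obviously all 0x00.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic stand-in for a ChaCha20 stream keyed with `key`."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def prefill(hop_keys, offsets, unused_len, prefill_key):
    """Combine the substrings of the per-hop encryption streams covering
    the unused region, then apply the prefill-keyed obfuscation stream."""
    acc = bytes(unused_len)
    for key, off in zip(hop_keys, offsets):
        stream = keystream(key, off + unused_len)
        acc = xor(acc, stream[off:off + unused_len])
    return xor(acc, keystream(prefill_key, unused_len))
```

Since everything is XOR-based, `RV` can later regenerate the identical prefill from the shared secret alone and re-insert it.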

As an example, such an onion with 5 legacy hops (65 bytes each) results
in a 325 + 66 byte onion, saving 975 bytes. See [2] for what this looks
like.

The sender `S` then just does the following:

 - Compute a route from `S` to `RV`
 - Build an onion with the route, specifying the trimmed partial onion
   as payload, along with the usual parameters, for `RV`
 - Initiate payment with the constructed onion

Upon receiving an incoming HTLC with a partial onion the rendez-vous
node `RV` then just does the following:

 - Verify all parameters as usual
 - Extract the partial onion
 - Use the ephemeral key from the partial onion to generate the shared
   secret and the prefill key
 - Generate the prefill stream and insert it in the correct place,
   before the HMAC. This reconstitutes the original routing packet
 - Swap out the original onion with the reconstituted onion and forward.
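The reconstruction step boils down to re-inserting the regenerated prefill into the trimmed packet (a minimal sketch, assuming the 1300-byte routing-info size of BOLT 4 legacy onions; the insertion offset and helper name are illustrative):

```python
# Hedged sketch of RV's reconstitution: the trimmed partial onion plus
# the regenerated prefill stream must restore the fixed-size packet.
def reconstitute(trimmed: bytes, prefill_stream: bytes, insert_at: int,
                 full_len: int = 1300) -> bytes:
    """Re-insert the prefill at `insert_at` (the start of the region the
    creator trimmed out), restoring the original routing info."""
    assert len(trimmed) + len(prefill_stream) == full_len
    return trimmed[:insert_at] + prefill_stream + trimmed[insert_at:]

packet = reconstitute(bytes(391 - 66), bytes(1300 - 325), 325)
assert len(packet) == 1300  # back to the full-size routing info
```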

My writeup [1] is an early draft, but I wanted to get it out early to
give the discussion a basis to work off. I'll revisit it a couple of
times before opening a PR, but feel free to shout at me if I have
forgotten to consider something :-)



Bastien TEINTURIER via Lightning-dev
> Good morning list,
>
> After exploring decoys [1], which is a cheap way of doing route
> blinding, I'm turning back to exploring rendezvous.
>
> The previous mails on the mailing list mentioned that there was a
> technicality to make the HMACs check out, but didn't provide a lot of
> details. The issue is that the filler generation needs to take into
> account some hops that will be added *later*, by the payer. However it
> is quite easy to work around, with a few space trade-offs.
>
> Let's consider a typical rendezvous setup, where Alice wants to be
> paid via rendezvous Bob, and Carol wants to pay that invoice:
>
> Carol -> ... -> Bob -> ... -> Alice
>
> If Alice knows how many bytes Carol is going to use for her part of
> the onion payloads, Alice can easily take them into account when
> generating her filler by pre-pending the same amount of `0` bytes. It
> seems reasonable to impose a fixed number of onion bytes for each side
> of the rendezvous (650 each?) so Alice would know that amount.
>
> When Carol completes the onion with her part of the route, she simply
> needs to generate filler data for her part of the route following the
> normal Sphinx protocol and apply it to the onion she found in the
> invoice. But the tricky part is that she needs to give Bob a way of
> generating the same filler data to unapply it. Then all HMACs
> correctly check out. I see two ways of doing that:
>
> * Carol simply sends that filler (650 bytes), probably via a TLV in
>   `update_add_htlc`. This means every intermediate hop needs to
>   forward that, which is painful and potentially leaks too much data.
> * Carol provides Bob with the rho keys used to generate her filler,
>   and the length used by each hop. This leaks to Bob an upper bound on
>   the number of hops and the number of bytes sent to each hop.
>
> Since shift-and-xor kind of crypto is hard to read as equations, but
> very easy to read as diagrams, I spent a bit of time doing beautiful
> ASCII art [2]. Don't 

Re: [Lightning-dev] Direct Message draft

2020-02-21 Thread Christian Decker
Rusty Russell  writes:

>> Would it not be better to create a circular path?  By this I mean,
>> Alice constructs an onion that overall creates a path from herself to
>> Bob and back, ensuring different nodes on the forward and return
>> directions.  The onion hop at Bob reveals that Bob is the chosen
>> conversation partner, and Bob forwards its reply via the onion return
>> path (that Alice prepared herself to get back to her via another
>> path).
> I like it!  The lack of "reply" function eliminates all storage
> requirements for the intermediaries.  Unfortunately it's not currently
> possible to fit the reply onion inside the existing onion, but I know
> Christian has a rabbit in his hat for this?

I think a circular payment really means an onion whose path is

> A -> ... -> B -> ... -> A

and not a reply onion inside of a forward onion.

The problem with the circular path is that the "recipient" cannot add
any reply without invalidating the HMACs on the return leg of the
onion. The onion is fully predetermined by the sender; any malleability
introduced in order to allow the recipient to reply poses a threat to
the integrity of the onion routing, e.g., it opens us up to probing by
fiddling with parts of the onion until the attacker identifies the
location the recipient is supposed to put their reply into.

As Rusty mentioned I have a construction of the onion routing packet
that allows us to compress it in such a way that it fits inside of the
payload itself. I'll write up a complete proposal over the coming days,
but the basic idea is to initialize the unused part of the onion in such
a way that it cancels out the layers of encryption and the fully wrapped
onion consists of all `0x00` bytes. These can then be removed resulting
in a compressed onion, and the sender can simply add the padding 0x00
bytes back to get the original, fully HMACd onion, and then send it like
normal (there is an obfuscation step to hide the `0x00` bytes from the
next hop, but more on this in the full rendez-vous proposal later).
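The cancellation trick can be illustrated with a toy example (SHA256-based stand-in streams instead of the real per-hop ChaCha20 streams; everything here is illustrative):

```python
# Toy demonstration: pre-initialize a region with the XOR of all the
# layer streams, so that after wrapping every layer the region is all
# 0x00 bytes and can be trimmed out, then re-added by the sender.
import hashlib

def stream(key: bytes, n: int) -> bytes:
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(key + bytes([i])).digest()
        i += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keys = [b"hop1", b"hop2", b"hop3"]
region = bytes(32)
for k in keys:                 # prefill = XOR of all the layer streams
    region = xor(region, stream(k, 32))
for k in keys:                 # wrapping applies each stream once more
    region = xor(region, stream(k, 32))
assert region == bytes(32)     # cancels out: trim, send, re-pad later
```

Each stream is applied exactly twice (once in the prefill, once during wrapping), so the XORs cancel regardless of the stream contents.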

This rendez-vous construction is a bit more involved since we want to
fit an onion into another onion of the same size. If we design a
completely new messaging system, requiring end-to-end communication, it
might be worth re-introducing the end-to-end payload which we removed in
the routing onion. It's a simple, variable- or fixed-length payload
that is onion-decrypted at each hop and whose contents are revealed to
the destination (this is how onion routing usually works). Since that
payload doesn't have to adhere to the constraints of the routing onions
(multiple payloads, one for each hop, and no special larger payload
destined for the final recipient) this is both simpler and would allow
us to store a full, unmodified return-onion in the end-to-end payload.

Another advantage is that the end-to-end payload is not covered by the
HMACs in the header, meaning that the recipient can construct a reply
without having to modify (and invalidate) the routing onion. I guess
this is going back to the roots of the Sphinx paper :-)

Might be worth a consideration, as it seems to me like it'd be simpler
:-) The downside of course is that we'd end up with two different onion
constructions for different use-cases.

>> After Alice receives the first message from Bob the circular
>> "circuit" is established and they can continue to communicate using
>> the same circuit: timing attacks are now "impossible" since Alice and
>> Bob can be anywhere along the circle, even if two of the nodes in the
>> circuit are surveillors cooperating with each other, the timing
>> information they get is the distance between the surveillor nodes.
>> Of course, if a node in the circular path drops the circuit is
>> disrupted, so any higher-level protocols on top of that should
>> probably be willing to resume the conversation on another circular
>> circuit.
> My immediate purpose for this is for "offers" which cause a invoice
> request, followed by an invoice reply.  This will probably be reused
> once for the payment itself.  2 uses is not sufficient to justify
> setting up a circuit, AFAICT.

I know someone who is itching to implement HORNET for these use-cases

Lightning-dev mailing list

[Lightning-dev] Lightning Spec Meeting 2020/02/17

2020-02-14 Thread Christian Decker
Dear Fellow Protocol Devs,

the next meeting is this Monday (2020/02/17), and to facilitate review
and meeting preparations we have prepared a short agenda [1].

The open topic discussions during the last two iterations have been very
interesting, albeit a bit long, so this time we decided to limit
ourselves to just two topics, and track other topics of interest in a
backlog section. If there is time we can start discussing entries from
the backlog section, however it is likely better to concentrate well on
a few topics rather than touching many briefly. Let me know if you
agree, and of course any other feedback is more than welcome.

The agenda is not yet final, so if there are topics people are eager to
discuss please let me know either on GH or reply to this mail :-)

So far the agenda looks as follows:

The meeting will take place on Monday 2020/02/17 on IRC in
#lightning-dev. It is open to the 

## Pull Request Review
- [ ] A home for BOLT #738
- [ ] Reply channel range simplification #737
- [ ] BOLT11 additional and negative tests #736
- [ ] Avoid stuck channels after fee increase #740

## Long Term Updates
- [ ] Protocol testing framework (@rustyrussell). See the `tools/` and 
`tests/events` directories in 
 for details.
- [ ] Poor man's rendez-vous routing (@t-bast). See [mail by 
 and [gist]( 
for details. (Since this is a fresh proposal still in everybody's mind, it
moved up the list to minimize loss of context.)

## Backlog
The following are topics that we should discuss at some point; if we have 
time to discuss them, great, otherwise they slip to the next meeting.

- [ ] Current status of the trampoline routing proposal (@t-bast)
- [ ] How can we improve the gossip (gossip_queries_ex)? (@sstone )

Looking forward to Monday's meeting :-)



Re: [Lightning-dev] Sphinx and Push Notifications

2020-02-04 Thread Christian Decker
darosior via Lightning-dev  writes:
> Hi Pavol,
>> 1) Is c-lightning going to support Sphinx or other form of
>> spontaneous payments?
> I think cdecker is working on integrating keysend to his noise plugin
> (

The keysend functionality is implemented in the noise plugin, and I am
planning to pull the keysend part out of the plugin, since that part is
really trivial to implement (an `htlc_accepted` hook that checks the
payment_hash against the preimage in the onion, then tells `lightningd`
to resolve directly).
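The core of that hook is just a hash comparison. A minimal sketch (the TLV type 5482373484 is the keysend convention; the plugin scaffolding and TLV parsing are omitted, and the function shape is illustrative rather than the actual plugin API):

```python
# Hedged sketch of a keysend check: the preimage travels in a TLV field
# of the final onion payload, and the HTLC is resolved if it hashes to
# the payment_hash.
import hashlib

KEYSEND_TLV_TYPE = 5482373484  # keysend preimage TLV type

def htlc_accepted(payment_hash: bytes, payload_tlvs: dict):
    preimage = payload_tlvs.get(KEYSEND_TLV_TYPE)
    if preimage and hashlib.sha256(preimage).digest() == payment_hash:
        # Tell lightningd to resolve the HTLC with this preimage.
        return {"result": "resolve", "payment_key": preimage.hex()}
    return {"result": "continue"}  # not a keysend, normal processing

preimage = b"\x01" * 32
res = htlc_accepted(hashlib.sha256(preimage).digest(),
                    {KEYSEND_TLV_TYPE: preimage})
assert res["result"] == "resolve"
```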

As a side note: Sphinx-send is a terrible misnomer, since Sphinx is the
name of our onion construction; keysend is the proper name to use in
this case.

>> 2) Can a lightning node (such as lnd or c-lightning) send a push
>> notification (e.g. to a webhook) when it receives or routes a
>> payment? If yes, is this notification cryptographically signed (for
>> example with the node's private key)? Is this documented somewhere?
> C-lightning sends notifications (and hooks, but it doesn't seem to be
> your usecase here) for typical events such as "I received an HTLC
> !". You can make a plugin which registers to these lightningd
> notifications sends encrypted push notifs. Doc here
> :-).

You can have a plugin subscribe to HTLC-related events (such as
`forward_event` [1] or `invoice_payment` [2]) to get notified about
forwardings or invoices being paid. What you do with that notification
then is up to you. It could queue the event in Kafka, call out to a
webhook, or log a message with a log management system. You can
arbitrarily transform the event in the plugin, including issuing calls
to `signmessage`, which will create a signature for the event message,
thus allowing you to prove the authenticity of the message. You'd most
likely need to canonicalize the message before signing, since JSON is
not the best format for canonical serialization, i.e., decoding and
re-encoding can result in subtle changes, which could then fail
signature verification, but that should not be a major issue.
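A minimal sketch of such a canonicalization step (sorted keys and fixed separators; purely illustrative, not what any existing plugin does):

```python
# Hedged sketch: serialize the event deterministically before signing,
# so the verifier can re-serialize and get byte-identical input.
import json, hashlib

def canonicalize(event: dict) -> bytes:
    return json.dumps(event, sort_keys=True,
                      separators=(",", ":")).encode("utf-8")

a = canonicalize({"amount_msat": 1000, "tag": "forward_event"})
b = canonicalize({"tag": "forward_event", "amount_msat": 1000})
assert a == b  # key order no longer affects the signed message
digest = hashlib.sha256(a).hexdigest()  # e.g. pass this to `signmessage`
```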


Re: [Lightning-dev] Not revealing the channel capacity during opening of channel in lightning network

2020-01-29 Thread Christian Decker
Matt Corallo  writes:
> Right, but there are approaches that are not as susceptible - an
> obvious, albeit somewhat naive, approach would be to define a fixed and
> proportional max fee, and pick a random (with some privacy properties eg
> biasing towards old or good-reputation nodes, routing across nodes
> hosted on different ISPs/Tor/across continents, etc) route that pays no
> more than those fees unless no such route is available. You could
> imagine hard-coding such fees to "fees that are generally available on
> the network as observed in the real world".

This is sort of what we do already in c-lightning, namely we set up a
fee budget of 0.5% and then select a random route within this
constraint. On top of that we also fuzz the amount and other parameters
within this budget in order to obfuscate the distance to the recipient,
i.e., slightly overpaying the recipient to simulate a shadow route.

So while the fees are not fixed in the network, we built our own
fuzzing on top of the real fees. The rationale behind this is that users
will simply not care to optimize down to the satoshi, and the resulting
randomization helps privacy. We don't have real numbers, but recent
research results show that attempting to squeeze out the very last bit
of fees has a detrimental effect on sender-receiver privacy
(surprise...).
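The behaviour described above can be sketched roughly as follows (the 0.5% budget is from the text; function names, the route representation, and the fuzz range are made up for this sketch):

```python
# Hedged sketch: pick a random route within a fee budget and fuzz the
# amount slightly to simulate a shadow route.
import random

FEE_BUDGET = 0.005  # 0.5% of the payment amount

def pick_route(candidate_routes, amount_msat, rng=random):
    """Pick a random affordable route and a slightly fuzzed amount."""
    affordable = [r for r in candidate_routes
                  if r["fee_msat"] <= amount_msat * FEE_BUDGET]
    if not affordable:
        return None
    route = rng.choice(affordable)
    # Slightly overpay the recipient to obfuscate the route length.
    fuzzed = amount_msat + rng.randrange(0, amount_msat // 100 + 1)
    return route, fuzzed

routes = [{"fee_msat": 40, "hops": 3}, {"fee_msat": 900, "hops": 2}]
picked = pick_route(routes, 10_000)
assert picked is not None and picked[0]["fee_msat"] <= 50
```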


[Lightning-dev] Lightning Spec Meeting 2020/01/20

2020-01-17 Thread Christian Decker
Dear Fellow Protocol Devs,

our next meeting is this Monday (2020/01/20) and I thought I'd try to
follow up on my new years resolution to add some more structure to the
spec meetings. I drafted an agenda for Monday [1], hoping to give
everybody a couple of days before the meeting to get up to speed with
the open Issues and Pull Requests that are going to be discussed. My
hope is that this speeds up the actual process of agreeing on or
discarding individual proposals, and keeps the short time we can meet as
productive as possible.

I kept the list of Issues and PRs short on purpose, to allow a good
discussion, and reduce the ACK / NACK slog to a minimum. In addition I
added a couple of items for longer term discussions.

This week @niftynei and @t-bast have agreed to give a short status
update on the dual-funding and the trampoline proposals respectively. I
think we could add a discussion of research results to future meetings,
to balance both short-term and long-term goals.

The document is a tentative agenda, so if there's something missing that
should be discussed, or you think is urgent, please let me know as a
comment in the document or here. However keep in mind that issues or PRs
on the agenda should be actionable (not require more than ~5 minutes of
discussion) :-)



Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

2019-12-04 Thread Christian Decker
That is correct, the chain of noinput/anyprevout transactions is broken
as soon as the signers are online and can interactively bind and sign
without noinput/anyprevout.

Conner Fromknecht  writes:

> Good evening,
>> I didn't think this was the design.  The update transaction can spend any
> prior, with a fixed script, due to NOINPUT.
> From my reading of the final construction, each update transaction has a
> unique script to bind settlement transactions to exactly one update.
>> My understanding is that this is not logically possible?
> The update transaction has no fixed txid until it commits to a particular
> output-to-be-spent, which is either the funding/kickoff txout, or a
> lower-`nLockTime` update transaction output.
>> Thus a settlement transaction *must* use `NOINPUT` as well, as it has no
> txid it can spend, if it is constrained to spend a particular update
> transaction.
> This is also my understanding. Any presigned descendants of a NOINPUT txn
> must also use NOINPUT as well. This chain must continue until a signer is
> online to bind a txn to a confirmed input. The unique settlement keys thus
> prevent rebinding of settlement txns since NOINPUT with a shared script
> would be too liberal.
> Cheers,
> Conner
> On Mon, Dec 2, 2019 at 18:55 ZmnSCPxj  wrote:
>> Good morning Rusty,
>> > > Hi all,
>> > > I recently revisited the eltoo paper and noticed some things related
>> > > watchtowers that might affect channel construction.
>> > > Due to NOINPUT, any update transaction can spend from any other, so
>> > > in theory the tower only needs the most recent update txn to resolve
>> > > any dispute.
>> > > In order to spend, however, the tower must also produce a witness
>> > > script which when hashed matches the witness program of the input. To
>> > > ensure settlement txns can only spend from exactly one update txn,
>> > > each update txn uses unique keys for the settlement clause, meaning
>> > > that each state has a unique witness program.
>> >
>> > I didn't think this was the design. The update transaction can spend
>> > any prior, with a fixed script, due to NOINPUT.
>> >
>> > The settlement transaction does not use NOINPUT, and thus can only
>> > spend the matching update.
>> My understanding is that this is not logically possible?
>> The update transaction has no fixed txid until it commits to a particular
>> output-to-be-spent, which is either the funding/kickoff txout, or a
>> lower-`nLockTime` update transaction output.
>> Thus a settlement transaction *must* use `NOINPUT` as well, as it has no
>> txid it can spend, if it is constrained to spend a particular update
>> transaction.
>> Unless I misunderstand how update transactions work, or what settlement
>> transactions are.
>> Regards,
>> ZmnSCPxj
> -- 
> —Sent from my Spaceship

Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

2019-12-04 Thread Christian Decker
(I wrote this a couple of days ago but forgot to send it, sorry for that)

Hi Conner,

thanks for looking into this. I hadn't really thought too much about
watchtowers while writing the paper, so there might definitely be things
I hadn't considered. I fail to see where the watchtower needs to
generate the witness script if it's given the update transaction and the
matching settlement transaction (see deployment option 2 below).

There are a couple of deployment options for watchtowers, from simple
forward ratcheting to fully settling watchtowers. As you correctly
point out, if the watchtower just ratchets the state forward, all it
needs is the latest update transaction, which is bindable to any prior
update transaction, and therefore the per-channel state is a single
update transaction. The channel operator would then come back at a later
time, after the watchtower has ratcheted forward and prevented any cheat
attempt by the counterparty, and just release the latest settlement tx.

This is the model I had in mind when writing the paper, since it has
constant per-channel state on the watchtower, independent of the number
of updates and of the size of the state (HTLCs, simple outputs, ...)
attached to that settlement. This is safe because the operator knows the
latest point at which it has to check back in to settle HTLCs built on
top, since they have absolute locktimes (this is not true if we start
building relative-locktime constructs on top of eltoo channels, but
let's keep it simple for now).
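The forward-ratcheting model can be sketched like this (illustrative only; a real tower would verify signatures and watch the chain, and the state representation here is made up):

```python
# Hedged sketch of a ratcheting watchtower: exactly one update
# transaction is kept per channel, replaced whenever a newer state
# (higher state number / nLockTime) arrives.
class RatchetTower:
    def __init__(self):
        self.latest = {}  # channel_id -> (state_num, update_tx_bytes)

    def ratchet(self, channel_id, state_num, update_tx):
        cur = self.latest.get(channel_id)
        if cur is None or state_num > cur[0]:
            self.latest[channel_id] = (state_num, update_tx)

    def dispute(self, channel_id, published_state_num):
        """On seeing an old state on-chain, answer with the newest
        update tx, which NOINPUT lets us bind to any prior update."""
        state_num, tx = self.latest[channel_id]
        return tx if state_num > published_state_num else None

tower = RatchetTower()
tower.ratchet("chan0", 5, b"tx5")
tower.ratchet("chan0", 3, b"tx3")        # stale update is ignored
assert tower.dispute("chan0", 4) == b"tx5"
```

Note that the per-channel storage is constant: a single (state number, transaction) pair regardless of how many updates the channel has seen.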

The second deployment option is to give the watchtower the settlement
transaction along with the update transaction. The settlement
transaction is fully signed and uses noinput/anyprevout to bind to the
update, so the bundle of update and settlement transactions is
broadcastable right away, no need to produce any scripts or
signatures. This ensures that we at least drop the correct state
on-chain, but comes at the cost of the watchtower learning intermediate
states, or at least the size of the state (number of outputs) if we
encrypt it.

A third deployment option would be to give the watchtower the ability
to further settle things we built on top of the base eltoo contract,
such as HTLCs, but at that point we are leaking a lot of information,
watchtowers become very complex, and we lose the flexibility of having
clear layering. If we are aiming for this third option, the watchtower
would indeed also need the ability to bind the HTLC settlement, or
whatever we build on top of eltoo, which implies they'd also use
noinput/anyprevout, but that's hardly an issue, as long as the binding
is unique.


Conner Fromknecht  writes:
> Hi all,
> I recently revisited the eltoo paper and noticed some things related
> watchtowers that might affect channel construction.
> Due to NOINPUT, any update transaction _can_ spend from any other, so
> in theory the tower only needs the most recent update txn to resolve
> any dispute.
> In order to spend, however, the tower must also produce a witness
> script which when hashed matches the witness program of the input. To
> ensure settlement txns can only spend from exactly one update txn,
> each update txn uses unique keys for the settlement clause, meaning
> that each state has a _unique_ witness program.
> Naively then a tower could store settlement keys for all states,
> permitting it to reconstruct arbitrary witness scripts for any given
> sequence of confirmed update txns.
> So far, the only work around I’ve come up with to avoid this is to
> give the tower an extended parent pubkey for each party, and then
> derive non-hardened settlement keys on demand given the state numbers
> that get confirmed. It's not the most satisfactory solution though,
> since leaking one hot settlement key now compromises all sibling
> settlement keys.
> Spending the unique witness programs is mentioned somewhat in section
> 4.1.4, which refers to deriving keys via state numbers, but to me it
> reads mostly from the PoV of the counterparties and not a third-party
> service. Is requiring non-hardened keys a known consequence of the
> construction? Are there any alternative approaches folks are aware of?
> Cheers,
> Conner

Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-03 Thread Christian Decker
Anthony Towns  writes:

> On Mon, Sep 30, 2019 at 03:23:56PM +0200, Christian Decker via bitcoin-dev 
> wrote:
>> With the recently renewed interest in eltoo, a proof-of-concept 
>> implementation
>> [1], and the discussions regarding clean abstractions for off-chain protocols
>> [2,3], I thought it might be time to revisit the `sighash_noinput` proposal
>> (BIP-118 [4]), and AJ's `bip-anyprevout` proposal [5].
> Hey Christian, thanks for the write up!
>> ## Open questions
>> The questions that remain to be addressed are the following:
>> 1.  General agreement on the usefulness of noinput / anyprevoutanyscript /
>> anyprevout[?]
>> 2.  Is there strong support or opposition to the chaperone signatures[?]
>> 3.  The same for output tagging / explicit opt-in[?]
>> 4.  Shall we merge BIP-118 and bip-anyprevout. This would likely reduce the
>> confusion and make for simpler discussions in the end.
> I think there's an important open question you missed from this list:
> (1.5) do we really understand what the dangers of noinput/anyprevout-style
> constructions actually are?
> My impression on the first 3.5 q's is: (1) yes, (1.5) not really,
> (2) weak opposition for requiring chaperone sigs, (3) mixed (weak)
> support/opposition for output tagging.
> My thinking at the moment (subject to change!) is:
>  * anyprevout signatures make the address you're signing for less safe,
>which may cause you to lose funds when additional coins are sent to
>the same address; this can be avoided if handled with care (or if you
>don't care about losing funds in the event of address reuse)
>  * being able to guarantee that an address can never be signed for with
>an anyprevout signature is therefore valuable; so having it be opt-in
>at the tapscript level, rather than a sighash flag available for
>key-path spends is valuable (I call this "opt-in", but it's hidden
>until use via taproot rather than "explicit" as output tagging
>would be)
>  * receiving funds spent via an anyprevout signature does not involve any
>qualitatively new double-spending/malleability risks.
>(eltoo is unavoidably malleable if there are multiple update
>transactions (and chaperone signatures aren't used or are used with
>well known keys), but while it is better to avoid this where possible,
>it's something that's already easily dealt with simply by waiting
>for confirmations, and whether a transaction is malleable is always
>under the control of the sender not the receiver)
>  * as such, output tagging is also unnecessary, and there is also no
>need for users to mark anyprevout spends as "tainted" in order to
>wait for more confirmations than normal before considering those funds

Excellent points, I had missed the hidden nature of the opt-in via the
pubkey prefix while reading your proposal. I'm starting to like that
option more and more. In that case we'd only ever reveal that we opted
into anyprevout when we're revealing the entire script anyway, at which
point all fungibility concerns go out the window.

Would this scheme be extendable to opt into all sighash flags the
outpoint would like to allow (e.g., adding opt-in for sighash_none and
sighash_anyonecanpay as well)? That way the pubkey prefix could act as a
mask for the sighash flags and fail verification if they don't match.
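The mask idea could look roughly like this (a toy sketch; the flag value for anyprevout is a placeholder, since it wasn't assigned at the time, and the prefix-as-mask encoding is purely hypothetical):

```python
# Hedged sketch: interpret the pubkey prefix as a bitmask of permitted
# sighash flags and fail verification for any flag outside the mask.
SIGHASH_NONE = 0x02          # standard Bitcoin sighash flag
SIGHASH_ANYONECANPAY = 0x80  # standard Bitcoin sighash flag
SIGHASH_ANYPREVOUT = 0x40    # placeholder value for the proposed flag

def flags_allowed(prefix_mask: int, sighash_flags: int) -> bool:
    """Verification fails if the signature uses any flag bit the
    output's pubkey prefix did not opt into."""
    return (sighash_flags & ~prefix_mask) == 0

mask = SIGHASH_ANYPREVOUT | SIGHASH_ANYONECANPAY
assert flags_allowed(mask, SIGHASH_ANYPREVOUT)
assert not flags_allowed(mask, SIGHASH_NONE)
```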

> I think it might be good to have a public testnet (based on Richard Myers
> et al's signet2 work?) where we have some fake exchanges/merchants/etc
> and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
> can think of, and just work out if we need any extra code/tagging/whatever
> to keep those fake exchanges/merchants from losing money (and write up
> the weird cases we've found in a wiki or a paper so people can easily
> tell if we missed something obvious).

That'd be great, however even that will not ensure that every possible
corner case is handled, and from experience it seems that people are
unwilling to invest a lot of time testing on a network unless their
money is on the line. That's not to say that we shouldn't try, we
absolutely should, I'm just not sure it alone is enough to dispel all
remaining doubts :-)


Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-03 Thread Christian Decker
Chris Stewart  writes:

> I do have some concerns about SIGHASH_NOINPUT, mainly that it does
> introduce another footgun into the bitcoin protocol with address reuse.
> It's common practice for bitcoin businesses to re-use addresses. Many
> exchanges [1] reuse addresses for cold storage with very large sums of
> money that is stored in these addreses.
> It is my understanding with this part of BIP118
>>Using NOINPUT the input containing the signature no longer references a
> specific output. Any participant can take a transaction and rewrite it by
> changing the hash reference to the previous output, without invalidating
> the signatures. This allows transactions to be bound to any output that
> matches the value committed to in the witness and whose witnessProgram,
> combined with the spending transaction's witness returns true.
> if an exchange were to once produce a digital signature from that cold
> storage address with a SIGHASH_NOINPUT signature, that signature can be
> replayed again and again on the blockchain until their wallet is drained.
> This might be able to mitigated since the signatures commit to outputs,
> which may be small in value for the transaction that SIGHASH_NOINPUT was
> used. This means that an exchange could move coins from the address with a
> larger transaction that spends money to a new output (and presumably pays a
> higher fee than the smaller transactions).

Thanks for sharing your concerns, Chris. I do agree that noinput and
friends are a very sharp knife that needs to be treated carefully, but
ultimately it's exactly its sharpness that makes it useful :-)

> ### Why does this matter?
> It seems that SIGHASH_NOINPUT will be an extremely useful tool for offchain
> protocols like Lightning. This gives us the building blocks for enforcing
> specific offchain states to end up onchain [2].
> Since this tool is useful, we can presume that it will be integrated into
> the signing path of large economic entities in bitcoin -- namely exchanges.
> Many exchanges have specific signing procedures for transactions that are
> leaving an exchange that is custom software. Now -- presuming wide adoption
> of off chain protocols -- they will need to have a _second unique signing
> path that uses SIGHASH_NOINPUT_.
> It is imperative that this second signing path -- which uses
> SIGHASH_NOINPUT -- does NOT get mixed up with the first signing path that
> controls an exchanges onchain funds. If this were to happen, fund lost
> could occur if the exchange is reusing address, which seems to be common
> practice.

Totally agreed, and as you point out, BIP118 is careful to mandate that
separate private keys be used for off-chain contracts and that the
off-chain contract never be mixed with the remainder of your funds. In
the way eltoo uses noinput we selectively open ourselves up to replay
attacks (because that's what the update mechanism is, after all) by
controlling very carefully the way the transactions can be replayed, and
any other use of noinput would need to make sure to have the same
guarantees. However, once we have separated the two domains, we can
simply use a separate (hardened) derivation path from a seed key, and
never mix them afterwards. We never exchange any private keys, so even
leaking info across derived keys is not an issue here.

> This is stated here in BIP118:
>>This also means that particular care has to be taken in order to avoid
> unintentionally enabling this rebinding mechanism. NOINPUT MUST NOT be
> used, unless it is explicitly needed for the application, e.g., it MUST NOT
> be a default signing flag in a wallet implementation. Rebinding is only
> possible when the outputs the transaction may bind to all use the same
> public keys. Any public key that is used in a NOINPUT signature MUST only
> be used for outputs that the input may bind to, and they MUST NOT be used
> for transactions that the input may not bind to. For example an application
> SHOULD generate a new key-pair for the application instance using NOINPUT
> signatures and MUST NOT reuse them afterwards.
> This means we need to encourage onchain hot wallet signing procedures to be
> kept separate from offchain hot wallet signing procedures, which introduces
> more complexity for key management (two keychains).

This is already the case: off-chain systems always require real-time
access to the signing key in order to be useful. Any state change
performed in a channel, even just adjusting fees or receiving a payment,
requires a signature from the key associated with the channel. With
high security on-chain systems on the other hand you should never have a
hot key that automatically signs off on transfers without human
intervention. So I find it unlikely that mandating the on-chain keys to
be kept separate from off-chain keys is any harder than what should be
done with the current systems.

> One (of the few) upsides of the current Lightning penalty mechanism is that
> fund loss can be contained 

Re: [Lightning-dev] Continuing the discussion about noinput / anyprevout

2019-10-03 Thread Christian Decker
Chris Stewart  writes:

>> I don't find too compelling the potential problem of a 'bad wallet
> designer', whether lazy or dogmatic, misusing noinput. I think there are
> simpler ways to cut corners and there will always be plenty of good wallet
> options people can choose.
> In my original post, the business that I am talking about don't use "off
> the shelf" wallet options. It isn't a "let's switch from wallet A to wallet
> B" kind of situation. Usually this involves design from ground up with
> security considerations that businesses of scale need to consider (signing
> procedures and key handling being the most important!).

In this case I'd hope that the custom wallet designers/developers are
well-versed in the issues they might encounter when implementing their
wallet. This is especially true if they decide to opt into using some
lesser known sighash flags, such as noinput, that come with huge warning
signs (I forgot to mention that renaming noinput to noinput_dangerous is
also still on the table).

>>Because scripts signed with no_input signatures are only really exchanged
> and used for off-chain negotiations, very few should ever appear on chain.
> Those that do should represent non-cooperative situations that involve
> signing parties who know not to reuse or share scripts with these public
> keys again. No third party has any reason to spend value to a
> multisignature script they don't control, whether or not a no_input
> signature exists for it.
> Just because some one is your friend today, doesn't mean they aren't
> necessarily your adversary tomorrow. I don't think a signature being
> onchain really matters, as you have to give it to your counterparty
> regardless. How do you know your counterparty won't replay that
> SIGHASH_NOINPUT signature later? Offchain protocols shouldn't rely on
> "good-will" for their counter parties for security.
>>As I mentioned before, I don't think the lazy wallet designer advantage is
> enough to justify the downsides of chaperone signatures. One additional
> downside is the additional code complexity required to flag whether or not
> a chaperone output is included. By comparison, the code changes for
> creating a no_input digest that skips the prevout and prevscript parts of a
> tx is much less intrusive and easier to maintain.
>>I want to second this. The most expensive part of wallet design is
> engineering time. Writing code that uses a new sighash or a custom
> script with a OP_CODE is a very large barrier to use. How many wallets
> support multisig or RBF? How much BTC has been stolen over the entire
> history of Bitcoin because of sighash SIGHASH_NONE or SIGHASH_SINGLE
> vs ECDSA nonce reuse
> I actually think lazy wallet designer is a really compelling reason to fix
> footguns in the bitcoin protocol. Mt Gox is allegedly a product of lazy
> wallet design. Now we have non-malleable transactions in the form of segwit
> (yay!) that prevent this exploit. We can wish that the Mt Gox wallet
> designers were more aware of bitcoin protocol vulnerabilities, but at the
> end of the day the best thing to do was offering an alternative that
> circumvents the vulnerability all together.

It's worth pointing out that the transaction malleability issue and the
introduction of a new sighash flag are fundamentally different: a wallet
developer has to take active measures to guard against transaction
malleability since it was present even for the most minimal
implementation, whereas with sighash flags the developers have to
actively add support for it. Where transaction malleability you just had
to know that it might be an issue, with noinput you actively have to do
work yo expose yourself to it.

I'd argue that you have to have a very compelling reason to opt into
supporting noinput, and that's usually because you want to support a
more complex protocol such as an off-chain contract anyway, at which
point I'd hope you know about the tradeoffs of various sighash flags :-)

> Ethan made a great point about SIGHASH_NONE or SIGHASH_SINGLE -- which have
> virtually no use AFAIK -- vs the ECDSA nonce reuse which is used in nearly
> every transaction. The feature -- ECDSA in this case -- was managed to be
> done wrong by wallet developers causing fund loss. Unfortunately we can't
> protect against this type of bug in the protocol.
> If things aren't used -- such as SIGHASH_NONE or SIGHASH_SINGLE -- it
> doesn't matter if they are secure or insecure. I'm hopefully that offchain
> protocols will achieve wide adoption, and I would hate to see money lost
> because of this. Even though they aren't used, in my OP I do advocate for
> fixing these.

I do share the feeling that we'd better make a commonly used sighash flag
as usable and safe as possible, but it's rather unrealistic to have a
developer that is able to implement a complex off-chain system, but
fails to understand the importance of using the correct sighash flags in
their wallet. That being said, I think this 

Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Christian Decker
ZmnSCPxj  writes:
> To elucidate further ---
> Suppose rather than `SIGHASH_NOINPUT`, we created a new opcode,
> This new opcode ignores any `SIGHASH` flags, if present, on a
> signature, but instead hashes the current transaction without the
> input references, then checks that hash to the signature.
> This is equivalent to `SIGHASH_NOINPUT`.
> Yet as an opcode, it would be possible to embed in a Taproot script.
> For example, a Decker-Russell-Osuntokun would have an internal Taproot
> point be a 2-of-2, then have a script `OP_1
> OP_CHECKSIG_WITHOUT_INPUT`.  Unilateral closes would expose the hidden
> script, but cooperative closes would use the 2-of-2 directly.
> Of note, is that any special SCRIPT would already be supportable by Taproot.
> This includes SCRIPTs that may potentially lose funds for the user.
> Yet such SCRIPTs are already targetable by a Taproot address.
> If we are so concerned about `SIGHASH_NOINPUT` abuse, why are we not
> so concerned about Taproot abuse?

That would certainly be another possibility, which I have not explored
in detail so far. Due to the similarity between the various signature
checking op-codes it felt that it should be a sighash flag, and it
neatly slotted into the already existing flags. If we go for a separate
opcode we might end up reinventing the wheel, and to be honest I feared
that proposing a new opcode would get us into bikeshedding territory
(which I apparently failed to avoid with the sighash flag anyway...).

The advantage would be that with the sighash flag the spender is in
charge of specifying the flags, whereas with an opcode the output
dictates the signature verification modalities. The downside is the
increased design space.

What do others think? Would this be an acceptable opt-in mechanism that
addresses the main concerns?

Lightning-dev mailing list

Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Christian Decker
ZmnSCPxj  writes:
> I rather strongly oppose output tagging.
> The entire point of for example Taproot was to reduce the variability
> of how outputs look like, so that unspent Taproot outputs look exactly
> like other unspent Taproot outputs regardless of the SCRIPT (or lack
> of SCRIPT) used to protect the outputs.  That is the reason why we
> would prefer to not support P2SH-wrapped Taproot even though
> P2SH-wrapping was intended to cover all future uses of SegWit,
> including SegWit v1 that Taproot will eventually get.

That is a bit reductive if you ask me. Taproot brings a number of
improvements such as the reduction of on-chain footprint in the
collaborative spend case, the hiding of complex logic in that case, and
yes, the uniformity of UTXOs that you mentioned. I do agree that it'd be
nice to make everything look identical to the outside observer, but
saying that separating outputs into two coarse-grained domains voids all
of that is equivalent to throwing the baby out with the bath-water :-)

That being said, I should clarify that I would prefer not having to make
special accommodations on top of the raw sighash_noinput proposal, for
some perceived, but abstract danger that someone might shoot themselves
in the foot. I think we're all old enough not to need too much
handholding :-)

Output tagging is my second choice, since it minimizes the need for
people to get creative to work around other proposals, and minimizes the
on-chain footprint, and finally chaperone signatures are my least
preferred option due to its heavy-handed nature and the increased cost.

> Indeed, if it is output tagging that gets into Bitcoin base layer, I
> would strongly suggest the below for all Decker-Russell-Osuntokun
> implementations:
> * A standard MuSig 2-of-2 bip-schnorr SegWit v1 Funding Transaction Output, 
> confirmed onchain
> * A "translator transaction" spending the above and paying out to a SegWit 
> v16 output-tagged output, kept offchain.
> * Decker-Russell-Osuntokun update transaction, signed with `SIGHASH_NOINPUT` 
> spending the translator transaction output.
> * Decker-Russell-Osuntokun state transaction, signed with `SIGHASH_NOINPUT` 
> spending the update transaction output.

That is very much how I was planning to implement it anyway, using a
trigger transaction to separate timeout start and the actual
update/settlement pairs (cfr. eltoo paper Section 4.2). So for eltoo
there shouldn't be an issue here :-)

> The point regarding use of a commonly-known privkey to work around
> chaperone signatures is appropriate to the above, incidentally.  In
> short: this is a workaround, plain and simple, and one wonders the
> point of adding *either* chaperones *or* output tagging if we will, in
> practice, just work around them anyway.

Exactly, why introduce the extra burden of chaperone signatures or
output tagging if we're just going to sidestep it?

> Again, the *more* important point is that special blockchain
> constructions should only be used in the "bad" unilateral close case.
> In the cooperative case, we want to use simple plain
> bip-schnorr-signed outputs getting spent to further bip-schnor/Taproot
> SegWit v1 addresses, to increase the anonymity set of all uses of
> Decker-Russell-Osuntokun and other applications that might use
> `SIGHASH_NOINPUT` in some edge case (but which resolve down to simple
> bip-schnorr-signed n-of-n cases when the protocol is completed
> successfully by all participants).

While I do agree that we should keep outputs as unidentifiable as
possible, I am starting to question whether that is possible for
off-chain payment networks since we are gossiping about the existence of
channels and binding them to outpoints to prove their existence anyway.

Not the strongest argument, I know, but there's little point in talking
about ideal cases when we need to weaken them later again.

>> Open questions
>> ---
>> The questions that remain to be addressed are the following:
>> 1.  General agreement on the usefulness of noinput / anyprevoutanyscript /
>> anyprevout. While at the CoreDev meeting I think everybody agreed that
>> these proposals are useful, also beyond eltoo, not everybody could be
>> there. I'd therefore like to elicit some feedback from the wider 
>> community.
> I strongly agree that `NOINPUT` is useful, and I was not able to attend 
> CoreDev (at least, not with any human fleshbot already known to you --- I 
> checked).

Great, good to know that I'm not shouting into the void, and that I'm
not just that crazy guy trying to get his harebrained scheme to work :-)

>> 2.  Is there strong support or opposition to the chaperone signatures
>> introduced in anyprevout / anyprevoutanyscript? I think it'd be best to
>> formulate a concrete set of pros and contras, rather than talk about
>> abstract dangers or advantages.
> No opposition, we will just work around this by publishing a common
> known private key to use for all chaperone signatures, since 

[Lightning-dev] Continuing the discussion about noinput / anyprevout

2019-09-30 Thread Christian Decker
With the recently renewed interest in eltoo, a proof-of-concept implementation
[1], and the discussions regarding clean abstractions for off-chain protocols
[2,3], I thought it might be time to revisit the `sighash_noinput` proposal
(BIP-118 [4]), and AJ's `bip-anyprevout` proposal [5].

(sorry for the long e-mail. I wanted to give enough context and describe the
various tradeoffs so people don't have to stitch them together from memory. If
you're impatient there are a couple of open questions at the bottom)

Both proposals are ways to allow rebinding of transactions to new outputs, by
adding a sighash flag that excludes the previous output reference (the
outpoint being spent) when signing. This allows the transaction to be bound
to any output, without needing a new signature, as long as output script and
input script are compatible, e.g., the signature matches the public key
specified in the output.
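As a rough illustration of why rebinding works, here is a toy Python sketch; it is emphatically not the actual BIP-118 digest algorithm (which commits to many more fields such as version, locktime and amounts), it only shows that leaving the prevout reference out of the signed digest makes the signature valid for any compatible output:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class TxIn:
    prevout_txid: bytes  # 32-byte txid of the output being spent
    prevout_index: int

@dataclass
class Tx:
    inputs: list   # list of TxIn
    outputs: list  # list of (amount, script) pairs

def sighash_noinput(tx: Tx) -> bytes:
    """Toy digest: commits to the outputs and the input count, but NOT to
    which prevouts are spent, so the signature can rebind to any output
    whose script accepts the same public key."""
    h = hashlib.sha256()
    h.update(len(tx.inputs).to_bytes(4, "little"))
    for amount, script in tx.outputs:
        h.update(amount.to_bytes(8, "little"))
        h.update(script)
    return h.digest()

# Rebinding the input to a different prevout leaves the digest unchanged:
tx_a = Tx([TxIn(b"\x11" * 32, 0)], [(50_000, b"\x51")])
tx_b = Tx([TxIn(b"\x22" * 32, 1)], [(50_000, b"\x51")])
assert sighash_noinput(tx_a) == sighash_noinput(tx_b)
```

This is exactly the property eltoo needs: a later update transaction can be re-attached to whichever earlier state ends up on-chain without re-signing.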

BIP-118 is limited to explaining the details of signature verification, and
omits anything related to deployment and dependency on other proposals. This
was done in order not to depend on bip-taproot which is also in draft-phase
currently, and to allow deployment alongside the next version of segwit
script. `bip-anyprevout` builds on top of BIP-118, adding integration with
`bip-taproot` and chaperone signatures, limiting the use of the sighash flag
to script path spends, and adding a new pubkey serialization which uses the
first byte to signal opt-in.

I'd like to stress that both proposals are complementary and not competing,
which is something that I've heard a couple of times.

There remain a couple of unclear points which I hope we can address in the
coming days, to get this thing moving again, and hopefully get a new tool in
our toolbox soon(ish).

In the following I will quote a couple of things that were discussed during
the CoreDev meeting earlier this year, but not everybody could join, and it is
important that we engage the wider community, to get a better picture, and I
think not everybody is up-to-date about the current state.

## Dangers of `sighash_noinput`

An argument I have heard against noinput is that it is slightly less complex
and compute intensive than `sighash_all` signatures, which may encourage
wallet creators to only implement the noinput variant and use it
indiscriminately. This is certainly a good argument, and indeed we have seen
at least one developer proposing to use noinput for all transactions to
discourage address reuse.

This was also mentioned at CoreDev [6]:

> When [...] said he wanted to write a wallet that only used SIGHASH\_NOINPUT,
> that was pause for concern. Some people might want to use SIGHASH\_NOINPUT as 
> a
> way to cheapen or reduce the complexity of making a wallet
> implementation. SIGHASH\_NOINPUT is from a purely procedural point of view
> easier than doing a SIGHASH\_ALL, that's all I'm saying. So you're hashing
> less. It's way faster. That concern has been brought to my attention and it's
> something I can see. Do we want to avoid people being stupid and shooting
> themselves and their customers in the foot? Or do we treat this as a special
> case where you mark we're aware of how it should be used and we just try to
> get that awareness out?

Another issue that is sometimes brought up is that an external user may
attempt to send funds to a script that was really part of a higher-level
protocol. This leads to those funds becoming inaccessible unless you gather
all the participants and sign off on those funds. I don't believe this is
anything new, and if users really want to shoot themselves in the foot and
send funds to random addresses they fish out of a blockexplorer there's little
we can do. What we could do is make the scripts used internally in our
protocols unaddressable (see output tagging below), removing this issue.

## Chaperone signatures

Chaperone signatures are signatures that ensure that there is no third-party
malleability of transactions. The idea is to have an additional signature,
that doesn't use noinput, or any of its variants, and therefore needs to be
authored by one of the pubkeys in the output script, i.e., one or more of the
participants of the contract the transaction belongs to. Concretely in eltoo
we'd be using a shared key known to all participants in the eltoo instance, so
any participant can sign an update to rebind it to the desired output.
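A minimal sketch of that verification rule follows, with HMACs as stand-ins for real ECDSA/Schnorr signatures (all names here are hypothetical). The point is that the noinput signature stays valid under rebinding while the chaperone signature pins the transaction to a concrete prevout:

```python
import hashlib
import hmac

def toy_sign(key: bytes, digest: bytes) -> bytes:
    # Stand-in for a real signature, for illustration only.
    return hmac.new(key, digest, hashlib.sha256).digest()

def digest_noinput(outputs: bytes) -> bytes:
    return hashlib.sha256(outputs).digest()           # prevout excluded

def digest_all(prevout: bytes, outputs: bytes) -> bytes:
    return hashlib.sha256(prevout + outputs).digest() # prevout included

def verify_update(noinput_key, chaperone_key, prevout, outputs, sig_n, sig_c):
    """Both signatures must check out: the noinput one can be reused when
    rebinding, the chaperone one commits to a specific prevout."""
    ok_n = hmac.compare_digest(sig_n, toy_sign(noinput_key, digest_noinput(outputs)))
    ok_c = hmac.compare_digest(sig_c, toy_sign(chaperone_key, digest_all(prevout, outputs)))
    return ok_n and ok_c

nk, ck = b"n" * 32, b"c" * 32
outs, prev_a, prev_b = b"outputs", b"prevout-a", b"prevout-b"
sig_n = toy_sign(nk, digest_noinput(outs))      # rebindable
sig_c = toy_sign(ck, digest_all(prev_a, outs))  # pinned to prev_a
assert verify_update(nk, ck, prev_a, outs, sig_n, sig_c)
# Rebinding to prev_b reuses sig_n but requires a fresh chaperone signature:
assert not verify_update(nk, ck, prev_b, outs, sig_n, sig_c)
```

The shared-chaperone-key workaround mentioned above corresponds to handing `ck` to everyone, which is precisely what voids the protection.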

Chaperone signatures have a number of downsides however:

-   Additional size: both the public key and the signature actually need to be
stored along with the real noinput signature, resulting in transfer,
computational and storage overhead. We can't reuse the same pubkey from the
noinput signature since that'd require access to the matching privkey which
is what we want to get rid of using noinput in the first place.
-   Protocols can still simply use a globally known privkey, voiding the
benefit of chaperone signatures, since third-parties can sign again. I
argue that third-party 

Re: [Lightning-dev] Revocations and Watchtowers

2019-09-19 Thread Christian Decker
I don't think this paints an accurate picture, both when it comes to
watchtowers for LN-penalty as well as for eltoo:

Technically the storage requirement for the shachain is also O(log(n))
and not O(1) due to the fact that we effectively have a cut through the
height of the tree, along which we have to keep the inner nodes until we
get the parent node which then allows us to infer the children. Given
that we use a constant size for that tree, it is not really relevant
but I thought it might be worth pointing this out. The shachain is
currently limited to 2^48 updates, which is way beyond what we can hope
to achieve on a single channel, so I agree with you that this limit is
not important at all currently.
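The generation side of the shachain can be sketched along the lines of BOLT 3's per-commitment secret derivation (illustrative Python; bit/byte ordering is simplified relative to the spec). The receiver only needs to store elements whose index is a prefix of later indices, which is where the O(log n) storage bound comes from:

```python
import hashlib

def per_commitment_secret(seed: bytes, index: int, bits: int = 48) -> bytes:
    """For each set bit of the index (high to low), flip that bit in the
    running value and hash. An element can derive every descendant whose
    index extends its own, so a receiver keeps at most one element per
    bit position: O(log n) storage for n secrets."""
    p = bytearray(seed)
    for b in range(bits - 1, -1, -1):
        if (index >> b) & 1:
            p[b // 8] ^= 1 << (b % 8)
            p = bytearray(hashlib.sha256(p).digest())
    return bytes(p)

s0 = per_commitment_secret(b"\x00" * 32, 0)
s1 = per_commitment_secret(b"\x00" * 32, 1)
assert s0 != s1
```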

Even with shachain the storage requirements for the nodes (not the
watchtowers) are far from constant either: any old state includes
anything that we built on top of it (HTLCs), so we need to keep
information around to react to those as well (preimages that cannot be
subsumed in the shachain since the HTLC preimage is chosen by many
remote senders).

When it comes to eltoo, just reusing the same watchtower protocol that
we designed for LN-penalty, with unidentified blobs, randomly inserted
by anyone, and encrypted with the commitment transaction is likely too
simplistic, and results in the O(n) requirement you mentioned. My
proposal would be to establish an authenticated session with a
watchtower, e.g., by signing all encrypted updates using a session key,
and the watchtower only replacing updates that match the session. An
attacker could not replace my updates I stashed with the watchtower
since it cannot hijack my session. This means that the watchtower can be
certain that it can discard old states, but still have the correct
reaction stashed when it needs it.
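A toy sketch of that session idea follows (all names hypothetical). For brevity it uses a shared MAC key registered at session setup, where a real protocol would use asymmetric signatures so the tower never holds anything that lets it forge updates:

```python
import hashlib
import hmac

class Watchtower:
    """Keeps only the latest authenticated blob per session. An attacker
    without the session key cannot replace our stashed reaction, so the
    tower can safely discard old states: O(1) storage per session."""

    def __init__(self):
        self.sessions = {}  # session_id -> (latest_update_no, blob)

    def store(self, session_id, session_key, update_no, blob, mac):
        # session_key would normally be looked up from session registration
        expected = hmac.new(session_key,
                            update_no.to_bytes(8, "big") + blob,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False  # not authenticated by the session owner: reject
        prev = self.sessions.get(session_id)
        if prev is not None and prev[0] >= update_no:
            return False  # stale update: keep only the newest reaction
        self.sessions[session_id] = (update_no, blob)
        return True

tower = Watchtower()
key = b"k" * 32
blob = b"encrypted reaction"
mac = hmac.new(key, (1).to_bytes(8, "big") + blob, hashlib.sha256).digest()
assert tower.store("session-1", key, 1, blob, mac)
assert not tower.store("session-1", key, 1, blob, b"\x00" * 32)  # forged MAC
```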

Notice that this is already what the lnd watchtower protocol pretty much
does, and it is likely that we'd like a session anyway in order to pay
the watchtower for its service. I think it's unrealistic to expect
altruistic watchtowers storing encrypted blobs for some random people
out there in eternity, without getting compensation for it. To hide the
activity and timing of our channels we could simply open multiple
sessions with the watchtower, or spread them across multiple watchtowers.

I'd even go further and just add the channel outpoint (or should I call
it "contract outpoint"?) to the update in cleartext so that the
watchtower can prune states for closed channels. We can still spread the
states across multiple watchtowers to hide update rate and timing. So
this effectively gets us to a O(1) storage space for watchtowers in


ZmnSCPxj via Lightning-dev 

> Good morning list,
> I was reading through the transcript of recent talk: 
> In section "Revocations and SIGHASH_NOINPUT":
>> There's another issue in lightning, which is the revocation transactions. 
>> There are basically, every time you do a state update, there's an extra 
>> transactions that both parties need to hold forever. If you're doing 
>> watchtowers, then the watchtowers need to keep all this evergrowing state.
>> ...
>> using SIGHASH_NOINPUT ... You have state to keep around, but it's just one 
>> transaction and it scales with O(1) instead of O(n).
> I thought I would just like to point out a few things:
> * Rusty created shachain so that we can store the O(n) transactions in O(1) 
> space (though with large constant) and O(log n) time to extract in case of 
> breach (and breach is expected to be a rare operation).
>   (Rusty can correct me if I am incorrect in the performance of this shachain 
> construct).
>   * For the most part we can ignore (I think) the storage of revocation keys 
> at this point in LN development.
>   * There is a limit to the number of updates possible, but my understanding 
> is that this is so large as to be impractical for users to reach even with 
> long-lifetime channels.
> * Admittedly, watchtowers with Poon-Dryja revocation mechanism need to store 
> O(n) transactions.
>   This is because shachain stores keys, and we do not want watchtowers to 
> possess revocation keys, only pre-built signatures to revocation transactions 
> that pay a partial fee to the watchtower (else the watchtower could sign a 
> revocation transaction paying only to itself without giving the client any 
> money at all).
>   But!
>   * Watchtowers, even with Decker-Russell-Osuntokun, still need to store 
> *all* O(n) transactions it receives for a channel.
> This is needed to protect against "poisoned blob" attacks, where an 
> attacker creates an encrypted blob that is just random data and feeds it into 
> the watchtower.
> See:
>   * 
>   * 

Re: [Lightning-dev] [bitcoin-dev] Reconciling the off-chain and on-chain models with eltoo

2019-09-18 Thread Christian Decker
ZmnSCPxj  writes:
>> cooperative close:
>> * when all parties mutually agree to close the channel
>> * close the channel with a layer one transaction which finalizes the outputs 
>> from the most recent channel output state
>> * should be optimized for privacy and low on-chain fees
> Of note is that a close of an update mechanism does not require the
> close of any hosted update mechanisms, or more prosaically, "close of
> channel factory does not require close of hosted channels".  This is
> true for both unilateral and cooperative closes.
> Of course, the most likely reason you want to unilaterally close an
> outer mechanism is if you have some contract in some deeply-nested
> mechanism that will absolute-locktime expire "soon", in which case you
> have to close everything that hosts it.  But for example if a channel
> factory has channels A B C and only A has an HTLC that will expire
> soon, while the factory and A have to close, B and C can continue
> operation, even almost as if nothing happened to A.

Indeed this is something that I think we already mentioned back in the
duplex micropayment channel days, though it was a bit hidden and only
mentioned HTLCs (though the principle carries over for other structures
built on the raw update mechanism):

> The process simply involves one party creating the teardown
> transaction, both parties signing it and committing it to the
> blockchain. HTLC outputs which have not been removed by agreement can
> be copied over to the summary transaction such that the same timelocks
> and resolution rules apply.

Notice that in the case of eltoo the settlement transaction is already
the same as the teardown transaction in DMC.

>> membership change (ZmnSCPxj ritual):
>> * when channel parties want to leave or add new members to the channel
>> * close and reopen a new channel via something like a channel splicing 
>> transaction to the layer one blockchain
>> * should be optimized for privacy and low on-chain fees paid for by parties 
>> entering and leaving the channel
> Assuming you mean that any owned funds will eventually have to be
> claimed onchain, I suppose this is doable as splice-out.
> But note that currently we have some issues with splice-in.
> As far as I can tell (perhaps Lisa Neigut can correct me, I believe
> she is working on this), splice-in has the below tradeoffs:
> 1.  Option 1: splice-in is async (other updates can continue after all 
> participants have sent the needed signatures for the splice-in).
> Drawback is that spliced-in funds need to be placed in a temporary
> n-of-n, meaning at least one additional tx.

Indeed this is the first proposal I had back at the Milan spec meeting,
and you are right that it requires stashing the funds in a temporary
co-owned output to make sure the transition once we splice in is
atomic. Batching could help here, if we have 3 participants joining they
can coordinate to set the funds aside together and then splice-in at the
same time. The downside is the added on-chain transaction, and the fact
that the funds are not operational until they reach the required depth
(I don't think we can avoid this with the current security guarantees
provided by Bitcoin). Notice that there is still some uncertainty
regarding the confirmation of the splice-in even though the funds were
stashed ahead of time, and we may end up in a state where we assumed
that the splice-in will succeed, but the fees we attached turn out to be
too low. In this case we built a sandcastle that collapses due to our
foundation being washed away, and we'd have to go back and agree on
re-splicing with corrected fees (which a malicious participant might
sabotage) or hope the splice eventually confirms.

> 2.  Option 2: splice-in is efficient (only the splice-in tx appears onchain).
> Drawback is that subsequent updates can only occur after the splice-in tx 
> is deeply confirmed.
> * This can be mitigated somewhat by maintaining a pre-splice-in
> and post-splice-in mechanism, until the splice-in tx is deeply
> confirmed, after which the pre-splice-in version is discarded.
>   Updates need to be done on *both* mechanisms until then, and any
> introduced money is "unuseable" anyway until the splice-in tx
> confirms deeply since it would not exist in the pre-splice-in
> mechanism yet.

This is the more complex variant we discussed during the last
face-to-face in Australia, and it seemed to me that people were mostly
in favor of doing it this way. It adds complexity since we maintain
multiple variants (making it almost un-implementable in LN-penalty),
however the reduced footprint, and the uncertainty regarding
confirmations in the first solution are strong arguments in favor of
this option.

> But perhaps a more interesting thing (and more in keeping with my
> sentiment "a future where most people do not typically have
> single-signer ownership of coins onchain") would be to transfer funds
> from one 

[Lightning-dev] Reconciling the off-chain and on-chain models with eltoo

2019-09-06 Thread Christian Decker
With the recently published proof-of-concept of eltoo on signet by
Richard, I thought it might be a good time to share some thoughts on how I
think we can build this system. I think there are a few properties of
eltoo that allow us to build a nicely layered protocol stack, which
improves flexibility and simplifies the reasoning about their relative

Since I don't like huge e-mails myself and I'm about to write one,
here's a quick TL;DR:

> Using the clean separation of protocol layers provided by eltoo we can
> reconcile many on-chain and off-chain concepts, and simplify the
> reasoning to build more complex functionality beyond simple
> HTLCs. Bitcoin transactions are a natural fit to represent proposed
> off-chain state-changes while they are being negotiated.

### Clean separation of protocol layers

One of the big advantages of eltoo over other off-chain update mechanisms
is that it provides strong guarantees regarding the state that will
eventually end up confirmed on-chain. If parties in an eltoo off-chain
contract agree on an update, we can be certain (within eltoo's security
assumptions) that this is the state that will eventually confirm
on-chain, if no newer states are agreed.

In particular it means that we are guaranteed no earlier state can leak
onto the chain, keeping anything we build on top of the update layer
unencumbered since it doesn't have to deal with this case.

This is in stark contrast to the penalty update mechanism, where
old/revoked states can leak on-chain, resulting in anything built on top
of the penalty mechanism having to deal with that eventuality. For
example if we look at HTLCs as specified [1] we see that they need an
additional revocation path for the case in which the commitment transaction
that created this HTLC output is confirmed:

# To remote node with revocation key
# To local node via HTLC-success transaction.
# To remote node after timeout.

The update mechanism bleeding into the other layers is rather cumbersome
if you ask me, and complicates the reasoning about security. Having to
thread the penalty through outputs created by the off-chain contract may
also not work if we deal with more than 2 parties, since penalties
always steal all the funds, regardless of whether the output belonged to
the cheater or not (see asymmetry vs symmetry argument from the paper

With the clean separation we get from eltoo we can concentrate on
building the output scripts we'd like to have without having to thread
penalties through them. This reduces the complexity and our on-chain

The update layer now exposes only two very simple operations:
`add_output` and `remove_output` (this should sound very familiar :-p).

### Ownership and atomic update model

Now that we have a solid update layer, which ensures that agreed upon
states will eventually be reflected on-chain, we can turn our attention
to the next layer up: the negotiation layer. Each output in our
agreed-upon state needs to be assigned one or more owners. The owners
are the participants that need to sign off on removal of an output and
the creation of new outputs which redistribute the funds contained in
the removed outputs to newly created outputs.

In addition we need to ensure that multiple `remove_output` and
`add_output` are guaranteed to be applied atomically. By creating a
datastructure that lists a number of operations that are to either be
applied to the current state or discarded, we can have arbitrarily
complex changes of ownership, and the newly created outputs can have
arbitrary owners and scripts.

If all of this sounds familiar that's because this is exactly the UTXO
model and the transaction structure we have in Bitcoin. We
collaboratively manage funds bound to some outputs (UTXO) and can change
their ownership and allocation over time (transactions).

This means that a subset of the participants in an off-chain contract
can negotiate among themselves how to redistribute funds, join and split
them in an arbitrary fashion, without the rest of the contract being
involved. The end result is a valid Bitcoin transaction that spends some
outputs of the current state, and is signed by the owners. The
transaction can then be presented to the entire group, and applied to
the state. Applying the transaction flattens multiple transactions built
on top of the current state into a new state (similar to transaction
cut-through in mimblewimble).
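The cut-through step above can be sketched as a pure function over the output set (an illustrative toy, assuming outputs are identified by `txid:index` strings): all spent outputs are removed and all created outputs are added atomically, or nothing is applied at all.

```python
# Illustrative sketch (not any real implementation) of applying a negotiated
# transaction to the off-chain state: either all spent outputs are removed
# and all newly created outputs added, or the state stays untouched.
def apply_transaction(state_outputs: set, spent: set, created: set) -> set:
    if not spent <= state_outputs:
        raise ValueError("transaction spends outputs not in the current state")
    # Atomic: build the new state in one step instead of mutating in place.
    return (state_outputs - spent) | created

state = {"A:0", "A:1"}           # outputs identified by txid:index strings
new_state = apply_transaction(state, spent={"A:0"}, created={"B:0", "B:1"})
assert new_state == {"A:1", "B:0", "B:1"}
```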

Using transactions as a means to represent off-chain negotiations, and
then applying them to the off-chain state via cut-through has a number
of advantages over similar schemes:

- Even if we failed to update the off-chain state, the transactions
  building on top of it are still valid.

Re: [Lightning-dev] Using Per-Update Credential to enable Eltoo-Penalty

2019-07-14 Thread Christian Decker
ZmnSCPxj via Lightning-dev 

> Good morning Antoine,
> Thank you for your proposal.
>> Eltoo has been criticized to lower the cost for a malicious party to
>> test your monitoring of the chain. If we're able to reintroduce some
>> form of punishment without breaking transaction symmetry that would be great.
> The primary advantage of Decker-Russell-Osuntokun is that it
> eliminates "toxic waste".
> By this we mean, older version of your channel database are "toxic" in
> that you, ***or someone who wants to attack you***, can use it
> (accidentally in your case, deliberately in the attacker case), and
> then you will lose all funds in the channel.

I'm pretty sure at this point that the toxic-waste problem is inherent
to punishment schemes, and anything we build on top of it would
reintroduce asymmetry, undoing a lot of the benefits that we gained with
eltoo. Then again, I personally don't think that punishments are such a
great idea in the first place (having been inadvertently punished myself
due to botched backups and similar things).

> Note that access to your channel database, without necessarily
> accessing your node private keys, is often easier.  For example,
> C-Lightning stores channel data into an SQLITE database and exposes
> every transaction it makes to a `db_hook` that plugins can use to
> replicate the database elsewhere.  If you were to use an
> insufficiently secured plugin to replicate your database, an attacker
> might be able to access your channel data, replicate your database,
> and use an older version to frame you for theft and make you lose all
> your channel funds.

Just a minor correction here: your own commitment transactions are not
signed until we want to release them. Therefore having access to
your DB doesn't give an attacker the ability to frame you with an
old version, since that'd still require access to the keys to add your
own signature. Even a simple signing component that keeps a high-water
mark for the latest state and refuses to sign an older one would be
sufficient to guard against involuntary cheating.
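A minimal sketch of such a high-water-mark signer (names and the signature stand-in are illustrative, not a real signing component):

```python
# Minimal sketch of the high-water-mark signer mentioned above: it tracks the
# highest state number it has ever signed and refuses anything older, which
# prevents accidentally signing (and thus publishing) a revoked state.
class HighWaterMarkSigner:
    def __init__(self, privkey: bytes):
        self.privkey = privkey
        self.highest_signed = -1

    def sign(self, state_number: int, tx_digest: bytes) -> bytes:
        if state_number < self.highest_signed:
            raise ValueError("refusing to sign state older than high-water mark")
        self.highest_signed = max(self.highest_signed, state_number)
        return self._raw_sign(tx_digest)

    def _raw_sign(self, digest: bytes) -> bytes:
        return b"sig:" + digest  # stand-in; a real signer would use the key

signer = HighWaterMarkSigner(privkey=b"\x01" * 32)
signer.sign(5, b"digest5")
try:
    signer.sign(3, b"digest3")   # older state: must be rejected
    assert False, "should have raised"
except ValueError:
    pass
```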

Nevertheless, there are quite a few damaging things an attacker can do
if they get hold of your DB, just not this one :-)

> Thus, Decker-Russell-Osuntokun removes the punitive consideration so
> that you being framed for theft does not lose all your funds, it
> merely closes your channels.

Which is also not free: you are still paying on-chain fees for your
failed attempt to enforce an older state, and you still don't get the
desired effect, since the counterparty just overrides your attempt,
without returning your fees.

Lightning-dev mailing list

Re: [Lightning-dev] [PROPOSAL]: FAST - Forked Away Simultaneous Transactions

2019-06-28 Thread Christian Decker
Hi Ugam,

I just wanted to quickly note that the current proposal [1] (implemented
here [2]) is to give up on the fixed 65 byte frames altogether and allow
variable payloads (reclaiming what previously was padding in the hop
payloads). Given the low diameter of the network, this gives us a lot of
freedom to put additional payloads in the onion :-)



On Tue, Jun 25, 2019 at 1:07 PM Ugam Kamat  wrote:

> Hey guys,
> I’m kind of new to this mailing list, so let me know if this has been
> proposed previously. While reading Olaoluwa Osuntokun’s Spontaneous
> Payment proposal, I came up with the idea of simultaneous payments to
> multiple parties using the same partial route. In other words, say Alice,
> Bob, Charlie, Dave and Eric have channel opened with one another, and say
> Dave also has channel with Frank who has channel with Grace. Now, Alice is
> at a restaurant and wants to pay the bill amount to Eric (the restaurant
> owner) and a tip to Grace (who was her waiter). In the current scenario,
> Alice would have to send two payments A->B->C->D->E and A->B->C->D->F->G.
> However, if we repurpose the onion blob
>  in the same way
> as is needed for Spontaneous Payments, we can create a scenario where there
> is no path duplication. Dave would split the payments, one to Eric and
> other going to Grace through Frank. The preimage PM used in commitments
> A->B, B->C and C->D will be a function of pre-images P1 of D->E and P2 of
> D->F and F->G such that PM = f(P1, P2).
> *Proposal can be implemented by repurposing the onion in similar fashion
> as Spontaneous Payments with slight modification*
> This proposal works in similar fashion to Spontaneous Payment proposal, by
> packing in additional data in the unused hops. For B and C the onion blob
> will be identical to other lightning payments. When D parses the onion, the
> 4 MSB of the realm will tell D how much data can be extracted. This data
> will encode the hashes of the pre-images that would be used for commitment
> transaction towards Eric and other towards Frank.  For simplicity and
> privacy, I propose using 2 onion blobs for the data. So the payload can be
> 64 + 33 bytes = 97 bytes. The first byte would indicate how many hashes are
> packed, so we have 96 bytes for the payload, meaning we can pack a maximum
> of 3 hashes for 3 route payments from D. Now D will split the onion (18
> hops as it has used the first two for bifurcation data) into number of
> routes. In the above case it will be 9 hops each. Now these two onions are
> similar to other lightning payments. The first hop tells D the
> short-channel id, amount to forward, CLTV and the padding. Since, the
> preimage is 32 bytes, we can pack that in one single hop that is received
> by the final party. This leaves the remaining 7 hops can be used for
> routing. Below figure depicts the onion split in terms of how A will create
> it. D will add the filler to make each onion have 20 hops. Onion data is
> encoded in the same order in which the payment hashes are packed in the
> bifurcation data for D.
> *Calculating the preimages*
> Eric and Grace will parse the onion and use the pre-images for settlement.
> Let P1 represent the pre-images of D->E and P2 of D->F and F->G. When the
> pre-images arrive at node D, it will combine them such that PM = f(P1, P2).
> The easiest way for both A and D to calculate that will be PM = SHA256(P1
> || P2 || ss_d). Where || represents concatenation and ss_d is the shared
> secret created using the ephemeral public key of sender (the one generated
> by Alice) and private key of Dave. The need for using shared secret is to
> prevent the vulnerability where one channel operator who has nodes across
> both branches can use them to calculate the PM. Using shared secret also
> ensures that it is in fact D that has parsed them together.
> *Advantages of this proposal:*
>- Commitment transactions between A & B, B & C, and C & D now carry
>only one HTLC instead of two
>   - This means lower fees in case of on-chain settlement
>   - Lower routing fees for Alice as Bob and Charlie would not get to
>   charge for two routings
>   - Since 483 is the max limit of the htlcs nodes can accepts,
>   preventing duplication will allow more number of htlcs in flight.
>- If each payment of Eric and Grace is below the htlc min B or C
>accepts, but together if it is higher, this route is now usable
> *Some thoughts on if this proposal can be misused?*
>- The probability of transaction failures increases as now the
>transaction is dependent on 2/3 branches
> *Deployment*
> Not all nodes need to support this feature. For example, B, C, E, F,  and
> G does not even know that the payment arrived through 
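The preimage combination `PM = SHA256(P1 || P2 || ss_d)` from the quoted proposal can be sketched as follows (a hypothetical illustration of the construction, not code from the proposal):

```python
# Sketch of the combined-preimage construction quoted above:
# PM = SHA256(P1 || P2 || ss_d), where ss_d is the sender/Dave shared secret.
import hashlib

def combine_preimages(p1: bytes, p2: bytes, shared_secret: bytes) -> bytes:
    return hashlib.sha256(p1 + p2 + shared_secret).digest()

p1, p2, ss_d = b"\x01" * 32, b"\x02" * 32, b"\x03" * 32
pm = combine_preimages(p1, p2, ss_d)
assert len(pm) == 32
# Both A (sender) and D can compute PM, but a node seeing only P1 or P2
# cannot, because it lacks the shared secret ss_d.
```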

Re: [Lightning-dev] Eltoo, anyprevout and chaperone signatures

2019-05-15 Thread Christian Decker
Hi Bastien,

thanks for investigating.

> I have been digging into Anthony Towns' anyprevout BIP
> proposal
> to verify that it has everything we need for Eltoo
> .
> The separation between anyprevout and anyprevoutanyscript is very handy
> (compared to the previous noinput proposal).
> Unless I'm missing something, it would simplify the funding tx (to a simple
> multisig without cltv/csv) and remove the need for the trigger tx.

I think it makes sense for us to consider both variants, one committing
to the script and the other not committing to the script, but I think it
applies rather to the `update_tx` <-> `settlement_tx` link and less to
the `funding_tx` <-> `update_tx` link and `update_tx` <-> `update_tx`
link. The reason is that the `settlement_tx` needs to be limited to be
bindable only to the matching `update_tx` (`anyprevout`), while
`update_tx` need to be bindable to the `funding_tx` as well as any prior
`update_tx` which differ in the script by at least the state number
(hence `anyprevoutanyscript`).

Like AJ pointed out in another thread, the use of an explicit trigger
transaction is not really needed since any `update_tx` can act as a
trigger transaction (i.e., start the relative timeouts to tick). This
was an oversight of mine, which may have contributed more confusion than
necessary :-)

The `funding_tx` itself doesn't need any form of timeout, in fact
collaborative spending/closing without a timeout should always be
possible. The `settlement_tx`s can have a BIP68-style relative timelock,
which also saves us a few bytes.

> The more tricky part to integrate is the chaperone signature.
> If I understand it correctly (which I'm not guaranteeing), we would need to
> modify the update transactions to something like:
> 10 OP_CSV
> 1 A(s,i) B(s,i) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x02 or 0x03
> 2 A(s,i) B(s,i) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x00 or 0x01
> 1 A(u) B(u) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x02 or 0x03
> 2 A(u) B(u) 2 OP_CHECKMULTISIGVERIFY  <- public keys' first byte in this line is 0x00 or 0x01

We could collapse those 1-of-2 multisigs into a single-sig if we just
collaboratively create a shared private key that is specific to the
instance of the protocol upon setup. That minimizes the extra space
required.

Something that I noticed talking to Jonas Nick is that we might have
some interaction between taproot and noinput (or any of its aliases
:D). Specifically we can't make use of the collaborative path where
we override an `update_tx` with a newer one in taproot as far as I can
see, since the `update_tx` needs to be signed with noinput (for
rebindability) but there is no way for us to specify the chaperone key
since we're not revealing the committed script.

> (I omitted the tapscript changes, ie moving to OP_CHECKSIGADD, to
> highlight only the chaperone changes)
> When updating the channel, Alice and Bob would exchange their
> anyprevoutanyscript signatures (for the 2-of-2 multisig).
> The chaperone signature can be provided by either Alice or Bob at
> transaction broadcast time (so that it commits to a specific input
> transaction).
> It seems to me that using the same key for both signatures (the chaperone
> one and the anyprevoutanyscript one) is safe here, but if someone knows
> better I'm interested.
> If that's unsafe, we simply need to introduce another key-pair (chaperone
> key).
> Is that how you guys understand it too? Do you have other ideas on how to
> comply with the need for a chaperone signature?
> Note that as Anthony said himself, the BIP isn't final and we don't know
> yet if chaperone signatures will eventually be needed, but I think it's
> useful to make sure that Eltoo could support it.

I quite like the chaperone idea, however it doesn't really play nice
with taproot collaborative spends that require anyprevout /
anyprevoutanyscript / noinput, which would make our transactions stand
out quite a bit. Then again this is only the case for the unhappy,
unilateral-close path of the protocol, which (hopefully) should happen
rarely.


Re: [Lightning-dev] Outsourcing route computation with trampoline payments

2019-04-04 Thread Christian Decker
Hi ZmnSCPzj,

I think we should not try to recover from a node not finding the next
hop in the trampoline, and rather expect trampolines to have reasonable
uptime (required anyway) and have an up to date routing table (that's
what we're paying them for after all).

So I'd rather propose reusing the existing onion construction as is and
expect the trampolines to fail a payment if they can't find the next
hop.

Let's take the following route for example (upper case letters represent
trampoline nodes):

a -> b -> c -> D -> e -> f -> G -> h -> i -> j

With `a` being the sender, and `j` being the recipient. `D` and `G` are
trampolines. The sender `a` selects trampolines `D` and `G` at random
from their partial (possibly outdated) routing table. It creates the
inner onion using those two trampolines. It then computes a route to `D`
(`a -> b -> c -> D`). The `hop_payload` for `D` is a TLV payload that
has a single key `t` (assuming `t` is assigned in the TLV spec) that
contains the inner onion. It then initiates the payment using this
nested onion (`a -> b -> c -> D` + trampoline payload for `D`).

Upon receiving the onion `D` decrypts the outer onion to find the TLV
payload containing the `t` entry, which indicates that it should act as
a trampoline. It then decodes the inner trampoline onion and finds the
`node_id` of `G`. `D` then computes the outer onion to the next
trampoline `D -> e -> f -> G`, and adds the trampoline payload for `G`
(the inner trampoline onion we just decoded).

Upon receiving the onion `G` processes the onion like normal, finding
again an inner trampoline onion and decrypting it. Since `j` did not
indicate that it understands the trampoline protocol, `G` is instructed
to downgrade the onion into a normal non-trampoline onion (don't include
a trampoline, rather include the raw payload for `j`). It then computes
the route to `j`, and it creates a normal outer base routing onion `G ->
h -> i -> j`, which completes the protocol.

Like mentioned above the entire job of trampolines is to provide base
routing capability, and we should not make special provisions for myopic
trampoline nodes, since routing is their entire reason for existence :-)
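The forwarding behavior walked through above can be sketched roughly as follows. Everything here is hypothetical pseudo-structure: real Sphinx onion parsing and construction are elided, and `peel_inner_onion`, the `t` key, and the list-based onions are stand-ins for illustration only.

```python
# Illustrative sketch of trampoline forwarding: if the outer payload carries a
# 't' entry, peel one layer off the inner onion, find a route to the next
# trampoline, and rebuild a fresh outer onion carrying the peeled inner onion.
def peel_inner_onion(inner):
    # Stand-in for Sphinx decryption: first element is the next trampoline,
    # the rest is the remaining inner onion (None once fully unwrapped).
    head, *rest = inner
    return head, rest or None

def process_trampoline_onion(outer_payload: dict, route_finder):
    inner = outer_payload.get("t")          # 't' TLV entry = inner onion
    if inner is None:
        return None                          # not a trampoline payload
    next_node, next_inner = peel_inner_onion(inner)
    route = route_finder(next_node)          # trampoline must know a route
    if route is None:
        raise RuntimeError("fail the payment: no route to next trampoline")
    return {"route": route,
            "payload": {"t": next_inner} if next_inner else {}}

# D receives an inner onion naming G then j, and routes D -> e -> f -> G.
result = process_trampoline_onion({"t": ["G", "j"]},
                                  route_finder=lambda n: ["e", "f", n])
assert result["route"] == ["e", "f", "G"]
assert result["payload"] == {"t": ["j"]}
```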


>> Could this be implemented by replacing only the front of the 
>> trampoline-level onion?
>> (presumably with some adjustment of how the HMAC is computed for the new 
>> trampoline layer)
> I am trying to design a trampoline-level onion that would allow replacement 
> of the first hop of the onion.
> Below is what I came up with.
> As I am neither a cryptographer nor a mathematician, I am unable to consider, 
> whether this may have some problem in security.
> Now the "normal" onion, the first hop is encrypted to the recipient.
> I propose that for the "inner" trampoline-level onion, the first hop be sent 
> "in the clear".
> I think this is still secure, as the "inner" trampoline-level onion will 
> still be wrapped by the outer link-level onion, which would still encrypt it.
> When a node receives a trampoline-level onion, it checks if it is the first 
> hop.
> If it is, then it decrypts the rest of the onion and tries to route to the 
> next trampoline-level node.
> If not, then it is being delegated to, to find the trampoline.
> If the node cannot find the front of the trampoline-level onion, then it can 
> route it to another node that it suspects is more likely to know the 
> destination (such as the mechanisms in discussion in the "Routemap Scaling" 
> thread).
> Let me provide a concrete example.
> Payer Z creates a trampoline-level onion C->D->E:
> C | Z | encrypt(Z * C, D | encrypt(Z * D, E))
> Then Z routes to link-level onion A->B->C, with the payload to C being the 
> above trampoline-level onion:
> encrypt(Z * A, "link level" | B | encrypt(Z * B, "link level" | C | encrypt(Z 
> * C, "trampoline level" | C | Z | encrypt(Z * C, D | encrypt(Z * D, E)
> Upon reaching C, it sees it is given a trampoline-level onion, and if C is 
> unable to find D in its local map, it can delegate it to some other node.
> For example, if C thinks its neighbor M knows D, it can create:
> encrypt(C * M, "link level" | M | encrypt(C * M, "trampoline level" | D | Z | 
> encrypt(Z * D, E)))
> M finds that it is not the first hop in the trampoline-level onion.
> So M finds a route to D, for example via M->N->D, and gives:
> encrypt(M * N, "link level" | D | encrypt(M * D, "trampoline level" | D | Z | 
> encrypt(Z * D, E)))
> Is this workable?
> Note that it seems to encounter the same problem as Rendezvous Routing.
> I assume it is possible to do this somehow (else how would hidden services in 
> Tor work?), but the details, I am uncertain of.
> I only play a cryptographer on Internet.
> Regards,
> ZmnSCPxj

Re: [Lightning-dev] Outsourcing route computation with trampoline payments

2019-04-03 Thread Christian Decker
On Wed, 3 Apr 2019, 05:42 ZmnSCPxj via Lightning-dev <> wrote:

> Good morning Pierre and list,
> > There is another unrelated issue: because trampoline nodes don't know
> > anything about what happened before they received the onion, they may
> > unintentionnaly create overlapping routes. So we can't simply use the
> > payment_hash as we currently do, we would have to use something a bit
> > more elaborate.
> Just to be clear, the issue is for example with a network like:
> A --- B --- C
>      / \
>     /   \
>    D --- E
> Then, A creates an inner trampoline onion "E->C", and an outer onion
> "A->B->E".
> E, on receiving the inner trampoline onion "E->C", finds that E->B
> direction is low capacity, so routes over the outer onion "E->D->B->C" with
> inner trampoline onion "C".
> This creates an overall route A->B->E->D->B->C.
> When the B->C HTLC is resolved, B can instead claim the A->B HTLC and just
> fail the D->B HTLC, thereby removing D and E from the route and claiming
> their fees, even though they participated in the route.

This is not an issue. Like we discussed for the multi-part payments the
HTLCs still resolve correctly: though node B might choose to short-circuit
the payment, it'll also clear the HTLCs through E and D (by failing them
instead of settling them), but the overall payment remains atomic and
end-to-end secure. The skipped nodes (which may include the trampoline) may
lose a bit of fees, but that is not in any way different than a failed
payment attempt that is being retried from the sender :-)


Re: [Lightning-dev] Outsourcing route computation with trampoline payments

2019-04-01 Thread Christian Decker
Thanks Pierre for this awesome proposal, I think we're onto something
here. I'll add a bit more color to the proposal, since I've been
thinking about it all weekend :-)

There are two ways we can use this:

 - A simple variant in which we just tell a single trampoline what the
   intended recipient is (just a pubkey, and an amount) and it'll find a
   route to the recipient on its own.
 - A complex variant in which a trampoline is given a next hop, and a
   smaller onion to pass along to the next hop. The trampoline doesn't
   learn the intended recipient, but can still route it.

# Simple Variant

As the name implies it is pretty trivial to implement: the sender
computes a route to some trampoline node `t` it knows in its 2- or
3-neighborhood and creates an onion that describes this route. The
payload for the trampoline node `t` then contains two parameters:
`receiver` and `amount`. The trampoline node `t` then computes a route
from itself to the `receiver` and creates a new onion (the old onion
terminates at the trampoline node). Since the trampoline node generates
a new route, it also generates the shared secrets, HMACs and everything
else to match (no problem with matching HMACs like in the case of
rendezvous routing).

The receiver doesn't learn anything about this being a trampoline
payment (it doesn't even have to implement it itself), and resolution of
the HTLC happens like normal (with the extra caveat that the trampoline
needs to associate the upstream incoming HTLC with the resolution of the
downstream HTLC, but we do that anyway now).

# Multi-trampoline routing

The more complex proposal involves nesting a smaller onion into the
outer routing onion. For this the sender generates a small onion of, for
example, 10 hops whose length is only 650 bytes instead of the 20 hops
for the outer routing onion. The hops in the inner/smaller onion do not
have to be adjacent to each other, i.e., they can be picked randomly
from the set of known nodes and there doesn't need to be a channel
between two consecutive hops, unlike in the outer/routing onion. The
hops in the smaller onion are called trampolines `t_1` to `t_10`.

The construction of the smaller onion can be identical to the
construction of the routing onion, just needs its size adjusted. The
sender then picks a random trampoline node `t_0` in its known
neighborhood and generates a routing onion containing the smaller onion
as payload to `t_0` and signaling data (final recipient, amount, inner
onion). Upon receiving an incoming payment with trampoline instructions
a trampoline `t_i` unwraps the inner onion, which yields the next
trampoline `t_{i+1}` node_id. The trampoline then finds a route to
`t_{i+1}`, serializing the inner onion (which was unwrapped and is now
destined for `t_{i+1}`) and creating the outer routing onion with that
as the payload. Notice that, like in the simple variant, `t_i` generates
a new outer onion, which means we don't have any issues with shared
secrets and HMACs like in rendezvous routing. Resolution is also
identical to above.

This construction reuses all the onion primitives we already have, and
it allows us to bounce a payment through multiple trampolines without
them learning their position in this nested path. The sender does
not have to have a total view of the network topology, just have a
reasonable chance that two consecutive trampolines can find a route to
each other, i.e., don't use mobile phone as trampolines :-)

# Tradeoffs everywhere

## Longer Routes

One potential downside is that by introducing this two-level nesting of
an outer routing onion and an inner trampoline onion, we increase the
maximum length of a route to `num_outer_hops * num_inner_hops`, given
that each layer of the inner onion may initiate a new `num_outer_hops`
outer route. For the example above (which is also the worst case) we
have 10 inner hops, and 9 outer hops (due to the signalling overhead),
which results in a maximum route length of 90 hops. This may result in
more funds being used to route a payment, but it may also increase
chances of the payment succeeding.
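The worst-case arithmetic above can be checked with a two-line sketch:

```python
# Worst case from the section above: each of the 10 inner (trampoline) hops
# may trigger a fresh outer route of up to 9 hops (one outer slot is consumed
# by the trampoline signalling payload), for 90 hops total.
num_inner_hops = 10
num_outer_hops = 9
max_route_length = num_inner_hops * num_outer_hops
assert max_route_length == 90
```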

## Comparison with TCP/IP + Tor

This proposal also brings us a lot closer to the structure of Tor on the
public network, in which the nodes that are part of a circuit do not
have to be direct neighbors in the network topology since end-to-end
reachability is guaranteed by a base routing layer (TCP/IP) whereas
sender/receiver obfuscation is tackled at a higher layer (Tor).

In our case the outer onion serves as the base routing layer that is
used for point-to-point communication, but unlike TCP/IP is also
encrypted and routed anonymously, while the inner onion takes care of
end-to-end reachability, also in encrypted fashion.

## In-network retrial

The comparison with TCP/IP and Tor might have hinted at this already:
since the outer onion is created from scratch at each trampoline, a
trampoline may actually retry a payment multiple times if an attempt
failed, reducing the burden on the sender.

Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-14 Thread Christian Decker
Anthony Towns  writes:
> I'm thinking of tagged outputs as "taproot plus" (ie, plus noinput),
> so if you used a tagged output, you could do everything normal taproot
> address could, but also do noinput sigs for them.
> So you might have:
>funding tx -> cooperative claim
>funding tx -> update 3 [TAGGED] -> settlement 3 -> claim
>funding tx -> update 3 [TAGGED] -> 
>  update 4 [TAGGED,NOINPUT] -> 
>settlement 4 [TAGGED,NOINPUT] -> 
>claim [NOINPUT]
> In the cooperative case, no output tagging needed.

I might be missing something here, but how do you bind update 3 to the
funding tx output, when that output is not tagged? Do we keep each
update in multiple separate states, one bound to the funding tx output
and another signed with noinput? If that's the case we just doubled our
storage and communication requirements for very little gain. An
alternative is to add a trigger transaction that needs to be published
in a unilateral case, but that'd increase our on-chain footprint.

Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-24 Thread Christian Decker
ZmnSCPxj  writes:
> Good morning Christian, Rusty, and list,
> You can take this a step further and make the realm 0 byte into a
> special type "0" which has a fixed length of 1299 bytes, with the
> length never encoded for this special type.  It would then define the
> next 1299 bytes as the "V", having the format of 64 bytes of the
> current hop format (short channel ID, amount, CLTV, 12-byte padding,
> HMAC), plus 19*65 bytes as the encrypted form of the next hop data.
> This lets us reclaim even the realm byte, removing its overhead by
> re-encoding it as the type in a TLV system, and with the special
> exception of dropping the "L" for the type 0 (== current realm 0)
> case.

I disagree that this would be any clearer than the current proposal
since we completely lose the separation of payload encoding vs. onion
encoding. Let's not mix the concepts of payload and transport onion,

> In short, drop the concept of 65-byte "frames".
> We could have another special length-not-encoded type 255, which
> declares the next 32 bytes as HMAC and the rest of the onion packet as
> the data for the next hop.
> The above is not a particularly serious proposal.

You had me worried for a second there :-)

Re: [Lightning-dev] Multi-frame sphinx onion format

2019-02-22 Thread Christian Decker
Rusty Russell  writes:
> There are two ways to add TLV to the onion:
> 1. Leave the existing fields and put TLV in the padding:
>* [`8`:`short_channel_id`]
>* [`8`:`amt_to_forward`]
>* [`4`:`outgoing_cltv_value`]
>* [`12`:`padding`]
> 2. Replace existing fields with TLV (eg. 2=short_channel_id,
>4=amt_to_forward, 6=outgoing_cltv_value) and use realm > 0
>to flag the new TLV format.
> The length turns out about the same for intermediary hops, since:
> TLV of short_channel_id => 10 bytes
> TLV of amt_to_forward => probably 5-6 bytes.
> TLV of outgoing_cltv_value => probably 3-4 bytes.
> For final hop, we don't use short_channel_id, so we save significantly
> there.  That's also where many proposals to add information go (eg. a
> special "app-level" value), so it sways me in the direction of making
> TLV take the entire room.

I'd definitely vote for making the entire payload a TLV (option 2) since
that allows us to completely redefine the payload. I don't think the
overhead argument really applies since we're currently wasting 12 bytes
of payload anyway, and with option 2 we still fit the current payload in
a single frame.

There is however a third option, namely make the entire payload a
TLV-set and then use the old payload format (`short_channel_id`,
`amt_to_forward`, `outgoing_cltv_value`) as a single TLV-value with 20
bytes of size. That means we have only 2 bytes of overhead compared to
the old v0 format (4 byte less than option 2), and can drop it if we
require some other payload that doesn't adhere to this format.
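A rough sketch of this third option (the single-byte type/length and the type number are illustrative assumptions, not the actual TLV spec): the legacy 20-byte hop payload (8-byte `short_channel_id`, 8-byte `amt_to_forward`, 4-byte `outgoing_cltv_value`) becomes one TLV record with 2 bytes of overhead.

```python
# Sketch of "option 3": encode the legacy hop payload (short_channel_id,
# amt_to_forward, outgoing_cltv_value = 8 + 8 + 4 = 20 bytes) as one TLV
# record. Type and length are one byte each here, giving 2 bytes of overhead.
import struct

LEGACY_HOP_TLV_TYPE = 0  # hypothetical type number for illustration

def encode_legacy_hop_tlv(scid: int, amt_msat: int, cltv: int) -> bytes:
    value = struct.pack(">QQI", scid, amt_msat, cltv)   # 20-byte value
    return bytes([LEGACY_HOP_TLV_TYPE, len(value)]) + value

record = encode_legacy_hop_tlv(scid=0x1234, amt_msat=5000, cltv=144)
assert len(record) == 22           # 20-byte payload + 2 bytes TLV overhead
```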


[Lightning-dev] Multi-frame sphinx onion format

2019-02-18 Thread Christian Decker
Heya everybody,

during the spec meeting in Adelaide we decided that we'd like to extend
our current onion-routing capabilities with a couple of new features,
such as rendez-vous routing, spontaneous payments, multi-part payments,
etc. These features rely on two changes to the current onion format:
bigger per-hop payloads (in the form of multi-frame payloads) and a more
modern encoding (given by the TLV encoding).

In the following I will explain my proposal on how to extend the per-hop
payload from the current 65 bytes (which include realm and HMAC) to
multiples of 65 bytes, i.e., multi-frame payloads.

Until now we had a 1-to-1 relationship between a 65 byte segment of
payload and a hop in the route. Since this is no longer the case, I
propose we call the 65 byte segment a frame, to differentiate it from a
hop in the route, hence the name multi-frame onion. The creation and
decoding process doesn't really change at all, only some of the
parameters do.

When constructing the onion, the sender currently always right-shifts by
a single 65 byte frame, serializes the payload, and encrypts using the
ChaCha20 stream. In parallel it also generates the fillers (basically 0s
that get appended and encrypted by the processing nodes, in order to get
matching HMACs), these are also shifted by a single 65 byte frame on
each hop. The change in the generation comes in the form of variable
shifts for both the payload serialization and filler generation,
depending on the payload size. So if the payload fits into 32 bytes
nothing changes, if the payload is bigger, we just use additional frames
until it fits. The payload is padded with 0s, the HMAC remains as the
last 32 bytes of the payload, and the realm stays at the first
byte. This gives us

> payload_size = num_frames * 65 byte - 1 byte (realm) - 32 bytes (hmac)

The realm byte encodes both the payload format as well as how many
additional frames were used to encode the payload. The MSB 4 bits encode
the number of frames used, while the 4 LSB bits encode the realm/payload
format.
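The frame accounting and realm-byte packing described above can be sketched as follows (a small illustration of the proposal's arithmetic, not implementation code):

```python
# Sketch of the multi-frame accounting from the proposal:
#   payload_size = num_frames * 65 - 1 (realm) - 32 (hmac)
# and the realm byte: 4 MSB bits = frames used, 4 LSB bits = realm/format.
FRAME_SIZE, REALM_BYTES, HMAC_BYTES = 65, 1, 32

def payload_size(num_frames: int) -> int:
    return num_frames * FRAME_SIZE - REALM_BYTES - HMAC_BYTES

def encode_realm(num_frames: int, realm: int) -> int:
    assert 0 <= num_frames < 16 and 0 <= realm < 16
    return (num_frames << 4) | realm

def decode_realm(byte: int):
    return byte >> 4, byte & 0x0F

assert payload_size(1) == 32                 # single frame: current format
assert payload_size(2) == 97                 # two frames
assert decode_realm(encode_realm(1, 0)) == (1, 0)
```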

The decoding of an onion packet pretty much stays the same, the
receiving node generates the shared secret, then generates the ChaCha20
stream, and decrypts the packet (and additional padding that matches the
filler the sender generated for HMACs). It can then read the realm byte,
and knows how many frames to read, and how many frames it needs to left-
shift in order to derive the next onion.

This is a competing proposal with the proposal by roasbeef on the
lightning-onion repo [1], but I think it is superior in a number of
ways. The major advantage of this proposal is that the payload is in one
contiguous memory region after the decryption, avoiding re-assembly of
multiple parts and allowing zero-copy processing of the data. It also
avoids multiple decryption steps, and does not waste space on multiple,
useless, HMACs. I also believe that this proposal is simpler than [1],
since it doesn't require re-assembly, and creates a clear distinction
between payload units and hops.

To show that this proposal actually works, and is rather simple, I went
ahead and implemented it for c-lightning [2] and lnd [3] (sorry ACINQ,
my Scala is not sufficient to implement it for eclair). Most of the code
changes are preparation for variable-size payloads alongside the legacy
v0 payloads we used so far; the relevant commits that actually change
the generation of the onion are [4] and [5] for c-lightning and lnd
respectively.

I'm hoping that this proposal proves to be useful, and that you agree
about the advantages I outlined above. I'd also like to mention that,
while this is working, I'm open to suggestions :-)


Lightning-dev mailing list

Re: [Lightning-dev] Extending Associated Data in the Sphinx Packet to Cover All Payment Details

2019-02-08 Thread Christian Decker
Hi Laolu,

thanks for bringing this up. I think committing to more data might be
nice, but I have some reservations re signaling in the onion packet
version. But let's start at the top:

> However, since the CLTV isn't also authenticated, then it's possible
> to attempt to inject a new HTLC with a fresher CLTV. If the node isn't
> keeping around all pre-images, then they might forward this since it
> passes the regular expiry tests. If we instead extend the associated
> data payload to cover the CLTV as well, then this binds the adversary
> to using the same CLTV details.

The CLTV is actually committed to indirectly through the outgoing CLTV
value in the onion payload itself (both for intermediate hops and final
hops). For intermediate hops we will refuse any forward that has a CLTV
value for the next leg that is not far enough in the future based on the
incoming CLTV value. Notice that the values we commit to are not deltas,
but absolute values. This means that a node needs to keep a cache of
shared secrets used until the `outgoing_cltv_value` from the onion dips
below `incoming_cltv_value - cltv_expiry_delta`. Any replay attempt
after that will result in the first hop (adjacent to the attacker)
rejecting the HTLC with an `incorrect_cltv_expiry` error.
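That cache-pruning rule can be sketched as follows (a hypothetical
structure, using a lower bound on incoming CLTV values as the pruning
trigger):

```python
class SharedSecretCache:
    """Remember per-HTLC shared secrets until a replayed onion carrying
    the same outgoing_cltv_value would fail the normal expiry test anyway."""
    def __init__(self, cltv_expiry_delta):
        self.delta = cltv_expiry_delta
        self.entries = {}  # shared_secret -> outgoing_cltv_value

    def remember(self, shared_secret, outgoing_cltv):
        self.entries[shared_secret] = outgoing_cltv

    def prune(self, min_incoming_cltv):
        # once outgoing_cltv < incoming_cltv - delta the forward is refused
        # as too close to expiry, so the secret no longer needs caching
        cutoff = min_incoming_cltv - self.delta
        self.entries = {s: c for s, c in self.entries.items() if c >= cutoff}

    def is_replay(self, shared_secret):
        return shared_secret in self.entries

cache = SharedSecretCache(cltv_expiry_delta=144)
cache.remember(b"secret-1", outgoing_cltv=600_000)
assert cache.is_replay(b"secret-1")
cache.prune(min_incoming_cltv=600_200)   # 600_000 < 600_200 - 144: drop it
assert not cache.is_replay(b"secret-1")
```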

That being said I'm happy to add more information to the AD, but it may
need to be rolled out differently from what you describe.

> If this were to be deployed, then we can do it by using a new packet version
> in the Sphinx packet. Nodes that come across this new version (signalled by
> a global feature bit) would then know to include the extra information in
> the AD for their MAC check.

This will not really work if the route contains any node that does not
understand the new version of the packet. The node prior to the
non-upgraded node would have to downgrade the packet version from v1 to
v0 understood by the non-upgraded node, which could be done via an
instruction in the per-hop payload itself, but the non-upgraded node
would not have any way of learning that it needs to upgrade the packet
version to v1 again. This means we can use v1 up to the first node that
doesn't understand v1, and have a permanent downgrade for the rest of
the route.

We might get away with signalling this in the payload itself, but that
inverts the processing of the onion: we'd parse and interpret the
payload before checking the HMAC, which I can already hear
cryptographers groan about :-)

> While we're at it, we should also actually *commit* to the packet
> version. Right now nodes can swap out the version to anything they
> want, potentially causing another node to reject the packet.  This
> should also be added to the AD to ensure the packet can't be modified
> without another node detecting it.

I don't think this is really useful. If a node wants to cause us to
reject a packet it can just tamper with anything in the payload and
we'll fail with an HMAC failure. The version really is just a hint as to
how we should process the packet, and if tampered with it'll just cause
us to reject, similarly to when the attacker modifies the ephemeral key.

> Longer term, we may end up with _all_ payment details in the Sphinx packet.

Agreed, we can also just use the serialized HTLC output, since that is
the on-chain representation of the payment, and therefore has to include
all relevant details :-)


Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Christian Decker
Fabrice Drouin  writes:

> I think there may even be a simpler case where not replacing updates
> will result in nodes not knowing that a channel has been re-enabled:
> suppose you got 3 updates U1, U2, U3 for the same channel, U2 disables
> it, U3 enables it again and is the same as U1. If you discard it and
> just keep U1, and your peer has U2, how will you tell them that the
> channel has been enabled again ? Unless "discard" here means keep the
> update but don't broadcast it ?

Excellent point, that's a simpler example of how it could break down.

>> I think all the bolted on things are pretty much overkill at this point,
>> it is unlikely that we will get any consistency in our views of the
>> routing table, but that's actually not needed to route, and we should
>> consider this a best effort gossip protocol anyway. If the routing
>> protocol is too chatty, we should make efforts towards local policies at
>> the senders of the update to reduce the number of flapping updates, not
>> build in-network deduplications. Maybe something like "eager-disable"
>> and "lazy-enable" is what we should go for, in which disables are sent
>> right away, and enables are put on an exponential backoff timeout (after
>> all what use are flappy nodes for routing?).
> Yes there are probably heuristics that would help reducing gossip
> traffic, and I see your point but I was thinking about doing the
> opposite: "eager-enable" and "lazy-disable", because from a sender's
> p.o.v trying to use a disabled channel is better than ignoring an
> enabled channel.

That depends on what you are trying to optimize. Your solution keeps
more channels in enabled mode, potentially increasing failures due to
channels being unavailable. I was approaching it from the other side,
since failures are on the critical path in the payment flow, they'd
result in longer delays and many more retries, which I think is annoying
too. It probably depends on the network structure, i.e., if the fanout
from the endpoints is large, missing some channels shouldn't be a
problem, in which case the many failures delaying your payment weighs
more than not finding a route (eager-disable & lazy-enable). If on the
other hand we are really relying on a huge number of flaky connections
then eager-enable & lazy-disable might get lucky and get the payment
through. I'm hoping the network will have the latter structure, because
we'd have really unpredictable behavior anyway.

We'll probably gain more insight once we start probing the network. My
expectation is that today's network is a baseline, whose resiliency and
redundancy will improve over time, hopefully swinging in favor of
trading off the speed gains over bare routability.


Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-08 Thread Christian Decker
Rusty Russell  writes:
>> But only 18 000 pairs of channel updates carry actual fee and/or HTLC
>> value change. 85% of the time, we just queried information that we
>> already had!
> Note that this can happen in two legitimate cases:
> 1. The weekly refresh of channel_update.
> 2. A node updated too fast (A->B->A) and the ->A update caught up with the
>->B update.
> Fortunately, this seems fairly easy to handle: discard the newer
> duplicate (unless > 1 week old).  For future more advanced
> reconstruction schemes (eg. INV or minisketch), we could remember the
> latest timestamp of the duplicate, so we can avoid requesting it again.

Unfortunately this assumes that you have a single update partner, and
still results in flaps, and might even result in a stuck state for some
nodes.

Assume that we have a network in which a node D receives the updates
from a node A through two or more separate paths:

A --- B --- D
 \--- C ---/

And let's assume that some channel of A (c_A) is flapping (not the ones
to B and C). A will send out two updates: one disables and the other one
re-enables c_A; otherwise they are identical (timestamps and signatures
differ, of course). The flush interval in B is sufficient
to see both updates before flushing, hence both updates get dropped and
nothing apparently changed (D doesn't get told about anything from
B). The flush interval of C triggers after getting the re-enable, and D
gets the disabling update, followed by the enabling update once C's
flush interval triggers again. Worse if the connection A-C gets severed
between the updates, now C and D learned that the channel is disabled
and will not get the re-enabling update since B has dropped that one
altogether. If B now gets told by D about the disable, it'll also go
"ok, I'll disable it as well", leaving the entire network believing that
the channel is disabled.

This is really hard to debug, since A has sent a re-enabling
channel_update, but everybody is stuck in the old state.
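The failure mode above can be reproduced with a toy model of the
staggered broadcast (illustrative only, not any implementation's actual
logic):

```python
def flush(buffered, known):
    """Staggered-broadcast sketch: within one flush interval only the
    most recent update per channel survives, and it is dropped entirely
    if it carries the same content as what we already forwarded."""
    latest = {}
    for upd in buffered:          # assume arrival order equals timestamp order
        latest[upd["scid"]] = upd
    out = []
    for scid, upd in latest.items():
        prev = known.get(scid)
        if prev is None or prev["disabled"] != upd["disabled"]:
            out.append(upd)
        known[scid] = upd
    return out

known = {"c_A": {"scid": "c_A", "disabled": False}}
disable = {"scid": "c_A", "disabled": True}
enable = {"scid": "c_A", "disabled": False}

# B sees both updates within one interval: net effect is "no change",
# so nothing is forwarded to D at all.
assert flush([disable, enable], dict(known)) == []

# C flushes between the two: D first learns "disabled", then "enabled".
k = dict(known)
assert flush([disable], k) == [disable]
assert flush([enable], k) == [enable]
```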

At least locally updating timestamp and signature for identical updates
and then not broadcasting if they were the only changes would at least
prevent the last issue of overriding a dropped state with an earlier
one, but it'd still leave C and D in an inconsistent state until we have
some sort of passive sync that compares routing tables and fixes these
discrepancies.

>> Adding a basic checksum (4 bytes for example) that covers fees and
>> HTLC min/max value to our channel range queries would be a significant
>> improvement and I will add this the open BOLT 1.1 proposal to extend
>> queries with timestamps.
>> I also think that such a checksum could also be used
>> - in “inventory” based gossip messages
>> - in set reconciliation schemes: we could reconcile [channel id |
>> timestamp | checksum] first
> I think this is overkill?

I think all the bolted on things are pretty much overkill at this point,
it is unlikely that we will get any consistency in our views of the
routing table, but that's actually not needed to route, and we should
consider this a best effort gossip protocol anyway. If the routing
protocol is too chatty, we should make efforts towards local policies at
the senders of the update to reduce the number of flapping updates, not
build in-network deduplications. Maybe something like "eager-disable"
and "lazy-enable" is what we should go for, in which disables are sent
right away, and enables are put on an exponential backoff timeout (after
all what use are flappy nodes for routing?).
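A minimal sketch of such an "eager-disable / lazy-enable" policy
(parameter values and method names are made up for illustration):

```python
class EnableGate:
    """Eager-disable / lazy-enable: a disable is broadcast immediately,
    while each enable is held back with exponentially growing delay, so
    flappy channels generate little gossip."""
    def __init__(self, base_delay=60.0, max_delay=3600.0):
        self.delay = base_delay
        self.max_delay = max_delay
        self.enable_due = None

    def on_disable(self, now):
        self.enable_due = None          # cancel any pending enable
        return "broadcast-disable"      # eager: goes out right away

    def on_enable(self, now):
        self.enable_due = now + self.delay
        self.delay = min(self.delay * 2, self.max_delay)  # back off further
        return "hold-enable"

    def poll(self, now):
        # a real policy would also decay self.delay after long stability
        if self.enable_due is not None and now >= self.enable_due:
            self.enable_due = None
            return "broadcast-enable"
        return None

gate = EnableGate(base_delay=10)
assert gate.on_disable(0) == "broadcast-disable"
assert gate.on_enable(0) == "hold-enable"
assert gate.poll(5) is None            # enable still held back
assert gate.poll(10) == "broadcast-enable"
gate.on_disable(20); gate.on_enable(20)
assert gate.poll(30) is None           # second flap waits twice as long
assert gate.poll(40) == "broadcast-enable"
```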


Re: [Lightning-dev] Quick analysis of channel_update data

2019-01-02 Thread Christian Decker
Hi Fabrice,

happy new year to you too :-)

Thanks for taking the time to collect that information. It's very much
in line with what we were expecting in that most of the updates come
from flapping channels. Your second observation that some updates only
change the timestamp is likely due to the staggered broadcast merging
multiple updates, e.g., one disabling and one enabling the channel, that
are sent very close to each other. This is the very reason we introduced
the staggering back in the days, as it limits the maximum rate of
updates a single node may produce for each of its channels.

In the second case we can probably get away with not forwarding the
update, but updating the timestamp and signature for the old
`channel_update` locally, so that we don't then flap back to an older
one should we get that in a roundabout way. That's purely a local
decision and does not warrant a spec change imho.
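That local decision could look roughly like this (a sketch with
hypothetical field names):

```python
NON_SEMANTIC = {"timestamp", "signature"}

def absorb_update(store, upd):
    """Keep the freshest timestamp/signature locally so an older
    duplicate arriving via a roundabout path can't win, but only report
    'forward' when a substantive field actually changed."""
    old = store.get(upd["scid"])
    store[upd["scid"]] = upd
    if old is None:
        return True
    return any(upd[k] != old.get(k) for k in upd if k not in NON_SEMANTIC)

store = {}
u1 = {"scid": 42, "timestamp": 1, "signature": "s1", "fee": 10}
u2 = {"scid": 42, "timestamp": 2, "signature": "s2", "fee": 10}
u3 = {"scid": 42, "timestamp": 3, "signature": "s3", "fee": 11}
assert absorb_update(store, u1)        # new channel: forward
assert not absorb_update(store, u2)    # only timestamp/sig changed: stay quiet
assert store[42]["timestamp"] == 2     # ...but our local copy is refreshed
assert absorb_update(store, u3)        # fee changed: forward
```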

For the ones that flap with a period that is long enough for the
disabling and enabling updates being flushed, we are presented with a
tradeoff. IIRC we (c-lightning) currently hold back disabling
`channel_update`s until someone actually attempts to use the channel at
which point we fail the HTLC and send out the stashed `channel_update`
thus reducing the publicly visible flapping. For the enabling we can't
do that, but we could think about a local policy on how much to delay a
`channel_update` depending on the past stability of that peer. Again
this is local policy and doesn't warrant a spec change.

I think we should probably try out some policies related to when to send
`channel_update`s and how to hide redundant updates, and then we can see
which ones work best :-)


Fabrice Drouin  writes:
> Hello All, and Happy New Year!
> To understand why there is a steady stream of channel updates, even
> when fee parameters don't seem to actually change, I made hourly
> backups of the routing table of one of our nodes, and compared these
> routing tables to see what exactly was being modified.
> It turns out that:
> - there are a lot of disable/enable/disable etc…. updates which are
> just sent when a channel is disabled then enabled again (when nodes go
> offline for example ?). This can happen
> there are also a lot of updates that don’t change anything (just a new
> timestamp and signatures but otherwise same info), up to several times
> a day for the same channel id
> In both cases we end up syncing info that we already have.
> I don’t know yet how best to use this when syncing routing tables, but
> I thought it was worth sharing anyway. A basic checksum that does not
> cover all fields, but only fees and HTLC min/max values could probably
> be used to improve routing table sync ?
> Cheers,
> Fabrice

Re: [Lightning-dev] Reason for having HMACs in Sphinx

2018-12-06 Thread Christian Decker
Corné Plooy  writes:

>> The total_decorrelation_secrets serves as the payer-generated shared
>> secret between payer and payee.  B cannot learn this, and thus cannot
>> fake its own secret.  Even if it instead offers ((I + K[A]) + k[z] *
>> G) for a new secret k[z], it cannot know how to change
>> total_decorrelation_secrets from k[a] + k[b] to k[a] + k[z] instead.
> The way things are now, the ephemeral key generation and the payment
> hash/preimage generation are completely unrelated. This is what allows
> an attacker to use the same payment hash, and use his own ephemeral key
> pair to create a new onion packet around it.

That is correct: one is generated by the recipient (secret and preimage)
and the other one is generated by the sender (ephemeral key). Mixing the
two seems very unwise, since the sender has very little control over the
effective ephemeral key that is going to be used for the last hop. This
is the same issue that we have with rendez-vous routing, i.e., if we
require the ephemeral key to be something specific at a given node,
we'd be breaking the hardness assumption for the ephemeral key
derivation.
> Primarily, path decorrelation replaces the payment hash/preimage part.
> Maybe I still don't understand something, but if that's the only thing
> (without changing the ephemeral key / onion shared secret generation),
> attacking the direct neighbor should still work; in your case, B would
> still offer ((I + K[A]) + K[B]) to C, with an onion packet B created
> himself. I'm not familiar enough with the path correlation to understand
> what happens after step 6, but for C it looks the same, so she should do
> the same.
> I do see that, if you couple the "H"TLC payment secret generation to the
> onion shared secret generation, you can make the attack impossible. Do I
> understand correctly that this is the idea? After all, C still needs to
> receive k somehow; my crypto math isn't that good, but my intuitive
> guess is that i + k is the secret that allows C to claim funds locked in
> ((I + K[A]) + K[B]) =? (i + (k[a] + k[b])) * G. If k is submitted from A
> to C through some mechanism that replaces the current ephemeral key
> system, then I understand what you're at.

I can't quite follow where we would be mixing in the ephemeral key here,
could you elaborate on that?

> Assuming this is the case, it's pretty neat. I do wonder how it
> interacts with rendezvous routing. If the sender and receiver each
> create the k[..] values for their own part of the route, can the
> receiver-generated onion packet still use points of the form ((I + K[A])
> + K[B]), including K[..] values related to the sender side? I need to
> dig deeper into this path decorrelation idea.

Since we have very little control over what ephemeral key will actually
be presented to the last hop if we have a multi-hop route, we can't
really hide any information in the ephemeral key itself. What we could
do is change the way the last hop generates the shared secret from it,
i.e., have a last hop mode and a forwarding hop mode, and mix in the
payment secret somehow, but I can't think of a good way to do that, and
it seems contorted. Let's just have the sender prove knowledge of the
original invoice by adding a TLV field with a shared secret from the
invoice instead.


Re: [Lightning-dev] Reason for having HMACs in Sphinx

2018-12-05 Thread Christian Decker
Rusty Russell  writes:
>> The shared secret doesn't need to be very large: the number of attempts
>> per second (to guess the shared secret) is limited by network latency,
>> bandwidth and maybe some artificial rate limiting. If an attacker can do
>> 100 attempts per second, then a 32-bit shared secret will take (on
>> average) 2^31 / (100*3600*24) = 248 days to crack, for a single guess of
>> which node is the final node. In the mean time, people will have noticed
>> the ongoing attack and will have taken countermeasures. Besides, the
>> transaction lock time will likely have expired in the mean time as well.
> We could really just use the last 4 bytes of the signature, AFAICT.

A stupid idea came to mind that would allow us to use no more space in
the onion at all: store the secret from the invoice in the HMAC
field. That would complicate the final-hop check on the recipient,
which would accept either all 0x00 or some known secret (could also use
a partial HMAC to reduce the number of lookups we need to do). Another
option is that we could XOR it with some other field as well. The
recipient already signaled that it supports this by including a secret
in the invoice in the first place anyway, so no need for a lockstep
upgrade either.

Just putting it out there, I'm still unsure if I like it at all, since
it mixes field purposes, but it is an option if we decide this is a
serious issue.


Re: [Lightning-dev] Reason for having HMACs in Sphinx

2018-11-29 Thread Christian Decker
Hi Corne,

the HMACs are necessary in order to make sure that a hop cannot modify
the packet before forwarding without the next node detecting the
modification.

One potential attack this could facilitate is that an attacker could
learn the path length by messing with different per-hop payloads:
starting with n=0, the attacker flips bits in the nth per-hop payload
and forwards the packet. If the next node doesn't return an error, it
was the final recipient; if it returns an error, increment n and flip
bits in the (n+1)th per-hop payload, until no error is returned.
Congratulations: you just learned the path length after you. The same
can probably be done with the error packet, meaning you can learn the
exact position in the route. Add to that the information you already
know about the network (cltv_deltas, amounts, fees, ...) and you can
probably detect sender and recipient.

Adding HMACs solves this by ensuring that the next hop will return an
error if anything was changed, i.e., removing the leak about which node
would have failed the route.
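The probe can be captured in a toy model (self-contained sketch, just an
oracle abstraction of the behavior described above):

```python
def probe_path_length(hops_after_attacker, max_hops=20):
    """Without per-hop HMACs, corrupting per-hop payload n only causes
    an error if some downstream node actually parses it; the first n
    whose corruption goes unnoticed equals the remaining path length."""
    def error_returned(n):
        return n < hops_after_attacker   # hop n exists and chokes on garbage
    n = 0
    while n < max_hops and error_returned(n):
        n += 1
    return n

assert probe_path_length(3) == 3   # attacker learns there are 3 hops left
```

With HMACs every corruption fails at the very next hop, so the oracle
collapses and the probe learns nothing about the path length.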


Corné Plooy via Lightning-dev  writes:
> Hi,
> Is there a reason why we have HMACs in Sphinx? What could go wrong if we
> didn't?
> A receiving node doesn't know anyway what the origin node is; I don't
> see any attack mode where an attacker wouldn't be able to generate a
> valid HMAC.
> A receiving node only knows which peer sent it a Sphinx packet;
> verification that this peer really sent this Sphinx packet is (I think)
> already done on a lower protocol layer.
> AFAICS, The only real use case of the HMAC value is the special case of
> a 0-valued HMAC, indicating the end of the route. But that's just silly:
> it's essentially a boolean, not any kind of cryptographic verification.

[Lightning-dev] Rendez-vous proposal with ephemeral key switch

2018-11-18 Thread Christian Decker
I finally got around to amending my initial (broken) proposal for the
rendez-vous protocol using the ephemeral key switch at the rendez-vous
point. I'd like to try and keep a live document that describes the
entire proposal in the Wiki to make it easier for people to get an
overall view of the proposal, instead of having to stitch it together
from the ML posts. You can find the proposal here [1]. It makes heavy
use of the description in the onion routing bolt [2].

The initial proposal was to have the rendez-vous node `RV` swap in an
ephemeral key `ek_rv` instead of generating it from `ss_k` derived from
ECDH(`ek_{k-1}`, node_id), because that allows the recipient `R` to
generate the second half of the route by selecting that `ek_rv`.

The problem I mentioned in other mails arises from the fact that when
`RV` decrypts its payload to learn about its routing instructions and
learn about `ek_rv`, it was also encrypting the filler bytes appended to
the end. The decryption is done via XOR with a ChaCha20 bytestream whose
key is generated from `ss_k`, which is unknown to `R` (depends on the
ephemeral key selected by the sender and the intermediate hops). This is
important since `R` needs to know the exact contents of the packet
including the filler to compute valid HMACs.

The fix is relatively simple, and just adds a virtual hop at `RV`. I'll
describe the actions of `RV` here instead of the packet building since
this way is easier to follow:

 - `RV` derives `ss_k` from `ek_k`, which was given to it by the previous
   hop, appends the `0x00`-vector to shift in when stripping its per-hop
   payload (this may need more than 65 bytes since we now shift more than
   one per-hop field), generates the ChaCha20 stream using `ss_k`, and
   XORs the packet with the stream.
 - `RV` reads its per-hop payload, notices that an ephemeral key switch
   is desired, and reads `ek_rv` from the per-hop payload. It overwrites
   the, now encrypted, filler vector with `0x00`-bytes again (to
   recreate a well-known state that `R` can use when generating its
   partial onion).
 - It then derives a new secret key `ss_rv` from `ek_rv` and its node
   ID. `ss_rv` is then used to generate a new ChaCha20 stream which will
   encrypt the packet again (obfuscating the filler) and it'll be used
   to generate a new ephemeral key `ek_{rv+1}` which will be passed on
   to the next hop.
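The steps above can be sketched as a toy model (SHA256-based keystream
and key derivation stand in for ChaCha20 and ECDH, and all sizes are
illustrative):

```python
import hashlib

def keystream(key, n):
    """Toy stand-in for the ChaCha20 stream keyed by a shared secret."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

HOP_SIZE, FILLER_LEN = 65, 65

def rv_process(packet, ss_k):
    """Virtual hop at RV: decrypt under ss_k, read ek_rv from the
    per-hop payload, re-zero the (now encrypted) filler, then
    re-encrypt everything under ss_rv derived from ek_rv."""
    padded = packet + b"\x00" * HOP_SIZE                 # shift-in padding
    clear = xor(padded, keystream(ss_k, len(padded)))
    per_hop, rest = clear[:HOP_SIZE], clear[HOP_SIZE:]
    ek_rv = per_hop[1:33]                                # switch key in payload
    rest = rest[:-FILLER_LEN] + b"\x00" * FILLER_LEN     # recreate known state
    ss_rv = hashlib.sha256(ek_rv).digest()               # stand-in for ECDH
    return xor(rest, keystream(ss_rv, len(rest))), ss_rv

pkt = bytes(range(200)) + b"\x00" * 1100                 # 1300-byte toy onion
out, ss_rv = rv_process(pkt, ss_k=b"\x01" * 32)
assert len(out) == len(pkt)                              # onion size unchanged
assert len(ss_rv) == 32
```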

At this point the normal operation continues as usual. IMHO the proposal
is clean and backwards compatible, but I'm open for suggestions. There
are a number of variants for this protocol, but I chose this one for its
symmetry with the existing scheme. I'll list a few alternatives here:

 - `ek_{rv+1}` == `ek_rv`: it is not really required to generate a new
   ephemeral key for the next hop, we could just reuse it. The reason
   the switch is done in normal Sphinx is to avoid correlating hops, but
   `ek_rv` is not really seen on the wire in cleartext right now so we
   could just reuse it. I prefer not to simply because of symmetry.
 - Overwrite the filler with `0x00`-bytes and don't obfuscate it. This
   is the simple initial proposal, but it leaks the fact that `RV` is a
   rendez-vous node to the next hop.

Please let me know if I missed something, I'll try to implement this
soon and see if something unexpected jumps at me :-)



Re: [Lightning-dev] type,len,value standard

2018-11-16 Thread Christian Decker
Conner Fromknecht  writes:
>> For a sequence of `type,len,value`, the `type`s must be in ascending order
>> -- not explicitly accepted or rejected.  It would be easier to check
>> uniqueness > (the previous rule we accepted) here for a naive parser (keep
>> track of some "minimum allowed type" that initializes at zero, check current
>> type >= this, update to current type + 1) if `type`s are in ascending order.
> Yep ascending makes sense to me, for the reasons you stated.

Definitely a good idea, especially because it results in a canonical
serialization format, which is important to ensure signatures over
messages can be verified even when reserializing parsed messages.

Re: [Lightning-dev] Base AMP

2018-11-15 Thread Christian Decker
I'm not sure this is an improvement at all over just allowing a single
merge-point, i.e., the destination. You see as long as we don't attempt
intermediate merges the routes are independent and failures of one HTLC
do not impact any other parts. Take for example the network below:

  B
 / \
A---C---D
 \     /
  \-E-/

For simplicity let's assume unit capacities on all channels except C-D
and a total payment of 2 from A to D.

If we use C as a merge point for the two partial payments A-C-D and
A-B-C-D, then C can only forward if both partial payment succeed, i.e.,
if for example A-C fails then we'll need to tear down the HTLCs for both
paths because it'll no longer be possible to find an alternative route
to fulfill the forwarding of 2 over C-D.

If however we have two independent routes A-B-C-D and A-C-D, then A-C-D
can fail independently and we can recover by attempting A-E-D, no need
to touch A-B-C-D at all.

Overall it seems we get very little benefit (we save some HTLC setups
and teardown) for a lot of added complexity. In the above case we would
have saved on a single C-D HTLC, and the cost of doing so is many times
larger (2 HTLCs needed to be torn down because we could no longer pass
enough capacity to C in order for it to reach the forward threshold).

Let's please stick with the simple mechanism of having the recipient be
the only merge point.


ZmnSCPxj via Lightning-dev writes:
> Good morning list,
> I propose the below to support Base AMP.
> The below would allow arbitrary merges of paths, but not arbitrary splits.  I 
> am uncertain about the safety of arbitrary splits.
> ### The `multipath_merge_per_hop` type (`option_base_amp`)
> This indicates that payment has been split by the sender using Base AMP, and 
> that the receiver should wait for the total intended payment before 
> forwarding or claiming the payment.
> In case the receiving node is not the last node in the path, then succeeding 
> hops MUST be the same across all splits.
> 1. type: 1 (`termination_per_hop`)
> 2. data:
>   * [`8` : `short_channel_id`]
>   * [`8` : `amt_to_forward`]
>   * [`4` : `outgoing_cltv_value`]
>   * [`8` : `intended_total_payment`]
>   * [`4` : `zeros`]
> The contents of this hop will be the same across all paths of the Base AMP.
> The `payment_hash` of the incoming HTLCs will also be the same across all 
> paths of the Base AMP.
> `intended_total_payment` is the total amount of money that this node should 
> expect to receive in all incoming paths to the same `payment_hash`.
> This may be the last hop of a payment onion, in which case the `HMAC` for 
> this hop will be `0` (the same rule as for `per_hop_type` 0).
> The receiver:
> * MUST impose a reasonable timeout for waiting to receive all component 
> paths, and fail all incoming HTLC offers for the `payment_hash`  if they have 
> not totalled equal to `intended_total_payment`.
> * MUST NOT forward (if an intermediate node) or claim (if the final node) 
> unless it has received a total greater or equal to `intended_total_payment` 
> in all incoming HTLCs for the same `payment_hash`.
> The sender:
> * MUST use the same `payment_hash` for all paths of a single multipath 
> payment.
> Regards,
> ZmnSCPxj

Re: [Lightning-dev] Link-level payment splitting via intermediary rendezvous nodes

2018-11-14 Thread Christian Decker
ZmnSCPxj  writes:
> The construction we came up with allows multiple rendezvous nodes,
> unlike the HORNET construction that is inherently only a single
> rendezvous node.  Perhaps the extra flexibility comes with some
> security degradation?

I don't think this is the case. If I remember correctly (Conner please
correct me if I'm wrong here), then the Hornet rendez-vous construction
relied on a Sphinx packet from the RV to R, wrapped in a Sphinx packet
from S to RV. This was possible because of the variable sized payload.
It would be possible to do that a number of times, with the downside
that the packet would be bigger and bigger since we are wrapping full
Sphinx packets.

Our construction with the ephemeral key switch at the rendez-vous point
is identical to that construction, except that we have the ephemeral key
at the RV hidden inside the routing information (per-hop payload) and
the remainder of the route in what would otherwise be padding. The
constructions are IMHO no different except for the location we store the
forward information that the RV will have to unpack (per-hop payload
instead of nested sphinx packets).

The only difficulty that I pointed out comes from the fact that the HMAC
verification can't work if we can't generate a specific shared secret at
the RV, which to me sounds like an intrinsic property of the way we use
one-way functions to derive those.


Re: [Lightning-dev] Link-level payment splitting via intermediary rendezvous nodes

2018-11-14 Thread Christian Decker
Hi Conner,

thanks for the pointers, looking forward to reading up on the
wrap resistance. I don't quite follow if you're against the
re-wrapping for spontaneous re-routing, or the entire rendez-vous
construction we came up with in Australia. If it's the latter, do
you have an alternative construction that we might look at?
Hornet requires the onion-in-onion initial sphinx setup IIRC
which is pretty much what we came up with here (with the
exception that we manage to have the second onion be hidden in
the first one's header instead of the payload).


On Tue, Nov 13, 2018 at 9:21 PM Conner Fromknecht

> Good morning all,
> Taking a step back—even if key switching can be done mathematically, it
> seems
> dubious that we would want to introduce re-routing or rendezvous routing
> in this
> manner. If the example provided _could_ be done, it would directly violate
> the
> wrap-resistance property of the ideal onion routing scheme defined in [1].
> This
> property is proven for Sphinx in section 4.3 of [2]. Schemes like HORNET
> [3]
> support rendezvous routing and are formally proven in this model. Seems
> this
> would be the obvious path forward, given that we've already done a
> considerable
> amount of work towards implementing HORNET via Sphinx.
> Cheers,
> Conner
> [1] A Formal Treatment of Onion Routing:
> [2] Sphinx:
> [3] HORNET:
> On Mon, Nov 12, 2018 at 8:47 PM ZmnSCPxj via Lightning-dev
>  wrote:
> >
> > Good morning Christian,
> >
> > I am nowhere near a mathematician, thus, cannot countercheck your
> expertise here (and cannot give a counterproposal thusly).
> >
> > But I want to point out the below scenarios:
> >
> > 1.  C is the payer.  He is in contact with an unknown payee (who in
> reality is E).  E provides the onion-wrapped route D->E with ephemeral key
> and other data necessary, as well as informing C that D is the rendez-vous
> point.  Then C creates a route from itself to D (via channel C->D or via
> C->A->D).
> >
> > 2.  B is the payer.  He knows the entire route B->C->D->E and knows that
> payee is C.  Unfortunately the C<->D channel is low capacity or down or etc
> etc.  At C, B has provided the onion-wrapped route D->E with ephemeral key
> and other data necessary, as well as informing to C that D is the next
> node.  Then C either pays via C->D or via C->A->D.
> >
> > Even if there is an off-by-one error in our thinking about rendez-vous
> nodes, could it not be compensated also by an off-by-one in the link-level
> payment splitting via intermediary rendez-vous node?
> > In short, D is the one that switches keys instead of A.
> >
> > The operation of processing a hop would be:
> >
> > 1.  Unwrap the onion with current ephemeral key.
> > 2.  Dispatch based on realm byte.
> > 2.1.  If realm byte 0:
> > 2.1.1.  Normal routing behavior, extract HMAC, etc etc
> > 2.2.  If realm byte 2 "switch ephemeral keys":
> > 2.2.1.  Set current ephemeral key to bytes 1 -> 32 of packet.
> > 2.2.2.  Shift onion by one hop packet.
> > 2.2.3.  Goto 1.
> >
> > Would that not work?
> > (I am being naive here, as I am not a mathist and I did not understand
> half what you wrote, sorry)
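The dispatch loop in steps 1-2.2.3 above can be sketched as a toy. The XOR "encryption", the realm constants, and the 65-byte hop-packet size are stand-in assumptions for illustration; real Sphinx processing is considerably more involved:

```python
# Toy sketch of ZmnSCPxj's per-hop dispatch (steps 1-2.2.3 above).
# The XOR "cipher" is a stand-in for real onion decryption.
REALM_NORMAL = 0      # 2.1: normal routing behavior
REALM_SWITCH = 2      # 2.2: switch ephemeral keys
HOP_SIZE = 65         # assumed per-hop packet size

def xor(data: bytes, key: bytes) -> bytes:
    """Stand-in for Sphinx decryption: XOR with the repeated key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def process(onion: bytes, key: bytes) -> bytes:
    """Unwrap hop packets until a realm-0 packet is reached."""
    while True:
        packet = xor(onion[:HOP_SIZE], key)    # 1. unwrap with current key
        realm = packet[0]                      # 2. dispatch on realm byte
        if realm == REALM_NORMAL:
            return packet                      # 2.1 normal hop: done here
        if realm == REALM_SWITCH:
            key = packet[1:33]                 # 2.2.1 adopt new ephemeral key
            onion = onion[HOP_SIZE:]           # 2.2.2 shift by one hop packet
            continue                           # 2.2.3 goto 1
        raise ValueError(f"unknown realm byte {realm}")
```

The point of the sketch is only that the key switch is a local loop at one node, not an extra network hop.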
> >
> > Then at C, we have the onion from D->E, we also know the next ephemeral
> key to use (we can derive it since we would pass it to D anyway).
> > It rightshifts the onion by one, storing the next ephemeral key to the
> new hop it just allocated.
> > Then it encrypts the onion using a new ephemeral key that it will use to
> generate the D<-A<-C part of the onion.
> >
> > Regards,
> > ZmnSCPxj
> >
> >
> > Sent with ProtonMail Secure Email.
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Tuesday, November 13, 2018 11:45 AM, Christian Decker <
>> wrote:
> >
> > > Great proposal ZmnSCPxj, but I think I need to raise a small issue with
> > > it. While writing up the proposal for rendez-vous I came across a
> > > problem with the mechanism I described during the spec meeting: the
> > > padding at the rendez-vous point would usually be zero-padded and then
> > > encrypted in one go with the shared secret that was generated from the
> > > previous ephemeral key (i.e., the one before the switch). That
> ephemeral
> > > key is not known to the recipient (barring additional rounds of
> > > communication) so the 

Re: [Lightning-dev] Packet switching via intermediary rendezvous node

2018-11-12 Thread Christian Decker
Hi ZmnSCPxj,

like I mentioned in the other mailing thread we have a minor
complication in order to get rendez-vous working.

If I'm not mistaken it'll not be possible for us to have spontaneous
ephemeral key switches while forwarding a payment. Specifically either
the sender or the recipient have to know the switchover points in their
respective parts of the onion. Otherwise it'll not be possible to cover
the padding in the HMAC, for the same reason that we couldn't meet up
with the same ephemeral key at the rendez-vous point.

Sorry about not noticing this before.


ZmnSCPxj via Lightning-dev 
> Good morning list,
> Although packet switching was part of the agenda, we decided that we would 
> defer this to some later version of the BOLT spec.
> Interestingly, some sort of packet switching becomes possible due to the 
> below features we did not defer:
> 1.  Multi-hop onion packets (i.e. s/realm/packettype/)
> 2.  Identify "next" by node-id instead of short-channel-id (actually, we 
> solved this by "short-channel-id is not binding" and next hop is identified 
> by short-channel-id still).
> 3.  Onion ephemeral key switching (required by rendez-vous routing).
> ---
> Suppose we define the below packettype (notice below type number is even, but 
> I am uncertain how "is OK to be odd" is appropriate for this):
> packettype 0: same as current realm 0
> packettype 2: ephemeral key switch (use ephemeral key in succeeding 65-byte 
> packet)
> packettype 4: identify next node by node-id on succeeding 65-byte packet
> Suppose I were to receive a packettype 0 in an onion.  It identifies a 
> short-channel-id.  Now suppose this particular channel has no capacity.  As I 
> showed in thread " Link-level payment splitting via intermediary rendezvous 
> nodes" 
>  it is possible, that I can route it via some other route *composed of 
> multiple channels*, by using packettype 4 at the end of this route to connect 
> it to the rest of the onion I receive.
> However, in this case, in effect, the short-channel-id simply identifies the 
> "next" node along this route.
> Suppose we also identify a new packettype (packettype 4) where the "next" 
> node is identified by its node-id.
> Let us make the below scenarios.
> 1.  Suppose the node-id so identified, I have a channel with this node.  And 
> suppose this channel has capacity.  I can send the payment directly to that 
> node.  This is no different from today.
> 2.  Suppose the node-id so identified, I have a channel with this node.  But 
> this channel has not capacity.  However, I can look for alternate route.  And 
> by using rendez-vous feature "switch ephemeral key" I can generate a route 
> that is multiple hops, in order to reach the identified node-id, and connect 
> the rest of the onion to this.  This case is same as if the node is 
> identified by short-channel-id.
> 3.  Suppose the node-id so identified, I have not a channel with this node.  
> However, I can again look for alternate route.  Again, by using "switch 
> ephemeral key" feature, I can generate a route that is multiple hops, in 
> order to reach the identified node-id, and again connect the rest of the 
> onion to this.
> Now, the case 3 above, can build up packet switching.  I might have a 
> routemap that contains the destination node-id and have an accurate route 
> through the network, and identify the path directly to the next node.  If 
> not, I could guess/use statistics that one of my peers is likely to know how 
> to route to that node, and also forward a packettype 4 to the same node-id to 
> my peer.
> This particular packet switching, also allows some uncertainty about the 
> destination.  For instance, even if I wish to pay CJP, actually I make an 
> onion with packettype 4 Rene, packettype 4 CJP, packettype 0 HMAC=0.  Then I 
> send the above onion (appropriately layered-encrypted) to my direct peer 
> cdecker, who attempts to make an effort to route to Rene.  When Rene receives 
> it, it sees packettype 4 CJP, and then makes an effort to route to CJP, who 
> sees packettype 0 HMAC=0 meaning CJP is the recipient.
> Further, this is yet another use of the switch-ephemeral-key packettype.
> Thus:
> 1.  It allows packet switching
> 2.  It increases the anonymity set of rendez-vous routing.  A node that sees 
> packettype 2 (switch ephemeral key) does not know whether it is forwarding a 
> packet-switched payment, a link-level payment rerouting, or whether it is the 
> rendez-vous for a deniable payment.
> 3.  Mapless Lightning nodes 
> (
>  could ask a peer to be their pathfinding provider, with some amount of 
> uncertainty (it is possible that somebody else sent a packettype 4 to me, and 
> I selected you as peer who might know the destination; also, the destination 
> specified 

Re: [Lightning-dev] Wireshark plug-in for Lightning Network(BOLT) protocol

2018-11-07 Thread Christian Decker
Would it be possible to query a command line program or a JSON-RPC call to
get the secret? In that case we could add it to the `listpeers` output.

On Wed, Nov 7, 2018 at 6:43 AM tock203  wrote:

> We implemented the latter scheme. lightning-dissector already supports key
> rotation.
> FYI, here's the key log file format lightning-dissector currently
> implements.
> Whenever a key rotation happens (nonce == 0), the lightning node software
> writes the 16-byte MAC & key of the first BOLT packet under the new key.
> When you read a .pcap that starts with a message whose nonce is not 0, the
> messages cannot be decrypted until the next key rotation.
> The current design is as described above. Because it is a provisional
> specification, any opinion is welcome.
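A minimal sketch of such a key log, assuming a simple hex line format (the actual lightning-dissector format may differ):

```python
# Hypothetical key-log writer/reader for the scheme tock203 describes:
# on each key rotation (nonce == 0), record the 16-byte MAC of the first
# packet under the new key, plus the key itself, so a dissector can
# resynchronize mid-capture.  The hex line format is an assumption.
def log_rotation(logfile, first_mac: bytes, key: bytes):
    assert len(first_mac) == 16 and len(key) == 32
    logfile.write(first_mac.hex() + " " + key.hex() + "\n")

def load_keys(logfile):
    """Map first-packet MAC -> key; the dissector matches a packet's MAC
    against this table to find where each key epoch begins."""
    table = {}
    for line in logfile:
        mac_hex, key_hex = line.split()
        table[bytes.fromhex(mac_hex)] = bytes.fromhex(key_hex)
    return table
```

Keying the table by the first packet's MAC is what lets a capture that starts mid-epoch stay undecryptable only until the next rotation.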
On Tue, Nov 6, 2018 at 16:08 Olaoluwa Osuntokun :
>> Hi tomokio,
>> This is so dope! We've long discussed creating canned protocol
>> transcripts for
>> other implementations to assert their responses against, and I think this
>> is a
>> great first step towards that.
>> > Our proposal:
>> > Every implementation has a compile option which enables output of a key
>> > information file.
>> So is this request to add an option which will write out the _plaintext_
>> messages to disk, or an option that writes out the final derived
>> read/write
>> secrets to disk? For the latter path, the tools that read these transcripts
>> would need to be aware of key rotations, so they'd be able to continue to
>> decrypt the transcript post rotation.
>> -- Laolu
>> On Sat, Oct 27, 2018 at 2:37 AM  wrote:
>>> Hello lightning network developers.
>>> Nayuta team is developing Wireshark plug-in for Lightning Network(BOLT)
>>> protocol.
>>> It’s alpha version, but it can decode some BOLT message.
>>> Currently, this software works for Nayuta’s implementation(ptarmigan)
>>> and Éclair.
>>> When ptarmigan is compiled with some option, it writes out a key
>>> information file. This Wireshark plug-in decodes packets using that file.
>>> When you use Éclair, this software parses the log file.
>>> Through our development experience, interoperability testing is a
>>> time-consuming task.
>>> If people can see the communication log of BOLT messages in the same format
>>> (.pcap), it will be useful for interoperability testing.
>>> Our proposal:
>>> Every implementation has a compile option which enables output of a key
>>> information file.
>>> We are glad if this project is useful for lightning network eco-system.
Lightning-dev mailing list

Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-06 Thread Christian Decker
Olaoluwa Osuntokun  writes:

>> However personally I do not really see the need to create multiple
> channels
>> to a single peer, or increase the capacity with a specific peer (via
> splice
>> or dual-funding).  As Christian says in the other mail, this
> consideration,
>> is that it becomes less a network and more of some channels to specific
> big
>> businesses you transact with regularly.
> I made no reference to any "big businesses", only the utility that arises
> when one has multiple channels to a given peer. Consider an easier example:
> given the max channel size, I can only ever send 0.16 or so BTC to that
> peer. If I have two channels, then I can send 0.32 and so on. Consider the
> case post AMP where we maintain the current limit of the number of in flight
> HTLCs. If AMP causes most HTLCs to generally be in flight within the
> network, then all of a sudden, this "queue" size (outstanding HTLCS in a
> commitment) becomes more scarce (assume a global MTU of say 100k sat for
> simplicity). This may then promote nodes to open additional channels to
> other nodes (1+) in order to accommodate the increased HTLC bandwidth load
> due to the sharded multi-path payments.

I think I see the issue now, thanks for explaining. However I get the
feeling that this is a rather roundabout way of increasing the
limitations that you negotiated with your peer (max HTLC in flight, max
channel capacity, ...), so wouldn't those same limits also apply across
all channels that you have with that peer? Isn't the real solution here
to lift those limitations?

> Independent on bolstering the bandwidth capabilities of your links to other
> nodes, you would still want to maintain a diverse set of channels for fault
> tolerance, path diversity, and redundancy reasons.

Absolutely agree, and it was probably my mistake for assuming that you
would go for the one peer only approach as a direct result of increasing
bandwidth to one peer.

Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Christian Decker
Gert-Jaap Glasbergen  writes:
> Op 1 nov. 2018 om 03:38 heeft Rusty Russell 
>>> het volgende geschreven:
>> I believe this would render you inoperable in practice; fees are
>> frequently sub-satoshi, so you would fail everything.  The entire
>> network would have to drop millisatoshis, and the bitcoin maximalist in
>> me thinks that's unwise :)
> I can see how not wanting to use millisatoshis makes you less compatible
> with other people that do prefer using that unit of account. But in this
> case I think it's important to allow the freedom to choose.
> I essentially feel we should be allowed to respect the confines of the layer
> we're building upon. There's already a lot of benefits to achieve from second
> layer scaling whilst still respecting the limits of the base layer. Staying
> within those limits means optimally benefiting from the security it offers.
> Essentially by allowing to keep satoshi as the smallest fraction, you ensure
> that everything you do off-chain is also valid and enforced by the chain when
> you need it to. It comes at trade offs though: it would mean that if someone
> routes your payment, you can only pay fees in whole satoshis - essentially
> meaning if someone wants to charge a (small) fee, you will be overpaying to
> stay within your chosen security parameters. Which is a consequence of your
> choice.

It should be pointed out here that the dust rules actually prevent us
from creating an output that is smaller than the dust limit (546
satoshis on Bitcoin). By the same logic we would be forced to treat the
dust limit as our atomic unit, and have transferred values and fees
always be multiples of that dust limit.

546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
the current minimum fee and value transferred. I think we will have to
deal with values that are not representable / enforceable on-chain
anyway, so we might as well make things more flexible by keeping millisatoshis.

> I would be happy to make a further analysis on what consequences allowing this
> choice would have for the specification, and come up with a proposal on how to
> add support for this. But I guess this discussion is meant to "test the 
> waters"
> to see how much potential such a proposal would have to eventually be 
> included.
> I guess what I'm searching for is a way to achieve the freedom of choice,
> without negatively impacting other clients or users that decide to accept some
> level of trust. In my view, this would be possible - but I think working it 
> out
> in a concrete proposal/RFC to the spec would be a logical next step.

With a lot of choice comes great power, with great power comes great
responsibility... uh I mean complexity :-) I'm all for giving users the
freedom to chose what they feel comfortable with, but this freedom comes
at a high cost and the protocol is very complex as it is. So we need to
find the right configuration options, and I think not too many users
will care about their unit of transfer, especially when it's handled
automatically for them.


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-16 Thread Christian Decker
ZmnSCPxj via Lightning-dev 

>> One thing that I think we should lift from the multiple funding output
>> approach is the "pre seating of inputs". This is cool as it would allow
>> clients to generate addresses, that others could deposit to, and then have
>> be spliced directly into the channel. Public derivation can be used, along
>> with a script template to do it non-interactively, with the clients picking
>> up these deposits, and initiating a splice in as needed.
> I am uncertain what this means in particular, but let me try to
> restate what you are talking about in other terms:
> 1.  Each channel has two public-key-derivation paths (BIP32) to create 
> onchain addresses.  One for each side of the channel.
> 2.  When somebody sends to one of the onchain addresses in the path, their 
> client detects this.
> 3.  The client initiates a splice-in automatically from this UTXO paying to 
> that address into the channel.
> It seems to me naively that the above can be done by the client
> software without any modifications to the Lightning Network BOLT
> protocol, as long as the BOLT protocol is capable of supporting *some*
> splice-in operation, i.e. it seems to be something that a client
> software can implement as a feature without requiring a BOLT change.
> Or is my above restatement different from what you are talking about?
> How about this restatement?
> 1.  Each channel has two public-key-derivation paths (BIP32) to create 
> onchain addresses.  One for each side of the channel.
> 2.  The base of the above is actually a combined private-public keypair of 
> both sides (e.g. created via MuSig or some other protocol).  Thus the 
> addresses require cooperation of both parties to spend.
> 3.  When somebody sends to one of the onchain addresses in the path, their 
> client detects this.
> 4.  The client updates the current transaction state, such that the new 
> commit transaction has two inputs ( the original channel transaction and the 
> new UTXO).
> The above seems unsafe without trust in the other peer, as, the other
> peer can simply refuse to create the new commit transaction.  Since
> the address requires both parties to spend, the money cannot be spent
> and there is no backoff transaction that can be used.  But maybe you
> can describe some mechanism to ensure this, if this is what is meant
> instead?

This could easily be solved by making the destination address a Taproot
address, which by default is just a 2-of-2, but in the uncooperative
case it can reveal the script it commits to, which is just a timelocked
refund that requires a single-sig. The only problem with this is that
the refund would be non-interactive, and so the entirety of the funds,
which may come from a third party, needs to be claimed by one endpoint,
i.e., there is no splitting the funds in case of an uncollaborative
refund. Not sure how important that is though, since I don't think
third-party funds will come from unrelated parties, e.g., most of these
funds will come from an on-chain wallet that is under the control of
either party, so the refund should go back to that party anyway.


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-15 Thread Christian Decker
Olaoluwa Osuntokun  writes:
> Splicing isn't a substitute for allowing multiple channels. Multiple
> channels allow nodes to:
>   * create distinct channels with distinct acceptance policies.
>   * create a mix of public and non-advertised channels with a node.
>   * be able to send more than the (current) max HTLC amount
> using various flavors of AMP.
>   * get past the (current) max channel size value
>   * allow a link to carry more HTLCs (due to the current super low max HTLC
> values) given the additional HTLC pressure that
> AMP may produce (alternative is a commitment fan out)

While these are all good points, I think they are equally well served
by creating channels to other peers. This has the added benefit of
reducing the node's reliance on a single peer. In fact it seems we are
currently encouraging users to have a small number of fat channels that
are manually maintained (dual-funding, splicing, multiple channels per
peer), rather than making the default to create a diverse set of
channels that allow indirectly routed payments.

Instead of obsessing about that one peer and hoping that that peer is
online when we need it, we should make routed payments a first-class
citizen. If we can route correctly and with confidence we can stop
worrying about that one peer and our connectivity to it. On the other
hand, if routing doesn't work, and people have to worry about that one
channel that connects them directly to the destination, then we're not
much of a network, but rather a set of disjoint channels.

Ultimately users should stop caring about individual channels or peer
relationships, and multipath routing gets us a long way there. I'd
really like to have a wallet that'll just manage channels in the
background and not expose those details to the users which just want to
send and receive payments, and we can start that now by de-emphasizing
the importance of the peer selection.


Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-10-13 Thread Christian Decker
Great find ZmnSCPxj, we can also have an adaptive scheme here, in which we
start with a single update transaction, and then at ~90% of the available
range we add a second. This is starting to look a bit like the DMC
invalidation tree :-)
But realistically speaking I don't think 1B updates is going to be
exhausted any time soon, but the adaptive strategy gets the best of
both worlds.


On Fri, Oct 12, 2018 at 5:21 AM ZmnSCPxj  wrote:

> Another way would be to always have two update transactions, effectively
> creating a larger overall counter:
> [anchor] -> [update highbits] -> [update lobits] -> [settlement]
> We normally update [update lobits] until it saturates.  If lobits
> saturates we increment [update highbits] and reset [update lobits] to the
> lowest valid value.
> This will provide a single counter with 10^18 possible updates, which
> should be enough for a while even without reanchoring.
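The highbits/lobits carry logic can be sketched as follows (`RANGE` is illustrative, mirroring the roughly 1 billion usable locktime values per update transaction):

```python
# Sketch of ZmnSCPxj's two-transaction counter: [update highbits] and
# [update lobits] together form one large state number.
RANGE = 10**9  # assumed usable state numbers per update transaction

class TwoLevelCounter:
    def __init__(self):
        self.hi = 0  # state number of [update highbits]
        self.lo = 0  # state number of [update lobits]

    def bump(self):
        """Advance to the next channel state."""
        if self.lo + 1 < RANGE:
            self.lo += 1      # normal case: bump lobits only
        else:
            self.hi += 1      # lobits saturated: carry into highbits...
            self.lo = 0       # ...and reset lobits to the lowest valid value

    @property
    def state(self):
        # combined counter: ~10**18 distinct states
        return self.hi * RANGE + self.lo
```

The carry is what makes the combined range the product of the two individual ranges, giving the ~10^18 updates mentioned above.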
> Regards,
> ZmnSCPxj
> Sent with ProtonMail Secure Email.
> ‐‐‐ Original Message ‐‐‐
> On Friday, October 12, 2018 1:37 AM, Christian Decker <
>> wrote:
> Thanks Anthony for pointing this out, I was not aware we could
> roll keypairs to reset the state numbers.
> I basically thought that 1billion updates is more than I would
> ever do, since with splice-in / splice-out operations we'd be
> re-anchoring on-chain on a regular basis anyway.
> On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns  wrote:
>> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
>> > eltoo is a drop-in replacement for the penalty based invalidation
>> > mechanism that is used today in the Lightning specification. [...]
>> Maybe this is obvious, but in case it's not, re: the locktime-based
>> sequencing in eltoo:
>>  "any number above 0.500 billion is interpreted as a UNIX timestamp, and
>>   with a current timestamp of ~1.5 billion, that leaves about 1 billion
>>   numbers that are interpreted as being in the past"
>> I think if you had more than 1B updates to your channel (50 updates
>> per second for 4 months?) I think you could reset the locktime by rolling
>> over to use new update keys. When unilaterally closing you'd need to
>> use an extra transaction on-chain to do that roll-over, but you'd save
>> a transaction if you did a cooperative close.
>> ie, rather than:
>>   [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
>> or
>>   [funding] -> [coop close / re-fund] -> [coop close]
>> you could have:
>>   [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
>> or
>>   [funding] -> [coop close]
>> You could repeat this when you get another 1B updates, making unilateral
>> closes more painful, but keeping cooperative closes cheap.
>> Cheers,
>> aj

Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-10-11 Thread Christian Decker
Thanks Anthony for pointing this out, I was not aware we could
roll keypairs to reset the state numbers.

I basically thought that 1billion updates is more than I would
ever do, since with splice-in / splice-out operations we'd be
re-anchoring on-chain on a regular basis anyway.

On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns  wrote:

> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
> > eltoo is a drop-in replacement for the penalty based invalidation
> > mechanism that is used today in the Lightning specification. [...]
> Maybe this is obvious, but in case it's not, re: the locktime-based
> sequencing in eltoo:
>  "any number above 0.500 billion is interpreted as a UNIX timestamp, and
>   with a current timestamp of ~1.5 billion, that leaves about 1 billion
>   numbers that are interpreted as being in the past"
> I think if you had more than 1B updates to your channel (50 updates
> per second for 4 months?) I think you could reset the locktime by rolling
> over to use new update keys. When unilaterally closing you'd need to
> use an extra transaction on-chain to do that roll-over, but you'd save
> a transaction if you did a cooperative close.
> ie, rather than:
>   [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
> or
>   [funding] -> [coop close / re-fund] -> [coop close]
> you could have:
>   [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
> or
>   [funding] -> [coop close]
> You could repeat this when you get another 1B updates, making unilateral
> closes more painful, but keeping cooperative closes cheap.
> Cheers,
> aj
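aj's back-of-the-envelope can be sanity-checked; the "4 months" above carries a question mark, and at a steady 50 updates per second the ~1 billion states last closer to eight months:

```python
# Sanity check on the locktime-based sequence space in eltoo: values at or
# above 500,000,000 are interpreted as UNIX timestamps, and with current
# time around 1.5 billion, roughly 1 billion values remain usable as
# state numbers.
TIMESTAMP_THRESHOLD = 500_000_000
NOW = 1_500_000_000                         # approximate UNIX time, late 2018
usable_states = NOW - TIMESTAMP_THRESHOLD   # ~1 billion state numbers

UPDATES_PER_SECOND = 50
seconds = usable_states / UPDATES_PER_SECOND
days = seconds / 86_400
# ~231 days, i.e. roughly eight months of sustained 50 updates/s
```

Either way the conclusion stands: exhausting the range takes months of continuous peak-rate updates, so key roll-over is a rare event.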

Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-11 Thread Christian Decker
On Thu, Oct 11, 2018 at 3:40 AM Rusty Russell  wrote:

> > * Once we have enough confirmations we merge the channels (either
> > automatically or with the next channel update). A new commitment tx is
> > being created which now spends each output of each of the two funding tx
> > and assigns the channel balance to the channel partners accordingly to
> the
> > two independent channels. The old commitment txs are being invalidated.
> > * The disadvantage is that while splicing is not completed and if the
> > funder of the splicing tx is trying to publish an old commitment tx the
> > node will only be punished by sending all the funds of the first funding
> tx
> > to the partner as the special commitment tx of the 2nd output has no
> newer
> > state yet.
> Yes, this is the alternative method; produce a parallel funding tx
> (which only needs to support a single revocation, or could even be done
> by a long timeout) and then join them when it reaches the agreed depth.
> It has some elegance; particularly because one side doesn't have to do
> any validation or store anything until it's about to splice in.  You get
> asked for a key and signature, you produce a new one, and sign whatever
> tx they want.  They hand you back the tx and the key you used once it's
> buried far enough, and you check the tx is indeed buried and the output
> is the script you're expecting, then you flip the commitment tx.
> But I chose not to do this because every transaction commitment
> forever will require 2 signatures, and doesn't allow us to forget old
> revocation information.
> And it has some strange side-effects: onchain this looks like two
> channels; do we gossip about both?  We have to figure the limit on
> splice-in to make sure the commitment tx stays under 400kSipa.

This is a lot closer to my original proposal for splicing, and I
still like it a lot more since the transition from old to new
channel is basically atomic (not having to update state on both
pre-splice and post-splice version). The new funds will remain
unavailable for the same time, and since we allow only one
concurrent splice in your proposal we don't even lose any
additional time regarding the splice-outs.

So we pull the splice_add_input and splice_add_output messages up front to
signal the intent of adding funds to a splice. Splice_all_added
is then used to start moving the funds to a pre-allocated 2-of-2
output where the funds can mature. Once the funds are
matured (e.g., 6 confirmations) we can start the transition: both
parties claim the funding output, and the pre-allocated funds, to
create a new funding tx which is immediately broadcast, and we
flip over to the new channel state. No need to keep parallel
state and then disambiguating which one it was.

The downsides of this is that we now have 2 on-chain
transactions (pre-allocation and re-open), and splice-outs are no
longer immediate if we have a splice-in in the changeset as well.
The latter can be remediated with one more reanchor that just
considers splice-ins that were proposed.
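The flow described above can be summarized as a toy state machine. The state names and the `funds_matured` / `new_funding_broadcast` events are assumptions based on the prose; only `splice_add_input`, `splice_add_output`, and `splice_all_added` are named in the discussion:

```python
# Toy state machine for the splice flow sketched above: announce inputs and
# outputs, mature the pre-allocated 2-of-2 funds, then atomically flip to
# the new funding tx.  Transitions are illustrative, not a spec.
TRANSITIONS = {
    ("normal", "splice_add_input"): "collecting",
    ("collecting", "splice_add_input"): "collecting",
    ("collecting", "splice_add_output"): "collecting",
    ("collecting", "splice_all_added"): "maturing",      # funds move to 2-of-2
    ("maturing", "funds_matured"): "reanchoring",        # e.g. 6 confirmations
    ("reanchoring", "new_funding_broadcast"): "normal",  # flip to new channel
}

def step(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
```

Only one splice is pending at a time, which is why a single linear state machine suffices and no parallel channel state must be disambiguated.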

> > I believe splicing out is even safer:
> > * One just creates a spent of the funding tx which has two outputs. One
> > output goes to the recipient of the splice out operation and the second
> > output acts as a new funding transaction for the newly spliced channel.
> > Once signatures for the new commitment transaction are exchanged
> (basically
> > following the protocol to open a channel) the splicing operation can be
> > broadcasted.
> >
> > * The old channel MUST NOT be used anymore but the new channel can be
> > operational right away without blockchain confirmation. In case someone
> > tries to publish an old state of the old channel it will be a double
> spent
> > of the splicing operation and in the worst case will be punished and the
> > splicing was not successful.
> >
> >  if one publishes an old state of the new
> > channel everything will just work as normal even if the funding tx is not
> > yet mined. It could only be replaced with an old state of the previous
> > channel (which as we saw is not a larger risk than the usual operation
> of a
> > lightning node)
> Right, you're relying on CPFP pushing through the splice-out tx if it
> gets stuck.  This requires that we check carefully for standardness and
> other constraints which might prevent this; for example, we can't allow
> more than 20 (?) of these in a row without being sufficiently buried
> since I think that's where CPFP calculations top out.

We shouldn't allow more than one pending splice operation anyway, as
stated in your proposal initially. We are already critically reliant on our
transaction being confirmed on-chain, so I don't see this as much of an
added issue.

> > As mentioned maybe you had this workflow already in your mind but I don't
> > see why we need to send around all the messages twice with my workflow.
> We
> > only need to maintain double state but only until it is fair / safe to do
> > so. I would also 

Re: [Lightning-dev] W3C Web Payments Working Group / Payment Request API

2018-08-30 Thread Christian Decker
Just a quick followup on this: yes, I am indeed a member of the W3C Web
Payments Working Group, though not a very active one. I am following the
discussion as best I can, and try to figure out what changes and special
considerations, if any, are needed for both Bitcoin and Lightning to
work correctly, when the spec is finalized and deployed.

As it stands today the spec should be Bitcoin and Lightning compatible,
with the following considerations:

 - A special Payment Method ID [1] must be assigned to Bitcoin and
   Lightning since we cannot rely on a centralized URL to act as a
   payment method for these decentralized networks. Currently only the
   `basic-card` identifier has been assigned, but we can apply for one;
 - As far as I see a local handler can be specified as Payment Handler
   [2] allowing us to have a Bitcoin or Lightning daemon running locally
   that is invoked for payment requests;
 - The Payment Request API [3] even mentions XBT as a supported
   currency, in addition to ISO4217 codes, so if a vendor publishes a
   Bitcoin amount and a matching Payment Method, we should be able to
   perform the payment;
 - Since we require special handling for Bitcoin and Lightning
   w.r.t. the Payment Method, the Payment Method Manifest [4] doesn't
   apply to us.

So all in all, we should be able to get Bitcoin and Lightning working
with the spec without any major roadblocks. Notice that this is based
solely on my current understanding of the spec, and I'd love for others
to chime in and point out anything that I might have missed.



René Pickhardt via Lightning-dev  

> Hey lightning devs,
> I was wondering if any of the companies here are members of W3C  and if
> anyone here could be member of the W3C Web Payments Working Group (c.f.:
> )? According to this mail
> Christian Decker is a member. Which I think would be awesome!
> They have just released their candidate recommendation for a payment API
> at: According to their site the
> proposed recommendation will be published not earlier than October 31st
> 2018. They are currently looking for feedback in their github repository
> at:
> I can see that they have bitcoin somewhat on their mind. But I guess it
> would be even cooler if we could make sure that lightning payments will
> also be compatible with their recommendation.
> Christian - if you really are a member -  could you give us an update on
> that work? How relevant is it for us?
> best Rene
> -- 
> Skype: rene.pickhardt
> mobile: +49 (0)176 5762 3618
Lightning-dev mailing list

Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-08-27 Thread Christian Decker
Corné Plooy via Lightning-dev 
>> Aside from that, spontaneous payments is amongst the most requested
>> features I get from users and developers.
> A while ago I realized that spontaneous payments (without proof of
> payment, mostly for donations only) can be realized quite easily if the
> payer generates the preimage and hash, and includes the preimage in the
> sphinx message to the payee node. If the payee node recognizes the
> sphinx message format, it can use the preimage to claim the payment.

You mean like we describe in the Brainstorming wiki [1]? We definitely
need to make the Wiki more prominent :-)
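Corné's scheme above can be sketched in a few lines; the field names and
dict-based payload are illustrative, not the actual sphinx message format:

```python
import hashlib
import os

# Sketch of payer-generated spontaneous payments: the payer picks the
# preimage, derives the payment hash, and ships the preimage to the payee
# inside the onion payload.
def payer_create():
    preimage = os.urandom(32)
    payment_hash = hashlib.sha256(preimage).digest()
    # The preimage travels encrypted to the payee as a custom onion field.
    onion_payload = {"preimage": preimage}
    return payment_hash, onion_payload

def payee_accept(payment_hash, onion_payload):
    # The payee recognizes the field and checks it resolves the HTLC.
    preimage = onion_payload["preimage"]
    return hashlib.sha256(preimage).digest() == payment_hash
```

Note that, as Corné says, this sacrifices proof of payment: the payer can
no longer use the preimage as a receipt, since they generated it themselves.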


Re: [Lightning-dev] Lack of capacity field in channel_announcement makes life difficult. Why is it not there?

2018-07-29 Thread Christian Decker
They are orthogonal, I agree, but we should judge their merits
independently, and not batch the discussions out of convenience.
In the case of the htlc_maximum_msat I think it will not be
controversial, but it should get its own proposal and discussion.

On Sun, Jul 29, 2018 at 4:17 PM Robert Olsson  wrote:
> Christian,
> Ok, it definitely makes sense to include the exact fixed capacity in 
> channel_announcement for the reason you mentioned, and more.
> However, can we do both while we are at it? The ideas are not mutually 
> exclusive, and for successful routing, i think the channel_update-approach is 
> much more of a boost.
> Regards,
> Robert
> On Sun, Jul 29, 2018 at 4:59 PM, Christian Decker 
>  wrote:
>> Robert Olsson  writes:
>> > I think however it would be much better and flexible to append a max to
>> > channel_update. We already have htlc_minimum_msat there and could add
>> > htlc_maximum_msat to show capacity (minus fees)
>> > Like this:
>> >
>> >
>> >1. type: 258 (channel_update)
>> >2. data:
>> >   - [64:signature]
>> >   - [32:chain_hash]
>> >   - [8:short_channel_id]
>> >   - [4:timestamp]
>> >   - [2:flags]
>> >   - [2:cltv_expiry_delta]
>> >   - [8:htlc_minimum_msat]
>> >   - [4:fee_base_msat]
>> >   - [4:fee_proportional_millionths]
>> >
>> >   - [8:htlc_maximum_msat]
>> This isn't about maximum HTLC value, rather Артём is talking about
>> adding the total channel capacity to the channel_announcement. That is a
>> perfectly reasonable idea, as it allows us to save an on-chain lookup
>> (incidentally that is the main reason we started tracking an internal
>> UTXO set so we can stop asking bitcoind for full blocks just to check a
>> channel's capacity).
>> The channel's capacity is also fixed for the existence of that channel
>> (splice-in and splice-out will result in new short channel IDs), so the
>> announcement is exactly the right place to put this.
>> Cheers,
>> Christian
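Robert's proposed channel_update layout quoted above can be sketched as a
fixed-size encoding; the field sizes follow his list, while big-endian
packing and the example values are my assumptions:

```python
import struct

# Body of Robert's proposed channel_update (after the 64-byte signature),
# with htlc_maximum_msat appended at the end.
CHANNEL_UPDATE_FMT = '>QIHHQIIQ'  # scid, timestamp, flags, cltv_delta,
                                  # htlc_min_msat, fee_base, fee_ppm,
                                  # htlc_max_msat

def encode_channel_update_body(chain_hash, short_channel_id, timestamp,
                               flags, cltv_expiry_delta, htlc_minimum_msat,
                               fee_base_msat, fee_proportional_millionths,
                               htlc_maximum_msat):
    assert len(chain_hash) == 32
    return chain_hash + struct.pack(
        CHANNEL_UPDATE_FMT, short_channel_id, timestamp, flags,
        cltv_expiry_delta, htlc_minimum_msat, fee_base_msat,
        fee_proportional_millionths, htlc_maximum_msat)

body = encode_channel_update_body(b'\x00' * 32, 0x1234, 1532800000, 0, 144,
                                  1000, 1000, 10, 500_000_000)
```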

Re: [Lightning-dev] Including a Protocol for splicing to BOLT

2018-07-03 Thread Christian Decker
ZmnSCPxj via Lightning-dev 
> For myself, I think, for old nodes, it should just appear as a
> "normal" close followed by a "normal" open.

That's exactly what they should look like, since the channel is being
closed with the existing protocol and opened (possibly with a slightly
different value).

> So, instead, maybe a new `channel_announce_reopen` which informs
> everyone that an old scid will eventually become a new scid, and that
> the nodes involved will still consider routes via the old scid to be
> valid regardless.

I thought of it more as a new alias for the old channel, so that the
update in the network view is just switching names after the announce
depth is reached.

> Then an ordinary `channel_announce` once the announce depth of the new
> scid is reached.
> From point of view of old nodes, the channel is closed for some
> blocks, but a new channel between the two nodes is then announced.
> From point of view of new nodes, the channel is referred to using the
> previous scid, until an ordinary `channel_announce` is received, and
> then the channel is referred to using the new scid.

The message announcing the reopen or the alias should probably precede
the actual close, otherwise nodes may prune the channel from their view
upon seeing the close. The message then simply has the effect of saying
"ignore the close, let it linger for 6 more blocks before really
removing from your network view".

> For myself, I think splice is less priority than AMP. But I prefer an
> AMP which retains proper ZKCP (i.e. receipt of preimage at payer
> implies receipt of payment at payee, to facilitate trustless
> on-to-offchain and off-to-onchain bridges).

Agreed, multipath routing is a priority, but I think splicing is just as
much a key piece of a better UX, since it allows users to ignore
differences between on-chain and off-chain funds, showing just a single
balance for all use-cases.

> With AMP, size of channels is less important, and many small channels
> will work almost as well as a few large channels.

Well, capacities are still very much important, and if there is a
smaller min-cut separating source and destination than the total amount
of the payment, then the payment will still fail. We now simply no
longer require a single channel with sufficient capacity to exist.
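The min-cut limit can be illustrated with a toy max-flow computation
(Edmonds-Karp); the graph and capacities below are made up and are not from
the email:

```python
from collections import deque

# Even with multipath payments, a payment fails if the max flow (= min
# cut) between source and destination is below the amount. Capacities in
# sats; the dict is mutated in place to hold residual capacities.
def max_flow(cap, s, t):
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Walk back to find the bottleneck, then update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += bottleneck
        flow += bottleneck

channels = {
    'A': {'B': 40_000, 'C': 30_000},
    'B': {'D': 25_000},
    'C': {'D': 20_000},
}
# The min cut separating A and D is B->D + C->D = 45,000 sats, so a 50k
# payment fails even though A has 70k of outbound capacity.
```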

Re: [Lightning-dev] [bitcoin-dev] BIP sighash_noinput

2018-07-03 Thread Christian Decker
Gregory Maxwell  writes:
> I know it seems kind of silly, but I think it's somewhat important
> that the formal name of this flag is something like
> "SIGHASH_REPLAY_VULNERABLE" or likewise or at least
> "SIGHASH_WEAK_REPLAYABLE". This is because noinput is materially
> insecure for traditional applications where a third party might pay to
> an address a second time, and should only be used in special protocols
> which make that kind of mistake unlikely.   Otherwise, I'm worried
> that wallets might start using this sighash because it simplifies
> handling malleability without realizing that when a third party reuses
> a script pubkey, completely outside of control of the wallet that uses
> the flag, funds will be lost as soon as a troublemaker shows up (but
> not, sadly, in testing).  This sort of risk is magnified because the
> third party address reuser has no way to know that this sighash flag
> has (or will) be used with a particular scriptpubkey.

Absolutely agree that we should be signaling the danger of using noinput
as clearly as possible to developers, and I'm more than happy to adopt
the _unsafe suffix suggested by jb55. I think using non-sighash_all
sighashes is always a huge danger, as you have correctly pointed out, so
maybe we should be marking all of them as being unsafe, or make sure to
communicate that danger on a higher level (docs).

Re: [Lightning-dev] [bitcoin-dev] BIP sighash_noinput

2018-05-15 Thread Christian Decker
Anthony Towns  writes:

> On Thu, May 10, 2018 at 08:34:58AM +0930, Rusty Russell wrote:
>> > The big concern I have with _NOINPUT is that it has a huge failure
>> > case: if you use the same key for multiple inputs and sign one of them
>> > with _NOINPUT, you've spent all of them. The current proposal kind-of
>> > limits the potential damage by still committing to the prevout amount,
>> > but it still seems a big risk for all the people that reuse addresses,
>> > which seems to be just about everyone.
>> If I can convince you to sign with SIGHASH_NONE, it's already a problem
>> today.
> So, I don't find that very compelling: "there's already a way to lose
> your money, so it's fine to add other ways to lose your money". And
> again, I think NOINPUT is worse here, because a SIGHASH_NONE signature
> only lets others take the coin you're trying to spend, messing up when
> using NOINPUT can cause you to lose other coins as well (with caveats).

`SIGHASH_NOINPUT` is a rather powerful tool, but has to be used
responsibly, which is why we always mention that it shouldn't be used
lightly. Then again all sighash flags can be dangerous if not well
understood. Think for example of `SIGHASH_SINGLE` with its pitfall when
the input has no matching output, or the already mentioned `SIGHASH_NONE`.

From a technical and risk point of view I don't think there is much
difference between a new opcode or a new sighash flag, with the
activation being the one exception. I personally believe that a segwit
script bump has cleaner semantics than soft-forking in a new opcode
(which has 90% overlap with the existing checksig and checkmultisig
opcodes).
>> [...]
>> In a world where SIGHASH_NONE didn't exist, this might be an argument :)
> I could see either dropping support for SIGHASH_NONE for segwit
> v1 addresses, or possibly limiting SIGHASH_NONE in a similar way to
> limiting SIGHASH_NOINPUT. Has anyone dug through the blockchain to see
> if SIGHASH_NONE is actually used/useful?

That's a good point, I'll try looking for it once I get back to my full
node :-) And yes, `SIGHASH_NONE` should also come with all the warning
signs about not using it without a very good reason.

>> That was also suggested by Mark Friedenbach, but I think we'll end up
>> with more "magic key" a-la Schnorr/taproot/graftroot and less script in
>> future.
> Taproot and graftroot aren't "less script" at all -- if anything they're
> the opposite in that suddenly every address can have a script path.
> I think NOINPUT has pretty much the same tradeoffs as taproot/graftroot
> scripts: in the normal case for both you just use a SIGHASH_ALL
> signature to spend your funds; in the abnormal case for NOINPUT, you use
> a SIGHASH_NOINPUT (multi)sig for unilateral eltoo closes or watchtower
> penalties, in the abnormal case for taproot/graftroot you use a script.

That's true for today's uses of `SIGHASH_NOINPUT` and others, but there
might be other uses that we don't know about in which noinput isn't just
used for the contingency, handwavy I know. That's probably not the case
for graftroot/taproot, but I'm happy to be corrected on that one.

Still, the fact that these opcodes and sighash flags are mainly used for
contingencies doesn't remove the need for these contingency options to be
enforced.

>> That means we'd actually want a different Segwit version for
>> "NOINPUT-can-be-used", which seems super ugly.
> That's backwards. If you introduce a new opcode, you can use the existing
> segwit version, rather than needing segwit v1. You certainly don't need
> v1 segwit for regular coins and v2 segwit for NOINPUT coins, if that's
> where you were going?
> For segwit v0, that would mean your addresses for a key "X", might be:
>[pubkey]  X
> - not usable with NOINPUT
>[script]  2 X Y 2 CHECKMULTISIG
> - not usable with NOINPUT
>[script]  2 X Y 2 CHECKMULTISIG_1USE_VERIFY
> - usable with NOINPUT (or SIGHASH_ALL)
> CHECKMULTISIG_1USE_VERIFY being soft-forked in by replacing an OP_NOP,
> of course. Any output spendable via a NOINPUT signature would then have
> had to have been deliberately created as being spendable by NOINPUT.

The main reason I went for the sighash flag instead of an opcode is that
it has clean semantics, allows for it to be bundled with a number of
other upgrades, and doesn't use up NOP-codes, which, as I was lectured
during my normalized tx BIP (BIP140), are a rare resource that should be
used sparingly. The `SIGHASH_NOINPUT` proposal is minimal, since it enhances
4 existing opcodes. If we were to do that with new opcodes we'd either
want a multisig and a singlesig variant, potentially with a verify
variant each. That's a lot of opcodes.

The proposal being minimal should also help against everybody trying to
get their favorite feature added, and hopefully streamline the discussion.

> For a new segwit version with taproot that likewise includes an opcode,
> that might be:
>[taproot]  X
> - 

Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-05-03 Thread Christian Decker
Carsten Otto  writes:
> the paper is a bit confusing regarding the setup transaction, as it is
> not described formally. There also seems to be a mixup of "setup
> transaction" and "funding transaction", also named T_{u,0} without
> showing it in the diagrams.

The setup transaction is simply a transaction that spends some funds and
creates a single output, which has the script from Figure 2, but since
that would be a forward reference, I decided to handwave and call it a
multisig. A simple fix would be to change the setup phase bullet point
at the beginning of section 3; would that be sufficient?

> In 3.1 the funding transaction is described as funding "to a multisig
> address". In the description of trigger transactions the change is
> described as "The output from the setup transaction is changed into a
> simple 2-of-2 multisig output" - which it already is?

If instead of calling it a multisig we call it a multiparty output and
reference the script in Figure 2, that'd be addressed as well.

> As far as I understand the situation, the trigger transaction is needed
> because the broadcasted initial/funding/setup transaction includes an
> OP_CSV, which then starts the timer and could lead to premature
> settlement. Removing the OP_CSV (and having it in a transaction that is
> only published later when it is needed), i.e. by changing it to a simple
> multisig output, seems to solve this issue.
> Could you (Christian?) explain how the "setup transaction" is supposed
> to look like without the changes described in section 4.2?

Well, it has arbitrary inputs, and a single output with the script from
Figure 2 in the non-trigger case; in the trigger case it'd be just a
`2 A B 2 OP_CHECKMULTISIGVERIFY`.

Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-05-01 Thread Christian Decker
ZmnSCPxj  writes:
> Good morning Christian,
> This is very interesting indeed!
> I have started skimming through the paper.
> I am uncertain if the below text is correct?
>> Throughout this paper we will use the terms *input script* to refer to 
>> `witnessProgram` and `scriptPubKey`, and *output script* to refer to the 
>> `witness` or `scriptSig`.
> Figure 2 contains to what looks to me like a `witnessProgram`, 
> `scriptPubKey`, or `redeemScript` but refers to it as an "output script":
>>10 OP_CSV
>> Figure 2: The output script used by the on-chain update transactions.
> Regards,
> ZmnSCPxj

Darn last minute changes! Yes, you are right, I seem to have flipped the
two definitions. I'll fix that up and push a new version.

[Lightning-dev] BIP sighash_noinput

2018-04-30 Thread Christian Decker
Hi all,

I'd like to pick up the discussion from a few months ago, and propose a new
sighash flag, `SIGHASH_NOINPUT`, that removes the commitment to the previous
output. This was previously mentioned on the list by Joseph Poon [1], but was
never formally proposed, so I wrote a proposal [2].

We have long known that `SIGHASH_NOINPUT` would be a great fit for Lightning.
It enables simple watchtowers, i.e., outsourcing the need to watch the
blockchain for channel closures and react appropriately if our counterparty
misbehaves. In addition to this we just released the eltoo [3,4] paper, which
describes a simplified update mechanism that can be used in Lightning, and other
off-chain contracts, with any number of participants.

By not committing to the previous output being spent by the transaction, we can
rebind an input to point to any outpoint with a matching output script and
value. The binding therefore is no longer explicit through a reference, but
through script compatibility, and the transaction ID reference in the input is a
hint to validators. The sighash flag is meant to enable some off-chain use-cases
and should not be used unless the tradeoffs are well-known. In particular we
suggest using contract specific key-pairs, in order to avoid having any unwanted
rebinding opportunities.
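The rebinding-by-compatibility rule can be sketched as follows; the
dict-based transaction model is purely illustrative, not the proposal's
actual data structures:

```python
# With SIGHASH_NOINPUT a signature no longer commits to a specific
# outpoint, so an input can be rebound to any outpoint whose script (and,
# per this proposal, amount) matches. The txid in the input is only a
# hint to validators.
def can_rebind(signed_input, candidate_outpoint):
    return (candidate_outpoint["script_pubkey"] == signed_input["script_pubkey"]
            and candidate_outpoint["amount_sat"] == signed_input["amount_sat"])

settlement_input = {"script_pubkey": b"\x00\x20" + b"\x11" * 32,
                    "amount_sat": 100_000}
new_update_output = {"script_pubkey": b"\x00\x20" + b"\x11" * 32,
                     "amount_sat": 100_000}
other_output = {"script_pubkey": b"\x00\x20" + b"\x22" * 32,
                "amount_sat": 100_000}
```

This also shows why contract-specific key-pairs matter: any other output
paying to the same script becomes a valid rebinding target.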

The proposal is very minimalistic and simple. However, there are a few things
where we'd like to hear the input of the wider community with regards to the
implementation details. We had some discussions internally on whether to
use a separate opcode or a sighash flag; some felt that the sighash flag
could lead to some confusion with existing wallets, but given that we have
`SIGHASH_NONE`, and that existing wallets will not sign things with unknown
flags, we decided to go the sighash way. Another thing is that we still commit
to the amount of the outpoint being spent. The rationale behind this is that,
while rebinding to outpoints with the same value maintains the value
relationship between input and output, we will probably not want to bind to
something with a different value and suddenly pay a gigantic fee.

The deployment part of the proposal is left vague on purpose in order not to
collide with any other proposals. It should be possible to introduce it by
bumping the segwit script version and adding the new behavior.

I hope the proposal is well received, and I'm looking forward to discussing
variants and tradeoffs here. I think the applications we proposed so far are
quite interesting, and I'm sure there are many more we can enable with this
flag.



Re: [Lightning-dev] An Idea to Improve Connectivity of the Graph

2018-04-11 Thread Christian Decker
ZmnSCPxj via Lightning-dev 

> Good morning Alejandro,
> I was about to ask Christian this myself.
> There is another technique:
> Use a sequence of `nSequence`d transactions off-chain.  For example,
> to get a 2-bit counter, you would have:
> funding -> kickoff -> bit1 -> bit0
> Only funding is onchain.  kickoff, bit1, and bit0 transactions are all
> kept offchain.  We start a unilateral close by broadcasting kickoff,
> then wait for bit1 to become valid and broadcast then, then wait for
> bit0 to become valid and broadcast then.

Yes, this is exactly the way we would create a shared output that has an
indefinite lifetime, but would still be protected against the
counterparty becoming unresponsive. I usually call the `kickoff`
transaction the `trigger` transaction because it triggers the countdown
on the CSV encumbered scripts.

> There are two versions of the bit1 and bit0 transactions.  Each bit
> position, you have a high `nSequence` to represent the binary 0, and a
> low `nSequence` value to represent the binary 1.
> Then to increment your counter, you replace bit0.  If it has a high
> `nSequence` you replace it with a new bit0 transaction with the low
> `nSequence` (equivalent to flipping the bit).  If it is already the
> low `nSequence` (i.e. logically it is value 1) then we "carry" it by
> replacing the next higher bit, then replacing the current bit with the
> high `nSequence` (equivalent to propagating the carry and flipping the
> bit).  Thus it is equivalent to binary incrementation.
> It is safe to re-use the high `nSequence` on a lower bit if some
> higher bit in the offchain transactions uses the low `nSequence`
> value, since that higher bit dominates over the rest of the chain.
> This is basically just the "invalidation tree" concept brought to its
> logical conclusion.  We could use trinary or quaternary or more, but
> that limits the `nSequence` we can use (we do not want to use too
> large a high `nSequence` value as that increases wait times), so there
> is some balancing involved in the various parameters (number of
> digits, radix of counter).

Well, what you just described is a branching factor of 2, while in the
paper we usually used a branching factor of 48 (1-hour deltas, for 2
days total wait time). Unlike the locktime-based timeouts, the deltas
along a branch in the tree are now cumulative, so you'd probably want to
make sure that they sum up to a reasonable max timeout, i.e., the sum of
timeouts along any branch is <= 2 days.

> To get a 32-bit counter for a maximum of 4,294,967,296 updates
> transactions in sequence, we need 33 transactions in sequence kept
> off-chain.  When one party disappears, we are forced to feed the 33
> transactions one-by-one into the blockchain.  If we use 4 blocks for
> high `nSequence` (bit 0) and 0 blocks for low `nSequence` (bit 1) then
> at worst case lockup time for unilateral close is 128 blocks.

That is mostly due to the selection of 1-bit sequence diffs; the
branching gives us a huge increase in the number of invalidations. The
paper has the example of a branching factor of 46 and a tree depth of 11,
which results in 1.48e11 updates.
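The counter sizing quoted above can be checked with a few lines; the
constants are taken straight from ZmnSCPxj's example:

```python
# A 32-bit nSequence'd counter: 2^32 possible update states, 33 off-chain
# transactions in sequence (kickoff + one per bit), and with 4 blocks for
# the high nSequence value the worst-case unilateral-close delay is
# 32 * 4 = 128 blocks.
BITS = 32
HIGH_NSEQUENCE_BLOCKS = 4

updates = 2 ** BITS
txs_in_sequence = BITS + 1          # kickoff + bit transactions
worst_case_delay = BITS * HIGH_NSEQUENCE_BLOCKS
```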

> Note that all transactions are kept offchain: we never re-point a
> refund transaction as you describe in your "(b)".  Thus we only waste
> blockchain space if we are forced into a unilateral close.  Normal
> operation, we simply keep all transactions offchain and only touch the
> chain on unilateral or bilateral close.
> The big drawback is the large number of transactions in sequence in a
> unilateral close.  In a bilateral close we collapse all transactions
> into a single bilateral refund.  I suppose it is hopeful to consider
> that unilateral closes should be very rare.
> So, Christian, it still seems that techniques that reduce total wait
> times in a unilateral close have the drawback of increasing the number
> of transactions in sequence in a unilateral close.  It still seems
> Poon-Dryja, is superior in that total wait time is easily
> user-selectable and unilateral closes only have two transactions in
> sequence.  For low number of updates, we can consider having a tiny
> "counter" (possibly a quaternary counter) that itself terminates in
> multiple Poon-Dryja channels, which I believe is what the
> Burchert-Decker-Wattenhofer channel factories do.

Yes, I agree that DMCs have a much wider on-chain footprint in the
non-cooperative close scenario. I do prefer DMC-style updates for some
use-cases though, since they do not have the issue with more than 2
parties, they have no toxic material that can result in your funds being
grabbed just because you were out of date, and because it means that we
can totally forget old HTLCs since there is no way for them to ever
become relevant again (in LN, if an old commitment gets confirmed we
need to scramble to recover the preimage so the rightful owner can claim
it).
I guess it's another 

Re: [Lightning-dev] Closing Transaction Cut-through as a Generalization of Splice-in/Splice-out

2018-04-11 Thread Christian Decker
ZmnSCPxj via Lightning-dev  writes:
> Suppose, rather than implement a splice-in/splice-out ("channel
> top-up", etc.) we instead implement a more general "cut-through" for a
> channel close transaction.
> Normally a channel close spends a single input and makes 1 or 2
> outputs.  Instead of such a simple transaction, both sides could
> additionally provide signed normal transactions that spend the
> outputs, then they could cooperatively create a new close transaction
> that cuts through the original close transaction and the additional
> normal transactions.

We could go a bit further and have both sides provide incomplete and
unsigned stubs, that would then be applied to the closing transaction:
inputs in the stubs are added as inputs to the closing transaction
adding to the balance of the providing party, and outputs are also added
to the closing transaction, drawing from the balance of the party adding
the new output. This way we can perform any number of splice-in / -out
operations in a single reseat operation. Not sure if we need to have any
additional negotiation besides this. Admittedly using tx formatted
messages to transport the intent is not really necessary, but it
reconnects to the idea of cut-through, combining multiple transactions
into one.
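A minimal sketch of this stub-merging, under an assumed toy data model
(none of these structures come from the spec; balances and amounts are
made up):

```python
# Each party hands over unsigned input/output stubs; the cooperative
# close combines them into one transaction, adjusting each party's
# channel balance as it goes.
def merge_close(balances, stubs):
    tx = {"inputs": [], "outputs": []}
    for party, stub in stubs.items():
        for inp in stub.get("inputs", []):
            tx["inputs"].append(inp)          # splice-in: adds to balance
            balances[party] += inp["amount_sat"]
        for out in stub.get("outputs", []):
            tx["outputs"].append(out)         # splice-out: draws from balance
            balances[party] -= out["amount_sat"]
    # Each party's remaining balance becomes a closing output.
    for party, amount in balances.items():
        if amount > 0:
            tx["outputs"].append({"to": party, "amount_sat": amount})
    return tx

balances = {"alice": 60_000, "bob": 40_000}
stubs = {
    "alice": {"inputs": [{"amount_sat": 20_000}]},            # alice splices in
    "bob": {"outputs": [{"amount_sat": 10_000, "to": "cold"}]},  # bob splices out
}
tx = merge_close(balances, stubs)
```

This performs any number of splice-in/-out operations in a single
on-chain transaction, as described above.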

> A splice-in and splice-out would then be a closing transaction that
> gets cut-through with a funding transaction to the same peer.

That may complicate the state tracking a bit, but we'll eventually have
transactions that assume multiple roles anyway, so that sounds ok.

> The generalization is useful if we want to "reseat" a channel to one
> peer to another peer.  For example, if the node keeps payment
> statistics and notices that the channel with one peer always has a
> high probability of failing to forward to a destination, then it could
> decide to close that channel and open a channel to some other peer.
> This reseat operation could use the closing transaction cut-through to
> close the channel and open to another peer in a single onchain
> transaction.
> Such a reseat operation also seems like a reasonable primitive for
> Burchert-Decker-Wattenhofer channel factories to offer; reseats can be
> done offchain if both the reseat-form peer and the reseat-to peer and
> the node belong to the same channel factory.

The connection to channel factories is not strictly necessary: as long as
we have an invalidation scheme that allows us to invalidate a prior
funding transaction, we can reseat without needing a cut-through, just
invalidate the funding tx of the old channel and add the funding tx for
the new one in the new state.

Re: [Lightning-dev] Proposal for Advertising Lightning nodes via DNS records.

2018-04-11 Thread Christian Decker
ZmnSCPxj  writes:
>> This also allows domain operators to have one or more public nodes,
>> but many private ones with channels open to their public nodes to
>> better manage their risk. For example, the private nodes could be
>> behind a firewall.
> I am not sure how the risk gets managed if the public and private
> nodes are owned by the same economic entity.
> Suppose I am a single economic entity, and I have a public node B and
> a private node C.  You are the customer A who pays me.
> A -> B -> C
> Now the channel B->C contains money I own.  Any transfers between B
> and C are simply money juggling around between two accounts I own.
> Thus my earnings are never in the B->C channel, but in the (public!)
> A->B channel.  So attacking B will still allow hackers to take my
> earnings, because B->C only contains savings.  Indeed I probably take
> *more* risk here, since I need to fund B->C rather than, say, keep
> that money in a cold storage paper in a locked vault buried beneath
> concrete somewhere in the jungles of Africa (I would like to take the
> time to note that this is not where I actually keep my cold storage).
> Which is not to say this is completely worthless.  Perhaps B and C are
> owned by different entities: B takes on risk, and in exchange charges
> a larger-than-usual feerate for the B->C channel transfers.

Excellent point, but I think there are more advantages to having a node
separate from the gateway as source of truth. Let's assume the gateway
node, exposed directly to the open network is compromised, that still
means that an eventual store independently verifies incoming payments,
i.e., it is not possible for the compromised node to escalate to the
store, without also compromising the hidden nodes. If however the
gateway node provides the ground truth for the store, then the attacker
could just mark all of the attacker's invoices as complete and thus
steal goods from the shop. The attacker is limited to drain the B -> C
channel, to steal goods from the store, until it gets rebalanced.


Re: [Lightning-dev] Proposal for Advertising Lightning nodes via DNS records.

2018-04-09 Thread Christian Decker

Thanks for the detailed feedback; I'll try to address some of the issues.

Tyler H  writes:
> --Regarding looking up nodes at the time of payments:
> In the future, nodes could negotiate a channel open with a push amount and
> provide the TXID or payment hash as proof of their payment of the invoice.
> This wouldn't even require the channel to be usable, and merchants could
> decide to accept 1 (or even 0) confirmations of this transaction based on
> their acceptable level of risk, considering the properties of the channel
> (capacity, local balance etc).  So in that use case, this would be a rough
> process of the interaction:

There is very little difference between pushing with the channel
creation and just doing an immediate update even though the channel
isn't confirmed yet. To be honest I think the `push_msat` feature is the
classical case of optimizing too early.

But the end result is still that the merchant either takes a hit in the
trustworthiness of the incoming payment, or the buyer is going to have a
bad time waiting at the checkout until the channel confirms. 

> User tries to pay lightning invoice, and it fails.  The user's wallet
> offers to pay via channel opening.  The user accepts.  The wallet reads the
> invoice for a "domain" field, or perhaps if the wallet happens to be a
> browser, it does a SRV lookup against the current domain serving the
> invoice.  The wallet looks up the domain records, and verifies the
> destination node is present.  If so, the wallet picks the correct node
> based on the records present, and opens a channel with a push amount to
> it.  The destination node sees this and via as some yet undetermined
> method, associates it to that payment invoice and chooses to mark it as
> "paid" or "pending X confirmations" according to whatever criteria the node
> operator wishes to use.

I was going to comment that, since we already have an invoice detailing
the destination, the indirection through the DNS system to find the
desired connection point was useless, but your example with Starblocks
where connections are accepted by one node, and payments by another
convinced me that this is indeed a useful feature. However, it is a
feature that could be solved just as well by including an `r` tag in the
invoice itself. In this case you can either use the gossip protocol or the BOLT
10 DNS lookup system to locate the entry point into the merchant's
network. I don't think that a direct connection to the merchant in case
of it being unreachable is a good idea, because it creates latent
hubs. But I see the slight advantage of reducing the failure probability
w.r.t. opening a channel with a random node.

> In a simple example, you could list all of your nodes but prefer clients
> open channels to a single one, similar to ACINQ's setup with "endurance"
> and "starblocks" on testnet.  This example would simply require setting
> "endurance" to have the highest priority. This also allows domain operators
> to have one or more public nodes, but many private ones with channels open
> to their public nodes to better manage their risk. For example, the private
> nodes could be behind a firewall.

This is definitely true; if I'm not mistaken, starblocks doesn't even
allow incoming connections, so you have to use endurance as an entry
point.
> The result of this is that the user experience is improved, and a side
> benefit is being able to safely associate a given payment request, and by
> extension node, with a domain.  Another nontrivial benefit is there will be
> more channels opened with value on the other side, allowing for receiving
> funds back from Lightning.
> There are some possible open questions regarding ensuring a payment request
> hasn't been spoofed, but if you present the domain to the user, he/she can
> verify that the wallet is about to open a channel to the domain they
> expect.  Other issues with this are with DNS hijacking, which to be frank
> is not an unlikely scenario.  Caution would be necessary, and perhaps
> cryptographic means of associating nodes and their associated domains would
> be a requirement for something like this to exist, but the proposed BOLT
> lays the groundwork for that to happen.

There's some value in this, that's definitely true, however this kind of
added security through DNS hasn't quite worked out in the past. Then
again we can just do the domain -> nodeid binding without encouraging
users to actually open a direct connection :-)

> --Future payments going through the merchant:
> This is probably the biggest wrinkle.  The merchant _does_ have the ability
> to know when a payment transits the channel, thus reducing privacy.  I
> think the proposed BOLT should only be used to improve user experience, not
> as a replacement for the decentralized nature of Lightning.  For example,
> node operators will use autopilot-like functionality for opening channels,
> BUT they will be able to augment that with 

Re: [Lightning-dev] Proposal for Advertising Lightning nodes via DNS records.

2018-04-08 Thread Christian Decker
Hi Tyler,
Hi Robert,

first of all, welcome to the mailing list, always good to have more
people looking at and improving the spec. I quickly read through it,
and it is very well written and looks good.

On a conceptual level, I do however have some issues with the
proposal. I don't think that the kind of selective attachment to the
node of a merchant is beneficial either to the node that is opening the
channel or to the network as a whole:

 - For the node opening the channel, doing so at the time of a payment
   is too late: it basically means that for the first payment you'd have
   to wait for an on-chain confirmation, even if we use `push_msat` to
   perform the initial payment. This is bad for the user experience.
   Channels should be opened ahead of time so that, when the customer
   enters a shop, everything is already set up. Special cases are always
   hard to communicate ("you have to wait, but only this time, then in
   future all will be nice and quick").
 - It also causes all future payments to go through that merchant, which
   can now correlate your shopping activity with all of your other
   payments and build a profile. It's basically the hub-and-spoke
   threat with the added problem of the hub also knowing your identity.
 - The merchant can cripple future payments that he might suspect are
   going to a competitor (Starbucks may attempt to block payments for
   amounts that look like coffee payments and go to their
   competitor). Think net neutrality for Lightning.
 - For the network as a whole this creates a network of large hubs that
   are only weakly interconnected, or not connected at all, unless the
   merchants are "generous" enough to maintain connections among each
   other.
But it's not all bad: I really like the possibility of looking up a
merchant's node ID through DNS, so that my wallet can check (indirect)
connectivity to that node and try to optimize its connectivity.

I think we should encourage people, and implement the clients, to open
random connections, biased towards strengthening the overall
connectivity. With the gossip protocol we already disseminate enough
information to allow nodes to identify bottlenecks and provide
additional capacity to bridge them.
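As a toy illustration of what such connectivity-biased peer selection could look like (an invented heuristic for this sketch, not anything specified; graph, node names, and scoring are all made up): score each candidate peer by how much a new channel to it would shrink our total hop distance to the rest of the gossiped graph.

```python
from collections import deque

def total_distance(adj, source):
    """Sum of BFS hop distances from `source` to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(dist.values())

def best_peer(adj, me):
    """Pick the non-neighbor whose channel most reduces our total distance."""
    best, best_cost = None, float("inf")
    for cand in adj:
        if cand == me or cand in adj.get(me, ()):
            continue
        # Tentatively add the channel me <-> cand and re-measure.
        trial = {k: set(v) for k, v in adj.items()}
        trial[me].add(cand)
        trial[cand].add(me)
        cost = total_distance(trial, me)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Toy topology: two clusters joined only through F's channels.
adj = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "F"},
    "F": {"C", "D"}, "D": {"F", "E"}, "E": {"D"},
}
print(best_peer(adj, "A"))  # -> D
```

Note how the heuristic steers A toward the far cluster rather than toward another well-connected neighbor, which is exactly the "bridge the bottleneck" behavior described above.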

Sorry for being so pessimistic, but I think it's important we move away
from people attempting to open targeted channels directly to the
merchants. I still regret publishing the IP address of SLEEPYARK.


Tyler H  writes:
> Greetings,
> A challenge end-users face is connecting to nodes with enough liquidity to
> pay every merchant, and failing that, finding the merchant node in a
> reasonably sane way to open a channel to them for payments.
> As it is now, people find nodes in other people's visualizers, and pass
> node aliases around via word of mouth which is very prone to inaccuracy and
> MITM attacks. A current alternative is attempting to make a payment,
> decoding the payment request, finding the node on your graph and attempting
> to open a channel to the merchant.  This is only possible if the
> destination is advertising addresses.
> We (Robert Olsson and I) propose an additional BOLT, tentatively scheduled
> to be BOLT 12, to allow for operators of domain names to create SRV records
> for their nodes.  This is separate from BOLT 10's seed functionality as the
> desired outcome is to get only the nodes associated with a particular
> domain.  This would allow, as an example, users to say to each other
> "connect to a node" and the user can independently look up
> that domain, find advertised nodes and connect/open channels.
> This also improves security from the perspective of nodes masquerading as
> other nodes, as anyone with a domain can authoritatively list their nodes.
> In addition, domain operators could provide subdomains for their node
> addresses to distinguish between nodes intended for a specific purpose,
> from a human perspective.
> Robert Olsson (rompert) and I have created
> as a draft of
> what the RFC could look like.
> Feedback is much appreciated.
> Best regards,
> Tyler (tyzbit)
> ___
> Lightning-dev mailing list

Re: [Lightning-dev] An Idea to Improve Connectivity of the Graph

2018-04-06 Thread Christian Decker
ZmnSCPxj via Lightning-dev 
> In a retaliation construction, if a party misbehaves, the other party gets 
> the entire amount they are working on together, as disincentive for any party 
> to cheat.
> That works for the two-party case A and B.  If A cheats, B gets money.
> How do you extend that to the three-party case A B C?  If A cheats, what 
> happens?
> Suppose the correct current state is A=2, B=99, C=3.  Suppose A cheats
> and attempts to publish A=102, B=1, C=1.  C detects it because B is
> asleep at that time.  Does C get to claim the money that A claimed for
> itself, basically 101+1 and thus 102?  But the correct state has
> almost all of the money assigned to B instead.  Obviously that is
> unjust.  Instead C should get to claim only 3 from A (its 3 in the
> final state) in addition to its 1 in the published state, and should
> give the 99 to B.  So now B also needs another retaliatory
> construction for the case "A cheated first and C found out and
> also cheated me", and a separate construction for "A cheated but C was
> honest".  And that is separate construction for the case "C cheated
> first and A found out and also cheated me" and a separate construction
> for "C cheated but A was honest".
> As should be obvious, it does not scale well with the number of
> participants on a single offchain "purse"; it quickly becomes
> complex.

The need to identify the misbehaving party and punish just that one
party could be addressed by having pre-committed retaliation
transactions. However this results in a large number of pre-committed
transactions that need to be carried around just for the case that
someone really misbehaves. In addition colluding parties may be able to
punish each other when a cheat attempt seems doomed to fail, which
reduces the cost of the attack. This could also be partially fixed by
pre-committing retaliation transactions that split the misbehaving
party's funds. Overall a very unsatisfactory solution.

> Retaliatory constructions however have the major advantage of not
> imposing limits on the number of updates that are allowed to the
> offchain "purse".  Prior to Rusty shachains it was thought to require
> storage linear in the number of updates (which could be pruned once
> the channel/"purse" is brought onchain), but Rusty shachains also
> require O(1) storage on number of updates.  Thus retaliatory
> constructions are used for channels.
> Note that channel factories, to my understanding, can have the Duplex
> construction near the root of the initial onchain anchor transaction,
> but be terminated in Poon-Dryja retaliatory channels, so that a good
> part of the current LN technology has a good chance of working even
> after channel factories are implemented.  This strikes me as a good
> balance: restructuring channels is expected to be much rarer compared
> to updating them normally for normal usage, so each construction plays
> its own strengths: the Decker-Wattenhofer construction which imposes a
> limit on the number of updates, but has no limit on number of
> participants, is used for the rarer, massive "channel restructuring"
> operations, while the Poon-Dryja construction which imposes a
> practical limit on number of participants, but has no limit on number
> of updates, is used for "day-to-day" normal operation.

That's not as bad a tradeoff as people usually assume: the DMC
construction has parameters that allow tweaking the number of
invalidations, and with parameters similar to LN we can have 1.4 billion
updates, which is years of operation without any need to
re-anchor. In addition penaltyless invalidation has a number of
advantages, for example it doesn't have the state asymmetry inherent in
LN and there is no toxic state information that, when leaked, results in
your funds being claimed through a retaliation. This happened to me btw
last month when I accidentally restored a wallet from backup and
attempted to reconnect.
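As a rough illustration of where a figure like 1.4 billion can come from (the parameters below are hypothetical, chosen only to land in that ballpark): in a Decker-Wattenhofer invalidation tree the per-level decrement counts multiply, so a handful of nested levels already yields billions of replaceable states.

```python
def dmc_updates(steps, levels):
    """Number of state replacements supported by an invalidation tree with
    `levels` nested transactions, each allowing `steps` sequence-number
    decrements: the counts multiply across levels."""
    return steps ** levels

# Hypothetical parametrization: 68 decrements per level over 5 levels
# already exceeds 1.4 billion invalidations before re-anchoring.
print(dmc_updates(68, 5))  # 1453933568
```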


Re: [Lightning-dev] Can I try Lightning without running a fully-fledged bitcoin block chain?

2018-03-30 Thread Christian Decker
Dear Mahesh,

as interesting as the discussion of alternative blockchain storage is,
it is probably off-topic for the Lightning mailing list. So I'd suggest
taking this discussion to either the bitcoin-dev or bitcoin-core-dev
mailing lists.


Mahesh Govind  writes:
> Could we use similar technique used in hyperledger to prune the chain .
> Pruning based on consensus ?
> -mahesh
> On Wed, Mar 28, 2018 at 10:30 AM, ZmnSCPxj via Lightning-dev <
>> wrote:
>> Good morning Segue,
>> Please consider creating an implementation of this idea for bitcoind and
>> share it on bitcoin-dev.  Then please make a pull request on
>> for this.
>> Regards,
>> ZmnSCPxj
>> Sent with ProtonMail  Secure Email.
>> ‐‐‐ Original Message ‐‐‐
>> On March 27, 2018 11:26 AM, Segue <> wrote:
>> Developers,
>> On THIS note and slightly off-topic but relevant, why can't chunks of
>> blockchain peel off the backend periodically and be archived, say on
>> minimum of 150 computers across 7 continents?
>> It seems crazy to continue adding on to an increasingly long chain to
>> infinity if the old chapters (i.e. more than, say, 2 years old) could be
>> stored in an evenly distributed manner across the planet.  The same 150
>> computers would not need to store every chapter either, just the index
>> would need to be widely distributed in order to reconnect with a chapter if
>> needed.
>> Then maybe it is no longer a limitation in the future for people like
>> Yubin.
>> Segue
>> On 3/26/18 6:12 PM, wrote:
>> Message: 2
>> Date: Sat, 17 Mar 2018 11:56:05 +0100
>> From: Federico Tenga  
>> To: Yubin Ruan  
>> Cc:
>> Subject: Re: [Lightning-dev] Can I try Lightning without running a
>>  fully-fledged bitcoin block chain?
>> Message-ID:

Re: [Lightning-dev] Lightning network implementation with ethereum

2018-03-30 Thread Christian Decker
Dear Mahesh,

that's a very interesting question: to the best of my knowledge there is
no working implementation for Ethereum and I don't think anybody is
working on one currently. There is the Raiden network attempt at porting
a Lightning-like network to Ethereum, but I'm not sure what the current
status of that project is.

A direct port is probably not possible due to the differences in the
underlying blockchain, but a Lightning-like network is definitely
possible.


Mahesh Govind  writes:
> Dear Experts,
> Could you please let me know the implementation I could use with ethereum  .
> With thanks and regards
> mahesh

Re: [Lightning-dev] DNS Seed query semantics clarification

2018-03-19 Thread Christian Decker
Thomas Steenholdt  writes:
> Thanks for the explanation - This was exactly the piece of the
> puzzle I was missing. 
> I'd be happy to help clarify this in the BOLT10 specification, if that
> makes any type of sense? I can make a pull request for revie

Absolutely, improvements are always welcome :-)

Re: [Lightning-dev] DNS Seed query semantics clarification

2018-03-16 Thread Christian Decker
Hi Thomas,

indeed the spec is a bit vague on the flags. The intent is to use them
as subdomain conditions. For example, if you want to query for IPv4
nodes only, then you'd use the following:


while IPv4 or IPv6 nodes, restricted to realm 0, would be returned for
the following:


Notice however that I haven't implemented the query filtering itself
just yet.
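A small sketch of how such key-value conditions could be assembled into a query FQDN (the seed root lseed.bitcoinstats.com is the example seed domain from the BOLT 10 discussion; the helper function itself is hypothetical):

```python
def seed_query(conditions, seed_root="lseed.bitcoinstats.com"):
    """conditions: list of (key, value) pairs, e.g. [("r", 0), ("a", 2)].
    Each pair becomes one subdomain label, leftmost condition first."""
    labels = [f"{key}{value}" for key, value in conditions]
    return ".".join(labels + [seed_root])

# Realm-0 nodes reachable over IPv4 (address-type bit 1 -> bitmask 2):
print(seed_query([("r", 0), ("a", 2)]))  # r0.a2.lseed.bitcoinstats.com
```

The resulting name would then be resolved like any other domain, with the seed applying the conditions as filters on which node records it returns.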


Thomas Steenholdt  writes:
> Hi there,
> I'm trying to understand the DNS seeds described by BOLT10, but seem to be 
> missing something regarding the query semantics.
> The BOLT states that the DNS seed must support a list of key-value pairs, but 
> it's unclear to me how these pairs are used in a query. Are they encoded into 
> the fqdn used in the query or something entirely different?
> Any pointers?
> /Thomas

Re: [Lightning-dev] New form of 51% attack via lightning's revocation system possible?

2018-03-13 Thread Christian Decker
Good example, even if rather hard to setup :-)

What I meant by the attack being identical is that we can replay the
entire attack on-chain, without needing Lightning in the first place,
i.e., the attacker needed to own the funds he is going to steal at some
point, whether that is as part of a channel settlement that is replaced
or an output that has been spent.

You are however right that Lightning with its multi-hop payments
increases the potential exposure of a node, increasing the attacker's
payoff in case of a successful attack.


René Pickhardt <> writes:
> Hey Christian,
> I agree with you on almost anything you said. however I disagree that in
> the lightning case it produces just another double spending. I wish to to
> emphasize on my statement that the in the case with lightning such a 51%
> attack can steal way more BTC than double spending my own funds. The
> following example is a little extrem and constructed but it should help to
> make the point. Also for pure convenience reasons I neglected the fact that
> channels should never be worse distributed than 99% to 1%:
> Let us assume I am the attacker currently owning 1000 BTC. Now 1000 nodes
> called n_0,...n_{999} open a payment channel with me (all funded by the
> other side with 999 BTC in each channel (and 1 BTC from me)) resulting in
> the following channel balance sheet:
> c_0: me = 1 BTC and n_0 = 999 BTC
> c_1: me = 1 BTC and n_1 = 999 BTC
> c_2: me = 1 BTC and n_2 = 999 BTC
> ...
> c_{999}: me = 1 BTC and n_{999} = 999 BTC
> Now node n_0 sends 1 BTC to each node n_1,...,n_{999} (using me for routing
> the payment) so the channel balances read:
> c_0: me =   1000 BTC and n_0 =   0 BTC (save the corresponding
> commitment transaction!)
> c_1: me = 0 BTC and n_1 = 1000 BTC
> c_2: me = 0 BTC and n_2 = 1000 BTC
> ...
> c_{999}: me=  0 BTC and n_{999} = 1000 BTC
> next n_1 sends 1000 BTC to n_0:
> c_0: me = 0 BTC and n_0 = 1000 BTC
> c_1: me =   1000 BTC and n_1 =0 BTC  (save the corresponding
> commitment transaction!)
> c_2: me = 0 BTC and n_2 = 1000 BTC
> ...
> c_{999}: me=  0 BTC and n_{999} = 1000 BTC
> similarly  n_2 sends 1000 BTC to n_1:
> c_0: me = 0 BTC and n_0 = 1000 BTC
> c_1: me = 0 BTC and n_1 = 1000 BTC
> c_2: me =   1000 BTC and n_2 =0 BTC  (save the corresponding
> commitment transaction!)
> ...
> c_{999}: me = 0 BTC nad n_{999} = 1000 BTC
> following this scheme n_3 --[1000 BTC]--> n_2, n_4 --[1000 BTC]--> n_3,...
> due to this (as mentioned highly constructed and artificial behavior) I
> will have old commitment transactions in *each* and every channel (which
> spends 1000 BTC to me)
> When starting my secret mining endeavor I spend those commitment
> transactions which gives in this particular case 1000 * 1000 BTC = 1M BTC
> to me.
> So while I agree that a 51% is a problem for any blockchain technology I
> think the consequences in the lightning scenario are way more problematic
> and makes such an attack also way more interesting for a dishonest
> fraudulent person / group. In particular I could run for a decade on stable
> payment channels storing old state and at some point realizing it would be
> a really big opportunity secretly cashing in all those old transactions
> which can't be revoked.
> I guess one way of resolving this kind of limitless but rare possibility
> for stealing could be to make sure no one can have more than 2 or three
> times the amount of BTC she owns in all the payment channels the person has
> open. As funding transactions are publicly visible on the blockchain one
> could at least use that measure to warn people before opening and funding
> another payment channel with a node that is heavily underfunded. Also in
> the sense of network topology such a measure would probably make sure that
> channels are somewhat equally funded.
> best Rene
> On Tue, Mar 13, 2018 at 3:55 PM, Christian Decker <
>> wrote:
>> Hi René,
>> very good question. I think the simple answer is that this is exactly
>> the reason why not having a participant in the network that can 51%
>> attack over a prolonged period is one of the base assumptions in
>> Lightning. These attacks are deadly to all blockchains, and we are
>> certainly no different in that regard.
>> More interesting is the assertion that this may indeed be more dangerous
>> than a classical 51% attack, in which an attacker can only doublespend
>> funds that she had control over at some point during the attack
>> (duration being defined as 

Re: [Lightning-dev] New form of 51% attack via lightning's revocation system possible?

2018-03-13 Thread Christian Decker
Hi René,

very good question. I think the simple answer is that this is exactly
the reason why not having a participant in the network that can 51%
attack over a prolonged period is one of the base assumptions in
Lightning. These attacks are deadly to all blockchains, and we are
certainly no different in that regard.

More interesting is the assertion that this may indeed be more dangerous
than a classical 51% attack, in which an attacker can only doublespend
funds that she had control over at some point during the attack
(duration being defined as the period she can build a hidden fork of). I
think the case for Lightning is not more dangerous since what they could
do is enforce an old state in which they had a higher balance than in
the final state, without incurring a penalty. The key observation is
that in this old state they actually had to have the balance they are
stealing on the channel. So this maps directly to the classical
scenario in which an attacker simply doublespends funds they had control
over during the attack, making the attack pretty much the same.

Another interesting observation is that with Lightning the state that
the attacker is enforcing may predate the attack, e.g., an attacker
could use a state that existed and was replaced before it started
generating its fork. This is in contrast to the classical doublespend
attack in which invalidated spends have to happen after the fork
started, and the attacker just filters them from its fork.

But as I said before, if we can't count on there not being a 51%
attacker, then things are pretty much broken anyway :-)


René Pickhardt via Lightning-dev
> Hey everyone,
> disclaimer: as mentioned in my other mail (
> ) I am currently studying the revocation system of duplex micropayment
> channels in detail but I am also pretty new to the topic. So I hope the
> attack I am about to describe is not possible and it is just me overlooking
> some detail or rather my lack of understanding.
> That being said even after waiting one week upon discovery and double
> checking the assumptions I made I am still positive that the revocation
> system in its current form allows for a new form of a 51% attack. This
> attack seems to be way more harmful than a successful 51% attack on the
> bitcoin network. Afaik within the bitcoin network I could 'only double
> spend' my own funds with a successful 51% attack. In the lightning case it
> seems that an attacker could steal an arbitrary amount of funds as long as
> the attacker has enough payment channels with enough balance open.
> The attack itself follows exactly the philosophy of lightning: "If a tree
> falls in the forest and no one is around to hear it. Does it make a sound?"
> In the context of the attack this would translate to: "If a 51% attacker
> secretly mines enough blocks after fraudulently spending old commitment
> transactions and no one sees it during the *to_self_delay* period,
> have the commitment transactions been spent? (How) Can they be revoked?"
> As for the technical details I quote from the spec of BOLT 3:
> "*To allow an opportunity for penalty transactions, in case of a revoked
> commitment transaction, all outputs that return funds to the owner of the
> commitment transaction (a.k.a. the "local node") must be delayed for *
> *to_self_delay** blocks. This delay is done in a second-stage HTLC
> transaction (HTLC-success for HTLCs accepted by the local node,
> HTLC-timeout for HTLCs offered by the local node)*"
> Assume an attacker has 51% of the hash power she could open several
> lightning channels and in particular accept any incoming payment channel
> (the more balance is in her channels the more lucrative the 51% attack).
> Since the attacker already has a lot of hash power it is reasonable (but
> not necessary) to assume that the attacker already has a lot of bitcoins
> and is well known to honest nodes in the network which makes it even more
> likely to have many open channels.
> The attacker keeps track of her (revocable) commitment transactions in
> which the balance is mostly on the attackers side. Once the attacker knows
> enough of these (old) commitment transactions the attack is being executed
> in the following way:
> 0.) The max value of to_self_delay is evaluated. Let us assume it is 72
> blocks (or half a day).
> 1.) The attacker secretly starts mining on her own but does not broadcasts
> any successfully mined block. Since the attacker has 51% of the hash power
> she will most likely be faster than the network to mine the 72 blocks of
> the safety period in which fraudulent commitment transactions could be
> revoked.
> 2.) The attacker spends all the fraudulent (old) commitment transactions in
> the first block of her secret mining endeavor.
> 3.) Meanwhile the attacker starts spending her own funds of 

Re: [Lightning-dev] Pinging a route for capacity

2018-03-04 Thread Christian Decker
Rusty Russell  writes:
> Jim Posen  writes:
> If failure is common this would be true, but I think it's too early to
> design for it.
> This kind of signalling is what fees are for: when capacity gets low you
> increase fees, and when it gets high, you reduce them.  But that may
> still prove insufficient.
> Two things come to mind:
> 1. `temporary_channel_failure` returns a `channel_update`.  The
>implication is that this has the disabled flag, but we should
>probably make that true iff the request asks for < 2% of the channel
>capacity or some "minimal bar".  If you can't even service this, you
>should disable the channel.
> 2. We can implement fast failure to reduce latency:
> Note that there needs to be more analysis on reliable ways to mask the
> active capacity of a channel: using a static random threshold still
> leaks information that *something* has happened, so it may need to be
> more sophisticated.

I have to agree with Rusty here, pinging a channel for capacity sounds a
lot like premature optimization. In addition it could lead to a rather
large privacy leak, both for the sender as well as the individual

Giving out any information about the current balance of a channel could
allow payments to be traced through the network, and users pinging
channels before making a payment could likewise result in traced
payments.

The feedback mechanism we have by adding channel_updates in the failure
message should allow senders to learn about changes in the channels that
caused the failure, and it should be injected into the gossip so peers
learn about it as well. Once we have exhausted what we can do with the
simple gossip mechanism, only then should we be looking at other


Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning

2018-02-12 Thread Christian Decker
Jim Posen  writes:
> If using two hashes to deliver the payment while still getting a proof, I'm
> not sure what that provides above just sending regular lightning payments
> over multiple routes with one hash. Firstly, if there is a second hash, it
> would presumably be the same for all routes, making them linkable again,
> which AMP tries to solve. And secondly, the receiver has no incentive to
> claim any of the HTLCs before all of them are locked in, because in that
> case they are releasing the transaction receipt before fully being paid.

Arguably the second concern is not really an issue: if you allow partial
claims you'll end up in a whole lot of trouble. It should always be the
case that the payment as a whole is atomic, i.e., either the entirety of
the payment goes through or none of it, independently of whether it was
a singlepath or a multipath payment. This is actually one of the really
nice features that was enforced by the simple "just reuse the
hash" mechanism: you always had to wait for the complete payment or
you'd risk losing part of it.

Re: [Lightning-dev] An Idea to Improve Connectivity of the Graph

2018-02-05 Thread Christian Decker
I'd also like to point out that the way we do state invalidations in
Lightning is not really suited for multi-party negotiations beyond 2
parties. The number of potential reactions to a party cheating grows
exponentially in the number of parties in the contract, which is the
reason the Channel Factories paper relies on the Duplex Micropayment
Channel construction instead of the retaliation construction in LN.

Furthermore I'm not exactly clear on how we could retaliate against
misbehavior on one channel in the other channels if they are logically
independent. Without this you could potentially re-allocate your funds
to another channel and then attempt to cheat, without it costing you
anything.

ZmnSCPxj via Lightning-dev 

> Good morning Abhishek Sharma,
> While the goal of the idea is good, can you provide more details on the 
> Bitcoin transactions?  Presumably the on-chain anchor is a 3-of-3 multisig 
> UTXO, what is the transaction that spends that?  What do Lightning commitment 
> transactions spend?  Can you draw a graph of transaction chains that ensure 
> correct operation of this idea?
> Have you seen Burchert-Decker-Wattenhofer Channel Factories? 
>  What is the difference between your idea and the Burchert-Decker-Wattenhofer 
> Channel Factories?
> Regards,
> ZmnSCPxj
> Sent with ProtonMail Secure Email.
>  Original Message 
> On February 4, 2018 6:21 PM, Abhishek Sharma  wrote:
>> Hello all,
>> I am not sure if this is the right place for this, but I have been thinking 
>> about the lightning network and how it could be modified so that fewer total 
>> channels would need to be open. I had the idea for a specific kind of 
>> transaction, in which three parties commit their funds all at once, and are 
>> able to move their funds between the three open channels between them. I 
>> will give a rough overview of my idea and give an example that I think 
>> illustrates how it could improve users' ability to route their transactions.
>> Say that three parties, A, B, and C, create a special commitment transaction 
>> on the network that creates three open channels between each of them with a 
>> pre-specified balance in each channel. Now, these channels would be 
>> lightning network channels, and so the three of them could transact with 
>> each other and modify balances in their individual channels at will. 
>> However, this special agreement between the three of them also has the 
>> property than they can move their funds between channels, provided they have 
>> the permission of the counterparty to the channel they move their funds 
>> from, and then presents this to the other counterparty to show that funds 
>> have been moved.
>> 1.) A, B, and C each create a commitment transaction, committing .5 BTC (3 
>> BTC in total) on their end of each of their channels.
>> 2.) A, B, and C transact normally using the lightning protocol. After some 
>> amount of time, the channel balances are as follows:
>> channel AB: A - 0.75, B - 0.25
>> channel BC: B - 0.4, C - 0.6,
>> channel AC: A - 0, C: 1.0
>> 3.) A would like to send .5 BTC to C, however she does not have enough funds 
>> in that channel to do so. It's also not possible for her to route her 
>> transaction through B, as B only has .4 in his channel with C. However, she 
>> does have those funds in her channel with B, and so asks for B's permission 
>> (in the form of a signed balance state that includes the hash of the 
>> previous balance), to move those funds over to her account with C. She gets 
>> this signed slip from B, and then presents it to C.
>> 4.) A, B, and C continue trading on their update balances.
>> 5.) When they wish to close out their channels, they all post the last 
>> signed balance statements each of them has.
>> Say, for example, A and B were to collude and trade on their old balance (of 
> .75 and .25) after B signing the statement that A was 'moving' funds to C. If 
>> A and C were trading on their new balances, C has proof of both A and B's 
>> collusion, and she can present the signed slip which said that A was moving 
>> funds to AC and so the total balance on A and B's channel should've summed 
>> to 0.5. In this event, All funds in all three channels are forfeited to C.
>> I believe this works because, in virtue of being able to make inferences 
>> based on her own channel balances, C always knows (if she is following the 
>> protocol) exactly how much should be in channel AB. and can prove this. If 
>> there were 4 parties, C couldn't prove on her own that some set of parties 
>> colluded to trade on an old balance.
>> Now, I'll show why such a mechanism can be useful.
>> Now, assume that there are parties A, B, C, D, and E, and the following 

Re: [Lightning-dev] Manual channel funding

2018-02-05 Thread Christian Decker
Hi Alex,

not sure what the context of your question is. It doesn't appear to be
protocol related, but rather an issue with the interface that the
implementations expose. If that is the case, I'd suggest filing an issue
with the respective implementation.


Alex P  writes:
> Hello!
> At the moment there is no option to choose outputs to fund channel
> manually. Moreover, there is no way to fund channel with "all available
> funds". That's weird, I set up a channel and tried to use "all I ave",
> and got is a transaction on blockchain with the output for 980 SAT:
> To my opinions at least there should be an option "take fee from funding
> amount", and may be an option to choose exact outputs to spend.
> Any ideas?

[Lightning-dev] Improving the initial gossip sync

2018-02-05 Thread Christian Decker
Hi everyone

When we started drafting the specification we decided to postpone the
topology synchronization mechanism until we have a better picture of the
kind of loads that are to be expected in the network, e.g., churn and
update rate, and instead implement a trivial gossip protocol to
distribute the topology updates. This includes the dreaded initial
synchronization dump that has caused some issues lately to all
implementations, given that we dump several thousands of updates, that
may require block metadata (short channel ID to txid conversion) lookup
and a UTXO lookup (is this channel still active?).
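For reference, the short channel ID mentioned above packs the funding transaction's on-chain coordinates, which is why resolving it to a txid requires block metadata. A sketch of the (un)packing, assuming the usual 3-byte block height / 3-byte transaction index / 2-byte output index layout:

```python
def encode_short_channel_id(block, txidx, outidx):
    """Pack (block_height, tx_index, output_index) into a 64-bit scid."""
    return (block << 40) | (txidx << 16) | outidx

def decode_short_channel_id(scid):
    """Split a 64-bit short channel ID back into its three components."""
    block = (scid >> 40) & 0xFFFFFF   # 3 bytes of block height
    txidx = (scid >> 16) & 0xFFFFFF   # 3 bytes of tx position in the block
    outidx = scid & 0xFFFF            # 2 bytes of funding output index
    return block, txidx, outidx

scid = encode_short_channel_id(503_000, 42, 1)
print(decode_short_channel_id(scid))  # (503000, 42, 1)
```

Turning the (height, index) pair into an actual txid is the part that needs the block metadata lookup, since the scid itself carries only the position, not the hash.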

During the last call we decided to go for an incremental improvement,
rather than a full synchronization mechanism (IBLT, rsync, ...). So
let's discuss what that improvement could look like.

In the following I'll describe a very simple extension based on a
highwater mark for updates, and I think Pierre has a good proposal of
his own, that I'll let him explain.

We already have the `initial_routing_sync` feature bit, which (if
implemented) allows disabling the initial gossip synchronization, and
only forwarding newly received gossip messages. I propose adding a new
feature bit (6, i.e., bitmask 0x40) indicating that the `init` message
is extended with a u32 `gossip_timestamp`, interpreted as a UNIX
timestamp. The `gossip_timestamp` is the lowest `channel_update` and
`node_announcement` timestamp the recipient is supposed to send, any
older update or announcement is to be skipped. This allows the `init`
sender to specify how far back the initial synchronization should go.
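As a rough illustration, the extended `init` message could be packed and parsed along these lines. This is a sketch only: the single-byte feature map and the exact field layout are assumptions for brevity, not the normative encoding.

```python
import struct

INIT_TYPE = 16              # BOLT #1 message type for `init`
GOSSIP_TIMESTAMP_BIT = 0x40  # proposed feature bit 6

def build_init(localfeatures: bytes, gossip_timestamp: int) -> bytes:
    """Build an `init` advertising the gossip_timestamp extension (sketch)."""
    features = bytes([localfeatures[0] | GOSSIP_TIMESTAMP_BIT])
    globalfeatures = b""
    msg = struct.pack("!H", INIT_TYPE)
    msg += struct.pack("!H", len(globalfeatures)) + globalfeatures
    msg += struct.pack("!H", len(features)) + features
    msg += struct.pack("!I", gossip_timestamp)  # u32 UNIX timestamp
    return msg

def parse_init(msg: bytes):
    """Parse the sketch above; returns (features, gossip_timestamp or None)."""
    msg_type, = struct.unpack_from("!H", msg, 0)
    assert msg_type == INIT_TYPE
    gflen, = struct.unpack_from("!H", msg, 2)
    offset = 4 + gflen
    lflen, = struct.unpack_from("!H", msg, offset)
    features = msg[offset + 2:offset + 2 + lflen]
    offset += 2 + lflen
    ts = None
    if features and features[0] & GOSSIP_TIMESTAMP_BIT:
        ts, = struct.unpack_from("!I", msg, offset)
    return features, ts
```

Because the timestamp only follows when the bit is set, a peer that understands the bit knows whether to expect the extra four bytes.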

The logic to forward announcements thus follows this outline:

 - Set `gossip_timestamp` for this peer
 - Iterate through all `channel_update`s that have a timestamp newer
   than the `gossip_timestamp` (skipping replaced ones as per BOLT)
 - For each `channel_update` fetch the corresponding
   `channel_announcement` and the endpoints' `node_announcement`s
 - Forward the messages in the correct order, i.e.,
   `channel_announcement`, then `channel_update`, and then `node_announcement`
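The outline above could be sketched as follows. The gossip-store layout (plain dicts keyed by short channel ID and node ID) is an assumption for illustration; real implementations keep richer state and track replaced updates.

```python
from dataclasses import dataclass, field

@dataclass
class GossipStore:
    channel_updates: dict = field(default_factory=dict)        # scid -> (timestamp, update)
    channel_announcements: dict = field(default_factory=dict)  # scid -> announcement
    node_announcements: dict = field(default_factory=dict)     # node_id -> announcement
    channel_endpoints: dict = field(default_factory=dict)      # scid -> (node1, node2)

def initial_sync(store: GossipStore, gossip_timestamp: int):
    """Yield gossip for a peer that sent `gossip_timestamp` in `init`."""
    sent_nodes = set()
    for scid, (ts, update) in sorted(store.channel_updates.items()):
        if ts < gossip_timestamp:
            continue  # older than what the peer asked for: skip
        # announcement first, then the update, then the endpoints' nodes
        yield store.channel_announcements[scid]
        yield update
        for node_id in store.channel_endpoints[scid]:
            if node_id not in sent_nodes and node_id in store.node_announcements:
                sent_nodes.add(node_id)
                yield store.node_announcements[node_id]
```

Deduplicating `node_announcement`s via `sent_nodes` matters in practice, since most nodes appear as endpoints of many channels.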

The feature bit is even, meaning that it is required from the peer,
since we extend the `init` message itself, and a peer that does not
support this feature would be unable to parse any future extensions to
the `init` message. Alternatively we could create a new
`set_gossip_timestamp` message that is only sent if both endpoints
support this proposal, but that could result in duplicate messages being
delivered between the `init` and the `set_gossip_timestamp` message and
it'd require additional messages.

`gossip_timestamp` is rather flexible, since it allows the sender to
specify its most recent update if it believes it is completely caught
up, or send a slightly older timestamp to have some overlap for
currently broadcasting updates, or send the timestamp at which it was
last connected to the network, in the case of prolonged downtime.
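The three strategies described above amount to different ways of picking the cutoff. A minimal sketch, where the one-hour `OVERLAP` margin is an assumed parameter rather than anything from the proposal:

```python
OVERLAP = 3600  # assumed safety margin: one hour of deliberate overlap

def caught_up(newest_update_ts: int) -> int:
    """Fully synced: only want strictly newer gossip."""
    return newest_update_ts

def with_overlap(newest_update_ts: int) -> int:
    """Re-request a little history to catch in-flight broadcasts."""
    return max(0, newest_update_ts - OVERLAP)

def after_downtime(last_disconnect_ts: int) -> int:
    """Prolonged downtime: ask for everything missed while offline."""
    return last_disconnect_ts
```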

The reason I'm using the timestamp and not the blockheight in the short
channel ID is that we already use the timestamp for pruning. With a
blockheight-based cutoff we might ignore channels that were created,
then not announced or forgotten, and then later came back and are now
being announced again.
I hope this rather simple proposal is sufficient to fix the short-term
issues we are facing with the initial sync, while we wait for a real
sync protocol. It is definitely not meant to allow perfect
synchronization of the topology between peers, but then again I don't
believe that is strictly necessary to make the routing successful.

Please let me know what you think, and I'd love to discuss Pierre's
proposal as well.


Re: [Lightning-dev] How to use LN

2018-01-20 Thread Christian Decker
v e  writes:
> Will do as you suggested. one another question, when you say customers do
> you mean end clients who are buying goods and services?

Yes, they'll need to have clients that understand the Lightning protocol
just like anyone else in the network.

> Also i am building an server-client model where i am trying to host
> multiple merchants who can accepts payments from end customers.
> Does that mean i need to have (bitcoins node, c-lightning + charge) node
> per merchant?

Neither c-lightning nor Lightning Charge (nor any other implementation,
for that matter) is multi-tenant, which is what you're
describing. Someone with access to the RPC has full control over all
channels and all funds in the daemon. Just like you wouldn't expose a
raw bitcoind RPC interface to multiple users, you shouldn't directly
expose lightningd to multiple tenants. You can build a layer in between
that differentiates the tenants and controls access to individual
resources, but we don't currently support that directly.

Notice also that we'd like to encourage all users, be they customers or
shops, to run their own nodes rather than rely on large managed
infrastructure.


Re: [Lightning-dev] How to use LN

2018-01-19 Thread Christian Decker
Hi v e,

in order to use Lightning Charge you will need the following:

 - A full bitcoind node sync'd with the network
 - A c-lightning node
 - npm + lightning-charge running to give you access to the REST API

We currently do not have (and may never have) bindings for bitcoinj.

Re invoices: the invoices are tracked by c-lightning and you can store a
reference to them in your store using the `payment_hash`. Customers will
need to have their own Lightning client and some channels open to the
network in order to send payments.
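A minimal sketch of keeping that store-side reference keyed by `payment_hash`. The invoice dict here mimics what an invoice API might return; the field names and class are illustrative assumptions, not Lightning Charge's actual interface.

```python
class OrderBook:
    """Maps Lightning payment hashes back to shop orders (sketch)."""

    def __init__(self):
        self._by_hash = {}  # payment_hash -> order_id

    def record(self, order_id: str, invoice: dict) -> None:
        """Remember which order an invoice was created for."""
        self._by_hash[invoice["payment_hash"]] = order_id

    def order_for(self, payment_hash: str):
        """Look up which order a settled payment belongs to."""
        return self._by_hash.get(payment_hash)
```

When a payment settles, the node reports the `payment_hash`, which the store resolves back to the original order.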


v e  writes:
> Hi,
> I am building merchant app and consumer wallet app using
> APIs for wallet creation. Now I want to use LN
> built by your team. I am looking at this API
> and have few questions:
> * Do i need to run bitcoin core node?
> * I see invoice apis, I assume that the invoice is generated at the
> merchant wallet app. How do i tie the merchant wallet to the invoice?
> * similarly how do i send coins from consumer wallet to the created invoice?
> Sorry, I am very new to this and apologize for my assumptions.
> Any help is highly appreciated.

Re: [Lightning-dev] negative fees for HTLC relay

2018-01-18 Thread Christian Decker
Mark Friedenbach  writes:

> It is not the case that all instances where you might have negative
> fees would have loops.

If we don't have a cycle we can hardly talk about rebalancing
channels. At that point you're paying for someone else's payment to go
through your channel, and I'm unclear what the motivation for that might
be. Anyway, this is still possible by communicating it out of band with
the payment creator, and should not be baked into the gossip protocol
itself, in my opinion. It's obscure enough not to be worth the extra
effort.

> One instance where you want this feature is when the network becomes
> too weighted in one side of the graph.

There is little you can do to prevent this: if we have a network with a
small cut, with a source and sink on opposite sides of that cut, no
amount of voluntary sacrifice from the nodes along the cut will have a
lasting effect. The better solution would be to change the topology of
the network to remove the cut, or balance traffic over it, e.g., moving
a sink to the other side of the cut.

> Another is when the other side is a non-routable endpoint. In both
> cases would be useful to signal to others that you were willing to pay
> to rebalance, and this hand wavy argument about loops doesn’t seem to
> apply.

I'm not sure I understand what you mean by non-routable endpoint, so
correct me if I'm wrong. I'm assuming that a non-routable endpoint is a
non-publicly-announced node in the network? In that case no fee tricks
will ever get people to route over it, since they can't even construct
the onion to talk to it. Notice that payment requests allow recipients
to get paid anyway, by explicitly including the information necessary to
construct the onion to talk to that node.

Not trying to be dismissive here, and I might be getting this wrong, so
let me know if I did and what use-cases you had in mind :-)


Re: [Lightning-dev] negative fees for HTLC relay

2018-01-17 Thread Christian Decker
Benjamin Mord  writes:
> It isn't obvious to me from the BOLTs if fees can be negative, and I'm
> finding uint in the go source code - which suggests not. In scenarios where
> the funding of a payment channel has been fully committed in one direction,
> why not allow negative fees to incent unwinding, in scenarios where nodes
> consider that cheaper than on-chain rebalancing?

After discussing this for a while we decided not to allow negative fees
in channel announcements (for now), because they actually do not add to
the flexibility and require special handling for route finding.

The main argument for negative fees has always been that they allow a
channel operator to rebalance its channels. However it is neither
required, nor is it really all that helpful. If a node wants to
rebalance, it needs to find a cycle it can use. The simplest rebalancing
is for the node itself to send a payment along that cycle back to
itself, giving it full control over the amount, timing, and cost of the
rebalancing.
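Effectively, a circular self-payment just shifts local balance from one of the node's channels to another. A toy sketch (channel names and the flat balance model are illustrative; routing fees along the cycle are ignored for clarity):

```python
def self_rebalance(local_balances: dict, out_chan: str,
                   in_chan: str, amount: int) -> dict:
    """Pay ourselves `amount` out of `out_chan`, receiving it back on
    `in_chan` after it travels around the cycle (fees ignored)."""
    if local_balances[out_chan] < amount:
        raise ValueError("not enough outgoing capacity on " + out_chan)
    new = dict(local_balances)
    new[out_chan] -= amount  # outgoing hop drains our side...
    new[in_chan] += amount   # ...and the returning payment refills it here
    return new
```

Because the node chooses `amount`, the cycle, and the moment to send, it controls the rebalancing completely, which is exactly what negative fees cannot offer.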

The negative fees were intended to encourage other participants to use
any cycle and rebalance on behalf of the node offering the negative
fees. However, that gives the node less control over the rebalancing,
e.g., how many payments to incentivize, amounts, etc. This is compounded
by the inherent delay of channel updates disseminating through the
network. So if a rebalancing node gets too many payments trying to take
advantage of the negative fees, what should it do? The result is either
losses for the node or many rejected forwards. So why not put the funds
one would have spent on negative fees towards active rebalancing
instead?

It is preferable to have payments routed around an exhausted channel
rather than trying to artificially rebalance; after all, if there is a
cycle there must be an alternative route.

So overall, allowing only positive fees makes routing simpler, and still
allows for active rebalancing. As for other applications some have
alluded to, this constraint is only for the routing gossip. Should there
be a good reason to allow increasing the amount forwarded by a peer,
e.g., node n receives x from the previous hop and forwards x+e to the
next hop, that can still be negotiated out of band or even in the onion
payload for that node.
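The x versus x+e case above could be expressed in a per-hop payload, since the onion lets the sender hand each hop private instructions that the public (non-negative) fee schedule cannot express. A sketch, with dict-based encoding and field names as assumptions loosely mirroring the hop data:

```python
def hop_payload(amt_incoming: int, amt_to_forward: int, scid: str) -> dict:
    """Per-hop instruction where the forwarded amount may exceed the
    incoming one; a negative effective fee means this hop subsidizes
    the payment (negotiated privately, never gossiped)."""
    return {
        "short_channel_id": scid,
        "amt_to_forward": amt_to_forward,
        "effective_fee": amt_incoming - amt_to_forward,
    }
```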

Lightning-dev mailing list

Re: [Lightning-dev] Descriptive annotations visible to intermediate nodes

2018-01-11 Thread Christian Decker
Hi Benjamin,

yes, there was a BOLT#6. It was a trivial bootstrapping mechanism I set up
using IRC (like the original bitcoin client), but we retired it in favor of
DNS seeds and gossiping.


On Thu, Jan 11, 2018 at 5:40 PM Benjamin Mord <> wrote:

> Thanks Christian. I'm reading the BOLTs and reviewing source code now, so
> perhaps my question / request will be more usefully specific once I've
> finished that review. Sorry for being vague as to use case for now, let me
> just point out that the concept of source routing opens up a lot of
> potential use cases that involve collaborations with intermediaries of
> various sorts, but for which flexible communication capability would be
> desirable or required (unless you do that out-of-band, which would be so
> messy as to largely defeat the point.)
> I'm impressed so far by how cleanly and explicitly the BOLTs address
> extensibility. I find that reassuring, many thanks to whoever applied their
> technical creativity to this aspect.
> Was there a BOLT #6?
> On Thu, Jan 4, 2018 at 12:13 PM, Christian Decker <
>> wrote:
>> Hi Benjamin,
>> currently the only piece of information that is kept constant along the
>> entire route is the payment hash, and we will be removing that as well
>> in order to further decorrelate hops of a payment and make it harder for
>> forwarding nodes to collate hops into a route.
>> As ZmnSCPxj has pointed out we do have the possibility of adding some
>> information in the onion packet. You can even give every single hop its
>> very specific information in the onion. Say for example you have a node
>> that does currency conversion, you may specify the desired exchange rate
>> in the onion.
>> May I ask what use-case you are looking to implement using this feature?
>> Cheers,
>> Christian
>> Benjamin Mord <> writes:
>> > Are there, or will there be, annotations one can add to a lightning
>> > transaction that can be read by all intermediate nodes along a given
>> route?
>> > Conversely, can one add annotations readable only by certain specific
>> known
>> > (to sender) intermediate nodes?

  1   2   >