Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-06 Thread ZmnSCPxj via Lightning-dev
Good morning Lisa,

>Should a node be able to request more liquidity than they put into the channel 
>on their half? In the case of a vendor who wants inbound capacity, capping the 
>liquidity request
>allowed seems unnecessary.

My initial thought is that it would be dangerous to allow the initiator of the 
request to request arbitrary capacity.

For instance, suppose that, via my legion of captive zombie computers (which 
are entirely fictional and exist only in this example, since I am an ordinary 
human person), I have analyzed the blockchain and discovered that you have 1.0 
BTC reserved for liquidity requests under this protocol.  I could then have one 
of those computers spin up a temporary Lightning node, request 1.0 BTC of 
incoming capacity for only a nominal fee, then shut down the node permanently, 
leaving your funds in an unusable channel, unable to earn routing fees or such.  
This loses you potential earnings from this 1.0 BTC.

If instead I were obligated to tie at least as much capacity into this 
channel, then I would also be locking up at least 1.0 BTC of my own, making 
this attack more expensive for me, as it also costs me any potential earnings 
from the 1.0 BTC that I have locked up.

To my mind, this may become important.
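
To make the asymmetry concrete, here is a rough back-of-the-envelope sketch 
(all numbers are made up purely for illustration):

    # Illustrative only: rough opportunity-cost comparison of the two policies.
    # All figures below are assumptions for the example, not real data.
    advertised_btc  = 1.0      # liquidity the victim has reserved/advertised
    nominal_fee_btc = 0.0001   # small fee the attacker pays with the request
    routing_yield   = 0.01     # assumed annual yield the funds could have earned
    lockup_years    = 0.5      # assumed time the channel stays stuck

    victim_loss = advertised_btc * routing_yield * lockup_years

    # Policy A: requester may ask for arbitrary capacity with no matching funds.
    attacker_cost_uncapped = nominal_fee_btc

    # Policy B: requester must tie up at least as much capacity on their side.
    attacker_cost_matched = nominal_fee_btc + advertised_btc * routing_yield * lockup_years

    print(victim_loss, attacker_cost_uncapped, attacker_cost_matched)
    # Under A the attack costs only the nominal fee; under B it costs the
    # attacker roughly as much as it inflicts, which is the deterrent above.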

Regards,
ZmnSCPxj

> Conclusion
> ===
> Allowing nodes to advertise liquidity paves the way for automated node 
> re-balancing. Advertised liquidity creates a market of inbound capacity that 
> any node can take advantage of, reducing the amount of out-of-band 
> negotiation needed to get the inbound capacity that you need.
>
> Credit to Casey Rodarmor for the initial idea.


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-06 Thread ZmnSCPxj via Lightning-dev
Good morning Laolu,

>> I worry about doing away with initiator distinction
>
> Can you re-phrase this sentence? I'm having trouble parsing it, thanks.

The initiator of an action is the node which performs its first step.

For instance, when opening a channel, the node which initiates the channel open 
is the initiator.  Even in a dual-funding channel open, we should distinguish 
the initiator.

What I want to preserve (as for current channel opening) is that the initiator 
of an action should be the one to pay any costs or fees for that action.

For instance, when opening a channel, the channel opener is the one who pays 
for all onchain fees related to opening and closing the channel, as the opening 
node is the initiator of the action.

Similarly, for channel splicing, I think it would be wiser to have the 
initiator of the splice be the one to pay for any onchain fees related to 
splicing (and any backoff/failure path, if some backoff is needed), even if the 
other side then also decides to splice in/out some funds together with the 
splice.

To my mind, this is wiser as it reduces the surface of potential attacks in 
case of a bad design or implementation of dual-fund-opening and splicing; to 
engage in the attack, one must be willing to shoulder all the onchain fees, 
which hopefully should somewhat deter all but the most egregious or lopsided of 
attacks.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Wireshark plug-in for Lightning Network(BOLT) protocol

2018-11-06 Thread tock203
We implemented the latter scheme. lightning-dissector already supports key
rotation.
FYI, here's the key log file format lightning-dissector currently
implements.
https://github.com/nayutaco/lightning-dissector/blob/master/CONTRIBUTING.md#by-dumping-key-log-file

Whenever key rotation happens (nonce == 0), the lightning node software writes
the 16-byte MAC and key of the "first BOLT packet".
When you read a .pcap that starts with a message whose nonce is not 0, the
messages cannot be decrypted until the next key rotation.

The current design is as described above. Because it is a provisional
specification, any opinion is welcome.
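
As a rough sketch of what a node might emit under that format (the record
layout below is an assumption for illustration; the linked CONTRIBUTING.md is
authoritative):

    # Illustrative sketch: write one key-log entry per key rotation (nonce == 0),
    # recording the 16-byte MAC of the first BOLT packet under the new key plus
    # the key itself.  The exact record layout is defined in the linked
    # CONTRIBUTING.md; the hex-on-one-line form here is only an assumption.
    import binascii

    def write_keylog_entry(keylog_file, first_packet_mac: bytes, key: bytes):
        assert len(first_packet_mac) == 16 and len(key) == 32
        keylog_file.write(binascii.hexlify(first_packet_mac).decode() + " "
                          + binascii.hexlify(key).decode() + "\n")
        keylog_file.flush()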

On Tue, Nov 6, 2018 at 16:08 Olaoluwa Osuntokun wrote:

> Hi tomokio,
>
> This is so dope! We've long discussed creating canned protocol transcripts for
> other implementations to assert their responses against, and I think this is a
> great first step towards that.
>
> > Our proposal:
> > Every implementation has a compile option which enables output of a key
> > information file.
>
> So is this request to add an option which will write out the _plaintext_
> messages to disk, or an option that writes out the final derived read/write
> secrets to disk? For the latter path, the tools that read these transcripts
> would need to be aware of key rotations, so they'd be able to continue to
> decrypt the transcript post rotation.
>
> -- Laolu
>
>
> On Sat, Oct 27, 2018 at 2:37 AM  wrote:
>
>> Hello lightning network developers.
>> Nayuta team is developing a Wireshark plug-in for the Lightning Network (BOLT)
>> protocol.
>> https://github.com/nayutaco/lightning-dissector
>>
>> It’s an alpha version, but it can decode some BOLT messages.
>> Currently, this software works with Nayuta’s implementation (ptarmigan) and
>> Éclair.
>> When ptarmigan is compiled with a certain option, it writes out a key
>> information file. This Wireshark plug-in decodes packets using that file.
>> When you use Éclair, this software parses the log file.
>>
>> In our development experience, interoperability testing is a
>> time-consuming task.
>> If people can see the communication log of BOLT messages in the same format
>> (.pcap), it will be useful for interoperability testing.
>>
>> Our proposal:
>> Every implementation has a compile option which enables output of a key
>> information file.
>>
>> We would be glad if this project is useful for the Lightning Network ecosystem.
>>


Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Pierre
Hi Rusty,

> funding_satoshis
> ----------------
>
> c-lightning: must be >= 1000 (after reserve subtracted)
> Eclair: must be >= 10
> lnd: any
>
> SUGGESTION: At 253 satoshi/kSipa, and dust at 546, 1000 is too low to be
> sane (one output would always be dust).  Eclair seems pessimistic
> though; should we suggest that any channel under 3 x min(our_dust_limit,
> their_dust_limit) SHOULD be rejected as too small (i.e. min is 546*3)?
>

The rationale for a relatively high minimum funding_satoshis is to avoid tons
of unilateral channel closings when there is a network fee spike. We still care
as a fundee, because we may have a positive balance and will be annoyed if our
funds are delayed.
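
(For reference, a minimal sketch of the acceptance check suggested above;
variable names are illustrative, not spec field names:)

    # Illustrative: reject channels smaller than 3 x min(our_dust_limit,
    # their_dust_limit), per the suggestion quoted above (546 * 3 = 1638 sat
    # with today's defaults).  Names are made up for the example.
    def funding_is_acceptable(funding_satoshis: int,
                              our_dust_limit: int = 546,
                              their_dust_limit: int = 546) -> bool:
        return funding_satoshis >= 3 * min(our_dust_limit, their_dust_limit)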

>
>
> dust_limit_satoshis
> ---
>
> c-lightning: gives 546, accepts any.
> Eclair: gives 546 (configurable), accepts >= 546.
> lnd: gives 573, accepts any.
>
> (Note: eclair's check here is overzealous, but friendly).

The reasoning is that we do care about the remote's commitment tx
dust_limit in a data-loss recovery scenario.

>
> SUGGESTION: we have to make this variable in future anyway (IRL it
> depends on min_relay_fee, which in turn depends on backlog...).
> I'd love to just pick a number :(

Me too!

>
>
> max_htlc_value_in_flight_msat
> -
> c-lightning: gives U64_MAX, accepts >= 100.
> Eclair: gives 50, accepts any.
> lnd: gives capacity - reserve, accepts >= 5 * htlc_minimum_msat.
>
> SUGGESTION: maybe drop it (must be U64_MAX?).

Agreed.

>
>
> channel_reserve_satoshis
> ------------------------
>
> c-lightning: gives 1% (rounded down), accepts <= capacity - 100.
> Eclair: gives 1% (?), accepts <= 5% (configurable)
> lnd: gives 1%, accepts <= 20%
>
> SUGGESTION: require it be 1% (rounded down).

Agreed.

>
>
> htlc_minimum_msat
> -
>
> c-lightning: gives 0, accepts up to 0.1% of channel capacity.
> Eclair: gives 1, accepts any.
> lnd: gives 1000, accepts any.
>
> SUGGESTION: maybe drop it (ie. must be 0?)

Why not, given that relay fees make it non-free anyway.

>
>
> to_self_delay
> -
>
> c-lightning: gives 144, accepts <= 2016
> Eclair: gives 144, accepts <= 2000
> lnd: gives 144-2016 (proportional to funding), accepts <= 1
>
> SUGGESTION: require it to be <= 2016.  Weaker suggestion: make it 2016,
> or use lnd's proportional formula.

2016 is sooo long though ;-) Especially given the high number of unilateral
closes we still see on mainnet. How about <= 1008?


>
>
> max_accepted_htlcs
> --
>
> c-lightning: gives 483, accepts > 0.
> Eclair: gives 30, accepts any.
> lnd: gives 483, accepts >= 5
>
> SUGGESTION: not sure why Eclair limits.  Maybe make it 483?

We wanted to avoid having a huge commitment tx and a corresponding
network fee. Since
the funder pays the fee, there is a loose connection between this,
funding_satoshis and
htlc_minimum_msat.

>
>
> minimum_depth
> -
>
> c-lightning: gives 3, accepts <= 10.
> Eclair: gives 3, accepts any.
> lnd: gives 3-6 (scaling with funding), accepts any.
>
> SUGGESTION: This field is only a suggestion anyway; you can always wait
> longer before sending funding_locked.  Let's limit it to <= 6?

I'm fine with <= 6, but as someone else noted, this would be Bitcoin specific.

> Are there any other areas of hidden consensus we should illuminate and maybe
> simplify?

The two obvious ones are "who should force close when an error happens" and
"what is the current feerate" but both are being handled in the new commitment
format proposal.

I think we should also reconsider the hardcoded maximum funding_satoshis (maybe
dig up the "dangerous" flag proposal?).


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-06 Thread Christian Decker
Olaoluwa Osuntokun  writes:

>> However personally I do not really see the need to create multiple channels
>> to a single peer, or increase the capacity with a specific peer (via splice
>> or dual-funding).  As Christian says in the other mail, this consideration
>> is that it becomes less a network and more of some channels to specific big
>> businesses you transact with regularly.
>
> I made no reference to any "big businesses", only the utility that arises
> when one has multiple channels to a given peer. Consider an easier example:
> given the max channel size, I can only ever send 0.16 or so BTC to that
> peer. If I have two channels, then I can send 0.32 and so on. Consider the
> case post-AMP where we maintain the current limit on the number of in-flight
> HTLCs. If AMP causes most HTLCs to generally be in flight within the
> network, then all of a sudden this "queue" size (outstanding HTLCs in a
> commitment) becomes more scarce (assume a global MTU of say 100k sat for
> simplicity). This may then prompt nodes to open additional channels to
> other nodes (1+) in order to accommodate the increased HTLC bandwidth load
> due to the sharded multi-path payments.

I think I see the issue now, thanks for explaining. However I get the
feeling that this is a rather roundabout way of increasing the
limitations that you negotiated with your peer (max HTLC in flight, max
channel capacity, ...), so wouldn't those same limits also apply across
all channels that you have with that peer? Isn't the real solution here
to lift those limitations?
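
(For rough context, the per-channel ceilings in question, using today's
defaults; the arithmetic below is purely illustrative:)

    # Illustrative: how the per-channel limits scale with the number of
    # channels to the same peer.  Values are the current spec defaults.
    MAX_FUNDING_SAT    = 2 ** 24   # ~0.168 BTC maximum channel size
    MAX_ACCEPTED_HTLCS = 483       # maximum concurrent HTLCs per direction

    for channels in (1, 2, 4):
        print(channels,
              channels * MAX_FUNDING_SAT / 1e8,   # max BTC sendable to that peer
              channels * MAX_ACCEPTED_HTLCS)      # max in-flight HTLC "slots"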

> Independent on bolstering the bandwidth capabilities of your links to other
> nodes, you would still want to maintain a diverse set of channels for fault
> tolerance, path diversity, and redundancy reasons.

Absolutely agree, and it was probably my mistake for assuming that you
would go for the one peer only approach as a direct result of increasing
bandwidth to one peer.


Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Gert-Jaap Glasbergen
> On 7 Nov 2018, at 12:01, Anthony Towns  wrote:
> 
> I don't think it quite makes sense either fwiw.

Glad it’s not just me :)

> What's enforcable on chain will vary though -- as fees rise, even if the
> network will still relay your 546 satoshi output, it may no longer be
> economical to claim it, so you might as well save fees by not including
> it in the first place.

I agree here, but there’s a provision in place to cope with this. People can 
define the minimum size of payments / channel balances they are willing to 
accept, in order to prevent producing dust or trimmed outputs. They can adhere 
to certain limits within their own control. If fees vary, you can accept their 
temporary nature and leave the channel in place for low tides, or if fees rise 
more structurally, close channels and reopen them with higher limits. 
The key is that it’s in your control.

> Otherwise, if you're happy accepting 652 satoshis, I don't see why you
> wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
> you're no worse off, in any event.

I wouldn’t be worse off when accepting the payment, I agree. I can safely 
ignore whatever fraction was sent if I don’t care about it anyway. The protocol 
is however expecting (if not demanding) me to also route payments with 
fractions, provided they are above the set minimum. In that case I’m also 
expected to send out fractions. Even though they don’t exist on-chain, if I 
send a fraction of a satoshi my new balance will be 1 satoshi lower on-chain 
since everything is rounded down.
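
A tiny illustration of that rounding:

    # Illustrative: off-chain balances are tracked in millisatoshi, but an
    # on-chain output can only carry whole satoshis, so the balance is
    # rounded down at settlement time.
    balance_msat = 100_000_000       # exactly 100,000 sat off-chain
    balance_msat -= 300              # forward a 0.3 sat fraction onward

    onchain_sat = balance_msat // 1000
    print(balance_msat, onchain_sat) # 99,999,700 msat -> 99,999 sat on-chain:
                                     # one whole satoshi lower, as noted above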

If forwarding the payment is optional, then that technically gives me an out to 
implement my desired behaviour. But I think it would be highly harmful to the 
reliability of the network if a client were to simply not route payments that 
don’t adhere to their (undocumented) requirements. It would be much more 
sensible for nodes to be made aware of those requirements, to prevent them from 
trying to route through channels in vain. That’s why I would prefer this to be 
part of the channel’s properties, so everyone is aware.

> Everything in open source is configurable by end users: at worst, either
> by them changing the code, or by choosing which implementation to use…

Well, yes, in that sense it is. But the argument was made that it’s too complex 
for average users to understand: I agree there, but that’s no reason to not 
make the protocol support this choice. The fact that the end user shouldn’t be 
bothered with the choice doesn’t prohibit the protocol from supporting it.

Gert-Jaap.




Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Anthony Towns
On Tue, Nov 06, 2018 at 10:22:56PM +, Gert-Jaap Glasbergen wrote:
> > On 6 Nov 2018, at 14:10, Christian Decker  
> > wrote:
> > It should be pointed out here that the dust rules actually prevent us
> > from creating an output that is smaller than the dust limit (546
> > satoshis on Bitcoin). By the same logic we would be forced to treat the
> > dust limit as our atomic unit, and have transferred values and fees
> > always be multiples of that dust limit.
> I don’t follow the logic behind this.

I don't think it quite makes sense either fwiw.

> > 546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
> > the current minimum fee and value transferred. I think we will have to
> > deal with values that are not representable / enforceable on-chain
> > anyway, so we might as well make things more flexible by keeping
> > msatoshis.
> I can see how this makes sense. If you deviate from the realms of what is 
> possible to enforce on chain,

What's enforceable on chain will vary though -- as fees rise, even if the
network will still relay your 546 satoshi output, it may no longer be
economical to claim it, so you might as well save fees by not including
it in the first place.

But equally, if you're able to cope with fees rising _at all_ then
you're already okay with losing a few dozen satoshis here and there, so
how much difference does it make if you're losing them because fees
rose, or because there was a small HTLC that you could've claimed in
theory (or off-chain) but just can't claim on-chain?

> Again, I am not advocating mandatory limitations to stay within base layer 
> enforcement, I am advocating _not_ making it mandatory to depart from it.

That seems like it adds a lot of routing complexity for every node
(what is the current dust level? does it vary per node/channel? can I
get a path that accepts my microtransaction HTLC? do I pay enough less
in fees that it's better to bump it up to the dust level?), and routing
is already complex enough...

You could already get something like this behaviour by setting a high
"fee_base_msat" and a low "fee_proportional_millionths" so it's just
not economical to send small transactions via your channel, and a
corresponding "htlc_maximum_msat" to make sure you aren't too cheap at
the top end.
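
For example, using the standard fee formula (the policy numbers here are only
illustrative):

    # Illustrative: a high base fee plus a low proportional fee makes tiny
    # payments uneconomical through this channel without affecting larger ones
    # much.  Policy numbers below are made up.
    fee_base_msat = 5_000                 # 5 sat flat fee per forward
    fee_proportional_millionths = 10      # 0.001% proportional fee

    def fee_msat(amount_msat: int) -> int:
        return fee_base_msat + amount_msat * fee_proportional_millionths // 1_000_000

    print(fee_msat(1_000))       # 1 sat payment   -> 5,000 msat fee (500% of amount)
    print(fee_msat(10_000_000))  # 10k sat payment -> 5,100 msat fee (~0.05%)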

Otherwise, if you're happy accepting 652 satoshis, I don't see why you
wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
you're no worse off, in any event.

> I would not envision this to be even configurable by end users. I am just 
> advocating the options in the protocol so that an implementation can choose 
> what security level it prefers. 

Everything in open source is configurable by end users: at worst, either
by them changing the code, or by choosing which implementation to use...

Cheers,
aj



Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-11-06 Thread Rusty Russell
Olaoluwa Osuntokun  writes:
>> Mainly limitations of our descriptor language, TBH.
>
> I don't follow...so it's a size issue? Or wanting to avoid "repeated"
> fields?

Not that smart: tools/extract-formats.py extracts descriptions from the
spec for each message.  It currently requires constants in the field
lengths, and these would be variable.

We'd have to teach it about messages within messages, eg:

1. subtype: 1 (`splice_add_input`)
2. data:
   * [`8`: `satoshis`]
   * [`32`: `prevtxid`]
   * [`4`: `prevtxoutnum`]
   * [`2`: `wscriptlen`]
   * [`wscriptlen`: `wscript`]
   * [`2`: `scriptlen`]
   * [`scriptlen`: `scriptpubkey`]

1. subtype: 2 (`splice_add_output`)
2. data:
   * [`32`:`channel_id`]
   * [`8`: `satoshis`]
   * [`2`: `scriptlen`]
   * [`scriptlen`: `outscript`]

1. type:  40 (`splice_add`) (`option_splice`)
   * [`32`:`channel_id`]
   * [`2`: `num_splice_in`]
   * [`num_splice_in*splice_add_input`: `inputs`]
   * [`2`: `num_splice_out`]
   * [`num_splice_out*splice_add_output`: `outputs`]
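
For concreteness, a minimal sketch of how the nested `splice_add_input`
subtype above might be serialized, assuming BOLT-style big-endian integers
(this only illustrates the draft layout, it is not part of it):

    # Illustrative serialization of the draft `splice_add_input` subtype above.
    # Assumes big-endian fixed-width integers as elsewhere in the BOLTs.
    import struct

    def encode_splice_add_input(satoshis: int, prevtxid: bytes,
                                prevtxoutnum: int, wscript: bytes,
                                scriptpubkey: bytes) -> bytes:
        assert len(prevtxid) == 32
        return (struct.pack(">Q", satoshis)             # [`8`: `satoshis`]
                + prevtxid                              # [`32`: `prevtxid`]
                + struct.pack(">I", prevtxoutnum)       # [`4`: `prevtxoutnum`]
                + struct.pack(">H", len(wscript))       # [`2`: `wscriptlen`]
                + wscript                               # [`wscriptlen`: `wscript`]
                + struct.pack(">H", len(scriptpubkey))  # [`2`: `scriptlen`]
                + scriptpubkey)                         # [`scriptlen`: `scriptpubkey`]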

>> We're basically co-generating a tx here, just like shutdown, except it's
>> funding a new replacement channel.  Do we want to CPFP this one too?
>
> It'd be nice to be able to also anchor down this splicing transaction given
> that we may only allow a single outstanding splicing operation to begin
> with. Being able to CPFP it (and later on provide arbitrary fee inputs)
> allows be to speed up the process if I want to queue another operation up
> right afterwards.

That has some elegance (we would use whatever fee scheme we will use for
commitment txs), but it means we will almost always *have* to CPFP, which
is unfortunate for chain bloat.

Cheers,
Rusty.


Re: [Lightning-dev] Improving payment UX with low-latency route probing

2018-11-06 Thread Pierre
Hi Laolu and Fabrice,

> I think HORNET would address this rather nicely!

HORNET sounds awesome, but does it really address the specific issue
Fabrice is describing? IIUC, HORNET would operate at a lower
layer, and it could be possible to have a valid circuit and still be
waiting indefinitely for a revocation. OTOH it certainly would address
the case where the peer is completely unresponsive.

For example, I have already seen peers which don't send revocations,
but e.g. respond to pings just fine.

Actually, re-reading Fabrice's proposal I wonder if one could make the
same comment about it. Would the 0-satoshi payment require the
commit_sig/revoke_and_ack dance? If not, would we really gain more
confidence in the availability of the peers in the route?

>> It is already possible to partially mitigate this by disconnecting
from a node that is taking too long to send a revocation (after 30
seconds for example)

Actually I think this would substantially mitigate the issue at hand. I
believe we should probably add this to BOLT 2 in the form of a
"SHOULD" clause. I feel bad because in [1] I suggested doing just that
in lnd, but we don't actually do it in eclair :-/ Will eat my own dog
food asap!
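
Something along these lines, as a rough sketch (the helper names and the
30-second figure are illustrative, not any implementation's actual API):

    # Illustrative: after sending commit_sig, expect revoke_and_ack within a
    # deadline; otherwise disconnect and reconnect so pending downstream HTLCs
    # can be forgotten and the corresponding upstream HTLCs failed.
    import time

    REVOCATION_TIMEOUT_SECONDS = 30

    def await_revocation(peer, deadline=REVOCATION_TIMEOUT_SECONDS):
        start = time.monotonic()
        while time.monotonic() - start < deadline:
            msg = peer.poll_message(timeout=1.0)   # hypothetical helper
            if msg is not None and msg.type == "revoke_and_ack":
                return msg
        peer.disconnect()    # forget pending downstream HTLCs...
        peer.reconnect()     # ...and fail the corresponding upstream HTLCs
        return None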

Cheers,

Pierre

[1] https://github.com/lightningnetwork/lnd/issues/2045#issuecomment-429561637


Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Gert-Jaap Glasbergen
> On 6 Nov 2018, at 14:10, Christian Decker  wrote:
> 
> It should be pointed out here that the dust rules actually prevent us
> from creating an output that is smaller than the dust limit (546
> satoshis on Bitcoin). By the same logic we would be forced to treat the
> dust limit as our atomic unit, and have transferred values and fees
> always be multiples of that dust limit.

I don’t follow the logic behind this. I can see how you can’t make outputs 
below dust, but not how every transferred value must be a multiple of it. My 
minimum HTLC should be 546 - sure - but then I can also make HTLCs worth 547, 
548? I don’t see how the next possible value transfer has to be 2*546. On 
single-hop transfers it could even be possible to make a trustless payment of 1 
satoshi, provided the protocol would allow doing this without an HTLC.

> 546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
> the current minimum fee and value transferred. I think we will have to
> deal with values that are not representable / enforceable on-chain
> anyway, so we might as well make things more flexible by keeping
> msatoshis.

I can see how this makes sense. If you deviate from the realms of what is 
possible to enforce on chain, you may as well take as much advantage as 
possible of the tradeoff you’ve chosen. So in that scenario (you are already 
departing from on-chain enforcement) msatoshi makes for much broader 
applicability. However, my argument would be that this departure should be a 
conscious choice.

Again, I am not advocating mandatory limitations to stay within base layer 
enforcement, I am advocating _not_ making it mandatory to depart from it.

> With a lot of choice comes great power, with great power comes great
> responsibility... uh I mean complexity :-) I'm all for giving users the
> freedom to choose what they feel comfortable with, but this freedom comes
> at a high cost and the protocol is very complex as it is. So we need to
> find the right configuration options, and I think not too many users
> will care about their unit of transfer, especially when it's handled
> automatically for them.

I would not envision this to be even configurable by end users. I am just 
advocating the options in the protocol so that an implementation can choose 
what security level it prefers. 

Gert-Jaap



Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Christian Decker
Gert-Jaap Glasbergen  writes:
> On 1 Nov 2018, at 03:38, Rusty Russell <ru...@rustcorp.com.au> wrote:
>> I believe this would render you inoperable in practice; fees are
>> frequently sub-satoshi, so you would fail everything.  The entire
>> network would have to drop millisatoshis, and the bitcoin maximalist in
>> me thinks that's unwise :)
>
> I can see how not wanting to use millisatoshis makes you less compatible
> with other people that do prefer using that unit of account. But in this
> case I think it's important to allow the freedom to choose.
>
> I essentially feel we should be allowed to respect the confines of the layer
> we're building upon. There are already a lot of benefits to achieve from second
> layer scaling whilst still respecting the limits of the base layer. Staying
> within those limits means optimally benefiting from the security it offers.
>
> Essentially, by allowing satoshi to be kept as the smallest fraction, you ensure
> that everything you do off-chain is also valid and enforced by the chain when
> you need it to be. It comes with trade-offs though: it would mean that if someone
> routes your payment, you can only pay fees in whole satoshis - essentially
> meaning that if someone wants to charge a (small) fee, you will be overpaying to
> stay within your chosen security parameters. Which is a consequence of your
> choice.

It should be pointed out here that the dust rules actually prevent us
from creating an output that is smaller than the dust limit (546
satoshis on Bitcoin). By the same logic we would be forced to treat the
dust limit as our atomic unit, and have transferred values and fees
always be multiples of that dust limit.

546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
the current minimum fee and value transferred. I think we will have to
deal with values that are not representable / enforceable on-chain
anyway, so we might as well make things more flexible by keeping
msatoshis.

> I would be happy to make a further analysis on what consequences allowing this
> choice would have for the specification, and come up with a proposal on how to
> add support for this. But I guess this discussion is meant to "test the 
> waters"
> to see how much potential such a proposal would have to eventually be 
> included.
>
> I guess what I'm searching for is a way to achieve the freedom of choice,
> without negatively impacting other clients or users that decide to accept some
> level of trust. In my view, this would be possible - but I think working it 
> out
> in a concrete proposal/RFC to the spec would be a logical next step.

With a lot of choice comes great power, with great power comes great
responsibility... uh I mean complexity :-) I'm all for giving users the
freedom to choose what they feel comfortable with, but this freedom comes
at a high cost and the protocol is very complex as it is. So we need to
find the right configuration options, and I think not too many users
will care about their unit of transfer, especially when it's handled
automatically for them.

Cheers,
Christian


Re: [Lightning-dev] Improving payment UX with low-latency route probing

2018-11-06 Thread Olaoluwa Osuntokun
Hi Fabrice,

I think HORNET would address this rather nicely!

During the set up phase (which uses Sphinx), the sender is able to get a sense
of whether the route is actually "lively" or not, as the circuit can't be
finalized if all the nodes aren't available. Additionally, during the set up
phase, the sender can drop a unique payload to each node. In this scenario, it
may be the amount range the node is looking to send over this circuit. The
intermediate nodes then package up a "Forwarding Segment" (FS) which includes a
symmetric key to use for their portion of the hop, and can also be extended to
include fee information. If this set up phase is payment value aware, then each
node can use a private "fee function" that may take into account the level of
congestion in their channels, or other factors. This would differ from the
current approach in that this fee schedule need not be communicated to the
wider network, only to those wishing to route across that link.

Another cool thing that it would allow is the ability to receive a
protocol-level payment ACK. This may be useful when implementing AMP, as it
would allow the sender to know exactly how many satoshis have arrived at the
other side, adjusting their payment sharding accordingly. Nodes on either side
of the circuit can then also use the data forwarding phase to exchange payment
hashes, perform cool zkcp set up protocols, etc, etc.

The created circuits can actually be re-used across several distinct payments.
In the paper, they use a TTL for each circuit; in our case, we can use a block
height, after which all nodes should reject attempted data forwarding.
A notable change is that each node no longer needs to maintain per-circuit
state as we do now with Sphinx. Instead, the packets that come across contain
all the information required for forwarding (our current per-hop payload). As a
result, we can eliminate the asymmetric crypto from the critical forwarding
path!
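
e.g. an intermediate node's data-phase check could reduce to something as
stateless as this (names are assumptions, just to illustrate the point):

    # Illustrative only: with a block-height TTL and all routing info carried
    # in the packet itself, the forwarding check needs no per-circuit state.
    def should_forward(packet, current_block_height: int) -> bool:
        # Reject data-forwarding attempts after the circuit's expiry height.
        return current_block_height <= packet.circuit_expiry_height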

Finally, this would let nodes easily rotate their onion keys to achieve forward
secrecy during the data phase (but not the set up phase), as in the FS, they
essentially key-wrap a symmetric key (using the derived shared secret for that
hop) that should be used for that data forwarding phase.

There are a number of other cool things integrating HORNET would allow;
perhaps a distinct thread would be a more appropriate place to extol the many
virtues of HORNET ;)

-- Laolu

On Thu, Nov 1, 2018 at 3:05 PM Fabrice Drouin wrote:

> Context
> ==
>
> Sent payments that remain pending, i.e. payments which have not yet
> been failed or fulfilled, are currently a major UX challenge for LN
> and a common source of complaints from end-users.
> Why payments are not fulfilled quickly is not always easy to
> investigate, but we've seen problems caused by intermediate nodes
> which were stuck waiting for a revocation, and recipients who could
> take a very long time to reply with a payment preimage.
> It is already possible to partially mitigate this by disconnecting
> from a node that is taking too long to send a revocation (after 30
> seconds for example) and reconnecting immediately to the same node.
> This way pending downstream HTLCs can be forgotten and the
> corresponding upstream HTLCs failed.
>
> Proposed changes
> ===
>
> It should be possible to provide a faster "proceed/try another route"
> answer to the sending node by using probing with short timeout
> requirements: before sending the actual payment it would first send a
> "blank" probe request along the same route. This request would be
> similar to a payment request, with the same onion packet formatting
> and processing, with the additional requirement that if the next node
> in the route has not replied within the timeout period (typically a
> few hundred milliseconds) then the current node will immediately send
> back an error message.
>
> There could be several options for the probe request:
> - include the same amounts and fee constraints as the actual payment
> request.
> - include no amount information, in which case we're just trying to
> "ping" every node on the route.
>
> Implementation
> 
> ==============
> I would like to discuss the possibility of implementing this with a "0
> satoshi" payment request that the receiving node would generate along
> with the real one. The sender would first try to "pay" the "0 satoshi"
> request using the route it computed with the actual payment
> parameters. I think that it would not require many changes to the
> existing protocol and implementations.
> Not using the actual amount and fees means that the actual payment
> could fail because of capacity issues but as long as this happens
> quickly, and it should since we checked first that all nodes on the
> route are alive and responsive, it still is much better than “stuck”
> payments.
> And it would not help if a node decides to misbehave, but would not
> make things worse than they are now (?)
>
> Cheers,
> Fabrice
>
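
A rough sender-side sketch of the flow proposed above (the helper callable and
the timeout value are hypothetical, purely for illustration):

    # Illustrative "probe then pay" flow: first send a 0-satoshi probe along
    # the candidate route with a short per-hop timeout, and only send the real
    # payment if every hop answered quickly.
    def pay_with_probe(send_htlc, route, payment_hash, amount_msat, probe_hash):
        # send_htlc(route, hash, amount_msat, per_hop_timeout_ms=None) -> bool
        # is a hypothetical helper standing in for the node's HTLC machinery.
        #
        # 1. Probe: a 0-satoshi payment along the same route, failed back fast
        #    by any hop whose successor does not answer within the timeout.
        if not send_htlc(route, probe_hash, 0, per_hop_timeout_ms=500):
            return False          # caller should try another route
        # 2. Route is alive: send the real payment.  It can still fail on
        #    capacity, but it should fail (or succeed) quickly rather than
        #    sit stuck behind an unresponsive node.
        return send_htlc(route, payment_hash, amount_msat)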