[Lightning-dev] Possible Attack IF we add BOTH upfront AND negative routing fees to the Lightning Network

2023-01-01 Thread René Pickhardt via Lightning-dev
Happy new year dear fellow Lightning Network Developers,

last month I have made a small observation why we probably should at most
progress EITHER with `negative fees` [1] OR `upfront fees` [2] but not with
BOTH as adding both features to the protocol would result in a potentially
lucrative attack that I will describe here.

Assumption:
=

For simplicity of the argument please assume all nodes do payment delivery
by optimizing purely for fees instead of probabilistic payment delivery or
a combination of these two and potentially other features in their cost
function. The Argument will however work as long as fees are part of the
cost function.

The Attack:
=

1. Malory sets the routing fees of her channel(s) sufficiently negative.
2. Now the cheapest route for all possible payment pairs on the network
goes through Malory.
3. Malory will accept any incoming HTLC but will shortly after the HTLC is
locked in fail the payment without forwarding.
( 4. Depending on the design of upfront fees she may need a collaborating
proxy node)

Outcome:


1. After announcing the negative routing fees every node that has seen the
`channel_update`  will route through Malory if initiating a payment. This
effectively redirects the entire traffic of the network through her node.
2. Malory has create a DoS attack on her own node but depending on the size
of the network she will not even see it as her channel partners will go
down from the DoS first (or she is able to handle the traffic as she was
prepared)
3. Assuming Malory has enough Channel partners (or collaborates with them)
she can collect the tiny unconditional upfront fees (Depending on the price
of the upfront fees, the size of the network and the base load of payments
per second this may or may not be lucrative)

Also as her fees were so negative most nodes might not even blame her for
the routing failures as they might assume others were just more quickly
sniping that juicy liquidity. Yet Alice has collected some upfront fees of
all payments that are going on at that time.

Some thoughts about mitigation strategies:
=

## Working:
* Choose weather to progress with either negative fees or upfront fees will
stop this particular problem to come up.

## Probably not working:
* Forcing channels with negative fees to set the the upfront fee negative
will not work. This is effectively handing out free money to the channel
partner: As soon as someone announces negative fees the channel partner
will send out fake payments and earn the negative upfront fee.
* Allow `channel_updates` only to be relayed from connections that maintain
a channel so that Mallory cannot quickly inform the entire network about
being the most central node by connecting to everyone may help in
combination with rate limiting of payments and reputation ideas but I guess
others are more experienced than me with reputation systems. Also I think
new participants need `channel_updates` even if they don't have channels
yet.

Own thoughts:
===

As many of you know I am currently writing a paper about the fundamental
limitations of the scaling abilities of the Lightning Network to conduct
Bitcoin payments [3]. Most folks I talk to see deliberate and malicious
channel jamming as a problem. While I agree with the problem I think the
situation is worse. It is my current understanding that natural congestion
resulting from the selfish behavior of both sending and routing nodes will
be a huge challenge for the network. This is amplified by the uncertainty
(for example about liquidity). However, even without uncertainty it will
create an upper boundary of how many payments per second the participants
of the network will be able to conduct. This boundary is more or less given
by the weighted betweenness centrality of the most central node and the
routing throughput that this node is able to handle. More on this is soon
to come here...

That being said, independently of the up-front fees it seems to me that
allowing negative fees tend to increase centralization effects and thus the
price of anarchy and natural congestion. Yet I can't quantify this at this
time and thus I don't know yet if this fundamentally speaks against
negative fees. However as discussed above in combination with upfront fees
there seems to be an economic incentive to abuse both together.

Thanks to Christian Decker for spotting an error in an edge cases when I
initially presented him a similar argument for review.

with kind regards Rene Pickhardt

[1]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-September/003685.html
[2]
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-November/003740.html
[3] https://twitter.com/renepickhardt/status/1605189724293169153
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Fee Ratecards (your gateway to negativity)

2022-09-25 Thread René Pickhardt via Lightning-dev
Dear Lisa and lightning developers,

thank you for your contribution and ideas to the problem of increasing
reliability to the payment deliver process by having balanced channels and
providing liquidity where necessary. This is at least how I understand your
intentions of the proposal. I will just add a few notes and remarks.

1. I think negative fees are certainly an interesting idea that is
worthwhile to investigate. Mathematically speaking it is kind of strange to
cut the routing_costs at 0 and to restrict market participants from
selecting / offering negative fees. In particular min cost flow solvers
should not have any problems with negative fees. I also like the fact that
we would have another reason to deprecate the base fee. However I am
uncertain if negative fees may introduce other problems and unintended side
effects in particular with strategic behavior of routing nodes. The biggest
issue I see is if negative fees would produce a negative cost cycle.
Obviously everyone would try to cancel such a cycle and earn some arbitrage
by moving liquidity. While the first node may be successful I assume nodes
will not directly update fees / propagate gossip which would basically
create a lot of traffic requests to nodes in the negative cost cycle even
though there is no liquidity in that fee band left.

2. I like the fact that fee rate cards would produce a piece wise linear
cost function which should also be convex (assuming the rates increase in
every band). Assuming we move forward with fee rate cards we should make it
an explicit requirement that the fees in higher bands must not decrease,
which seems like a very sane requirement. (Is it?) However I am not sure if
the virtualization of the channel into 4 smaller channels as ZmnSCPxj
suggested to think of it is beneficial for the network. Intuitively (I have
no formal proof or simulation for this yet) I agree with the people who
voiced their concern that this may just overall increase payment latency
and decrease reliability (because selfish senders might start with the
cheapest or cheaper bands). My concern might be mitigated if we started to
share two bits of information about the liquidity in a channel (either
network wide) or as I propose in PR 780 within the friend of the friend
network [1]. In your other mail of this thread [2] you referred to PR 780
and noted that it may be worthwhile to investigate this. I agree and I
invite you to help me doing so.

3. After having privately discussed my recent blog article [3] to set up
valves via `htlc_maximum_msat` with a few node operators and having heard
their thoughts I realized that the idea of fee rate cards might actually be
even more powerful if not used to divide the channel capacity but rather
have various fee rate cards for various `htlc_maximum_msat` values. The
Idea is quite simple: If you have a certain drain on a channel and install
a valve to do flow control you could still offer someone to break the valve
open and get their large payment through and deplete your channel if they
would pay premium for it. Again without formal proof or simulation it is my
intuition that this is a more natural and less complicated design and would
achieve two things which both seem beneficial for the network and its
participants:

3a) Node operators would not loose out the opportunity to route large
payments when setting up valves. The main concern I had from some node
operators about setting up valves was that they know that some channels
have not many forwarding requests but usually rather large ones. Setting up
a valve may not be in their interest as it would be too restrictive. With
various fee rates for various payment sizes they could still allow large
payments (Similar to your proposal where payments that are larger than 25%
of the capacity also cannot be in the first band of the fees)

3b) In contrast to your proposal the sending nodes would not have to have
the doubt in which band the channel currently is and save on guessing. If
the amount is small enough they would usually get the cheaper fee rate but
if they want to move a lot of liquidity they have to offer a premium. I
think this would also produce a convex cost function but reduce the
guessing game and lower failed attempts and latency. It could still happen
that especially for a larger amount that not enough liquidity would be in
the channel and even a small payment could fail (as could happen these
days). Both could happen within your proposal too.

So while I find your proposal interesting I would love to investigate a bit
more how it interacts with other proposals and if we could make your and
other proposals even stronger by combining some of the other ideas.

with kind Regards Rene

[1]: https://github.com/lightning/bolts/pull/780
[2]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-September/003693.html
[3]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-September/003686.html

On Tue, Sep 13, 2022 at 11:15 PM

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-23 Thread René Pickhardt via Lightning-dev
 decent settings.

I am happy if people have more insights into this and challenge my
expectation (because it actually really is only an expectation / intuition
at this point). That being said: Yes even with only a few gossip messages
per day I expect for the given reasons the setup of valves to be possible
and useful!

with kind regards Rene

[1]:
https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution#Uniform_and_piecewise_uniform_distributions
[2]: https://arxiv.org/abs/2103.08576
[3]:
https://github.com/lnresearch/Flow-Control-on-Lightning-Network-Channels-with-Drain-via-Control-Valves/blob/main/Privacy%20Considerations%20of%20signaling%20past%20drain%20via%20%60htlc_maximum_msat%60%20pairs.ipynb

>
> Thanks,
> Matt
>
> On 9/22/22 2:40 AM, René Pickhardt via Lightning-dev wrote:
> > Good morning fellow Lightning Developers,
> >
> > I am pleased to share my most recent research results [1] with you. They
> may (if at all) only have a
> > small impact on protocol development / specification but are actually
> mainly of concern to node
> > operators and LSPs. I still thought they may be relevant for the list.
> >
> > While trying to estimate the expected liquidity distribution in depleted
> channels due to drain via
> > Markov Models I realized that we can exploit the `htlc_maxium_msat`
> setting to act as a control
> > valve and regulate the "pressure" coming from the drain and mitigate the
> depletion of channels. Such
> > ideas are btw not novel at all and heavily used in fluid networks [2].
> Thus it seems very natural
> > that we do the same on the Lightning Network.
> >
> > In the article we show within a theoretic model how expected payment
> failure rates per channel may
> > drop significantly by up to an order of magnitude if channels set up
> proper asymmetric
> > `htlc_maximum_msat` pairs.
> >
> > We furthermore provide in our iPython notebook [3] two experimental
> algorithmic ideas with which
> > node operators can find decent `htlc_maximum_msat` values in a greedy
> fashion. One of the algorithms
> > does not even require to know the drain or payment size distribution or
> build the Markov model but
> > just looks at the liquidity distribution in the channel at the last x
> routing attempts and adjusts
> > the `htlc_maximum_msat` value if the distribution is to far away from a
> uniform distribution.
> >
> > Looking forwards for your thoughts and feedback.
> >
> > with kind regards Rene
> >
> >
> > [1]:
> >
> https://blog.bitmex.com/the-power-of-htlc_maximum_msat-as-a-control-valve-for-better-flow-control-improved-reliability-and-lower-expected-payment-failure-rates-on-the-lightning-network/
> <
> https://blog.bitmex.com/the-power-of-htlc_maximum_msat-as-a-control-valve-for-better-flow-control-improved-reliability-and-lower-expected-payment-failure-rates-on-the-lightning-network/
> >
> > [2]: https://en.wikipedia.org/wiki/Control_valve <
> https://en.wikipedia.org/wiki/Control_valve>
> > [3]:
> >
> https://github.com/lnresearch/Flow-Control-on-Lightning-Network-Channels-with-Drain-via-Control-Valves/blob/main/htlc_maximum_msat%20as%20a%20valve%20for%20flow%20control%20on%20the%20Lightnig%20network.ipynb
> <
> https://github.com/lnresearch/Flow-Control-on-Lightning-Network-Channels-with-Drain-via-Control-Valves/blob/main/htlc_maximum_msat%20as%20a%20valve%20for%20flow%20control%20on%20the%20Lightnig%20network.ipynb
> >
> >
> > --
> > https://ln.rene-pickhardt.de <https://ln.rene-pickhardt.de>
> >
> > ___
> > Lightning-dev mailing list
> > Lightning-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-21 Thread René Pickhardt via Lightning-dev
Good morning fellow Lightning Developers,

I am pleased to share my most recent research results [1] with you. They
may (if at all) only have a small impact on protocol development /
specification but are actually mainly of concern to node operators and
LSPs. I still thought they may be relevant for the list.

While trying to estimate the expected liquidity distribution in depleted
channels due to drain via Markov Models I realized that we can exploit the
`htlc_maxium_msat` setting to act as a control valve and regulate the
"pressure" coming from the drain and mitigate the depletion of channels.
Such ideas are btw not novel at all and heavily used in fluid networks [2].
Thus it seems very natural that we do the same on the Lightning Network.

In the article we show within a theoretic model how expected payment
failure rates per channel may drop significantly by up to an order of
magnitude if channels set up proper asymmetric `htlc_maximum_msat` pairs.

We furthermore provide in our iPython notebook [3] two experimental
algorithmic ideas with which node operators can find decent
`htlc_maximum_msat` values in a greedy fashion. One of the algorithms does
not even require to know the drain or payment size distribution or build
the Markov model but just looks at the liquidity distribution in the
channel at the last x routing attempts and adjusts the `htlc_maximum_msat`
value if the distribution is to far away from a uniform distribution.

Looking forwards for your thoughts and feedback.

with kind regards Rene


[1]:
https://blog.bitmex.com/the-power-of-htlc_maximum_msat-as-a-control-valve-for-better-flow-control-improved-reliability-and-lower-expected-payment-failure-rates-on-the-lightning-network/
[2]: https://en.wikipedia.org/wiki/Control_valve
[3]:
https://github.com/lnresearch/Flow-Control-on-Lightning-Network-Channels-with-Drain-via-Control-Valves/blob/main/htlc_maximum_msat%20as%20a%20valve%20for%20flow%20control%20on%20the%20Lightnig%20network.ipynb

-- 
https://ln.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Supporting a custodial user who wishes to withdraw all sats from the account...

2022-08-25 Thread René Pickhardt via Lightning-dev
Dear fellow Lightning Developers,

I was recently on an event where the visitors have been gifted 10k sats on
a custodial wallet. They could spend those sats via some web interface and
an NFC card. During the event I was contacted by several plebs who were
confused about one particular thing:

It was impossible for them to withdraw the full amount from the service.

Pasting an invoice for 10k sats would not work as the custodial service
required a fee budget of 1%. However if people submitted an invoice for
9900 sats the remaining 100 sats were usually not fully required for the
fees. Thus the users may have had a leftover of for example 67 sats. Now
the problem repeated on the residual amount. While some services seem to
have a drain feature for such a situation I find this frustrating and was
wondering if we could help directly on a protocol level.

Here is my proposal for a simple solution to this specific problem:
`option_recipient_pays_routing_fees`

This would be a new flag in invoices signaling that the recipient is
willing to pay for the routing fees by releasing the preimage even if the
full amount has not been arrived in htlcs at the recipient.

So the workflow would be the following:

1. Alice creates an invoice for 10k sats setting the
`option_recipient_pays_routing_fees` flag in the invoice and passes it
either to custodial user Bob or to her own custodial account.
2. The payer parses the invoice and searches for a payment path or payment
flow to Alice.
3. Because `option_recipient_pays_routing_fee` is set, the onion is not
constructed in a way that the final HTLC will be for the amount of 10k sats
but rather in a way that the first htlc will be for 10k sats and the
following HTLCs will be of decreasing value so that routing nodes are
compensated properly.
4. When the HTLC(s) arrive at Alice she will release the preimage if and
only if not too many sats (e.g. 1% of the amount) are missing. Of course it
would be good if the 1% was not hard coded in the protocol / software but
configurable by Alice at the time of invoice creation.

I think the main issue with this proposal is that instead of confusing
users who wish to drain an account we may now have to educate users about
two different invoice types. On the other hand I think this can probably
easily be achieved via the current wide spread user interfaces. Of course
it may be nice to have folks from the Bitcoin Design community to join this
specific part of the discussion.

With kind regards Rene Pickhardt
-- 
https://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Preliminary Hidden Lightning Network Analysis

2022-06-07 Thread René Pickhardt via Lightning-dev
Dear Tony,

Thank you for putting emphasis on this. I was actually waiting for someone
to publicly exploit this.


> The reason this is possible is because [...] currently channel IDs are
> based on UTXO's. Scid aliases may be the biggest benefit here, but the use
> of `unknown_next_peer` , `invalid_onion_hmac`,  `incorrect_cltv_expiry`,
> and `amount_below_minimum` have been the biggest helpers in exploiting
> channel privacy.
>

Just for reference the exploit with short_channel_ids is known since 2019:

https://github.com/lightning/bolts/issues/675

Though it is nice you point out explicitly the use of error codes of
onions.


> By creating a probe guessing the Channel ID based on unspent p2wsh
> transactions, it's a `m * n` problem to probe the entire network, where `m`
> is utxos and `n` is nodes.
>

It is the main reason why I didn't do this. Though similar to you probing
ACINQ's node one could probabilistically learn which nodes tend to have
unannounced channels and gain some speedup by probing those nodes first.

Also wallets tend to have poor utxo management. So looking at the on-chain
signal one can probably guess for a p2wsh to which two nodes it might
belong and try them first.

These two strategies should reduce the number of tested nodes for a newly
seen p2wsh output significantly and probably make it feasible to probe the
network as new blocks come in.

With kind regards Rene Pickhardt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Principle Limitations to the reliability of the Lightning Network Protocol

2022-05-26 Thread René Pickhardt via Lightning-dev
Dear fellow lightning developers,

please note my recent blog article titled "Price of Anarchy from selfish
routing strategies on the Lightning Network" [1] where we investigate how
the selfish behavior of nodes sending Bitcoin over the Lightning Network
may lead to higher drain on channels which in turn is expected to result in
higher depletion and failure rates for payments on the network. All of the
observations have been derived purely be looking at statistical measures
and computations on the data that the Gossip Protocol and Bitcoin Network
provides about the topology of the Lightning Network. No probing or
empirical experiments had to be conducted to derive these theoretical
results. All code can be found in the lnresearch repository at [2].

While those preliminary results are only presented for some of the
strategies that are currently being deployed by `pay` implementations we
have not been able yet to study the dynamics of the entire game, secondary
effects or to find the dominant strategies of routing and sending nodes.
Due to the implications with respect to reliability and payment failure
rates - which I assume many have observed in the wild - I thought I would
already share these early results with you.

While routing nodes seem to be able to mitigate some of the effects we note
in the article that it seems as if the routing nodes can hardly engage into
selfish behavior or strategies themselves to help with flow and congestion
control. This is because it seems as if all operations that we can
currently think of that routing nodes could engage in are limited (through
protocol design) if applied at scale. E.g:

* Adopting fees (limited through gossip relay policies which prevent spam)
* Opening / closing channels (limited through block space)
* Pro active off chain rebalancing (limited through fees that other nodes
charge and the time needed for finding opportunities to conduct rebalancing
and the additional load this put to the network )
* Pro active on chain rebalancing (limited through block space and routing
fees)

I hope the described effects won't be too strong for the expected traffic
and usage of the network so that the technology will work properly at the
required scale. I am very happy for your thoughts, feedback, comments and
questions as I find it fascinating to see how the game theory of the
Lightning network will eventually play out and at least in my current
understanding seems to produce limitations to the amount of traffic the
protocol may eventually be able to handle.

with kind Regards Rene Pickhardt

[1]:
https://blog.bitmex.com/price-of-anarchy-from-selfish-routing-strategies/
[2]: https://github.com/lnresearch/Price-Of-Anarchy-in-Selfish-Routing
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Invitation to test our research on probabilistic and optimal payment flows. I made it quick & easy for you (:

2022-05-12 Thread René Pickhardt via Lightning-dev
Dear fellow lightning developers,

last week I have started a new repository [0] where I maintain a python
package that can be used to test (and more importantly simulate) the
improvements to payment delivery that we have suggested over the last years
(c.f. [1][2][3]). I kindly invite you to check it out.

Feedback & code review will be highly appreciated if you find the spare
time to do so. Similarly I will be delighted if you can provide a patch or
an extension if necessary. Note while this may already be very useful that
this is work in progress and there are already quite some issues open [4]
and I am supervising several Summer of Bitcoin projects [5] that I expect
to contribute to the repository over the next couple of weeks in various
ways.

You can easily install the **pre-alpha** package (version 0.0.0) with the
PythonPackageIndex via:

:~$ pip install pickhardtpayments

Example code that shows how to use this library can be found in the
`Readme.md` file and the `example` folder [6]. Unfortunately my
presentation at MIT Bitcoin Expo where I first talked about this is still
buried in a long Livestream video (Day 2 Track A starts at 1:16:19) and is
not yet available as a stand alone video but I will maintain a
`Resources.md` file where I link to useful talks & articles [7] and other
resources related to the topic.

One final and very significant way to speed up the computation of the min
cost flow solver that I have included since my last mail [3] was achieved
by pruning the network to remove edges that are highly unlikely to be part
of the payment flow. The current solution is pretty ad-hoc but I am working
on doing this in a more automated / reliable way[8]. In this way I am very
pleased to report that as of now and with the given size of the zeroBaseFee
subnet...

**...The min cost flow solver consistently takes less that 100ms to find
close to optimal flows!**

That being said: Please be aware that the entire runtime of the provided
code is currently much higher. The reason is that for mere convenience I
have used the very slow `networkx` library that is used to store the
ChannelGraph, UncertaintyNetwork and OracleLightningNetwork with a lot of
memory overhead to maintain those data structures and to copy the necessary
data to the min cost flow solver before starting the actual solver (c.f.
[9]).

For safety reasons I deliberately did not provide an API to do mainnet
tests. However to the fellow expert user - who knows what they are doing -
it should be fairly straight forward to conduct mainnet experiments via the
following two steps:

1.) Create your own wrapper to an Lightning Node that exposes the
`send_onion` call [10] in the same way how it is currently being done in
the `OracleLightingNetwok` class and bring your own wrapper to the
`SyncSimulatedPaymentSession` class instead of the oracle that we have

2.) You may wish to make the payment loop [11] inside the
`SyncSimulatedPaymentSession` async.

Of course delivering production ready mainnet code will require more work
as one needs to figure out other constraints like channel reserves, min/max
htlc sizes, available HTLC slots, offline peers, hanging htlcs (stuck
payments),...

Also if you want to do simulations with non uniform distributions of
liquidity or because your LSP has a crawled snapshot of actual Liquidity
you can just bring your own `OracleLightningNetwork` that encodes your
assumed or known ground truth about the network.

I hope all of this makes sense. Feel free to ask me anything that you need
and I am curious to see if this will be useful for you. Note that there is
a notebook that has basically the same code but more documentation and in
particular a glossary of terms that can currently be found at [12].

Thanks to all people that I have been collaborating with so far and to
NTNU, BitMEX, various anonymous donors and Patreons who all helped to
achieve this.

with kind regards Rene Pickhardt

[0]: https://github.com/renepickhardt/pickhardtpayments
[1]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-March/002984.html
[2]: https://arxiv.org/abs/2107.05322
[3]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-March/003510.html
[4]: https://github.com/renepickhardt/pickhardtpayments/issues
[5]:
https://www.summerofbitcoin.org/project-ideas-details?recordId=recJchpFa9tqSZkQ4
[6]: https://github.com/renepickhardt/pickhardtpayments/tree/main/examples
[7]:
https://github.com/renepickhardt/pickhardtpayments/blob/main/Resources.md
[8]: https://github.com/renepickhardt/pickhardtpayments/issues/1
[9]: https://github.com/renepickhardt/pickhardtpayments/issues/6
[10]:
https://github.com/renepickhardt/pickhardtpayments/blob/1121b48ec6bf5fb2dee2b1793f87d489ce3149e3/pickhardtpayments/OracleLightningNetwork.py#L37

[11]:
https://github.com/renepickhardt/pickhardtpayments/blob/1121b48ec6bf5fb2dee2b1793f87d489ce3149e3/pickhardtpayments/SyncSimulatedPaymentSession.py#L351
[12]:
https://github.com/renepickhardt/mpp-

Re: [Lightning-dev] Code for sub second runtime of piecewise linarization to quickly approximate the minimum convex cost flow problem (makes fast multi part payments with large amounts possible)

2022-03-14 Thread René Pickhardt via Lightning-dev
Dear Carsten and fellow lightning developers,

thanks for going into such detail and discovering some of the minor
inaccuracies of my very rough piecewise linearization!

On Mon, Mar 14, 2022 at 1:53 PM Carsten Otto  wrote:

> 1.2) The Mission Control information provided by lnd [...]
> > I think you talk a about a maximum available balance of a channel (and
> not
> > min available balance)?
>
> Yes, although MC also has information about "known" amounts (due to
> failures that only happened further down the road).
>

I am unsure how mission control stores and handles that data. In my
understanding they are mainly interested in a statistic of the ratio of
successfull payments over the past X attempts on a channel given a certain
time interval. But I assume they should have all the relevant data to
produce a proper conditional proability to utilize our learnt knowledge.

In any case from the probabilistic model we can do it mathematically
precise by just looking at the conditional probabilities. As said I have
written hands on instructions in the rust repo
https://github.com/lightningdevkit/rust-lightning/issues/1170#issuecomment-972396747
and
they have been fully implemented in
https://github.com/lightningdevkit/rust-lightning/pull/1227. Also in our
mainnet test and simulations we have updated the priors according to those
rules and this revealed the full power of the approach.

To summarize: Basically we need to know the effective uncertainty by only
looking at the effective amount that goes above the minimum certain
liquidity (that we might know from a prior attempt) and the effective
capacity (somebody recently suggested that conditional capacity might be a
better wording)

> Assuming that routing nodes indeed do so we would have learnt that neither
> > channel has an effective capacity of N. So the combined virtual channel
> > could be seen as 2N-1.
>
> You mean 2(N-1) = 2N-2?
>

Probably though the difference would be neglectable and if I understood you
correctly you will just keep parallel channels separate anyway.

> > 4) Leftovers after Piecewise Linearization
> > I am not sure if I understand your question / issue here. The splitting
> > works by selecting N points on the domain of the function and splitting
> the
> > domain into segments at those points. This should never leave sats over.
>
> With quantization of 10,000 a channel of size 123,456 ends up as an arc
> with a capacity of 12 units. Cutting this into 5 pieces gives us
> 5*2 with 2 units not ending up in any of those pieces. Or am I missing
> something here, and we should split into 5 pieces of size 2.4 = 12/5?
>

Your observation is correct! Indeed I think my code rounds down the
capacity instead of going to the correct points and using all of the
capacity in the segmentation by making some channels 1 unit larger than
others which would happen if actually finding points on the domain to build
the segments. This could easily be fixed. However as always: Fully
saturated channels mean very low probabilities so even in my situation
where I may cut off a significant part of the channel I'd say in the
extreme case where we would need to saturate even those sats the flow will
and should most likely fail as the min cust is probably just lower than the
amount we would attempt to send. Probably opening a new channel or doing an
on chain transaction will be more useful. Though of course we should build
the piecewise linearization correctly by the end of the day without
throughing away some capacity.

> If the quantization however makes a channel so small  that we cannot
> > even create 5 (or N) disjoint segments then I guess the likelihood for
> > being included into the final result is too small anyway.
>
> It may not be very likely, but flat-out ignoring 20k sat (in my
> contrived example above) or up to 4*quantization sats (which is the case
> you described) doesn't feel right.
>

See above. I agree it is not 100% accurate. but in practice I doubt it
would ever become a problem as this will only be an issue when the payment
amount is very close to the min cut which would make flows so unlikely to
begin with that we would use other ways to conduct the payment anyway.

witch kind regards Rene
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Code for sub second runtime of piecewise linarization to quickly approximate the minimum convex cost flow problem (makes fast multi part payments with large amounts possible)

2022-03-14 Thread René Pickhardt via Lightning-dev
Dear Carsten, Martin and fellow lightning developers,

first of all thank you very much for independently verifying and
acknowledging my recent findings about the runtime of finding a pieceweise
linearized approximation to the min cost flow problem, for working on
integrating them into lnd-manageJ and for your excellent questions &
thoughts.

On Sun, Mar 13, 2022 at 8:17 PM Carsten Otto via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:


> 1) What's the reasoning behind combining parallel channels?
>

Generally speaking this is pure pragmatism on my end to simplify my life as
handling parallel channels in some cases blows up complexity of code and
simulations. However I think from a probabilistic point of view ( see below
) the combination is more accurate to reflect the actual likelihood that
the liquidity is available.

I agree that parallel channels make things a lot more complicated, but I
> also see the benefit from a node operator's point of view. That being
> said, wouldn't it suffice to treat parallel channels individually?
>

I think that should work and especially when including fees to the cost
function and considering how nodes handle routing requests on parallel
channels we might have to do so anyway. The suggested flows will probably
change in a way that disfavors parallel channels even if their virtual
capacity is larger than an alternative single channel (see below)

1.1) A payment of size 2 needs to be split into 1+1 to fit through
> parallel channels of size 1+1. Combining the 1+1 channels into a virtual
> channel of size 2 only complicates the code that has to do come up with
> a MPP that doesn't over-saturate the actual channels. On the other hand,
> I don't think the probability for the virtual channel of size 2 is more
> realistic than reasoning about two individual channels and their
> probabilities - but I didn't even try to see the math behind that.
> Please prove me wrong? :)
>

* The likelihood that a 1 Satoshi capacity channel has 1 Satoshi to route
is 1/2.
* The likelihood that 2 channels of capacity 1 have each 1 satoshi
available to route is 1/2*1/2 = 1/4
* Combining both parallel channels to one virtual channel of capacity 2 and
asking if 2 satoshis are available to route gives a likelihood of 1/3 which
is larger than 1/4.

However I believe in practice one cannot just send a 2 satoshi onion and
expect the routing node to split the amount  correctly / accordingly
between the two parallel channels. (I might be wrong here). So in that case
modelling and computing probabilities for parallel channels might be
necessary anyway though the math indicates that splitting liquidity in
parallel channels will get you selected less frequently for routing.

1.2) The Mission Control information provided by lnd can be used to
> place a minimum available balance on each of the parallel channels. If
> we know that node A isn't able to forward N sats to node B, we can treat
> all parallel channels between A and B (in that direction) to have a
> capacity of at most N-1 sats. How would this look like if we combined
> the parallel channels into a virtual one? Note that it may still be
> possible to route two individual payments/onions of size N-1 sats from A
> to B, given two parallel channels with that many sats on A's side.
>

I think you talk a about a maximum available balance of a channel (and not
min available balance)?
In the case of parallel channels I am not even sure if such information is
accurate as it is my understanding that the routing node may decide to use
the parallel channel to forward the amount even though the other channel
was specified in the onion.
Assuming that routing nodes indeed do so we would have learnt that neither
channel has an effective capacity of N. So the combined virtual channel
could be seen as 2N-1. However if routing nodes don't locally split a
forwarding request across both channels we would know that calaculating
with 2N-1 is bad as a request of N could not be fulfilled. I guess it is
for the implementations that support parallel channels to figure out the
details here.

2) Optimal Piecewise Linearization
>
> See Twitter [3].
>
> Is it worth it cutting a channel into pieces of different sizes, instead
> of just having (as per your example) 5 pieces of the same size? If it
> makes a noticeable difference, adding some complexity to the code might
> be worth it.
>

I will certainly do experiments or be happy if others are faster to do them
which compare the quality of the approximation with optimal piecewise
linearization to my choice of fixed intervals and the selection of various
numbers of segments. As long as we don't have numbers it is hard to guess
if it is worthwhile adding the complexity. Looking at the current results
it seems that my (geometricly motivated but) arbitrary choice might end up
to be good and easy enough. However we might very well see quite an
improvement of the approximation if we find better piecewise linearizatio

[Lightning-dev] Code for sub second runtime of piecewise linarization to quickly approximate the minimum convex cost flow problem (makes fast multi part payments with large amounts possible)

2022-03-11 Thread René Pickhardt via Lightning-dev
Dear fellow Lightning Developers,

I am pleased (and a bit proud) to be able to inform you that I finally
found a quick way to approximate the slow minimum convex cost flow
computation. This is necessary for optimally reliable and cheap payment
flows [0] to deliver large multi part payments over the Lightning Network.
The proposed solution happens via piecewise linearization [1] of the min
cost flow problem on the uncertainty network which we face in order to
compute the optimal split and planning of large amount multi part payments.
The notion of "optimal" is obviously subjective with respect to the chosen
cost function. As known we suggest to include the negative logarithm of
success probabilities based on the likelihood that enough liquidity is
available on a channel as a dominant feature of the used cost function. We
give the background for this in [2] which since then has already been
picked up by c-lightning and LDK. The c-lightning team even published
benchmarks showing significant improvement in payment speed over their
previously used cost function [2b].

Let me recall that one of the largest criticisms and concerns of our
approach to use minimum cost flows for payment delivery back in July /
August last year (especially by the folks from lightning labs) was that the
min cost flow approach would be impractical due to run time constrains.
Thus I am delighted that with the now published code [3] (which has exactly
100 lines including data import and parsing and ignoring comments) we are
able to compute a reasonable looking approximation to the optimal solution
in a sub second run time on the complete public channel graph of the
Lightning Network. This is achieved via piecewise linearization of the
convex cost function and invoking of a standard linear min cost flow solver
[4] for the linearized problem. This works quickly despite the fact that
the piecewise linearization adds a significant higher amount of arcs to the
network and blows up the size of the network on which we solve the min cost
flow problem. This makes me fairly certain that with proper pruning of the
graph we might even reach the 100 millisecond runtime frontier, which would
be far faster than what I dreamed & hoped to be possible.

The currently widely deployed Dijkstra search to generate a single
candidate path takes roughly 100ms of runtime. It seems that with the
runtime of the piecewise linearized problem the min cost flow approach is
now competitive from a runtime perspective. The flow computation is still a
bit slower than Dijkstra in both theory and practice. However the piecewise
linearized min cost flow has the huge advantage that it generates several
candidate paths for a solid approximation of the optimal MPP split.
Remember the exact min cost flow corresponds to the optimal MPP split. The
later was not used so far as the min cost flow was considered to be too
slow. Yet the question how to split seems to be addressed as issues in
implementations [5][6][7] and acknowledged to be complicated (especially
with respect to fees) in [8]. This result is btw of particular interest for
LSPs. If an LSP has to schedule x payments per second it can just do one
flow computation with several sink nodes and plan all of those payments
with a single min cost flow computation. This globally optimizes the
potentially heavy load that LSPs might have even if all payments were so
small that no splitting was necessary.

The iPython notebook which I shared contains about 1 page to explain how to
conduct a piecewise linear approximation. The quality of the approximation
is not the major goal here as I am just focused to demonstrate the run time
of the approach and the principle how to achieve this runtime. Thus I do
the piecewise linear approximation very roughly in the published code.
Selecting the optimal piecewise approximation [9] will not change the the
runtime of flow computation but only blows up the code to prepare the
solver. This is why I decided to keep the code as simple and short as
possible even if that means that the approximation will be not as close to
the optimum as it could be in practice. For the same reason I did not
include any code to update the uncertainty network from previously failed
or successful attempts by using conditional success probabilities P(X>a |
min_liquidity < X < max_liquidity ). People who are interested might look
at my hands on instructions and explanations for coders in this
rust-lightning issue [10]. The folks from LDK have picked this up and
implemented this already for single path payments in [11] which might be
relevant for people who prefer code over math papers. An obvious
optimization of the piece wise linearization would be to chose the first
segment of the piecewise linear approximation with a capacity of the
certain liquidity and a unit cost of 0 for that piece.

Our original papers describe everything only from a theoretical point of
view and with simulations. However our mainnet experiments f

Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-15 Thread René Pickhardt via Lightning-dev
Dear Joost,

First I am happy that you also agree that reliability can and should be
expressed as a probability as discussed in [0].

The problem that you address is that of feature engineering[1]. Which
consists of two (or even more) steps:

1.) Feature selection: That means in payment delivery we will compute a min
cost flow [2] with a chosen cost function (historically people used
dijkstra seach for single paths with the cost function representing the
weights on the edges of the graph -which is what most folks currently still
do). While [2] and I personally agree with you that the cost function
should be a combination the two features fees and reliability (as in
successprobability) Matt Corallo righfully pointed out [3] that other
features might be chosen in the future to deliver more optimal results. For
example implementations currently often use CLTV as a feature (which I
honestly find horrible) and I am currently investigating if one could add
latency of channels or - for known IP addresses - either the geo distance
or IP distance.

2.) Combining features: This is the question that you are asking. Often
people use a linear weighted sum to combine features. This is what often
happens implicitly in neural networks. While this is often good enough and
while it is often practical to either learn the weights or give users a
choice there are many situation where the weighted linear sum does not work
well with the selected features. An example for the weighted sum is the
risk-factor in c-lightning that could have been used to decide if one
wanted the dijkstra seach to either optimize for CLTV delta or for paid
routing fees. Also in our paper [2] in which we discuss the same two
features that you mentioned we explain how a linear sum of two features can
be optimal due to the lagrangian bounding principle. However in practice
(of machine learning) it has been shown that using the harmonic mean [4]
between features often works very well without the necessity to learn a
weight / parameter. This has for example been done when c-lightnign
recently switched to probabilistic path finding [5]. In this thread you
find a long discussion and evaluation how the harmonic mean outperformed
the linear sum.

I think the main issue that you address here is that there is no universal
truth for situations like this. In practice only tests and experience will
help us to make good decisions.

with kind Regards Rene

[0]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-March/002984.html
[1]: https://en.wikipedia.org/wiki/Feature_engineering
[2]: https://arxiv.org/abs/2107.05322
[3]:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-September/003219.html
[4]:  https://en.wikipedia.org/wiki/Harmonic_mean
[5]: https://github.com/ElementsProject/lightning/pull/4771




On Mon, Nov 15, 2021 at 4:26 PM Joost Jager  wrote:

> In Lightning pathfinding the two main variables to optimize for are
> routing fee and reliability. Routing fee is concrete. It is the sat amount
> that is paid when a payment succeeds. Reliability is a property of a route
> that can be expressed as a probability. The probability that a route will
> be successful.
>
> During pathfinding, route options are compared against each other. So for
> example:
>
> Route A: fee 10 sat, success probability 50%
> Route B: fee 20 sat, success probability 80%
>
> Which one is the better route? That depends on user preference. A patient
> user will probably go for route A in the hope of saving on fees whereas for
> a time-sensitive payment route B looks better.
>
> It would be great to offer this trade-off to the user in a simple way.
> Preferably a single [0, 1] value that controls the selection process. At 0,
> the route is only optimized for fees and probabilities are ignored
> completely. At 1, the route is only optimized for reliability and fees are
> ignored completely.
>
> But how to choose between the routes A and B for a value somewhere in
> between 0 and 1? For example 0.5 - perfect balance between reliability and
> fee. But what does that mean exactly?
>
> Anyone got an idea on how to approach this best? I am looking for a simple
> formula to decide between routes, preferably with a reasonably sound
> probability-theoretical basis (whatever that means).
>
> Joost
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Handling nonzerobasefee when using Pickhard-Richter algo variants

2021-08-30 Thread René Pickhardt via Lightning-dev
Dear ZmnSCPxj,

thank you very much for this mail and in particular the openess to tinker
about a protocol change with respect to the transport layer of HTLCs /
PTLCs! I fully agree that we should think about adopting our onion
transport layer in a way that supports local payment splits and merges to
resemble the flowing nature of payments more closely! While an update for
that might be tricky I think right now is a perfect moment for such
discussions as we kind of need a full upgrade anyway when going to PTLCs
(which in fact might even be helpful with a local splittling / merging
logic as secrets would be additive / linear) Before I elaborate let me
state a few things and correct an error in your mail about the convex
nature of the fee function and the problems of the min-cost flow solver.

I agree with you (and AJ) that there is a problem in the base fee that
comes from overpaying when dissecting a flow into paths which share a
channel. However this is not related to the convexity issue. In a recent
mail [0] I conductd the calculation demonstrating why the current fee
function f(x)=x*r+b is not even linear (every linear function should be
convex). As described in the paper the problem is for the min cost flow
solving algorithms if the cost-function (in that case the fee function) is
neither linear nor convex.

The issue that breaks convexity is really subtle here and comes when going
from 0 flow to some flow. In that sense I apologize that the paper might be
slightly misleading in its presentation. While convexity is often checked
with the second derivative argument our function is in fact defined on
integers and thus not even differentiable which is why the argument with
the second derivative that we use to show the convexity of the negative log
probabilities is not transferable.

Let us better go with the wikipedia definition of convexity [1] that
states:
"In mathematics, a real-valued function is called convex if the line
segment between any two points on the graph of the function lies above the
graph between the two points."

So how you would test convexity is by making sure that for any two values
x_1 and x_2 the line connecting the points (x_1,f(x_1)) and (x_2,f(x_2))
lies above all other points (x,f(x)) for x  \in {x_1, , x_2}. In our
case because f(0)=0 and f(2)=2r+b
The line connecting those two points is defined by l(x)=((2r+b)/2)*x =
(r+b/2)*x which means that l(1)=r+b/2.
however f(1) = r+ b which is larger than l(1) for any positive value of b.
(Note that again with a zerobasefee b=0 we have f(1)=r=l(1) which would be
sufficient)

If you implement the capacity scaling algorithm for a min-cost flow solver
you will see that the algorithm linearizes the convex costs all the time by
switching to a unit cost in each delta-scaling phase (as described in
section 9, 10.2 and 14.1 through 14.5 of the book [2] that we refer to in
the paper and that Stefan already mentioned to you). This seems very
ismilar to the thoughts you seemed to have in your mail (though even after
reading them 3 times I can't verify how you turned the base_fee into a
proportional term by switching to a unisized payment amount). You will also
recognize that the base fee is not linearizable as a unit cost and will
break the reduced cost optimality criterium in the delta phases of the
algorithm. That being said in our mainnet tests we actually used 10k and
100k sats as a fixed min unit size for htlcs in flow computation but more
about that in the already announced and soon to come mail about the mainnet
tests.

Long story short while the base fee yields indeed a problem for a flow to
be dissected into paths the issues seems to be much more severe because of
this non continious jump when going from f(0) to f(1).

All that being said I am very delighted to see that you propose a protocol
change towards "whole flow payments" (Please allow me to still call them
payment flows as our paper title suggested). In my recent mail [0] I said I
would want to write another mail about  our mainnet test results and the
the consequences for the protocol, node operators, users etc... One of the
consequences goes fully along with the idea that you described here and
thus I very much like / appreciate your idea / proposal! As the other mail
was too long anyway let me quickly copy and past the text about the 5
consequences for the protocol that I saw from my draft and note that the
second one goes fully along with your proposal:

Consequences for the Protocol


1.) As we send larger amounts we confirmed what many people already knew:
Stuck and hanging HTLCs are becoming more of a problem. The main reason
that in our experiments we could not deliver the full amount was that we
"lost" htlcs as they did neither arrive at the destination in time nor did
they return as errors to the sender quickly enough before the mpp timeout.
We very very much need a cancable payment mechanism [3]. I know taproot and
PTLCs are coming but I thin

[Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-26 Thread René Pickhardt via Lightning-dev
Dear fellow lightning developers,

with a mixture of shock and disbelief I have been following the (semi)
public discussions for the last 6 weeks and the reaction of some companies
/ people that reached out to me. I have to say I am really surprised by the
amount of hesitation that - despite obvious and overwhelming mathematical
evidence -  a small group of people demonstrated in response to our
results. While I cannot make any sense of this I decided to post here
despite the fact that I believe everything that needed to be said is
already written and explained clearly in our paper [0] about optimally
reliable and cheap payment flows on the Lightning Network. My hope is that
this mail can help us to

* clarify a few misunderstandings
* stop having opinionated discussions about mathematical facts
* find a quick agreement on how to move on

As far as I can tell currently all implementations use some form of
Dijkstra algorithm in payment delivery with a strong emphasize on finding
paths with cheap fees. The reason seems to be that it is a well established
assumption that users would prefer a cheap solution on the market of
offered routing fees. (while routing nodes of course try to offer fees that
maximize their earnings)

If we look at the mentioned paper there are several results (I will soon
publish a separate mail here discussing some of the more interesting
results and consequences but I decided to split it and put this one here
first as it seemed to be more urgent). One of the results which I will
discuss now is the realization that - given the current fee function -
finding the cheapest payment flow is an NP-hard problem because the fee
function is neither linear nor convex.

Our fee function is `f(x) = rx + b` where `r`= fee rate, `b`=base fee and
`x` is the amount we want to send.

As we thought it was obvious that the function is not linear we only
explained in the paper how the jump from f(0)=0 to f(1) = ppm+base_fee
breaks convexity. But as the question came up several times (for example
here [1]) I want to stress that the fee function - despite looking like a
straight line - is **not linear**. While writing this post I realized that
the issue might be that the concept of linearity [2] from the field of
linear algebra seems to be intermixed in the english language and American
school system with the concept of linear polynomials / linear functions
[3]. So maybe that is part of the problem that emerged in previous
discussions.

When I write linear here I am referring back to the concept of linearity
from linear algebra (c.f.: [2]) which btw seems to be also the main reason
why schnorr signatures [2b] are so powerful that everyone is happy they
will find their way into bitcoin. We know that a necessary condition for a
function to be called linear is that the following property holds:

f(x+y)=f(x)+f(y)

however if we look at our setting we see that:

f(x+y) = r(x+y) + b

and

f(x)+f(y) = rx+b + ry+b = r(x+y) + 2b

equality (and thus linearity) only holds if

r(x+y)+b = r(x+y) + 2b <==> b = 2b

the later is true if and only if `b=0`. In other words: **Our current fee
function is only linear if we set the base fee to zero.**

On the other side we refer to research in our paper that shows that an
optimal solution (in this case optimal means cheapest) cannot be found if
the fee function is neither linear nor convex. I think one of the biggest
misunderstandings that I saw in the discussions is that people seem to have
thought this has something to do with our new / proposed method. But as far
as I can tell it does absolutely not have any connection to it. Instead and
as most of you probably know the property to be an NP-hard problem is
completely independent of the used algorithm. This is why in the paper we
suggested to drop the base_fee and mentioned in the German Podcast that the
same effect could already be achieved by node operators today by setting
their base fee to 0. it is also the reason why we asked before we published
the paper why the base fee was introduced and what purpose it served [4].

I am so surprised by some of the resistance because some of the biggest
talking points by Lightning Network critics in the past have been that
routing is either not solved or if it was solved it would be NP-hard. In
our paper we show that delivering a payment can always be modeled as a min
cost flow and the resulting optimization problem will indeed be NP-hard
depending on the cost function. Given the current fee function with a non
zero base fee and the goal of minimizing fees that is followed by all
implementations I have to agree with such critics and say yes, in that
particular case delivering a payment optimally is an NP-hard problem.

Luckily there are things we can do about it:

1.) we can decide to have a different optimization goal. For example in the
Paper we used a previously introduced probabilisitic model that optimizes
for reliability instead of fees and we show that in that case the function

Re: [Lightning-dev] Proposal for an invoice pattern with an embedded Bitcoin onchain address

2021-07-09 Thread René Pickhardt via Lightning-dev
Hi,

I am sorry to hear you had trouble with payment pathfinding. However if I
understand your suggestion correctly I think the proposed functionality
already exists in a very similar way in today's invoices with a mechanism
called fallback address. The main difference seems to be that the fallback
adress is not a human readable part of the invoice string but encoded with
the other data in the bech32 part of the invoice.

Check bolt 11 [1] on github for more detail but I copied the relevant
snippets from there to this mail.


   - f (9): data_length variable, depending on version. Fallback on-chain
   address: for Bitcoin, this starts with a 5-bit version and contains a
   witness program or P2PKH or P2SH address.

The f field allows on-chain fallback; however, this may not make sense for
tiny or time-sensitive payments. It's possible that new address forms will
appear; thus, multiple f fields (in an implied preferred order) help with
transition, and f fields with versions 19-31 will be ignored by readers.

[1]:
https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md

With kind regards Rene Pickhardt

 schrieb am Sa., 10. Juli 2021, 07:56:

> Hi,
>
> I propose a new LN invoice pattern that contains a Bitcoin address for
> onchain transfer as backup.
>
> Motivation: My dream is to have an app wallet that works in a totally
> abstract and transparent way onchain and/or LN depending on the situation.
> Phoenix wallet almost achieves this, but there is still a certain
> LN/onchain distinction that confuses users a bit.
>
> I use Phoenix daily. Today, for some reason, I couldn't pay a friend.
> Payment failed in several attempts. It was not clear why. The fact is that
> I managed to transfer to Breeze and then from there I was finally able to
> transfer to the final destination. For some reason it had no liquidity on
> the specific route. These exception cases greatly confuse the most
> non-expert users. If, on the invoice my friend sent to me, I had embedded a
> Bitcoin address, the wallet could simply ask: "Couldn't send via LN, do you
> want to send it on-chain at XPTO fee rate? It can take a while."
>
> That way, in case of payment failure, there is an immediate onchain backup
> alternative, useful especially when rates are low, like now.
>
> The format could be something like:
>
> :::
>
> Example:
>
> ln:v2:Hi,
>
> I propose a new invoice pattern that contains a Bitcoin address for
> onchain transfer.
>
> Motivation: My dream is to have a portfolio that works in a totally
> abstract and transparent way onchain and/or LN depending on the situation.
> Phoenix wallet almost achieves this, but there is still a certain
> LN/onchain distinction that confuses users a bit.
>
> I use Phoenix daily. Today, for some reason, I couldn't pay a friend.
> Payment failed in several attempts. It was not clear why. The fact is that
> I managed to transfer to Breeze and then from there I was finally able to
> transfer to the final destination. For some reason it had no liquidity on
> the specific route. These exception cases greatly confuse the most
> non-expert users. If, on the invoice my friend sent me, I had embedded a
> Bitcoin address, the wallet could simply ask: "Couldn't send via LN, do you
> want to send onchain at XPTO rate?"
>
> That way, in case of payment failure, there is an immediate onchain backup
> alternative, useful especially when rates are low, like now.
>
> The format could be something like:
>
> :::
>
> Example:
>
>
> ln:v1:bc1qucfe06nunhrczh9nrfdxyvma84thy3eugs0825:lnbc20m1pvjluezpp5qqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqypqhp58yjmdan79s6qqdhdzgynm4zwqd5d7xmw5fk98klysy043l2ahrqsfpp3qjmp7lwpagxun9pygexvgpjdc4jdj85fr9yq20q82gphp2nflc7jtzrcazrra7wwgzxqc8u7754cdlpfrmccae92qgzqvzq2ps8pqqpq9qqqvpeuqafqxu92d8lr6fvg0r5gv0heeeqgcrqlnm6jhphu9y00rrhy4grqszsvpcgpy9qqgq7qqzqj9n4evl6mr5aj9f58zp6fyjzup6ywn3x6sk8akg5v4tgn2q8g4fhx05wf6juaxu9760yp46454gpg5mtzgerlzezqcqvjnhjh8z3g2qqdhhwkj
>
> Thank you.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] bLIPs: A proposal for community-driven app layer and protocol extension standardization

2021-06-30 Thread René Pickhardt via Lightning-dev
Hey everyone,

just for reference when I was new here (and did not understand the
processes well enough) I proposed a similar idea (called LIP) in 2018 c.f.:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-July/001367.html


I wonder what exactly has changed in the reasoning by roasbeef which I will
repeat here:

*> We already have the equiv of improvement proposals: BOLTs. Historically*

>* new standardization documents are proposed initially as issues or PR's when *

>* ultimately accepted. Why do we need another repo? *


As far as I can tell there was always some form of (invisible?) barrier to
participate in the BOLTs but there are also new BOLTs being offered:
* BOLT 12: https://github.com/lightningnetwork/lightning-rfc/pull/798
* BOLT 14: https://github.com/lightningnetwork/lightning-rfc/pull/780
and topics to be included like:
* dual funding
* splicing
* the examples given by Ryan

I don't see how a new repo would reduce that barrier - Actually I think it
would even create more confusion as I for example would not know where
something belongs. That being said I think all the points that are
addressed in Ryan's mail could very well be formalized into BOLTs but maybe
we just need to rethink the current process of the BOLTs to make it more
accessible for new ideas to find their way into the BOLTs? One thing that I
can say from answering lightning-network questions on stackexchange is that
it would certainly help if the BOLTs where referenced  on lightning.network
web page and in the whitepaper as the place to be if one wants to learn
about the Lightning Network

with kind regards Rene

On Wed, Jun 30, 2021 at 4:10 PM Ryan Gentry via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi all,
>
> The recent thread around zero-conf channels [1] provides an opportunity to
> discuss how the BOLT process handles features and best practices that arise
> in the wild vs. originating within the process itself. Zero-conf channels
> are one of many LN innovations on the app layer that have struggled to make
> their way into the spec. John Carvalho and Bitrefill launched Turbo
> channels in April 2019 [2], Breez posted their solution to the mailing list
> for feedback in August 2020 [3], and we know at least ACINQ and Muun
> (amongst others) have their own implementations. In an ideal world there
> would be a descriptive design document that the app layer implementers had
> collaborated on over the years that the spec group could then pick up and
> merge into the BOLTs now that the feature is deemed spec-worthy.
>
> Over the last couple of months, we have discussed the idea of adding a
> BIP-style process (bLIPs? SPARKs? [4]) on top of the BOLTs with various
> members of the community, and have received positive feedback from both app
> layer and protocol devs. This would not affect the existing BOLT process at
> all, but simply add a place for app layer best practices to be succinctly
> described and organized, especially those that require coordination. These
> features are being built outside of the BOLT process today anyways, so
> ideally a bLIP process would bring them into the fold instead of leaving
> them buried in old ML posts or not documented at all.
>
> Some potential bLIP ideas that people have mentioned include: each lnurl
> variant, on-the-fly channel opens, AMP, dynamic commitments, podcast
> payment metadata, p2p messaging formats, new pathfinding heuristics, remote
> node connection standards, etc.
>
> If the community is interested in moving forward, we've started a branch
> [5] describing such a process. It's based on BIP-0002, so not trying to
> reinvent any wheels. It would be great to have developers from various
> implementations and from the broader app layer ecosystem volunteer to be
> listed as editors (basically the same role as in the BIPs).
>
> Looking forward to hearing your thoughts!
>
> Best,
> Ryan
>
> [1]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-June/003074.html
>
> [2]
> https://www.coindesk.com/bitrefills-thor-turbo-lets-you-get-started-with-bitcoins-lightning-faster
>
> [3]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-August/002780.html
>
> [4] bLIP = Bitcoin Lightning Improvement Proposal and SPARK =
> Standardization of Protocols at the Request of the Kommunity (h/t fiatjaf)
>
> [5]
> https://github.com/ryanthegentry/lightning-rfc/blob/blip-0001/blips/blip-0001.mediawiki
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Lightning Network Protocol Suite Diagram - Request for feedback

2021-04-06 Thread René Pickhardt via Lightning-dev
Dear fellow Lightning Developers,

as you all probably know Andreas, Roasbeef and I are working on writing the
book "Mastering the Lightning Network" and I am happy to say that there has
been quite some progress over the last 18 months. In Particular it lead to
a diagram of the Lightning Network Protocol Suite which is the reason I am
sending this email. We would love to have your feedback and criticism (not
on my little design skills but on the content!). You can find the diagram
at:

https://commons.wikimedia.org/wiki/File:Lightning_Network_Protocol_Suite.png

As a Wikipedia Contributor I shared the current Version on Wikimedia
commons where the Diagram can easily be updated but the link will stay the
same as they also allow version control of files. Eventually I am happy to
open a PR and add this to BOLT 0 or the Readme of the lightning-rfc. But
first we would welcome and encourage your feedback and discussion.
Especially the naming of some layers are somewhat novel but also the labels
for the boxes and the geometry and semantics of the boxes themselves.

To give you some context: As you know the BOLTs are somewhat linked and
intermixed. For example BOLT2 describes the channel state machine with the
HTLCs which is mainly needed for routing, BOLT 07 is named gossip routing
though the main onion routing is happening in BOLT 04, ... I am not saying
that such things would be wrong. However I made the experience that those
iterdependencies can sometimes be a bit confusing when introducing the
protocol to new people.

Consequently this lead to plenty of internal discussions which we concluded
by deciding that we need an architecture diagram for the Lightning Network.
After a couple internal iterations we believe it is time to present our
current diagram to the community and hope to get even more feedback before
we `fix` it and use it also as a guide for some of the chapters in the
book. You can see a part of our discussions and previous versions on github
https://github.com/lnbook/lnbook/issues/342

Also please note that BOLT 03 and BOLT 5 which is the Bitcoin stuff is
currently not part of the diagram at all.

with kind Regards Rene



-- 
https://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Towards more reliable payment path finding via probabilistic modeling the uncertainty of channel balance

2021-03-18 Thread René Pickhardt via Lightning-dev
Dear Elias,

Thanks for your kind words!

In the paper we suggest clients sort paths by probability but skip the ones
that charge fees that are too high for a user, which could be defined in a
user setting. I should have repeated that when I expressed that implicitly
with this sentence in my mail:

"This means that nodes which provide a lot of liquidity and thus utility
might be able to charge higher fees (as long as they are small enough so
that users are willing to pay them)"

I think that is very similar to your suggestion and of course one could
include not only fees but other criteria while filtering.

In general I think it is reasonable to be aware of the most likely path as
the inverse probability is the number of expected attempts. This if due to
failures and updating the knowledge and channel probabilities the remaining
path probabilities drop below a certain (configurable) value one might want
to stop trying or consider a more expensive path.

So if for example a user was to say I am willing to pay up to 0.5% of the
amount in routing fees but after a few attempts the likeliest path has only
a 0.01% chance a client could say something like: "it is very unlikely to
deliver the payment but if you pay 0.7% fees there is a chance (want to
try?)" Of course at some point onchain / opening a new channel might be
cheaper which would contribute to the potential emerging fee market.

Instead of filtering paths by fees one could also weight the probabilities
with the fees. An easy (but maybe not optimal) approach for that would be
to multiply the log probabilities (which have to be low for high success)
with the fees. That being said I think the main important result is that we
should always be aware of (multi)path probabilities during the trial and
error phase especially in order to make splitting decisions and to
determine when to stop trying

Best Rene

Elias Rohrer  schrieb am Do., 18. März 2021, 09:34:

> Dear René,
>
> thank you for the great work!
>
> One quick question regarding the consequence you mentioned: it seems
> plausible that manipulating the path choices would become harder, if the
> ability of doing so was correlated with the capacity locked in the network.
> However, if paths were only chosen regarding the probability of payment
> success (and neglecting accruing fees), couldn't high-capacity nodes in
> absence of competition simply raise their fee levels indefinitely, since
> they would be chosen regardless? Do you have any ideas how to protect
> against this?
>
> I imagine that some kind of 'mixed' strategy could be reasonable, in which
> certain paths are pre-filtered based on the probability of payment success,
> and then the final path is selected along the lines of the currently
> deployed fee rate/CLTV risk assessment?
>
> Kind Regards,
>
> Elias
>
> On 17 Mar 2021, at 13:50, René Pickhardt via Lightning-dev wrote:
>
> Dear fellow Lightning Network developers,
>
> I am very pleased to share with you some research progress [0] with
> respect to achieving better payment path finding and a better reliability
> of the payment process.
>
> TL;DR summary: In payment (multi)path finding use the (multi)paths with
> the highest success probability instead of the shortest or cheapest ones.
> (multi)path success probability is the product of channel success
> probabilities. Given current data crawled on the Network the channel
> success probability grows with the capacity of the channel and with smaller
> amounts that are to be sent (which is both intuitively obvious).
> (Multi)path success probability thus declines exponentially the more
> uncertain channels are included.
>
> I understand that the actual payment path finding is not part of the spec
> but I think my results should be relevant to the list since:
>
> a) The payment pathfinding is currently based on trial and error approach
> which has consequences that have not been studied well in the context of
> the Lightning Network
> b) All implementations will use some heuristics in order to achieve
> pathfinding.
> c) Quick path finding is a crucial for a good user experience.
> d) The uncertainty of payment paths is frequently quoted as a major
> criticism of the Lightning Network (c.f. [1]) and I believe the methodology
> of this paper can be used to address this.
>
> The main breakthrough is that  a very simple model that puts the
> uncertainty of channel balances at its heart. We belief the uncertainty of
> channel balance values is the main reason why some payments take several
> attempts and thus take more time.  With the help of probability theory we
> are able to define the channel success and failure probabilities and
> similarly (multi)path success and failure probabilities. Other Failure
> reasons could also be 

[Lightning-dev] Towards more reliable payment path finding via probabilistic modeling the uncertainty of channel balance

2021-03-17 Thread René Pickhardt via Lightning-dev
Dear fellow Lightning Network developers,

I am very pleased to share with you some research progress [0] with respect
to achieving better payment path finding and a better reliability of the
payment process.

TL;DR summary: In payment (multi)path finding use the (multi)paths with the
highest success probability instead of the shortest or cheapest ones.
(multi)path success probability is the product of channel success
probabilities. Given current data crawled on the Network the channel
success probability grows with the capacity of the channel and with smaller
amounts that are to be sent (which is both intuitively obvious).
(Multi)path success probability thus declines exponentially the more
uncertain channels are included.

I understand that the actual payment path finding is not part of the spec
but I think my results should be relevant to the list since:

a) The payment pathfinding is currently based on trial and error approach
which has consequences that have not been studied well in the context of
the Lightning Network
b) All implementations will use some heuristics in order to achieve
pathfinding.
c) Quick path finding is a crucial for a good user experience.
d) The uncertainty of payment paths is frequently quoted as a major
criticism of the Lightning Network (c.f. [1]) and I believe the methodology
of this paper can be used to address this.

The main breakthrough is that  a very simple model that puts the
uncertainty of channel balances at its heart. We belief the uncertainty of
channel balance values is the main reason why some payments take several
attempts and thus take more time.  With the help of probability theory we
are able to define the channel success and failure probabilities and
similarly (multi)path success and failure probabilities. Other Failure
reasons could also be included to the probability distributions.

With the help of crawling small samples of the network we observe that the
probability distributions of the channel balances can be estimated well
with a uniform distribution (which was a little bit surprising) but leads
to surprisingly easy formulas.  We are able to quantify the uncertainty in
the channels and use negative Bernoulli trials to compute the expected
number of attempts that are necessary to deliver a payment of a particular
amount from one node to another participant of the network. This can be
used to abort the trial and error path finding if the probability becomes
to low (expected number of attempts too high)

We can mathematically show what people already knew (and draw conclusions
like the mentioned ones from it):

a) smaller amounts have higher success probabilities
b) the success probability declines exponentially with the number of
uncertain channels in a (multi)path.
c) depending on the payment pair, amount and splitting strategy it can be
decided into how many parts a payment should be split to achieve the
highest success probability.
d) In particular for small amounts splitting almost never makes sense.

We demonstrate that sorting paths by their descending success probability
during the trial and error payment process (instead of currently used
heuristics like fees or route length) and updating the probabilities from
current failures decreases the number of average attempts and produces a
much faster delivery of payments.

Additionally we looked what happened if BOLT14 [2] was implemented or nodes
otherwise would pro-actively rebalance their channels according to previous
research [3] and realized that the observed prior distribution changes from
uniform to normal. This is great as small payments become even more likely
(as one would intuitively assume and as previously showed) Our results show
that probabilisitic path finding on a rebalanced network works even better
(as in fewer failed attempts) which is yet another hint why BOLT14 might be
a good idea. However as mentioned the results can be implemented even
without BOLT14 or without other protocol changes by any implementation.

One consequence from the paper that is not discussed heavily within the
paper that I find pretty interesting is that if implementations follow the
recommendation to use a probabilistic approach they will tend to route
payments along high capacity channels. While the fee based routing can
easily be gamed by dumping fees it is much harder to provide more
liquidity. And if done this would actually provide a service to the
network. This means that nodes which provide a lot of liquidity and thus
utility might be able to charge higher fees (as long as they are small
enough so that users are willing to pay them) which would probably allow
the emergence of a real routing fee market.

One note on the question of MPP: In the last couple weeks I have been
collaborating with Christian Decker. I belief (by using the methodology
from this paper) to also have a definite solution to the question of:

How to split a payment into k parts and how many funds to allocate to each
path to increas

Re: [Lightning-dev] Collaborated stealing. What happens when the final recipient discloses the pre-image

2020-07-17 Thread René Pickhardt via Lightning-dev
Hey Ankit,

The lightning network sees the possession of a preimage as a proof of
payment. And I believe everyone agrees that a court should rule in favor of
A forcing E to deliver the good or reimburse A. The reason is that
possession of the preimage matching the signed payment hash from E is a
much stronger evidence of A actually having paid than E claiming to not
have received anything.
This is also due to the fact that guessing the preimage can practically be
considered impossible (though there is a tiny likelihood)

If E breaches the protocol by giving the preimage to C (for free) instead
of claiming the money from D (and thus settling the Htlc) it will be
considered E's problem, that E did not get reimbursed but just gave out the
preimage for free. (actually E's so called "partner in crime" did get
reimbursed). Even if D would testify that E never settled the Htlc one
would wonder why E never settled the incoming htlc as they should only have
created a payment hash for which they know the preimage. Since A can
actually provide one it is again unlikely if E for example claims they just
used a random hash for which they didn't know the preimage because they
wanted to just see if A has enough liquidity.

With kind regards Rene

Ankit Gangwal  schrieb am Fr., 17. Juli 2020, 08:43:

> Consider A wants to send some funds to E.
>
>
>
> They don’t have a direct payment channel among them. So, they use a
> following path A-B-C-D-E. A is the sender of payment and E is final
> recipient.
>
>
>
> E sends the hash of a secret r to A, A passes on the hash to B, B to C, C
> to D, and D to E.
>
>
>
> E discloses the secret to C (a partner in crime with E) and E do not
> respond to D. C gives the secret to B (settling the HTLC between them).
> Then, B gives the secret to A (settling the HTLC between them).
>
>
>
> A sent (and lost) the money, as E denies receiving the money (and the
> promised service/good).
>
>
>
> How the lightening network sees this? Out of their control?
>
>
>
> --
>
> A_G
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Disclosure of a fee blackmail attack that can make a victim loose almost all funds of a non Wumbo channel and potential fixes

2020-06-17 Thread René Pickhardt via Lightning-dev
Hey everyone and of course good morning ZmnSCPxj (:

about 11 months ago I discovered a potential blackmail attack with HTLCs
after answering this question on stack exchange (c.f
https://bitcoin.stackexchange.com/questions/89232/why-is-my-spendable-msat-much-lower-than-msatoshi-to-us/89235#89235).
This attack is similar to the one that was possible with tx malleability on
the funding transaction without the segwit upgrade (c.f.
https://commons.wikimedia.org/w/index.php?title=File:Introduction_to_the_Lightning_Network_Protocol_and_the_Basics_of_Lightning_Technology_(BOLT_aka_Lightning-rfc).pdf&page=126).
Meaning an attacker can force a victim to lose money and use this fact to
blackmail the victim, to potentially gain / steal some of the lost funds.

TL;DR:
=
* Depending on the circumstances this attack allows an attacker to make
channel partners lose a substantial amount of BTC without substantial costs
for the attacker.
* Depending on the exact circumstances this could be for example ~0.15 BTC.
In particular it demonstrates why opening a channel is not an entirely
trustless activity.
* The attacker will reliably only be able to force the victim to lose this
amount of Bitcoin.
* It is not clear how in practice the attacker could gain this amount or
parts of it as this would involve not only game theory but also rather
quick communication between attacker and victim and customized Lightning
nodes which at least for the victim would be unlikely to exist.
* None of the suggested fixes seems to be satisfying though the current
solution of lowering the maximum amount of HTLCs that can concurrently be
in flight seems to be a reasonable start for now.


Timeline on Disclosure
=
I have disclosed this attack on Sunday July 21st 2019 to Fabrice Drouin
(and shortly after to Christian Decker) in a phone call who in turn has
discussed it with people from the other implementations. From his feedback
I understood that people working on implementations have been more or less
aware of the possibility of this attack. Fabrice also mentioned that he
believed implementations currently try to mitigate this by setting low
limits of allowed / accepted HTLCs in flight. However at that time this was
only true for e-clair. It is now also true for c-lightning and as far as I
know still not true for lnd. Fabrice said that the people he talked to have
suggested that I should eventually describe the attack in public to raise
awareness (also from the group of node operators) for the problems related
to this attack. He also suggested that - if I wanted to - I should update
the rfc with recommendations  and warnings. While I already have in mind
how to change the rfc I wanted to start the discussion first. Maybe some
people find better fixes than just a warning that I have in mind. So far I
didn't do anything because I wanted to also give lnd the chance to handle
the problem.

There are two reasons I disclose this attack today:
1.) I think almost 1 year is enough time to do something about it. The only
implementation that afaik didn't yet is lnd (see below) but I got roasbeefs
ok last week to go ahead and publish the attack anyway so that we can have
a broader discussion on mitigation strategies.
2.) The attack seems actually very similar to the one described in the
"Flood & Loot: A Systemic Attack On The Lightning Network" - paper which
came out 2 days ago (c.f.: https://arxiv.org/abs/2006.08513 ). I believe
any person reading that paper will understand the possibility of the attack
that I describe anyway so I believe it is now more or less public anyway
and thus time for an open / public discussion.

The main difference between the two attacks (if I understand this novel
paper correctly) is: In the "flood and loot"-attack one tries to steal the
HTLC output of the victims. Where in the "flood and blackmail"-attack that
I describe I try to to force the victim to lose almost all its funds due to
high on chain fees (Which I could use to blackmail the victim)

Description of the attack
===
Let us assume the victim has funded a channel with an attacker meaning it
will have to pay the fees for the commitment transaction in case of a force
close.

During a fee spike (let us assume fee estimators suggest 150 sat / byte)
the attacker spams this channel with the maximum possible amount of HTLCs
that the protocol allows. The HTLCs can be of a small value but need to be
bigger than the dust limit so that additional outputs are actually added to
the commitment transaction which makes it quite large in Bytes. According
to the BOLTs these are 483 additional outputs to the commitment
transaction.
The direction of HTLCs are chosen so that the amount is taken from the
`to_remote` output of the attacker (obviously on the victims side it will
be the `to_local` output) For the actual attack it does not matter in which
direction the HTLCs are spammed but economically the direction I propose
makes even more sense for the atta

Re: [Lightning-dev] Force close of channel with unresolved htlc

2020-05-05 Thread René Pickhardt via Lightning-dev
Dear Subhra,

as discussed bilaterally and after clarification of your question the
situation is as follows:

Let us assume A and B have a channel in which A has 4 tokens and B has 6
tokens

Now A offers an HTLC with the amount of 2 tokens and B accepts (receives)
the offer then A and B both have negotiated the HTLC output in the most
recent commitment transaction.

If A stops responding and B has to force close the channel a commitment
transaction with 3 UTXOs will hit the chain. One UTXO with 2 tokens
spendable by A, another one with 6 tokens spendable by B and the received
HTLC output with 2 tokens. This one can be spend by two different
conditions as in the offchain protocol

1.) Before the timelock of the HTLC has passed B can spend the output if B
knows his to_local HTLC secret AND the preimage. OR
2.) after the timelock A can spend the output if A knows the to_remote HTLC
secret.

the mechanism with HTLCs can be read upon in BOLT 2 (channel operation
https://github.com/lightningnetwork/lightning-rfc/blob/master/02-peer-protocol.md)
and the scripts can be seen in BOLT 3:
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md

A less technical summary that is more focused on explaining the concepts is
currently being developed in the routing chapter of mastering the lightning
network:
https://github.com/lnbook/lnbook/blob/43ce57298b4da345286ae3b53c42ea3eb9d9b056/routing.asciidoc

With kind regards Rene Pickhardt

On Tue, May 5, 2020 at 8:06 PM Subhra Mazumdar <
subhra.mazumdar1...@gmail.com> wrote:

> Hi,
>  I am having a doubt regarding force closure of channel. Suppose A->B
> there is an htlc which has been established for transfering fund. Now
> suppose for some unfortunate reason B doesnt have the witness to resolve
> htlc and the mean time A suffers crash fault. Then can B close the channel
> given that it has no way out of resolving the htlc due to lack of witness?
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Direct Message draft

2020-02-20 Thread René Pickhardt via Lightning-dev
Hey Rusty,

I was very delighted to read your proposal. But I don't see how you prevent
message spam. If I understand you correctly you suggest that I can
communicate to any node along a path of peer connections (not necessarily
backed by payment channels but kind of only known through channel
announcements of gossip) via onions and these onions which are send within
a new gossip message are not bound to any fees or payments.

Let's assume I just missed some spam prevention mechanism or that we can
fix them. Do I understand the impact of your suggestion correctly that I
could use this protocol to

1.) create a fee free rebalancing protocol? Because I could also attach a
new lightning message inside the onions that would allow nodes without
direct peer connection to set up a circular rebalancing path.
2.) have the ability to communicate with nodes further away than just my
peers - for example to exchange information for pathfinding and / or
autopilots?


With kind regards Rene

Rusty Russell  schrieb am Do., 20. Feb. 2020, 10:37:

> Hi all!
>
> It seems that messaging over lightning is A Thing, and I want to
> use it for the offers protocol anyway.  So I've come up with the
> simplest proposal I can, and even implemented it.
>
> Importantly, it's unreliable.  Our implementation doesn't
> remember across restarts, limits us to 1000 total remembered forwards
> with random drop, and the protocol doesn't (yet?) include a method for
> errors.
>
> This is much friendlier on nodes than using an HTLC (which
> requires 2 round trips, signature calculations and db commits), so is an
> obvious candidate for much more than just invoice requests.
>
> The WIP patch is small enough I've pasted it below, but it's
> also at https://github.com/lightningnetwork/lightning-rfc/pull/748
>
> diff --git a/01-messaging.md b/01-messaging.md
> index 40d1909..faa5b18 100644
> --- a/01-messaging.md
> +++ b/01-messaging.md
> @@ -56,7 +56,7 @@ The messages are grouped logically into five groups,
> ordered by the most signifi
>- Setup & Control (types `0`-`31`): messages related to connection
> setup, control, supported features, and error reporting (described below)
>- Channel (types `32`-`127`): messages used to setup and tear down
> micropayment channels (described in [BOLT #2](02-peer-protocol.md))
>- Commitment (types `128`-`255`): messages related to updating the
> current commitment transaction, which includes adding, revoking, and
> settling HTLCs as well as updating fees and exchanging signatures
> (described in [BOLT #2](02-peer-protocol.md))
> -  - Routing (types `256`-`511`): messages containing node and channel
> announcements, as well as any active route exploration (described in [BOLT
> #7](07-routing-gossip.md))
> +  - Routing (types `256`-`511`): messages containing node and channel
> announcements, as well as any active route exploration or forwarding
> (described in [BOLT #7](07-routing-gossip.md))
>- Custom (types `32768`-`65535`): experimental and application-specific
> messages
>
>  The size of the message is required by the transport layer to fit into a
> 2-byte unsigned int; therefore, the maximum possible size is 65535 bytes.
> diff --git a/04-onion-routing.md b/04-onion-routing.md
> index 8d0f343..84eff9a 100644
> --- a/04-onion-routing.md
> +++ b/04-onion-routing.md
> @@ -51,6 +51,7 @@ A node:
>  * [Legacy HopData Payload Format](#legacy-hop_data-payload-format)
>  * [TLV Payload Format](#tlv_payload-format)
>  * [Basic Multi-Part Payments](#basic-multi-part-payments)
> +* [Directed Messages](#directed-messages)
>* [Accepting and Forwarding a
> Payment](#accepting-and-forwarding-a-payment)
>  * [Payload for the Last Node](#payload-for-the-last-node)
>  * [Non-strict Forwarding](#non-strict-forwarding)
> @@ -62,6 +63,7 @@ A node:
>* [Returning Errors](#returning-errors)
>  * [Failure Messages](#failure-messages)
>  * [Receiving Failure Codes](#receiving-failure-codes)
> +  * [Directed Message Replies](#directed-message-replies)
>* [Test Vector](#test-vector)
>  * [Returning Errors](#returning-errors)
>* [References](#references)
> @@ -366,6 +368,13 @@ otherwise meets the amount criterion (eg. some other
> failure, or
>  invoice timeout), however if it were to fulfill only some of them,
>  intermediary nodes could simply claim the remaining ones.
>
> +### Directed Messages
> +
> +Directed messages have an onion with an alternate `hop_payload`
> +format.  If this node is not the intended recipient, the payload is
> +simply a 33-byte pubkey indicating the next recipient.  Otherwise, the
> +payload is the message for this node.
> +
>  # Accepting and Forwarding a Payment
>
>  Once a node has decoded the payload it either accepts the payment
> locally, or forwards it to the peer indicated as the next hop in the
> payload.
> @@ -1142,6 +1151,11 @@ The _origin node_:
>- MAY use the data specified in the various failur

Re: [Lightning-dev] Few questions

2020-02-09 Thread René Pickhardt via Lightning-dev
Good Morning Cezary,

you might want to direct questions about understanding the lightning
network protocol like yours to https://bitcoin.stackexchanage.com as this
mailinglist is more devoted towards driving the development of the
protocol. Anyway here are the answers to your questions 2 and 3 and
probably also to the first one though I am not entirely sure if I
understand exactly what you are asking for. In case I misunderstood you
suggest to put follow up questions on stackexchange.

1. Is this possible that by sending funds without invoice, the last hub
> prepares the last HTLC with small amount to the payee? In other words - Can
> payee detect, that the last HTLC amount is smaller that it should be?
>

But in general the payee will only release the preimage for an invoice if
the payee is satisfied with the amount - which is usually specified in the
invoice. If you talk about keysend then the payee does not expect an amount
and will most likely release the preimage as the payee would consider this
to be free money


> 2. Are there additional data added to the end of onion encrypted list of
> HTLCs in order to prevent last hub to guess, that it is the last hub in the
> route?
>

yes the onions are always of constant size (20 * 65 Byte = 1300 Byte) This
process of padding is well described in the Sphinx paper
https://cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf and in BOLT 04
https://github.com/lightningnetwork/lightning-rfc/blob/master/04-onion-routing.md

3. When payment is during confirmation, are channels locked entirely, or
> only for the in flight payment amount? In other words - can single channel
> process more that single transaction at once?
>

HTLCs are additional outputs in the commitment transaction. The protocol
allows for up to 483 htlcs concurrently in flight as specified in BOLT 04 ("
max_accepted_htlcs is limited to 483 to ensure that, even if both sides
send the maximum number of HTLCs, the commitment_signed message will still
be under the maximum message size. It also ensures that a single penalty
transaction can spend the entire commitment transaction, as calculated in BOLT
#5

.")

However the the standard of implementations and recommendation is 30. This
means that a single payment is not freezing the channel. It however "locks"
the amount of that payment which for the time until settlement cannot be
used by either party of the channel for other payments / activities.

with kind regards Rene


> I would like to kindly pleas to reply in simple words, as my English is
> still far from being perfect.
>
> Best Regards,
> Cezary Dziemian
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Research on proactive fee free channel rebalancing in the friend of a friend network / and roadmap for a protocol extension

2020-01-07 Thread René Pickhardt via Lightning-dev
Good morning ZmnSCPxj, and list,

the answer to your question is absolutely yes and we can can achieve this
actually in a very simple and elegant way.

Please find attached a clear and simple adaption of the algorithm described
from the paper for a general multipath payment and a small python code
example available at:
https://github.com/renepickhardt/Imbalance-measure-and-proactive-channel-rebalancing-algorithm-for-the-Lightning-Network/blob/master/code/mppalgo.py
that computes this in a model case (assuming 10 payment channels that all
have a capacity of 1 BTC just to save some lines in coding). The output of
the code is at the end of the Mail.


Algorithm to conduct payments which optimally (? I have not proved this yet
but I see no way that would be more optimal with respect to the
Ginicoefficients) reduces the imbalance of nodes when conducting a
(multipath)payment.

Let us assume a node $u$ wants to pay someone for an amount $a$.

1. We assume the payment was already achieved over some channel(s) and
compute the new node balance coefficient $\nu'_u$ after this imaginary
payment has been conducted as $\frac{\tau_u - a}{\kappa_u}$. remember
\tau_u is just the total amount of funds that node $u$ currently has and
$\kappa_u$ is the sum of the capacities of all its payment channels.
2. For all online channel partners $\{v_1,...,v_d\}$ we compute
$\zeta_{(u,v_1),...,\zeta_{(u,v_d})}$
3. Let us assume channels are ordered such that
$\zeta_{(u,v_1)>...>\zeta_{(u,v_d'})}$ and omit channels with $\zeta(u,v_i)
- \nu'_u < 0$ Those channels need more funds and should not be used to pay.
That is why we might have $d' < d$
5. Compute all $d'$ rebalancing amounts $r_i =
c(u,v_i)*(\zeta_{(u,v_i)}-\nu'_u)$ as in the paper but with the new node's
balance coefficient!
6. set $R = sum_i r_i$
7. distribute payments across channels  $a_i = a*r_i/R$ being the amount
$a_i$ that should be paid on channel $i$. Recall $a_i < a$ and $sum_i a_i =
a$ and $a_i < r_i$. This means that with this computation all channel have
enough liquidity to do the subpayments and the subpayments will add up to
the amount (ignoring channel reserves)
8. probe for paths for each amount and channel (potentially split the
amount for a channel across several paths that all start with that
channel). As we don't know what the rest of the network looks like we don't
know if we will be able to find paths for each channel (as before)

Now the really nice side effect: We could compute routing hints for the
invoices in the same way! by now taking the channels where $\zeta(u,v_i) -
\nu'_u$ < 0. We could also split the amounts in that way and also give
amounts together with the routing hints. This would allow a sender to send
the payments in a way that is most benefitial too us. (The sender could
also follow the above method for their outgoing channels) Only the rest of
the network might suffer worse imbalance bus guess what they charge a
routing fee for that service!

With these nice results let me just review some notation from the paper (so
that we in future might all agree to this wording/termonology):

* *Rebalancing* is the operation of moving funds along circular paths
between channels. As pointed out in the past this does not really change
the topology of the graph as the properties like the max flow / min cut
will not be affected by this. As such some people (including myself) have
argued in the past that multipathpayments are sufficient for path finding
as they will quicker find the max flow and that rebalancing is not
necessary. However the results of my research indicate that such operations
will increase the likelihood of arbitrary payments to succeed and thus (at
least in my interpretation) increase the reliability of the network.
* in particular a node is *balanced* if the zeta values are the same and
the gini coefficient is zero. While this is the case for all channels being
50-50 there are far less strict ways of achieving a good balance than
asking for channels to be opened in such a way that everyone ha 50-50.
* *Making payments actually changes the topology* of the network (similarly
to opening and closing channels). With the notation of the paper the \tau
values change and are part of the topology. This way "rebalancing" with
submarine swaps using loop or any of those services is not a rebalancing
operation in the sense of the paper and/or the above point but in fact a
change of topology.
* combining topology changes with rebalancing operations (which is often
the goal when making submarine swaps) seems however to be a good idea. In
that sense your general thought of rebalancing while paying should be
pursued.

Last but not least the promised output of the example code:

$ python3 mppalgo.py
0.3 initial imblance
new funds 4.8 and new node balance coefficient 0.48

Conduct the following payments:
channel 0 old balance: 1.00, payment amount 0.22 new balance 0.78
channel 1 old balance: 0.90, payment amount 0.18 new balance 0.72
channel 2 old ba

[Lightning-dev] Research on proactive fee free channel rebalancing in the friend of a friend network / and roadmap for a protocol extension

2019-12-23 Thread René Pickhardt via Lightning-dev
Dear fellow Lightning Developers,

today my research paper (together with Mariusz Nowostawski) "*Imbalance
measure and proactive channel rebalancing algorithm for the Lightning
Network*" was published on arxiv: https://arxiv.org/abs/1912.09555 The
LaTeX project as well as the code for the experiments (simulation and
evaluation) are open source (however not too well documented yet) at:
https://github.com/renepickhardt/Imbalance-measure-and-proactive-channel-rebalancing-algorithm-for-the-Lightning-Network
I am open for questions, feedback, discussions and in particular critical
remarks. Let me just state that I was quite surprised about the positive
impact of implementing such a rebalancing protocol in particular since in
its current form it seems to protect the privacy of nodes (more research
would be needed to be sure that the privacy is really protected).

# From the abstract:

*We further show that the success rate of a single unit payment increases
from 11.2% on the imbalanced network to 98.3% in the balanced network.
Similarly, the median possible payment size across all pairs of
participants increases from 0 to 0.5 mBTC for initial routing attempts on
the cheapest possible path. We provide an empirical evidence that routing
fees should be dropped for proactive rebalancing operations. Executing 4
different strategies for selecting rebalancing cycles lead to similar
results indicating that a collaborative approach within the friend of a
friend network might be preferable from a practical point of view*

# Some thoughts and context:

Already in my proposal of JIT Routing
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-March/001891.html
I suggested to allow for fee free rebalancing and do the rebalancing
operations in the friend of a friend network. With this research I tested
both assumptions and - unless someone points out methodological errors in
my evaluation - we now have strong evidence that this in deed makes a lot
of sense. I think it is in particular interesting that 0.5 mBTC can be
routed successfully on first attempt in 50% of the cases. I guess using
this together with Multi part / path payments as discussed in the future
work section of the paper might be the way to go for the lightning network.
I envision that while payments along several paths are routed and settled
nodes could already rebalance their channels in particular if we introduce
the redundancy to multi part / path payments as suggested in the Boomerang
Paper https://arxiv.org/abs/1910.01834 by Vivek Bagaria, Joachim Neu and
David Tse.

# Roadmap for BOLT 14 (Fee free Rebalancing Transport):

If no strong objections exist I would try to extend the BOLTs with the
following to be able to implement the rebalancing algorithm across the
network (as with JIT routing nodes can already opt to implement the
algorithm for themselves but this is probably not to useful from an
economic point of view).

* BOLT 07: a new gossip query and reply `query_want_rebalance_channel_ids`
/ `reply_want_rebalance_channel_ids` to ask channel partners on which of
their channels they want inbound / outbound liquidity. The query could
already include an optional offer how much the node initiating the
rebalancing operation is willing to offer while the reply could have an
optional offer field stating how much they nodes are willing to rebalance
(as the paper shows nodes might not have consensus about the amount and the
algorithm currently works by agreeing on the lowest value). Of course this
extension needs some protection against probing attacks to protect the
privacy of nodes.
* BOLT 14: (Fee Free Rebalancing Transport) While it seems tempting to
Reuse BOLT04 with a different realm that omits fees for the rebalancing
cycles (which nodes would have to accept then) this seems impossible as the
onions are not public and nodes could not verify that this is really a
rebalancing operation and not a payment which tries to "steel" fees. While
we might be able to extend BOLT 02 with a new message that transports a
"rebalancing" onion together with the keys for every envelop so that
everyone can verify that in fact this fee free onion is a rebalancing cycle
it seems plausible to have an open tansport for fee free rebalancing to
start with. We could also make it a feature flag and allow nodes to signal
if they support fee free rebalancing. I guess for backwards compatebility
this should be done in any case.
* One problem / attack vector with circular fee free payments that I see is
that if Alice wants to pay David she could initiate a rebalancing onion: A
--> B --> C --> D --> A with the paymenthash that David has created in an
invoice. David would just not set up the final HTLC from him to Alice as he
wanted to receive money from Alice. As far as I see this attack would only
be possible if Alice and David have a channel which they could have used
for the payment right away. Not using that channel is effectively a
rebalancing operation which is exactly w

[Lightning-dev] Congestion and Flow control for Multipath Routing

2019-07-15 Thread René Pickhardt via Lightning-dev
Dear fellow BOLT devs,

in this mail I want to suggest a congestion and flow control mechanism to
improve the speed and reliability of multi path routing schemes. This is
the first of a couple of emails that I will write in the following weeks as
I have used my break in hospital not only to recover but to tinker quite
a bit about path finding and routing algorithms on the lightning network.

Problem statement
===
Currently on the lightning network we have the issue of stuck payments [0].
As soon as an onion is sent it is out of the sender's control. This problem
seems to be in particular drastic if we wish to use Atomic Multi Path
routing [1] (which in the described form is not compatible with my
proposal. I believe my proposal should be compatible with the status quo of
base-AMP). The entire payments and HTLCs of a multipath payment will only
be settled once enough incoming HTLCs arrived at the recipient (meaning the
sum of amounts is bigger or equal to the amount specified in the invoice).
This has the following list of downsides:

- One malicious actor (who is just not forwarding the onion but also not
signaling an error) is enough to interrupt the entire payment process and
freeze all other HTLCs even of partial payments the actor is not part of.
- The entire payment process takes as long as `max_{p \in paths}(t(p))`
where `t(p)` is the time it took for path `p` to set up (and settle) HTLCs
- More HTLCs will be reserved by the network for a longer time. This means
more liquidity is bound / reserved and channels could even become unusable
if the 483 HTLC limit is reached.

Protocol Goals
===
I looked at the windowing mechanism used in TCP to achieve congestion
control and transferred this concept to the setting of the Lightning
Network. This idea is motivated by the Spider Network paper [2] which
mentions that in a simulation the success rate of payments is increased
when changing the lightning network from a circuit switched payment process
(which we currently have with our atomicity requirements) to a packet
switched mechanism that includes congestion control (though in that
publication congestion control had a different semantics than in has in my
proposal).

Protocol Benefits
=
- Improve the speed of multipath payments
- Reduce load from the network (in particular don't lock liquidity for such
a long time)
- less congestion at single nodes (I assume this is not a problem at this
point in time)
- more privacy (different preimages are used along different paths and
overall payments might become smaller or of uniform size)
- usual benefits from AMP

Protocol idea (base version)
=
Disclaimer: This base version has obvious drawbacks but I decided to
include it as it transports the idea.

A regular payment on the Lightning Network for amount `x` has a Payment
Hash `H` and a preimage `r`.  If a recipient would now accept that this
payment could be split over up to `n` paths the recipient would create a
sha-chain of preimages and payment hashes with `n` elements

```
r_0 = rand()
H_0 = H(r_0)
r_{i+1} = H_i
H_{i+1} = H(r_{i+1})
```

The payment process is initiated by the recipient providing H_{n-1} and
signaling (in the invoice) that up to `n` preimages are available to
complete this payment.

A sender can now decide to split the payment amount `x` into `n` seperate
payments for example of the amounts `x/n` though different splits should be
possible. Once the preimage of the first partial payment is returned the
payer learns the payment hash wich can be used for the next partial
payment. (One issue is that while we have a proof of payment we do not
necessarily have a proof of amount - which is true for the regular
lightning case though with a single atomic payment this is not an issue as
the preimage will not be relased if the amount is too low. We could avoid
this issue by demanding that mulipath payments have to be at least of size
`x/n`)

This protocol makes the AMP process sequential and reduces the load from
the network. Congestion (which is a local problem of routing nodes) becomes
less likely if only HTLCs are locked up for a partial payment independent
of the success or failure of other partial payments. However in the base
version there is a severe downside:

**Sequential payments will make the payment process even longer since it is
not the max time needed over all payments but the sum of times needed.**

We can resolve this issue by introducing flow control via a windowing
mechanism and allowing concurrency of partial payments

Protocol Overview (suggested version)
==
Let us assume the receiving node supports a window size of `s` concurrent
payments. Now the payee will not only create one sha-chain of `n` payment
hashes as in the base version but `s` sha-chains of `n` payment hashes.
In the invoice we would now transport the following data:

* `n` (we need a different letter as n is already taken) = amount of
partial 

Re: [Lightning-dev] [PROPOSAL]: FAST - Forked Away Simultaneous Transactions

2019-06-25 Thread René Pickhardt via Lightning-dev
Hey Ugam,

I like the very clearly communicated idea and the fact that we can do crazy
stuff with the filler of the onions. I have two concerns / questions:

1.) In pathfinding we actually try to make payments smaller (like moving to
AMP) instead of combining payments. I think it was shown several times that
the probability of finding a path (successful route) decreases with larger
amounts. So saving fees might actually not be the metric that we are trying
to optimize.
2.) Am I correct that this proposal would only work with the spontaneous
payment scenario as the payment hashes of Eric and Grace could not just be
added up as easy as the preimages can to get the overall payment hash for
Alice? So in that sense on the invoice based system your proposal is not
working and we don't have a proof of payment as Alice already knows the
preimages? Could one resolve this in the world of scriptless scripts or
when changing to secret / curve point based preimages based on the discrete
log?

best Rene

On Tue, Jun 25, 2019 at 7:07 AM Ugam Kamat  wrote:

> Hey guys,
>
>
>
> I’m kind of new to this mailing list, so let me know if this has been
> proposed previously. While reading Olaoluwa Osuntokun’s Spontaneous
> Payment proposal, I came up with the idea of simultaneous payments to
> multiple parties using the same partial route. In other words, say Alice,
> Bob, Charlie, Dave and Eric have channel opened with one another, and say
> Dave also has channel with Frank who has channel with Grace. Now, Alice is
> at a restaurant and wants to pay the bill amount to Eric (the restaurant
> owner) and a tip to Grace (who was her waiter). In the current scenario,
> Alice would have to send two payments A->B->C->D->E and A->B->C->D->F->G.
> However, if we repurpose the onion blob
>  in the same way
> as is needed for Spontaneous Payments, we can create a scenario where there
> is no path duplication. Dave would split the payments, one to Eric and
> other going to Grace through Frank. The preimage PM used in commitments
> A->B, B->C and C->D will be a function of pre-images P1 of D->E and P2 of
> D->F and F->G such that PM = f(P1, P2).
>
>
>
> *Proposal can be implemented by repurposing the onion in similar fashion
> as Spontaneous Payments with slight modification*
>
> This proposal works in similar fashion to Spontaneous Payment proposal, by
> packing in additional data in the unused hops. For B and C the onion blob
> will be identical to other lightning payments. When D parses the onion, the
> 4 MSB of the realm will tell D how much data can be extracted. This data
> will encode the hashes of the pre-images that would be used for commitment
> transaction towards Eric and other towards Frank.  For simplicity and
> privacy, I propose using 2 onion blobs for the data. So the payload can be
> 64 + 33 bytes = 97 bytes. The first byte would indicate how many hashes are
> packed, so we have 96 bytes for the payload, meaning we can pack a maximum
> of 3 hashes for 3 route payments from D. Now D will split the onion (18
> hops as it has used the first two for bifurcation data) into number of
> routes. In the above case it will be 9 hops each. Now these two onions are
> similar to other lightning payments. The first hop tells D the
> short-channel id, amount to forward, CLTV and the padding. Since, the
> preimage is 32 bytes, we can pack that in one single hop that is received
> by the final party. This leaves the remaining 7 hops can be used for
> routing. Below figure depicts the onion split in terms of how A will create
> it. D will add the filler to make each onion have 20 hops. Onion data is
> encoded in the same order in which the payment hashes are packed in the
> bifurcation data for D.
>
>
>
> *Calculating the preimages*
>
> Eric and Grace will parse the onion and use the pre-images for settlement.
> Let P1 represent the pre-images of D->E and P2 of D->F and F->G. When the
> pre-images arrive at node D, it will combine them such that PM = f(P1, P2).
> The easiest way for both A and D to calculate that will be PM = SHA256(P1
> || P2 || ss_d). Where || represents concatenation and ss_d is the shared
> secret created using the ephemeral public key of sender (the one generated
> by Alice) and private key of Dave. The need for using shared secret is to
> prevent the vulnerability where one channel operator who has nodes across
> both branches can use them to calculate the PM. Using shared secret also
> ensures that it is in fact D that has parsed them together.
>
>
>
> *Advantages of this proposal:*
>
>- Commitment transactions between A & B, B & C, and C & D now carry
>only one HTLC instead of two
>   - This means lower fees in case of on-chain settlement
>   - Lower routing fees for Alice as Bob and Charlie would not get to
>   charge for two routings
>   - Since 483 is the max limit of the htlcs nodes can accepts,
>   preventing duplication will al

Re: [Lightning-dev] Routemap scaling (was: Just in Time Routing (JIT-Routing) and a channel rebalancing heuristic as an add on for improved routing success in BOLT 1.0)

2019-03-29 Thread René Pickhardt via Lightning-dev
Good morning ZmnSCPxj,

Maybe I oversee something - in that case sorry for spamming the list - but
I don't understand how you could know the distance from yourself if you
don't know the entire topology? (unless u use it on the pruned view as you
suggested)

On the other hand querying for a certain depth would be possible with the
suggested `query ask egonetwork` in case for depth 3 one would have to peer
with the nodes from the friend of a friend network.

Best Rene


ZmnSCPxj  schrieb am Fr., 29. März 2019, 09:47:

> Good morning Rene,
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, March 29, 2019 1:54 PM, René Pickhardt <
> r.pickha...@googlemail.com> wrote:
>
> > Dear ZmnSCPxj and fellow lightning developers,
> >
> > I want to point out two things and make a suggestion for a new gossip
> message.
> >
> > > A good pruning heuristic is "channel capacity", which can be checked
> onchain (the value of the UTXO backing the channel is the channel capacity).
> > > It is good to keep channels with large capacity in the routemap,
> because such large channels are more likely to successfully route a payment
> than smaller channels.
> > > So it is reasonable to delete channels with low capacity when the
> routemap memory is becoming close to full.
> >
> > Intuitively (without simulation). I encourage to make that process not
> deerministic but rather probabilistic. It would be good if everyone had a
> different set of channels. (which is somewhat achieved with everyone
> keeping their local view)
>
> At a software engineer point-of-view, probabilistic can be difficult to
> test.
> This can be made deterministic by including an RNG seed in the input to
> this code.
>
> However, let me propose instead, in combination with your later thought:
>
> >
> > > Nodes still need to track their direct channels (so they are
> implicitly always in the routemap).
> >
> > I strongly advice that the local view should be extended. Every node
> should always track their friends of a friend network.
>
> Perhaps the pruning rule can be modified to include *distance from self*
> in addition to channel capacity.
> The nearer the channel is, the more likely it is retained.
> The further, the less likely.
> The larger the channel is, the more likely it is retained.
> The smaller, the less likely.
>
> The capacity divided by the distance can be used as a sorting key, and if
> pruning is needed, the smallest "score" is pruned until the routemap fits.
>
> This will lead to everyone having a different set of channels, while being
> likely to track their friend-of-friend network compared to more distant
> nodes.
>
> Of course, the pruning itself would affect the distance of the channel to
> the "self" node.
> So determinism may be difficult to achieve here anyway.
>
> Regards,
> ZmnSCPxj
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Routemap scaling (was: Just in Time Routing (JIT-Routing) and a channel rebalancing heuristic as an add on for improved routing success in BOLT 1.0)

2019-03-28 Thread René Pickhardt via Lightning-dev
Dear ZmnSCPxj and fellow lightning developers,

I want to point out two things and make a suggestion for a new gossip
message.

A good pruning heuristic is "channel capacity", which can be checked
> onchain (the value of the UTXO backing the channel is the channel capacity).
> It is good to keep channels with large capacity in the routemap, because
> such large channels are more likely to successfully route a payment than
> smaller channels.
> So it is reasonable to delete channels with low capacity when the routemap
> memory is becoming close to full.
>

Intuitively (without simulation). I encourage to make that process not
deerministic but rather probabilistic. It would be good if everyone had a
different set of channels. (which is somewhat achieved with everyone
keeping their local view)

Nodes still need to track their direct channels (so they are implicitly
> always in the routemap).
>

I strongly advice that the local view should be extended. Every node should
always track their friends of a friend network. Maybe we could even create
a new gossip query message `query_ask_egonetwork` that asks for the
egonetwork of a node (the egonetwork are all the direct friends of a node
together with their friendships) every node knows at least the nodes in
their ego network and over time also the edges between them.

If I was interested in my friend of a friend network I could just send the
`query_ask_egonetwork` message to all my peers.

Best Rene






But payee nodes making BOLT1 invoices could also provide `r` routes in the
> invoice, with the given routes being from nodes with high-capacity
> channels, so that even if the intermediate channels are pruned due to low
> capacity, it is possible to get paid.
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Outsourcing route computation with trampoline payments

2019-03-28 Thread René Pickhardt via Lightning-dev
Dear Pierre-Marie and fellow lightning developers,

I really like that suggestion. In the context of JIT routing I was
tinkering about the same idea (is it possible for sending nodes to only
know a small part of the network - for example the friend of a friend
network - to save Hardware / gossip bandwidth requirements) but I was
thinking about a different solution which I want to drop in here. (I
believe yours is better though)

My thought was to use rendez-vous routing. This would mean the sender would
have to provide a rendez vous point from his local (friend of a friend?)
network and the recipient provides a route to him/herself. Only the
recipient has to know the entire network topology.

One problem with rendez vous routing is of course that the routing fails if
the route from the rendez vous point does not work. This again could be
mitigated with JIT routing.

In the context of JIT routing it also makes sense to "overpay" fees so that
JIT nodes could rebalance without loss. Making my solution also
probabilistic with the fees. The fact that this pattern of probabilistic
fees occurs for the second time now leads me to the following 2 more
general ideas (maybe we should start a new thread if we discuss them to
stay on topic here) that might help with routing.

1.) A different fee mechanism. Let us (only as a radical thought
experiment) assume we drop the privacy of the final amount in routing. A
sending node could offer a fee for successful routing. Every routing node
could decide how much fee it would collect for forwarding. Nodes could try
to collect larger fees than the min they announce but that lowers the
probably for the payment to be successful. Even more radical: Nodes would
not even have to announce min fees anymore. Turning routing and fees to a
real interactive market

2.) A virtual hierarchical address space. Maybe we should start thinking
about the creation of a semantic overlynetwork / address space for nodes
similar to IP. This would allow any node to just have a pruned network view
but still make smart routing decisions. Obviously we would have to find a
way to assign virtual network addresses to nodes which might be hard.

The second suggestion would be of particular interest in your case if N
also did not know the entire network and has to decide to whom to to
forward for the final destination D.

Sorry for "hijacking" your suggestion and throwing so many new ideas but in
my mind this seems all very connected /related.

Best Rene


Pierre  schrieb am Do., 28. März 2019, 23:25:

> Hello List,
>
> I think we can use the upcoming "Multi-frame sphinx onion format" [1]
> to trustlessly outsource the computation of payment routes.
>
> A sends a payment to an intermediate node N, and in the onion payload,
> A provides the actual destination D of the payment and the amount. N
> then has to find a route to D and make a payment himself. Of course D
> may be yet another intermediate node, and so on. The fact that we can
> make several "trampoline hops" preserves the privacy characteristics
> that we already have.
>
> Intermediate nodes have an incentive to cooperate because they are
> part of the route and will earn fees. As a nice side effect, it also
> creates an incentive for "routing nodes" to participate in the gossip,
> which they are lacking currently.
>
> This could significantly lessen the load on (lite) sending nodes both
> on the memory/bandwidth side (they would only need to know a smallish
> neighborhood) and on the cpu side (intermediate nodes would run the
> actual route computation).
>
> As Christian pointed out, one downside is that fee computation would
> have to be pessimistic (he also came up with the name trampoline!).
>
> Cheers,
>
> Pierre-Marie
>
> [1]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-February/001875.html
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] I just released more than 200 slides trying to give an Overview of BOLT 1.0 for beginners. Feedback welcome!

2019-03-20 Thread René Pickhardt via Lightning-dev
Hey everyone,

as you know I am just a mathematician and not a cryptographer by training
who is interested and amazed by the Lightning Network Protocol. I joined
this open source effort pretty late (in the beginning of 2018) and tried to
catch up ever since. At that time reading the BOLTs (despite the fact that
Rusty had his helpful blog articles) was really hard for me.

Over the past year - especially due to your help and open ears - I was able
to catch up by a lot. Yesterday I held a 4 hour workshop giving an
extensive overview about the Lightning Network and BOLT 1.0.

I decided to open source the slides to give something (hopefully useful)
back to the community. To the best of my knowledge this is the most
extensive overview of the BOLTs in form of slides and I also tried to
restructure the content in the hope of making it easier to approach fo
newbies like I was a year ago.

Please feel free to fork the slides on the google doc address:
https://docs.google.com/presentation/d/1-eyceLlSmcLpbPJLzj6_CnVYQdo1AUP3y5XD716U-Lg

I uploaded the pdf version of the slides at
https://commons.wikimedia.org/wiki/File:Introduction_to_the_Lightning_Network_Protocol_and_the_Basics_of_Lightning_Technology_(BOLT_aka_Lightning-rfc).pdf

Obviously I am very happy if you have suggestions how the slides could be
improved even further. Probably I will still have some mistakes with them.
But maybe you have more comments on a meta level about the structure or
detail level or visualizations...

Therefor I will be happy if you send back a pdf with comments. I also share
a commentable version with this mailinglist on google docs:
https://docs.google.com/presentation/d/1YCKZzE53xjnBndl3U0uy1-bHO4Y0rlAwgrnujzef1sY


I will present the slides next time on chaincodelabs lightning residency in
late june where - as far as I know - the talks will also be recorded. So
any feedback before then will be highly appreciated.

While this mail was note particularly about development I hope it is not
considered offtopic. In case you do so I apologize for any inconvenience.

With kind Regards Rene


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Potential Privacy issue with dual funded channels

2019-03-15 Thread René Pickhardt via Lightning-dev
Hey everyone,

during the spec meeting we have discussed intensively about dual funded
channels and potential game theory with the fees however I now believe that
we missed out another important crucial part which is the privacy of the
node providing liquidity.

While I have not seen a concrete example for a channel establishment
protocol that supports dual funded channels I have seen this proposal (
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001532.html
)  for advertising channel liquidity which assumed that the `open_channel`
message would be modified as follows:

`open_channel`:
new feature flag (channel_flags): option_liquidity_buy [2nd least
significant bit]
push_msat: set to fee payment for requested liquidity
[8 liquidity_msat_request]: (option_liquidity_buy) amount of dual funding
requested at channel open

...

If a node cannot provide the liquidity requested in `open_channel`, it must
return an error.

With such a protocol I could now (basically only at the cost of internet
traffic) probe a lower bound for the amount of BTC available by a node that
allows for dual funded channels and abort the channel establishing process
at some time before I ever spend / lock any of my own bitcoin.

As I can even participate in the peer protocol without having a single
channel open this situation seems to be even more severe.

I don't have a clear suggestion how to mitigate against this. One general
potential idea / solution would be to make spamming / probing more
expensive. For example we could require the person to open a channel first
and then ask the partner to splice something in (meaning we don't allow for
one tx dual funded channels). In that case the node requesting liquidity
had to do an onchain tx. also the requests to splice in can be identified
and the person who feels to be probed can choose to fail the channel. I am
not happy with my barrier as it would still be able to relatively cheaply
abuse this and we run into a whole bunch of game theory about fees again.

As the lightning network seems very keen to provide strong privacy to its
users (c.f.: onion routing, keeping channel balances private, encrypted
transport layer,...)  I thought it is worthwhile pointing out the problem
with the privacy for dual funded channels even though I don't have a
concrete solution yet.

best Rene


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Just in Time Routing (JIT-Routing) and a channel rebalancing heuristic as an add on for improved routing success in BOLT 1.0

2019-03-14 Thread René Pickhardt via Lightning-dev
Hey everyone,

I am glad the suggestion is being picked up. At this time I want to respond
to two of the concerns that have been thrown in. I have some other comments
and ideas but would like to hold them back so that we can have more people
joining the discussion without bias also this mail will already be quite
long.

ZmnSCPxj suggested to introduce a `success_rate` for JIT routing. While
this success_rate can obviously only be estimated or configured I would
advice against including this to the protocol. As I mentioned before I
suggested to include JIT Routing as a MAY Recommendation so it is up to the
node to decide if it cannot earn `offered_fee_amount` to engage in the
JIT-rebalancing operation. A node operator might be willing in general to
to pay a fee for rebalancing even if there is not an outstanding routing
event taking place. So even while `rebalancing_fee_amount` >
`offered_fee_amount` the node could see the offered_fee_amount as a
discount for the planned rebalancing amount. We don't know that and I
honestly believe that the protocol should not make economical decisions for
the node. In any case rebalancing will overall increase the likelihood for
successful routing but it makes sense to defer the rebalancing operation to
a moment in which the liquidity is actually needed.

Regarding Ariels suggestion about reusing the payment hash with JIT Routing
I have some more thoughts:
Reusing the payment hash indeed seems like a good idea. However it produces
some technical issues which in my opinion can all be mitigated. it is just
a question with these challenges if it is worthwhile doing it?

I have drawn several situations and tried to construct an example in which
using the same payment hash for the JIT-rebalancing would result in a
severe problem with the payment process in the sense that it would be
compromised or somebody could steal funds. It could however be a privacy
issue as more nodes are being aware of the same payment (but that is also
the case with base-AMP)
I was not able to construct such a situation (under the assumption that the
rebalancing amount does not exceed the original payment amount). My feeling
(though I have not done this yet) is that one should be able to proof that
taking the same payment hash would always work and in fact create a
situation in which at least the rebalancing only takes place if the entire
payment was routed successfully.

Assuming someone will be able to proof that using the same payment hash for
JIT Routing is not an issue we still run into another problem (which I
believe can be fixed with another MUST rule but comes with quite some
implementation overhead.)

The deadlock problem when doing JIT Routing with the same payment hash:
When using the same payment hash there will be two htlc's (potentially of
different amounts) in opposing directions on the same channel (or in the
lnd case maybe between separate channels between the same two peers).
Unluckily without a novel rule this can produce a deadlock.

As an example take the situation from my initial email with an additional
recipient R1:

  100 / 110 80 / 200  150/180
S --> B > R --> R1
  \ /
80/200  \ /  100/200
  \ /
 T

Meaning we have the following channels:
S ---> B capacity: 110   A funds: 100  B funds:  10
B ---> R capacity: 200   B funds:  80  R funds: 120
B ---> T capacity: 200   B funds:  80  T funds: 120
T ---> R capacity: 200   T funds: 100  R funds: 100
R ---> R1 capacity: 180   R funds: 150  R1 funds: 30

neglecting fees the following htlcs would be offered
1.) S-->B amount: 90
2.) B-->T amount:50
3.) T-->R amount:50
4.) R-->B amount: 50
5.) B-->R amount: 90 (difficult to set up before 4. settles)
6.) R-->R1 amount: 90

while 1,5 and 6 are the original path 2,3,4 are the JIT rebalancing.

We see that in this situation using the same preimage results in a problem.
Since the rebalancing is not settled R will not accept the 5th htlc (B--->R
amount: 90) as there is not enough liquidity on B's side producing a
deadlock
However since the same payment hash is used it is save to combine the 4th
and 5th htlc to have the following situation:

1.) S-->B amount: 90
2.) B-->T amount:50
3.) T-->R amount:50
4.) R-->B will be removed or settled or replaced by the 5th htlcs with a
different amount (90 - 50)
5.) B-->R amount: 40
6.) R-->R1 amount: 90

Note that while theoretically it seems tempting to just have two htlc
outputs as the second node could always claim the htlc if the first claims
theirs. However this will not work onchain as potentially more funds are
spend than exist.

Therefor we need a MUST-rule to fix the deadlock problem (which could
probably also be formulated in a symmetric way):
If a node N offers an htlc to a partner with an amount x from whom the node
already received an htlc y (where y is smaller than x) the nodes must
create a new channel state discarding the old htlc but 

[Lightning-dev] Just in Time Routing (JIT-Routing) and a channel rebalancing heuristic as an add on for improved routing success in BOLT 1.0

2019-03-05 Thread René Pickhardt via Lightning-dev
Hey everyone,

In this mail I introduce the Just in Time Routing schema (aka JIT Routing).
Its main idea is to mitigate the disadvantages from our current source
based routing (i.e.: guessing a route that will work in the sense that it
has enough liquidity in each channel) and make the routing process a little
bit more like the best effort routing that we know from IP-forwarding. As
far as I know this will not decrease the privacy of the nodes. As part of
this Routing scheme nodes need to be able to quickly rebalance their
channels. Thus in this mail I also propose a heuristic for doing this
efficiently which I have implemented and seems to provide pretty good
results. Obviously the heuristic should be tested with the help of a
simulation. I did not have the chance to do that yet. Partly also because I
am lacking a proper dataset and I don't want to do this on artificial data.

The advantages of JIT Routing are:
* it is possible to do now without any protocol modification. In particular
no modifications of the onions are necessary.
* routing nodes can already easily implement it. By implementing it they
will increase the routing success even for nodes which are running older
implementations
* it seems to be logically equivalent to AMP Routing. In particular its
properties will also help base AMP once it is part of the protocol.
* local channel balance information along the route can now be part of the
path finding process while not decreasing the privacy by sharing
information about channel balances with others. In fact the privacy of
nodes is even being increased.

The disadvantages seem:
* it might economically not be incentivized for a routing node in every
situation. Theoretically it can even happen that a node pays a fee in order
to use this technique but can't earn the routing fee as the onion fails
later. Nodes can implement risk management strategies to mitigate this
issue.
* The routing process might take a longer time as it starts sub routing
processes.
* While doing JIT routing the capacity for channels should be reserved even
before HTLCs are set up (to prevent hostile recursive chains of rebalancing
operations)

Obviously routing single big payments is a challenge for the lightning
network. During the developer summit in Adelaide we have agreed to put Base
AMP to BOLT 1.1. To review Base AMP the idea is basically after receiving a
payment hash to create several onions on various routes to the recipient.
While Base AMP in theory can find the maxflow / min cut and achieve maximum
liquidity it is not clear yet how well Base AMP will really work.

While it has been shown that smaller payments have a higher chance to be
routed successfully there is the downside that we have more payments which
increases the likelihood that any one of those payment eventually fails. As
far as I know there have not been any studies researching this fact. Also
the fact remains that Base AMP is still a source base routing protocol
putting the sender into a tough spot as it has to guess which routes might
work.

How to JIT Routing?

For the BOLTs we basically need one Recommendation (in fact even today
nodes can do this without this explicit recommendation, but I would suggest
to add the recommendation):

If a node cannot forward an incoming HTLC because the node has not enough
funds on the outgoing channel the node MAY pause the routing process and
try to rebalance the channel that misses liquidity. If it isn't able to
rebalance the channel it should fail the onion sending back an insufficient
wire funds error `temporary_channel_failure`

Let us consider the following Graph and situation for an example:

  100 / 110 80 / 200
S --> B > R
  \ /
80/200  \ /  100/200
  \ /
 T

Meaning we have the following channels:
S ---> B capacity: 110   A funds: 100  B funds:  10
B ---> R capacity: 200   B funds:  80  R funds: 120
B ---> T capacity: 200   B funds:  80  T funds: 120
T ---> R capacity: 200   T funds: 100  R funds: 100

Let us assume S wants to send a payment of 90 to R. With this distribution
of funds this will not work with a single route S ---> B ---> R as the
channel B--->R can only forward 78 (taking the channel reserve of 1% into
consideration)

Now with Base AMP after a protocol update this would be resolved by making
two onions forwarding for example 45 each. Also S would have to pay more
routing fees as the basefee of B will be charged twice.

Onion 1: S ---> B ---> R
Onion 2: S ---> B ---> T ---> R

However it is S again who has to guess how the problems with the liquidity
are it does not know how B has spread its funds between R and T (and
potentially other channels)

With the above recommendation in place B can create a different onion with
lets say moving 50 from the B---> T channel to the B ---> R channel
resulting in the following situation:

  100 / 110 130 / 200
S --> B > R
  \  

Re: [Lightning-dev] Lightning and the semantic web

2019-01-28 Thread René Pickhardt via Lightning-dev
Dear Melvin,

I believe the scheme lightning: should only apply
* for payments in the form of bolt11 strings
* to identifiy nodes like lightning:node_id@ipaddr:port
* maybe to identify channels (look up the short_channel_id of the form of a
triple separated by the letterx.  BLOCKHEIGHTxTXINDEXxOUTPUTINDEX)

the orther addresses you mentioned already have a sceme e.g. tcp:

keep in mind that the ip address and port may change but the only real
identifier is the node_id which as you mentioned correctly is the pubkey
from an HD seed and it is bech32 encoded. Also the short channel id points
to a funding transaction on the base layer so implicitly to identify the
base layer transaction we do also need the SHA-256 hash of the genesis
block otherwise it is not clear that the blockheigt, txindex, output index
triple belongs specifically to the bitcoin blockchain.

with kind regards Rene


On Mon, Jan 28, 2019 at 7:13 AM Melvin Carvalho 
wrote:

>
>
> On Thu, 24 Jan 2019 at 13:42, René Pickhardt 
> wrote:
>
>> Dear Melvin,
>>
>> I believe the vocabulary is not consistent across implementations. For
>> example if you look at c lightning there is no such command `describegraph`
>> but there are the two commands `listnodes` and `listchannels` which should
>> give the same information. For both I have attached a sample output at the
>> end of the mail to demonstrate how the naming of the vocabulary differs.
>>
>> Since the data of these commands is taken from the gossip store which
>> stores the gossip messages I would suggest to take the vocabulary from the
>> BOLT 07 which defines the gossip messages. Also this mailinglist is for
>> protocol development and the spec should be the authorative source for
>> naming:
>> https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md
>>
>> There are more terms specified in other bolts which could be a basis for
>> a vocabulary. I have created an overview by hand and made a Pull Request to
>> the repository which has not been merged yet as it was the wish of the
>> developers to have such a list to be extracted automatically.
>> Still this the overview in that pull request could serve as a basis for
>> such a vocabulary :
>> https://github.com/lightningnetwork/lightning-rfc/pull/458
>>
>> Now the example outputs from c-lightning we already that there are
>> differences in naming. Take the identifier for a node for example:
>> * BOLT07: node_id
>> * LND: pub_key
>> * clightning: nodeid
>>
>> example outputs:
>>
>> lighting-cli listnodes
>>  {
>>   "nodeid":
>> "02396bf51e81f8f67eaca3652271b4fe8d3f57bedb9578af711606391c5c66760e",
>>   "alias": "PuraSloboda",
>>   "color": "68f442",
>>   "last_timestamp": 1548200218,
>>   "globalfeatures": "",
>>   "global_features": "",
>>   "addresses": [
>> {
>>   "type": "ipv4",
>>   "address": "144.136.223.22",
>>   "port": 9735
>> }
>>   ]
>> }
>>
>> lightning-cli listchannels
>> {
>>   "source":
>> "03bb88ccc444534da7b5b64b4f7b15e1eccb18e102db0e400d4b9cfe93763aa26d",
>>   "destination":
>> "0272045af48b9871013753f7cce1cf82ed80b97d669ca44709e01976a67df80adc",
>>   "short_channel_id": "559893:1912:0",
>>   "public": true,
>>   "satoshis": 47000,
>>   "message_flags": 0,
>>   "channel_flags": 1,
>>   "flags": 1,
>>   "active": true,
>>   "last_update": 1548332847,
>>   "base_fee_millisatoshi": 1000,
>>   "fee_per_millionth": 1,
>>   "delay": 144
>> }
>>
>> With kind regards Rene
>>
>
> This is extremely helpful, thank you!
>
> I will go with the RFC naming then.  I'm starting with Nodes and Edges and
> will put together a document and demo for review.  First step is to do
> Nodes.
>
> I have two questions
>
> 1. node_id -- that's basically your public key -- what types of key and
> serialization is this (my guess an ecdsa pub key derived from the HD seed),
> any pointers would be great
>
> 2. regarding the four address types :
>
>- 1: ipv4; data = [4:ipv4_addr][2:port] (length 6)
>- 2: ipv6; data = [16:ipv6_addr][2:port] (length 18)
>- 3: Tor v2 onion service; data = [10:onion_addr][2:port] (length 12)
>   - version 2 onion service addresses; Encodes an 80-bit, truncated
>   SHA-1 hash of a 1024-bit RSA public key for the onion service
>   (a.k.a. Tor hidden service).
>- 4: Tor v3 onion service; data = [35:onion_addr][2:port] (length 37)
>   - version 3 (prop224
>   
> )
>   onion service addresses; Encodes: [32:32_byte_ed25519_pubkey] ||
>   [2:checksum] || [1:version], where checksum = sha3(".onion
>   checksum" | pubkey || version)[:2].
>
>
> These can be looked up in the spec.  But in the semantic web we like,
> where possible, to have self describing data.  In particular we like URIs
> with a scheme or protocol so instead of example.com we'll have
> http://exa

Re: [Lightning-dev] Lightning and the semantic web

2019-01-24 Thread René Pickhardt via Lightning-dev
Dear Melvin,

I believe the vocabulary is not consistent across implementations. For
example if you look at c lightning there is no such command `describegraph`
but there are the two commands `listnodes` and `listchannels` which should
give the same information. For both I have attached a sample output at the
end of the mail to demonstrate how the naming of the vocabulary differs.

Since the data of these commands is taken from the gossip store which
stores the gossip messages I would suggest to take the vocabulary from the
BOLT 07 which defines the gossip messages. Also this mailinglist is for
protocol development and the spec should be the authorative source for
naming:
https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md

There are more terms specified in other bolts which could be a basis for a
vocabulary. I have created an overview by hand and made a Pull Request to
the repository which has not been merged yet as it was the wish of the
developers to have such a list to be extracted automatically.
Still this the overview in that pull request could serve as a basis for
such a vocabulary :
https://github.com/lightningnetwork/lightning-rfc/pull/458

Now the example outputs from c-lightning we already that there are
differences in naming. Take the identifier for a node for example:
* BOLT07: node_id
* LND: pub_key
* clightning: nodeid

example outputs:

lighting-cli listnodes
 {
  "nodeid":
"02396bf51e81f8f67eaca3652271b4fe8d3f57bedb9578af711606391c5c66760e",
  "alias": "PuraSloboda",
  "color": "68f442",
  "last_timestamp": 1548200218,
  "globalfeatures": "",
  "global_features": "",
  "addresses": [
{
  "type": "ipv4",
  "address": "144.136.223.22",
  "port": 9735
}
  ]
}

lightning-cli listchannels
{
  "source":
"03bb88ccc444534da7b5b64b4f7b15e1eccb18e102db0e400d4b9cfe93763aa26d",
  "destination":
"0272045af48b9871013753f7cce1cf82ed80b97d669ca44709e01976a67df80adc",
  "short_channel_id": "559893:1912:0",
  "public": true,
  "satoshis": 47000,
  "message_flags": 0,
  "channel_flags": 1,
  "flags": 1,
  "active": true,
  "last_update": 1548332847,
  "base_fee_millisatoshi": 1000,
  "fee_per_millionth": 1,
  "delay": 144
}

With kind regards Rene


On Thu, Jan 24, 2019 at 1:21 PM Melvin Carvalho 
wrote:

>
>
> On Mon, 21 Jan 2019 at 14:11, René Pickhardt 
> wrote:
>
>> Dear Melvin,
>>
>> have you looked into the W3C Payment Group?
>> https://www.w3.org/TR/payment-request/ The entire field of semantic web
>> kind of originated from W3C and they are working on a recommendation for
>> browser vendors to enable a low level payment API.
>>
>> Also there is LightningJoule that builds on top of webln. While this is
>> not an otology it goes implicitly in a similar direction (c.f.:
>> https://github.com/wbobeirne/webln and in particular this discussion:
>> https://github.com/wbobeirne/webln/issues/1 in which Will said that in
>> his thoughts webln is different to the W3C Payment Group.)
>>
>> I am looking forward to see your progress with integrating Lightning to
>> the semantic web!
>>
>> with kind regards Rene
>>
>
> My first observation is these two data structures in lnd describe graph,
> one for channels and one for nodes.  These seem to be two fundamental
> concepts in lightning.
>
> Channel
>
> {
> "channel_id": "615605565348708353",
> "chan_point":
> "d8cfed73e0004fe1427d3045c5b20da0418f3cb803e8e35be48ee713aadbf56d:1",
> "last_update": 1548330355,
> "node1_pub":
> "024a2e265cd66066b78a788ae615acdc84b5b0dec9efac36d7ac87513015eaf6ed",
> "node2_pub":
> "03e03c56bb540c36b9e77c2aea2bb6529b907ece6c1395228c05459af13d0e2a5c",
> "capacity": "100",
> "node1_policy": {
> "time_lock_delta": 144,
> "min_htlc": "1000",
> "fee_base_msat": "1000",
> "fee_rate_milli_msat": "1",
> "disabled": false
> },
> "node2_policy": {
> "time_lock_delta": 144,
> "min_htlc": "1000",
> "fee_base_msat": "1000",
> "fee_rate_milli_msat": "1",
> "disabled": false
> }
> }
>
> Node
>
> {
> "last_update": 1547380072,
> "pub_key":
> "0200072fd301cb4a680f26d87c28b705ccd6a1d5b00f1b5efd7fe5f998f1bbb1f1",
> "alias": "OutaSpace",
> "addresses": [
> {
> "network": "tcp",
> "addr": "46.163.78.93:9760"
> },
> {
> "network": "tcp",
> "addr": "[2a01:488:66:1000:2ea3:4e5d:0:1]:9760"
> },
> {
> "network": "tcp",
> "addr": "2dkobxxunnjatyph.onion:9760"
>

Re: [Lightning-dev] lightning-c RPC

2019-01-22 Thread René Pickhardt via Lightning-dev
Hey Alex,

I think this is currently not being implemented. Also there is a
mailinglist particularly for issues related to c-lightning at:
https://lists.ozlabs.org/listinfo/c-lightning and of course issues like
this could also be asked in the bug tracker at:
https://github.com/ElementsProject/lightning

with kind regards Rene

On Tue, Jan 22, 2019 at 1:08 PM Alex P  wrote:

> Hello guys!
>
> Is there a way to bind RPC to IP:port, not to unix socket?
>
> Of course, it's easy to patch source code, but may be there is a param
> for config?
>
> Thanks!
>
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning and the semantic web

2019-01-21 Thread René Pickhardt via Lightning-dev
Dear Melvin,

have you looked into the W3C Payment Group?
https://www.w3.org/TR/payment-request/ The entire field of semantic web
kind of originated from W3C and they are working on a recommendation for
browser vendors to enable a low level payment API.

Also there is LightningJoule that builds on top of webln. While this is not
an otology it goes implicitly in a similar direction (c.f.:
https://github.com/wbobeirne/webln and in particular this discussion:
https://github.com/wbobeirne/webln/issues/1 in which Will said that in his
thoughts webln is different to the W3C Payment Group.)

I am looking forward to see your progress with integrating Lightning to the
semantic web!

with kind regards Rene



On Mon, Jan 21, 2019 at 7:17 AM Melvin Carvalho 
wrote:

> Hi All
>
> I work on the solid project [1] and am very interested in the lightning
> network.
>
> In particular, I am looking at trying to create an integration between
> lightning (layer 2) and solid (layer 3?  web layer?).
>
> The first step towards integration would be to port some of the lightning
> concepts to the semantic web.  This is done by creating an ontology.
>
> Does anyone know of any existing work in this area.  Alternatively, does
> anyone have an interest to collaborate on an ontology?
>
> Best
> Melvin
>
> [1] https://solid.mit.edu/
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Colored coins or non-fungible tokens

2018-12-04 Thread René Pickhardt via Lightning-dev
Dear Joao,

there are the people from BHB Networks (in Itally) working on colored coins
for Bitcoin and Lightning. The main contributor seems to be Alekos Filini.
As far as I understand there is quite some progress. You can find more
information in their spec repo at:
https://github.com/rgb-org/spec The 4th. md file in that repo is about the
lightning network (not sure if this is still up to date with the current
implementation that Alekos is also working on.)

I believe Alekos has talked about this at the third lightning hackday in
Berlin and on the LightningHackdayNYC both talks have been recorded and can
be found on youtube. Therefor I suggest to talk to Alekos (ccd) directly.

I hope that helps.

best Rene

On Tue, Dec 4, 2018 at 2:29 PM Joao Joyce  wrote:

> Hi list,
>
> Thank you all for the great work. LN is looking amazing!
>
> I was wondering if there is any discussion about exchanging colored-coins
> or non-fungible tokens through the LN.
> Or even issuance, which I'm not seeing how it would be possible, but
> recognise that  this space is full of surprises.
>
> That would be a great addition to LN and it would enable new use-cases.
>
> Thank you all for the great work.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Reason for having HMACs in Sphinx

2018-11-29 Thread René Pickhardt via Lightning-dev
Hey CJP,

I am still not 100% through the SPHINX paper so it would be great if at
least another pair of eyes could lookt at this. However from the original
SPHINX paper I quote:

"Besides extracting the shared key, each mix has to be provided with
authentic and confidential routing information to direct the message to the
subsequent mix, or to its final destination. We achieve this by a simple
encrypt-then-MAC mechanism. A secure stream cipher or AES in counter mode
is used for encryption, and a secure MAC (with some strong but standard
properties) is used to ensure no part of the message header containing
routing information has been modified. Some padding has to be added at each
mix stage, in order to keep the length of the message invariant at each
hop."

At first I thought this would mean that the HMAC ensures that the previous
hop cannot change the routing information. which was the first answer that
I wanted to give. However I am confused now too. The HMAC commits to the
next onion. So if the entire onion was exchanged and a new HMAC was
provided (as you suggest) the processing hop would not know this. Such a
use case would obviously lead to a routing scenario which would not succeed
and would hardly be useful (unless the previous hop plans a reverse dos
attacks from error messages or some other sabotage attacks which are
references in the SPHINX paper but not discussed explicitly).

On a second thought I reviewed chapter 2.1 of the Sphinx paper in which the
thread model for attackers is described. As far as I understand that
section one attack vector for which the HMAC shall help are man in the
middle attacks. If HMACs are being used some bitflipping by man in the
middles would be detected. However I think if a man in the middle speaks
the BOLT protocol they could exchange the entire package and provide a new
HMAC as a previous hop could do. Also the Thread model does only speak
about security of the message not so much about the reliability of the
protocol. I believe it is quite clear that if a routing node wants to
manipulate the onion they can do so. In the same way how they can decide
not to forward the onion.

--> So the mix network itself can make sure that no wrong messages are
delivered it cannot make sure that messages (which are unseen and unknown
from where they came) are intercepted.

Besides the Bitflipping usecase that I mentioned I agree with your
criticism and also don't see the necessity of the HMAC anymore. The message
is encrypted anyway and if bits are flipped the decrypted version will just
be badly formated. If the header was manipulated the next hop would not be
able to decrypt.

Best regards Rene

Am Do., 29. Nov. 2018, 16:31 hat Corné Plooy via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> geschrieben:

> Hi,
>
>
> Is there a reason why we have HMACs in Sphinx? What could go wrong if we
> didn't?
>
> A receiving node doesn't know anyway what the origin node is; I don't
> see any attack mode where an attacker wouldn't be able to generate a
> valid HMAC.
>
> A receiving node only knows which peer sent it a Sphinx packet;
> verification that this peer really sent this Sphinx packet is (I think)
> already done on a lower protocol layer.
>
>
> AFAICS, The only real use case of the HMAC value is the special case of
> a 0-valued HMAC, indicating the end of the route. But that's just silly:
> it's essentially a boolean, not any kind of cryptographic verification.
>
>
> CJP
>
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [META] Organization of 1.1 Spec Effort

2018-11-26 Thread René Pickhardt via Lightning-dev
Hey Rusty,

No matter how we agree for the process I suggest to create a wiki page on
which we make it transparent and link to it from README.md.

The current process was new to me and I think one cannot expect newcomers
to read through the entire Mailinglist.

As soon as we have an agreement I can create this PR together with more
useful information for newcomers.

Best regards Rene

Am Di., 27. Nov. 2018, 01:13 hat Matt Corallo 
geschrieben:

> +100 for IRC meetings, though, really, I'd much much stronger prefer
> substantive discussion happen on GitHub or the mailing list. Doing
> finalization in a live meeting is really unfair to those who can't find the
> time to attend regularly (or happen to miss the one where that thing was
> discussed that they care about).
>
> > On Nov 26, 2018, at 18:29, Rusty Russell  wrote:
> >
> > Hi all,
> >
> >As you may know, for 1.0 spec we had a biweekly Google Hangout,
> > at 5:30am Adelaide time (Monday 19:00 UTC, or 20:00 UTC Q3/4).  You can
> > see the minutes of all meetings here:
> >
> >
> https://docs.google.com/document/d/1oU4wxzGsYd0T084rTXJbedb7Gvdtj4ax638nMkYUmco
> >
> > The current process rules are:
> >
> > 1. Any substantive spec change requires unanimous approval at the
> >   meeting before application.
> > 2. Any implementation changes generally require two interoperable
> >   implementations before they are considered final.
> > 3. "typo, formatting and spelling" fixes which can be applied after two
> >   acks without a meeting necessary.
> >
> > It's time to revisit this as we approach 1.1:
> >
> > 1. Should we move to an IRC meeting?  Bitcoin development does this.
> >   It's more inclusive, and better recorded.  But it can be
> >   lower-bandwidth.
> >
> > 2. Should we have a more formal approval method for PRs, eg. a
> >   "CONSENSUS:YES" tag we apply once we have acks from two teams and no
> >   Naks, then a meeting to review consensus, followed by "FINAL" tag and
> >   commit the next meeting?  That gives you at least two weeks to
> >   comment on the final draft.
> >
> > Side note: I've added milestones to PRs as 1.0/1.1; I'm hoping to clear
> > all 1.0 PRs this week for tagging in the next meeting, then we can start
> > on 1.1 commits.
> >
> > Thanks!
> > Rusty.
> > ___
> > Lightning-dev mailing list
> > Lightning-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Penalty tx and RBF

2018-11-24 Thread René Pickhardt via Lightning-dev
Dear Cezary,

as far as I understand the problem in the case of a unilateral (force)
close are:

1.) In order to RBF your commitment transaction you would have to have the
signature of your former channel partner. since you initiated a force close
it is unlikely that you get this signature to RBF because then you could
have done a mutual close right away which is cheaper since less tx are
invovled to claim all funds back.
2.) In order to CPFP you have to be able to spend your output which can't
work because there is a timelock on it.

I believe on the last lightning developer summit this issue was discussed
and it was agreed that for BOLT1.1 we want have a third output in the
commitment transactions which anyone can spend (OP_TRUE) and which is just
above the dust level. This output is supposed to have no timelock so that
anyone can CPFP it. In the general case miners of the block could collect
the output as a fee. You can find a pointer to this on this wikipage in the
lightning-rfc git repo:
https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
(look in the section tx and fees)

best Rene

On Fri, Nov 23, 2018 at 6:30 PM Cezary Dziemian 
wrote:

> Hello all,
>
> Sorry for my ignorance. I have two questions related with penalty txs. I
> assume, that when someone commits obsolete commitment tx, my node
> automatically commit penalty transaction.
>
> What if fees suddenly increases? Can my node use RBF to increase fee?
>
> Is there any approach common to major 3 implementations?
>
> How much time (how many blocks) do my node have to commit penalty tx? Is
> there some value common for implementations?
>
> Best regards,
> Cezary Dziemian
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Penalty tx and RBF

2018-11-23 Thread René Pickhardt via Lightning-dev
Hey Cezary,

sorry I misread your initial question. I thought you where referring to the
(probably bigger problem) of getting the commitment transaction to be mined
because RBF does not work. But if we assume that your channel partner
published an outdated commitment transaction which got mined then you can
claim (both) outputs (your node should do this automatically) with this
penalty transaction. This penalty transaction is spending the outputs of
the commitment transaction and is signed with your nodes private key.
Therefor as far as I know you should be able to RBF this penalty
transaction. Also I believe you understand the process correctly.

Actually since the timelock on the commitment transaction will at some
point in time be over (in which case also your channel partner can spend
their output) you have an economic incentive to quickly get the penalty
transaction minded by using rather high fees or in case at this time really
a lot of transactions come in to RBF. I currently see no reason why you
could not RBF the penalty transaction. In case I oversee something I am
sure someone here on the list will correct me.

best regards Rene

On Fri, Nov 23, 2018 at 8:34 PM Cezary Dziemian 
wrote:

> Thanks for answer,
>
> My knowledge is mostly based on this article:
>
>
> https://bitcoinmagazine.com/articles/understanding-the-lightning-network-part-building-a-bidirectional-payment-channel-1464710791/
>
> Graph at the end shows that in order to claim former channel partner funds
> I need to provide child transaction that contains my signature and secret.
> This secret is evidence.that partner didn't commit the last transaction.
>
> So the penalty transaction uses comitment transaction output as its input
> and penalty transaction can be sign by one side only. Am I right, or I just
> don't understand how it works? Or maybe this graph do not represents
> correctly how commitment and penalty transactions are already developed?
>
> Best Regards,
> Cezary Dziemian
>
>
> pt., 23 lis 2018 o 19:07 René Pickhardt 
> napisał(a):
>
>> Dear Cezary,
>>
>> as far as I understand the problem in the case of a unilateral (force)
>> close are:
>>
>> 1.) In order to RBF your commitment transaction you would have to have
>> the signature of your former channel partner. since you initiated a force
>> close it is unlikely that you get this signature to RBF because then you
>> could have done a mutual close right away which is cheaper since less tx
>> are invovled to claim all funds back.
>> 2.) In order to CPFP you have to be able to spend your output which can't
>> work because there is a timelock on it.
>>
>> I believe on the last lightning developer summit this issue was discussed
>> and it was agreed that for BOLT1.1 we want have a third output in the
>> commitment transactions which anyone can spend (OP_TRUE) and which is just
>> above the dust level. This output is supposed to have no timelock so that
>> anyone can CPFP it. In the general case miners of the block could collect
>> the output as a fee. You can find a pointer to this on this wikipage in the
>> lightning-rfc git repo:
>> https://github.com/lightningnetwork/lightning-rfc/wiki/Lightning-Specification-1.1-Proposal-States
>> (look in the section tx and fees)
>>
>> best Rene
>>
>> On Fri, Nov 23, 2018 at 6:30 PM Cezary Dziemian <
>> cezary.dziem...@gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> Sorry for my ignorance. I have two questions related with penalty txs. I
>>> assume, that when someone commits obsolete commitment tx, my node
>>> automatically commit penalty transaction.
>>>
>>> What if fees suddenly increases? Can my node use RBF to increase fee?
>>>
>>> Is there any approach common to major 3 implementations?
>>>
>>> How much time (how many blocks) do my node have to commit penalty tx? Is
>>> there some value common for implementations?
>>>
>>> Best regards,
>>> Cezary Dziemian
>>> ___
>>> Lightning-dev mailing list
>>> Lightning-dev@lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>>
>>
>>
>> --
>> https://www.rene-pickhardt.de
>>
>> Skype: rene.pickhardt
>>
>> mobile: +49 (0)176 5762 3618
>>
>

-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Base AMP

2018-11-20 Thread René Pickhardt via Lightning-dev
Hey List,

as this base AMP proposal seems pretty small I just started to write this
up to make a PR for BOLT04 and BOLT11. While doing my write up I realize
that there are smaller things that I would want to verify / double check
and propose with you.

## Verifying:
1.) I understand the receiving node signals support for Base AMP by setting
a feature bit in the BOLT11 String
2.) The sending node signals a multipath payment by setting a feature bit
and by using the same `amount to forward` value in the last hop of the
onion for all paths which will also be bigger that the incoming htlcs whose
sum has to be at least the size of `amount_to_forward`.

## Clarifying:
3.) Senders MUST NOT (SHOULD NOT?) create paths which would have to be
merged by intermediary nodes (as we don't know - and have no means of
querying - if they support the format of the adepted onion packages for
partial paths. Also it even seems impossible since the rest of the path for
at least one partial path could not be stored in the onion / forwarded
onions can't be seen)

## Proposing:
Should we specify an algorithm for executing a multipath payment for the
sending node or should this be left to the implementation. An obvious Idea
for an algorithm would be a divide and conquer scheme which should be
obvious with the following python style pseudo code:

def pay_base_amp(amount):
   success = False
   for route in get_available_routes():
   success = send_via_route(route, amount)
if not success:
   pay_base_amp(amount/2 + 1) # the +1 is to mitigate rounding errors.
there could be other ways to do so.
   pay_base_amp(amount/2 + 1)

Even if we leave the exact AMP execution to the sender we could still
suggest this divide and conquer scheme in BOLT 04

Another idea I had (which is probably a bad one as it allows for probing of
channel balances) would be to allow nodes on a partial path to send back
some hints of how much additional capacity they can forward if they see
that the partial payment feature bit is set (this would require to set this
feature bit in every onion) Also if we want to make use of this information
every node would have to support base amp. So I guess this idea is bad for
several reasons. Still we could have a MAY rule out of it?

best Rene


On Fri, Nov 16, 2018 at 4:45 PM Anthony Towns  wrote:

> On Thu, Nov 15, 2018 at 11:54:22PM +, ZmnSCPxj via Lightning-dev wrote:
> > The improvement is in a reduction in `fee_base_msat` in the C->D path.
>
> I think reliability (and simplicity!) are the biggest things to improve
> in lightning atm. Having the flag just be incuded in invoices and not
> need to be gossiped seems simpler to me; and I think endpoint-only
> merging is better for reliability too. Eg, if you find candidate routes:
>
>   A -> B -> M -- actual directed capacity $6
>   A -> C -> M -- actual directed capacity $5.50
>   M -> E -> F -- actual directed capacity $6
>   A -> X -> F -- actual directed capacity $7
>
> and want to send $9 form A to F, you might start by trying to send
> $5 via B and $4 via C.
>
> With endpoint-only merging you'd do:
>
>$5 via A,B,M,E,F -- partial success
>$4 via A,C,M,E -- failure
>$4 via A,X,F -- payment completion
>
> whereas with in-route merging, you'd do:
>
>$5 via A,B,M -- held
>$4 via A,C,M -- to be continued
>$9 via M,E -- both partial payments fail
>
> which seems a fair bit harder to incrementally recover from.
>
> > Granted, current `fee_base_msat` across the network is very low
> currently.
> > So I do not object to restricting merge points to ultimate payees.
> > If fees rise later, we can revisit this.
>
> So, while we already agree on the approach to take, I think the above
> provides an additional rationale :)
>
> Cheers,
> aj
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Strawman BOLT11 static "offer" format using probes.

2018-11-16 Thread René Pickhardt via Lightning-dev
Dear Rusty,

I am not getting this proposal (maybe I am lacking some technical basic
understandings) however I decided to ask more questions in order to
complete my onboarding process faster and hope this is fine.

My problem starts with the fact that I can't find the term "lightning probe
message" in the current BOLTs  (actually the term probe only occures two
times and these seem unrelated to what you are talking about) so I am
confused what this is.
As far as I understand your proposal from a high level the payer is
supposed to create an onion package which triggers the offering of HTLCs
with some additional metadata so that the receipient of the final onion can
answer with a BOLT11 invoice. What I don't get is the fact that a payment
hash needs to be known in order to offer HTLCs.
Though I imagine you ment it differently I would not see a problem with the
payer to know the preimage in advance as he is creating the entire onion on
his behalf and sponanious without invoice anyway. However I don't get why a
returned BOLT11 invoice is needed then. I assume that my previouse
statement is wrong anyway since you don't mention anywhere how the preimage
would be send from the payer to the payee.

In general I was wondering (already during the summit) why we don't include
a connection oriented communication layer on top of the current protocol
which would allow payer and payee to communicate more efficiently about
payment and routing process and to negotiate stuff like spontaneos
payments. I see two reasons against this: 1.) more synchronous
communication makes stuff more complicated to implement and 2.) privacy
concerns.
Am I missing something here? (and sorry for splitting the topic but I
didn't want to start a new one when it actually seems to fit to this
proposal.

best Rene

On Thu, Nov 15, 2018 at 4:57 AM Rusty Russell  wrote:

> Hi all,
>
> I want to propose a method of having reusable BOLT11 "offers" which
> provide almost-spontaneous payments as well as not requiring generating
> a BOLT11 invoice for each potential sale.
>
> An "offer" has a `p` field of 26 bytes (128 bits assuming top two are 0)
> (which is ignored by existing nodes).  The payer uses a new lightning
> probe message using the current onion format we use for HTLCs to
> retreive the complete invoice.
>
> The format of the final-hop lightning onion would contain:
>
> [whatever-marker-we-need?][128-bit-`p`-field][[type,len,data]+]
>
> We would probably define a few optional types to start:
>
> 1. quantity: for ordering multiple of an item, default 1.
> 2. delivery-address: steal from
> https://www.w3.org/TR/vcard-rdf/#Delivery_Addressing_Properties ?
> 3. signature: basically a blob so payer can prove it was them.
>
> The return lightning message would contain a new bolt11 invoice (perhaps
> we optimize some fields by copying from the bolt11 offer if they don't
> appear?), and an additional field:
>
> `m` (27) `data_length` 52.  Merkle hash of fields payer provided
> in onion msg above, and the offer `p` value.
>
> The payer checks the signature is correct, `m` is correct, and uses the
> invoice to pay as normal.  The bolt11 offer + fields-from-onion + bolt11
> invoice + preimage is the complete proof of payment.
>
> Refinements
> ---
>
> We can generate alternate leaves for the merkle tree (using
> SHA256(shared-secret | leafnum)) so revealing the `m` value doesn't risk
> revealing your delivery-address for example.
>
> The return needs to list the fields it *didn't* include in the merkle
> because it didn't accept them (the merchant doesn't want to be bound to
> conditions it doesn't understand!).
>
> We could add a `k` field to the bolt11 offer to allow the final invoice
> to delegated to a separate key.
>
> The default `x` (expiry) field for an offer which does not have an
> old-style 53-byte `p` field (ie. a "pure" offer) could be infinite.
>
> We could merkelize the delivery-address too :)
>
> I've handwaved a bit over the detailed format, because there are other
> things we want to put in the onion padding, and because the return is
> similar to the "soft-error"/"partial payment ack" proposals.
>
> Results
> ---
>
> This gives us static invoicing, and a single static invoice (without an
> amount field) can thus be used to approximate "spontaneous" donations,
> while still providing proof of payment; indeed, providing
> non-transferrable proof-of-payment since the invoice now commits to the
> payer-provided signature.
>
> It also provides a platform for recurring payments: while we can do this
> with preimage-is-next-payment_hash, that requires pre-generation and
> isn't compatible with static invoices.
>
> I apologize that this wasn't fleshed out before the summit, but I
> overestimated the power of Scriptless Scripts so had mentally deferred
> this.
>
> Thanks!
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> ht

[Lightning-dev] Proposal to include some form of best effort routing, fragmentation and local AMP

2018-11-16 Thread René Pickhardt via Lightning-dev
Good morning list,

in Adelaide I had a long conversation with another participant who
complained that slow payments and failing payments are still a major UX
issue for people that try to use the lightning network. While I believe
this is a very valid point and we might want to think heavily about
altering some design decisions after BOLT1.1 in order to mitigate this I
want to make another proposal which could still be an improvement to some
of the problems that we currently have with path finding. This proposal is
basically a standalone thread to my suggestions sketched in Connors PR at
https://github.com/lightningnetwork/lightning-rfc/pull/503 for non strict
forwarding.

I propose to implement a second routing algorithm that works on the
principle of best effort routing / forwarding with the help of local
payment fragmentation or maybe better called local AMP. I understand this
sounds drastic to start with, in particular since it seems that the
destination has to be known but I believe there  are ways to still keep up
with privacy.

The core idea is to allow intermediate nodes to fragment an HTLC to
something similar as Base AMP to reach the next hop that was specified in
the onion. This would still allow to forward a payment and allow the next
hop to to continue with the regular onion package.

The idea is if Alice is supposed to forward an HTLC to Bob with a value
smaller than their channel capacity but alice has not enough funds on her
side alice could try to fragment the payment and try to find several paths
(or maybe just one path without splitting) to Bob.

One particular strategy to find such a path would be to share friend of a
friend (FOAF) information.
Alice could look at the nodes that both she and Bob are connected to and
use them as bridge nodes for the payment. In particular she could even ask
Bob how much inbound capacity he has from those nodes. In case Bob would
share this information about the channel balance of mutual friends it could
deterministically be decided if the original HTLC could be forwarded from
Alice to Bob in a fragmented way.

With the autopilot we are trying to create many triangles in the network so
that there is always the chance for friend of a friend nodes which could be
used with this approach.

On the downside the original payer would have to allow a package to be
locally fragmented by including more fees at each hop and also by
increasing the CLTV deltas in each hop (so that an additional hop can be
included and financed).

With the suggestion I made the payer can still select the basic route and
the full route would still be private. The sender however could chose paths
on which a lot of common friends exist for each pair of nodes on the path
(thereby increasing the probability that the local fragmentation of the
payment has a higher likelihood to be successful)

Of course another solution is if we think that local AMP is too complex and
expensive we could still have the two nodes that fail to forward the htlcs
work collaboratively to find a path between them and return this
information about such a (multi) path as a routing suggestion in the error
message so that the adapted onion package could be tried and sent by the
payer.

best regards Rene

-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-10 Thread René Pickhardt via Lightning-dev
Dear Rusty,

thanks for the initiative. You suggested in your paragraph "messages
changes during splicing" during splicing to duplicate each commitment
transaction. One which spends the old funding tx and one which spends the
spliced tx. I believe this can be simplified. Though I think my workflow
pretty much resembles what you have written in "Splice Signing" from point
1. to 6. Maybe I might have misunderstood some parts of your suggestion.

I will not write this down as formal as your proposal as I believe we are
currently in the feedback and discussion phase. Maybe you already had
"those details" that I am suggesting in mind. In that case sorry for my
mail.

So let us take the example of Splicing in:
* The situation before splicing is that we have one output in our funding
tx that is being spent with each commitment tx. (actually if the channel
was spliced before we have more inputs but that should not change anything)
* Splice in would create one additional output that can be spent in future
commitment tx.
* I propose while splicing in this output should be spent by a special
commitment tx which goes to the funder of the splicing operation. This
should happen before the actual funding takes place. The other commitment
tx spending the original output continues to operate (assuring non blocking
splice in operation).
* Once we have enough confirmations we merge the channels (either
automatically or with the next channel update). A new commitment tx is
being created which now spends each output of each of the two funding tx
and assigns the channel balance to the channel partners accordingly to the
two independent channels. The old commitment txs are being invalidated.
* The disadvantage is that while splicing is not completed and if the
funder of the splicing tx is trying to publish an old commitment tx the
node will only be punished by sending all the funds of the first funding tx
to the partner as the special commitment tx of the 2nd output has no newer
state yet.

I believe splicing out is even safer:
* One just creates a spent of the funding tx which has two outputs. One
output goes to the recipient of the splice out operation and the second
output acts as a new funding transaction for the newly spliced channel.
Once signatures for the new commitment transaction are exchanged (basically
following the protocol to open a channel) the splicing operation can be
broadcasted.
* The old channel MUST NOT be used anymore but the new channel can be
operational right away without blockchain confirmation. In case someone
tries to publish an old state of the old channel it will be a double spent
of the splicing operation and in the worst case will be punished and the
splicing was not successful. if one publishes an old state of the new
channel everything will just work as normal even if the funding tx is not
yet mined. It could only be replaced with an old state of the previous
channel (which as we saw is not a larger risk than the usual operation of a
lightning node)

As mentioned maybe you had this workflow already in your mind but I don't
see why we need to send around all the messages twice with my workflow. We
only need to maintain double state but only until it is fair / safe to do
so. I would also believe that with my approach it should be possible (but
not really necessary) to have multiple splicing operations in parallel.

One other question: What happens to the short_channel_id of a channel to
which founds have been spliced in?

best Rene

On Wed, Oct 10, 2018 at 5:46 AM Rusty Russell  wrote:

> Hi all!
>
> We've had increasing numbers of c-lightning users get upset they
> can't open multiple channels, so I guess we're most motivated to allow
> splicing of existing channels.  Hence this rough proposal.
>
> For simplicity, I've chosen to only allow a single splice at a time.
> It's still complex :(
>
> Feedback welcome!
> --
> Splice Negotiation:
>
> 1. type: 40 (`splice_add_input`) (`option_splice`)
> 2. data:
>* [`32`:`channel_id`]
>* [`8`: `satoshis`]
>* [`32`: `prevtxid`]
>* [`4`: `prevtxoutnum`]
>* [`2`: `scriptlen`]
>* [`scriptlen`: `scriptpubkey`]
>
> 1. type: 41 (`splice_add_output`) (`option_splice`)
> 2. data:
>* [`32`:`channel_id`]
>* [`8`: `satoshis`]
>* [`2`: `scriptlen`]
>* [`scriptlen`: `outscript`]
>
> 1. type: 42 (`splice_all_added`) (`option_splice`)
> 2. data:
>* [`32`:`channel_id`]
>* [`4`:`feerate_per_kw`]
>* [`4`:`minimum_depth`]
>
> Each side sends 0 or more `splice_add_input` and 0 or more
> `splice_add_output` followed by `spice_all_added` to complete the splice
> proposal.  This is done either to initiate a splice, or to respond to a
> `splice_*` from the other party.
>
> `splice_add_input` is checked for the following:
> - must not be during a current splice
> - scriptpubkey is empty, or of form 'HASH160 <20-byte-script-hash> EQUAL'
> - `satoshis` doesn't wrap on addition.
> - MAY check that it matches outpoint specifie

Re: [Lightning-dev] Measuring centrality of nodes in LN graph

2018-08-27 Thread René Pickhardt via Lightning-dev
Hey Kulpreet,

thanks for this nice overview article! I have just today implemented a
first draft for the c-lightning autopilot [0]. I have implemented 4
heuristics to select nodes to which one could connect. On of those [1]
samples from the nodes that contribute to the high diameter. This heuristic
was included not to increase the utility of the node that is running the
autopilot but to improve the network properties. I believe that this
heuristic should also reduce the articulation points and biconnected
components that you mention in your article. As endpoints of longest
shortest paths will most likely be in two different biconnectivity
components (at least if those exist and have a certain size).

Regarding the centrality. I also calculated the betweeness centrality and
have similar results  [2] to yours. I guess the difference will be due to
the fact that we don't work on the exact same snapshot. My autopilot
implementation also connects to a few rather central nodes. I doubt this is
useful for the network but I guess it is good for the node running the
autopilot since it gains access to many nodes. ( Actually I think - but
don't know - that in combination with [1] it even helps the network).

Regarding your 200 Articulation points I would guess that many of those are
just nodes that only have one channel with the node that acts as an
articulation point. I guess this is not something that we would need to
take care of so much since it is also in the responsibility of those nodes
to have more than one channel. for larger biconnectivity components the
problem would probably be resolved with the above mentioned heuristic.
Therefor I believe looking at the articulation points should not be our
main focus.

Something that (regarding the autopilot) I am currently missing in your
article is how much funds should be allocated for the suggested channels. I
am currently experimenting with a probability density function that is
proportional to the average capacity of each node in the candidate set. I
smooth this with a uniform distribution. However simulations at this point
are quite expensive (if done primitively since the centralities have to be
recomputed) I guess this would be a nice future work. I will probably
tomorrow publish the rest of the code for the lib-autopilot that uses this
heuristic for channel balances since I am currently still working on it.

If you consider working more on the autopilot but also on research related
to this I would also suggest the following resources [3],[4] and [5]

[0] https://github.com/ElementsProject/lightning/pull/1888
[1]
https://github.com/renepickhardt/lightning/blob/8c91f57490b51f772513a274d679d3ab62e7201a/contrib/lib-autopilot.py#L205
[2] https://twitter.com/renepickhardt/status/1034066602273193985
[3] https://github.com/lightningnetwork/lnd/issues/677
[4]
https://github.com/renepickhardt/Automatically-Generating-a-Robust-Topology-for-the-Lightning-Network-on-top-of-Bitcoin
[5]
https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin/

best regards Rene


On Mon, Aug 27, 2018 at 11:59 PM Kulpreet Singh  wrote:

> Hi all,
>
> I have been thinking about how we could measure the centrality of
> various nodes in the LN graph and the dependence on some nodes to
> route payments and to prevent network partitions. I think measuring
> and tracking the changes in key metrics could help the community
> decide on which nodes to open new channels with.
>
> I measured the centrality of nodes and the central point dominance as
> defined in the seminal paper by Lindon C. Freeman, "A Set of Measures
> of Centrality Based on Betweenness", Sociometry 40, pp. 35-41, 1977.
>
> I also measured the number of articulation points in the network as
> per Robert E. Tarjan, "Depth first search and linear graph algorithms"
> SIAM Journal on Computing, 1(2):146-160, 1972.
>
> I want to add, that this is just a start, I understand that we should
> probably look at treating LN as a directed graph and that we should do
> some work in trying to do some analysis based on treating LN as a flow
> network.
>
> However, I am eager to share my early results and would welcome any
> feedback or suggestions on the way forward.
>
> I wrote a medium post describing the approach and show my results
> there. I also elaborate on the choice of the two metrics and what they
> mean for LN. The post is available here:
>
> https://medium.com/@jungly/measuring-node-centrality-in-lightning-network-8102a5f0
>
> Looking forward to your suggestions and feedback.
>
> Thanks
> Kulpreet
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mail

[Lightning-dev] W3C Web Payments Working Group / Payment Request API

2018-08-23 Thread René Pickhardt via Lightning-dev
Hey lightning devs,

I was wondering if any of the companies here are members of W3C  and if
anyone here could be member of the W3C Web Payments Working Group (c.f.:
https://www.w3.org/Payments/WG/ )? According to this mail
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-March.txt
Christian Decker is a member. Which I think would be awesome!

They have just released their candidate recommendation for a payment API
at: https://www.w3.org/TR/payment-request/ According to their site the
proposed recommendation will be published not earlier than October 31st
2018. They are currently looking for feedback in their github repository
at: https://github.com/w3c/payment-request/

I can see that they have bitcoin somewhat on their mind. But I guess it
would be even cooler if we could make sure that lightning payments will
also be compatible with their recommendation.

Christian - if you really are a member -  could you give us an update on
that work? How relevant is it for us?

best Rene

-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread René Pickhardt via Lightning-dev
Sorry did not realized that BOLTs are the equivalent - and aparently many
people I spoke to also didn't realize that.

I thought BOLT is the protocol specification and the bolts are just the
sections. And the BOLT should be updated to a new version.

Also I suggested that this should take place for example within the
lightning rfc repo. So my suggestion was not about creating another place
but more about making the process more transparent or kind of filling the
gap that I felt was there.

I am sorry for spaming mailboxes with my suggestion just because I didn't
understand the current process.


Olaoluwa Osuntokun  schrieb am So., 22. Juli 2018 20:59:

> We already have the equiv of improvement proposals: BOLTs. Historically
> new standardization documents are proposed initially as issues or PR's when
> ultimately accepted. Why do we need another repo?
>
> On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
> lightning-dev@lists.linuxfoundation.org> wrote:
>
>> Hey everyone,
>>
>> in the grand tradition of BIPs I propose that we also start to have our
>> own LIPs (Lightning Network Improvement proposals)
>>
>> I think they should be placed on the github.com/lightning account in a
>> repo called lips (or within the lightning rfc repo) until that will happen
>> I created a draft for LIP-0001 (which is describing the process and is 95%
>> influenced by BIP-0002) in my github repo:
>>
>> https://github.com/renepickhardt/lips  (There are some open Todos and
>> Questions in this LIP)
>>
>> The background for this Idea: I just came home from the bitcoin munich
>> meetup where I held a talk examining BOLT. As I was asked to also talk
>> about the future plans of the developers for BOLT 1.1 I realized while
>> preparing the talk that many ideas are distributed within the community but
>> it seems we don't have a central place where we collect future enhancements
>> for BOLT1.1. Having this in mind I think also for the meeting in Australia
>> it would be nice if already a list of LIPs would be in place so that the
>> discussion can be more focused.
>> potential LIPs could include:
>> * Watchtowers
>> * Autopilot
>> * AMP
>> * Splicing
>> * Routing Protcols
>> * Broadcasting past Routing statistics
>> * eltoo
>> * ...
>>
>> As said before I would volunteer to work on a LIP for Splicing (actually
>> I already started)
>>
>> best Rene
>>
>>
>> --
>> https://www.rene-pickhardt.de
>>
>> Skype: rene.pickhardt
>>
>> mobile: +49 (0)176 5762 3618
>> ___
>> Lightning-dev mailing list
>> Lightning-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>
Am 22.07.2018 20:59 schrieb "Olaoluwa Osuntokun" :

We already have the equiv of improvement proposals: BOLTs. Historically new
standardization documents are proposed initially as issues or PR's when
ultimately accepted. Why do we need another repo?

On Sun, Jul 22, 2018, 6:45 AM René Pickhardt via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hey everyone,
>
> in the grand tradition of BIPs I propose that we also start to have our
> own LIPs (Lightning Network Improvement proposals)
>
> I think they should be placed on the github.com/lightning account in a
> repo called lips (or within the lightning rfc repo) until that will happen
> I created a draft for LIP-0001 (which is describing the process and is 95%
> influenced by BIP-0002) in my github repo:
>
> https://github.com/renepickhardt/lips  (There are some open Todos and
> Questions in this LIP)
>
> The background for this Idea: I just came home from the bitcoin munich
> meetup where I held a talk examining BOLT. As I was asked to also talk
> about the future plans of the developers for BOLT 1.1 I realized while
> preparing the talk that many ideas are distributed within the community but
> it seems we don't have a central place where we collect future enhancements
> for BOLT1.1. Having this in mind I think also for the meeting in Australia
> it would be nice if already a list of LIPs would be in place so that the
> discussion can be more focused.
> potential LIPs could include:
> * Watchtowers
> * Autopilot
> * AMP
> * Splicing
> * Routing Protcols
> * Broadcasting past Routing statistics
> * eltoo
> * ...
>
> As said before I would volunteer to work on a LIP for Splicing (actually I
> already started)
>
> best Rene
>
>
> --
> https://www.rene-pickhardt.de
>
> Skype: rene.pickhardt
>
> mobile: +49 (0)176 5762 3618
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] proposal for Lightning Network improvement proposals

2018-07-22 Thread René Pickhardt via Lightning-dev
Hey everyone,

in the grand tradition of BIPs I propose that we also start to have our own
LIPs (Lightning Network Improvement proposals)

I think they should be placed on the github.com/lightning account in a repo
called lips (or within the lightning rfc repo) until that will happen I
created a draft for LIP-0001 (which is describing the process and is 95%
influenced by BIP-0002) in my github repo:

https://github.com/renepickhardt/lips  (There are some open Todos and
Questions in this LIP)

The background for this Idea: I just came home from the bitcoin munich
meetup where I held a talk examining BOLT. As I was asked to also talk
about the future plans of the developers for BOLT 1.1 I realized while
preparing the talk that many ideas are distributed within the community but
it seems we don't have a central place where we collect future enhancements
for BOLT1.1. Having this in mind I think also for the meeting in Australia
it would be nice if already a list of LIPs would be in place so that the
discussion can be more focused.
potential LIPs could include:
* Watchtowers
* Autopilot
* AMP
* Splicing
* Routing Protcols
* Broadcasting past Routing statistics
* eltoo
* ...

As said before I would volunteer to work on a LIP for Splicing (actually I
already started)

best Rene


-- 
https://www.rene-pickhardt.de

Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Rebalancing argument

2018-07-01 Thread René Pickhardt via Lightning-dev
Hey Dmytro,

thank your for your solid input to the discussion. I think we need to
consider that the setting in the lightning network is not exactly
comparable to the one described in the 2010 paper.

1st: the paper states in section 5.2: "It appears that a mathematical
analysis of a transaction routing model where intermediate nodes charged a
routing fee would require an entirely new approach since it would
invalidate the cycle-reachability relation that forms the basis of our
results."
Since we have routing fees in the lightning network to my understanding the
theorem and lemma you quoted in your medium post won't hold.

2nd: Even if we neglect the routing fees and assume the theorem still holds
true we have two conditions that make the problem way more dynamic:
 A) In the lightning network we do not know the weights of the directed
edges. (only the sum of two opposing edges) So while theoretically the flow
in the network will only depend on the liquidity of the nodes I guess in
practice well balanced channels will increase the probability to actually
find a working route.
B) I believe the HTLCs create a situation where funds are being locked up
while routing takes place and thus have an impact to the entire flow of the
network. While Alice searches for a route for her payment a proper route
could be blocked do to the fact that Bob is using it currently. Since the
funds of Bob have not arrived Alice might also not be able to find a
different route.

However those scientific results are a strong call for Atomic Multipath
Payments. I personally think they are also a strong call for splicing since
this allows to easilly increase the flow of the network by updating a
channel (athough you might argue that following the paper this could be
achieved by just creating a new channel)

best Rene

On Sun, Jul 1, 2018 at 12:21 PM Dmytro Piatkivskyi <
dmytro.piatkivs...@ntnu.no> wrote:

> Hi everyone,
>
> I have been working academically on the Lightning network for a while now.
> I didn’t not participate in the list to form my own vision of what it
> should be. So please, bear with me if I’ll be saying nonsense sometimes.
>
> There has been a lot of discussion on sending cycle transactions to
> oneself to ‘re-balance’ the network. On LN mailing list
> 
>  [1] or
> numerous places elsewhere. There has been even a paper suggesting a smart
> mechanism to do the re-balancing (see Revive or Liquidity network [2]). My
> question is what do we actually get from it? [3] states that the
> distribution of funds in channels does not really affect the network
> liquidity. I can see cheaper fees or shorter paths if the network is kept
> balanced. But don’t you think that a smart fee strategy will do the job?
>
> To save your time, [4] explains the gist from [3].
>
> [1]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001005.html
> [2]
> https://www.reddit.com/r/ethereum/comments/7bse33/were_very_happy_to_announce_the_liquiditynetwork/
> [3] https://arxiv.org/abs/1007.0515
> [4]
> https://medium.com/@dimapiatkivskyi/why-would-you-re-balance-a-payment-network-796756ad4f31
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>


-- 
www.rene-pickhardt.de


Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Including a Protocol for splicing to BOLT

2018-06-25 Thread René Pickhardt via Lightning-dev
Hey everyone,

I found a mail from 6 month ago on this list ( c.f.:
https://lists.linuxfoundation.org/pipermail/lightning-dev/2017-December/000865.html
)
in which it was stated that there was a plan to include a splicing protocol
as BOLT 1.1 (On a side node I wonder weather it would make more sense to
include splicing to BOLT 3?) I checked out the git repo and issues and
don't see that anyone is currently working on that topic and that it hasn't
been included yet. Am I correct?
If noone works on this at the moment and the spec is still needed I might
take the initiative on that one over the next weeks. If someone is working
on this I would kindly offer my support.

The background for my question: Last weekend I have been attending the 2nd
lightninghackday in Berlin and we had quite some intensive discussions
about the autopilot feature and splicing. (c.f. a summary can be found on
my blog:
https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin
)

They people from lightning labs told me that they are currently started
working on splicing but even though it seems technically straight forward
the protocols should also be formalized. Previously I planned working on
improving the intelligence of the autopilot feature of the lightning
network however on the weekend I got convinced that splicing should be much
higher priority and the process should be specified in the lightning rfc.

Also it would be nice if someone would be willing to help out improving the
quality of the spec that I would create since it will be my first time
adding work to such a formal rfc.

best Rene


-- 
www.rene-pickhardt.de


Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Fwd: Opening channels with neighbors for cost/connectivity benefit

2018-03-28 Thread René Pickhardt via Lightning-dev
Sorry I sent this mail by accident only to Karan.

-- Forwarded message -
From: René Pickhardt 
Date: Di., 27. März 2018 11:10
Subject: Re: [Lightning-dev] Opening channels with neighbors for
cost/connectivity benefit
To: Karan Verma 


Dear karan,

There is a feature called autopilot in the lnd implementation that tries to
achieve something similar to what you describe.

However the problem is much harder than just using some plausible
heuristic. I have an open issue on github discussing these problems:
https://github.com/lightningnetwork/lnd/issues/677

>From there I also linked a draft of a whitepaper that I am working on in
which I plan to discuss ways of automatically create a well connected
network topology fitting the specific needs of the peers in the lightning
network.

Your help and ideas would be appreciated. Also you could just implement
your idea in the lnd autopilot interface

Best regards Rene Pickhardt

Karan Verma  schrieb am Di., 27. März 2018
05:58:

> Hello,
>
> The sender node doesn’t always have a route to the receiving node
> accepting lightning payments and since opening new channels is costly - I
> was wondering if there was a smarter way to open channels such that it
> increases the connectedness of the sender node with other nodes in the
> network and also possibly save money in the intended transaction.
>
> To clarify, if Bob wants to send money to Alice but doesn’t have a route
> to her. He would need to open a new channel with Alice and send the money.
> This is costly for Bob if that was the only transaction he ever wanted to
> do with Alice. However, if Alice was connected to Charlie and Dave
> (Unidirectional: Charlie -> Alice & Dave -> Alice due to the amount being
> sent). He could instead connect with Charlie/Dave or nodes connected with
> them which have a route to Alice through Charlie/Dave such that it
> minimizes the transaction cost to reaching Alice (some routes might have
> negative fee) and maximizes the number of nodes Bob can now reach through
> this channel. Lets say if Bob chose Charlie's neighbor, then he can now
> reach at-least three nodes - Charlie's neighbor, Charlie and Alice and end
> up paying less.
>
> Essentially we're sorting choice of the nodes to open a channel with by
> transaction fee and connectedness it brings to the origin node. This would
> benefit Bob in the long term and also maybe lightning network as a whole.
> I'm new to lightning and would appreciate feedback on this idea. Thanks.
>
> -Karan
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] New form of 51% attack via lightning's revocation system possible?

2018-03-13 Thread René Pickhardt via Lightning-dev
#x27;t count on there not being a 51%
> attacker, then things are pretty much broken anyway :-)
>
> Cheers,
> Christian
>
> René Pickhardt via Lightning-dev
>  writes:
> > Hey everyone,
> >
> > disclaimer: as mentioned in my other mail (
> > https://lists.linuxfoundation.org/pipermail/lightning-dev/
> 2018-March/001065.html
> > ) I am currently studying the revocation system of duplex micropayment
> > channels in detail but I am also pretty new to the topic. So I hope the
> > attack I am about to describe is not possible and it is just me
> overseeing
> > some detail or rather my lack of understanding.
> > That being said even after waiting one week upon discovery and double
> > checking the assumptions I made I am still positive that the revocation
> > system in its current form allows for a new form of a 51% attack. This
> > attack seems to be way more harmful than a successful 51% attack on the
> > bitcoin network. Afaik within the bitcoin network I could 'only double
> > spend' my own funds with a successful 51% attack. In the lightning case
> it
> > seems that an attacker could steal an arbitrary amount of funds as long
> as
> > the attacker has enough payment channels with enough balance open.
> >
> > The attack itself follows exactly the philosophy of lightning: "If a tree
> > falls in the forest and no one is around to hear it. Does it make a
> sound?"
> > In the context of the attack this would translate to: "If a 51% attacker
> > secretly mines enough blocks after fraudulently spending old commitment
> > transactions and no one sees it during the the *to_self_delay*  period,
> > have the commitment transactions been spent? (How) Can they be revoked?"
> >
> >
> > As for the technical details I quote from the spec of BOLT 3:
> > "*To allow an opportunity for penalty transactions, in case of a revoked
> > commitment transaction, all outputs that return funds to the owner of the
> > commitment transaction (a.k.a. the "local node") must be delayed for *
> > *to_self_delay** blocks. This delay is done in a second-stage HTLC
> > transaction (HTLC-success for HTLCs accepted by the local node,
> > HTLC-timeout for HTLCs offered by the local node)*"
> >
> > Assume an attacker has 51% of the hash power she could open several
> > lightning channels and in particular accept any incoming payment channel
> > (the more balance is in her channels the more lucrative the 51% attack).
> > Since the attacker already has a lot of hash power it is reasonable (but
> > not necessary) to assume that the attacker already has a lot of bitcoins
> > and is well known to honest nodes in the network which makes it even more
> > likely to have many open channels.
> >
> > The attacker keeps track of her (revocable) commitment transactions in
> > which the balance is mostly on the attackers side. Once the attacker
> knows
> > enough of these (old) commitment transactions the attack is being
> executed
> > in the following way:
> > 0.) The max value of to_self_delay is evaluated. Let us assume it is 72
> > blocks (or half a day).
> > 1.) The attacker secretly starts mining on her own but does not
> broadcasts
> > any successfully mined block. Since the attacker has 51% of the hash
> power
> > she will most likely be faster than the network to mine the 72 blocks of
> > the safety period in which fraudulent commitment transactions could be
> > revoked.
> > 2.) The attacker spends all the fraudulent (old) commitment transactions
> in
> > the first block of her secrete mining endeavor.
> > 3.) Meanwhile the attacker starts spending her own funds of her payment
> > channels e.g on decentralized exchanges for any other (crypto)currency.
> > 4.) As soon as the attacker has mined enough blocks that the commitment
> > transactions cannot be revoked she broadcasts her secretly minded
> > blockchain which will be accepted by the network as it is the longest
> > chain. (In Particular she includes all the other bitcoin transactions
> that
> > are also in the original public blockchain so that other people don't
> even
> > realize something suspicious has happened.)
> >
> > Since according to the spec channels should never be balanced worse than
> > 99% to 1% the attacker could steal up to 99% of all the bitcoins
> allocated
> > in the sum of all payment channels the attacker was connected to. This
> > amount could obviously be way higher than just double spending her own
> > funds. This attack would be interesting in particular for the pow

[Lightning-dev] New form of 51% attack via lightning's revocation system possible?

2018-03-13 Thread René Pickhardt via Lightning-dev
Hey everyone,

disclaimer: as mentioned in my other mail (
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-March/001065.html
) I am currently studying the revocation system of duplex micropayment
channels in detail but I am also pretty new to the topic. So I hope the
attack I am about to describe is not possible and it is just me overseeing
some detail or rather my lack of understanding.
That being said even after waiting one week upon discovery and double
checking the assumptions I made I am still positive that the revocation
system in its current form allows for a new form of a 51% attack. This
attack seems to be way more harmful than a successful 51% attack on the
bitcoin network. Afaik within the bitcoin network I could 'only double
spend' my own funds with a successful 51% attack. In the lightning case it
seems that an attacker could steal an arbitrary amount of funds as long as
the attacker has enough payment channels with enough balance open.

The attack itself follows exactly the philosophy of lightning: "If a tree
falls in the forest and no one is around to hear it. Does it make a sound?"
In the context of the attack this would translate to: "If a 51% attacker
secretly mines enough blocks after fraudulently spending old commitment
transactions and no one sees it during the the *to_self_delay*  period,
have the commitment transactions been spent? (How) Can they be revoked?"


As for the technical details I quote from the spec of BOLT 3:
"*To allow an opportunity for penalty transactions, in case of a revoked
commitment transaction, all outputs that return funds to the owner of the
commitment transaction (a.k.a. the "local node") must be delayed for *
*to_self_delay** blocks. This delay is done in a second-stage HTLC
transaction (HTLC-success for HTLCs accepted by the local node,
HTLC-timeout for HTLCs offered by the local node)*"

Assume an attacker has 51% of the hash power she could open several
lightning channels and in particular accept any incoming payment channel
(the more balance is in her channels the more lucrative the 51% attack).
Since the attacker already has a lot of hash power it is reasonable (but
not necessary) to assume that the attacker already has a lot of bitcoins
and is well known to honest nodes in the network which makes it even more
likely to have many open channels.

The attacker keeps track of her (revocable) commitment transactions in
which the balance is mostly on the attackers side. Once the attacker knows
enough of these (old) commitment transactions the attack is being executed
in the following way:
0.) The max value of to_self_delay is evaluated. Let us assume it is 72
blocks (or half a day).
1.) The attacker secretly starts mining on her own but does not broadcasts
any successfully mined block. Since the attacker has 51% of the hash power
she will most likely be faster than the network to mine the 72 blocks of
the safety period in which fraudulent commitment transactions could be
revoked.
2.) The attacker spends all the fraudulent (old) commitment transactions in
the first block of her secrete mining endeavor.
3.) Meanwhile the attacker starts spending her own funds of her payment
channels e.g on decentralized exchanges for any other (crypto)currency.
4.) As soon as the attacker has mined enough blocks that the commitment
transactions cannot be revoked she broadcasts her secretly minded
blockchain which will be accepted by the network as it is the longest
chain. (In Particular she includes all the other bitcoin transactions that
are also in the original public blockchain so that other people don't even
realize something suspicious has happened.)

Since according to the spec channels should never be balanced worse than
99% to 1% the attacker could steal up to 99% of all the bitcoins allocated
in the sum of all payment channels the attacker was connected to. This
amount could obviously be way higher than just double spending her own
funds. This attack would be interesting in particular for the power nodes
created by the Barabasi-Albert model of lnd's autopilot (c.f.:
https://github.com/lightningnetwork/lnd/issues/677 ).

I understand that with the growth of the bitcoin (mining) network a 51%
attack becomes less and less likely. Also I am very happy to be proven
false about the attack that I am describing.

Another sad thing about this attack is that I currently do not see any
(reasonable) way of preventing this form of a 51% attack (other than
creating payment channels that don't offer the possibility of revocation)
as it is abusing exactly the core idea of lightning to do something in
secret without broadcasting it.

Best regards Rene

---

http://www.rene-pickhardt.de
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Pinging a route for capacity

2018-03-03 Thread René Pickhardt via Lightning-dev
Hey everyone,

disclaimer I am new here and have not a full understanding of the complete
specs yet - however since I decided to participate in lighting dev I will
just be brave and try to add my ideas on the problem jimpo posed. So even
in case by ideas are complete bs please just tell me in a friendly way and
I know I need to read more code and specs in order to participate.

Reading about the described problem techniques like IP-Fragmentation (
https://en.wikipedia.org/wiki/IP_fragmentation ) come to my mind. The
setting is a little bit different but in from my current understanding it
should still be applicable and also be the favorable solution in comparison
to the proposed ping:

1.) IP setting: In IP-Fragmentation one would obviously just split the
IP-package in order to utilize a link layer protocol that doesn't have
enough bandwidth for a bigger size package.
2.) Lightning case: When the capacity of a channel during routing is not
high enough - which means that the channel balance on that side is
somewhere between 0 and the size of the payment - one would have to to send
the second part of the fragmented package on a different route. This is
obvious since the insufficient channel balance cannot come out of thin air
(as in IP-Routing).

My first intuition was that this would become a problem for onion routing
since the router in question does not know the final destination but only
knows the next hop which can't be utilized as the channel doesn't have
enough funds. However I imagine one could just encapsulate the second part
of the fragmented payment in a new onion routed package that goes on a
detour (using different payment channels) to the original "next" hop und
progresses from there as it was originally thought of.

Not sure however how the impacts to the HTLC would be and if it would
actually be possible to fragment a payment that is encapsulated within the
onion routing.

If possible the advantage in comparison to your proposed ping method is
that fragmentation would be highly dynamic and still work if a channel runs
out of funds while routing payment. In your ping scenario it could very
well happen that you do a ping for a designated rout, everything looks
great, you send a payment but while it is on its way a channel on that way
could run dry.

best Rene


On Thu, Mar 1, 2018 at 3:45 PM, Jim Posen  wrote:

> My understanding is that the best strategy for choosing a route to send
> funds over is to determine all possible routes, rank them by estimated fees
> based on channel announcements and number of hops, then try them
> successively until one works.
>
> It seems inefficient to me to actually do a full HTLC commitment handshake
> on each hop just to find out that the last hop in the route didn't have
> sufficient remaining capacity in the first place. Depending on how many
> people are using the network, I could also forsee situations where this
> creates more payment failures because bandwidth is locked up in HTLCs that
> are about to fail anyway.
>
> One idea that would likely help is the ability to send a ping over an
> onion route asking "does every hop have capacity to send X msat?" Every hop
> would forward the onion request if the answer is yes, or immediately send
> the response back up the circuit if the answer is no. This should reveal no
> additional information about the channel capacities that the sender
> couldn't determine by sending a test payment to themself (assuming they
> could find a loop). Additionally, the hops could respond with the latest
> fee rate in case channel updates are slow to propagate.
>
> The main benefit is that this should make it quicker to send a successful
> payment because latency is lower than sending an actual payment and the
> sender could ping all possible routes in parallel, whereas they can't send
> multiple payments in parallel. The main downside I can think of is that,
> by the same token, it is faster and cheaper for someone to extract
> information about channel capacities on the network with a binary search.
>
> -jimpo
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>


-- 
www.rene-pickhardt.de


Skype: rene.pickhardt

mobile: +49 (0)176 5762 3618
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Christian Deckers Duplex Micropayment Channels vs Lightning networks revocation key solution

2018-03-03 Thread René Pickhardt via Lightning-dev
Hey everyone,

as mentioned before I am quite new to lightning dev. Should the questions
I'll ask be too basic please drop me a kind note and I will be more quite
or ask my questions on other places.

Today I studied Chrstian Deckers nice work about duplex micropayment
channels (
http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf

 )

I am wondering what was the rational for the lightning spec (
https://github.com/lightningnetwork/lightning-rfc ) to go with the
revocation key system instead of the solution by Christian Decker to the
problem? I understand that the revocation system was already in the
whitepaper and at the time of writing the whitepaper the work by Christian
Decker wasn't out yet. But I guess this will not be the entire reason.

To me the key revocation system seems pretty complex to handle. In
particular as Rusty also mentioned on his article (c.f.
https://medium.com/@rusty_lightning/lightning-watching-for-cheaters-93d365d0d02f
 ) that already in the white paper people knew that potentially a third
party observing service to detect a cheater is needed. This seems to me
like a big drawback.

So what have been strong arguments to go with the revocation system?

On a side note I would like to state my respect to you: The lightning
network (in combination with bitcoin) is really the most beautiful piece of
technology I came across since I learnt about TCP/IP. Great work everybody
for creating such an amazing technology and bringing together all those
beautiful ideas. I am very confident that this technology will become
history.

best Rene

-- 
www.rene-pickhardt.de


Skype: rene.pickhardt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Pinging a route for capacity

2018-03-01 Thread René Pickhardt via Lightning-dev
Hey everyone,

disclaimer I am new here and have not a full understanding of the complete
specs yet - however since I decided to participate in lighting dev I will
just be brave and try to add my ideas on the problem jimpo posed. So even
in case by ideas are complete bs please just tell me in a friendly way and
I know I need to read more code and specs in order to participate.

Reading about the described problem techniques like IP-Fragmentation (
https://en.wikipedia.org/wiki/IP_fragmentation ) come to my mind. The
setting is a little bit different but in from my current understanding it
should still be applicable and also be the favorable solution in comparison
to the proposed ping:

1.) IP setting: In IP-Fragmentation one would obviously just split the
IP-package in order to utilize a link layer protocol that doesn't have
enough bandwidth for a bigger size package.
2.) Lightning case: When the capacity of a channel during routing is not
high enough - which means that the channel balance on that side is
somewhere between 0 and the size of the payment - one would have to to send
the second part of the fragmented package on a different route. This is
obvious since the insufficient channel balance cannot come out of thin air
(as in IP-Routing).

My first intuition was that this would become a problem for onion routing
since the router in question does not know the final destination but only
knows the next hop which can't be utilized as the channel doesn't have
enough funds. However I imagine one could just encapsulate the second part
of the fragmented payment in a new onion routed package that goes on a
detour (using different payment channels) to the original "next" hop und
progresses from there as it was originally thought of.

Not sure however how the impacts to the HTLC would be and if it would
actually be possible to fragment a payment that is encapsulated within the
onion routing.

If possible the advantage in comparison to your proposed ping method is
that fragmentation would be highly dynamic and still work if a channel runs
out of funds while routing payment. In your ping scenario it could very
well happen that you do a ping for a designated rout, everything looks
great, you send a payment but while it is on its way a channel on that way
could run dry.

best Rene


On Thu, Mar 1, 2018 at 3:45 PM, Jim Posen  wrote:

> My understanding is that the best strategy for choosing a route to send
> funds over is to determine all possible routes, rank them by estimated fees
> based on channel announcements and number of hops, then try them
> successively until one works.
>
> It seems inefficient to me to actually do a full HTLC commitment handshake
> on each hop just to find out that the last hop in the route didn't have
> sufficient remaining capacity in the first place. Depending on how many
> people are using the network, I could also forsee situations where this
> creates more payment failures because bandwidth is locked up in HTLCs that
> are about to fail anyway.
>
> One idea that would likely help is the ability to send a ping over an
> onion route asking "does every hop have capacity to send X msat?" Every hop
> would forward the onion request if the answer is yes, or immediately send
> the response back up the circuit if the answer is no. This should reveal no
> additional information about the channel capacities that the sender
> couldn't determine by sending a test payment to themself (assuming they
> could find a loop). Additionally, the hops could respond with the latest
> fee rate in case channel updates are slow to propagate.
>
> The main benefit is that this should make it quicker to send a successful
> payment because latency is lower than sending an actual payment and the
> sender could ping all possible routes in parallel, whereas they can't send
> multiple payments in parallel. The main downside I can think of is that,
> by the same token, it is faster and cheaper for someone to extract
> information about channel capacities on the network with a binary search.
>
> -jimpo
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
>


-- 
Skype: rene.pickhardt
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev