Re: [Lightning-dev] Blind Signing Considered Harmful

2022-05-09 Thread ZmnSCPxj via Lightning-dev
Good morning devrandom,

It seems to me that a true validating Lightning signer would need to be a 
Bitcoin node with active mitigation against eclipse attacks, the ability to 
monitor the blockheight, and the ability to broadcast transactions.

Otherwise, a compromised node can lie and tell the signer that the block height 
is much lower than it really is, letting the node peers clawback incoming HTLCs 
and claim outgoing HTLCs, leading to a net loss of funds in the forwarding case.

Looking at the link, it seems to me that you have a "UTXO Set Oracle"; does 
this inform your `lightning-signer` about block height and facilitate 
transaction broadcast?
Is this intended to be a remote device from the `lightning-signer` device?
If so, what happens if the connection between the "UTXO Set Oracle" remote 
device and the `lightning-signer` is interrupted?

In particular:

* Incoming forward arrives.
* Compromised node accepts the incoming HTLC and offers outgoing HTLC.
  * Presumably the `lightning-signer` signs off on this, as long as the 
outgoing HTLC is of lower value etc etc.
* Compromised node stops communicating with the `lightning-signer`.
* Outgoing HTLC times out, but compromised node and the outgoing peer do 
nothing.
* Incoming HTLC times out, and the incoming peer unilaterally closes the 
channel, claiming the timelock branch of the HTLC onchain.
* Outgoing peer unilaterally closes the channel, claiming the hashlock branch 
of the outgoing HTLC onchain.

Unless the `lightning-signer` unilaterally closes the channel when the outgoing 
HTLC times out and actively signs and broadcasts the timelock branch for the 
outgoing HTLC, then this leads to funds loss.
This requires that the `lightning-signer` be attached to a Bitcoin node that is 
capable of:

* Actively finding and connecting to multiple Bitcoin peers.
* Actively checking the block header chain (acceptable at only SPV security 
since you really only care about blockheight, and have a UTXO Set Oracle which 
upgrades the rest of your security from SPV to full?).
* Actively broadcasting unilateral closes and HTLC timelock claims for outgoing 
HTLCs.

Is that how `lightning-signer` is designed?
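The force-close policy implied by the scenario above can be sketched minimally. 
All names and the API shape here are hypothetical, not the actual 
`lightning-signer` interface; the point is only that the signer must act on its 
*own* view of block height, not the node's:

```python
# Hypothetical sketch: the signer tracks block height via its own attached
# (at least SPV-capable) Bitcoin node, and when an outgoing HTLC's timelock
# expires unresolved, it unilaterally closes and claims the timelock branch.
from dataclasses import dataclass

@dataclass
class OutgoingHtlc:
    channel_id: str
    cltv_expiry: int      # absolute block height of the timelock branch
    resolved: bool = False

class ValidatingSigner:
    def __init__(self):
        self.outgoing = []

    def on_new_block(self, height: int) -> list:
        """Return actions for the attached Bitcoin node to broadcast."""
        actions = []
        for htlc in self.outgoing:
            if not htlc.resolved and height >= htlc.cltv_expiry:
                # Node stopped cooperating or the peer is stalling: go onchain.
                actions.append(("force_close", htlc.channel_id))
                actions.append(("claim_timelock_branch", htlc.channel_id))
        return actions

signer = ValidatingSigner()
signer.outgoing.append(OutgoingHtlc("chan0", cltv_expiry=700_000))
print(signer.on_new_block(699_999))  # [] -- nothing to do yet
print(signer.on_new_block(700_000))  # force-close and claim
```

Crucially, `on_new_block` must be driven by the signer's own eclipse-resistant 
view of the chain; if the compromised node supplies `height`, the sketch 
degenerates into exactly the attack described above.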

This seems to be listed in: 
https://gitlab.com/lightning-signer/docs/-/wikis/Potential-Exploits

> an HTLC is failed and removed on the input before it is removed on the 
> output.  The output is then claimed by the counterparty, losing that amount

Is there a mitigation, planned or implemented, against this exploit?


Regards,
ZmnSCPxj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Taro - Separating Taro concerns from LN token concerns

2022-05-03 Thread ZmnSCPxj via Lightning-dev
Good morning John,

Thank you for clarifying.

> Zman,
> I was not arguing for moving things from the edge, nor was I arguing to make 
> Taro a BOLT. Laolu is misinterpreting my message.
> I was explaining that the capabilities that would allow Taro to interact with 
> LN have no special relationship to Taro alone and should be designed to 
> accommodate any outside layer/network.
> I gave specific examples of requirements that LL is portraying as Taro Layer 
> design, that are really just new features for LN nodes that do not need to be 
> network/layer-specific:
> - Making LN nodes aware of assets on other networks- Establishing commitments 
> for (atomic) swapping for payments/routing- Supporting the ability to 
> exchange and advertise exchange rates for asset pairs- Supporting other 
> multi-asset routes when considering routing paths, bridging nodes with 
> alternate assets
> I don't care whether this is framed as BOLT or BLIP content, as in the end 
> each implementation will do what it needs to stay relevant in the market. I 
> care that this is framed and designed correctly, so we aren't locked into one 
> specific outside layer. You could argue the degree to which the above 
> features need to exist in the network, and whether to restrict such features 
> to the "edge," but my point is that an LN node that wants to be aware of an 
> outside network, and extra assets in addition to Bitcoin, will need such 
> features, and such features are not Taro-specific.

My understanding here of "the edge" vs "the core" is that the core is 
responsible for multi-hop routes and advertisements for channels.
Thus the below:

> - Supporting the ability to exchange and advertise exchange rates for asset 
> pairs
> - Supporting other multi-asset routes when considering routing paths, 
> bridging nodes with alternate assets

... would be considered part of "the core".

Notwithstanding the previously linked objection against a multi-asset Lightning 
Network, we can discuss these as two topics:

* Advertising exchange rates.
* Routing between channels of different asset types.

### Advertising Exchange Rates

Without changing the BOLT protocol, we can define a particular odd featurebit 
that cross-asset exchanges can set.
Then, odd-numbered messages can be defined, such that I can ask that node:

* What assets it has on what channels.
* Exchange rates of each asset to Bitcoin in msats (to serve as a common 
exchange rate to allow conversion from any one asset to any other asset, 
specifying only N exchange rates instead of N^2).
  * We also need to spec out any rounding algorithm, in order to have the same 
calculation across implementations.

BOLT is flexible enough that this does not need to be "blessed" until more than 
one LN implementation agrees on the new spec.
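The N-rates-instead-of-N^2 idea above can be sketched as follows. The rate 
values and the floor-division rounding are arbitrary assumptions for 
illustration; as noted above, the actual rounding algorithm would need to be 
specified so that all implementations compute the same result:

```python
# Each asset advertises a single rate against msat; any pair rate is then
# derived through msat as the common denominator (N advertised rates give
# all N^2 conversions).

RATES_MSAT = {            # msat per smallest unit of each asset (assumed)
    "assetA": 150,
    "assetB": 600,
}

def convert(amount: int, src: str, dst: str) -> int:
    """Convert src units to dst units via msat, rounding down."""
    msat = amount * RATES_MSAT[src]
    return msat // RATES_MSAT[dst]

print(convert(1000, "assetA", "assetB"))  # 1000 * 150 // 600 = 250
```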

### Routing Between Channels Of Different Asset Types

I was the one who first suggested dropping the `realm` byte.

Originally, `realm` was a 1-byte identifier for the asset type.

However, I pointed out that `realm` was simultaneously too large and too small.

* Too Large: We needed a byte in order to allow the new "TLV" thing to be used 
in routing onions, so that we could specify how many sections the TLV thing 
would take up, and we had already taken up all the space in a typical IP packet 
for the onion.
* Too Small: If multi-asset actually materializes, it is hard to imagine that 
there would be only 255 of them (`realm = 0` was already for Bitcoin, so there 
were only 255 possible identifiers left).

The idea in my mind basically was that instead of using the `realm` byte for 
identifying asset, we would instead add a new type for TLV, which would have 20 
bytes.
These 20 bytes would be, say, the RIPEMD160 of the SHA256 of the name of the 
asset.

Odd TLV types are ignored, but individual onion layers are targeted to specific 
nodes anyway, so it should be safe to use an even TLV type instead for this.
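A sketch of such a record, with a simplified TLV framing (single-byte type and 
length only) and a placeholder even type number; neither the type `40` nor the 
exact hash construction is spec'd anywhere, they are assumptions here:

```python
# Encode a 20-byte asset identifier as a TLV record with an even type,
# which under the "it's OK to be odd" rule makes the record mandatory
# for the node that decodes that onion layer.
import hashlib

ASSET_TLV_TYPE = 40  # placeholder even type

def asset_id(name: bytes) -> bytes:
    # RIPEMD160(SHA256(name)) as suggested above; fall back to truncated
    # SHA256 if this OpenSSL build has RIPEMD160 disabled.
    h = hashlib.sha256(name).digest()
    try:
        return hashlib.new("ripemd160", h).digest()
    except ValueError:
        return hashlib.sha256(h).digest()[:20]

def tlv_record(t: int, value: bytes) -> bytes:
    assert t < 0xFD and len(value) < 0xFD  # simplified BigSize encoding
    return bytes([t, len(value)]) + value

rec = tlv_record(ASSET_TLV_TYPE, asset_id(b"example-asset"))
print(len(rec))  # 22: 2 bytes of type+length framing plus the 20-byte id
```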

--

Again, note that this is a change in "the core" (and thus, pedantically, you 
*are* arguing for moving it from the edge, if you want these two items you 
specified).
I personally think it dubious to consider, for the reason that I already linked 
to in the previous reply, but in any case, it is indeed possible to do.

Generally, the path towards updating the BOLT is for at least one 
implementation to actually implement this, then convince at least one other 
implementation that this makes sense (possibly via this mailing list), and 
*then* maybe you have a chance of it getting into the BOLT spec.
You may find it more useful to e.g. hire a freelancer to work on this for 
`lnd` and get it merged.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Taro - Separating Taro concerns from LN token concerns

2022-05-02 Thread ZmnSCPxj via Lightning-dev
Good morning John, and Laolu,

> > but instead the requirement to add several feature concepts to LN that
> > would allow tokens to interact with LN nodes and LN routing:
>
> From this list of items, I gather that your vision is actually pretty
> different from ours. Rather than update the core network to understand the
> existence of the various Taro assets, instead we plan on leaving the core
> protocol essentially unchanged, with the addition of new TLV extensions to
> allow the edges to be aware of and interact w/ the Taro assets. As an
> example, we wouldn't need to do anything like advertise exchange rates in
> the core network over the existing gossip protocol (which doesn't seem like
> the best idea in any case given how quickly they can change and the existing
> challenges we have today in ensuring speedy update propagation).

Adding on to this, the American Call Option problem that arises when using 
H/PTLCs: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html

The above objection seems to be one reason for proposing multi-asset "on the 
edge" rather than have it widely deployed in the published Lightning Network.

Regards,
ZmnSCPxj


Re: [Lightning-dev] lightning channels, stablecoins and fifty shades of privacy

2022-04-05 Thread ZmnSCPxj via Lightning-dev
Good morning pushd,

> > Source routing means that Boltz Exchange can report your onchain address, 
> > but cannot correlate it with your published node.
>
> I used boltz onion link: 
> http://tboltzzrsoc3npe6sydcrh37mtnfhnbrilqi45nao6cgc6dr7n2eo3id.onion however 
> I still need to trust boltz that no logs are saved for swaps. Maybe running 
> own boltz backend can be helpful.

Even if Boltz keeps a record of your onchain address forever, all it knows is 
that you had some Lightning funds that you moved onchain.
It does not know that your money was originally from some incident, which is 
the whole point of source routing.
Indeed, using an onion link also means that Boltz is unable to correlate your 
onchain address with your IP address, *and* if you have multiple Boltz swaps 
(which you probably need if you have 1000 BTC to clean) then Boltz cannot 
correlate your multiple swaps with each other (though you might want to do 
something like imitate how `clboss` formats the JSON request, to improve your 
anonymity set).


Regards,
ZmnSCPxj


Re: [Lightning-dev] lightning channels, stablecoins and fifty shades of privacy

2022-04-04 Thread ZmnSCPxj via Lightning-dev
Good morning pushd,

> Good morning,
>
> Things that affect privacy particularly when large sums of money are involved 
> in bitcoin:
>
> Liquidity, Anonymity set, Amounts, Type of addresses/scripts, Block, Locktime 
> and Version
>
> I have left out things that aren't part of bitcoin protocol or blockchain 
> like KYC. It is difficult for users to move large sums of BTC without being 
> observed because bitcoin does not have confidential transactions to hide 
> amounts. Coinjoin implementations have their own issues, trade-offs, some 
> might even censor transactions and big amounts will still be a problem. 
> Coinswap might be an alternative in future however I wanted to share one 
> solution that could be helpful in improving privacy.
>
> Synonym did first [stablecoin transaction][1] in a lightning channel using 
> Omni BOLT. Consider Alice starts a bitcoin project in which a lightning 
> channel is used for assets like stablecoin. Bob wants to use 1000 BTC linked 
> with an incident. He opens channels with Alice, gets stablecoin which can be 
> used in any project that supports Omni BOLT assets.
>
> Questions:
>
> What is the lightning channel capacity when using Omni BOLT?
>
> What else can be improved in this setup? Anything else that I maybe missing?
>
> I added 'fifty shades of privacy' in subject because it was the first thing 
> that came to my mind when I look at privacy in bitcoin and lightning.
>
>   [1]: https://youtu.be/MfaqYeyake8

I am not quite sure that using OmniBOLT and a stablecoin (I assume you mean an 
asset ostensibly pegged to traditional currency) *improves* the privacy here.

Even if you have onchain confidentiality, your counterparty *has to* know how 
much of the funds are theirs, and by elimination, since there are only the two 
of you on that channel, the remainder of the funds is yours.
No amount of onchain confidential transactions can hide this fact.
And if the channel is unpublished, then the counterparty knows that any send 
from you is your own payment, and any receive to you is your own received funds.

Using a non-Bitcoin asset (whether pegged to a traditional currency or not) 
simply reduces the likelihood that you will be able to *use* the rest of the 
network, since most of the network only works with Bitcoin.
This reduction in liquidity translates to a reduction in anonymity set, meaning 
it is probably more likely that Alice will be running most of the nodes that 
*do* support your OmniBOLT-based asset and even if you try to route your funds 
elsewhere, if you use OmniBOLT, it is likely that Alice will be able to track 
where you moved your funds.


You are better off with this scheme if you want to "clean" 1000 BTC:

* Set up a published LN node with already-clean funds (or just clean a small 
amount of BTC using existing CoinJoin methods).
  * Leave it running for a while, or use your existing one.
  * Make all or at least most of its channels published!
  * Make sure it has at least *some* incoming capacity, use the swap-to-onchain 
trick or buy incoming liquidity.
* Set up a *throwaway* LN node using your dirty 1000 BTC.
* On your throwaway, create channel(s) to randomly-selected LN nodes.
* Send amounts from the throwaway to your published LN node.
* At a later time, send from your published LN node to e.g. Boltz Exchange 
offchain-to-onchain swap to get funds back onchain and get more incoming 
capacity to your published LN node.
* Repeat until you have drained all the funds from your throwaway node.
* Close the channels of your throwaway node and destroy all evidence of it 
having ever existed.

This provides privacy:

* By using an intermediate published node to temporarily hold your funds:
  * You disrupt timing correlation from the outgoing payments of your dubious 
throwaway node to the Boltz Exchange payment: first you pay to your published 
node, let the funds stew a bit, then send to the Boltz Exchange.
  * Published node has deniability: payments *to* that node could conceivably 
be destined elsewhere i.e. the published node can claim it was just forwarding 
to someone else.
* Source routing means that Boltz Exchange can report your onchain address, but 
cannot correlate it with your published node.


Regards,
ZmnSCPxj


Re: [Lightning-dev] Code for sub second runtime of piecewise linarization to quickly approximate the minimum convex cost flow problem (makes fast multi part payments with large amounts possible)

2022-03-16 Thread ZmnSCPxj via Lightning-dev
Good morning Rene, sorry for the lateness,

> Last but not least, please allow me to make a short remark on the (still to 
> me very surprisingly controversial) base fee discussion: For simplicity I did 
> not include any fee considerations to the published code (besides a fee 
> report on how expensive the computed flow is). However in practice we wish to 
> optimize at least for high reliability (via neg log success probabilities) 
> and cheap fees which in particular with the ppm is very easily possible to be 
> included to the piece wise linearized cost function. While for small base 
> fees it seems possible to encode the base fee into the first segment of the 
> piecewise linearized approximation I think the base fee will still be tricky 
> to be handled in practice (even with this approximation). For example if the 
> base fee is too high the "base fee adjusted" unit cost of the first segment 
> of the piecewise linearized problem might be higher than the unit cost of the 
> second segment which effectively would break the convexity. Thus I reiterate 
> my earlier point that from the perspective of the year long pursued goal of 
> optimizing for fees (which all Dijkstra based single path implementations do) 
> it seems to be best if the non linearity that is introduced by the base fee 
> would be removed at all. According to discussions with people who create 
> Lightning Network explorers (and according to my last check of gossip) about 
> 90% of channels have a base fee of 1 sat or lower and ~38% of all channels 
> already set their base fee away from the default value to 0 [16].

I think the issue against 0-base-fee is that, to a forwarding node operator, 
every HTLC in-flight is a potential cost center (there is always some 
probability that the channel has to be forced onchain with the HTLC in-flight, 
and every HTLC has to be published on the commitment tx), and that cost is 
*not* proportional to the value of the HTLC (because onchain fees do not work 
that way).
Thus, it seems reasonable for a forwarding node to decide to pass on that cost 
to their customers, the payers, in the form of base fees.

The response of customers would be to boycott non-0-base fees, e.g. by using a 
heuristic that overweighs non-0-base-fee channels and reduces usage of them 
(but if every forwarding node *has* a base fee, you still end up going through 
them anyway, which is why you merely overweigh such channels rather than 
eliminate them from the graph outright).
Then forwarding nodes will economically move towards 0-base fee.

So I think you would find it impossible to remove the base fee field, but you 
can strongly encourage 0-base-fee usage by integrating the base fee but 
overweighted.
(I think my previous formulation --- treat the base fee as a proportional fee 
--- would do some overweighing of the base fee.)
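One way to realize the "integrate but overweigh" idea is to keep the 
per-channel cost linear in the amount and simply count the base fee several 
times. The penalty factor `k` below is an arbitrary knob, not a recommendation:

```python
# Sketch of an overweighted base fee in a Dijkstra-style channel cost.
# Overweighting steers flows toward 0-base-fee channels without removing
# base-fee channels from the graph.

def channel_cost_msat(amount_msat: int, base_fee_msat: int, ppm: int,
                      k: float = 3.0) -> float:
    proportional = amount_msat * ppm / 1_000_000
    return proportional + k * base_fee_msat  # base fee counted k times

zero_base = channel_cost_msat(100_000_000, 0, 1000)     # 100000.0
with_base = channel_cost_msat(100_000_000, 1000, 1000)  # 103000.0
print(zero_base, with_base)
```

Note this only reweights a single-path cost; it does not by itself resolve the 
convexity problem for the min-cost-flow formulation that Rene describes, where 
a fixed per-channel charge breaks the piecewise-linear approximation.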

Which reminds me, I have not gotten around to make a 0-base-fee flag for 
`clboss`, haha.
And I might need to figure out a learning algorithm that splits base and 
proportional fees as well, *sigh*.

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] A Comparison Of LN and Drivechain Security In The Presence Of 51% Attackers

2022-02-25 Thread ZmnSCPxj via Lightning-dev


Good morning Paul,


> I don't think I can stop people from being ignorant about Drivechain. But I 
> can at least allow the Drivechain-knowledgable to identify each other.
>
> So here below, I present a little "quiz". If you can answer all of these 
> questions, then you basically understand Drivechain:
>
> 0. We could change DC to make miner-theft impossible, by making it a layer1 
> consensus rule that miners never steal. Why is this cure worse than the 
> disease?

Now miners are forced to look at all sideblocks, not optionally do so if it is 
profitable for them.

> 1. If 100% hashrate wanted to steal coins from a DC sidechain *as quickly as 
> possible*, how long would this take (in blocks)?

13,150 (I think this is how you changed it after feedback from this list, I 
think I remember it was ~3000 before or thereabouts.)

> 2. Per sidechain per year (ie, per 52560 blocks), how many DC withdrawals can 
> take place (maximum)? How many can be attempted?
>  (Ie, how does the 'train track metaphor' work, from ~1h5m in the 
> "Overview and Misconceptions" video)?

I hate watching videos, I can read faster than anyone can talk (except maybe 
Laolu, he speaks faster than I can process, never mind read).

~4 times (assuming 52560 blocks per year, which may vary due to new miners, 
hashrate drops, etc.)

> 3. Only two types of people should ever be using the DC withdrawal system at 
> all.
>   3a. Which two?

a.  Miners destroying the sidechain because the sidechain is no longer viable.
b.  Aggregators of sidechain-to-minechain transfers and large whales.

>   3b. How is everyone else, expected to move their coins from chain to chain?

Cross-system atomic swaps.
(I use "System" here since the same mechanism works for Lightning channels, and 
channels are not blockchains.)

>   3c. (Obviously, this improves UX.) But why does it also improve security?

Drivechain-based pegged transfers are aggregates of many smaller transfers and 
thus every transfer out from the sidechain contributes its "fee" to the 
security of the peg.

> --
> 4. What do the parameters b and m stand for (in the DC security model)?

m is how much people want to kill a sidechain, 0 = everybody would be sad if it 
died and would rather burn all their BTC forever than continue living, 1 = do 
not care, > 1 people want to actively kill the sidechain.

b is how much profit a mainchain miner expects from supporting a sidechain (do 
not remember the unit though).
Something like u = a + b where a is the mainchain, b is the sidechain, u is the 
total profit.
Or fees?  Something like that.

> 5. How can m possibly be above 1? Give an example of a sidechain-attribute 
> which may cause this situation to arise.

The sidechain is a total scam.
A bug may be found in the sidechain that completely negates any security it 
might have, thus removing any desire to protect the sidechain and potentially 
make users want to destroy it completely rather than let it continue.
People end up hating sidechains completely.

> 6. For which range of m, is DC designed to deter sc-theft?

m <= 1

> 7. If DC could be changed to magically deter theft across all ranges of m, 
> why would that be bad for sidechain users in general?

Because the sidechain would already be part of mainchain consensus.

> --
> 8. If imminent victims of a DC-based theft, used a mainchain UASF to prohibit 
> the future theft-withdrawal, then how would this affect non-DC users?

If the non-DC users do not care, then they are unaffected.
If the non-DC users want to actively kill the sidechain, they will 
counterattack with an opposite UASF and we have a chainsplit and sadness and 
mutual destruction and death and a new subreddit.

> 9. In what ways might the BTC network one day become uncompetitive? And how 
> is this different from caring about a sidechain's m and b?

If it does not enable scaling technology fast enough to actually be able to 
enable hyperbitcoinization.

Sidechains are not a scaling solution, so caring about m and b is different 
because your focus is not on scaling.

> --
> 10. If DC were successful, Altcoin-investors would be harmed. Two 
> Maximalist-groups would also be slightly harmed -- who are these?

Dunno!


Regards,
ZmnSCPxj


[Lightning-dev] A Comparison Of LN and Drivechain Security In The Presence Of 51% Attackers

2022-02-24 Thread ZmnSCPxj via Lightning-dev
Good morning lightning-dev and bitcoin-dev,

Recently, some dumb idiot, desperate to prove that recursive covenants are 
somehow a Bad Thing (TM), [necromanced Drivechains][0], which actually caused 
Paul Sztorc to [revive][1] and make the following statement:

> As is well known, it is easy for 51% hashrate to double-spend in the LN, by 
> censoring 'justice transactions'. Moreover, miners seem likely to evade 
> retribution if they do this, as they can restrain the scale, timing, victims, 
> circumstances etc of the attack.

Let me state that, as a supposed expert developer of the Lightning Network 
(despite the fact that I probably spend more time ranting on the lists than 
actually doing something useful like improve C-Lightning or CLBOSS), the above 
statement is unequivocally ***true***.

However, I believe that the following important points must be raised:

* A 51% miner can only attack LN channels it is a participant in.
* A 51% miner can simultaneously attack all Drivechain-based sidechains and 
steal all of their funds.

In order for "justice transactions" to come into play, an attacker has to have 
an old state of a channel.
And only the channel participants have access to old state (modulo bugs and 
operator error on not being careful of toxic waste, but those are arguably as 
out of scope as operator error not keeping your privkey safe, or bugs that 
reveal your privkey).

If the 51% miner is not a participant on a channel, then it simply has no 
access to old state of the channel and cannot even *start* the above theft 
attack.
If the first step fails, then the fact that the 51% miner can perform the 
second step is immaterial.

Now, this is not a perfect protection!
We should note that miners are anonymous and it is possible that there is 
already a 51% miner, and that that 51% miner secretly owns almost all nodes on 
the LN.
However, even this also means there is some probability that, if you picked a 
node at random to make a channel with, then there is some probability that it 
is *not* a 51% miner and you are *still* safe from the 51% miner.

Thus, LN usage is safer than Drivechain usage.
On LN, if you make a channel to some LN node, there is a probability that you 
make a channel with a non-51%-miner, and if you luck into that, your funds are 
still safe from the above theft attack, because the 51% miner cannot *start* 
the attack by getting old state and publishing it onchain.
On Drivechain, if you put your funds in *any* sidechain, a 51% miner has strong 
incentive to attack all sidechains and steal all the funds simultaneously.

--

Now, suppose we have:

* a 51% miner
* Alice
* Bob

And that 51% miner != Alice, Alice != Bob, and Bob != 51% miner.

We could ask: Suppose Alice wants to attack Bob, could Alice somehow convince 
51% miner to help it steal from Bob?

First, we should observe that *all* economically-rational actors have a *time 
preference*.
That is, N sats now is better than N sats tomorrow.
In particular, both the 51% miner *and* Alice the attacker have this time 
preference, as does victim Bob.

We can observe that in order for Alice to benefit from the theft, it has to 
*wait out* the `OP_CSV` before it can finalize the theft.
Alice can offer fees to the miner only after the `OP_CSV` delay.

However, Bob can offer fees *right now* on the justice transaction.
And the 51% miner, being economically rational, would prefer the *right now* 
funds to the *maybe later* promise by Alice.

Indeed, if Bob offered a justice transaction paying the channel amount minus 1 
satoshi (i.e. Bob keeps 1 satoshi), then Alice has to beat that by offering the 
entire channel amount to the 51% miner.
But the 51% miner would then have to wait out the `OP_CSV` delay before it gets 
the funds.
Its time preference may be large enough (if the `OP_CSV` delay is big enough) 
that it would rather side with Bob, who can pay channel amount - 1 right now, 
than Alice who promises to pay channel amount later.
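The time-preference comparison can be made concrete with assumed numbers 
(channel amount, delay, and discount rate below are all illustrative):

```python
# Bob offers C - 1 sats now; Alice offers C sats only after the OP_CSV delay.
# With any per-block discount rate d > 0, the miner prefers Bob whenever
# (C - 1) > C / (1 + d)**delay_blocks.

def present_value(amount_sat: int, delay_blocks: int, d: float) -> float:
    return amount_sat / (1 + d) ** delay_blocks

C = 1_000_000   # channel amount in sats (assumed)
delay = 144     # OP_CSV delay of ~1 day (assumed)
d = 0.0001      # miner's per-block discount rate (assumed)

bob_now = C - 1
alice_later = present_value(C, delay, d)
print(bob_now > alice_later)  # True: the miner sides with Bob
```

Even a tiny discount rate makes Alice's delayed promise worth strictly less 
than Bob's immediate justice-transaction fee.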

"But Zeeman, Alice could offer to pay now from some onchain funds Alice has, 
and Alice can recoup the losses later!"
But remember, Alice *also* has a time preference!
Let us consider the case where Alice promises to bribe 51% miner *now*, on the 
promise that 51% miner will block the Bob justice transaction and *then* Alice 
gets to enjoy the entire channel amount later.
Bob can counter by offering channel amount - 1 right now on the justice 
transaction.
The only way for Alice to beat that is to offer channel amount right now, in 
which case 51% miner will now side with Alice.

But what happens to Alice in that case?
It loses out on channel amount right now, and then has to wait `OP_CSV` delay, 
to get the exact same amount later!
It gets no benefit, so this is not even an investment.
It is just enforced HODLing, but Alice can do that using `OP_CLTV` already.

Worse, Alice has to trust that 51% miner will indeed block the justice 
transaction.
But if 51% miner is unscrupulous, it could do:

* 

Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread ZmnSCPxj via Lightning-dev
Good morning Jeremy,

> opt-in or explicit tagging of fee account is a bad design IMO.
>
> As pointed out by James O'Beirne in the other email, having an explicit key 
> required means you have to pre-plan suppose you're building a vault meant 
> to distribute funds over many years, do you really want a *specific* 
> precommitted key you have to maintain? What happens to your ability to bump 
> should it be compromised (which may be more likely if it's intended to be a 
> hot-wallet function for bumping).
>
> Furthermore, it's quite often the case that someone might do a transaction 
> that pays you that is low fee that you want to bump but they choose to 
> opt-out... then what? It's better that you should always be able to fee bump.

Good point.

For the latter case, CPFP would work and already exists.
**Unless** you are doing something complicated and offchain-y and involves 
relative locktimes, of course.


One could point out as well that Peter Todd gave just a single example, 
OpenTimestamps, for this, and OpenTimestamps is not the only user of the 
Bitcoin blockchain.

So we can consider: who benefits and who suffers, and does the benefit to the 
former outweigh the detriment of the latter?


It seems to me that the necromancing attack mostly can *only* target users of 
RBF that might want to *additionally* add outputs (or in the case of OTS, 
commitments) when RBF-ing.
For example, a large onchain-paying entity might lowball an onchain transaction 
for a few withdrawals, then as more withdrawals come in, bump up their feerate 
and add more withdrawals to the RBF-ed transaction.
Such an entity might prefer to confirm the latest RBF-ed transaction, as if an 
earlier transaction (which does not include some other withdrawals requested 
later) is necromanced, they would need to make an *entire* *other* transaction 
(which may be costlier!) to fulfill pending withdrawal requests.

However, to my knowledge, there is no actual entity that *currently* acts this 
way (I do have some sketches for a wallet that can support this behavior, but 
it gets *complicated* due to having to keep track of reorgs as well... sigh).

In particular, I expect that many users do not really make outgoing payments 
often enough that they would actually benefit from such a wallet feature.
Instead, they will generally make one payment at a time, or plan ahead and pay 
several in a batch at once, and even if they RBF, they would just keep the same 
set of outputs and just reduce their change output.
For such low-scale users, a rando third-party necromancing their old 
transactions could only make them happy, thus this nuisance attack cannot be 
executed.

We could also point out that this is really a nuisance attack and not an 
economic-theft attack.
The attacker cannot gain, and can only pay in order to impose costs on somebody 
else.
Rationally, the only winning move is not to play.


So --- has anyone actually implemented a Bitcoin wallet that has such a feature 
(i.e. make a lowball send transaction now, then you can add another send later 
and if the previous send transaction is unconfirmed, RBF it with a new 
transaction that has the previous send and the current send) and if so, can you 
open-source the code and show me?
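I am not aware of such a wallet either, but the intended behavior can be 
modeled in a few lines (a hypothetical sketch only, ignoring signing, change 
outputs, fee estimation, and the reorg tracking mentioned above):

```python
# Model of the batching behavior: each new withdrawal replaces the still-
# unconfirmed transaction with one paying *all* pending withdrawals, at a
# higher feerate as BIP 125 replacement requires.

class BatchingWallet:
    def __init__(self, start_feerate: int, bump: int):
        self.pending = []             # (address, amount) withdrawals
        self.feerate = start_feerate  # sat/vB of the current unconfirmed tx
        self.bump = bump              # feerate increment per replacement
        self.current_tx = None

    def request_withdrawal(self, address: str, amount: int):
        self.pending.append((address, amount))
        if self.current_tx is not None:
            self.feerate += self.bump  # replacement must pay more
        self.current_tx = {"outputs": list(self.pending),
                           "feerate": self.feerate}

    def on_confirmed(self):
        self.pending.clear()
        self.current_tx = None

w = BatchingWallet(start_feerate=2, bump=2)
w.request_withdrawal("addr1", 50_000)
w.request_withdrawal("addr2", 75_000)
print(w.current_tx["feerate"], len(w.current_tx["outputs"]))  # 4 2
```

The necromancy risk discussed above appears exactly here: if the first, 
single-output transaction is mined after the second is built, "addr2" must be 
paid by an entirely new transaction.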


Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-20 Thread ZmnSCPxj via Lightning-dev
Good morning DA,


> Agreed, you cannot rely on a replacement transaction would somehow
> invalidate a previous version of it, it has been spoken into the gossip
> and exists there in mempools somewhere if it does, there is no guarantee
> that anyone has ever heard of the replacement transaction as there is no
> consensus about either the previous version of the transaction or its
> replacement until one of them is mined and the block accepted. -DA.

As I understand from the followup from Peter, the point is not "this should 
never happen", rather the point is "this should not happen *more often*."

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-19 Thread ZmnSCPxj via Lightning-dev
Good morning Peter and Jeremy,

> Good morning Peter and Jeremy,
>
> > On Sat, Feb 19, 2022 at 05:20:19PM +, darosior wrote:
> >
> > > > Necromancing might be a reasonable name for attacks that work by 
> > > > getting an
> > > > out-of-date version of a tx mined.
> > >
> > > It's not an "attack"? There is no such thing as an out-of-date 
> > > transaction, if
> > > you signed and broadcasted it in the first place. You can't rely on the 
> > > fact that
> > > a replacement transaction would somehow invalidate a previous version of 
> > > it.
> >
> > Anyone on the internet can send you a packet; a secure system must be able 
> > to
> > receive any packet without being compromised. Yet we still call packet 
> > floods
> > as DoS attacks. And internet standards are careful to avoid making packet
> > flooding cheaper than it currently is.
> > The same principle applies here: in many situations transactions do become
> > out of date, in the sense that you would rather a different transaction be
> > mined instead, and the out-of-date tx being mined is expensive and annoying.
> > While you have to account for the possibility of any transaction you have
> > signed being mined, Bitcoin standards should avoid making unwanted 
> > necromancy a
> > cheap and easy attack.
>
> This seems to me to restrict the only multiparty feebumping method to be some 
> form of per-participant anchor outputs a la Lightning anchor commitments.
>
> Note that multiparty RBF is unreliable.
> While the initial multiparty signing of a transaction may succeed, at a later 
> time with the transaction unconfirmed, one or more of the participants may 
> regret cooperating in the initial signing and decide not to cooperate with 
> the RBF.
> Or for that matter, a participant may, through complete accident, go offline.
>
> Anchor outputs can be keyed to only a specific participant, so feebumping of 
> a particular transaction can only be done by participants who have been 
> authorized to feebump.
>
> Perhaps fee accounts can include some kind of 
> proof-this-transaction-authorizes-this-fee-account?

For example:

* We reserve one Tapscript version for fee-account-authorization.
  * Validation of this tapscript version always fails.
* If a transaction wants to authorize a fee account, it should have at least 
one Taproot output.
  * This Taproot output must have a tapleaf with the fee-account-authorization 
Tapscript version.
* In order for a fee account to feebump a transaction, it must also present the 
Taproot MAST path to the fee-account-authorization tapleaf of one output of 
that transaction.

This gives similar functionality to anchor outputs, without requiring an 
explicit output on the initial transaction, saving blockspace.
In particular, as the number of participants grows, the number of anchor 
outputs must grow linearly with the number of participants authorized to 
feebump.
Only when the feerate turns out to be too low do we need to expose the 
authorization.
Revelation of the fee-account-authorization is O(log N), and if only one 
participant decides to feebump, then only a single O(log N) MAST treepath is 
published.
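The authorization check described above can be sketched as a Merkle-path 
verification. This is a toy model, not BIP 341: the tagged hashes, the actual 
reserved leaf version value, and the serialization are all simplified 
assumptions here.

```python
# Illustrative sketch: a fee account presents a MAST path showing that
# one output of the target transaction commits to a tapleaf with the
# reserved fee-account-authorization leaf version. Real Taproot uses
# tagged hashes (BIP 341); plain SHA-256 is used here for brevity.
import hashlib

FEE_ACCT_LEAF_VERSION = 0xfe  # assumed reserved version; always fails on spend

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def leaf_hash(version: int, script: bytes) -> bytes:
    return h(bytes([version]), script)

def merkle_root_from_path(leaf: bytes, path: list) -> bytes:
    node = leaf
    for sibling in path:
        node = h(*sorted([node, sibling]))  # lexicographic, as in BIP 341
    return node

def authorizes_fee_account(output_merkle_root: bytes,
                           fee_account_script: bytes,
                           path: list) -> bool:
    leaf = leaf_hash(FEE_ACCT_LEAF_VERSION, fee_account_script)
    return merkle_root_from_path(leaf, path) == output_merkle_root

# Toy tree: the authorization leaf next to one ordinary spending leaf.
acct_script = b"fee-account-pubkey"
auth_leaf = leaf_hash(FEE_ACCT_LEAF_VERSION, acct_script)
other_leaf = leaf_hash(0xc0, b"ordinary spend script")
root = h(*sorted([auth_leaf, other_leaf]))
assert authorizes_fee_account(root, acct_script, [other_leaf])
```

The presented `path` is what grows as O(log N) in the number of tapleaves, 
matching the revelation cost noted above.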

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts

2022-02-19 Thread ZmnSCPxj via Lightning-dev
Good morning Peter and Jeremy,

> On Sat, Feb 19, 2022 at 05:20:19PM +, darosior wrote:
>
> > > Necromancing might be a reasonable name for attacks that work by getting 
> > > an
> > > out-of-date version of a tx mined.
> >
> > It's not an "attack"? There is no such thing as an out-of-date transaction, 
> > if
> > you signed and broadcasted it in the first place. You can't rely on the 
> > fact that
> > a replacement transaction would somehow invalidate a previous version of it.
>
> Anyone on the internet can send you a packet; a secure system must be able to
> receive any packet without being compromised. Yet we still call packet floods
> as DoS attacks. And internet standards are careful to avoid making packet
> flooding cheaper than it currently is.
>
> The same principle applies here: in many situations transactions do become
> out of date, in the sense that you would rather a different transaction be
> mined instead, and the out-of-date tx being mined is expensive and annoying.
> While you have to account for the possibility of any transaction you have
> signed being mined, Bitcoin standards should avoid making unwanted necromancy 
> a
> cheap and easy attack.
>

This seems to me to restrict the only multiparty feebumping method to be some 
form of per-participant anchor outputs a la Lightning anchor commitments.

Note that multiparty RBF is unreliable.
While the initial multiparty signing of a transaction may succeed, at a later 
time with the transaction unconfirmed, one or more of the participants may 
regret cooperating in the initial signing and decide not to cooperate with the 
RBF.
Or for that matter, a participant may, through complete accident, go offline.

Anchor outputs can be keyed to only a specific participant, so feebumping of a 
particular transaction can only be done by participants who have been 
authorized to feebump.

Perhaps fee accounts can include some kind of 
proof-this-transaction-authorizes-this-fee-account?

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] A suggestion to periodically destroy (or remove to secondary storage for Archiving reasons) dust, Non-standard UTXOs, and also detected burn

2022-02-17 Thread ZmnSCPxj via Lightning-dev
Good morning shymaa,

> I just want to add an alarming info to this thread...
>
> There are at least 5.7m UTXOs≤1000 Sat (~7%), 
> 8.04 m ≤1$ (10%), 
> 13.5m ≤ 0.0001BTC (17%)
>
> It seems that bitInfoCharts took my enquiry seriously and added a main link 
> for dust analysis:
> https://bitinfocharts.com/top-100-dustiest-bitcoin-addresses.html
> Here, you can see just the first address contains more than 1.7m dust UTXOs
> (ins-outs =1,712,706 with a few real UTXOs holding the bulk of 415 BTC) 
> https://bitinfocharts.com/bitcoin/address/1HckjUpRGcrrRAtFaaCAUaGjsPx9oYmLaZ
>
> »
>  That's alarming isn't it?, is it due to the lightning networks protocol or 
> could be some other weird activity going on?
> .

I believe some blockchain tracking analysts will "dust" addresses that were 
spent from (give them 546 sats), in the hope that lousy wallets will use the 
new 546-sat UTXO from the same address but spending to a different address and 
combining with *other* inputs with new addresses, thus allowing them to grow 
their datasets about fund ownership.

Indeed JoinMarket has a policy to ignore-by-default UTXOs that pay to an 
address it already spent from, precisely due to this (apparently common, since 
my JoinMarket maker got dusted a number of times already) practice.

I am personally unsure of how common this is but it seems likely that you can 
eliminate this effect by removing outputs of exactly 546 sats to reused 
addresses.
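The suggested filter is simple to express. A minimal sketch (the data shapes 
are illustrative, not any real wallet's or chain-analysis tool's format):

```python
# Drop outputs of exactly 546 sats paying to an address that was
# already spent from: those are likely analyst "dusting", per the
# practice described above, rather than real payments.
DUST_LIMIT = 546  # standard P2PKH dust threshold, in sats

def filter_dusting(utxos, spent_from_addresses):
    """utxos: iterable of (address, amount_sats) pairs."""
    return [
        (addr, amount)
        for addr, amount in utxos
        if not (amount == DUST_LIMIT and addr in spent_from_addresses)
    ]

utxos = [("1Reused", 546), ("1Reused", 415_000), ("1Fresh", 546)]
kept = filter_dusting(utxos, spent_from_addresses={"1Reused"})
# Only the exact-546 output to the reused address is removed.
assert kept == [("1Reused", 415_000), ("1Fresh", 546)]
```

This mirrors the JoinMarket ignore-by-default policy mentioned above, restricted 
to the exact-546-sat case.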

Regards,
ZmnSCPxj


[Lightning-dev] Channel Eviction From Channel Factories By New Covenant Operations

2022-02-17 Thread ZmnSCPxj via Lightning-dev
Channel Eviction From Channel Factories By New Covenant Operations
==================================================================

N-of-N channel factories have an important advantage compared to N-of-N
offchain CoinPools or statechains: even if one participant in the channel
factory is offline, payments can still occur within the channel factory
among online participants, because the channel factory has a layer where
funds are split up into 2-of-2 channels and the offline participant is
not a participant in most of those channels.
That is, channel factories have *graceful degradation*.

While CoinPools can adapt to this by falling back to using K-of-N, this
allows a quorum of K participants to outright steal the funds of the
remainder, whether the remaining participants are offline or online.
Additional mechanisms, such as reputation systems, may be constructed to
attempt to dissuade from such behavior, but "exit scams" are always
possible, where reputation is sacrificed for a large enough monetary
gain at the expense of those who tr\*sted in the reputation.
An N-of-N channel factory simply does not permit such theft as long as
offline participants come back online within some security parameter.

However, when a participant is offline, they are obviously unable to
fulfill or fail any HTLCs or PTLCs.
If a sizable HTLC or PTLC is about to time out, the entire construction
must be dropped onchain, as the blockchain is the only layer that can
actually enforce timeouts.
This leads to a massive increase in blockchain utilization.

However however, late in 2021, Towns proposed an `OP_TAPLEAFUPDATEVERIFY`
opcode.
This opcode was envisioned to support CoinPools, to allow unilateral
exit of any participant from the CoinPool without requiring that the
entire CoinPool be dropped onchain.

I have observed before that, except for relative locktimes, almost
anything that can be enforced by the blockchain layer can be hosted in
any offchain updatable cryptocurrency system, such as Decker-Wattenhofer,
Decker-Russell-Osuntokun, or Poon-Dryja.
Any such offchain updatable cryptocurrency system can simply drop to its
hosting system, until dropping reaches a layer that can actually enforce
whatever rule is necessary.
As channel factories are just a Decker-Wattenhofer or
Decker-Russell-Osuntokun that hosts multiple 2-participant offchain
updatable cryptocurrency systems ("channels"), channel factories can
also host an `OP_TAPLEAFUPDATEVERIFY`, as long as the base blockchain
layer enforces it.

Since `OP_TAPLEAFUPDATEVERIFY` can be used by CoinPools to allow
exit of a single participant without dropping the rest of the CoinPool,
we can use the same mechanism to allow eviction of a channel from channel
factories.
This allows HTLCs/PTLCs near timeout to be enforced onchain by dropping
only the channel hosting them onchain, while the remaining channels
continue to be hosted by a single onchain UTXO instead of individually
having their own UTXOs.
When the offline participant comes back online, the channel factory
participants can then perform another onchain 1-input-1-output
transaction to "revive" the channel factory and allow in-factory updates
of channels again.
Alternately the factory can continue to operate indefinitely in degraded
mode, with no in-factory updates of channels, but with in-channel payments
continuing (as per graceful degradation) and with only a single onchain
UTXO hosting them all onchain, still allowing individual closure or
eviction of channels.

Safely Evictable Channels
-------------------------

I expect that multiparticipant channel factories will be implemented
with Decker-Russell-Osuntokun rather than Decker-Wattenhofer.
While Decker-Wattenhofer allows more than two participants (unlike
Poon-Dryja, which due to its punitive nature is restricted to only
two participants), "unilateral" actions --- or more accurately,
actions that can be performed with only some but not all participants
online --- are very expensive and require a long sequence of
transactions, as well as multiple varying timeouts which make it
difficult to provide a "maximum amount of time offline" security
parameter.

Of course, Decker-Wattenhofer does not require anything more than
relative locktimes and `OP_CHECKSEQUENCEVERIFY`.
Decker-Russell-Osuntokun unfortunately requires that `SIGHASH_NOINPUT`,
or a functionality similar to it, be supported on the blockchain
layer.

The "default" design is that at the channel factory level, the
Decker-Russell-Osuntokun settlement transaction hosts multiple
outpoints that anchor individual 2-participant channels.

Rather than that, I propose that we use a Taproot output with
internal key being an N-of-N of all participants, and with
multiple leaves, each leaf representing one channel and having
the constraints:

* An `OP_TLUV` that requires that the first output be to the same
  address as the first input, except modified to remove this tapleaf
  branch, and with exactly the same internal key.

Re: [Lightning-dev] [RFC] Lightning gossip alternative

2022-02-17 Thread ZmnSCPxj via Lightning-dev
Good morning rusty,

If we are going to switch to a new gossip version, should we prepare now for 
published channels that are backed by channel factories?

Instead of a UTXO serving as a bond to allow advertisement of a *single* 
channel, allow it to advertise *multiple* channels.
This does not require us to flesh out the details of channel factories in the 
gossip protocol, especially post-Taproot --- we could simply require that a 
simple BIP-340 signature of the gossip message attesting to multiple channels 
is enough, and the details of the channel factories can be left to later 
protocol updates.


The reason for this is that for a set of N published nodes, there is an 
incentive to make as many channels as possible between pairs of nodes.
We expect that for N published nodes, all (N * (N - 1)) / 2 possible channels 
will be created, as that maximizes the expected fee return of the N published 
nodes.
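Quick arithmetic for the claim above, contrasting a fully-connected published 
graph with the N client-to-LSP channels of the unpublished case:

```python
# A fully-connected published graph of N nodes has N*(N-1)/2 channels,
# versus only N channels for N unpublished clients of a single LSP.
def published_channels(n: int) -> int:
    return n * (n - 1) // 2

assert published_channels(10) == 45     # vs 10 client->LSP channels
assert published_channels(100) == 4950  # vs 100 client->LSP channels
```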

Without the ability to gossip channel factories, channel factories can only be 
used for unpublished channels.
Due to not being available for routing, given an "LSP" and N clients, there is 
no incentive for the N clients to make direct channels to each other.
(In particular, one of the reasons given for unpublished channels is that the 
clients of an LSP may not have high onlineness, thus an unpublished channel 
would really only exist between a public LSP and a non-published client of that 
LSP.)
This means that for N clients we expect only N channels backed by the channel 
factory (and thus by the UTXO).

It seems to me to be a good idea to have as much of the public network as 
possible backed by as few UTXOs as possible, since the published network has 
far more channels for every N participants.

(as well, supporting channel factories for the public graph does not preclude 
the unpublished graph from using channel factories, so even if the unpublished 
graph turns out to be much larger than the published graph, reducing the UTXO 
set of the published graph does not prevent reducing the UTXO set of the 
unpublished graph anyway.)


Against this, we should note that it makes stuffing the public graph cheaper (a 
single UTXO can now justify the addition of multiple edges on the public 
routing graph), which translates to making it easier to increase the complexity 
of the public graph and thus increase the cost of pathfinding.


Thoughts?

Regards,
ZmnSCPxj


Re: [Lightning-dev] Split payments within one LN invoice

2021-12-17 Thread ZmnSCPxj via Lightning-dev
Good morning Ronan,

> If there is a payment to go from party A to be split between parties B & C, 
> is it possible that only one of B or C would need a plugin?
>
> If all receiving parties need a plugin that is quite limiting. Thanks, Ronan

Given N payees, only N - 1 need the plugin.

The last payee in a chain of payees issues a normal invoice (C-Lightning plugin 
not needed).
Then the previous payee takes in that invoice, and emits a new invoice, using 
the plugin.
This goes on until the first payee is reached.
The first payee then issues its invoice to the payer.

To follow your example, where A pays to both B and C:

* C issues a normal invoice (no plugin needed).
* C hands its invoice over to B.
* B receives the invoice from C and issues a plugin-provided command 
(`addtoinvoice`?), which creates another invoice
* B hands its invoice over to A.
* A pays the invoice (no plugin needed).

As another example, suppose we have you paying cdecker, jb55, and ZmnSCPxj.
Let us sort them in alphabetical order.

* ZmnSCPxj issues a normal invoice (no plugin needed).
* ZmnSCPxj hands its invoice over to jb55.
* jb55 issues a plugin-provided command, giving it the invoice from ZmnSCPxj 
and getting out a larger invoice.
* jb55 hands its invoice over to cdecker.
* cdecker issues a plugin-provided command, giving it the invoice from jb55 and 
getting out a larger invoice.
* cdecker hands over its invoice to Ronan.
* Ronan pays the invoice (no plugin needed).
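The chaining above can be sketched in a few lines. Note that `addtoinvoice` is 
the hypothetical plugin command from the example, modeled here as a plain 
function; the fee-budget handling is an assumption for illustration.

```python
# Each intermediate payee wraps the downstream invoice, adding its own
# payout plus a budget for the fees of forwarding to the downstream payee.
from dataclasses import dataclass

@dataclass
class Invoice:
    payee: str
    amount_msat: int

def addtoinvoice(downstream: Invoice, payee: str, payout_msat: int,
                 fee_budget_msat: int = 1_000) -> Invoice:
    return Invoice(payee,
                   downstream.amount_msat + payout_msat + fee_budget_msat)

# ZmnSCPxj <- jb55 <- cdecker <- Ronan (payer), as in the example.
inv = Invoice("ZmnSCPxj", 100_000)           # normal invoice, no plugin
inv = addtoinvoice(inv, "jb55", 100_000)     # plugin command
inv = addtoinvoice(inv, "cdecker", 100_000)  # plugin command
assert inv.payee == "cdecker"
assert inv.amount_msat == 302_000            # 3 payouts + 2 fee budgets
```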

Regards,
ZmnSCPxj


>
> On Fri, Dec 17, 2021 at 3:06 PM ZmnSCPxj  wrote:
>
> > Good morning cdecker,
> >
> > > I was looking into the docs [1] and stumbled over `createinvoice` which 
> > > does almost what you need. However it requires the preimage, and stores 
> > > the invoice in the database which you don't want.
> >
> > Indeed, that is precisely the issue, we need a `signfakeinvoice` command, 
> > as we cannot know at invoice creation time the preimage, and our invoice 
> > database currently assumes every invoice has a preimage known and recorded 
> > in the database.
> >
> > >
> > > However if you have access to the `hsm_secret` you could sign in the 
> > > plugin itself, completely sidestepping `lightningd`. Once you have that 
> > > it should be a couple of days work to get a PoC plugin for the 
> > > coordination and testing. From there it depends on how much polish you 
> > > want to apply and what other systems you want to embed it into.
> >
> > Well, the point of an `hsmd` is that it might be replaced with a driver to 
> > an actual hardware signing module (H S M).
> > The `lightningd`<->`hsmd` interface includes commands for invoice signing, 
> > and `signfakeinvoice` would essentially just expose that interface, so an 
> > HSM has to support that interface.
> > So a plugin cannot rely on `hsm_secret` existing, as the signer might not 
> > be emulated in software (i.e. what we do now) but be an actual hardware 
> > signer that does not keep the secret keys on the same disk.
> > This is the reason why we (well, I) created and exposed `getsharedsecret`, 
> > in theory a plugin could just read `hsm_secret`, but we want to consider a 
> > future where the HSM is truly a hardware module.
> >
> > Regards,
> > ZmnSCPxj




Re: [Lightning-dev] Split payments within one LN invoice

2021-12-17 Thread ZmnSCPxj via Lightning-dev
Good morning cdecker,

> I was looking into the docs [1] and stumbled over `createinvoice` which does 
> almost what you need. However it requires the preimage, and stores the 
> invoice in the database which you don't want.

Indeed, that is precisely the issue, we need a `signfakeinvoice` command, as we 
cannot know at invoice creation time the preimage, and our invoice database 
currently assumes every invoice has a preimage known and recorded in the 
database.

>
> However if you have access to the `hsm_secret` you could sign in the plugin 
> itself, completely sidestepping `lightningd`. Once you have that it should be 
> a couple of days work to get a PoC plugin for the coordination and testing. 
> From there it depends on how much polish you want to apply and what other 
> systems you want to embed it into.

Well, the point of an `hsmd` is that it might be replaced with a driver to an 
actual hardware signing module (H S M).
The `lightningd`<->`hsmd` interface includes commands for invoice signing, and 
`signfakeinvoice` would essentially just expose that interface, so an HSM has 
to support that interface.
So a plugin cannot rely on `hsm_secret` existing, as the signer might not be 
emulated in software (i.e. what we do now) but be an actual hardware signer 
that does not keep the secret keys on the same disk.
This is the reason why we (well, I) created and exposed `getsharedsecret`, in 
theory a plugin could just read `hsm_secret`, but we want to consider a future 
where the HSM is truly a hardware module.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Split payments within one LN invoice

2021-12-16 Thread ZmnSCPxj via Lightning-dev
Good morning William,


> Has anyone coded up a 'Poor man's rendez-vous' demo yet? How hard would
> it be, could it be done with a clightning plugin perhaps?

Probably not *yet*; it needs each intermediate payee (i.e. the one that is not 
the last one) to sign an invoice for which it does not know the preimage.
Maybe call such a command `signfakeinvoice`.

However, if a command to do the above is implemented (it would have to generate 
and sign the invoice, but not insert it into the database at all), then 
intermediate payees can use `htlc_accepted` hook for the "rendez-vous".

So to generate the invoice:

* Arrange the payees in some agreed fixed order.
* Last payee generates a normal invoice.
* From last payee to second, each one:
  * Passes its invoice to the previous payee.
  * The previous payee then creates its own signed invoice with 
`signfakeinvoice` to itself, adding its payout plus a fee budget, as well as 
adding its own delay budget.
  * The previous payee plugin stores the next-payee invoice and the details of 
its own invoice to db, such as by `datastore` command.
* The first payee sends the sender the invoice.

On payment:

* The sender sends the payment to the first hop.
* From first payee to second-to-last:
  * Triggers `htlc_accepted` hook, and plugin checks if the incoming payment 
has a hash that is in this scheme stored in the database.
  * The plugin gathers `htlc_accepted` hook invocations until they sum up to 
the expected amount (this handles multipath between payees).
  * The plugin marks that it has gathered all `htlc_accepted` hooks for that 
hash in durable storage a.k.a. `datastore` (this handles a race condition where 
the plugin is able to respond to some `htlc_accepted` hooks, but the node is 
restarted before all of them were able to be recorded by C-Lightning in its own 
database --- this makes the plugin skip the "gathering" step above, once it has 
already gathered them all before).
  * The plugin checks if there is already an outgoing payment for that hash 
(this handles the case where our node gets restarted in the meantime --- 
C-Lightning will reissue `htlc_accepted` on startup)
* If the outgoing payment exists and is pending, wait for it to resolve to 
either success or failure.
* If the outgoing payment exists and succeeded, resolve all the gathered 
`htlc_accepted` hooks.
* If the outgoing payment exists and failed, fail all the gathered 
`htlc_accepted` hooks.
* Otherwise, perform a `pay`, giving `maxfeepercent` and `maxdelay` based 
on its fee budget and delay budget.
  When the `pay` succeeds or fails, propagate it to the gathered 
`htlc_accepted` hooks.
* The last payee just receives a normal payment using the normal 
invoice-receive scheme.
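The intermediate-payee state machine above can be condensed as follows, outside 
any real plugin framework (the class and callback shapes are illustrative; a 
real C-Lightning plugin would use the `htlc_accepted` hook and `datastore` for 
the durable-storage steps, which are elided here).

```python
# Gather incoming htlc_accepted parts until they reach the expected
# amount, then (exactly once) pay the stored next-payee invoice and
# propagate its result to every held HTLC.
class RendezvousForwarder:
    def __init__(self, payment_hash, expected_msat, next_invoice, pay_fn):
        self.payment_hash = payment_hash
        self.expected_msat = expected_msat
        self.next_invoice = next_invoice
        self.pay_fn = pay_fn          # pay(invoice) -> True (ok) / False
        self.held = []                # gathered htlc_accepted resolutions
        self.gathered_msat = 0
        self.result = None            # None = pay not yet attempted

    def on_htlc_accepted(self, amount_msat, resolve_fn):
        self.held.append(resolve_fn)
        self.gathered_msat += amount_msat
        if self.gathered_msat >= self.expected_msat and self.result is None:
            # All parts gathered (handles multipath between payees);
            # a real plugin durably records this before paying.
            self.result = self.pay_fn(self.next_invoice)
        if self.result is not None:
            for resolve in self.held:
                resolve(self.result)  # True -> fulfill, False -> fail
            self.held.clear()

results = []
fwd = RendezvousForwarder("hash", 300, "next-invoice",
                          pay_fn=lambda inv: True)
fwd.on_htlc_accepted(100, results.append)  # partial: nothing resolved yet
fwd.on_htlc_accepted(200, results.append)  # complete: pay, resolve both
assert results == [True, True]
```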

Regards,
ZmnSCPxj


Re: [Lightning-dev] A blame ascribing protocol towards ensuring time limitation of stuck HTLCs in flight.

2021-12-16 Thread ZmnSCPxj via Lightning-dev
Good morning Bastien,



> * it's impossible for a node to prove that it did *not* receive a message: 
> you can prove knowledge,
>   but proving lack of knowledge is much harder (impossible?)

Yes, it is impossible.

If there could exist a proof-of-lack-of-knowledge, then even if I personally 
knew some fact, I could simply run a virtual machine that knows everything I 
know *except* for that piece of knowledge, and generate the 
proof-of-lack-of-knowledge there.
This leads to a contradiction, as I myself *actually* know the fact, but I can 
present the proof-of-lack-of-knowledge by pretending to be somebody ignorant.

Regards,
ZmnSCPxj (I definitely do not know that I am an AI)


Re: [Lightning-dev] PTLCs early draft specification

2021-12-07 Thread ZmnSCPxj via Lightning-dev
Good morning t-bast,


> I believe these new transactions may require an additional round-trip.
> Let's take a very simple example, where we have one pending PTLC in each
> direction: PTLC_AB was offered by A to B and PTLC_BA was offered by B to A.
>
> Now A makes some unrelated updates and wants to sign a new commitment.
> A cannot immediately send her `commitment_signed` to B.
> If she did, B would be able to broadcast this new commitment, and A would
> not be able to claim PTLC_BA from B's new commitment (even if she knew
> the payment secret) because she wouldn't have B's signature for the new
> PTLC-remote-success transaction.
>
> So we first need B to send a new message `remote_ptlcs_signed` to A that
> contains B's adaptor signatures for the PTLC-remote-success transactions
> that would spend B's future commitment. After that A can safely send her
> `commitment_signed`. Similarly, A must send `remote_ptlcs_signed` to B
> before B can send its `commitment_signed`.
>
> It's actually not that bad, we're only adding one message in each direction,
> and we're not adding more data (apart from nonces) to existing messages.
>
> If you have ideas on how to avoid this new message, I'd be glad to hear
> them, hopefully I missed something again and we can make it better!

`SIGHASH_NONE | SIGHASH_NOINPUT` (which will take another what, four years?) or 
a similar "covenant" opcode, such as `OP_CHECKTEMPLATEVERIFY` without any 
commitments or an `OP_CHECKSIGFROMSTACK` on an empty message.
All you really need is a signature for an empty message, really...

Alternately, fast-forwards, which avoid this because it does not change 
commitment transactions on the payment-forwarding path.
You only change commitment transactions once you have enough changes to justify 
collapsing them.
Even in the aj formulation, when A adds a PTLC it only changes the transaction 
that hosts **only** A->B PTLCs as well as the A main output, all of which can 
be sent outright by A without changing any B->A PTLCs.

Basically... instead of a commitment tx like this:

+---+
funding outpoint -->|   |--> A main
|   |--> B main
|   |--> A->B PTLC
|   |--> B->A PTLC
+---+

We could do this instead:

+---+2of2  +-+
funding outpoint -->|   |->| |--> A main
|   |  | |--> A->B PTLC
|   |  +-+
|   |2of2  +-+
|   |->| |--> B main
|   |  | |--> B->A PTLC
+---+  +-+

Then whenever A wants to add a new A->B PTLC it only changes the tx inputs of 
the *other* A->B PTLCs without affecting the B->A PTLCs.
Payment forwarding is fast, and you only change the "big" commitment tx rarely 
to clean up claimed and failed PTLCs, moving the extra messages out of the 
forwarding hot path.

But this is basically highly similar to what aj designed anyway, so...

Regards,
ZmnSCPxj


Re: [Lightning-dev] PTLCs early draft specification

2021-12-07 Thread ZmnSCPxj via Lightning-dev
Good morning LL, and t-bast,

> > Basically, if my memory and understanding are accurate, in the above, it is 
> > the *PTLC-offerrer* which provides an adaptor signature.
> > That adaptor signature would be included in the `update_add_ptlc` message.
>
> Isn't it the case that all previous PTLC adaptor signatures need to be 
> re-sent for each update_add_ptlc message because the signatures would no 
> longer be valid once the commit tx changes. I think it's better to put it in 
> `commitment_signed` if possible. This is what is done with pre-signed HTLC 
> signatures at the moment anyway.

Agreed.

This is also avoided by fast-forwards, BTW, simply because fast-forwards delay 
the change of the commitment tx.
It is another reason to consider fast-forwards, too.

Regards,
ZmnSCPxj




Re: [Lightning-dev] PTLCs early draft specification

2021-12-06 Thread ZmnSCPxj via Lightning-dev
Good morning t-bast,

Long ago: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002385.html

And I quote:

>> A potential issue with MuSig is the increased number of communication rounds 
>> needed to generate signatures.
>
>I think you can reduce this via an alternative script path. In
>particular, if you want a script that the other guy can spend if they
>reveal the discrete log of point X, with musig you do:
>
>   P = H(H(A,B),1)*A + H(H(A,B),2)*B
>   [exchange H(RA),H(RB),RA,RB]
>
>   [send X]
>
>   sb = rb + H(RA+RB+X,P,m)*H(H(A,B),2)*b
>
>   [wait for sb]
>
>   sa = ra + H(RA+RB+X,P,m)*H(H(A,B),1)*a
>
>   [store RA+RB+X, sa+sb, supply sa, watch for sig]
>
>   sig = (RA+RB+X, sa+sb+x)
>
>So the 1.5 round trips are "I want to do a PTLC for X", "okay here's
>sb", "great, here's sa".
>
>But with taproot you can have a script path as well, so you could have a
>script:
>
>   A CHECKSIGVERIFY B CHECKSIG
>
>and supply a partial signature:
>
>   R+X,s,X where s = r + H(R+X,A,m)*a
>
>to allow them to satisfy "A CHECKSIGVERIFY" if they know the discrete
>log of X, and of course they can sign with B at any time. This is only
>half a round trip, and can be done at the same time as sending the "I
>want to do a PTLC for X" message to setup the (ultimately cheaper) MuSig
>spend. It's an extra signature on the sender's side and an extra verification
>on the receiver's side, but I think it works out fine.

It has been a while since I read that post, so my details may be fuzzy, but it 
looks possible as a way to reduce roundtrips, maybe?

Basically, if my memory and understanding are accurate, in the above, it is the 
*PTLC-offerrer* which provides an adaptor signature.
That adaptor signature would be included in the `update_add_ptlc` message.

Does it become more workable that way?
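The scalar algebra in the quoted MuSig construction can be sanity-checked in a 
toy group (a small multiplicative group standing in for secp256k1; all the 
parameters and "hash" coefficients below are arbitrary illustrative constants, 
not real cryptographic values):

```python
# Toy check: sa + sb + x is a valid signature scalar for nonce
# RA+RB+X under aggregate key P = h1*A + h2*B. Group "addition" is
# modular multiplication; scalar multiplication is exponentiation.
p = 2**61 - 1            # group modulus (a Mersenne prime); toy parameter
n = p - 1                # scalar order
g = 3                    # generator

def point(scalar):       # "scalar * G"
    return pow(g, scalar % n, p)

a, b, x = 11, 22, 33     # secret keys and the adaptor secret X = x*G
ra, rb = 44, 55          # nonces
h1, h2 = 7, 9            # stand-ins for H(H(A,B),1), H(H(A,B),2)
c = 13                   # stand-in for the challenge H(RA+RB+X, P, m)

# Partial signatures as in the quote, completed by the adaptor secret:
sb = rb + c * h2 * b
sa = ra + c * h1 * a
s = sa + sb + x

# Verifier: s*G == (RA + RB + X) + c*P, with P = h1*A + h2*B.
R_total = point(ra) * point(rb) * point(x) % p
P = pow(point(a), h1, p) * pow(point(b), h2, p) % p
assert point(s) == R_total * pow(P, c, p) % p
```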

Regards,
ZmnSCPxj


Re: [Lightning-dev] Half-Delegation & Chaperones for Decker Channels

2021-11-29 Thread ZmnSCPxj via Lightning-dev
Good morning Jeremy,

> Just a minor curiosity I figured was worth mentioning on the composition of 
> delegations and anyprevout...
>
> DA: Let full delegation be a script S such that I can sign script R and then 
> R may sign for a transaction T.
> DB: Let partial delegation be a script S such that I can sign a tuple (script 
> R, transaction T) and R may sign T.
>
> A simple version of this could be done for scriptless multisigs where S signs 
> T and then onion encrypts to the signers of R and distributes the shares.

Just to be clear, do you mean, "for the case where R is a scriptless multisig"?
And, "onion encrypts the signature"?

Since part of the signature `(R, s)` would be a scalar modulo k, `s`, another 
way would be to SSS that scalar and distribute the shares to the R multisig 
signers, that may require less computation and would allow R to be k-of-n.
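The SSS suggestion is standard Shamir sharing of the scalar; a minimal sketch 
(field size, threshold, and the toy scalar are illustrative assumptions):

```python
# Split a signature scalar s into k-of-n Shamir shares for the R
# multisig signers; any k shares reconstruct s by Lagrange
# interpolation at x = 0.
import random

P = 2**127 - 1  # a prime field comfortably larger than the toy scalar

def split(secret, k, n_shares):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

s = 123456789  # the signature scalar being shared
shares = split(s, k=2, n_shares=3)
assert recover(shares[:2]) == s   # any 2 of the 3 shares suffice
assert recover(shares[1:]) == s
```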

> However, under such a model, if T is signed by S with AnyPrevOut, then T is 
> now arbitrarily rebindable.
>
> Therefore let us define more strictly:
>
> DC: Let half-delegation be a script S such that I can sign a tuple (script R, 
> transaction T) and R may sign T and revealing T/R does grant authorization to 
> any other party.

Do you mean "does *not* grant"?

If S is a delegator that intends to delegate to R, and creates a simple Taproot 
with keypath S, and signs a spend from that using `SIGHASH_ANYPREVOUT` and 
distributes shares of the signature to R, then once the signature is revealed 
onchain, anyone (not just R) may rebind the transaction to any other Taproot 
with keypath S, which I think is what you wish to prevent with the stricter 
definition "does *not* grant authorization to any other party"?

>
> The signer of R could choose to sign with APO, in which case they make the 
> txn rebindable. They could also reveal the private keys for R similarly.
> For "correct" use, R should sign with SIGHASH_ALL, binding the transaction to 
> a single instance.

Well, for the limited case where R is a k-of-n multisig (including n-of-n), it 
seems the "sign and SSS" approach would work similarly; for "correct" use R 
should sign with `SIGHASH_ALL` anyway, so in the "sign and SSS" method S should 
always sign with `SIGHASH_ALL`.

This does not work if the script S itself is hosted in some construction that 
requires `SIGHASH_ANYPREVOUT` at the base layer, which I believe is what you 
are concerned about?
In that case all signers should really give fresh pubkeys, i.e. no address 
reuse.

> Observation: a tuple script R + transaction T can, in many cases, be 
> represented by script R ||  CTV.
> Corollary: half-delegation can be derived from full delegation and a covenant.
>
> Therefore delegation + CTV + APO may be sufficient for making chaperone 
> signatures work, if they are desired by a user.

Hmm what?
Is there some other use for chaperone signatures other than to artificially 
encumber `SIGHASH_ANYPREVOUT`, or have definitions drifted over time?

> Remarks:
>
> APO's design discussion should not revisit Chaperone signatures (hopefully 
> already a dead horse?) but instead consider how APO might compose with 
> Delegation proposals and CTV.

no chaperones == good

Regards,
ZmnSCPxj



Re: [Lightning-dev] INTEROPERABILITY

2021-11-24 Thread ZmnSCPxj via Lightning-dev
Good morning x raid,

> We are talkin interoperability among impl not individual node operators 
> version management of chosen impl.
>
> where Pierre of Acinq says
> "So we eat our own dog food and will experience force closes before our users 
> do.."
> hahaha made my day ...
>
> a node operator doing live tests as part of its continuous-integration 
> efforts would be expected, and should be able to do so with an 
> implementation-assured latest stable release version.
>
> what is suggested for dialog is that the different impl maintainers, before 
> signing off a stable release, do an extra live test on mainnet with liquidity 
> in channels towards the other impl versions, and by doing so can catch 
> unforeseen glitches that tests of impls in isolation cannot catch.


***We developers already ARE running nodes that are connected to other 
implementations, on mainnet, 24/7.***
In practice we have large release windows before we actually make a final 
release, precisely to catch these kinds of bugs that are not easily visible in 
isolation, but need real data on the network.
My own node (C-Lightning) has channels to LNBIG (lnd), and I suspect at least 
some of the unpublished channels made to me are Electrum, for example, so my 
node is already running against other implementations.

We have been telling you that over several emails already.


Is there any specific thing you can offer?
Put up hardware and coins yourself if you really want this to happen.
I can give you an SSH key so I can install C-Lightning and CLBOSS on your 
hardware myself, then give you the addresses of the C-Lightning node so you can 
provide coins for CLBOSS to manage.


Regards,
ZmnSCPxj


Re: [Lightning-dev] INTEROPERABILITY

2021-11-23 Thread ZmnSCPxj via Lightning-dev
Good morning x-raid,

> so You propose Acinq / Blockstream / Lightning Labs do not have funds to run 
> a box or 2 ?

Not at all; I am proposing that these people, who have already made the effort 
to release working Lightning Network node implementations free of charge to 
you, are not obligated to *also* devote more hardware and resources.

Let me tell a little story...

Some years ago, during the SegWit wars, there was a sentiment "when are 
**they** going to implement Lightning??"
Both anti-SegWit and pro-SegWit asked this:

* anti-SegWit: Yeah, you need bigblocks, Lightning is vaporware, when are 
**they** going to implement Lightning?
* pro-SegWit: Lightning is so totes kool, this is why we SegWit, when are 
**they** going to implement Lightning?

After some time participating in the SegWit wars, I realized that I was, in 
fact, a programmer (LOL).
So why should **I** be asking when **they** are going to implement Lightning?
As a programmer, **I** could implement Lightning myself!
I should be asking myself why **I** was not implementing it.

Thus I started contributing to the Lightning implementation written in a 
language I could understand, C-Lightning.


My question to you is: obviously you are a node operator as otherwise the issue 
you raise would not be relevant to you, but what can *you* do to advance your 
goal?

(In any case: C-Lightning and Eclair devs have already mentioned they already 
run mainnet nodes tracking our respective master branches (i.e. we already eat 
our own dog food, because duh --- for many of us, the reason we are developing 
this is because for *other* reasons, we *have to* run Lightning nodes), is 
there any particular implementation you are concerned about?
Maybe ask them directly?)


Regards,
ZmnSCPxj


Re: [Lightning-dev] INTEROPERABILITY

2021-11-23 Thread ZmnSCPxj via Lightning-dev
Good morning x-raid,

> what i can imagine is each team should provide boxes and channel liquidity 
> as stake on mainnet for tests before announcing a public release, so as to 
> feel the pain first hand instead of having several K's of plebs confused and 
> at worst have funds in closed channels etc., but mostly for helping in 
> smoothly transitioning into the future envisioned mass.

Not all members of all teams are independently wealthy; some cannot afford 
significant liquidity on mainnet, a good Internet connection, or keeping a 
device operational 24/7.
For example, for some time I was a C-Lightning core developer, yet did not run 
a C-Lightning node myself, relying on sheer code review, because I could not 
afford to run a node.
What you imagine would raise the barrier towards contribution (i.e. I might not 
have been able to start contributing to C-Lightning in the first place, for 
example).

I think you misunderstand the open-source model.
If you have the skill, but not the money, you can contribute directly.
If you do not have the skill, but do have the money, you can contribute that by 
hiring developers to work on the project you want.

So, if you are using a particular open-source implementation and storing your 
funds with it, either:

* You have the 1337 skillz0rs: you contribute review and actual code.
* You do not have the 1337 skillz0rs: you contribute hardware and testing 
reports and possibly money.

If the several Ks of plebs are confused, they can aggregate their resources and 
fund one or two developers to review and contribute to the project they are 
using, and maybe some hardware and coins for boxes they keep running.

From my point of view, the Real Issue (TM) here is how to aggregate the will 
of a group of people without risking that some centralized "manager" of 
resources develops incentives that diverge from the group and starts allocating 
resources in ways that the group would, in aggregate, disagree with.

>
> If teams rather outsource the running of boxes with channels on mainnet for 
> impl release and rc versions they would of course be able to, but close to 
> home for managing analysis of the team impl themselves is what I would 
> recommend.
>
> Can also see that each box loglines are collected at one central point 
> whereby requests can be made for comparing interoperability per unix.ts 
> identified by box.
> (that's a lot of data, you say -- not really in Big Data terms; the question 
> is where to set a proper cap in time for collections? a week? a month?)
> I think i might have a solution for the central point collector that could be 
> run by an outside of impl teams perimeter. (sponsored?)

See, if the money on the node is my own, and not contributed by the group that 
is going to receive the logs, I am not going to send the logs verbatim to them, 
nope not nada.
I do not want to become a target, because logs leak information like who my 
channel counterparties are and how often I forward HTLCs and exact dates and 
times of each event, and thus can be used to locate my node, and location is 
the first step to targeted attack.
I mean I use a frikkin set of 8 random letters, come on.
Possibly if the logs had sensitive information redacted (even dates and 
times??), but we need to automate that redaction, and in particular, if the 
implementation changes log messages, we need to ensure that changed log 
messages do not leak information that gets past the automated redaction.
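A minimal sketch of such automated redaction, assuming (purely for illustration) that node IDs, hashes, and ISO timestamps are the sensitive tokens; real log formats differ per implementation, which is exactly why changed log messages can slip past rules like these:

```python
import re

# Illustrative patterns only; each implementation's logs would need their own.
PATTERNS = [
    (re.compile(r"\b[0-9a-f]{66}\b"), "<node_id>"),       # compressed pubkeys
    (re.compile(r"\b[0-9a-f]{64}\b"), "<txid_or_hash>"),  # txids, payment hashes
    (re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*"), "<timestamp>"),
]

def redact(line):
    """Apply each redaction rule in order (66-hex before 64-hex)."""
    for pat, repl in PATTERNS:
        line = pat.sub(repl, line)
    return line

print(redact("2021-11-23T10:15:02Z fwd htlc from 02" + "ab" * 32))
```

Note the ordering: the 66-hex rule must run before the 64-hex rule so a pubkey is not half-matched as a hash.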


Regards,
ZmnSCPxj


Re: [Lightning-dev] INTEROPERABILITY

2021-11-23 Thread ZmnSCPxj via Lightning-dev
Good morning again x-raid,

Are you proposing as well to provide the hardware and Internet connection for 
these boxes?

I know of one person at least who runs a node that tracks the C-Lightning 
master (I think they do a nightly build?), and I run a node that I update every 
release of C-Lightning (and runs CLBOSS as well).
I do not know the actual implementations of what they connect to, but LND is 
very popular on the network and LNBIG is known to be an LND shop, and LNBIG is 
so pervasive that nearly every long-lived forwarding node has at least one 
channel with *some* LNBIG node.
I consider this "good enough" in practice to catch interop bugs, but some 
interop bugs are deeper than just direct node-to-node communications.
For example, we had bugs in our interop with LND `keysend` before, by my memory.

Regards,
ZmnSCPxj


Re: [Lightning-dev] INTEROPERABILITY

2021-11-23 Thread ZmnSCPxj via Lightning-dev
Good morning x-raid,

> I propose a dialog of the below joint effort ...
>
> thanks
> /xraid
>
> ***
> A decentralised integration lab where CL Eclair LDK LND (++ ?) runs each the 
> latest release on "one box" rBOX and master.rc on "another box" rcBOX.

I believe Electrum also has its own bespoke implementation.
There was also Ptarmigan.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-21 Thread ZmnSCPxj via Lightning-dev
Good morning Dave,

> If LN software speculatively chooses a series of attempts with a similar
> 95%, accounting for things like the probability of a stuck payment (made
> worse by longer CLTV timeouts on some paths), it could present users
> with the same sort of options:
>
> ~1 second, x fee
> ~3 seconds, y fee
> ~10 seconds, z fee
>
> This allows the software to use its reliability scoring efficiently in
> choosing what series of payment attempts to make and presents to the
> user the information they need to make a choice appropriate for their
> situation. As a bonus, it makes it easier for wallet software to move
> towards a world where there is no user-visible difference between
> onchain and offchain payments, e.g.:
>
> ~1 second, w fee
> ~15 seconds, x fee
> ~10 minutes, y fee
> ~60 minutes, z fee

This may not match reality, as in the worst case a forwarding node might be 
struck by literal lightning and dropped off the network while your HTLC is on 
that node, only for the relevant channel to be dropped onchain days later when 
the timeout comes due.
Providing this "seconds" estimate does not prepare users for such black swan 
events, where a high-fee transaction gets stalled due to an accident on the 
network.

On the other hand, humans never really handle black swan events in any 
reasonable way anyway, and 95% of the time the payment will probably complete 
within the estimated number of seconds or less.
Even the best onchain estimators fail when a thundering herd of speculators 
decides to trade Bitcoin based on random crap from the noosphere.

The processing needed to figure out a payment plan also becomes significant at 
the "seconds" level, especially if you switch to min-cost flow rather than 
shortest path.
This means the CPU speed of the local node may become significant, or if you 
are delegating pathfinding to a trusted server, the load on that trusted server 
becomes significant.
Sigh.

Why not just ask for a fee budget for a payment, and avoid committing ourselves 
to paying within some number of seconds, given that the seconds estimate may 
very well vary depending on local CPU load?
Would users really complain overmuch if the number of seconds is not provided, 
given that we cannot really estimate this well?
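The fee-budget interface could be as simple as the sketch below (field names and the `try_route` callback are invented for illustration): attempt routes cheapest-first and stop once the budget is exceeded, with no promise about wall-clock time.

```python
def pay_with_budget(routes, fee_budget_msat, try_route):
    """Try candidate routes cheapest-first, never exceeding the fee budget."""
    for route in sorted(routes, key=lambda r: r["fee"]):
        if route["fee"] > fee_budget_msat:
            break                      # budget exhausted: give up cleanly
        if try_route(route):
            return route               # paid within budget
    return None                        # no affordable route succeeded

routes = [{"id": 1, "fee": 500}, {"id": 2, "fee": 100}]
assert pay_with_budget(routes, 1000, lambda r: True)["id"] == 2
```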

Regards,
ZmnSCPxj



Re: [Lightning-dev] Route reliability<->fee trade-off control parameter

2021-11-15 Thread ZmnSCPxj via Lightning-dev
Good morning Joost,

> What I did in lnd is to work with so called 'payment attempt cost'. A virtual 
> satoshi amount that represents the cost of a failed attempt.

https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-August/003191.html

And I quote:

> Introduction
> 
>
> What is the cost of a failed LN payment?

See link for more.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-11-03 Thread ZmnSCPxj via Lightning-dev
Good morning again list,

>
> > We could use an identicon, we do that with the lightningnetwork repository. 
> > An official logo is probably better - give the project a real symbol.
>
> I attached an SVG file I have been working on for some time, for the 
> amusement of all.
>
> It is unfortunately not square, and is very very simple, as well.

The icon attached in the previous email was visually inspired by this: 
https://game-icons.net/1x1/sbed/electric.html

However, the icon attached in the previous email is created solely by me 
without any code from the above link; I created a right triangle, made two 
corners round, copied it and rotated 180 degrees, then built the Lightning 
symbol out of them.
I hereby release the icon in the previous email under CC0.

An alternative which looks more "networky" is: 
https://game-icons.net/1x1/willdabeast/chain-lightning.html
A slight modification of removing the outgoing lightning strike from the 
leftmost person and adding a lightning strike between the leftmost and 
rightmost person would certainly imply "network" of "lightning" users, to me.

On the other hand, a simple Lightning symbol like in the previous email does 
have its cachet.
In particular, this simple symbol allows for the possibility of third parties 
to use as basis for their own logo, due to the base logo being very clean and 
lacking visual busyness.
For instance, an implementation may create their own logo based on some 
modification of this base logo, just as its code might implement the BOLT spec 
but add its own modifications on top of the BOLT spec.

Regards,
ZmnSCPxj



Re: [Lightning-dev] Removing lnd's source code from the Lightning specs repository

2021-11-03 Thread ZmnSCPxj via Lightning-dev
Good morning list,

> We could use an identicon, we do that with the lightningnetwork repository. 
> An official logo is probably better - give the project a real symbol.

I attached an SVG file I have been working on for some time, for the amusement 
of all.

It is unfortunately not square, and is very very simple, as well.

Regards,
ZmnSCPxj

>
> On Tue, Nov 2, 2021 at 10:37 PM Olaoluwa Osuntokun  wrote:
>
> > Oh, also there's currently this sort of placeholder logo from waaay back
> > that's used as the org's avatar/image. Perhaps it's time we roll an
> > "official" logo/avatar? Otherwise we can just switch over the randomly
> > generated blocks thingy that Github uses when an account/org has no
> > avatar.  
> >
> > -- Laolu
> >
> > On Tue, Nov 2, 2021 at 7:34 PM Olaoluwa Osuntokun  wrote:
> >
> > > Circling back to close the loop here:
> > >
> > >   * The new Github org (https://github.com/lightning) now exists, and all 
> > > the
> > >     major implementation maintainers have been added to the organization 
> > > as
> > >     admins. 
> > >
> > >   * A new blips repo (https://github.com/lightning/blips) has been 
> > > created to
> > >     continue the PR that was originally started in the lightning-rfc 
> > > repo.  
> > >
> > >   * The old lightning-rfc repo has been moved over, and been renamed to 
> > > "bolts"
> > >     (https://github.com/lightning/bolts -- should it be all caps? )
> > >
> > > Thanks to all that participated in the discussion (particularly in 
> > > meatspace
> > > during the recent protocol dev meetup!), happy we were able to resolve 
> > > things
> > > and begin the next chapter in the evolution of the Lightning protocol! 
> > >
> > > -- Laolu
> > >
> > > On Fri, Oct 15, 2021 at 1:49 AM Fabrice Drouin  
> > > wrote:
> > >
> > > > On Tue, 12 Oct 2021 at 21:57, Olaoluwa Osuntokun  
> > > > wrote:
> > > > > Also note that lnd has _never_ referred to itself as the "reference"
> > > > > implementation.  A few years ago some other implementations adopted 
> > > > > that
> > > > > title themselves, but have since adopted softer language.
> > > >
> > > > I don't remember that but if you're referring to c-lightning it was
> > > > the first lightning implementation, and the only one for a while, so
> > > > in a way it was a "reference" at the time ?
> > > > Or it could have been a reference to their policy of "implementing the
> > > > spec, all the spec and nothing but the spec"  ?
> > > >
> > > > > I think it's worth briefly revisiting a bit of history here w.r.t the 
> > > > > github
> > > > > org in question. In the beginning, the lightningnetwork github org was
> > > > > created by Joseph, and the lightningnetwork/paper repo was added, the
> > > > > manuscript that kicked off this entire thing. Later 
> > > > > lightningnetwork/lnd was
> > > > > created where we started to work on an initial implementation (before 
> > > > > the
> > > > > BOLTs in their current form existed), and we were added as owners.
> > > > > Eventually we (devs of current impls) all met up in Milan and decided 
> > > > > to
> > > > > converge on a single specification, thus we added the BOLTs to the 
> > > > > same
> > > > > repo, despite it being used for lnd and knowingly so.
> > > >
> > > > Yes, work on c-lightning then eclair then lnd all began a long time
> > > > before the BOLTs process was implemented, and we all set up repos,
> > > > accounts...
> > > > I agree that we all inherited things  from the "pre-BOLTS" era and
> > > > changing them will create some friction but I still believe it should
> > > > be done. You also mentioned potential admin rights issues on the
> > > > current specs repos which would be solved by moving them to a new
> > > > clean repo.
> > > >
> > > > > As it seems the primary grievance here is collocating an 
> > > > > implementation of
> > > > > Lightning along with the _specification_ of the protocol, and given 
> > > > > that the
> > > > > spec was added last, how about we move the spec to an independent 
> > > > > repo owned
> > > > > by the community? I currently have github.com/lightning, and would be 
> > > > > happy
> > > > > to donate it to the community, or we could create a new org like
> > > > > "lightning-specs" or something similar.
> > > >
> > > > Sounds great! github.com/lightning is nice (and I like Damian's idea
> > > > of using github.com/lightning/bolts) and seems to please everyone so
> > > > it looks that we have a plan!
> > > >
> > > > Fabrice
> >



Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-21 Thread ZmnSCPxj via Lightning-dev
Good morning Joost,

> On Thu, Oct 21, 2021 at 12:00 PM ZmnSCPxj  wrote:
>
> > Good morning Joost,
> >
> > > A potential downside of a dedicated probe message is that it could be 
> > > used for free messaging on lightning by including additional data in the 
> > > payload for the recipient. Free messaging is already possible today via 
> > > htlcs, but a probe message would lower the cost to do so because the 
> > > sender doesn't need to lock up liquidity for it. This probably increases 
> > > the spam potential. I am wondering if it is possible to design the probe 
> > > message so that it is useless for anything other than probing. I guess it 
> > > is hard because it would still have that obfuscated 1300 bytes block with 
> > > the remaining part of the route in it and nodes can't see whether there 
> > > is other meaningful data at the end.
> >
> > For the probe, the onion max size does not *need* to be 1300, we could 
> > reduce the size to make it less useable for *remote* messaging.
>
> Yes, maybe it can be reduced a bit. But if we want to support 27 hops like we 
> do for payments, there will be quite some space left for messaging on real 
> routes which are mostly much shorter.

Does six degrees of separation not apply to the LN?
I assume it would --- presumably some mathist can check the actual network 
diameter?

In particular, forwarding nodes have an incentive to shorten the degree of 
separation, at least to popular nodes, by building channels to those, so I 
presume the degrees of separation will remain low.
I expect something like 10 hops would work reasonably well...?

(Longer routes greatly compound their expected failure rate as well, so no 
reasonable payer would prefer longer routes if a shorter route would do.)
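For what it is worth, the hop-count diameter of a channel-graph snapshot is just a repeated BFS; the adjacency map below is a toy stand-in for a real gossip-derived graph:

```python
from collections import deque

def eccentricity(adj, src):
    """Longest shortest-path hop distance from src (assumes connected graph)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(adj):
    """Graph diameter = maximum eccentricity over all nodes."""
    return max(eccentricity(adj, n) for n in adj)

# Toy stand-in for a gossip-derived channel graph.
adj = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
print(diameter(adj))  # 3 hops end-to-end in this toy graph
```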

Regards,
ZmnSCPxj



Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-21 Thread ZmnSCPxj via Lightning-dev
Good morning Joost,

> A potential downside of a dedicated probe message is that it could be used 
> for free messaging on lightning by including additional data in the payload 
> for the recipient. Free messaging is already possible today via htlcs, but a 
> probe message would lower the cost to do so because the sender doesn't need 
> to lock up liquidity for it. This probably increases the spam potential. I am 
> wondering if it is possible to design the probe message so that it is useless 
> for anything other than probing. I guess it is hard because it would still 
> have that obfuscated 1300 bytes block with the remaining part of the route in 
> it and nodes can't see whether there is other meaningful data at the end.

For the probe, the onion max size does not *need* to be 1300, we could reduce 
the size to make it less useable for *remote* messaging.
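As a back-of-the-envelope check of the reduction (the 48-byte per-hop figure is an assumption; real TLV per-hop payloads vary in size):

```python
# Assumed fixed per-hop cost in the onion's routing-info block; with it,
# 27 hops lands near today's 1300-byte payment onion.
PER_HOP_BYTES = 48

def routing_info_size(max_hops):
    return max_hops * PER_HOP_BYTES

print(routing_info_size(27))  # 1296 bytes, close to the current 1300
print(routing_info_size(10))  # 480 bytes for the ~10-hop depth suggested above
```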

Regards,
ZmnSCPxj


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-19 Thread ZmnSCPxj via Lightning-dev
Good morning Joost,

> There could be some corners where the incentives may not work out 100%, but I 
> doubt that any routing node would bother exploiting this. Especially because 
> there could always be that reputation scheme at the sender side which may 
> cost the routing node a lot more in lost routing fees than the marginal gain 
> from the upfront payment.
>
> Another option is that nodes that don't care to be secretive about their 
> channel balances could include the actual balance in a probe failed message. 
> Related: https://github.com/lightningnetwork/lightning-rfc/pull/695
>
> Overall it seems that htlc-less probes are an improvement to what we 
> currently have. Immediate advantages include a reduction of the load on nodes 
> by cutting out the channel update machinery, better ux (faster probes) and no 
> locked up liquidity. On the longer term it opens up the option to charge for 
> failed payments so that we finally have an answer to channel jamming.

One can argue that if you are hoping that forwarding nodes will not exploit 
this, you can also hope that forwarding nodes will not perform channel-jamming 
attacks.
As I noted before, channel jamming attacks will never be performed by payers or 
payees --- they have an incentive to complete the transaction and earn gains 
from trade.
Channel jamming attacks are performed by large forwarding nodes on their 
smaller competitors, since having 100 capacity on large versus 10 capacity on 
the smaller competitor is worse than having 89 capacity on the large versus 0 
capacity on the smaller competitor.

On the other hand, perhaps it is at this point that we should start computing 
the exact incentives, hmm.

--

A thing to note is that any node along a path can disrupt an onion response by 
the simple expedient of XORing it with random stuff, or even just a non-0 
constant.
This may allow for an additional attack vector.
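The corruption itself is trivial; a sketch (function name invented), which is why the sender cannot attribute a garbled reply to any particular hop:

```python
import secrets

def corrupt(onion_reply: bytes) -> bytes:
    """Garble an error reply beyond attribution by XORing with random bytes."""
    mask = secrets.token_bytes(len(onion_reply))
    return bytes(a ^ b for a, b in zip(onion_reply, mask))
```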

Suppose I am a forwarding node and I receive a probe request, and it turns out 
the next hop lacks capacity right now.
I could incite the next hop to lie by forwarding the probe request to the next 
hop despite the lack of capacity.

If the next hop responds immediately, I can then corrupt the return onion.
Presumably if the next hop responded immediately it was reporting my lie to the 
sender.
By corrupting the return onion the ultimate sender is unable to determine 
*which* node along the route failed, and I can hope that the reputation penalty 
to my competitor forwarding nodes along the path compensates for the reputation 
hit I personally suffer.

If the next hop takes some time before responding, then possibly it colluded 
with me to lie about the capacity on our channel (i.e. it actually went ahead 
and forwarded to the next hop despite knowing I lied).
Then I could faithfully onion-encrypt the response and send it back to the 
ultimate sender.


To mitigate against the above attack:

* If a forwarding node gets a probe request for an amount that the asker is 
*currently* unable to give anyway:
  * The forwarding node should still forward the probe request.
  * On response, however, it replaces the response with its own report that the 
previous hop was a dirty liar.
  * Note that the asker is the one who has full control over their funds in the 
channel, so the asker cannot later claim "but the network latency mixed up our 
messages!" --- the asker knows when it does `update_add_htlc` to reduce its 
capacity, so it should know whether it has the capacity or not.
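The mitigation above, reduced to a toy decision function (all names invented): the node still relays the probe, but overrides the eventual reply whenever the previous hop provably lacked the capacity it claimed.

```python
def handle_probe(amount, prev_hop_capacity, downstream_response):
    """Forwarding-node behaviour from the bullet list above (sketch only)."""
    if prev_hop_capacity >= amount:
        return downstream_response         # relay the genuine reply
    # We forwarded anyway, but replace the reply: the asker fully controls
    # its side of the channel, so an insufficient-capacity probe is a lie.
    return "previous_hop_lied"
```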


Now, if we are going to add a message "the previous hop was a dirty liar" then 
we should ask if a forwarding node would want to make a false accusation.

* Suppose the previous hop has sufficient capacity and asked us if we have our 
own capacity.
* Does the current hop have any incentive to falsely accuse the previous hop?
  * No: if it did, then the sender would not try their channel again in the 
close future, thus leading to lower fee earnings.


> ZmnSCPxj, as first person to propose the idea (I think?), would you be 
> interested in opening a draft PR on the spec repository that outlines the new 
> message(s) that we'd need and continue detailing from there?

It might end up not happening given the stuff I juggle randomly, so feel free 
to start it.

Regards,
ZmnSCPxj


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-15 Thread ZmnSCPxj via Lightning-dev
Good morning Owen,

> C now notes that B is lying, but is faced with the dilemma:
>
> "I could either say 'no' because I can plainly see that B is lying, or
> I could say 'yes' and get some free sats from the failed payment (or
> via the hope of a successful payment from a capacity increase in the
> intervening milliseconds)."

Note that if B cannot forward an HTLC to C later, then C cannot have a failed 
payment and thus cannot earn any money from the upfront payment scheme; thus, 
at least that part of the incentive is impossible.

On the other hand, there is still a positive incentive for continuing the lie 
--- later, maybe the capacity becomes OK and C could earn both the upfront fee 
and the success fee.

> So C decides it's in his interest to keep the lie going. D, the payee,
> can't tell that it's a lie when it reaches her.
>
> If C did want to tattle, it's important that he be able to do so in a
> way that blames B instead of himself, otherwise payers will assume
> (incorrectly, and to C's detriment) that the liquidity deficit is with C
> rather than B.

That is certainly quite possible to do.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Issue assets on lightning

2021-10-15 Thread ZmnSCPxj via Lightning-dev
Good morning Prayank,


> > https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html
>
> Still trying to understand this problem and possible solutions. Interesting 
> email though (TIL), thanks for sharing the link. Found related things 
> explained Suredbits blog as well.


We can argue that the above problem is really just the "failed HTLCs are free 
and HODL invoices are free" problem, magnified by the fact that, for an 
exchange, with most cryptocurrency assets being very volatile, exchange-rate 
changes can be exploited to leak economic power from exchange nodes.

So, fixing the "free HODL invoices" problem, such as creating a ***palatable*** 
upfront payment scheme, should also fix the American Call Option problem.

On the other hand, if we cannot create an upfront payment scheme and are forced 
to solve the "free HODL invoices" problem by other means, then we may need to 
solve the American Call Option problem separately.

Regards,
ZmnSCPxj


Re: [Lightning-dev] In-protocol liquidity probing and channel jamming mitigation

2021-10-15 Thread ZmnSCPxj via Lightning-dev
Good morning Owen,

> On Thu, Oct 14, 2021 at 09:48:27AM +0200, Joost Jager wrote:
>
> > So how would things work out with a combination of both of the
> > proposals described in this mail? First we make probing free (free as
> > in no liquidity locked up) and then we'll require senders to pay for
> > failed payment attempts too. Failed payment attempts after a
> successful probe should be extremely rare, so doesn't this fix the ux
> issue with upfront fees?
>
> Why couldn't a malicious routing node (or group of colluding routing
> nodes) succeed the probe and then fail the payment in order to collect
> the failed payment fee?

Good observation!

I propose substantially the same thing here: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2021-September/003256.html

In that proposal, I wrote:

> Another thought is: Does the forwarding node have an incentive to lie?
> Suppose the next hop is alive but the forwarding node has insufficient 
> capacity towards the next hop.
> Then the forwarding node can lie and claim it can still resolve the HTLC, in 
> the hope that a few milliseconds later, when the actual HTLC arrives, the 
> capacity towards the next hop has changed.
> Thus, even if the capacity now is insufficient, the forwarding node has an 
> incentive to lie and claim sufficient capacity.
>
> Against the above, we can mitigate this by accepting "no" from *any* node 
> along the path, but only accepting "yes" from the actual payee.

We already have a mechanism to send an onion and get back an "error" reply; the 
reply can be identified by the sender as arising from any node along the path, 
or at the destination.
Basically, we simply reuse this mechanism:

* No HTLC is needed with this onion.
* Only accept an "everything is OK" result from the destination.
* Accept a "sorry cannot forward" from *any* node along the path.

Thus, a malicious node cannot succeed the probe --- the probe has to reach the 
destination.
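The "accept 'no' from anyone, 'yes' only from the payee" rule can be sketched as below. This is an illustrative model only; `ProbeReply` and `probe_result` are hypothetical names, and the sender-side identification of the reply's origin is assumed to come from the existing onion error mechanism.

```python
# Toy model: classify a probe reply against the rule described above.
from dataclasses import dataclass

@dataclass
class ProbeReply:
    origin: str   # node id the sender decoded the reply as coming from
    ok: bool      # True = "everything is OK", False = "sorry cannot forward"

def probe_result(path: list, reply: ProbeReply) -> str:
    """Classify a probe reply for a path [hop1, ..., payee]."""
    if reply.ok:
        # A "yes" is only meaningful from the actual payee; a malicious
        # intermediate node claiming success is ignored.
        return "viable" if reply.origin == path[-1] else "invalid"
    # A "no" is accepted from any node along the path.
    return "not-viable" if reply.origin in path else "invalid"

path = ["B", "C", "D"]
assert probe_result(path, ProbeReply("D", True)) == "viable"
assert probe_result(path, ProbeReply("C", True)) == "invalid"   # middle hop cannot fake success
assert probe_result(path, ProbeReply("C", False)) == "not-viable"
```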

Now the malicious forwarding node could be colluding with the destination, but 
presumably the destination wants to *actually* get paid, so we expect that, 
economically, it has no incentive to cooperate with the malicious node to 
*fail* the actual payment later just to extract a tiny failure fee.

Regards,
ZmnSCPxj


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-13 Thread ZmnSCPxj via Lightning-dev
Good morning Matt,

> On 10/13/21 02:58, ZmnSCPxj wrote:
>
> > Good morning Matt,
> >
> > >  The Obvious (tm) solution here is PTLCs - just have the sender 
> > > always add some random nonce * G to
> > >  the PTLC they're paying and send the recipient a random nonce in the 
> > > onion. I'd generally suggest we
> > >  just go ahead and do this for every PTLC payment, cause why not? Now 
> > > the sender and the lnurl
> > >  endpoint have to collude to steal the funds, but, like, the sender 
> > > could always just give the lnurl
> > >  endpoint the money. I'd love suggestions for fixing this short of 
> > > PTLCs, but its not immediately
> > >  obvious to me that this is possible.
> > >
> >
> > Use two hashes in an HTLC instead of one, where the second hash is from a 
> > preimage the sender generates, and which the sender sends (encrypted via 
> > onion) to the receiver.
> > You might want to do this anyway in HTLC-land, consider that we have a 
> > `payment_secret` in invoices, the second hash could replace that, and 
> > provide similar protection to what `payment_secret` provides (i.e. 
> > resistance against forwarding nodes probing; the information in both cases 
> is private to the ultimate sender and ultimate receiver).
>
> Yes, you could create a construction which does this, sure, but I'm not sure 
> how you'd do this
> without informing every hop along the path that this is going on, and 
> adapting each hop to handle
> this as well. I suppose I should have been more clear with the requirements, 
> or can you clarify
> somewhat what your proposed construction is?

Just that: two hashes instead of one.
Make *every* HTLC on LN use two hashes, even for the current "online RPi user 
pays online RPi user" case --- just use the `payment_secret` for the preimage 
of the second hash; the sender needs to send it anyway.

>
> If you're gonna adapt every node in the path, you might as well just use PTLC.

Correct, we should just do PTLCs now.
(Basically, my proposal was just a strawman to say "we should just do PTLCs 
now")


Regards,
ZmnSCPxj


Re: [Lightning-dev] A Mobile Lightning User Goes to Pay a Mobile Lightning User...

2021-10-13 Thread ZmnSCPxj via Lightning-dev
Good morning Matt,


> The Obvious (tm) solution here is PTLCs - just have the sender always add 
> some random nonce * G to
> the PTLC they're paying and send the recipient a random nonce in the 
> onion. I'd generally suggest we
> just go ahead and do this for every PTLC payment, cause why not? Now the 
> sender and the lnurl
> endpoint have to collude to steal the funds, but, like, the sender could 
> always just give the lnurl
> endpoint the money. I'd love suggestions for fixing this short of PTLCs, 
> but its not immediately
> obvious to me that this is possible.

Use two hashes in an HTLC instead of one, where the second hash is from a 
preimage the sender generates, and which the sender sends (encrypted via onion) 
to the receiver.
You might want to do this anyway in HTLC-land: consider that we have a 
`payment_secret` in invoices; the second hash could replace that, and provide 
similar protection to what `payment_secret` provides (i.e. resistance against 
forwarding nodes probing; the information in both cases is private to the 
ultimate sender and ultimate receiver).
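A minimal sketch of the success branch of such a two-hash HTLC follows. This is not a Script implementation; it only models the claim condition, with made-up preimage values, to show why a forwarding node that learns the payment preimage alone still cannot claim.

```python
# Illustrative model of a two-hash HTLC success branch.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

payment_preimage = b"payment-preimage-known-to-payee!"
sender_secret    = b"second-preimage-from-the-sender!"

payment_hash = h(payment_preimage)
secret_hash  = h(sender_secret)

def can_claim(preimage1: bytes, preimage2: bytes) -> bool:
    # Both hash locks must be satisfied; knowing only the payment
    # preimage (e.g. from probing) is not enough.
    return h(preimage1) == payment_hash and h(preimage2) == secret_hash

assert not can_claim(payment_preimage, b"guess")
assert can_claim(payment_preimage, sender_secret)
```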

In addition, I suspect (but have not worked out yet) that this would allow some 
kind of Barrier Escrow-like mechanism while still in HTLC-land.

Otherwise, just PTLC, man, everyone wants PTLC.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Issue assets on lightning

2021-10-12 Thread ZmnSCPxj via Lightning-dev
Good morning Prayank,

> Hello everyone,
>
> I wanted to know few things related to asset issuance on lightning:
>
> 1.Is it possible to issue assets on LN right now? If yes, what's the process 
> and is it as easy as few commands in liquid: 
> https://help.blockstream.com/hc/en-us/articles/95127583-How-do-I-issue-an-asset-on-Liquid-
>
> 2.If no, is anyone working or planning to work on it?
>
> 3.I had read few things about Omni BOLT which could solve this problem but 
> not sure about status of project and development: 
> https://github.com/omnilaboratory/OmniBOLT-spec
>
> Few use cases for tokens on lightning:
>
> 1.DEX
> 2.Stablecoins
> 3.Liquidity: If projects could incentivize users with native tokens that are 
> associated with the project on every LN channel opened it would improve 
> liquidity.

I heard before that the RGB colored coin project had plans to be compatible 
with Lightning so that channels could be denominated in an issued asset.

Most plans for colored coins on Lightning generally assume that each channel 
has just a single asset, as that seems to be simpler, at least as a start.
However, this complicates the use of such channels for forwarding, as we would 
like to restrict channel gossip to channels that *any* node can easily prove 
actually exist as a UTXO onchain.
Thus, colored coins would need to somehow be provable as existing to *any* node 
(or at least those that support colored coins somehow) on the LN.

Blockstream I believe has plans to include support for Liquid-issued assets in 
C-Lightning somehow; C-Lightning already supports running on top of Liquid 
instead of directly on the Bitcoin blockchain layer (but still uses Bitcoin for 
the channel asset type).

Generally, the assumption is that there would be a Lightning Network where 
channels have different asset types, and you can forward via any channel, 
suffering some kind of asset conversion fee if you have a hop where the 
incoming asset is different from the outgoing asset.


However, do note that some years ago I pointed out that swaps between two 
*different* assets are a form of very lousy American Call Option: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-December/001752.html

Due to this, issued assets may not be usable on Lightning after all, even if 
someone does the work to enable non-Bitcoin assets on Lightning channels.

I am unaware of any actual decent solutions to the American Call Option 
problem, but it has been a few years since then and someone might have come up 
with a solution by now (we hope, maybe).
I believe CJP had a trust-requiring solution: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-May/001292.html 
and https://bitonic.nl/public/slowdown_prevention.pdf
Here is a paper which requires Ethereum (I have not read it because it required 
Ethereum): https://eprint.iacr.org/2019/896.pdf

It may be possible to use Barrier Escrows: 
https://suredbits.com/payment-points-implementing-barrier-escrows/
Barrier Escrows are still trusted (and I think they can serve as the RM role in 
the CJP paper?) to operate correctly, but the exact use of their service is 
blinded to them.
Of course, any single participant of a multi-participant protocol can probably 
unblind the Barrier Escrow, so still not a perfectly trustless solution.



Regards,
ZmnSCPxj


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-11 Thread ZmnSCPxj via Lightning-dev
Good morning aj,

> On Mon, Oct 11, 2021 at 05:05:05PM +1100, Lloyd Fournier wrote:
>
> > ### Scorched earth punishment
> >
> > Another thing that I'd like to mention is that using revocable signatures
> > enables scorched earth punishments [2].
>
> I kind-of think it'd be more interesting to simulate eltoo's behaviour.
> If Alice's current state has balances (A, B) and P in in-flight
> payments, and Bob posts an earlier state with (A', B') and P' (so A+B+P
> = A'+B'+P'), then maybe Alice's justice transaction should pay:
>
> A+P + max(0, B'-B)*0.1 to Alice
> B-f - max(0, B'-B)*0.1 to Bob
>
> (where "f" is the justice transaction fees)
>
> Idea being that in an ideal world there wouldn't be a hole in your pocket
> that lets all your coins fall out, but in the event that there is such
> a hole, it's a nicer world if the people who find your coins give them
> back to you out of the kindness of their heart.
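As a worked example of the quoted split (a toy model only, with made-up sat amounts and the 10% figure taken from the formula above):

```python
# aj's proposed justice-transaction split: Bob posts an old state with
# balance B' (Bp) for himself; A+B+P == A'+B'+P' by assumption.
def justice_split(A, B, P, Bp, f):
    penalty = max(0, Bp - B) * 0.1
    to_alice = A + P + penalty
    to_bob   = B - f - penalty
    return to_alice, to_bob

# Bob posts an old state that gave him 60_000 sats instead of 40_000:
a, b = justice_split(A=50_000, B=40_000, P=10_000, Bp=60_000, f=1_000)
assert a == 62_000.0 and b == 37_000.0
# The outputs always sum to the channel total minus fees:
assert a + b == 50_000 + 40_000 + 10_000 - 1_000
```

Note that if Bob's old balance is no larger than his current one (B' <= B), the penalty term vanishes and Bob simply gets his latest balance back minus fees, which is exactly the "simulate eltoo" behaviour.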

This may model closer to a "two tits for a tat" strategy.

"Tit for tat" is optimum in iterated prisoner dilemma assuming mistakes never 
happen; however, in the real world we know quite well that we may injure 
another person by complete accident.
The usual practice in the real world is that the injured person will accept an 
apology *once*, but a repeat will tend to make people assume you are hostile 
and switch them over to tit for tat.
This overall strategy is then "two tits for a tat": you are (in practice) given 
one chance, and then you are expected to be very careful in interacting with 
that person to keep being in their good graces.

So, if what you propose is widespread, then a theft attempt is costless: you 
can try using old state, and your victim will, on finding it, instead just use 
what they think is the latest state.
Thus, merely attempting the theft is costless (modulo onchain fees, which may 
be enough punishment in this case?).

However, if we assume that in practice a lot of "theft attempts" are really 
people not taking RAID systems and database replication seriously and getting 
punished by the trickster god Murphy, then your proposal would actually be 
better, and if theft is unlikely enough to succeed, then even a costless theft 
attempt would still be worthless (and onchain fees will bite you anyway).

Regards,
ZmnSCPxj


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-08 Thread ZmnSCPxj via Lightning-dev
Good morning aj,

> On Sat, Oct 09, 2021 at 01:49:38AM +, ZmnSCPxj wrote:
>
> > A transaction is required, but I believe it is not necessary to put it 
> > onchain (at the cost of implementation complexity in the drop-onchain case).
>
> The trick with that is that if you don't put it on chain, you need
> to calculate the fees for it in advance so that they'll be sufficient
> when you do want to put it on chain, and you can't update it without
> going onchain, because there's no way to revoke old off-chain funding
> transactions.

Yes, onchain fees, right?

*Assuming* CPFP is acceptable, then fees for the commitment tx on the new 
scheme (or whatever equivalent in the transitioned-to mechanism is) would pay 
for the transitioning transaction, so fees paying for the transitioning 
transaction can still be adjusted at the transitioned-to updatable mechanism.
This probably assumes that the transaction package relay problem is fixed at 
the base layer though.

>
> > This has the advantage of maintaining the historical longevity of the 
> > channel.
> > Many pathfinding and autopilot heuristics use channel lifetime as a 
> > positive indicator of desirability,
>
> Maybe that's a good reason for routing nodes to do shadow channels as
> a matter of course -- call the currently established channel between
> Alice and Bob "C1", and leave it as bolt#3 based, but establish a new
> taproot based channel C2 also between Alice and Bob. Don't advertise C2
> (making it a shadow channel), just say that C1 now supports PTLCs, but
> secretly commit to those PTLCs to C2 instead C1. Once the C2 funding tx
> is buried enough, start advertising C2 instead taking advantage of its
> now sufficiently buried funding transaction, and convert C1 to a shadow
> channel instead.
>
> In particular, that setup allows you to splice funds into or out of the
> shadow channel while retaining the positive longevity heuristics of the
> public channel.

Requires two UTXOs, though, I think?

How about just adding a gossip message "this new short-channel-id is the same 
as this old short-channel-id, use the new-short-channel-id to validate it but 
treat the age as that of the old short-channel-id"?

Regards,
ZmnSCPxj


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-08 Thread ZmnSCPxj via Lightning-dev
Good morning aj,

> In order to transition from BOLT#3 format to this proposal, an onchain
> transaction is required, as the "revocable signatures" arrangement cannot
> be mimicked via the existing 2-of-2 CHECKMULTISIG output.

A transaction is required, but I believe it is not necessary to put it 
*onchain* (at the cost of implementation complexity in the drop-onchain case).

An existing channel can "simply" sign a transitioning transaction from the 
current BOLT3 to your new scheme, and then invalidate the last valid commitment 
transactions (i.e. exchange revocation secrets) in the BOLT3 scheme.
The transitioning transaction can simply be kept offchain and its output used 
as the funding outpoint of all "internal" (to the two counterparties directly 
in the channel) states.

This general idea would work for all transitions *from* Poon-Dryja, I believe.
It may be possible with Decker-Russell-Osuntokun I think (give the 
transitioning transaction the next sequence `nLockTime` number), but the 
`SIGHASH_NOINPUT`ness and (maybe?) the `CSV` infects the mechanism being 
transitioned to, so this technique may be too complicated for transitioning 
*from* Decker-Russell-Osuntokun to some hypothetical future offchain updatable 
cryptocurrency system that does not need (or want) `SIGHASH_NOINPUT`.

This has the advantage of maintaining the historical longevity of the channel.
Many pathfinding and autopilot heuristics use channel lifetime as a positive 
indicator of desirability, thus an *onchain* transitioning transaction is 
undesirable as that marks a closure of the previous channel.
And the exact scheme of channels between forwarding nodes are not particularly 
the business of anyone else anyway.


Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-10-08 Thread ZmnSCPxj via Lightning-dev
Good morning Pieter,

> Indeed - UTXO set size is an externality that unfortunately Bitcoin's 
> consensus rules fail to account
> for. Having a relay policy that avoids at the very least economically 
> irrational behavior makes
> perfect sense to me.
>
> It's also not obvious how consensus rules could deal with this, as you don't 
> want consensus rules
> with hardcoded prices/feerates. There are possibilities with designs like 
> transactions getting
> a size/weight bonus/penalty, but that's both very hardforky, and hard to get 
> right without
> introducing bad incentives.

Why is a +weight malus *very* hardforky?

Suppose a new version of a node adds, say, +20 sipa per output of a transaction 
(in order to economically discourage the creation of additional outputs in the 
UTXO set).
Older versions would see the block as being lower weight than usual, but as the 
consensus rule is "smaller than 4Msipa" they should still accept any block 
acceptable to newer versions.

It seems to me that only a -weight bonus is hardforky (but then xref SegWit and 
its -weight bonus on inputs).
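The soft-fork argument above can be made concrete with a toy model: the new rule only ever *adds* weight, so any block valid under the new rule is also valid under the old rule. The +20 sipa figure is the example from the text, not a concrete proposal.

```python
# Toy model: per-output weight malus is a tightening (soft fork).
WEIGHT_LIMIT = 4_000_000
OUTPUT_MALUS = 20  # extra weight units per output under the new rule

def old_weight(txs):
    return sum(tx["weight"] for tx in txs)

def new_weight(txs):
    return sum(tx["weight"] + OUTPUT_MALUS * tx["n_outputs"] for tx in txs)

block = [{"weight": 800, "n_outputs": 2} for _ in range(4000)]

# If the block passes the new rule...
assert new_weight(block) <= WEIGHT_LIMIT
# ...old nodes, which compute a weight that can only be lower or equal,
# accept it too.
assert old_weight(block) <= new_weight(block) <= WEIGHT_LIMIT
```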

I suppose the effect is primarily felt on mining nodes?
Miners might refuse to activate such a fork, as they would see fewer 
transactions per block on average?

Regards,
ZmnSCPxj



Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-10-08 Thread ZmnSCPxj via Lightning-dev
Good morning shymaa,

> The suggested idea I was replying to is to make all dust TXs invalid by some 
> nodes.

Is this supposed to be consensus change or not?
Why "some" nodes and not all?

I think the important bit is for full nodes.
Non-full-nodes already work at reduced security; what is important is the 
security-efficiency tradeoff.

> I suggested a compromise by keeping them in secondary storage for full nodes, 
> and in a separate Merkle Tree for bridge servers.
> -In bridge servers they won't increase any worstcase, on the contrary this 
> will enhance the performance even if slightly.
> -In full nodes, and since they will usually appear in clusters, they will be 
> fetched rarely (either by a dust sweeping action, or a malicious attacker)
> In both cases as a batch
> -To not exhaust the node with DoS (as the reply mentioned), one may think of 
> uploading the whole dust partition if they were called more than a certain 
> threshold (say more than 1 Tx in a block)
> -and then keep them there for "a while", but as a separate partition too to 
> exclude them from any caching mechanism after that block.
> -The "while" could be a tuned parameter.

Assuming you meant "dust tx is considered invalid by all nodes".

* Block has no dust sweep
  * With dust rejected: only non-dust outputs are accessed.
  * With dust in secondary storage: only non-dust outputs are accessed.
* Block has some dust sweeps
  * With dust rejected: only non-dust outputs are accessed, block is rejected.
  * With dust in secondary storage: some data is loaded from secondary storage.
* Block is composed of only dust sweeps
  * With dust rejected: only non-dust outputs are accessed, block is rejected.
  * With dust in secondary storage: significant increase in processing to load 
large secondary storage into memory.

So I fail to see how the proposal ever reduces processing compared to the idea 
of just outright making all dust txs invalid and rejecting the block.
Perhaps you are trying to explain some other mechanism than what I understood?

It is helpful to think in terms always of worst-case behavior when considering 
resistance against attacks.

> -Take care that the more dust is sweeped, the less dust to remain in the UTXO 
> set; as users are already much dis-incentivised to create more.

But creation of dust is also as easy as sweeping them, and nothing really 
prevents a block from *both* creating *and* sweeping dust, e.g. a block 
composed of 1-input-1-output transactions, unless you want to describe some 
kind of restriction here?

Such a degenerate block would hit your secondary storage double: one to read, 
and one to overwrite and add new entries; if the storage is large then the 
index structure you use also is large and updates can be expensive there as 
well.


Again, I am looking solely at fullnode efficiency here, meaning all rules 
validated and all transactions validated, not validating and simply accepting 
some transactions as valid is a degradation of security from full validation to 
SPV validation.
Now of course in practice modern Bitcoin is hard to attack with *only* mining 
hashpower as there are so many fullnodes that an SPV node would be easily able 
to find the "True" history of the chain.
However, as I understand it, that property of fullnodes protecting against 
attacks on SPV nodes only exists because fullnodes are cheap to keep online; 
if the cost of fullnodes in the **worst case** (***not*** average, please stop 
talking about average case) increases, then it may become feasible for miners 
to attack SPV nodes.

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-10-06 Thread ZmnSCPxj via Lightning-dev
Good morning e,

> mostly thinking out loud
>
> suppose there is a "lightweight" node:
>
> 1.  ignores utxo's below the dust limit
> 2.  doesn't validate dust tx
> 3.  still validates POW, other tx, etc.
>
> these nodes could possibly get forked - accepting a series of valid,
> mined blocks where there is an invalid but ignored dust tx, however
> this attack seems every bit as expensive as a 51% attack

How would such a node treat a transaction that spends multiple dust UTXOs and 
creates a single non-dust UTXO out of them (after fees)?
Is it valid (to such a node) or not?

I presume from #1 it never stores dust UTXOs, so the node cannot know if the 
UTXO being spent by such a tx is spending dust, or trying to spend an 
already-spent TXO, or even inventing a TXO out of `/dev/random`.

Regards,
ZmnSCPxj


[Lightning-dev] Fast Forwards By Channel-in-Channel Construction, Or: Spillman-Decker-Wattenhofer-Poon-Dryja-Towns

2021-10-05 Thread ZmnSCPxj via Lightning-dev
Introduction


In some direct discussions with me, ajtowns recently came up with a fairly 
interesting perspective on implementing Fast Forwards, which I thought deserved 
its own writeup.

The idea by aj is basically having two CSV-variant Spillman unidirectional 
channels hosted by a Poon-Dryja construction, similar to the proposed Duplex 
channels by Decker-Wattenhofer.
The main insight from aj is: why do we need to chain transactions for Fast 
Forwards, when we can use a channel-like construction for it instead, using two 
unidirectional channels?

Review: Spillman Channels and Variants
==

If you are a Bitcoin OG feel free to skip this section; it is intended for 
newer devs who may have never heard of Spillman channel scheme (and its 
variants).
Otherwise, or if you would like a refresher, feel free to read on.

The Spillman channel, and its variants, are all unidirectional: there is a 
fixed payer and a fixed payee.
The `nLockTime`-based variants also have a maximum lifetime.

* To open:
  * The payer creates, but does not sign or broadcast, an initial funding 
transaction, with a funding transaction output that is a simple 2-of-2 between 
payer and payee, but does *not* sign the transaction.
  * The payer generates a backout transaction that has a future `nLockTime` 
agreed upon and signs it.
  * The payee signs the backout transaction and gives the signature to the 
payer.
  * The payer signs and broadcasts the funding transaction.
  * Both payer and payee wait for deep confirmation of the funding transaction.
* To pay:
  * The payer generates a new version of the offchain transaction, giving more 
funds to the payee and less to the payer than the previous version.
  * The offchain transactions have no `nLockTime`!
  * The payer signs the transaction and hands it and its signature over to the 
payee.
  * The payee keeps the transaction and the payer signature.
* To close:
  * At any time before the timeout, the payee can sign the last offchain 
transaction and broadcast it, closing the channel.
  * The payee has every incentive to use the latest offchain transaction, since 
the channel is unidirectional the latest transaction is always going to give it 
more money than every earlier transaction.
  * At the timeout, the payer can recover all of their funds using the backout 
transaction.
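The unidirectional update logic above can be sketched as a tiny state machine (signatures and transactions elided; `SpillmanChannel` is a made-up name for illustration):

```python
# Each payment is a new offchain split giving strictly more to the payee;
# the payee keeps only the latest (best) state.
class SpillmanChannel:
    def __init__(self, capacity):
        self.capacity = capacity
        self.to_payee = 0   # latest signed split held by the payee

    def pay(self, amount):
        # Payer signs a new version of the offchain transaction.
        if amount <= 0 or self.to_payee + amount > self.capacity:
            raise ValueError("insufficient capacity on the payer side")
        self.to_payee += amount

    def close(self):
        # Payee broadcasts the latest state; it is always its best option,
        # since every later state pays the payee strictly more.
        return (self.capacity - self.to_payee, self.to_payee)  # (payer, payee)

ch = SpillmanChannel(100_000)
ch.pay(30_000)
ch.pay(20_000)
assert ch.close() == (50_000, 50_000)
```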

Spillman channels as described are vulnerable to tx malleation, which is fixed 
in SegWit and later, but at this time we already have Poon-Dryja, which is 
better than Spillman in most respects.

`CLTV` Spillman Variant
---

To avoid tx malleation, instead of a presigned backout transaction, the 2-of-2 
between payer and payee can instead be a SCRIPT with the logic:

* Any of the following:
  * Payer signature and Payee signature.
  * Payer signature and CLTV of the timeout.

This is mostly a historical footnote, since tx signature malleation is no 
longer an issue if you use only SegWit, but it is helpful to remember the 
general form of the above SCRIPT logic.
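The spend conditions of the above SCRIPT can be written as a simple predicate (a sketch, not Script; `now` and `timeout` stand for block heights compared by `CLTV`):

```python
# Spendable either cooperatively (both signatures) or by the payer alone
# once the absolute timeout has passed (the CLTV branch).
def can_spend(have_payer_sig, have_payee_sig, now, timeout):
    cooperative = have_payer_sig and have_payee_sig
    backout     = have_payer_sig and now >= timeout
    return cooperative or backout

assert can_spend(True, True, now=100, timeout=500)       # cooperative close
assert not can_spend(True, False, now=100, timeout=500)  # payer alone, too early
assert can_spend(True, False, now=500, timeout=500)      # payer backout after CLTV
```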

`nSequence` and `CSV` Spillman Variants
---

An issue with classic Spillman is that it has a fixed absolute timeout.
Even if the payer still has capacity on their side, the channel has to be 
closed before the absolute timeout is reached.

With the new post-BIP68 semantics for `nSequence` (i.e. relative lock time), 
however, it is possible to use a relative locktime in order to lift the fixed 
channel lifetime.
With this, the channel can be kept open indefinitely, though it would still end 
up losing all utility once the payer has exhausted all capacity on their side.
However, as long as the payer still has capacity, there is no need to close the 
channel just because of an absolute pre-agreed timeout.

To implement this variant using `nSequence`, we need to insert a "kickoff" 
transaction between the funding transaction output and the offchain transaction 
that splits the funds between the payer and payee:

* To open:
  * The payer creates, but does not sign or broadcast, a funding transaction 
with a funding transaction paying out to a plain 2-of-2 between payer and payee.
  * The payer and payee create and sign, and exchange signatures for, a 
"kickoff" transaction.
This is a 1-input-1-output tx that spends the funding TXO and pays to 
another 2-of-2 between payer and payee.
  * The kickoff transaction is completely signed at this point and both payer 
and payee have complete signatures.
  * The payer creates a backout transaction that spends the kickoff transaction 
output, with an agreed-upon `nSequence` time.
  * The payee signs the backout transaction and hands the signature to the 
payer.
  * The payer signs and broadcasts the funding transaction.
* To pay:
  * The payer creates a new version of the offchain transaction, which spends 
the kickoff transaction output (instead of the funding transaction output as in 

[Lightning-dev] Ask First, Shoot (PTLC/HTLC) Later

2021-09-28 Thread ZmnSCPxj via Lightning-dev
Good morning list,

While discussing something tangentially related with aj, I wondered this:

> Why do we shoot an HTLC first and then ask the question "can you actually 
> resolve this?" later?

Why not something like this instead?

* For a payer:
  * Generate a path.
  * Ask first hop if it can resolve an HTLC with those specs (passing the 
encrypted onion).
  * If first hop says "yes", actually do the `update_add_htlc` dance.
Otherwise try again.
* For a forwarder:
  * If anybody asks "can you resolve this path" (getting an encrypted onion):
* Decrypt one layer to learn the next hop.
* Check if the next hop is alive and we have the capacity towards it, if 
not, answer no.
* Ask next hop if it can resolve the next onion layer.
* Return the response from the next hop.
* For a payee:
  * If anybody asks "can you resolve this path":
* If it is not a multipart and we have the preimage, say yes.
* If it is a multipart and we have the preimage, wait for all the parts to 
arrive, then say yes to all of them.
* Otherwise say no.
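The forwarder's handling above can be sketched as follows. This is a hedged model only: `decrypt_layer`, `peer_alive`, `capacity_to`, and `ask_next` are hypothetical helpers standing in for real node internals.

```python
# Forwarder-side "ask first" handling.
def handle_probe(onion, decrypt_layer, peer_alive, capacity_to, ask_next):
    # Decrypt one layer to learn the next hop and the amount asked about.
    next_hop, amount, inner_onion = decrypt_layer(onion)
    # Check next-hop liveness and local outgoing capacity; answer "no"
    # ourselves if either fails, without forwarding anything.
    if not peer_alive(next_hop) or capacity_to(next_hop) < amount:
        return ("no", "self")
    # Otherwise forward the question and relay whatever answer comes back.
    return ask_next(next_hop, inner_onion)

# Fake environment: we can forward up to 2_000 sats towards "C".
decrypt = lambda o: ("C", o["amt"], o["inner"])
alive   = lambda n: True
cap     = lambda n: 2_000
nxt     = lambda n, o: ("yes", "payee")

assert handle_probe({"amt": 1_000, "inner": None}, decrypt, alive, cap, nxt) == ("yes", "payee")
assert handle_probe({"amt": 5_000, "inner": None}, decrypt, alive, cap, nxt) == ("no", "self")
```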

Now, the most obvious reason against this, that comes to mind, is that this is 
a potential DoS vector.
A random node can trigger a lot of network activity by asking random questions 
of random nodes.
Asking the question is free, after all.

However, we should note that sending *actual* HTLCs is a similar DoS vector 
**today**.
This is still "free" in that the asker has no need to pay fees for failed 
HTLCs; they just lose the opportunity cost of the amount being locked up in the 
HTLCs.
And presumably the opportunity cost is low since Lightning forwarding earnings 
are so tiny.

One way to mitigate against this is to make generating an onion costly but 
validating and decrypting it cheap.
We could use an encryption scheme that is more computationally expensive to 
encrypt but cheap to decrypt, for example.
Or we could require proof-of-work on the onion: each unwrapped onion layer, 
when hashed, has to have a hash that is less than some threshold (this scales 
according to the number of hops in the onion, as well).
Ultimate askers need to grind the shared secret until the onion layer hash 
achieves the target.
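The grinding idea can be illustrated with a toy loop: the asker repeatedly tries nonces until the hash of the (here, fake) onion layer falls below a threshold, while checking costs the receiver a single hash. Real onion construction is elided; the threshold here is deliberately cheap for demonstration.

```python
# Toy proof-of-work grind over a fake onion layer.
import hashlib

THRESHOLD = 2 ** 244  # ~12 leading zero bits; cheap enough for a demo

def grind(layer: bytes) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(layer + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < THRESHOLD:
            return nonce  # asker paid the work; verifying is one hash
        nonce += 1

nonce = grind(b"onion-layer")
check = hashlib.sha256(b"onion-layer" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(check, "big") < THRESHOLD
```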

Obviously just because you asked a few milliseconds ago if a path is viable 
does not mean that the path *remains* viable right now when you actually send 
out an HTLC, but presumably that risk is now lessened.
Unexpected shutdowns or losses of connectivity have to occur in a much shorter 
time frame to negatively affect intermediate nodes.

Another thought is: Does the forwarding node have an incentive to lie?
Suppose the next hop is alive but the forwarding node has insufficient capacity 
towards the next hop.
Then the forwarding node can lie and claim it can still resolve the HTLC, in 
the hope that a few milliseconds later, when the actual HTLC arrives, the 
capacity towards the next hop has changed.
Thus, even if the capacity now is insufficient, the forwarding node has an 
incentive to lie and claim sufficient capacity.

Against the above, we can mitigate this by accepting "no" from *any* node along 
the path, but only accepting "yes" from the actual payee.
We already have a mechanism where any node along a route can report a 
forwarding or other payment error, and the sender is able to identify which 
node along the path raised it.
Thus, the payer can identify which node along the route responded with a "yes", 
and check that it definitely reached the payee.
Presumably, when a node receives a question, it checks if the asking node has 
sufficient capacity towards it first, and if not, fails the channel between 
them, since obviously the asking node is not behaving according to protocol and 
is buggy.

Now, this can be used to probe capacities, for free, but again --- we can 
already probe capacities, for free, today, by just using random hashes.



Why is this advantageous at all?

One reason for doing this is that it improves payment latency.
Some paths *will* fail, because there is no single consistent view of the 
network and its capacity (such a view is impossible, since others may also be 
sending via the same forwarding nodes you are using --- and besides, even if 
such a view could be made to exist, it would be dangerously anti-privacy).
This mechanism does not require that intermediate nodes respond with a 
signature and wait for a replied signature *before* they forward the onion to 
the next hop; when they are *just* asking, there is no HTLC involved, no 
updates to the channel state, and the question can be forwarded as soon as we 
can check locally.
Further, in the current mechanism where we shoot HTLCs first and ask questions 
later, failures also require 1.5 roundtrips due to sharing signatures; with the 
"just ask first" phase there is no need for round trips to respond to questions.

Basically, we replace multiple round trips per hop in case of a failure along a 
route with a single large round trip from the payer to the payee.

[Lightning-dev] Theory: Proofs of Payment are Signatures

2021-09-23 Thread ZmnSCPxj via Lightning-dev
Introduction
============


Lightning provides proof-of-payment for standard forms of payment, but does not 
provide it for keysend or base AMP.

Let us consider, then, what exactly a proof-of-payment even *is*.

Lamport Signatures
==================

One of the earliest cryptographic digital signing schemes is the Lamport 
Signature Scheme.

With Lamport signatures, you need to define:

* A trapdoor/one-way hash function with an `n`-bit output.
* A number of bits in the message, `m`.

Then, to generate a private key:

* Generate 2 * `m` random `n`-bit numbers, which are preimages.

Then, to derive a public key from the private key:

* Hash the above 2 * `m` preimages.

To sign a message:

* For each bit `b` of the message:
  * If the bit is clear, send the `b` * 2 + 0 preimage (this is 0-indexed, by 
the way).
  * If the bit is set, send the `b` * 2 + 1 preimage.

To validate, simply check that the received preimage hashes to the 
corresponding hash in the public key.
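
The scheme above can be sketched directly (SHA-256 as the one-way function, so 
`n` = 256; an 8-bit message for brevity):

```python
import hashlib
import os

M_BITS = 8   # number of message bits `m`

def lamport_keygen():
    # Private key: 2 * m random n-bit preimages (n = 256 with SHA-256).
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(M_BITS)]
    # Public key: the hashes of all 2 * m preimages.
    pk = [(hashlib.sha256(p0).digest(), hashlib.sha256(p1).digest())
          for p0, p1 in sk]
    return sk, pk

def lamport_sign(sk, message: int):
    # For each bit b: reveal preimage 2*b+0 if clear, 2*b+1 if set.
    return [sk[b][(message >> b) & 1] for b in range(M_BITS)]

def lamport_verify(pk, message: int, sig) -> bool:
    # Each revealed preimage must hash to the corresponding public hash.
    return all(hashlib.sha256(sig[b]).digest() == pk[b][(message >> b) & 1]
               for b in range(M_BITS))
```

Note the one-time nature: signing two different messages with the same key 
reveals preimages for both bit-values of some positions.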

Hash-based Lightning Payments
=============================

In current Lightning, a BOLT11 invoice provides a single hash.
Then the payer creates an outgoing HTLC, which is transported over the network 
and reaches the payee, who then provides the preimage to that hash.

From a certain point of view, one can consider that the preimage revelation is 
a Lamport signature with a 0-bit message.
Instead of signing a message, it is the *existence* of a signature that 
matters, and that is what proof-of-payment is.
In that degenerate case of a 0-bit message, a preimage and its hash can serve 
as a Lamport signature.

A BOLT11 invoice is signed with ECDSA secp256k1.
One can consider that the hash embedded in the BOLT11 invoice is a delegated 
key, and that the BOLT11 mechanism is really a key delegation mechanism.
The payee signs the BOLT11 invoice using ECDSA secp256k1 using its node id as 
pubkey, specifying a 0-bit Lamport public key (the hash in the BOLT11), and 
delegating responsibility for that particular invoice to that key.
Then, when the payee (or its representative on the network) gets paid, it signs 
the Lamport signature using the delegated key by revealing the preimage.

From this point-of-view, as well, keysend and base AMP do provide a 
proof-of-payment, it is just that the proof-of-payment is in the "wrong" 
direction: the payee gets a proof-of-payment that it got paid, as it is 
provided the public key (the hash) and the preimage (the 0-bit Lamport 
signature).
The payee gets proof that it got paid, but typically we expect that people will 
charge (i.e. demand to get paid) for their autographs, so a payee getting a 
signature from the payer does not quite fit the expected economics.

A Scheme For Point-based Payments?
==================================

Given the above idea, would it be useful to consider that a PTLC-based LN 
should explicitly use Schnorr signatures for proof-of-payment?

Schnorr signatures are `(R, s)`, and once `R` has been established, we do know 
that we can implement a pay-for-signature to acquire the `s` using PTLCs, which 
can be transported across the network.

From this point of view, then, proof-of-payments are signatures `(R, s)`, with 
`s` being acquired using PTLCs over the network.
What is needed then is to somehow transport `R`.

For example, a PTLC-based invoice scheme might commit to a specific `R` in the 
invoice.
Then the public key is "really" the payee node public key, *and* this specific 
`R`, which forms a one-time signature scheme (as reuse of `R` is unsafe).
This is similar to the current BOLT11 scheme in that the hash is "really" a 
0-bit Lamport public key, which is similarly one-time-use.
Then the invoice represents a delegation of the node public key to the 
augmented node id + `R` public key.

Additionally, we may provide a mechanism to request for an `R` from the payee, 
together with a message to be signed.
The payee may then use some deterministic `R` generation scheme to provide `R` 
to the payer.

*How* these mechanisms might actually be *used*, I am significantly less 
certain about, but perhaps application developers on top of Lightning may have 
some ideas that can be shoehorned into this.

At its minimum, even a simple scalar-behind-this-point PTLC payment scheme 
could still work for proof-of-payment, as the function `f(x) = x * G` is a 
perfectly fine trapdoor function for the purposes of Lamport signing, so there 
is no real need to have "full" signatures as the basic mechanism in a 
PTLC-based Lightning Network.

Forwarding nodes, of course, need not know about this, but we do need to 
consider whether support for this would be useful at the payer and payee, and 
in our APIs exposed to applications built on top of Lightning.

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread ZmnSCPxj via Lightning-dev
Good morning Joost,

> > > preimage = H(node_secret | payment_secret | payment_amount | 
> > > encoded_order_details)
> > > invoice_hash = H(preimage)
> > >
> > > The sender sends an htlc locked to invoice_hash for payment_amount and 
> > > passes along payment_secret and encoded_order_details in a custom tlv 
> > > record.
> > >
> > > When the recipient receives the htlc, they reconstruct the preimage 
> > > according to the formula above. At this point, all data is available to 
> > > do so. When H(preimage) indeed matches the htlc hash, they can settle the 
> > > payment knowing that this is an order that they committed to earlier. 
> > > Settling could be implemented as a just-in-time inserted invoice to keep 
> > > the diff small.
> > >
> > > The preimage is returned to the sender and serves as a proof of payment.
> >
> > Does this actually work?
> > How does the sender know the `invoice_hash` to lock the HTLC(s) to?
>
> > If the sender does not know the `node_secret` (from its name, I am guessing 
> > it is a secret known only by the recipient?) then it cannot compute 
> > `invoice_hash`, the `invoice_hash` has to be somehow learned by the sender 
> > from the recipient.
>
> So to be clear: this isn't a spontaneous payment protocol with 
> proof-of-payment. The sender will still request an invoice from the recipient 
> via an ordinary http request (think of a paywall with qr invoice that is 
> presented when web-browsing to a paid article). That is also how the sender 
> learns the invoice_hash. It is part of the bolt11 payment request as it 
> always is.
>
> The goal of the scheme is to alleviate the recipient from storing the 
> invoices that they generate.

Ah, thanks for the clarification.
This should probably work, then.
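
The scheme under discussion can be sketched as follows (byte-string 
concatenation with a separator stands in for whatever canonical serialization a 
real implementation would use):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Placeholder serialization: a real implementation needs an
    # unambiguous encoding of the concatenated fields.
    return hashlib.sha256(b"|".join(parts)).digest()

node_secret = b"known-only-to-recipient"
payment_secret = b"random-per-invoice"
payment_amount = (100_000).to_bytes(8, "big")
encoded_order_details = b"order-1234"

# Invoice issuance: the recipient computes the hash but stores nothing.
preimage = h(node_secret, payment_secret, payment_amount, encoded_order_details)
invoice_hash = hashlib.sha256(preimage).digest()  # placed in the BOLT11 invoice

# Settlement: the recipient reconstructs the preimage from the HTLC's
# custom TLV record and checks it against the HTLC hash.
reconstructed = h(node_secret, payment_secret, payment_amount,
                  encoded_order_details)
assert hashlib.sha256(reconstructed).digest() == invoice_hash
```

The recipient only settles when the reconstructed preimage matches, proving it 
committed to this exact order earlier; the revealed preimage is the sender's 
proof of payment.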

Regards,
ZmnSCPxj


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread ZmnSCPxj via Lightning-dev
Good morning again Joost,

> However, we can do "pay for signature" protocols using PTLCs, and rather than 
> requesting for a scalar behind a point as the proof-of-payment, we can 
> instead ask for a signature of a message that attests "this recipient got 
> paid `payment_amount` with `encoded_order_details`" and have a recipient 
> pubkey (not necessarily the node key, it might be best to reduce signing for 
> node keys) as the signing key.
>
> So it seems to me that this cannot work with hashes, but can work with PTLCs 
> if we use pay-for-signature and the proof-of-payment is a signature rather 
> than a scalar.

No, it does not work either.

The reason is that for signing, we need an `R` as well.
Typically, this is a transient keypair generated by the signer as `r = rand(); 
R = r * G`.

In order to set up a pay-for-signature, the sender needs to know an `R` from 
the recipient, and the recipient, being the signer, has to generate that `R` 
for itself.
And if you are just going to do something like sender->request->receiver, 
receiver->R->sender, and *then* do the sender->PTLC->receiver, then you might 
as well just do sender->request->receiver, receiver->invoice->sender, 
sender->PTLC->receiver.

I think your goal, as I understand it, is to reduce it to one round, i.e. 
sender->PTLC+some_data->receiver, then receiver responds to the PTLC that 
somehow generates the proof-of-payment.
Is my understanding correct?

We cannot have the sender generate the `r` and `R = r * G` as knowledge of `r`, 
`s` and the signed message `m` results in learning the privkey `a`:

s = r - a * h(R | m)
a = (r - s) / h(R | m)
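
The algebra can be checked numerically over the secp256k1 group order (a sketch 
with toy constants; the challenge `h(R | m)` is stood in by hashing placeholder 
bytes, since computing `R = r * G` would need EC code):

```python
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

a = 0x1D1CE5    # receiver privkey (toy value)
r = 0xB0BACAFE  # nonce -- known to the SENDER in the broken protocol
e = int.from_bytes(hashlib.sha256(b"R|m placeholder").digest(), "big") % N

s = (r - a * e) % N                        # s = r - a * h(R | m)
recovered = ((r - s) * pow(e, -1, N)) % N  # a = (r - s) / h(R | m)
assert recovered == a
```

Anyone who knows `r` alongside the published `s` and `m` recovers the private 
key, which is why the sender must not pick the nonce.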

Even with MuSig2 we need a separate round for `R` establishment before the 
round where everyone gives shares of `s`, and one can argue that a 
proof-of-payment, being an agreement of the sender and a receiver, is 
semantically equivalent to a 2-of-2 signature of both sender and the receiver 
signing off on the fact that the payment happened.
Thus, it seems to me that we can adapt any *secure* single-round multisignature 
Schnorr scheme to this problem of needing a single-round pay-for-signature.


Perhaps another mechanism?
WARNING: THIS IS NOVEL CRYPTOGRAPHY I THOUGHT UP IN FIVE MINUTES AND I AM NOT A 
CRYPTOGRAPHER, DO NOT ROLL YOUR OWN CRYPTO.

Instead of having a single receiver-scalar, the receiver knows two scalars.

   A = a * G
   B = b * G

The sender knows both `A` and `B`.

Now, suppose sender wants to make a payment to the receiver.
At its simplest, the sender can simply add `A + B` and lock an outgoing PTLC to 
that point.
The proof-of-payment is the sum `a + b`, but knowledge of this sum does not 
imply knowledge of either `a` or `b` (I THINK --- I AM NOT A CRYPTOGRAPHER).

Now, suppose we want a proof-of-payment to be keyed to some data.
We can translate that data to a scalar (e.g. just hash it) and call it `d`.
Then the sender makes a payment to the receiver using this point:

d * A + B

The sender then receives the scalar behind the above point:

d * a + b

Even with knowledge of `d`, the sender cannot learn either `a` or `b` and thus 
cannot synthesize any other proof-of-payment with a different `d`, thus 
"locking" the proof-of-payment to a specific `d`.

The above proof-of-payment is sufficient by showing the point `d * A + B`, the 
committed data `d`, and the receiver public keys `A` and `B`.
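
At the scalar level the scheme can be sketched like so (a toy multiplicative 
group mod a prime stands in for secp256k1, so `g**x mod p` plays the role of 
`x * G` and `d * A + B` becomes `A**d * B`; illustrative only):

```python
import hashlib
import secrets

p = 2**127 - 1  # a Mersenne prime; toy stand-in for the EC group
g = 3

a = secrets.randbelow(p - 1)  # the receiver's two secret scalars
b = secrets.randbelow(p - 1)
A = pow(g, a, p)              # the receiver publishes A and B
B = pow(g, b, p)

# The committed data, translated to a scalar by hashing.
d = int.from_bytes(hashlib.sha256(b"committed data").digest(), "big") % (p - 1)

# The sender locks the PTLC to the "point" d * A + B:
lock_point = (pow(A, d, p) * B) % p

# The receiver reveals the scalar behind it as proof-of-payment:
pop = (d * a + b) % (p - 1)

# Anyone holding A, B, and d can verify the proof:
assert pow(g, pop, p) == lock_point
```

Again, per the warnings above, this is an unreviewed construction; the sketch 
only demonstrates that the verification equation is consistent, not that the 
scheme is secure.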

AGAIN THIS IS NOVEL CRYPTOGRAPHY I THOUGHT UP IN FIVE MINUTES AND I AM NOT A 
CRYPTOGRAPHER, THIS NEEDS ACTUAL MATHEMATICAL REVIEW FROM AN ACTUAL 
CRYPTOGRAPHER.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Stateless invoices with proof-of-payment

2021-09-21 Thread ZmnSCPxj via Lightning-dev


Good morning Joost,

> It could be something like this:
>
> payment_secret = random
> preimage = H(node_secret | payment_secret | payment_amount | 
> encoded_order_details)
> invoice_hash = H(preimage)
>
> The sender sends an htlc locked to invoice_hash for payment_amount and passes 
> along payment_secret and encoded_order_details in a custom tlv record.
>
> When the recipient receives the htlc, they reconstruct the preimage according 
> to the formula above. At this point, all data is available to do so. When 
> H(preimage) indeed matches the htlc hash, they can settle the payment knowing 
> that this is an order that they committed to earlier. Settling could be 
> implemented as a just-in-time inserted invoice to keep the diff small.
>
> The preimage is returned to the sender and serves as a proof of payment.

Does this actually work?
How does the sender know the `invoice_hash` to lock the HTLC(s) to?

If the sender does not know the `node_secret` (from its name, I am guessing it 
is a secret known only by the recipient?) then it cannot compute 
`invoice_hash`, the `invoice_hash` has to be somehow learned by the sender from 
the recipient.

And that is done in the BOLT12 protocol by having the sender send a message to 
the recipient and getting a reply back; included in the reply is a unique 
invoice for a single intended payment.


Note that even using point shenanigans and PTLCs seems not to work.
If you provide, say, a BIP32 nonhardened point / master pubkey, the sender 
could select an arbitrary `i` and ask for the scalar / privkey behind it, but 
that also lets the sender derive the master private key used in the derivation.
Hardening the derivation would prevent master public keys from being used in 
derivations in the first place.

However, we can do "pay for signature" protocols using PTLCs, and rather than 
requesting for a scalar behind a point as the proof-of-payment, we can instead 
ask for a signature of a message that attests "this recipient got paid 
`payment_amount` with `encoded_order_details`" and have a recipient pubkey (not 
necessarily the node key, it might be best to reduce signing for node keys) as 
the signing key.


So it seems to me that this cannot work with hashes, but *can* work with PTLCs 
if we use pay-for-signature and the proof-of-payment is a signature rather than 
a scalar.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Deriving channel keys deterministically from seed, musig, and channel establishment v2

2021-09-20 Thread ZmnSCPxj via Lightning-dev
Good morning SomberNight,

> Good morning ZmnSCPxj,
>
> > > Solutions:
> > >
> > > 1.  Naively, we could just derive a static key to be used as
> > > payment_basepoint, reused between all our channels, and watch the
> > > single resulting p2wsh script on-chain.
> > > Clearly this has terrible privacy implications.
> > >
> >
> > If the only problem is horrible privacy, and you have an 
> > `OP_RETURN`identifying the channel counterparty node id anyway, would it 
> > not be possible to tweak this for each channel?
> > static_payment_basepoint_key + hash(seed | counterparty_node_id)
> > This (should) result in a unique key for each counterparty, yet each 
> > individual counterparty cannot predict this tweak (and break your privacy 
> > by deriving the `static_payment_basepoint_key * G`).
>
> The OP_RETURN containing the encrypted counterparty node id
> is only an option, ideally it should not be required.
>
> Also, your proposal needs a counter too, to avoid reuse between multiple
> channels with the same counterparty. This counter is actually quite
> problematic as users should be able to open new channels after
> restoring from seed, which means they need to be able to figure out
> the last value of the counter reliably, which seems potentially
> problematic, so actually this might have to be a random nonce that is
> wide enough to make collisions unlikely... (potentially taking up
> valuable blockchain space in the OP_RETURN)


Yes, that does seem to be somewhat problematic.

As to your proposal to change the open v2 protocol --- as I understand it, the 
current channel establishment v2 is already deployed in production on 
C-Lightning nodes, so at minimum your proposed extension should probably use 
different feature bits and message IDs, I think.
CCing lisa for comment.

In any case, I think changing the actual commitment scheme to use 
MuSig1/MuSig2/MuSig-DN is lower priority than deploying PTLCs (and PTLCs can be 
used perfectly fine with the current commitment scheme, since you can spend 
from SegWit v0 P2WPKH to P2TR perfectly fine).
Though it certainly depends on others what exactly they prioritize.
I estimate that by the time we get around to MuSig, we may very well already 
have some kind of `SIGHASH_NOINPUT` or other more complicated scheme (I hope, 
maybe?) and might want to switch directly to Decker-Russell-Osuntokun instead 
of MuSig(2/DN)-Poon-Dryja.


Regards,
ZmnSCPxj


Re: [Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-20 Thread ZmnSCPxj via Lightning-dev
Good morning John Law,


> (at the expense of requiring an on-chain transaction to update
> the set of channels created by the factory).

Hmmm this kind of loses the point of a factory?
By my understanding, the point is that the set of channels can be changed 
*without* an onchain transaction.

Otherwise, it seems to me that factories with this "expense of requiring an 
on-chain transaction" can be created, today, without even Taproot:

* The funding transaction output pays to a simple n-of-n.
* The above n-of-n is spent by an *offchain* transaction that splits the funds 
to the current set of channels.
* To change the set of channels, the participants perform this ritual:
  * Create, but do not sign, an alternate transaction that spends the above 
n-of-n to a new n-of-n with the same participants (possibly with tweaked keys).
  * Create and sign, but do not broadcast, a transaction that spends the above 
alternate n-of-n output and splits it to the new set of channels.
  * Sign the alternate transaction and broadcast it, this is the on-chain 
transaction needed to update the set of channels.

The above works today without changes to Bitcoin, and even without Taproot 
(though for large N the witness size does become fairly large without Taproot).

The above is really just a "no updates" factory that cuts through its closing 
transaction with the opening of a new factory.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Deriving channel keys deterministically from seed, musig, and channel establishment v2

2021-09-20 Thread ZmnSCPxj via Lightning-dev
Good morning SomberNight,


> Solutions:
>
> 1.  Naively, we could just derive a static key to be used as
> payment_basepoint, reused between all our channels, and watch the
> single resulting p2wsh script on-chain.
> Clearly this has terrible privacy implications.

If the only problem is horrible privacy, and you have an `OP_RETURN` 
identifying the channel counterparty node id anyway, would it not be possible 
to tweak this for each channel?

static_payment_basepoint_key + hash(seed | counterparty_node_id)

This (should) result in a unique key for each counterparty, yet each individual 
counterparty cannot predict this tweak (and break your privacy by deriving the 
`static_payment_basepoint_key * G`).

?
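
A sketch of that tweak over the secp256k1 scalar field (the serialization of 
the hash input is a placeholder, as is the function name):

```python
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def per_counterparty_key(static_basepoint_key: int, seed: bytes,
                         counterparty_node_id: bytes) -> int:
    """static_payment_basepoint_key + hash(seed | counterparty_node_id),
    reduced mod the group order."""
    tweak = int.from_bytes(
        hashlib.sha256(seed + b"|" + counterparty_node_id).digest(),
        "big") % N
    return (static_basepoint_key + tweak) % N
```

The counterparty only ever sees the tweaked key's point and, without `seed`, 
cannot relate it to `static_payment_basepoint_key * G`.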

Regards,
ZmnSCPxj


Re: [Lightning-dev] Bandwidth in Lightning Network.

2021-09-02 Thread ZmnSCPxj via Lightning-dev
Good morning xraid,

It helps to consider on-Lightning bitcoins as a substitute good for onchain 
bitcoins.

Converting to or from on-Lightning coins to onchain coins has a cost, either:

* Cost of channel open (converting onchain coins to on-Lightning coins) or 
channel close (converting on-Lightning coins to onchain coins).
* The use of services like Boltz, an exchange that facilitates conversion 
between on-Lightning and onchain coins, and which charges a fee.

Consider the case where Lady Gaga is already onboarded, has on-Lightning 
bitcoins, and would very much prefer that her onchain bitcoins be kept in a 
cold wallet that she ideally never brings online in the foreseeable future.
Lady Gaga wishes to pay 0.6BTC to Madonna, using the loose change in her 
Lightning wallet, and not have to go to the bank (cold storage wallet) to move 
funds around (because of risk of getting the keys online and potentially 
hacked).
Madonna, as it happens, has a cold wallet with onchain bitcoins but has no 
ability to receive on-Lightning bitcoins.

Lady Gaga has two choices:

* Lady Gaga closes some channels to convert on-Lightning bitcoins to onchain 
bitcoins.
* Lady Gaga uses Boltz to convert on-Lightning bitcoins to onchain bitcoins.

Now, consider if Lady Gaga had, as is right and proper, decided to make 
multiple channels, in order to reduce counterparty risk (i.e. channel 
counterparties going offline, or deliberately impeding Lady Gaga->Madonna 
exchanges (because seriously Lady Gaga is sexier and Madonna should pay Lady 
Gaga for the privilege of existing) by raising fees for such transactions when 
they detect it).
If so, the first option, closing channels, can be a significant amount of 
onchain activity.
Lady Gaga would need to create multiple closing transactions, and *then* create 
a large (in vbytes) transaction consuming those closing transactions as inputs 
and outputting the amount to Madonna.

Alternately, with proper design of pathfinding algorithms, Lady Gaga can 
deliver the same amount of funds over the Lightning Network, to a Boltz 
Lightning node, and the Boltz service will then send the amount to Madonna.
Boltz can aggregate multiple such transactions into a single onchain 
transaction, saving on onchain fees, and passing on some of those savings to 
Lady Gaga and other clients of Boltz.

Without a pathfinding algorithm that can deliver 0.6BTC from Lady Gaga to Boltz 
over Lightning, the second choice is impossible for Lady Gaga.

Now of course we could be using centralized brokers and avoid onchain fees 
entirely, but that risks censorship (just because Lady Gaga is sexier does not 
mean she is not allowed to purchase tacky purses from the inferior Madonna, 
even though a just and right universe would prevent such a transaction as 
inherent laws of physics).
But the point of Lightning is an attempt to provide:

* Fast
* Cheap
* Reliable
* Non-censorable

payments.
That is why attempts should still be made to keep this option open.

Regards,
ZmnSCPxj




Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-09-02 Thread ZmnSCPxj via Lightning-dev
Good morning Stefan,

> > > For myself, I think a variant of Pickhardt-Richter payments can be 
> > > created which adapts to the reality of the current network where 
> > > `base_fee > 0` is common, but is biased against `base_fee > 0`, can be a 
> > > bridge from the current network with `base_fee > 0` and a future with 
> > > `#zerobasefee`.
> >
> > I have been thinking about your idea (at least what I understood of
> > it) of using amount*prop_fee + amount*base_fee/min_flow_size, where
> > min_flow_size is a suitable quantization constant (say, 10k or 100k
> > sats, may also chosen dynamically), as a component of the cost
> > function, and I am pretty sure it is great at achieving exactly what
> > you are proposing here. This is a nicely convex (even linear in this
> > component) function and so it's easy to find min-cost flows for it. It
> > solves the problem (that I hadn't thought about before) that you have
> > pointed out in splitting flows into HTLCs. If you use
> > min_flow_size=max_htlc_size, it is even optimal (for this
> > min_flow_size). If you use a smaller min_flow_size, it is still
> > optimal for channels with base_fee=0 but overestimates the fee for
> > channels with base_fee>0, and is less accurate the smaller the
> > min_flow_size and the larger the base_fee. So it will be biased
> > against channels with larger base_fee. But notice that with min-cost
> > flows, we are rarely looking for the cheapest solution anyway, because
> > these solutions (if they include more than one path) will usually
> > fully saturate the cheapest channels and thus have very low success
> > probability. So all in all, I believe you found a great practical
> > solution for this debate. Everybody is free to use any base_fee they
> > chose, we get a workable cost function, and I conjecture that
> > economics will convince most people to choose a zero or low base_fee.
>
> I am glad that this is helpful.
> Still, I have not really understood well the variant problem "min cost flow 
> with gains and losses" and this scheme might not work there very well.
> On the other hand, the current algorithms are already known to suck for large 
> payments, so even a not-so-good algorithm based on Pickhardt-Richter may be 
> significantly better than existing deployed code.

In yet another thread I proposed the cost function:

fee + fee_budget * (1 - success_probability)

If the base-to-prop hack (i.e. use a quantization constant like I proposed) can 
be done on the `fee` component above, does it now become convex?

With an amount of 0, `success_probability` is 1, and if we use the base-to-prop 
hack to convert base fees to proportional fees, then the output is also 0 at 
`amount = 0`.
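
Concretely, the combined cost function might look like this (a sketch; the 
uniform-balance success probability `(capacity - amount + 1) / (capacity + 1)` 
is borrowed from the Pickhardt-Richter model, and the function and parameter 
names are made up):

```python
def channel_cost_msat(amount_msat: int,
                      base_fee_msat: int, prop_fee_ppm: int,
                      capacity_msat: int, fee_budget_msat: int,
                      min_flow_size_msat: int = 10_000_000) -> float:
    """fee + fee_budget * (1 - success_probability), with the base fee
    converted to a proportional fee via the quantization constant."""
    fee = (amount_msat * prop_fee_ppm / 1_000_000
           + amount_msat * base_fee_msat / min_flow_size_msat)
    success_probability = (capacity_msat - amount_msat + 1) / (capacity_msat + 1)
    return fee + fee_budget_msat * (1 - success_probability)

# At amount = 0: fee is 0 and success_probability is 1, so cost is 0.
```

Everything is denominated in millisatoshi, which is the attraction described 
below.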

It can even be made separable by clever redefinition of addition (as I pointed 
out in that thread) but perhaps it is too clever and breaks other properties 
that a mincostflow algo needs.

The above is attractive since the cost unit ends up being millisatoshi.
In my experience with CLBOSS, hastily-thought heuristics kinda suck unless they 
are based on actual economic theory, meaning everything should really be in 
terms of millisatoshis or other financial units.

Would this be workable?
Pardon my complete ignorance of maths.

Presumably, there is a reason why the Pickhardt-Richter paper suggests 
`-log(success_probability) + fudging_factor * fee`.
My initial understanding is that this makes simple addition a correct behavior 
(success_probabilities are multiplied), making for a separable cost function 
that uses "standard" arithmetic addition rather than the excessively clever one 
I came up with.
However, it might affect convexity as well?

(on the other hand, credibility should really be measured in decibels anyway, 
and one is estimating the credibility of the implied claim of a published 
channel that it can actually deliver the money to where it is needed...)

The neglogprob is in units of decibels, and I am not sure how to convert a 
millisatoshi-unit fee into decibels.
In particular the logarithm implies a non-linear relationship between 
probability and fee.

I think it is reasonable for paying users to say "if it will take more than NN 
millisatoshis to pay for it, never mind, I won't continue the transaction 
anymore", which is precisely why I added the fee budget in the C-Lightning 
`pay` implementation long ago.

On the other hand, perhaps the nonlinear relationship between the success 
probability and the fee makes sense.
If the success probability is already fairly high, then any small change in fee 
is significant to the user, but as the success probability drops, then the user 
would be willing to accept up to the fee budget.
This implies that if success probability is high, the effect of an increase in 
fee should be larger in comparison to the effect of an increase in success 
probability, but if success probability is low, then the effect of an increase 
in fee should be smaller compared to an increase in success probability.

Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-09-01 Thread ZmnSCPxj via Lightning-dev
Good morning Matt and all,

> Please be careful accepting the faulty premise that the proposed algorithm is 
> “optimal”. It is optimal under a specific heuristic used to approximate what 
> the user wants. In reality, there are a ton of different things to balance, 
> from CLTV to feed to estimated failure probability calculated from node 
> online percentages at-open liquidity, and even fees.

It may be possible to translate all these "things to balance" to a single unit, 
the millisatoshi.

* CLTV-delta
  - The total CLTV-delta time is the worst-case amount of time that your 
outgoing payment will be stalled.
  - We can compute the expected nominal rate of return if the funds were 
instead put in a random Bitcoin-denominated investment, getting back a 
conversion ratio from time units to percentage of your funds.
This is what C-Lightning already does via the `riskfactor` parameter.
* Node failure probability
  - Can be multiplied with channel failure probability (the one based on the 
channel capacity).
  - As I pointed out elsewhere, we can ask the user "up to how much are you 
willing to pay in fees?", and that amount is the cost of failure (because 
economics; see my other mail); the failure probability times the cost of 
failure is the effective cost of this path.
* Fees
  - Are already denominated in millisatoshi.
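
For instance, the CLTV-delta term might convert to millisatoshi roughly like 
this (a sketch in the spirit of C-Lightning's `riskfactor`; the exact formula 
is an assumption here, reading `riskfactor` as an annual percentage rate of 
return):

```python
BLOCKS_PER_YEAR = 52596  # assuming ~10-minute blocks

def cltv_risk_cost_msat(amount_msat: int, cltv_delta_blocks: int,
                        riskfactor_pct_per_year: float) -> float:
    """Price the worst-case lock-up time as forgone return on the
    stalled funds."""
    years_locked = cltv_delta_blocks / BLOCKS_PER_YEAR
    return amount_msat * (riskfactor_pct_per_year / 100) * years_locked
```

The output is in millisatoshi, so it can be summed directly with fees and the 
failure-probability term.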

One unit to rule them all


Regards,
ZmnSCPxj


Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-31 Thread ZmnSCPxj via Lightning-dev
Good morning Stefan,

> > For myself, I think a variant of Pickhardt-Richter payments can be created 
> > which adapts to the reality of the current network where `base_fee > 0` is 
> > common, but is biased against `base_fee > 0`, can be a bridge from the 
> > current network with `base_fee > 0` and a future with `#zerobasefee`.
>
> I have been thinking about your idea (at least what I understood of
> it) of using amount*prop_fee + amount*base_fee/min_flow_size, where
> min_flow_size is a suitable quantization constant (say, 10k or 100k
> sats, may also chosen dynamically), as a component of the cost
> function, and I am pretty sure it is great at achieving exactly what
> you are proposing here. This is a nicely convex (even linear in this
> component) function and so it's easy to find min-cost flows for it. It
> solves the problem (that I hadn't thought about before) that you have
> pointed out in splitting flows into HTLCs. If you use
> min_flow_size=max_htlc_size, it is even optimal (for this
> min_flow_size). If you use a smaller min_flow_size, it is still
> optimal for channels with base_fee=0 but overestimates the fee for
> channels with base_fee>0, and is less accurate the smaller the
> min_flow_size and the larger the base_fee. So it will be biased
> against channels with larger base_fee. But notice that with min-cost
> flows, we are rarely looking for the cheapest solution anyway, because
> these solutions (if they include more than one path) will usually
> fully saturate the cheapest channels and thus have very low success
> probability. So all in all, I believe you found a great practical
> solution for this debate. Everybody is free to use any base_fee they
> chose, we get a workable cost function, and I conjecture that
> economics will convince most people to choose a zero or low base_fee.

I am glad that this is helpful.
Still, I have not fully understood the variant problem "min cost flow 
with gains and losses", and this scheme might not work well there.
On the other hand, the current algorithms are already known to suck for large 
payments, so even a not-so-good algorithm based on Pickhardt-Richter may be 
significantly better than existing deployed code.
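The quoted cost component can be sketched concretely. This is my own illustration (the name `linearized_fee_msat` and the default `min_flow_size` are hypothetical, not from any LN implementation): spreading the base fee over a minimum flow size makes the fee estimate linear, hence convex, in the amount.

```python
# Sketch: linearize the base fee by charging one base_fee per min_flow_size
# of value, so the per-channel fee estimate is linear in the amount.

def linearized_fee_msat(amount_msat, prop_fee_ppm, base_fee_msat,
                        min_flow_size_msat=100_000):
    """Upper-bound fee: amount * ppm + amount * base_fee / min_flow_size."""
    prop = amount_msat * prop_fee_ppm / 1_000_000
    # A payment split into HTLCs of at least min_flow_size msat each
    # never pays more base fee than this estimate allocates.
    base = amount_msat * base_fee_msat / min_flow_size_msat
    return prop + base

# Exact for base_fee = 0; overestimates (is biased against) nonzero base fees:
single_htlc_fee = 1_000_000 * 100 / 1_000_000 + 1_000  # one HTLC, ppm=100, base=1000msat
estimate = linearized_fee_msat(1_000_000, 100, 1_000)
assert estimate >= single_htlc_fee
```

As the quote notes, the overestimate grows as min_flow_size shrinks or base_fee grows, which is exactly the bias against nonzero base fees.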

On the software engineering side, the fact that it took you 2 months probably 
means implementing this would take even longer, like 6 months or so.
I mean to say that prior to deployment we would need the dreary tasks of unit 
tests and edge cases (which are needed to ensure that basic functionality is 
not lost if the code is later modified, or more perniciously, that bugfixes do 
not introduce more bugs), code review, and so on.
And for C-Lightning it would have to be implemented in C, which brings its own 
set of problems (memory management, being a lot more stringent about dotting 
every i and crossing every t, explicitly passing objects around, most likely 
rewriting in a continuation passing style/"callback style"...).
Now we could argue that C-Lightning in practice requires Python anyway, but it 
also depends on what libraries you pull in: even if C-Lightning in practice 
requires Python, you still want to keep the dependencies few, or else deployment 
can suffer.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-31 Thread ZmnSCPxj via Lightning-dev
Good morning Orfeas,


> Such an approach is much more suitable to debian, since they have 
> full control and a complete view over their "network" of packages, as opposed 
> to LN, which is decentralized, nodes come and go at will and they can be 
> private (even from developers!).

Indeed, I came back to this topic to make this argument as well.
Maintainers of apt repositories often make *some* amount of effort to avoid 
having too many `Conflict`-ing packages, such that typical simple users never 
experience the NP-hard bits.
Worse, such apt repositories are definitely central points of coordination.

Now of course users can add PPAs and anyone can create PPAs that others can 
consume, and the PPA-creation is not curated.
But in practice, even just a handful of PPAs can risk horrible cascades of 
`Conflict`-ing packages and massive headaches that can only be fixed by just 
reinstalling the OS from scratch and never using PPAs ever again.

(Indeed, the ability for Nix and Guix to run "on top of" an existing OS is a 
good alternative to using PPAs, which avoids the `Conflict` issue.)

Now, with Lightning Network, we run the risk that some de facto centralized 
curator will be consulted to provide good payment solutions.
That centralized curator avoids the NP-hard problems by being able to define 
which nodes and which channels are allowed to be considered in any payment 
solution, in much the same way an `apt` repository curator can define which 
packages are in and out of the repository, all in the name of avoiding the 
NP-hard problem for most users.
The risk is that such central coordinators may very well dangerously centralize 
the network in such a way that evicting them is not easy.
In particular, we do not want so central an entity that the choice for users 
becomes between accepting a centralized LN or making do with the expensive but 
decentralized base layer.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-31 Thread ZmnSCPxj via Lightning-dev
Good morning aj and Rene,

> On Thu, Aug 26, 2021 at 04:33:23PM +0200, René Pickhardt via Lightning-dev 
> wrote:
>
> > As we thought it was obvious that the function is not linear we only 
> > explained
> > in the paper how the jump from f(0)=0 to f(1) = ppm+base_fee breaks 
> > convexity.
>
> (This would make more sense to me as "f(0)=0 but f(epsilon)->b as
> epsilon->0, so it's discontinuous")

I as well; "discontinuous" is how I would understand it, and it really had to be 
pointed out to me that the sudden step from f(0)=0 to f(1) = 1*prop + base is 
what is problematic.

>
> > "Do we really want users to solve an NP-hard problem when
> > they wish to find a cheap way of paying each other on the Lightning 
> > Network?"
>
> FWIW, my answer to this is "sure, if that's the way it turns out".
>
> Another program which solves an NP-hard problem every time it runs is
> "apt-get install" -- you can simulate 3SAT using Depends: and Conflicts:
> relationships between packages.

Thank you for this, I now understand why functional package managers like Nix 
and Guix etc. exist.

> I worked on a related project in Debian
> back in the day that did a slightly more complicated variant of that
> problem, namely working out if updating a package in the distro would
> render other packages uninstallable (eg due to providing a different
> library version) -- as it turned out, that even actually managed to hit
> some of the "hard" NP cases every now and then. But it was never really
> that big a deal in practice: you just set an iteration limit and consider
> it to "fail" if things get too complicated, and if it fails too often,
> you re-analyse what's going on manually and add a new heuristic to cope
> with it.
>
> I don't see any reason to think you can't do roughly the same for
> lightning; at worst just consider yourself as routing on log(N) different
> networks: one that routes payments of up to 500msat at (b+0.5ppm), one
> that routes payments of up to 1sat at (b+ppm), one that routes payments
> of up to 2sat at (b+2ppm), one that routes payments of up to 4sat at
> (b+4ppm), etc. Try to choose a route for all the funds; if that fails,
> split it; repeat. In some case that will fail despite there being a
> possible successful multipath route, and in other cases it will choose a
> moderately higher fee path than necessary, but if you're talking a paying
> a 0.2% fee vs a 0.1% fee when the current state of the art is a 1% fee,
> that's fine.

Well, something similar is already done by C-Lightning, and the `pay` algorithm 
even includes a time limit after which it just fails instead of retrying 
(roughly equivalent to setting an iteration limit, and actually more practical 
since it is not the number of CPU iterations that matters, it is real time 
during which user economic preferences may change).

And for large enough payments, it *still* does not succeed often enough in 
practice.

I pointed out elsewhere that failure-to-pay has an economic cost.
This economic cost is the difference between the value of the product being 
bought, minus the value of the bitcoin being used to pay for it.
(remember, economic value is subjective to every economic actor; the economy is 
simply how the differences in value plays out as actors exchange objects, where 
each actor has its own subjective valuation of that value.)
We can ask the user explicitly for a numeric amount equivalent to this cost, by 
the simple question "how much are you willing to pay in order for this 
transaction to succeed?" i.e. "what is the maximum transaction fee you will 
accept?"

Thus, failing to deliver imposes an economic cost on users, a cost that 
Lightning Network, rightfully or not, has implicitly promised to reduce in 
comparison to onchain transactions.

For myself, I think a variant of Pickhardt-Richter payments can be created 
which *adapts to* the reality of the current network where `base_fee > 0` is 
common, but is biased against `base_fee > 0`, can be a bridge from the current 
network with `base_fee > 0` and a future with `#zerobasefee`.
With C-Lightning and its plugin nature allowing alternate implementations of 
`pay`, this can be done without requiring an entire new LN implementation.
If popular enough, because it demonstrates actual ability to transfer large 
funds at low fees, this can force forwarding nodes to learn that `#zerobasefee` 
earns them more funds (such as if they use a hill-climbing algorithm like I 
described to discover optimal feerates that gives them maximum earnings).
Then `#zerobasefee` just wins economically.

So I think this really just needs a good implementation somewhere.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Fee Budgets: A Possible Path Towards Unified Cost Functions For Lightning Pathfinding Problems

2021-08-30 Thread ZmnSCPxj via Lightning-dev
Good morning Stefan,

> Good Morning Zmn!
>
> If you'd like to understand  the min-cost flow problem and algorithms better, 
> I would really recommend the textbook we have been citing throughout the 
> paper.
>
> The algorithm you have found has a few shortcomings. It'll only work for the 
> linear min-cost flow problem, and it is very slow. In reality, we need to 
> deal with convex cost functions, and the algorithm we have used so far uses 
> an approach called capacity scaling in order to be much faster. It is indeed 
> complex enough that it has taken us about two months to understand and 
> implement it, discovering a nice heuristic in the process of making mistakes. 
>
> Separable in this context means that you can simply add up the costs of the 
> edges to get the total costs. On second thought, your definition would 
> probably work here, by redefining adding up. 
>
> Convex here means that for any two amounts x, y, the cost function f in the 
> interval x, y does not lie below the line connecting the two points (x, f(x)) 
> and (y, f(y)). The intuition here is that a linear approximation never 
> overestimates the real cost. I guess one would need a more involved 
> definition for your more complex coordinates. 

Hmm, that does require that we define "subtract" and "multiply" operations, 
with certain additional rules.
This is probably doable, especially since I think we do not need to multiply a 
`UnifiedCost` by another `UnifiedCost`, only a `UnifiedCost` by a `Rational`.

***HOWEVER*** I realized later that the `#zerobasefee` issue might not actually 
be a problem for the ***mincostflow*** algorithm, but rather a problem for the 
***dissect*** algorithm that converts a flow solution to an actual set of 
sub-payments.
In that case, this effort is actually a dead end; it seems to me that, for the 
mincostflow algo at least, you can trivially add base fees, and the issue is 
that once the dissect algorithm gets its hands on the solution, base fees 
become problematic.

In addition, if you tell me that it took you two months to do the algo, then 
that is an even greater concern --- this is possible to do with a small number 
of operations, but if we need to add more, like "subtract" and "multiply" 
operations, then the risk involved in the software engineering also increases, 
and the expected implementation time might take much, much longer due to 
complexity.

For example, in CLBOSS I have a Dijkstra implementation that has 
inversion-of-control *and* allows users to parametrize "addition" and 
"comparison" operations, it is just a few dozen lines of code.
However, the complexity of the code implementation would greatly increase if 
there is also a need to fill in "multiply" and "subtract" operations as well 
--- consider testing and other code quality needs.

*And* if my conjecture is right --- that the `#zerobasefee` requirement is 
really a requirement of the ***dissect*** algorithm rather than the 
***mincostflow*** algorithm --- then the effort here would be pointless; we 
should focus more on the dissect algo, which is why the new thread yet again.


Regards,
ZmnSCPxj


[Lightning-dev] Handling nonzerobasefee when using Pickhardt-Richter algo variants

2021-08-30 Thread ZmnSCPxj via Lightning-dev
Good morning list,

It seems to me that the cost function defined in Pickhardt-Richter
can in fact include a base fee:

-log(success_probability) + fudging_factor * amount * prop_fee + 
fudging_factor * base_fee

(Rant: why do mathists prefer single-symbol var names, in software
engineering there is a strong injunction **against** single-char
var names, with exceptions for simple numeric iteration loop vars,
I mean keeping track of the meaning of each single-symbol var name
takes up working memory space, which is fairly limited on a wetware
CPU, which is precisely why in software engineering there is an
injunction against single-char var names, srsly.)

It seems to me that the above is "convex", as the `fudging_factor`
for a run will be constant, and the `base_fee` for the channel
would also be constant, and since it seems to me, naively, that
the paper defines "convex" as "the second derivative >= 0 for all
`amount`", and since the derivative of a constant is 0, the above
would still remain convex.
(I am not a mathist and this conjecture might be completely asinine.)
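The conjecture can be checked numerically with a small sketch of my own (names and parameter values are illustrative only): a discrete second difference of the fee component is zero everywhere, so adding a constant base term keeps the function (weakly) convex.

```python
# Sketch: with constant fudging_factor and base_fee, the fee component is
# linear in `amount`; a linear function has zero second difference, hence
# second derivative >= 0 holds and convexity is preserved.

def fee_part(amount, prop_fee=0.001, base_fee=1.0, fudge=1.0):
    return fudge * amount * prop_fee + fudge * base_fee

# Discrete second difference f(a+1) - 2*f(a) + f(a-1); zero (up to float
# rounding) for all a means the function is linear, hence convex.
second_diffs = [fee_part(a + 1) - 2 * fee_part(a) + fee_part(a - 1)
                for a in range(1, 100)]
assert all(abs(d) < 1e-9 for d in second_diffs)
```

Note this check only covers the fee term; the `-log(success_probability)` term is convex by the paper's own argument.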

So, it seems to me that the *real* reason for `#zerobasefee` is
not that the **mincostflow** algorithm cannot handle non-zero `base_fee`,
but rather that the **dissect** phase afterwards cannot handle non-zero
`base_fee`.

For example, suppose the mincostflow algorithm were instructed to
deliver 3,000msat from `S` to `D`, and returned the following flow:

S -->3000--> A -->1000--> B
             |            |
             |          1000
             |            v
             +--->2000--> C -->3000--> D

In the "dissect" phase afterwards, the above flow solution would have
to be split into two sub-payments: a 1000 sub-payment `S-A-B-C-D`
and a 2000 sub-payment `S-A-C-D`.

However, this does mean that the `C->D` and `S->A` channels in
the above flow will pay *twice* the base cost that the above
cost function would have computed, because in reality two independent
sub-payments pass through those channels.

Thus, the current Pickhardt-Richter scheme, if modified to consider fees,
cannot work with non-zero base fees --- not due to the mincostflow
algorithm, but due to the dissect algorithm.
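The dissect phase described above is, in essence, a standard flow decomposition. Here is a minimal sketch of my own (the `dissect` function is a toy illustration, not the actual Pickhardt-Richter code), applied to the example flow:

```python
# Sketch: decompose a flow into sub-payments by repeatedly peeling off a
# source->sink path of positive flow. Assumes a valid (conserved) flow.

def dissect(flow, source, sink):
    """flow: {(u, v): amount}. Returns a list of (path, amount) sub-payments."""
    flow = dict(flow)
    payments = []
    while True:
        # Walk from source along edges that still carry positive flow.
        path, node = [source], source
        while node != sink:
            nxt = next((v for (u, v), a in flow.items() if u == node and a > 0), None)
            if nxt is None:
                return payments  # no flow left out of the source
            path.append(nxt)
            node = nxt
        # Peel off the bottleneck amount along this path.
        amt = min(flow[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= amt
        payments.append((path, amt))

flow = {("S", "A"): 3000, ("A", "B"): 1000, ("B", "C"): 1000,
        ("A", "C"): 2000, ("C", "D"): 3000}
subpayments = dissect(flow, "S", "D")
# Two sub-payments result; S->A and C->D each carry both of them,
# so each of those channels is paid base_fee twice.
```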

### Converting Non-Zero Base Fees To Proportional Fees

An alternative solution would be to optimize for a *maximum* fee
rather than optimize for the *exact* fee.

We can do this by virtually splitting up the entire payment into
smaller bunches of value, and asking mincostflow to solve individual
bunches rather than individual msats.

For example, we can decide that the smallest practical HTLC would
be 1000 msat.
Let us call this the "payment unit", and the mincostflow algo
solves for paying in these payment units instead of 1msat units.
(Actual implementations would need to have some heuristic or
reasoned rule-of-thumb assumption on what the smallest practical
HTLC would be.)

In our example, suppose we need to send 2001msat from `S` to `D`,
with the payment unit being 1000 msat.
Then we would actually have the mincostflow algorithm work to
deliver 3 payment units (3 * 1000msat) from `S` to `D`.

Let us suppose that the mincostflow algorithm returns the following
flow:

S -->3--> A -->1--> B
          |         |
          |         1
          |         v
          +--->2--> C -->3--> D

Actually, what the mincostflow algorithm uses as a cost function
would be:

-log(success_probability)
  + fudging_factor * amount * prop_fee * payment_unit
  + fudging_factor * base_fee * amount

== -log(success_probability)
  + fudging_factor * amount * (prop_fee * payment_unit + base_fee)

where: amount is in units of 1000msat, our smallest practical HTLC;
       payment_unit is the unit, i.e. 1000msat.
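The quantized cost function above can be sketched directly; this is my own illustration (the name `edge_cost` and the numeric parameters are hypothetical):

```python
import math

# Sketch of the quantized cost function: amounts are in payment units of
# 1000msat, and every unit is charged a full base_fee, so the mincostflow
# solution budgets one base_fee per unit flowing through an edge.

PAYMENT_UNIT_MSAT = 1000  # assumed smallest practical HTLC

def edge_cost(units, success_prob, prop_fee_ppm, base_fee_msat,
              fudging_factor=1.0):
    fee_per_unit_msat = prop_fee_ppm * PAYMENT_UNIT_MSAT / 1_000_000 + base_fee_msat
    return -math.log(success_prob) + fudging_factor * units * fee_per_unit_msat

# For the C->D edge carrying 3 units with base_fee = 1000msat, the flow
# allocates 3 * base_fee, but the later split yields only 2 sub-payments
# through C->D, which actually pay 2 * base_fee: the budget is an upper bound.
budgeted, actual = 3 * 1000, 2 * 1000
assert actual <= budgeted
```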

What the above means is that the mincostflow algorithm *allocates*
`3 * base_fee` for `C->D`, since the `amount` flowing through
`C->D` would be 3.
However, when we pass the above flow to the dissect algorithm, it
would actually only split this into 2 sub-payments, so the
actual payment plan would only pay `2 * base_fee` for the
`C->D` leg across both sub-payments.

In short, this effectively converts the base fee to a proportional
fee, removing the zerobasefee requirement imposed by the dissect
algorithm.

That is, the cost computed by the mincostflow algorithm is really a
maximum cost budget that the subsequent dissect algorithm could later
spend.

This may be acceptable in practice.
This approximation has a bias against non-zerobasefee --- it would
treat those channels as being far more expensive than they actually
would end up being in an *actual* payment attempt --- but at least
does not *require* zerobasefee.
It would be able to still use non-zerobasefee channels if those are
absolutely required to reach the destination.

This should at least help create a practical payment algorithm that
handles current LN with nonzerobasefee, 

Re: [Lightning-dev] Fee Budgets: A Possible Path Towards Unified Cost Functions For Lightning Pathfinding Problems

2021-08-23 Thread ZmnSCPxj via Lightning-dev
Good morning Stefan,

> Hi Zmn! That is some amazing lateral thinking you have been applying there. 
> I'm quite certain I haven't understood everything fully, but it has been 
> highly entertaining to read. Will have to give it a closer read when I get 
> some time.
>
> As a first impression, here are some preliminary observations: While I highly 
> like the Haskell-style datatype, and the algorithm we use does mostly use 
> Dijkstra pathfinding, I think what is really important in your definition is 
> the computeCost definition. This is what we would call the cost function 
> IIUC, and in order to be able to solve min-cost flow problems it generally 
> has to be separable and convex. I believe your datatype merely hides the fact 
> that it is neither. 

Well, it really depends on what min cost flow algorithms actually assume of the 
"numbers" being used.

For instance, it is well known that the Dijkstra-A\*-Greedy family of 
algorithms do not handle "negative costs".
What it really means is that the algorithms assume:

a + b >= a
a + b >= b

This holds if `a` and `b` are naturals (0 or positive), but not if they are 
integers.
1 + -1 = 0, and 0 >= 1 is not true, thus the type for costs in those algorithms 
cannot be integer types, they have to be naturals.
However if you restrict the type to naturals,  `a + b >= a` holds, and thus 
Dijkstra and its family of algorithms work.

Thus, if you are going to use Dijkstra-A\*-Greedy, you "only" need to have the 
following "operations":

`+` :: Cost -> Cost -> Cost
`<` :: Cost -> Cost -> Bool
zero :: Cost

With the following derived operations:

a > b = b < a
a >= b = not (a < b)
a <= b = not (b < a)
a == b = (a >= b) && (a <= b)
a /= b = (a < b) || (a > b)

And following the laws:

forall (a :: Cost) => a + zero == a
forall (a :: Cost) => zero + a == a
forall (a :: Cost, b :: Cost) => a + b == b + a
forall (a :: Cost, b :: Cost, c :: Cost) => (a + b) + c == a + (b + c)
forall (a :: Cost, b :: Cost) => a + b >= a
forall (a :: Cost, b :: Cost) => a + b >= b
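The operations and laws above can be mirrored in a runnable sketch of my own (this `Cost` type is a toy mirroring the Haskell-style signatures, not `UnifiedCost` itself):

```python
import math
from dataclasses import dataclass
from functools import total_ordering

# Sketch: a non-numeric Cost type providing only `+`, `<`, and `zero`,
# which is all the laws above ask for.

@total_ordering
@dataclass(frozen=True)
class Cost:
    neg_log_success: float  # -log(success probability); 0.0 means certain success
    fee_msat: int

    def __add__(self, other):
        return Cost(self.neg_log_success + other.neg_log_success,
                    self.fee_msat + other.fee_msat)

    def __lt__(self, other):
        # One fixed linearization; any total order consistent with `+` works,
        # since both components are non-negative.
        return (self.neg_log_success + self.fee_msat
                < other.neg_log_success + other.fee_msat)

Cost.ZERO = Cost(0.0, 0)

a = Cost(-math.log(0.9), 100)
b = Cost(-math.log(0.8), 50)
# The laws: identity, commutativity, and a + b >= a, a + b >= b.
assert a + Cost.ZERO == a and Cost.ZERO + a == a
assert a + b == b + a
assert a + b >= a and a + b >= b
```

Non-negativity of both components is what makes `a + b >= a` hold, just as with the naturals in the argument above.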

As a non-mathist I have no idea what "separable" and "convex" actually mean.
Basic search for "convex" and "concave" tends to show up information in 
geometry, which I think is not related (though it is possible there is some 
extension of the geometric concept to pure number theory?).
And definitions on "separable" are not understandable by me, either.

What exactly are the operations involved, and what are the laws those 
operations must follow, for the data type to be "separable" and "convex" 
(vs."concave")?

I guess my problem as well is that I cannot find easy-to-understand algorithms 
for min cost flow --- I can find discussions on the min cost flow "problem", 
and some allusions to solutions to that problem, but once I try looking into 
algorithms it gets quite a bit more complicated.

Basically: do I need these operations?

`*` :: Cost -> Cost -> Cost
`/` :: Cost -> Cost -> Cost --- or Maybe Cost

If not, then why cannot `type Cost = UnifiedCost`?


For example, this page: 
https://www.topcoder.com/thrive/articles/Minimum%20Cost%20Flow%20Part%20Two:%20Algorithms

Includes this pseudocode:

Transform network G by adding source and sink
Initial flow x is zero
while ( Gx contains a path from s to t ) do
Find any shortest path P from s to t
Augment current flow x along P
update Gx

If "find any shortest path" is implemented using Dijkstra-A\*-Greedy, then that 
does not require `Cost` to be an actual numeric type, they just require a type 
that provides `+`, `<`, and `zero`, all of which follow the laws I pointed out, 
*and no more than those*.
`UnifiedCost` follows those laws (though note that my definition of `zero` has a 
bug: `successProbability` should be `1.0`, not `0`).
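The claim that Dijkstra needs only `+`, `<`, and `zero` can be made concrete with a sketch of my own (this is an illustration, not the CLBOSS C++ implementation):

```python
import operator

# Sketch: a Dijkstra parametrized over the cost operations. It uses only
# `plus`, `less`, and `zero`; no multiplication or division of costs.

def dijkstra(graph, source, plus, less, zero):
    """graph: {node: [(neighbor, edge_cost), ...]}. Returns {node: best cost}.
    Uses O(V^2) node selection to avoid assuming costs are heap-comparable."""
    best = {source: zero}
    done = set()
    while True:
        frontier = [n for n in best if n not in done]
        if not frontier:
            return best
        # Pick the unfinished node with least tentative cost, via `less` only.
        node = frontier[0]
        for n in frontier[1:]:
            if less(best[n], best[node]):
                node = n
        done.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            candidate = plus(best[node], edge_cost)
            if neighbor not in best or less(candidate, best[neighbor]):
                best[neighbor] = candidate

# Works with plain numbers, but any type providing the three operations
# (such as a UnifiedCost-like structure) would do:
graph = {"S": [("A", 2), ("B", 5)], "A": [("B", 1), ("D", 7)], "B": [("D", 3)]}
dist = dijkstra(graph, "S", operator.add, operator.lt, 0)
```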

In short --- the output of the cost function is a `UnifiedCost` structure and 
***not*** a number (in the traditional sense).

Basically, I am deconstructing numbers here and trying to figure out what makes 
them tick, and seeing if I can use a different type to provide the "tick".


Regards,
ZmnSCPxj


Re: [Lightning-dev] Fee Budgets: A Possible Path Towards Unified Cost Functions For Lightning Pathfinding Problems

2021-08-20 Thread ZmnSCPxj via Lightning-dev


> Alternative Pathfinding?
> 


Or to put this section more succinctly: Why should cost be a number?

What operations do the minimum cost flow algorithms demand of this thing called 
"cost", and can we provide those operations using something which is not a 
number but is instead a different structure?
What is the minimal interface that the mincostflow algo demands of this "cost" 
datatype?

Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-20 Thread ZmnSCPxj via Lightning-dev
Good morning Jeremy,

> one interesting point that came up at the bitdevs in austin today that favors 
> remove that i believe is new to this discussion (it was new to me):
>
> the argument can be reduced to:
>
> - dust limit is a per-node relay policy.
> - it is rational for miners to mine dust outputs given their cost of 
> maintenance (storing the output potentially forever) is lower than their 
> immediate reward in fees.
> - if txn relaying nodes censor something that a miner would mine, users will 
> seek a private/direct relay to the miner and vice versa.
> - if direct relay to miner becomes popular, it is both bad for privacy and 
> decentralization.
> - therefore the dust limit, should there be demand to create dust at 
> prevailing mempool feerates, causes an incentive to increase network 
> centralization (immediately)
>
> the tradeoff is if a short term immediate incentive to promote network 
> centralization is better or worse than a long term node operator overhead.

Against the above, we should note that in the Lightning spec, when an output 
*would have been* created that is less than the dust limit, the output is 
instead put into fees.
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md#trimmed-outputs

Thus, the existence of a dust limit encourages L2 protocols to have similar 
rules, where outputs below the dust limit are just given over as fees to 
miners, so the existence of a dust limit might very well be 
incentivize-compatible for miners, regardless of centralization effects or not.


Regards,
ZmnSCPxj


[Lightning-dev] Fee Budgets: A Possible Path Towards Unified Cost Functions For Lightning Pathfinding Problems

2021-08-20 Thread ZmnSCPxj via Lightning-dev
Subject: Fee Budgets: A Possible Path Towards Unified Cost Functions For 
Lightning Pathfinding Problems

Introduction
============

What is the cost of a failed LN payment?

Presumably, if a user wants to pay in exchange for something,
that user values that thing higher than the Bitcoin they spend
on that thing.
This is a central assumption of free market economics, that
all transactions are voluntary and that all participants in
the transaction get more utility out of the transaction than
what they put in.

Note that ***value is subjective***.
For example, a farmer values the food they sell less than
the buyer of that food, because the farmer has leet farming
skillz that actually let them **grow food** from literal shit,
sunlight, and water, and presumably the buyer does not have
those leet skillz0rs to convert literal shit to food
using sunlight and water (otherwise they would be growing
their own food).
This applies for all production, given that you puny humans
have such limited time to learn and train leet skillz.

Thus, for a buyer, there is a difference in value between
the product they are buying, and the BTC they are sacrificing
to the elder gods (i.e. the payment network and the seller) in
order to get the product.
The buyer must value the product more than the BTC.

This difference, then, is the cost of a failed payment.
If the attempt to pay fails, then obviously the seller
will not be willing to send the product (as it can receive
no money for it) and the buyer loses the (buyer-subjective)
value of the product minus the value of the BTC they wanted
to use to pay.

This difference in value, while subjective, is quantifiable
(consider how judges at a beauty contest must convert
their subjective judgment of beauty to a number; indeed,
horny humans do this all the time at bars, suggesting that
even judgment-impaired humans intuitively understand that
subjective values can be quantified).
And that quantifiable subjective value can be measured in
units of bitcoin.

Thus, this difference in value is the cost of failure of an
LN payment.
And due to the relationship of failure and success, the
cost of failure is the value of success.

Pickhardt-Richter Payments
==========================

Why is the cost of failure/value of success even relevant?

In [a 2021 paper](https://arxiv.org/abs/2107.05322)
Pickhardt and Richter present a method of estimating
the probability of payment success, and using that
probability-of-success as a cost function for a
generalization of pathfinding algorithms (specifically
minimum cost flow).

Of course, probabilities of success are not the only
concern that actual pathfinding algorithms need to
worry about.
Another two concerns are:

* Actual fees (measured in bitcoin units).
* Actual total cltv-delta (measured in blocks).

It is possible to convert total cltv-delta to a "Fee".
Basically, the cost of the total cltv-delta is the value
of your funds being locked and unuseable for that many
blocks.
This can be represented as an expected annual return on
investment if those funds were instead locked into some
investment.
In C-Lightning this is the `riskfactor` parameter.

This implies that total cltv-delta can be converted
to an equivalent amount of BTCs.
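As a worked illustration of that conversion (my own numbers and names; this is the idea behind `riskfactor`, not the exact C-Lightning formula):

```python
# Sketch: convert a route's total cltv-delta into an equivalent fee by
# treating locked funds as forgone investment return.

BLOCKS_PER_YEAR = 52_596  # ~144 blocks/day * 365.25 days/year

def cltv_risk_cost_msat(amount_msat, total_cltv_delta, annual_return=0.05):
    """Expected return forgone if amount_msat could be locked for
    total_cltv_delta blocks, at an assumed annual_return."""
    years_locked = total_cltv_delta / BLOCKS_PER_YEAR
    return amount_msat * annual_return * years_locked

# Risking 10,000,000 msat behind a 144-block delta at 5%/year:
risk_msat = cltv_risk_cost_msat(10_000_000, 144)  # roughly 1369 msat
```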

Now, the issue is, how can we convert probability of
success to some equivalent amount of BTCs?

This is why the value of success --- i.e. the cost
of payment failure --- is relevant.

By multiplying the cost of failure by the probability
of failure, we can acquire a bitcoins-measured
quantity that can be added directly to expected fees.
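The combination above can be sketched with hypothetical numbers (mine, for illustration only): the effective cost of a route is its fee plus the failure probability times the cost of failure, all in msat.

```python
# Sketch: effective cost = fee + failure_probability * cost_of_failure.

def effective_cost_msat(fee_msat, success_prob, cost_of_failure_msat):
    return fee_msat + (1.0 - success_prob) * cost_of_failure_msat

# A cheap route with poor odds can lose to a pricier, more reliable one:
risky = effective_cost_msat(fee_msat=100, success_prob=0.50,
                            cost_of_failure_msat=10_000)
safe = effective_cost_msat(fee_msat=500, success_prob=0.95,
                           cost_of_failure_msat=10_000)
assert risky > safe  # 5100.0 > ~1000.0
```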

Fee Budgets
===========

Long ago, some weird rando with an unpronouncable name
decided to add a "fee budget" to his implementation of
C-Lightning pay algorithm (back when C-Lightning did not
even have a `pay` algorithm).
For some reason (possibly dark ritual), that rando managed
to get that code into the actual C-Lightning, and the
fee budget --- known as `maxfeepercent` --- has been
retained to this day, even though none of the original
code has survived (dark rituals tend to consume anything
involved in their rites, I would not be surprised if
that includes source code).

Now consider --- how would a buyer assign a fee budget
for a particular payment?

As we noted, a rational buyer will only buy if they
believe the value of the product being bought is higher
than the value of the BTCs they sacrifice to buy that
product.
This difference is the cost of failure (equivalent to
value of success).

And a rational buyer will be willing to pay, as fee,
any value up to this cost of failure/value of success.

For example, suppose the buyer is not willing to pay more
than half the cost of failure as fees.
Then, if there is no way for the payment to succeed at half
the cost of failure, the payment simply fails and the buyer
loses the entire cost of failure.
Logically, then, the buyer must be willing to pay, as
fees, up to the cost of failure.

Similarly, if the buyer is willing to pay up to twice
the cost of failure, 

Re: [Lightning-dev] #zerobasefee

2021-08-20 Thread ZmnSCPxj via Lightning-dev

Good morning Stefan,

> > A reason why I suggest this is that the cost function in actual 
> > implementation is *already* IMO overloaded.
> >
> > In particular, actual implementations will have some kind of conversion 
> > between cltv-delta and fees-at-node.
>
> That's an interesting aspect. Would this lead to a constant per edge if 
> incorporated in the cost function? If so, this would lead to another 
> generally hard problem, which, again, needs to be explored more in the 
> concrete cases we have here to see if we can still solve/approximate it. 

No, because each edge defines its own cltv-delta.

>
> > However, I think that in practice, most users cannot intuitively understand 
> > `riskfactor`.
>
> I don't think they have to. Only people like you who write actual software 
> probably need to. 

***I*** do not intuitively understand it either. (^^;)
I understand it with system 2 (expected return on investment if you were 
investing the money instead of having it locked due to node failure along a 
path) but my system 1 just goes "hmmm whut" and I just use the default, which I 
*hope* cdecker chose rationally.

> > Similarly, I think it is easier for users to think in terms of "fee budget" 
> > instead.
> >
> > Of course, algorithms should try to keep costs as low as possible, if there 
> > are two alternate payment plans that are both below the fee budget, the one 
> > with lower actual fee is still preferred.
> > But perhaps we should focus more on payment success *within some fee and 
> > timelock budget*.
> >
> > Indeed, as you point out, your real-world experiments you have done have 
> > involved only probability as cost.
> > However, by the paper you claim to have sent 40,000,000,000msat for a cost 
> > of 814,000msat, or 0.002035% fee percentage, far below the 0.5% default 
> > `maxfeepercent` we have, which I think is fairly reasonable argument for 
> > "let us ignore fees and timelocks unless it hits the budget".
> > (on the other hand, those numbers come from a section labelled 
> > "Simulation", so that may not reflect the real world experiments you had 
> > --- what numbers did you get for those?)
>
> René is going to publish those results very soon. 
>
> Regarding payment success *within some fee and timelock budget*: the 
> situation is a little more complex than it appears. As you have pointed out, 
> at the moment, most of the routes are very cheap (too cheap, IMHO), so you 
> have to be very unlucky to hit an expensive flow. So in the current 
> environment, your approach seems to work pretty well, which is also why we 
> first thought about it. 
>
> Unfortunately, as you know, we have to think adversarially in this domain. 
> And it is clear that if we simply disregarded fees in routing, people would 
> try to take advantage of this. If we just set a fee budget, and try again if 
> it is missed, then I see some problems arise: First, what edges do you 
> exclude in the next try? Where is that boundary? Second, I am pretty sure an 
> adversary could design a DOS vector in this way by forcing people to go 
> through exponentially many min-cost flow rounds (which are not cheap anyway) 
> excluding only few edges per round. 
>
> Indeed, if you read the paper closely you will have seen that this kind of 
> problem (optimizing for some cost while staying under a budget for a second 
> cost) is (weakly) np-hard even for the single path case. So there is some 
> intuition that this is not as simple as you might imagine it. I personally 
> think that the Lagrangian style of combining the costs in a linear fashion is 
> very promising, but you might be successful with more direct methods as well. 
>
> > Is my suggestion not reasonable in practice?
> > Is the algorithm runtime too high?
>
> See above. I don't know, but I believe it would be hard to make safe against 
> adversaries. Including the fees in the cost function appears to be the more 
> holistic approach to me, since min-cost flow algorithms always give you a 
> globally optimized answer.  

Hah, yes, adversarial.

I may have a route towards a unified cost function, will clean up a write up 
and post in a little while on a new thread.

Regards,
ZmnSCPxj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] #zerobasefee

2021-08-16 Thread ZmnSCPxj via Lightning-dev
Good morning Stefan,

> > I propose that the algorithm be modified as such, that is, it *ignore* the 
> > fee scheme.
>
> We actually started out thinking like this in the event we couldn't find a 
> proper way to handle fees, and the real world experiments we've done so far 
> have only involved probability costs, no fees at all. 
>
> However, I think it is non-trivial to deal with the many cases in which too 
> high fees could occur, and in the end the most systematic way of dealing with 
> them is actually including them in the cost function. 

A reason why I suggest this is that the cost function in actual implementation 
is *already* IMO overloaded.

In particular, actual implementations will have some kind of conversion between 
cltv-delta and fees-at-node.

This conversion implies some kind of "conversion rate" between blocks-locked-up 
and fees-at-node.
For example, in C-Lightning this is the `riskfactor` argument to `getroute`, 
which is also exposed at `pay`.

However, I think that in practice, most users cannot intuitively understand 
`riskfactor`.
I myself cannot; when I write my own `pay` (e.g. in CLBOSS) I just set 
`riskfactor` to the in-manual default value, then tweak it higher if the total 
lockup time exceeds some maximum cltv budget for the payment, and call 
`getroute` again.
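As a rough sketch (in Python, with a toy stand-in for C-Lightning's `getroute` RPC --- the stub and its numbers are made up), the escalation loop above might look like:

```python
# Sketch of the riskfactor-escalation loop described above.  `get_route`
# is a made-up stand-in for C-Lightning's `getroute` RPC; here it is a
# toy stub whose total CLTV lockup shrinks as riskfactor grows.

DEFAULT_RISKFACTOR = 10  # the in-manual default

def get_route(amount_msat, riskfactor):
    # Toy stand-in: higher riskfactor biases toward shorter lockups.
    total_cltv = max(18, 288 - 20 * riskfactor)
    return {"cltv": total_cltv, "riskfactor": riskfactor}

def route_within_cltv_budget(amount_msat, max_cltv=144):
    riskfactor = DEFAULT_RISKFACTOR
    while True:
        route = get_route(amount_msat, riskfactor)
        if route["cltv"] <= max_cltv:
            return route
        riskfactor *= 2  # lockup too long: penalize delay harder, retry

route = route_within_cltv_budget(1_000_000)
print(route["cltv"] <= 144)  # True
```

The point is only the control flow: the cltv budget is checked *after* routing, and `riskfactor` is just the knob that gets turned until the budget is met.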

Similarly, I think it is easier for users to think in terms of "fee budget" 
instead.

Of course, algorithms should try to keep costs as low as possible: if there are 
two alternate payment plans that are both below the fee budget, the one with the 
lower actual fee is still preferred.
But perhaps we should focus more on payment success *within some fee and 
timelock budget*.

Indeed, as you point out, the real-world experiments you have done have 
involved only probability as cost.
However, in the paper you claim to have sent 40,000,000,000msat for a cost of 
814,000msat, a 0.002035% fee, far below the 0.5% default 
`maxfeepercent` we have, which I think is a fairly reasonable argument for "let 
us ignore fees and timelocks unless it hits the budget".
(on the other hand, those numbers come from a section labelled "Simulation", so 
that may not reflect the real world experiments you had --- what numbers did 
you get for those?)
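The arithmetic behind that percentage checks out:

```python
# Fee percentage from the paper's numbers: 40,000,000,000 msat sent for
# 814,000 msat in fees.
amount_msat = 40_000_000_000
fee_msat = 814_000
fee_pct = fee_msat / amount_msat * 100
print(f"{fee_pct:.6f}%")  # 0.002035%
print(fee_pct < 0.5)      # True: far under the default maxfeepercent
```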


>
> That said, I agree with Matt that more research needs to be done about the 
> effect of  base fees on these computations. We do know they make the problem 
> hard in general, but we might find a way to deal with them reasonably in 
> practice. 

Is my suggestion not reasonable in practice?
Is the algorithm runtime too high?

>
> I tend to agree with AJ, that I don't  believe the base fee is economically 
> helpful, but I also think that the market will decide that rather than the 
> devs (though I would argue for default Zerobasefee in the implementations). 
>
> In my view, nobody is really earning any money with the base fee, so the 
> discussion is kind of artificial. On the other hand, I would estimate our 
> approach should lead to liquidity being priced correctly in the proportional 
> fee instead of the price being undercut by hobbyists as is the case now. So 
> in the long run I expect our routing method to make running a well-stocked LN 
> router much more profitable.

While we certainly need to defer to economic requirements, we *also* need to 
defer to engineering requirements (else Lightning cannot be implemented in 
practice, so any economic benefits it might provide are not achievable anyway).
As I understand the argument of Matt, we may encounter an engineering reason to 
charge some base fee (or something very much like it), so encouraging 
#zerobasefee *now* might not be the wisest course of action, as a future 
engineering problem may need to be solved with non-zero basefee (or something 
very much like it).


Regards,
ZmnSCPxj


Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread ZmnSCPxj via Lightning-dev
Good morning matt and aj,

Let me cut in here.

From my reading of the actual paper --- which could be a massive 
misunderstanding, as I can barely understand half the notation, I am more a 
dabbler in software engineering than a mathist --- it seems to me that it would 
be possible to replace the cost function in the planning algorithm with *only* 
the negative-log-probability, which I think is the key point of the paper.

That is, the algorithm can be run in a mode where it *ignores* whatever fee 
scheme forwarding nodes desire.
(@rene: correct me if I am wrong?)

I propose that the algorithm be modified as such, that is, it *ignore* the fee 
scheme.

However, the algorithm then gets an extra step after getting a payment plan 
(i.e. how to route multiple sub-payments).
It looks over the payment plan and, if the fees involved are beyond some 
user-defined limit (with, say, a default of 0.5% of the total amount, as per 
the C-Lightning `pay` default), identifies the highest-fee channels in the 
payment plan.
Then, it can rerun the flow algorithm, telling it to *disallow* the identified 
highest-fee channels, for as long as the total fees exceed the fee budget.
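The retry loop can be sketched as follows; `toy_solve` is a made-up stand-in for the min-cost-flow solver, and the channel list and fees are invented:

```python
# Sketch of the post-processing step: run the (stand-in) flow solver,
# and while the plan exceeds the fee budget, ban the costliest channel
# and re-solve.

def total_fee(plan):
    return sum(ch["fee_msat"] for ch in plan)

def plan_within_budget(solve, amount_msat, budget_msat):
    banned = set()
    while True:
        plan = solve(amount_msat, banned)
        if not plan:
            raise RuntimeError("no plan within budget")
        if total_fee(plan) <= budget_msat:
            return plan
        worst = max(plan, key=lambda ch: ch["fee_msat"])
        banned.add(worst["id"])  # disallow the highest-fee channel, retry

def toy_solve(amount_msat, banned):
    channels = [
        {"id": "A", "fee_msat": 900},
        {"id": "B", "fee_msat": 100},
        {"id": "C", "fee_msat": 200},
    ]
    usable = [c for c in channels if c["id"] not in banned]
    # pretend the cheapest two usable channels carry the flow
    return sorted(usable, key=lambda c: c["fee_msat"])[:2]

plan = plan_within_budget(toy_solve, 10_000_000, budget_msat=250)
print([ch["id"] for ch in plan])  # ['B'], after banning C and then A
```

This also makes the DoS concern raised in the quoted reply concrete: each round re-runs the solver while banning only one channel, so an adversary who can keep total fees just above the budget forces many expensive re-solves.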

It seems to me that this modification of the algorithm may be sufficient to be 
resilient against any and all future fee scheme we may decide for Lightning.

This still achieves "optimality" in the sense of the paper, in a way similar to 
what is suggested in the paper.
The paper suggests to basically ignore gossiped channels with non-zero basefee.
The approach I suggest allows us to *start* without ignoring non-zero basefee, 
but to slowly degrade our view of the network by disallowing high-fee (whether 
high basefee or high propfee) channels.

Regards,
ZmnSCPxj


Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread ZmnSCPxj via Lightning-dev
Good morning lisa, aj, et al.,


> The result is that micropayments have a different payment regime than 
> “non-micropayments” (which may still incentivize almost irrational behavior), 
> but at least there’s no *loss* felt by node operators for handling/supporting 
> low value payments. 10k micropayments is worth 10sats.
>
> It’s also simple to implement and seems rather obvious in retrospect.


It seems simple to implement for *forwarders*, but I think complicates the 
algorithm described by Pickhardt and Richter?

On the other hand, the algorithm is targeted towards "large" payments, so 
perhaps the Pickhardt-Richter payment algo can be forced to have some minimum 
split size, and payments below this minimum size are just sent as single 
payments (on the assumption that such micropayments are so small that the 
probability of failure is negligible).
That is, just have the `pay` command branch based on the payment size, if it is 
below the minimum size, just use the old try-and-try-until-you-die algo, 
otherwise use a variant on the Pickhardt-Richter algo that respects this 
minimum payment size.
This somewhat implies a minimum on the possible feerate, which we could say is 
1 ppm, maybe.

So for example, the minimum size could be 1,000,000msat, or 1,000sat.
If the payment is much larger than that, use the Pickhardt-Richter algorithm 
with zerobasefee.
If payment is lower than that threshold, just do not split and do 
try-and-try-until-you-die.
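The branch described above is simple to express; both callees below are placeholders for the real algorithms named in the text:

```python
# Sketch of the size-based branch between the two payment algorithms.
MIN_SPLIT_MSAT = 1_000_000  # the 1,000 sat threshold suggested above

def single_path_retry(amount_msat):
    # placeholder for the old try-and-try-until-you-die loop
    return ("single", amount_msat)

def pickhardt_richter_mpp(amount_msat, min_part_msat):
    # placeholder for the min-cost-flow splitter; no part may go below
    # min_part_msat (part count capped at 4 here just for illustration)
    parts = max(1, min(amount_msat // min_part_msat, 4))
    return ("mpp", [amount_msat // parts] * parts)

def pay(amount_msat):
    if amount_msat < MIN_SPLIT_MSAT:
        return single_path_retry(amount_msat)
    return pickhardt_richter_mpp(amount_msat, MIN_SPLIT_MSAT)

print(pay(500_000))    # ('single', 500000)
print(pay(5_000_000))  # ('mpp', [1250000, 1250000, 1250000, 1250000])
```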

Regards,
ZmnSCPxj


Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread ZmnSCPxj via Lightning-dev
Good morning aj, et al.

> Hey *,
>
> There's been discussions on twitter and elsewhere advocating for
> setting the BOLT#7 fee_base_msat value [0] to zero. I'm just writing
> this to summarise my understanding in a place that's able to easily be
> referenced later.
>
> Setting the base fee to zero has a couple of benefits:
>
> -   it means you only have one value to optimise when trying to collect
> the most fees, and one-dimensional optimisation problems are
> obviously easier to write code for than two-dimensional optimisation
> problems

Indeed, this is a good point regarding this.


> -   when finding a route, if all the fees on all the channels are
> proportional only, you'll never have to worry about paying more fees
> just as a result of splitting a payment; that makes routing easier
> (see [1])

If we neglect roundoff errors.

On the other hand, roundoff errors involved are <1msat per split, so it 
probably will not matter to most people.
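A quick demonstration of the roundoff in question, with made-up amounts:

```python
# With zero base fee and floor rounding to integer msat, splitting can
# change the total fee by less than 1 msat per part.
PPM = 1  # pure proportional fee of 1 ppm, zero base fee

def fee_msat(amount_msat):
    return amount_msat * PPM // 1_000_000  # floor to integer msat

whole = fee_msat(1_999_998)      # 1 msat for the unsplit payment
split = fee_msat(999_999) * 2    # 0 msat: each half rounds down
print(whole - split)             # 1: the split underpays by <1 msat/part
```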

> So what's the cost? The cost is that there's no longer a fixed minimum
> fee -- so if you try sending a 1sat payment you'll pay 0.1% of the fee
> to send a 1000sat payment, and there may be fixed costs that you have
> in routing payments that you'd like to be compensated for (eg, the
> computational work to update channel state, the bandwith to forward the
> tx, or the opportunity cost for not being able to accept another htlc if
> you've hit your max htlcs per channel limit).
>
> But there's no need to explicitly separate those costs the way we do
> now; instead of charging 1sat base fee and 0.02% proportional fee,
> you can instead just set the 0.02% proportional fee and have a minimum
> payment size of 5000 sats (htlc_minimum_msat=5e6, ~$2), since 0.02%
> of that is 1sat. Nobody will be asking you to route without offering a
> fee of at least 1sat, but all the optimisation steps are easier.

Should this minimum amount a node is willing to forward be part of gossip, and 
how does this affect routing algorithms?

> You could go a step further, and have the node side accept smaller
> payments despite the htlc minimum setting: eg, accept a 3000 sat payment
> provided it pays the same fee that a 5000 sat payment would have. That is,
> treat the setting as minimum_fee=1sat, rather than minimum_amount=5000sat;
> so the advertised value is just calculated from the real settings,
> and that nodes that want to send very small values despite having to
> pay high rates can just invert the calculation.

I like this idea, as I think it matches more what the incentives are.
But it requires a change in gossip and in routing algorithms, and more 
importantly it requires routing algorithms to support two different fee schemes 
(base + proportional vs min + proportional).

On the other hand, this still is a two-dimensional optimization algorithm, with 
`minimum_fee` and `proportional_fee_millionths` as the two dimensions.
So maybe just have a single proportional-fee mechanism...
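A sketch of the two schemes side by side, using the numbers from the quoted example (0.02% = 200 ppm with a 1 sat floor):

```python
# The two fee schemes a routing algorithm would have to support:
# today's base+proportional versus the proposed min+proportional.

def fee_base_prop(amt_msat, base_msat, ppm):
    return base_msat + amt_msat * ppm // 1_000_000

def fee_min_prop(amt_msat, min_fee_msat, ppm):
    return max(min_fee_msat, amt_msat * ppm // 1_000_000)

print(fee_min_prop(5_000_000, 1_000, 200))   # 1000: 5000 sat pays 1 sat
print(fee_min_prop(3_000_000, 1_000, 200))   # 1000: the 1 sat floor applies
print(fee_min_prop(50_000_000, 1_000, 200))  # 10000: pure proportional
print(fee_base_prop(5_000_000, 1_000, 200))  # 2000: base is additive instead
```

Note how under min+proportional the floor stops binding once the payment is large enough, whereas a base fee is paid on every payment and every split part.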

>
> I think something like this approach also makes sense when your channel
> becomes overloaded; eg if you have x HTLC slots available, and y channel
> capacity available, setting a minimum payment size of something like
> y/2/x**2 allows you to accept small payments (good for the network)
> when your channel is not busy, but reserves the last slots for larger
> payments so that you don't end up missing out on profits because you
> ran out of capacity due to low value spam.
>
> Two other aspects related to this:
>
> At present, I think all the fixed costs are also incurred even when
> a htlc fails, so until we have some way of charging failing txs for
> incurring those costs, it seems a bit backwards to penalise successful
> txs who at least pay a proportional fee for the same thing. Until we've
> got a way of handling that, having zero base fee seems at least fair.

Yes, the dreaded mechanism against payment lockup, which as far as I understand 
has a lot of thought already sunk into it without any widely-accepted solution, 
sigh.


Regards,
ZmnSCPxj



Re: [Lightning-dev] Zero Fee Routing

2021-08-14 Thread ZmnSCPxj via Lightning-dev
Good morning Daki,

While certainly a very imaginative approach to the problem, do note that there 
is a substantive difference between running a Bitcoin fullnode and running a 
Lightning forwarding node.

Namely:

* A Bitcoin fullnode does not risk any lockup of its funds just to run.
* A Lightning forwarding node *does* risk having its funds locked and 
unavailable.

While a payment is being forwarded, the funds involved in the forwarding are 
unavailable for use by the putative owner of the funds.
Instead, the funds are kept in an HTLC until the payment forwarding is resolved 
in either success or failure.

Having your funds locked and unavailable to you, even transiently, is only 
tenable if you get something in return, e.g. a return on investment.

Of course, you can also counter-argue that in practice, the amounts and 
timeframes are so short that any return on investment would be ridiculously 
minuscule, which is why in practice most forwarding nodes will earn 0 or even 
negative net income.
On the other hand, larger hubs with significant liquidity invested into them 
would still have total amounts and timeframes that *are* substantial enough 
that it would make sense for them to charge *some* fee.
And discovering that feerate is the point of this exercise.

On the other other hand, this may very well be "trade secret" territory, in 
which case there is no point in me asking about this topic anyway.

Regards,
ZmnSCPxj


[Lightning-dev] Algorithm For Channel Fee Settings

2021-08-14 Thread ZmnSCPxj via Lightning-dev
Introduction


As use of Lightning grows, we should consider delegating more and more tasks to 
programs.
Classically, decisions on *where* to make channels have been given to 
"autopilot" programs, for example.
However, there remain other tasks that would still appreciate delegation to a 
policy pre-encoded into a program.

This frees up higher-level sentience from network-level concerns.
Then, higher-level sentience can decide on higher-level concerns, such as how 
much money to put in a Lightning node, how to safely replicate channel state, 
and preparing plans to recover in case of disaster (long downtime, channel 
state storage failure, and so on).

One such task would be setting channel fees for forwarding nodes (which all 
nodes should be, unpublished channels delenda est).
This write-up presents this problem and invites others to consider the problem 
of creating a heuristic that allows channel fee settings to be left to a 
program.

Price Theory


The algorithm I will present here is based on the theory that there is a single 
global maximum for price, i.e. a price point at which the earnings per unit 
time are highest.

The logic is that as the price increases, fewer payments get routed over that 
channel, and thus even though you earn more per payment, you end up getting 
fewer payments.
Conversely, as the price decreases, there are more payments, but because the 
price you impose is lower, you earn less per payment.

The above assumption seems sound to me, given what we know about payment 
pathfinding algorithms (since we implemented them).
Most such algorithms have a bias towards shorter paths, where "shorter" is 
generally defined (at least partially) as having lower fees.
Thus, we expect that, all other things being fixed (channel topology, channel 
sizes, etc.), if we lower the price on a channel (i.e. lower its channel fees) 
we will get more forwarding requests via that channel.
And if we raise the price on the channel, we will get fewer forwarding requests 
via that channel.

It is actually immaterial for this assumption exactly *what* the relationship 
is between price and number of forwards as long as it is true that pricier = 
fewer forwards.
Whether this relationship is linear, quadratic, exponential, or has an 
asymptote, as long as higher prices imply fewer payments, we can assume that 
there is a global maximum for the price.

For example, suppose there is a linear relationship.
At a price of 1, we get 10 payments for a particular unit of time, at a price 
of 2 we get 9 payments, and so on, until at a price of 10 we get 1 payment.
Then the maximum earnings are achieved at a price of 5 per payment (times 6 
payments) or a price of 6 per payment (times 5 payments), each earning 30.
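The linear toy model above can be checked in a couple of lines:

```python
# At price p (1..10) we get 11 - p payments, so revenue is p * (11 - p).
revenue = {p: p * (11 - p) for p in range(1, 11)}
best = max(revenue, key=revenue.get)
print(best, revenue[best])  # 5 30  (price 6 ties at 30)
```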

If the relationship is nonlinear, then it is not so straightforward, but in any 
case, there is still some price setting that is optimal.
At a price of 0 you earn nothing no matter how many free riders forward over 
your node.
On the higher end, there is some price that is so high that nobody will route 
through you and you also earn nothing (and raising the price higher will not 
change this result).

Thus, there (should) exist some middle ground where the price is such that it 
earns you the most amount of fees per unit time.
The question is how to find it!

Sampling


Given the assumption that there exists some global maximum, obviously a simple 
solution like Hill Climbing would work.

The next issue is that mathematical optimization techniques (like Hill 
Climbing) need to somehow query the "function" that is being optimized.
In our case, the function is "fees earned per unit time".
We do not know exactly what this function looks like, and it is quite possible 
that, given each node has a more-or-less "unique" position on the network, the 
function would vary for each channel of each node.

Thus, our only real hope is to *actually* set our fees to whatever the 
algorithm is querying, then take some time to measure the *actual* earnings in 
a certain specific amount of time, and *then* return the result to the 
algorithm.
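The measure-then-step loop might be sketched like this; `measure` stands in for "set the fee, wait a measurement window, report earnings", and the toy curve (peaking at 500 ppm) is entirely made up:

```python
# Sketch of a hill climb over the channel fee setting.

def measure(fee_ppm):
    # Toy earnings-per-window curve; in reality this means actually
    # setting the fee and waiting out a measurement window.
    return -abs(fee_ppm - 500)

def hill_climb(fee_ppm, step=100, rounds=20):
    best = measure(fee_ppm)
    for _ in range(rounds):
        for candidate in (fee_ppm + step, max(0, fee_ppm - step)):
            earned = measure(candidate)
            if earned > best:
                fee_ppm, best = candidate, earned
                break
        else:
            step //= 2       # no neighbour improves: refine the step
            if step == 0:
                break        # anytime: current setting is our answer
    return fee_ppm

print(hill_climb(100))  # converges to 500 on this toy curve
```

The anytime property mentioned below is visible here: interrupting the loop at any round still yields the best fee setting measured so far.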

Worse, the topology of the network changes all the time, thus the actual 
function being optimized is also changing over time!
Hill Climbing works well here since it is an anytime algorithm, meaning it can 
be interrupted at any time and it will return *some* result which, if not 
optimal, is at least statistically better than a random dart-throw.
A change in the topology is effectively an "interruption" of whatever 
optimization algorithm we use, since any partial results it has may be 
invalidated due to the topology change.

In particular, if we are the only public node to a particular receiver, then we 
have a monopoly on payments going to that node.
If another node opens a channel to that receiver, however, suddenly our maximum 
changes (probably moving lower) and our optimization algorithm then needs to 
adapt to the new situation.
Others closing channels may 

Re: [Lightning-dev] Turbo channels spec?

2021-08-13 Thread ZmnSCPxj via Lightning-dev
Good morning Rusty,

> ZmnSCPxj zmnsc...@protonmail.com writes:
>
> > Mostly nitpick on terminology below, but I think text substantially like 
> > the above should exist in some kind of "rationale" section in the BOLT, so 
> > ---
> > In light of dual-funding we should avoid "funder" and "fundee" in favor of 
> > "initiator" and "acceptor".
>
> Yes, Lisa has a patch for this in her spec PR :)
>
> > So what matters for the above rationale is the "sender" of an HTLC and the 
> > "receiver" of an HTLC, not really who is acceptor or initiator.
> >
> > -   Risks for HTLC sender is that the channel never confirms, but it 
> > probably ignores the risk because it can close onchain (annoying, and 
> > fee-heavy, but not loss of funds caused by peer).
> > -   Risks for HTLC receiver is that the channel never confirms, so HTLC 
> > must not be routed out to others or resolved locally if the receiver 
> > already knows the preimage, UNLESS the HTLC receiver has some other reason 
> > to trust the peer.
>
> This misses an important case: even with the dual-funding protocol,
> single-sided funding is more common.
>
> So:
>
> -   if your peer hasn't contributed funds:
> -   You are in control, channel is safe (modulo your own conf issues)

Hmm.

In single-funding, if you sent out an HTLC and got the preimage, your peer now 
has funds in the channel.
If you do this before the channel confirms, then the peer can send to you, and 
you can accept it safely without concern since your peer cannot block the 
channel confirmation.

So yes, it seems correct.


Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via Lightning-dev
Good morning all,

Thinking a little more, if the dust limit is intended to help keep UTXO sets 
down, then on the LN side, this could be achieved as well by using channel 
factories (including "one-shot" factories which do not allow changing the 
topology of the subgraph inside the factory, but have the advantage of not 
requiring either `SIGHASH_NOINPUT` or an extra CSV constraint that is difficult 
to weigh in routing algorithms), where multiple channels are backed by a single 
UTXO.

Of course, with channel factories there is now a greater set of participants 
who will have differing opinions on appropriate feerate.

So I suppose one can argue that the dust limit becomes less material to higher 
layers, than actual onchain feerates.


Regards,
ZmnSCPxj


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via Lightning-dev
Good morning Billy, et al.,

> For sure, CT can be done with computational soundness. The advantage of 
> unhidden amounts (as with current bitcoin) is that you get unconditional 
> soundness.

My understanding is that it should be possible to have unconditional soundness 
with the use of the El-Gamal commitment scheme, am I wrong?

Alternately, one possible softforkable design would be for Bitcoin to maintain 
a non-CT block (the current scheme) and a separately-committed CT block (i.e. 
similar to how SegWit has a "separate" "block"/Merkle tree that includes 
witnesses).
When transferring funds from the legacy non-CT block, on the legacy block you 
put it into a "burn" transaction that magically causes the same amount to be 
created (with a trivial/publicly known salt) in the CT block.
Then to move from the CT block back to legacy non-CT you would match one of 
those "burn" TXOs and spend it, with a proof that the amount you are removing 
from the CT block is exactly the same value as the "burn" TXO you are now 
spending.

(for additional privacy, the values of the "burn" TXOs might be made into some 
fixed single allowed value, so that transfers passing through the CT portion 
would have fewer identifying features)

The "burn" TXOs would be some trivial anyone-can-spend, such as `<point> <0> 
OP_EQUAL OP_NOT` with `<point>` being what is used in the CT to cover the 
value, and knowledge of the scalar behind this point would allow the CT output 
to be spent (assuming something very much like MimbleWimble is used; otherwise 
it could be the hash of some P2WSH or similar analogue on the CT side).

Basically, this is "CT as a 'sidechainlike' that every fullnode runs".

In the legacy non-CT block, the total amount of funds that are in all CT 
outputs is known (it would be the sum total of all the "burn" TXOs) and will 
have a known upper limit, that cannot be higher than the supply limit of the 
legacy non-CT block, i.e. 21 million BTC.
At the same time, *individual* CT-block TXOs cannot have their values known; 
what is learnable is only how many BTC are in all CT block TXOs, which should 
be sufficient privacy if there are a large enough number of users of the CT 
block.
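The invariant --- the sum of hidden CT values checkable against the known burn total --- follows from the additive homomorphism of Pedersen-style commitments. A toy numeric illustration (small modulus, made-up generators and blinding factors; not a real commitment scheme, since the discrete log between the toy generators is knowable):

```python
# Toy Pedersen-style commitment: commit(v, r) = g^v * h^r mod p.
P = 2**127 - 1   # toy prime modulus (NOT cryptographically sized)
G, H = 3, 7      # toy generators; in practice h's dlog w.r.t. g is unknown

def commit(value, blind):
    return pow(G, value, P) * pow(H, blind, P) % P

# Two hidden CT outputs whose values must add up to the burn TXO total.
c1 = commit(60_000, blind=123_456)
c2 = commit(40_000, blind=654_321)

burn_total = 100_000
# Homomorphism: the product of commitments commits to the sum of values,
# so the total can be verified without revealing 60_000 or 40_000.
print(c1 * c2 % P == commit(burn_total, 123_456 + 654_321))  # True
```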

This allows the CT block to use an unconditional privacy and computational 
soundness scheme, and if somehow the computational soundness is broken then the 
first one to break it would be able to steal all the CT coins, but not *all* 
Bitcoin coins, as there would not be enough "burn" TXOs on the legacy non-CT 
blockchain.

This may be sufficient for practical privacy.


On the other hand, I think the dust limit still makes sense to keep for now, 
though.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Revisiting Link-level payment splitting via intermediary rendezvous nodes

2021-08-09 Thread ZmnSCPxj via Lightning-dev
Good morning Gijs,

> To circumvent the saturated channel D-C, C creates the route C->A->D,
> where node D supports rendezvous routing. C can create a sub-route
> from D to E and treat it as a partial route-to-payee under rendezvous
> routing by using the hop payload found when unwrapping the onion of
> the original route B->C->D->E . Because every node in a route is able
> to create the ephemeral key for the next node by tweaking it with its
> own shared secret, C is also able to create the ephemeral key for D.
> C passes that ephemeral key into the payload of the rendezvous node D
> in the alternate route, signaling to D it needs to swap out the key.
> D, upon unwrapping its onion sees that it needs to swap ephemeral
> keys, does so, and goes on with the route to E.

I confess that I only have a very vague understanding of this bit (Christian 
understands the math involved better than me), but my vague understanding 
suggests this is correct.

However, a practical problem here is that the incoming HTLC B->C has some time 
limit.
Presumably, the payer B allocates the time limits for the individual HTLCs 
D->E, C->D, and B->C so that each is the minimum advertised by the 
corresponding node.

Thus, if C decides to route via C->A->D, it has to ask C->A and/or A->D to give 
a lower time limit, or else risk its own time limit (i.e. its outgoing C->A has 
a time limit that is too near to the incoming B->C time limit, or even possibly 
exceeds it).

Thus:

* For JIT rebalancing, the risk is that the payment ends up failing at some 
later point, and C paid for a rebalance without actually benefiting from it.
* For the link-level splitting, the risk is that C has to give a larger time 
limit for the reroute via A, risking its own time limit if something has to 
drop onchain.
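The timelock squeeze on C can be made concrete with made-up block heights and deltas:

```python
# Toy numbers illustrating why the detour C->A->D eats C's safety margin.
incoming_expiry = 700_070    # height at which the B->C HTLC times out
final_expiry    = 700_000    # expiry required on the hop toward E
delta_C, delta_A = 40, 40    # advertised cltv_expiry_delta of C and A

# Direct C->D: only C's delta must fit under the incoming expiry.
direct_ok = incoming_expiry >= final_expiry + delta_C
# Detour C->A->D: both C's and A's deltas must fit in the same window.
detour_ok = incoming_expiry >= final_expiry + delta_C + delta_A
print(direct_ok, detour_ok)  # True False
```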

The risks are more extreme with link-level splitting --- it is far less likely 
to occur (the risk only really happens if things have to drop onchain, but if 
things remain offchain and everyone just acts in good faith, then nothing bad 
happens) but the consequences are more dire (C potentially loses the entire 
payment amount, whereas with JIT rebalancing, C only risks the fee to 
rebalance).

If C has some special assurance with D and/or A that reduces its risk of 
dropping onchain (maybe some contract or agreement?) then it may be useful to 
continue this development, as it trades off one kind of risk for another.



Regards,
ZmnSCPxj


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-27 Thread ZmnSCPxj via Lightning-dev
Good morning aj, and list,


> > I don't think you can reliably hide that you forgot some state?

Thinking a little more --- *why* do we need to hide that we forgot some state?

The reason is that if your peer learns you forgot state, the peer can pass off 
obsolete state onchain, thereby stealing funds from you before you can recover 
your data.

But if some completely random node that is ***not*** your peer and has ***no*** 
channels with you is holding your memento, then there is no need to worry --- 
even if you tell them "actually I forgot my state" they have no obsolete state 
to hurt you with.

Suppose that nodes provide a "will remember for you" flag in the feature bits.

Now, your node can then use a secret distance measurement --- for example, it 
could take the keyed hash (with your node privkey as key) of every "will 
remember for you"-advertising node, then look for the hash that is numerically 
lowest.

Locating the "nearest" node, your node then contacts that node and asks them to 
remember our memento.
Now, your node should not be using its "normal" pubkey for this, instead, it 
should generate a "throwaway" keypair derived from its privkey plus the pubkey 
of the selected node.
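The secret distance measurement is easy to sketch: HMAC each candidate node id with your private key and pick the numerically lowest digest. The node ids and key below are fake placeholders:

```python
# Keyed-hash "secret distance": deterministic given the privkey, so the
# same nodes are re-found after amnesia, but unguessable to outsiders.
import hashlib
import hmac

def nearest(privkey: bytes, node_ids, k=1):
    def distance(node_id: bytes):
        digest = hmac.new(privkey, node_id, hashlib.sha256).digest()
        return int.from_bytes(digest, "big")
    return sorted(node_ids, key=distance)[:k]

nodes = [bytes([i]) * 33 for i in range(1, 101)]  # fake 33-byte pubkeys
before = nearest(b"my-node-secret", nodes, k=1)
# After amnesia, the same key re-derives the same ordering:
print(before == nearest(b"my-node-secret", nodes, k=1))  # True
```

Recovery then iterates over `nearest(..., k=100)` in order, as described below, tolerating some churn in the gossip map.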

--

After your node hits its head and becomes amnesiac, you provide it with the 
privkey (which can be represented as some words).

The node then re-downloads gossip map, and uses the same secret distance 
measurement to find, say, the 100 "nearest" nodes with the "will remember for 
you" feature.
Assuming the gossip map has not changed too much since before the amnesia 
event, then it is likely that the previously selected node is still in the 
nearest 100 nodes.

Your node will then iterate over the nearest 100 nodes, starting with the 
nearest, and re-derive the "throwaway" keypair and ask each node if it holds a 
memento for that pubkey.

Since your node contacts them using a throwaway keypair that is not 
correlatable with your normal node pubkey, even if they are conspiring with 
your channel peers, the "will remember for you" node cannot identify that your 
node has suffered amnesia, it only knows that *some* node *somewhere* suffered 
amnesia.

This implies as well that the selected node can even be your peer, and it will 
still not be sure whether the amnesiac node is you or somebody else entirely.

--

Of course, the anonymous nature of the client requesting data storage is a 
problem, as this feature is now vulnerable to abuse and DDoS.
As a spam prevention, such a "will remember for you" node can use any number of 
techniques developed for anonymously paying to watchtowers, which have a 
similar "need to pay for anonymous storage to prevent DoS" problem.


Regards,
ZmnSCPxj


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-27 Thread ZmnSCPxj via Lightning-dev
Good morning cdecker, aj, and list,

> . In addition we can store the same data with multiple peers, ensuring that 
> as long as one node is behaving we're good.


Depending on size limits of the stored data, it may be possible to use some 
kind of erasure coding so that only k of n peers need to be honest for 
recovery to succeed.
I suspect peers would prefer to limit the amount of data they have to store if 
they offer this feature, so the use of erasure coding seems to be a good idea.
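As an illustration of the k-of-n recovery property, here is a toy Shamir-style polynomial split over a prime field; a real deployment would more likely use a proper erasure code such as Reed-Solomon over the raw backup bytes, and all names and parameters here are illustrative:

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field the shares live in

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it,
    while fewer than k reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the degree k-1 polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 from any k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Each of the n peers stores one share, and any k honest peers suffice to reconstruct; the per-peer storage grows sublinearly in n compared to full replication.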

However, since the peer does not know the data you are storing, this detail 
can be known only by the node saving its data with the peer, so it need not be 
put into the specification.

> I don't think you can reliably hide that you forgot some state? If you
> _did_ forget your state, you'll have forgotten their latest bundle too,
> and it seems like there's at least a 50/50 chance you'd have to send
> them their bundle before they sent you yours?

This objection seems quite correct.

Perhaps it is possible to (mis)use Barrier Escrow: 
https://suredbits.com/payment-points-implementing-barrier-escrows/
After all, what is needed is a way for both sides to simultaneously provide 
the data (or admit they lost it) before the other can withhold it.

1.  Both agree on some Barrier Escrow and generate some temporary points.
2.  Both sides invoke `barrier-commit` on the Barrier Escrow, receiving E.
3.  Both sides *additionally* encrypt the bundle using an asymmetric 
encryption, which can be decrypted only by anyone who knows `e` such that `E = 
e * G`.
4.  Both sides exchange the asymmetrically-encrypted bundles.
5.  Once a side receives the asymmetrically-encrypted bundle from the other 
side, they invoke `barrier-reveal` using their temporary scalar from #1.
6.  When they get `e` from `barrier-reveal` they can decrypt the asymmetric 
encryption layer from the bundle they receive, then proceed to validate and 
decrypt the actual encrypted bundle.
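A toy sketch of the asymmetric layer in steps 3-6: anyone can encrypt knowing only the public E, but decryption requires the scalar e that the Barrier Escrow reveals only after both sides have committed. ElGamal over the integers mod a prime stands in here for the real secp256k1 construction; the parameters are NOT secure and all names are illustrative:

```python
import hashlib, secrets

# Toy multiplicative-group ElGamal; the real construction would encrypt to
# the secp256k1 point E = e * G.  Parameters here are NOT secure.
P = 2**127 - 1
G = 5

def barrier_keygen():
    """The Barrier Escrow's secret scalar e and public E (step 2)."""
    e = secrets.randbelow(P - 2) + 1
    return e, pow(G, e, P)

def encrypt_to_E(E: int, bundle: bytes):
    """Step 3: anyone can encrypt knowing only the public E."""
    r = secrets.randbelow(P - 2) + 1
    c1 = pow(G, r, P)
    key = hashlib.sha256(pow(E, r, P).to_bytes(16, "big")).digest()
    assert len(bundle) <= len(key)  # toy: single-block XOR only
    return c1, bytes(b ^ k for b, k in zip(bundle, key))

def decrypt_with_e(e: int, c1: int, ct: bytes) -> bytes:
    """Step 6: decryption needs the scalar e from barrier-reveal."""
    key = hashlib.sha256(pow(c1, e, P).to_bytes(16, "big")).digest()
    return bytes(b ^ k for b, k in zip(ct, key))

# Flow: both sides learn E, exchange encrypted bundles (step 4), and only
# after both have sent does the escrow reveal e (step 5).
e, E = barrier_keygen()
bundle_for_bob = b"bob latest channel state"
c1, ct = encrypt_to_E(E, bundle_for_bob)        # Alice sends this blindly
assert decrypt_with_e(e, c1, ct) == bundle_for_bob
```

The point of the construction is the asymmetry: sending the encrypted bundle commits you to handing over the data, yet neither side can read what it received until the barrier guarantees both bundles are already in flight.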

If Alice is amnesiac, she just provides a random vector as the "asymmetric 
encrypted bundle for Bob".

Suppose Bob wants to check if Alice is amnesiac.
Bob cannot delay its send of the Alice-bundle, due to the Barrier Escrow 
ensuring that both parties have sent *some* bundle.
Thus, even if Bob later learns that Alice has gone amnesiac, by the time Bob 
knows that, he has already handed over the memento by which Alice can recover.

Bob can send a bogus bundle to Alice, and then if it also receives a bogus 
bundle, it knows Alice is amnesiac (and it might be a good time to steal from 
Alice).
ALTERNATELY Alice is *also* trying to probe Bob, so Alice sent a bogus bundle 
itself.
In that case, Bob could attempt to steal, but runs the risk that Alice was 
*also* another prober who is not actually amnesiac.
(Not sure if that is valid game theory, though)


On the other hand, Barrier Escrow services have to be paid for their service 
(else why would they run), and if you have not connected to your peer then you 
cannot pay for barrier escrow services over Lightning.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Turbo channels spec?

2021-07-04 Thread ZmnSCPxj via Lightning-dev
Good morning Rusty et al,

> Matt Corallo lf-li...@mattcorallo.com writes:
>
> > Thanks!
> > On 6/29/21 01:34, Rusty Russell wrote:
> >
> > > Hi all!
> > >
> > >  John Carvalo recently pointed out that not every implementation
> > >
> > >
> > > accepts zero-conf channels, but they are useful. Roasbeef also recently
> > > noted that they're not spec'd.
> > > How do you all do it? Here's a strawman proposal:
> > >
> > > 1.  Assign a new feature bit "I accept zeroconf channels".
> > > 2.  If both negotiate this, you can send update_add_htlc (etc) before
> > > funding_locked without the peer getting upset.
> > >
> >
> > Does it make sense to negotiate this per-direction in the channel init 
> > message(s)? There's a pretty different threat
> > model between someone spending a dual-funded or push_msat balance vs 
> > someone spending a classic channel-funding balance.
>
> channel_types fixes this :)
>
> Until then, I'd say keep it simple. I would think that c-lightning will
> implement the "don't route from non-locked-in channels" and always
> advertize this option. That means we're always offering zero-conf
> channels, but that seems harmless:
>
> -   Risks for funder is that channel never confirms, but it probably ignores
> the risk because it can close onchain (annoying, and fee-heavy, but not
> loss of funds caused by peer).
>
> -   Risks for fundee (or DF channels where peer contributes any funds) is
> that funder doublespends, so HTLCs must not be routed out to others
> (unless you have other reason to trust peer).

Mostly nitpick on terminology below, but I think text substantially like the 
above should exist in some kind of "rationale" section in the BOLT, so ---

In light of dual-funding we should avoid "funder" and "fundee" in favor of 
"initiator" and "acceptor".
However, we should also note that the substantial feature of turbo channels is 
***not*** in channel opening per se, it is the *confirmation* of the channel.

Turbo channels come into play once the opening ritual has completed and the 
funding tx has been broadcast; at that point it does not matter which peer is 
"initiator" and which is "acceptor", since the opening ritual has already 
completed.
Both peers, at the end of the opening ritual, have a valid commitment tx and 
both can double-spend the funds they put in to back out of the channel.

So what matters for the above rationale is the "sender" of an HTLC and the 
"receiver" of an HTLC, not really who is acceptor or initiator.

* Risks for HTLC sender is that the channel never confirms, but it probably 
ignores the risk because it can close onchain (annoying, and fee-heavy, but not 
loss of funds caused by peer).
* Risks for HTLC receiver is that the channel never confirms, so HTLC must not 
be routed out to others or resolved locally if the receiver already knows the 
preimage, UNLESS the HTLC receiver has some *other* reason to trust the peer.


Basically:

* "funder" and "fundee" are legacy terms that predate dual-funding and are 
deprecated.
  In modern terms, the "funder" is the "initiator" and the "fundee" is the 
"acceptor", and in a legacy pre-dual-funding channel, only the initiator can 
start putting funds into the channel.
* "initiator" is the peer that starts the opening process, and pays for the 
opening fees.
* "acceptor" is the peer that is contacted by the initiator and decides whether 
to allow the creation of a channel with the initiator, and pays no opening fees.
* "HTLC sender" is any peer that, *after* the channel opening completes (but 
possibly before it is locked in), offers an HTLC to the peer.
* "HTLC receiver" is any peer that, *after* the channel opening completes (but 
possibly before it is locked in), is the one who accepts the HTLC from the HTLC 
sender.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Lightning Mints

2021-06-29 Thread ZmnSCPxj via Lightning-dev
Good morning elsirion,


> Hi ZmnSCPxj,
>
> let me chime in here, I've been working on federated mint for quite some time 
> now but only recently began talking about it more publicly.
>
> > WabiSabi "simply" replaces blinded signatures with blinded credentials.
> > Blinded signatures are fairly low-bandwidth: either you have a blinded 
> > signature, or you do not.
> > Credentials, however, also include a blinded homomorphic value.
>
> This is a very intriguing idea Casey actually mentioned to me (at least I 
> think it's about the same problem):
>
> In traditional mints we use tokens of the same denomination. For efficiency 
> reasons amount tiers are introduced, reducing the anonymity set per tier. If 
> we had blind signatures not only on random tokens but they also committed to 
> a separately blinded amount with a range proof that would allow one big 
> anonymity set over all tokens instead. Such tokens could then be combined 
> similarly to Liquid transaction inputs.
>
> I think the concept is very interesting, but for now I see a few obstacles:
>
> -   WabiSabi uses KVACs which afaik do not allow client side validation. 
> While I can't say if it will be a big problem it makes detecting certain 
> failure scenarios harder imo.
> -   The KVAC scheme referred to in WabiSabi [1] is not a threshold scheme 
> afaik, undermining the central premise of federated mints. If I got that 
> wrong this would be awesome!
> -   Building such an enhanced threshold blind signature scheme is more 
> complex and probably needs further research. A naive implementation would be 
> more interactive which in a federated context means waiting for consensus 
> rounds for each round trip which is unappealing.

Well, WabiSabi is effectively n-of-n signing, as the produced transaction has 
to be signed by all clients of the coordinator, so threshold federated 
signatures are not necessary.
So yes, the use of credentials seems not possible for the federated mints 
project.
(note: I am not a mathist and have no idea what the hell credentials are, I 
only know how to use them)

>
> So while I'm very sympathetic to the idea and want to pursue it in the 
> future, threshold blind signatures seem like the more efficient way to get to 
> a working implementation with still adequate performance and privacy in time.
>
>
> > Now, let us consider the "nodelets" idea as well.
> > The "nodelets" system allows for a coordinator (which can be a separate 
> > entity, or, for the reduction of needed entities, any nodelet of the node).
>
> I didn't know of nodelets so far and went back to your 2019 post about it. It 
> seems that blind multisig or threshold credentials (the idea seems to be 
> m-of-m, so doesn't nee a general threshold scheme I guess) would improve the 
> privacy of the system. I think the nodelets idea is very interesting for 
> technical people that would otherwise be priced out of running a LN node in a 
> high-fee future. But the complexity of the protocol and online requirements 
> seem to make it suboptimal for non-technical, disinterested users. While 
> automating a lot of the complexity away is possible (big fan of clboss) it's 
> also a lot of work and probably will take a while if ever to get to a point 
> where the experience is plug-and-play as most non-technical users have come 
> to expect.
>
> In that sense both systems just have different target audiences. I think of 
> federated mints mostly as a replacement for Banks and other custodial 
> services that are used for their superior UX. It is fundamentally a 
> compromise. E.g. Bitcoin Beach currently uses Galoy [2], a centralized hosted 
> LN wallet without much privacy. I don't see a future where everyone there is 
> technical enough to run their own node or nodelet client reliably enough. But 
> if we can allow community driven federations with privacy built-in we can 
> mitigate most of the risks inherent to custodial wallets imo.

From my PoV, any "bank-replacement" that is inherently custodial will 
eventually become a bank, with all the problems that implies.

It is helpful to remember that banks have, historically, been federations: they 
are typically implemented as corporations, which are basically a bunch of 
people pooling their money and skill together to start a business.
Thus, I argue that banks already *are* federations that take custody of your 
money and manage it for you.

To my mind, any system that is a federation that takes custody of user money 
*will* face the same social, political, and economic forces that the legacy 
banking system faced in the past.
You puny humans simply do not evolve as fast as you think, you know --- your 
instincts still demand that your body stock up on fat and salt and sugar in a 
modern era where such things are available in such abundance that it is 
killing you, so I imagine that a modern federated system (like Liquid or your 
federated mints) will face similar forces as past successful 

Re: [Lightning-dev] Interactive tx construction and UTXO privacy, some thoughts

2021-06-29 Thread ZmnSCPxj via Lightning-dev

Good morning lisa,

> A dedicated attacker could probably figure out your UTXO set, but that's not
> much different from the current system; the only difference is the span of 
> time
> it takes them to figure it out.
>
> ## Things We've Done to Counter This:
> I had the pleasure of finally meeting Nadav of SuredBits and DLC fame in Miami
> a few weeks ago. The DLC team has adopted a version of the interactive
> transaction protocol for their own purposes. Nadav pointed out that the
> protocol we landed on for lightning interactive construction transactions
> is *quite* interactive; the DLC version modified it to use batching to
> transmit the input/output sets (the interactive protocol is one-by-one).
>
> The rationale for doing the addition of inputs and outputs in a non-batched
> fashion is that this allows for you to interleave UTXOs from a variety
> of sources, for example multiple channel opens in the same tx. With the 
> current
> protocol, you can initiate a dual-funded open with many peers at the same 
> time,
> each of which may contribute UTXOs and outputs for their own respective
> channel opens or UTXO consolidations etc.
>
> This gives us the real possibility of doing multiparty coinjoins on lightning.
> In fact, this is currently possible with c-lightning *today* using
> the multifundchannel command (h/t to ZmnSCPjx for the original framework
> for multifund opens).
>
> As written, the interactive transaction protocol is exceedingly flexible.
> We traded off succinctness for some plausible deniablity wrt
> any UTXOs you send to any peer -- are they yours or are they
> some third party's? How to tell?
>
> I think it's interesting to point out that "succinctness" in rounds
> of required interaction is typically a *highly* desirable trait for
> any cryptographic protocol. The establishment of a lightning channel 
> relationship,
> however, isn't a cryptographic signature. A lightning channel, by its very
> nature, is typically a highly interactive relationship between two peers.
> Increasing the rounds of messaging required to establish the channel doesn't
> change the overall interactivity profile of a channel's operation, thus
> adding rounds of comms to channel open is generally a no-op in terms of
> performance/uptime requirements of a node's operations.

Possibly, a difference between the DLC use-case and the Lightning use-case is 
that the DLC use-case has a definite deadline when the contract expires, 
whereas the Lightning use-case has no definite end termination for the channel.

In addition, DLC requires transmitting significant amounts of data, measurable 
in megabytes, whereas Lightning transmits little 32-byte blobs (well, not 
really: mostly 1366-byte onion-wrapped packets, but still much tinier than the 
megabytes of adaptor signatures in DLCs).
So the DLC setup stage getting hit with the optimization hammer (as collateral 
damage from the optimization hammer being used on the actual core DLC) seems 
like a reasonable thing to happen in the DLC use-case.

Finally, there is a big enough directory of Lightning nodes that you can 
reasonably pick up this directory in lots of places, pick some random number of 
them to channel to, and then make channels to them, and making them in a single 
tx is always a good thing.
Whereas I imagine that the DLC use-cases (even in the future) serve a more 
limited userbase (and with payment points on Lightning I believe the smaller 
and shorter-term DLCs can run on top of Lightning), so the opportunity to 
aggregate may be much rarer in DLCs than in Lightning channel opens.


> ## How important is UTXO privacy on lightning?
> Obviously important. But given that the real transactions happen inside
> of channels, invisibly, and that your public channels really truly
> are public via the gossip protocol the much more important "thing" in the
> lightning arena isn't your UTXO privacy so much as *not* associating your
> identity with your node.

I broadly agree here --- published channels trade off onchain privacy (marking 
"hey this UTXO is totally owned by these two peeps!") but gain offchain privacy 
("no, that is not my payment, somebody else asked me to forward it, promise!")

>
> ## Does Taproot fix this?
> I'm not up to date enough on the progress of Taproot scripts, however,
> assuming the current requirement that every routing node is able to 
> independently
> verify the opening output script via the signatures provided
> in the channel_announcement, it seems reasonable that on-chain transactions
> will still be assignable to a node given gossip data. (Purely on-chain 
> analysis
> will be stymied, however.)

Hmm wait Taproot fixes this?
We can drop/reinterpret `short_channel_id` post-Taproot?

Regards,
ZmnSCPxj


Re: [Lightning-dev] complementing lightning with a discreet physical delivery protocol?

2021-06-28 Thread ZmnSCPxj via Lightning-dev
Good morning VzxPLnHqr,

> Dear ZmnSCPxj,
>
> Thank you for your reply. I see how the vending machine can be mapped into 
> the Courier role. There are some questions around how to extend this to a 
> multi-courier situation, but let us solve that problem later and further 
> discuss the nuances of hodl-invoices. One thing that seems currently 
> difficult to ascertain right now is how much "time preference liquidity" (for 
> lack of a better term) there exists in the network.
>
> For example, let's say the Merchant is an on-demand furniture maker, and it 
> takes 90 days for her to produce the item. The protocol we are considering, 
> in its current naive form as contemplated in this email thread, stacks up a 
> sequence of hodl invoices which, at least in theory, tries to align the 
> incentives of Merchant, Courier, Purchaser. It could, of course, go even 
> further up/down the entire supply chain too.
>
> However, since the payments themselves are routed through the lightning 
> network, and, in the example here, stuck in this hodling-pattern for up to 90 
> days, then any routing nodes along the way may feel they are not being fairly 
> compensated for having their funds locked up for such time.
>
> Do I correctly understand that moving to payment points[1] instead of HTLCs 
> can help reduce concern here by allowing each node along the route to earn a 
> fee irrespective of whether the hodl invoice is settled or canceled?

This does not need payment points.

*However*, this hodl-payment-problem has multiple proposed solutions (none of 
which *require* payment points, but should still be compatible with them), none 
of which have gained much support, since all of them kind of suck in one way or 
another.

Payment points do allow for certain escrows to be created in a low-trust way, 
but they still involve holding PTLCs for long periods of time, and locking up 
funds until the escrow conditions are satisfied.
Note that one may consider the hodl-invoice as a sort of escrow, and thus the 
generalized escrow services that are proposed in that series of blog posts is a 
strict superset of that, but they still involve PTLCs being unclaimed for long 
periods of time.

>
> Outside of doing a large-scale test on mainnet (which could quickly become 
> expensive and cause some unsuspecting node operators displeasure), is there 
> any way right now for a node operator to determine the likelihood of, for 
> example, being able to even route (e.g. receive payment but not yet be able 
> to settle) a 90-day hodl invoice?

Zero, since I think most implementations impose a maximum limit on the 
timelocks of HTLCs passing through them, which is far lower than 90 days.
Though I should probably go check the code, haha.

--

I think the issue here is the just-in-time nature of the Merchant in your 
example.

Consider an ahead-of-time furniture maker instead.
The furniture maker can, like the vending machine example, simply consign 
furniture to a Vendor.
The Vendor simply releases the already-built furniture conditional on receiving 
the payment secret (i.e. proof-of-payment) of an invoice issued by the Merchant.

The payment secret could then use the payment point homomorphism.
The Vendor acts as a Retailer, buying furniture at reduced prices, in bulk, 
from the Merchant.
Because it buys in bulk, the Retailer+Merchant can probably afford to use a 
hodl PTLC directly onchain, instead of over Lightning, since they make fewer 
but larger transactions.

On the other hand, this reduces flexibility --- end consumers can only choose 
among pre-built furniture, and cannot customize.
Buying the flexibility that just-in-time gives requires us to pay with some 
deep thinking over here in Lightning-land on how to implement this without 
sucking.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Lightning Mints

2021-06-28 Thread ZmnSCPxj via Lightning-dev
Good morning again Casey,

>
> I believe a major failing of Chaumian mints is that they are, at their core, 
> inherently custodial.
> The mint issues blinded minted coins in exchange for people handing over 
> other resources to their custody.
> While the mint itself cannot identify who owns how much, it can outright deny 
> all its clients access to their funds and then run off with the money to 
> places unknown.
>
> However, do note that both Wasabi and WabiSabi are extensions of Chaumian 
> mints.
> These avoid the custodiality issue of Chaumian mints by operating the mint as 
> a temporary entity, whose output is then counterchecked by the users of the 
> Wasabi/WabiSabi scheme.
>
> ...
>
> In any case, you might also be interested in the "nodelets" I described some 
> years ago.
> This link has a presentation where I introduce nodelets towards the end, 
> sorry but most of the beginning is about LN pathfinding (which is currently a 
> non-problem since nobody makes published channels anymore).
> This allows multiple users to implement a single node without a central 
> custodian, and may allow for similar flexibility of liquidity if there are 
> enough users, but every action requires all users to have keys online.


Thinking more, it helps to consider how Wasabi and WabiSabi are constructed.

In Wasabi, there exists a coordinator, which is a server that works as a 
temporary Chaumian mint.
Clients of the coordinator register some UTXOs of some common value to the mint 
(indicating any change outputs if the UTXO total value exceeds the fixed common 
value).
Then the coordinator issues blind signatures, which serve as tokens in a 
Chaumian mint.
Then users re-connect via a different pseudonym, unblind signatures and reclaim 
the funds, indicating a target output address.
The coordinator then creates a single transaction that consumes the registered 
input UTXOs and the indicated outputs.

As a *final* step, the clients then check that the produced transaction is 
correct.
This final step prevents the coordinator from absconding with the funds.
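As an illustration of the blind-signature step in the flow above, here is the classic Chaum RSA blind signature with toy parameters; note this is the textbook construction for illustration, not necessarily the exact scheme Wasabi implements (which I believe is Schnorr-based):

```python
import hashlib, secrets
from math import gcd

# Toy RSA key; a real mint would use a >= 2048-bit modulus.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client-side: blind the token hash with a random factor r.
token = b"random serial for one coin"
m = H(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Mint-side: signs the blinded message without ever learning m.
sig_blinded = pow(blinded, d, n)

# Client-side: unblind.  sig = m^d mod n, a valid signature on m.
sig = (sig_blinded * pow(r, -1, n)) % n
assert pow(sig, e, n) == m  # anyone can verify against the mint's (n, e)
```

Because the mint only ever sees `blinded` and the client later redeems under a fresh pseudonym with `(token, sig)`, the mint cannot link issuance to redemption, which is exactly the property the coordinator relies on.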

WabiSabi "simply" replaces blinded signatures with blinded credentials.
Blinded signatures are fairly low-bandwidth: either you have a blinded 
signature, or you do not.
Credentials, however, also include a blinded homomorphic value.
On issuing, the issuer can ensure that a particular value is encoded even 
though the credential is blinded by the receiver, and the issuer can ensure 
that multiple credentials can be presented which sum up to a newly issued 
credential, with the values being correctly added.
Thus, I think for a modern Chaumian mint, you should really consider the 
credentials scheme used by WabiSabi.
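A toy sketch of the homomorphic-value idea, using a Pedersen-style commitment in the multiplicative group mod a prime as a stand-in for the actual KVAC construction (the group, generators, and missing range proofs make this illustrative only):

```python
import secrets

# Toy Pedersen-style commitment in the multiplicative group mod a prime;
# the real KVAC construction works over secp256k1 and adds range proofs.
P = 2**127 - 1
G, H = 5, 7  # two generators with (assumed) unknown discrete-log relation

def commit(value: int, blind: int) -> int:
    return (pow(G, value, P) * pow(H, blind, P)) % P

# Additive homomorphism: presenting credentials for v1 and v2 lets the
# issuer check they sum to a newly issued credential for v1 + v2,
# without learning v1, v2, or the blinding factors individually.
v1, r1 = 40_000, secrets.randbelow(P - 1)
v2, r2 = 60_000, secrets.randbelow(P - 1)
assert (commit(v1, r1) * commit(v2, r2)) % P == commit(v1 + v2, r1 + r2)
```

The issuer verifies the product of the presented commitments against the new commitment, so values are conserved across reissuance while remaining hidden.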

--

Now, let us consider the "nodelets" idea as well.
The "nodelets" system allows for a coordinator (which can be a separate entity, 
or, for the reduction of needed entities, any nodelet of the node).

This coordinator in nodelets is simply a way to implement a broadcast medium 
among all the nodelets in a node.
However, the same coordinator in a nodelets system can also serve as a 
coordinator in something very much like a WabiSabi system.

So it seems to me that this can be implemented in a way that is non-custodial, 
as long as we can actually implement nodelets.
(which "just" requires that we use a multiparticipant signing scheme for 
Schnorr signatures that is composable.)

Basically, just as in the WabiSabi case, nodelets can connect to the 
coordinator, register some of the values they have in channels, then get back 
some equivalent credentials.
Then the nodelets can "self-mix" their coins, then get back a new set of 
values, then request that some part of their value be sent over the network.
Then, before signing off on the new state of any channel, the actual nodelets 
check the new state that the coordinator wants them to sign off on, thus 
preventing custodial risk in the same manner as Wasabi/WabiSabi does.

Thus, each state update of the channel is created by a Chaumian mint (using 
credentials instead of blinded signatures), then the state update is "ratified" 
by the actual nodelets, preventing the Chaumian mint from stealing the funds; 
new states are simply not signed (and presumably one or more of the nodelets 
will drop the previous valid state onchain, which allows them to recover funds 
without loss) until all nodelets can confirm that the coordinator has not 
stolen anything.


Nodelets can use pseudonyms in between states of channels, to reduce the 
ability of the coordinator, or the other nodelets, to guess who owns how much.


An issue however is how to handle forwarding.
Forwarding is an important privacy technique.
If you are a forwarder, you can plausibly claim that an outgoing HTLC is not 
from your own funds, but instead was a forward.
By supporting forwarding, the nodelets composing the node can reduce the 
ability of non-participants to determine the payments of the node.

Handling forwarding in such a system 

Re: [Lightning-dev] Lightning Mints

2021-06-27 Thread ZmnSCPxj via Lightning-dev
Good morning Casey,

I believe a major failing of Chaumian mints is that they are, at their core, 
inherently custodial.
The mint issues blinded minted coins in exchange for people handing over other 
resources to their custody.
While the mint itself cannot identify who owns how much, it can outright deny 
all its clients access to their funds and then run off with the money to places 
unknown.

However, do note that both Wasabi and WabiSabi are extensions of Chaumian mints.
These avoid the custodiality issue of Chaumian mints by operating the mint as a 
temporary entity, whose output is then counterchecked by the users of the 
Wasabi/WabiSabi scheme.


I think a lot of problems are very easy if we go with custodiality; it is the 
variant rule of non-custodiality that makes this field interesting in the first 
place.

Fidelity bonds are hard since the bond has to be at least the value of the 
funds being managed (otherwise the mint can still sacrifice the bond to run off 
with the funds being managed; it would still earn more than what it lost).
That means, at best, locking up to twice the managed amount (i.e. locking it in 
a channel, *and* locking a similar amount in a separate fidelity bond).


In any case, you might also be interested in the "nodelets" I described some 
years ago.
This link has a presentation where I introduce nodelets towards the end, sorry 
but most of the beginning is about LN pathfinding (which is currently a 
non-problem since nobody makes published channels anymore).
This allows multiple users to implement a single node without a central 
custodian, and may allow for similar flexibility of liquidity if there are 
enough users, but every action requires all users to have keys online.


> I originally became interested in blind mints while thinking about Lightning
> Network wallet usability issues. When Lightning works, it is fantastic, but
> keeping a node running and managing a wallet present a number of challenges,
> such as channel unavailability due to force closes, the unpredictability of 
> the
> on-chain fee environment, the complexity of channel backup, and the involved
> and often subtle need to manage liquidity.
>
> All of these problems *are* tractable for a skilled node operator, but may not
> be soluble in the context of self-hosted wallets operated by non-technical
> users, hereafter *normies*. If this is the case, then normies may have no
> choice but to use hosted Lightning wallets, compromising their privacy and
> exposing them to custodial risk.

One of my projects is CLBOSS, which manages a C-Lightning node for you.

* https://github.com/ZmnSCPxj/clboss
* https://lists.ozlabs.org/pipermail/c-lightning/2020-October/000197.html

The target is that at some point, the algorithms and heuristics developed for 
CLBOSS will be widespread and it will be trivial for a "normie" to run a 
well-managed forwarding node; they just have to keep it powered on 100% of the 
time, which should be easy: people keep their refrigerators powered on 100% of 
the time, after all.

I am actually kind of puzzled that nobody else seems to be building node 
managers, everyone just focuses on peer selection, but ignores stuff like 
channel feerates, closing heuristics, liquidity monitoring, etc.

Regards,
ZmnSCPxj




Re: [Lightning-dev] complementing lightning with a discreet physical delivery protocol?

2021-06-26 Thread ZmnSCPxj via Lightning-dev
Good morning VzxPLnHqr,

This certainly seems workable.

I first encountered similar ideas when people were asking about how to 
implement a vending machine with Lightning, with the vending machine being 
offline and not having any keys.

The idea was to have the vending machine record pregenerated invoices with 
their hashes.
Then a separate online machine (disconnected from the vending machine) would 
operate a LN node and receive the payment, releasing the preimage.
The payer would then enter the preimage into the vending machine, which would 
validate it and release the item being vended.
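A minimal sketch of that keyless validation step, assuming the machine stores SHA256 payment hashes alongside the vended items (all names are illustrative):

```python
import hashlib, secrets

# Offline vending machine: stores only pregenerated (payment_hash -> item)
# pairs; the matching invoices live on the separate online LN node.
preimage = secrets.token_bytes(32)
payment_hash = hashlib.sha256(preimage).digest()
stock = {payment_hash: "cola"}

def vend(entered_preimage: bytes):
    """Validate the preimage the payer typed in; no signing keys needed."""
    h = hashlib.sha256(entered_preimage).digest()
    return stock.pop(h, None)  # vend at most once per invoice

assert vend(preimage) == "cola"
assert vend(preimage) is None  # the same preimage cannot be reused
```

The machine never holds keys or funds; compromising it physically yields only a list of hashes and whatever inventory is inside.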

Under your framework, the vending machine operates as the Courier, except it 
has a fixed geographical location and the Paul goes to the Courier (vending 
machine) to get their item.

Regards,
ZmnSCPxj

> Dear Lightning-dev,
>
> I would like to share some initial research and ask for some feedback. 
> https://github.com/VzxPLnHqr/discreet-physical-delivery-protocol is a 
> repository to gather some thoughts around how it might be possible to utilize 
> some of the current features (hodl invoices), and/or forthcoming features 
> (payment points? dlcs?) of lightning to create a robust, reasonably private, 
> and incentive-compatible network for physical delivery of items.
>
> There has been mention of using hodl invoices for atomic item delivery[1]. 
> However, I seem to remember reading that, essentially, hodl invoices (e.g. 
> invoices which may not settle for quite some time, if ever) are also the 
> primary culprit for some attacks on the network?
>
> Does lightning in a post-taproot world solve any of these issues?
>
> There is some motivation given in the readme for why such a protocol may be 
> desirable, but as quick refresher for those reading who may not be familiar 
> with how lightning and hodl invoices can be used for atomic package delivery:
>
> 0. Merchant Mary operates an e-commerce website and Purchaser Paul would like 
> to buy something and have it delivered. For initial simplicity, assume that 
> both Paul and Mary have a relationship with Charlie, an independent Courier 
> (e.g. neither Paul nor Mary is playing the role of Charlie, but Charlie knows 
> the geographical locations of both).
>
> 1. During checkout, Paul generates a preimage and sends the hash of the
> preimage to Mary. Mary creates a hodl invoice invoice0 with the hash. The
> amount of the invoice includes the cost of shipment as quoted to Mary by
> Courier Charlie. Paul pays invoice0, but Mary cannot yet settle it because
> the preimage is still unknown to Mary.
>
> 2. Merchant Mary now sends hash to Charlie and Charlie creates another hodl 
> invoice invoice1 (for the delivery costs). Mary pays it and gives the 
> physical package to Charlie.
>
> 3. Charlie now has the package and delivers it to Paul.
>
> 4. Upon delivery, Paul gives preimage to Charlie who now can use it to settle 
> his outstanding invoice (invoice1) with Mary, thereby revealing preimage to 
> Mary who then settles her outstanding invoice0 with Paul.
>
> Taking the above, allowing it to be multi-hop (multiple Couriers) and 
> blinding the physical location from one hop to the next, is non-trivial but 
> seems doable. Some of you may have thought a lot more about these types of 
> protocols (digital-meets-physical-world) already, so please chime in!
>
> Warm Regards,
> -VzxPLnHqr
>
> [1] https://wiki.ion.radar.tech/tech/research/hodl-invoice (though, I think 
> first proposed by Joost?)
> --
> Sent with Tutanota, the secure & ad-free mailbox:
> https://tutanota.com


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-06-19 Thread ZmnSCPxj via Lightning-dev
Good morning LL,

> Hi Z,
>
> Thanks again for getting to the bottom of this. I think we are on the same 
> page except for one clarification:
>
> On Tue, 8 Jun 2021 at 12:37, ZmnSCPxj  wrote:
>  
>
> > Thus, in our model, we have the property that Bob can always recover all 
> > signatures sent by Alice, even if Carol is corrupted by Alice --- we model 
> > the signature-deletion attack as impossible, by assumption.
> > (This is a strengthening of security assumptions, thus a weakening of the 
> > security of the scheme --- if Bob does not take the above mitigations, Bob 
> > ***is*** vulnerable to a signature-deletion attack and might have ***all*** 
> > funds in hostage).
>
> Only where ***all*** refers to the funds in the fast forward -- funds 
> consolidated into the channel balance are not at risk (modulo enforcing 
> correct state on chain).
> I think it should be easy to get a stream of signatures so they can't be 
> deleted. The user "Bob" is creating and sending the invoices so they can 
> always demand and save the signatures from "Carol the Cashier" that 
> correspond to each payment so the "deletion attack" will be thwarted.

To be clear, what I meant here with "***all***" was that risk of funds hostage 
exists if Bob has absolutely no mitigation against this (i.e. makes no copies 
of signatures for itself).
What you suggest is a mitigation that *does* prevent this "***all***" case 
(i.e. Bob makes its own copies of signatures, it does not delegate signature 
storage to Carol the Cashier).

Thus, the model outright assumes that Bob makes *some* mitigation to prevent 
signature deletion, as without *any* mitigation the model is insecure.

Otherwise I think we are mostly in agreement here.

--

Another thing I have been mulling over is an older proposal where some 
Lightning service-provider (who takes on the role of Alice in our description) 
simply generates the invoice+preimage itself.
Then when Bob comes online, the Lightning service-provider (Alice  hmmm A 
Lightning Service-provider hence "ALS" or "Alice") simply forwards the payment 
to Bob at that time.

i.e.

* Bob makes an invoice+preimage for a third party to pay.
* Bob hands over the preimage to Alice.
* Bob goes offline.
* Sender sends to Alice, who has the preimage and can claim it.
* Bob goes online.
* Alice sends the payment.

Note that Alice is trusted to honestly forward the payment to Bob when Bob 
comes online in this older proposal.

However, in this older proposal, any funds "already" in the Bob-side of the 
channel are safe --- Alice cannot steal them.
Alice can only steal funds it has not forwarded to Bob yet.
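The trust boundary of this older proposal can be sketched as a toy model (all names and numbers are illustrative, not a real implementation): Alice's dishonesty exposure is exactly the claimed-but-unforwarded amount, never the existing Bob-side balance.

```python
# Toy model of the older proposal: Alice (the service provider) holds
# preimages for Bob and claims incoming payments on his behalf.
class Channel:
    def __init__(self, alice_sat: int, bob_sat: int):
        self.alice = alice_sat   # Alice-side balance
        self.bob = bob_sat       # Bob-side balance: Alice cannot touch this
        self.pending = 0         # claimed but not yet forwarded: trusted to Alice

    def receive_for_bob(self, amount: int):
        # Alice claims the incoming HTLC with the preimage Bob handed over.
        self.pending += amount

    def forward_to_bob(self):
        # Honest Alice forwards once Bob comes online.
        self.bob += self.pending
        self.pending = 0

    def alice_absconds(self) -> int:
        # A dishonest Alice can only keep the unforwarded amount.
        stolen, self.pending = self.pending, 0
        return stolen

ch = Channel(alice_sat=100_000, bob_sat=50_000)
ch.receive_for_bob(10_000)
stolen = ch.alice_absconds()   # only the pending payment is at risk
```

The invariant the model demonstrates: `ch.bob` never decreases through any action available to Alice.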

Now, let us return to the detailed FF scheme (with separate Carol the Cashier 
and Kelly the Keykeeper).

If Carol was operated by Alice, it would have similar security to the above 
older proposal.

* Carol does not have any Bob privkeys or the entire set of revocation keys, so 
cannot steal channel funds outright.
* We assume that Bob has mitigations against signature deletion (i.e. Bob has 
backups of signatures).
* We have already established in previous discussion, Alice+Carol cannot 
cooperate to steal funds "already" in the channel --- they can only steal funds 
from payments that Bob has not come online to claim yet.

However, the older proposal has significant advantage:

* It is simpler and can reuse existing code and tests.

Indeed, C-Lightning plus plugins can implement the older proposal today, with a 
fairly small amount of new code (only for the plugin --- no changes to 
C-Lightning necessary, just add a plugin, thus a significantly lower testing 
burden).
Contrast this with FF, which requires a new state machine and protocol to 
implement, with greatly increased potential for CVEs.

What FF *does* have as an advantage is that Carol the Cashier can be operated 
by **Bob** rather than Alice.

For example, Bob can have a single-board computer that runs Carol-software, and 
the mobile phone of Bob is simply a remote control for the Carol-software.
The advantage here is that the single-board computer, which is 100% online, 
does *not* have any privkeys.
This is in contrast with current Lightning implementations, where such a 
"remote control" scheme would need privkeys to be kept on the single-board 
computer, at risk of exfiltration.
Bob can have a separate Kelly-hardware that it connects to its mobile phone 
whenever Bob needs to send out money, thus greatly reducing the risk 
experienced by Bob.
The previous proposal cannot do this as honest resolution of the payment is 
simply immediately trusted to Alice A Lightning Service-provider.

Regards,
ZmnSCPxj



Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-06-07 Thread ZmnSCPxj via Lightning-dev
Good morning LL,

> Hi Z,
>
> I agree with your analysis. This is how I pictured eltoo fast forwards 
> working as well.
>
> Another interesting thing about this idea is that it could allow a new type 
> of custodial LN provider where the custodian is only in charge of receiving 
> payments to the channel but cannot spend them.
> With the non-custodial LN phone apps there is this annoying UX where you have 
> to keep the app open to receive a payment (because the pre-image is on my 
> phone).
> I wouldn't mind letting the provider handle receiving payments on my behalf.
> Of course this means they would be able to steal the money in the FF state 
> but this is a big reduction in risk from a full custodial solution.
> In other words, you should be able to get the seamless experience of a fully 
> custodial wallet while only giving them custody of small amounts of coins for 
> a short time.

Yes, that is indeed a good advantage and thank you for this.

> On Wed, 2 Jun 2021 at 13:30, ZmnSCPxj  wrote:
>
> > Another advantage here is that unlike the Poon-Dryja Fast Forwards, we do 
> > *not* build up a long chain of HTLC txes.
> > At the worst case, we have an old update tx that is superseded by a later 
> > update tx instead, thus the overhead is expected to be at most 1 extra 
> > update tx no matter how many HTLCs are offered while Bob has its privkey 
> > offline.
>
> I don't think you need to build up a long chain of HTLC txs for the 
> Poon-Dryja fast forward in the "desync" approach. Each one just replaces the 
> other.

Thinking about it more, this seems correct.

Referring back to the diagrams:

+------------------------------+    +--------------------------------+
|    Commitment tx 1 of A      |    |            HTLC Tx             |
+------+-----------------------+    +---------+----------------------+
|      | (A[0] && B)           |--->| SigA[0] | (A[0] && B)          |
|      |  || (A && CSV)        |    |         |  || (A && CSV)       |
| SigB +-----------------------+    |         +----------------------+
|      |  B                    |    |         |  A->B                |
|      |                       |    |         |  HTLC                |
+------+-----------------------+    +---------+----------------------+

+------------------------------+
|   Commitment tx *2* of B     |
+------+-----------------------+
| SigA |  A                    |
|      |                       |
|      +-----------------------+
|      | (A && B[0])           |
|      |  || (B && CSV)        |
|      +-----------------------+
|      |  A->B                 |
|      |  HTLC                 |
+------+-----------------------+


On the *next* HTLC, Alice just gives a signature `SigA` for commitment tx *3*, 
and a replacement HTLC Tx signature `SigA[0]`, to the cashier of Bob.

+------------------------------+    +--------------------------------+
|    Commitment tx 1 of A      |    |          HTLC Tx *2*           |
+------+-----------------------+    +---------+----------------------+
|      | (A[0] && B)           |--->| SigA[0] | (A[0] && B)          |
|      |  || (A && CSV)        |    |         |  || (A && CSV)       |
| SigB +-----------------------+    |         +----------------------+
|      |  B                    |    |         |  A->B                |
|      |                       |    |         |  HTLC                |
+------+-----------------------+    |         +----------------------+
                                    |         |  A->B                |
                                    |         |  HTLC                |
                                    +---------+----------------------+

+------------------------------+
|   Commitment tx *3* of B     |
+------+-----------------------+
| SigA |  A                    |
|      |                       |
|      +-----------------------+
|      | (A && B[0])           |
|      |  || (B && CSV)        |
|      +-----------------------+
|      |  A->B                 |
|      |  HTLC                 |
|      +-----------------------+
|      |  A->B                 |
|      |  HTLC                 |
+------+-----------------------+

This is safe, because:

* If Alice publishes Commitment tx 1 of Alice, Bob has every incentive to 
publish the HTLC Tx *2*, not the older HTLC Tx.
  * Alice cannot force publication of the *previous* HTLC Tx, because Alice 
has no `B` key with which to sign it.
* If Bob wants to close unilaterally, it has every incentive to publish the 
latest Commitment tx ### of B, because that has the most HTLCs going to Bob.

Against the above we should note that the "HTLCs" we are talking about in 
Poon-Dryja are not simple contracts but are instead revocable HTLCs, which 
means additional dependent transactions.
So I think the above *is* doable, but *does* require additional complexity and 
care, in that every A->B HTLC has to have some signatures exchanged as well 
(and, as these are HTLCs "in flight", the relevant keys can be kept on the 
cashier).

--


I also have another subtlety to bring up with the above.

In particular, for a set of *simplex* Alice-to-Bob 

Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-06-01 Thread ZmnSCPxj via Lightning-dev
Good morning again LL,

So I started thinking as well, about Decker-Russell-Osuntokun and the Fast 
Forwards technique, as well as your "desync" idea.

And it seems to me that we can also adapt a variant of this idea with 
Decker-Russell-Osuntokun, with the advantage of **not** requiring the 
additional encumbrance at the outputs.

The technique is that we allow Bob the receiver to have possession of *later* 
states while Alice the sender only possesses an old state.

Alice sends the signatures for a new state (update + settlement) whenever it 
offers an HTLC to Bob, and whenever Bob fulfills the HTLC.
However, Alice *does not* wait for Bob to return signatures for a new state.
So Alice remains stuck with the old state.

* Suppose Alice wants to close the channel unilaterally.
  * Alice broadcasts the old update tx.
  * Bob has an incentive to bring its latest state onchain (bringing its 
privkey online and signing the latest update).
* All the payments are in the Alice->Bob direction.
  * Even though Alice broadcasted an old state, it does not lose money since 
Decker-Russell-Osuntokun is non-punitive.
* Bob can bring its privkey online to close the channel unilaterally with the 
latest state.

So it looks to me that Decker-Russell-Osuntokun similarly does **not** require 
the additional encumbrance at the "main" outputs.
We simply allow the sender to remain at an older state.

So let us give a concrete example.

* Alice and Bob start at state 1: Alice = 50, Bob = 50.
* Alice offers a HTLC of value 10.
  * Alice: state 1: Alice = 50, Bob = 50
  * Bob: state 2: Alice = 40, Bob = 50, A->B HTLC = 10
* Bob fulfills, so Alice sends a new state, which transfers the A->B HTLC value 
to Bob.
  * Alice: state 1: Alice = 50, Bob = 50
  * Bob: state 3: Alice = 40, Bob = 60
* Bob brings its privkey online because it wants to send out via Alice (a 
forwarder).
  It offers an HTLC B->A of value 20.
  * Alice: state 4: Alice = 40, Bob = 40, B->A HTLC = 20
  * Bob: state 3: Alice = 40, Bob = 60

Because publishing old state is "safe" under Decker-Russell-Osuntokun, it is 
fine for one participant to have *only* an older state!
And we can arrange the incentives so that the one with the latest state is the 
one who is most incentivized to publish the latest state.
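The "one side may lag" bookkeeping above can be sketched in a few lines (illustrative only; no real transactions are built, and the balances are the ones from the worked example):

```python
# Toy bookkeeping for the desynchronized-state idea: Alice may hold an
# older state than Bob, and a unilateral close settles on whichever
# state number is highest (eltoo-style update txes replace older ones).

def unilateral_close(alice_state, bob_state):
    """Each state is (number, balances). The later update tx wins,
    since either side can rebind its newer update tx on top."""
    return max(alice_state, bob_state, key=lambda s: s[0])

alice = (1, {"alice": 50, "bob": 50})   # Alice is stuck on state 1
bob = (3, {"alice": 40, "bob": 60})     # Bob holds the latest state

num, balances = unilateral_close(alice, bob)
```

Even if Alice broadcasts state 1, Bob (who is most incentivized, holding the larger balance) brings state 3 onchain and the channel settles there.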

(We should probably change the subject of this thread BTW)

Another advantage here is that unlike the Poon-Dryja Fast Forwards, we do *not* 
build up a long chain of HTLC txes.
At the worst case, we have an old update tx that is superseded by a later 
update tx instead, thus the overhead is expected to be at most 1 extra update 
tx no matter how many HTLCs are offered while Bob has its privkey offline.



Regards,
ZmnSCPxj


Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-06-01 Thread ZmnSCPxj via Lightning-dev
Good morning LL,

> Hi Z,
>
> I just went through the presentation which made your thinking very clear. 
> Thanks.
> I will not be able to match this effort so please bear with me as I try and 
> explain my own thinking.
> I don't see why fast forwards (FF) need "symmetrically encumbered outputs"? 
> To me the protocol should be asymmetric.
>
> This is what I think happens when offering a FF HTLC:
> 1. The offerer creates and signs a new commitment tx as normal with the HTLC 
> except it has the same revocation key as the last one.
> 2. The offerer patches their balance output by sending a tx spending from it 
> to a new tx which has the HTLC output and their balance output (unencumbered).
>
> The HTLC is now irrevocably committed from the perspective of the receiver.
> Now the receiver presents the pre-image and the offerer then:
>
> 1. The offerer creates and signs a new commitment tx as normal consolidating 
> the funds into the receiver's balance output except once again it has the 
> same revocation key as the last one.
> 2. The offerer patches their commitment tx balance output again by sending a 
> tx spending from it to a new tx which splits into the receiver's balance (the 
> value of the claimed HTLC) and the offerer's remaining balance.
>
> You can repeat the above process without having the receiver's revocation 
> keys online or their commitment tx keys for many HTLCs while the offerer 
> still has balance towards the receiver.
> The on-chain cost is about the same as before for an uncooperative close.
>
> Once the receiver brings their keys on line they can consolidate the FF state 
> into a new commitment txs on both sides and with a proper revocation operate 
> the channel normally. What has been the receiver up until now can finally 
> send funds.
>
> Am I missing something?

Basically, you are taking advantage of the fact that we **actually** let the 
commitments on both sides be desynchronized with each other.
I tend to elide this fact when explaining, and also avoid it when planning 
protocols.

However I believe the idea is correct.

Anyway, as I understood it:

So suppose we start with this pair of commitment txes:

+------------------------------+
|    Commitment tx 1 of A      |
+------+-----------------------+
|      | (A[0] && B)           |
|      |  || (A && CSV)        |
| SigB +-----------------------+
|      |  B                    |
|      |                       |
+------+-----------------------+

+------------------------------+
|    Commitment tx 1 of B      |
+------+-----------------------+
| SigA |  A                    |
|      |                       |
|      +-----------------------+
|      | (A && B[0])           |
|      |  || (B && CSV)        |
+------+-----------------------+

Now Alice wants to offer an HTLC to Bob.
What Alice does is:

* **Retain** the Alice commitment tx and create an HTLC tx spending from it.
* **Advance** the Bob commitment tx (and letting it desync from the Alice 
commitment tx), adding the same HTLC.

So after Alice sends its new signatures, our offchain txes are:

+------------------------------+    +--------------------------------+
|    Commitment tx 1 of A      |    |            HTLC Tx             |
+------+-----------------------+    +---------+----------------------+
|      | (A[0] && B)           |--->| SigA[0] | (A[0] && B)          |
|      |  || (A && CSV)        |    |         |  || (A && CSV)       |
| SigB +-----------------------+    |         +----------------------+
|      |  B                    |    |         |  A->B                |
|      |                       |    |         |  HTLC                |
+------+-----------------------+    +---------+----------------------+

+------------------------------+
|   Commitment tx *2* of B     |
+------+-----------------------+
| SigA |  A                    |
|      |                       |
|      +-----------------------+
|      | (A && B[1])           |
|      |  || (B && CSV)        |
|      +-----------------------+
|      |  A->B                 |
|      |  HTLC                 |
+------+-----------------------+

Notes:

* Again, for Alice to offer the HTLC to Bob, only Alice has to make new 
signatures (`SigA[0]` and `SigA` for commitment tx *2* of Bob).
* If Alice goes offline and Bob decides to drop onchain, Bob only needs to sign 
the new commitment tx.
  We can argue that dropping channels *should* be rare enough that requiring 
privkeys for this operation is not a burden.
* If Alice decides to drop the channel onchain, Bob only needs to bring in the 
privkey for the HTLC tx, which can (at a lower, detailed level) be different 
from the "main" B privkey.

So yes, I think it seems workable without symmetric encumbrance.

Regards.
ZmnSCPxj


Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-05-31 Thread ZmnSCPxj via Lightning-dev
Good morning list,

> It may be difficult to understand this, so maybe I will make a convenient 
> presentation of some sort.

As promised: https://zmnscpxj.github.io/offchain/2021-06-fast-forwards.odp

The presentation is intended to be seen by semi-technical and technical people, 
particular those that have not read (or managed to fully read and understand) 
the original writeup in 2019.
Simply "run" the presentation (F5 in LibreOffice), as the presentation uses 
callouts extensively for explication.

Regards,
ZmnSCPxj




Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-05-24 Thread ZmnSCPxj via Lightning-dev
Good morning LL,

> Hey Z,
>
> Thanks for your analysis. I agree with your conclusion. I think the most 
> practical approach is the "ask first" 3 round protocol.
>
> Another option is to have `remote_penaltyclaimpubkey` owned by the node 
> instead of the hardware device.
> This allows funds to accrue in the fast forward state which can be swept into 
> the commit tx at the merchants discretion.
> If a fast forward state needs to be asserted on-chain it can then be done 
> automatically without the hardware device.
> Of course, the funds in the FF state are more vulnerable than the main 
> channel balance during that time because their keys are not in a secure 
> device but this seems ok.
> The obvious analogy is to having cash in the till (less secure) that you send 
> to your bank (more secure™) at the end of the day or week.


This seems a useful technique.

>
> > We ***need*** privkeys to be periodically online more often than 
> > `to_self_delay` anyway, ***in case of theft attempts***.
> >  So this is not an ***additional*** requirement at least.
>
> This is a really important point. I guess you have to actually do this 
> periodically, only when there is an actual attempt at theft. Quite annoying 
> to UX to require this.

If you mean "***not*** only when there is an actual attempt at theft", yes.

My thought that this would be useful for a "big"-ish merchant that primarily 
accepts payments, to mitigate its key exposure.
It would program a small low-power device such that it almost always has its 
network interface disabled, but periodically (at random times) it will enable 
its network interface and connect out to the node, presenting a proof that it 
holds the privkey.
It would have a firewall so that it cannot receive incoming connection requests 
and can only make outgoing connection requests (and as noted above, most of the 
time the network interface would be disabled outright).

Then the node could wait for this hardware to contact it, and proceed to "roll 
up" any fast-forwarded HTLCs.
This could also be an opportunity for the node to send out funds, for example 
to pay the salaries of employees or dividends to shareholders of the merchant.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-05-23 Thread ZmnSCPxj via Lightning-dev
Good morning list,

Note that there is a possible jamming attack here.

More specifically, when "failing" an incoming HTLC, the receiver of the HTLC 
sends its signature for a transaction spending the HTLC and spending it back to 
the sender's revocable contract.
The funds cannot be reused until the channel state is updated to cut-through 
all the transactions (the HTLC transaction and the failure transaction).
(Well it *could* but that greatly amps the complexity of the whole thing --- 
no, just no.)

Thus I could jam a particular receiver by sending, via forwarding nodes, to 
that receiver, payments with a random hash, which with high probability have 
preimages that are unknown by the receiver.
The receiver can only fail those HTLCs, but "to fail an HTLC" under Fast 
Forwards makes the funds unusable, until the previous channel state can be 
revoked and replaced with a new one.
But updating the channel state requires privkeys to be online in order to 
create the signatures for the new channel state.

This creates a practical limit on how long you can keep privkeys offline; if 
you keep it offline too long, an attacker can jam all your incoming capacity 
for long periods of time.
This is not currently a problem without "receiver online, privkey offline", 
since without the "privkey offline" case the receiver can update the channel 
state immediately.

However, if the receiver is willing to lose privacy, the protocol can be mildly 
modified so that the receiver tells the forwarding node to *first* ask the 
receiver about every HTLC hash before actually instantiating and sending the 
HTLC.
Only if the receiver agrees will the forwarder actually send the HTLC.
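The "ask first" filter amounts to the receiver checking the proposed payment hash against the invoices it actually issued, before the forwarder instantiates the HTLC. A minimal sketch (function and variable names are hypothetical; no such protocol message exists today):

```python
import hashlib

# Toy "ask first" filter: before instantiating a Fast Forward HTLC,
# the forwarder asks the receiver whether the payment hash is known.
issued_invoices = {hashlib.sha256(b"preimage-1").hexdigest()}

def receiver_accepts(payment_hash: str) -> bool:
    # Receiver agrees only to HTLCs whose preimage it can reveal, so it
    # never has to sign a failure tx with its privkeys offline.
    return payment_hash in issued_invoices

good = hashlib.sha256(b"preimage-1").hexdigest()
jam = hashlib.sha256(b"random-junk").hexdigest()  # attacker's random hash
```

Here `receiver_accepts(good)` holds while `receiver_accepts(jam)` does not: the jamming HTLC is refused up front instead of locking funds until privkeys come online.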

The forwarder is incentivized to go along with this, as otherwise, the receiver 
cannot actually fail any HTLCs --- it needs to provide a signature, and 
signatures require privkeys, and the receiver has those offline.
Thus, the forwarder would prefer to ask the receiver *before* it instantiates 
the HTLC, as otherwise the HTLC cannot be cancelled until the receiver gets its 
privkeys online, which can take a long time --- and if the HTLC times out in 
the meantime, that can only be enforced by dropping onchain, and Fast Forwards 
are *very* expensive in the unilateral close case.

Obviously this tells the forwarding node that the channel is used for receiving 
and that any payments over it terminate at the next hop, thus a privacy 
degradation.
On the other hand, unpublished channels remain popular despite my best efforts, 
and this is the exact problem unpublished channels have, so this is not a 
degradation of privacy in practice, since users of unpublished channels already 
have degraded privacy (axiom of terminus).

This also increases latency once again, as there is now 1.5 roundtrips 
(forwarder asks receiver if this forwarded HTLC is kosher, receiver responds, 
forwarder sends signature to HTLC transaction).
However, the increased latency only occurs at the endpoint; forwarders (which 
need to have privkeys online 100% of the time anyway, and can thus cut-through 
any number of failed HTLCs at any time) can skip the "is this HTLC kosher" 
message and just send the HTLC signatures immediately.
Thus, this may be an acceptable tradeoff.

Thus, one might consider this scheme to be usable for *either* Fast Forwards, 
*or* "receiver online, privkeys offline", but not usefully both (after all, a 
forwarder is both a receiver and a sender, and a sender needs its keys in order 
to send, so it cannot use the "privkeys offline" feature anyway).


It may be difficult to understand this, so maybe I will make a convenient 
presentation of some sort.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Improving Payment Latency by Fast Forwards

2021-05-23 Thread ZmnSCPxj via Lightning-dev
Good morning list,

I have decided to dabble in necromancy, thus, I revive this long-dead thread 
from two years ago.
Ph34R mE and my leet thread necromancy skillz.

A short while ago, LL and Steve Lee were discussing about ways to reduce the 
privkey onlineness requirement for Lightning.
And LL mentioned that this proposal could allow privkeys to be kept offline for 
a receiver.

* The receiver has to be online, still.
* Its privkeys can be kept offline.
  * The receiver can still receive even if its privkeys are offline (for 
example the privkeys are in a hardware signing module that is typically kept 
offline and is only put online at rare intervals).
* The sender has to be online and the sender privkeys have to be online.
* Forwarders and their privkeys have to be online.

Unfortunately, I think the proposal, as currently made, cannot support the 
above feature, as stated.

When an HTLC transaction is provided by the sender under a Fast Forward scheme, 
it provides a transaction that spends from "its" output in both versions of the 
latest Poon-Dryja commitment transaction.

However, this output, as provided in the Fast Forward scheme, requires *two* 
signatures.

Here are the scripts mentioned in the previous post:

OP_IF
# Penalty transaction/Fast forward
 OP_CHECKSIGVERIFY 
OP_ELSE
`to_self_delay`
OP_CSV
OP_DROP

OP_ENDIF
OP_CHECKSIG

And:

OP_IF
# Penalty transaction/Fast forward
 OP_CHECKSIGVERIFY 
OP_ELSE
`to_self_delay`
OP_CSV
OP_DROP

OP_ENDIF
OP_CHECKSIG

Now, the first branch is what is used in the Fast Forward scheme.
And we can see that in the first branch, two signatures are needed: one for 
local, and one for remote.

Thus, any HTLC-bearing transactions are signed by both the sender and receiver.
Fast Forwards works its low-latency magic by simply having the sender send the 
signature spending from the current channel state, outright, to the receiver, 
and until the channel is updated, the HTLC is "safe":

* The current channel state (represented by some commitment tx) cannot be 
replaced with an alternative, without the replacer risking funds loss 
(Poon-Dryja punishment mechanism).
* The spending tx that instantiates the HTLC is safe because the receiver will 
not willingly sign an alternate version.

***HOWEVER***, the HTLC is safe on the assumption that *the receiver can 
provide its signature* if the channel is dropped onchain.
And channels can be dropped onchain *at any time*.
If the receiver is unable to provide its signature before the `to_self_delay` 
finishes, then the sender can revoke ***all*** HTLCs it sent!

Thus, at least as initially stated, Fast Forwards cannot be used for this 
"receiver online, privkeys offline, can receive" feature.

***HOWEVER HOWEVER***, we should note that the caveats are something we can 
actually work with:

* The privkeys can only be offline for up to `to_self_delay` blocks.
  * We ***need*** privkeys to be periodically online more often than 
`to_self_delay` anyway, ***in case of theft attempts***.
So this is not an ***additional*** requirement at least.
* Watchtowers cannot guard against attempts to steal Fast Forwarded HTLCs --- 
they need to receive the signatures for the HTLC transactions as well, 
otherwise they can do nothing about it.
  * However, whenever the receiver sends to a watchtower it *does* need to send 
signatures anyway, so it still needs to get privkeys online for signing.
  * Since we need the privkeys to be made online a little more often than every 
`to_self_delay` blocks anyway, this is *not* an additional requirement!
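A rough back-of-envelope for that "a little more often than `to_self_delay`" requirement (the 144-block value and the safety margin below are assumptions for illustration, not protocol constants):

```python
# The receiver's offline privkeys must be able to sign within
# `to_self_delay` blocks of any unilateral close, so they should come
# online with comfortable margin inside that window.
TO_SELF_DELAY_BLOCKS = 144     # assumed value; ~1 day at ~10 min/block
AVG_BLOCK_MINUTES = 10
SAFETY_MARGIN = 0.5            # come online twice per delay window

interval_minutes = TO_SELF_DELAY_BLOCKS * AVG_BLOCK_MINUTES * SAFETY_MARGIN
print(f"bring privkeys online at least every {interval_minutes / 60:.0f} hours")
```

With these assumed numbers the hardware signing module would need to come online roughly twice a day, which is also often enough to guard against ordinary revocation theft attempts.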

***THUS***, we *can* provide a tweaked version of the above desired feature:

* The receiver has to be online, still.
* Its privkeys can be kept offline, ***BUT***, it has to be regularly brought 
online a little more often than `to_self_delay`.
  * The receiver can still receive even if its privkeys are offline (for 
example the privkeys are in a hardware signing module that is typically kept 
offline and is only put online at rare intervals).
* The sender has to be online and the sender privkeys have to be online.
* Forwarders and their privkeys have to be online.

The additional requirement --- that the receiver privkeys have to be regularly 
brought online --- is not actually an onerous one *since we need it for channel 
safety*.

So this feature is indeed supported by Fast Forwards (with fairly minimal 
caveats!) and kudos to LL for thinking of it!

Against this, we should remind the drawbacks of Fast Forwards:

* Onchain space is much higher (in the unilateral close case only --- channel 
opens and mutual closes remain small) due to chaining of transactions instead 
of strict replacement.
* Fast Forwarded HTLCs need to have their signatures stored on replicated 
storage, and is O(n) (current Poon-Dryja requires O(1) storage for revocation 
keys but O(n) storage for HTLC 

Re: [Lightning-dev] Questions on lightning chan closure privacy

2021-05-18 Thread ZmnSCPxj via Lightning-dev
Good morning LL and Lee,

> Hi Lee,
>
> You are touching on some very relevant privacy challenges for lightning. To 
> your questions:
>
> 1. Is it possible to identify which node funded a lightning channel? (this 
> tells you who owns the change output)
> 2. Is it possible to identify who owns which channel close output?
>
> I think that the answer to both these questions hinges on whether you 
> exclusively use private channels. If you fund private and public channels 
> with the same wallet then it may be possible to identify your private 
> channels and the owner of the channel and channel close outputs[1].

It is helpful to avoid the terminology "public / private" and use instead 
"published / unpublished", precisely because unpublished channels are not 
necessarily an improvement in privacy (but are a degradation in usability for 
the rest of the network).

If a node has a mix of published and unpublished channels, then it is usually 
possible to look at a closed unpublished channel and determine which output 
belongs to that node.
And because channels are composed of two participants, by simple elimination, 
the other output obviously belongs to the counterparty.

Now, a node that only has unpublished channels has to (in the current network) 
be connected to a node with *mixed* published and unpublished channels.
Otherwise, it would not be able to find a route to *any* other payee via that 
channel, and thus the channel capacity is wasted.

When that channel is closed, with non-negligible probability it is possible to 
determine which output goes to the "mixed" node and which one goes to the 
"unpublished-only" node.
That can then be tracked as well.

Thus, a node which has only unpublished channels does not really have a much 
improved privacy over one which uses only published channels, or has a mix of 
channels.

--

On the other hand, I have written before about "CoinSwapper", which is 
basically:

* Use some onchain funds to create a channel to some random well-connected node.
* Pay to an offchain-to-onchain swap and withdraw all your coins onchain.
* Close the previous channel and blacklist your output from the mutual close 
(i.e. throw away the key and destroy all evidence that you used that channel).

This allows some privacy, as long as you never use the output from the mutual 
close.
This is a clunky way you can achieve CoinSwap in practice today without waiting 
for specific CoinSwap software.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Escrow Over Lightning?

2021-02-12 Thread ZmnSCPxj via Lightning-dev
Good morning Nadav, and list,


>
> More generally, all Boolean logic can be converted to one of two standard 
> forms.
>
> -   sum-of-products i.e. `||` over `&&`
> -   product-of-sums i.e. `&&` over `||`
>
> For example an XOR can be converted to the sum-of-products form:
>
> A ^ B = (!A && B) || (A && !B)
>
> If we have any complicated Boolean logic, we can consider to always use 
> some kind of product-of-sums form.
> So for the example case, escrow service is the logic:
>
> SELLER && (BUYER || ESCROW)
>
> The above is a standard product-of-sums form.
>
> Any sums (i.e. `||`) can be converted by De Morgan Theorem to product, 
> and the payment can be a reversal instead.
>
> SELLER && !(!BUYER && !ESCROW)
>
> The `!(a && b && ...)` can be converted to a reversal of the payment.
> The individual `!BUYER` is just the buyer choosing not to claim the 
> seller->buyer direction, and the individual `!ESCROW` is just the escrow 
> choosing not to reveal its temporary scalar for this payment.
>
>
> And any products (i.e. `&&`) are trivially implemented in PTLCs as trivial 
> scalar and point addition.
>
> So it may actually be possible to express any Boolean logic, by the use of 
> reversal payments and "option not to release scalar", both of which implement 
> the NOT gate needed for the above.
> Boolean logic is a fairly powerful, non-Turing-complete, and consistent 
> programming language, and if we can actually implement any kind of Boolean 
> logic with a set of payments in various directions and Barrier Escrows we can 
> enable some fairly complex use-cases..
>
> For example, a simple DLC binary oracle can provide two points in such a way 
> that it can only reveal one scalar of those two points (e.g. it has a 
> persistent public key `P`, and two temporary points `H` and `T` such that `H 
> = T + P`, and it can only safely reveal either `h` or `t`.).
> Based on the outcome of a coin flip (or other input from the mythical "real 
> world"), it reveals either one or the other scalar.
> Then we can use either point as part of any `!Oracle` or `Oracle` Boolean 
> logic we need.



Okay, so here is a worked example.

Suppose we have two oracles, 1 and 2.
At some point in the future, they will each flip a coin.
Based on their (independent) coin flip, oracle 1 will reveal either H1 or T1, 
and oracle 2 will reveal either H2 or T2.

Suppose some bettor wants to make some bet:

* Either both coins are heads (H1 && H2), or both coins are tails (T1 && T2).

So we have a Bettor, and a Bookie that facilitates this bet.

So the base logic is that the bettor wins (i.e. there is a payment 
Bookie->Bettor) if:

(H1 && H2) || (T1 && T2)

And the inverse of that logic (Bettor->Bookie) if the above is false.

We also know that `T1 = !H1` and `T2 = !H2` (i.e. the DLC oracles will only 
publish one scalar or the other), so:

(H1 && H2) || (!H1 && !H2)

Let us transform to product-of-sums (this can be done mechanically, e.g. with a 
Karnaugh Map):

(H1 || !H2) && (!H1 || H2)

Let us check by Boolean table:

H1   H2   (H1 && H2) || (!H1 && !H2)   (H1 || !H2) && (!H1 || H2)
0    0                1                            1
0    1                0                            0
1    0                0                            0
1    1                1                            1

So the above product-of-sums is correct.
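The same check can be done by brute force; here is a small Python sketch (plain Booleans standing in for "scalar is revealed") confirming that the sum-of-products, the product-of-sums, and the De Morganized form below all agree on every outcome:

```python
# Verify that all three forms of the bet logic agree on all four outcomes.
for h1 in (False, True):
    for h2 in (False, True):
        sop = (h1 and h2) or ((not h1) and (not h2))        # (H1 && H2) || (!H1 && !H2)
        pos = (h1 or (not h2)) and ((not h1) or h2)         # (H1 || !H2) && (!H1 || H2)
        dem = (not ((not h1) and h2)) and (not (h1 and (not h2)))  # De Morganized
        assert sop == pos == dem
print("all four outcomes agree")
```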

We apply the De Morgan transform:

!(!H1 && H2) && !(H1 && !H2)

Then we return the `T`s:

!(T1 && H2) && !(H1 && T2)

Since the logic is inverted, what actually happens is that the Bettor makes two 
payments:

* Bettor->Bookie : (Bookie && T1 && H2)
* Bettor->Bookie : (Bookie && H1 && T2)

The Bookie would also need to pay out if the Bettor wins, so the Bookie makes 
two payments as well:

* Bookie->Bettor : (Bettor && T1 && T2)
* Bookie->Bettor : (Bettor && H1 && H2)

We can derive the above by inverting the initial `(H1 && H2) || (!H1 && !H2)` 
logic, then going through the same conversion to product-of-sums and De 
Morganizing it as for the Bettor case.

With the above, we now have a setup where either both oracles are heads, or 
both oracles are tails, and if so the Bettor wins, otherwise the Bookie wins.
This all probably needs to be set up with some kind of Barrier Escrow, but 
Nadav already has that covered.
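To convince ourselves the four payments settle correctly, here is a toy enumeration (illustrative names only, one unit per payment) of which payments become claimable for each coin-flip outcome, and the resulting net transfer:

```python
# Each payment is (direction, claimability condition on the revealed scalars).
# h1/h2 True means the oracle revealed "heads" (H), False means "tails" (T).
payments = [
    ("Bettor->Bookie", lambda h1, h2: (not h1) and h2),        # Bookie && T1 && H2
    ("Bettor->Bookie", lambda h1, h2: h1 and (not h2)),        # Bookie && H1 && T2
    ("Bookie->Bettor", lambda h1, h2: (not h1) and (not h2)),  # Bettor && T1 && T2
    ("Bookie->Bettor", lambda h1, h2: h1 and h2),              # Bettor && H1 && H2
]

for h1 in (False, True):
    for h2 in (False, True):
        net = 0  # net transfer to the Bettor, in units
        for direction, claimable in payments:
            if claimable(h1, h2):
                net += 1 if direction == "Bookie->Bettor" else -1
        # The Bettor should win exactly when the two coins match.
        assert net == (1 if h1 == h2 else -1)
print("Bettor wins iff both coins match")
```

In every outcome exactly one of the four payments is claimable, so the loser is out exactly one unit.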

Here is a cute magical trick.
What happens if for example oracle 1 has a failure where the CPU liquid cooler 
on its server fails, and oracle 1 is unable to find a replacement CPU cooler 
because the CPU socket has been obsoleted and nobody makes CPU coolers for that 
CPU socket anymore and the server cannot be brought up again?
In that case, it will be unable to publish either `H1` or `T1`.

And note that all the payments above involve `H1` or `T1`.
In that case, nobody pays out to anyone, as none of the payments are ever 
claimable.
Thus the case where "oracle disappears" is handled "gracefully" by simply not 
having any monetary transfers at all.

Re: [Lightning-dev] Escrow Over Lightning?

2021-02-12 Thread ZmnSCPxj via Lightning-dev
Good morning Nadav,

> Hey ZmnSCPxj,
>
> Your earlier post about how to accomplish ORing points without verifiable 
> encryption was super interesting.
>
> I think this contains a clever general NOT operation where you double the 
> payment and use the point as a condition for the "cancellation payment." This 
> is actually very similar to something that is used in my PTLC DLC scheme 
> where many payments are failed in most cases :) But nice to add it to the 
> toolkit, especially as a way to not use ORs for the price of 
> over-collateralization which is acceptable in many use cases.

Indeed, specifically this point of De Morgan Theorem transformation should 
probably be emphasized.

More generally, all Boolean logic can be converted to one of two standard forms.

* sum-of-products i.e. `||` over `&&`
* product-of-sums i.e. `&&` over `||`

For example an XOR can be converted to the sum-of-products form:

A ^ B = (!A && B) || (A && !B)

If we have any complicated Boolean logic, we can consider always using some 
kind of product-of-sums form.
So for the example case, escrow service is the logic:

SELLER && (BUYER || ESCROW)

The above is a standard product-of-sums form.

Any sums (i.e. `||`) can be converted by De Morgan's Theorem to a product, and 
the payment can be a reversal instead.

SELLER && !(!BUYER && !ESCROW)

The `!(a && b && ...)` can be converted to a reversal of the payment.
The individual `!BUYER` is just the buyer choosing not to claim the 
seller->buyer direction, and the individual `!ESCROW` is just the escrow 
choosing not to reveal its temporary scalar for this payment.
And any products (i.e. `&&`) are trivially implemented in PTLCs as trivial 
scalar and point addition.
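As a quick sanity check of the De Morgan step, a brute-force sketch in Python (plain Booleans standing in for "party claims / scalar is released") confirming the two escrow conditions are equivalent:

```python
# Check SELLER && (BUYER || ESCROW)  ==  SELLER && !(!BUYER && !ESCROW)
# over all eight combinations.
for seller in (False, True):
    for buyer in (False, True):
        for escrow in (False, True):
            original = seller and (buyer or escrow)
            demorgan = seller and not ((not buyer) and (not escrow))
            assert original == demorgan
print("forms are equivalent")
```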

So it may actually be possible to express *any* Boolean logic, by the use of 
reversal payments and "option not to release scalar", both of which implement 
the NOT gate needed for the above.
Boolean logic is a fairly powerful, non-Turing-complete, and consistent 
programming language, and if we can actually implement any kind of Boolean 
logic with a set of payments in various directions and Barrier Escrows we can 
enable some fairly complex use-cases.

For example, a simple DLC binary oracle can provide two points in such a way 
that it can only reveal one scalar of those two points (e.g. it has a 
persistent public key `P`, and two temporary points `H` and `T` such that `H = 
T + P`, and it can only safely reveal either `h` or `t`.).
Based on the outcome of a coin flip (or other input from the mythical "real 
world"), it reveals either one or the other scalar.
Then we can use either point as part of any `!Oracle` or `Oracle` Boolean logic 
we need.
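The `H = T + P` relation can be illustrated with a toy model (this is NOT real elliptic-curve cryptography; integers modulo a prime stand in for curve points, and all names are illustrative):

```python
# Toy model: "point(s) = s*G mod n", so point addition mirrors scalar addition.
import secrets

n = 2**127 - 1   # a prime standing in for the group order
G = 7            # stand-in generator

def point(scalar):
    return (scalar * G) % n

p = secrets.randbelow(n)   # oracle's persistent secret key, P = point(p)
t = secrets.randbelow(n)   # temporary scalar for "tails"
h = (t + p) % n            # chosen so that H = T + P

# Anyone can check the relation on the public points before the flip:
assert point(h) == (point(t) + point(p)) % n

# After the flip the oracle reveals exactly one of h or t; revealing both
# would leak p (since h - t = p), which is why only one is safe to publish.
assert (h - t) % n == p
```

This also makes concrete why the oracle "can only safely reveal either `h` or `t`": publishing both is equivalent to publishing its persistent secret key.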

>
> One comment to make though, is that this mechanism requires the atomic setup 
> of multiple payments otherwise Seller -> Buyer will be set up after which 
> Buyer may keep the free option and not set up the payment in return. Luckily 
> with barrier escrows we can do atomic multi-payment setup to accomplish this!

For this particular use-case, I think it is safe to just use the order 
"Seller->Buyer, then Buyer->Seller" rather than add a barrier escrow.
Remember, the entire setup presumes that both Buyer and Seller can tr\*st the 
Escrow to resolve disputes, and the Seller->Buyer payment requires BUYER && 
ESCROW.
If the buyer never makes the Buyer->Seller payment presumably the Escrow will 
take that into consideration during dispute resolution and not release the 
ESCROW scalar to the Buyer.

And if the Buyer->Seller payment (which requires only SELLER scalar) is claimed 
"early" by the Seller before handing off the item, the Escrow is tr\*sted to 
consider this also (it is substantially the same as the Seller providing 
substandard goods) and release the ESCROW scalar.

Of course in the most general case above where we could potentially do any 
arbitrary logic it probably makes most sense to use a Barrier escrow as well to 
ensure atomicity of the setup.


Regards,
ZmnSCPxj


  1   2   3   4   5   6   >