Re: [Lightning-dev] [bitcoin-dev] CheckTemplateVerify Does Not Scale Due to UTXO's Required For Fee Payment

2024-01-29 Thread Anthony Towns
On Tue, Jan 30, 2024 at 05:17:04AM +, ZmnSCPxj via bitcoin-dev wrote:
> 
> > I should note that under Decker-Russell-Osuntokun the expectation is that 
> > both counterparties hold the same offchain transactions (hence why it is 
> > sometimes called "LN-symmetry").
> > However, there are two ways to get around this:
> > 
> > 1. Split the fee between them in some "fair" way.
> > Definition of "fair" wen?
> > 2. Create an artificial asymmetry: flip a bit of `nSequence` for the 
> > update+state txes of one counterparty, and have each side provide 
> > signatures for the tx held by its counterparty (as in Poon-Dryja).
> > This lets you force that the party that holds a particular update+state tx 
> > is the one that pays fees.
> 
> No, wait, #2 does not actually work as stated.
> Decker-Russell-Osuntokun uses `SIGHASH_NOINPUT` meaning the `nSequence` is 
> not committed in the signature and can be malleated.

BIP 118 as at March 2021 (when it defined NOINPUT rather than APO):

] The transaction digest algorithm from BIP 143 is used when verifying a
] SIGHASH_NOINPUT signature, with the following modifications:
]
] 2. hashPrevouts (32-byte hash) is 32 0x00 bytes
] 3. hashSequence (32-byte hash) is 32 0x00 bytes
] 4. outpoint (32-byte hash + 4-byte little endian) is
]set to 36 0x00 bytes
] 5. scriptCode of the input is set to an empty script
]0x00

BIP 143:

] A new transaction digest algorithm is defined, but only applicable to sigops 
in version 0 witness program:
]
]   Double SHA256 of the serialization of:
] ...
]  2. hashPrevouts (32-byte hash)
]  3. hashSequence (32-byte hash)
]  4. outpoint (32-byte hash + 4-byte little endian) 
]  5. scriptCode of the input (serialized as scripts inside CTxOuts)
] ...
]  7. nSequence of the input (4-byte little endian)

So nSequence would still have been committed to per that proposal.
Dropping hashSequence just removes the commitment to the other inputs
being spent by the tx.

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] delvingbitcoin.org discourse forum

2023-11-08 Thread Anthony Towns
Hi all,

It's been mentioned on bitcoin-dev [0] that linuxfoundation is apparently
going to cease hosting mailing lists in the next couple of months.

[0] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022134.html

Anyway, I know some folks have already seen it, but I've been running a
discourse forum called "Delving Bitcoin" [1] [2] for a little while now,
so thought I'd take this opportunity to invite y'all to consider using
it for posting lightning related R&D topics for discussion.

[1] https://delvingbitcoin.org/
[2] 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-November/022158.html

It seems like it would also be reasonable and low effort to setup
project-specific areas (eg for particular software like CLN/LND/Eclair,
or for work on particular features like BOLT-12 or eltoo or jamming
mitigations); might be an interesting thing to try out.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Practical PTLCs, a little more concretely

2023-09-20 Thread Anthony Towns
On 21 September 2023 11:44:47 am AEST, Lloyd Fournier  
wrote:
>Hi AJ,
>
>On Wed, 20 Sept 2023 at 17:19, Anthony Towns  wrote:
>
>>
>> I think:
>>
>>   https://github.com/BlockstreamResearch/scriptless-scripts/pull/24
>>
>> describes (w/ proof sketch) how to do a single-signer adaptor with musig2;
>> might need some updates, to match the final musig2 API, but I think it's
>> fundamentally okay: ie, you get the "single-sig adaptor" approach, but
>> can just use the musig2 api, so best of both worlds.
>>
>>
> Can you explain the distinction here? What is a MuSig2 adaptor signature
>vs single-singer adaptor with MuSig2.
>
>Cheers,
>
>LL

You can do ptlcs scriptlessly by having a 2-of-2 musig pubkey that the payer 
signs with an adaptor signature - this can be done via the key path, but then 
requires nonce exchanges leading to extra communication rounds. Alternatively, 
you can do them via the script path, replacing "hash160 equalverify b checksig" 
with "a checksigverify b checksig" where Alice gives Bob an adaptor sig which 
Bob completes when claiming the output. The question is how to do this latter 
approach, and I think it works fine to have Alice do a "single party musig2" 
calculation for it, rather than needing a separate api.

Cheers,
aj


-- 
Sent from my phone.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Practical PTLCs, a little more concretely

2023-09-20 Thread Anthony Towns
On Wed, Sep 06, 2023 at 12:14:08PM -0400, Greg Sanders wrote:
> Since taproot channels are deploying Soon(TM), I think it behooves us to
> turn some attention to PTLCs as a practical matter, drilling down a bit
> deeper.

I think a bunch of these depends on who's interested in doing the work
to roll this out? I guess the path of least resistance would be to start
by writing an optional BLIP that gets implemented in one node software
(as an opt-in, experimental, feature?), and if that works out okay,
implementing it elsewhere and upgrading it for inclusion in the BOLTs? I
think the interop requirements are:

 * #7 routing gossip: does this channel support forwarding PTLCs
 * #4 onion messages: this is a PTLC payment, not an HTLC one; this is
   how you calculate the next point when forwarding
 * #11/#12 invoices: this is a PTLC payment point, not an HTLC payment
   hash

 * #3 tx format: this is the on-chain format we use to reveal a PTLC
   preimage
 * #2 peer messaging: this is how we pass along signatures for txs

It's important (and I think not very hard) to get the first three right
ASAP, since those require network/ecosystem-wide upgrades to change;
whereas the latter two points can be renegotiated between peers without
affecting anyone else. 

I think all the alternatives in

> https://gist.github.com/instagibbs/1d02d0251640c250ceea1c5ec163

fall into the latter two points. So it's probably best to just pick
something easy and pleasant to implement and get something that works,
and worry about optimising it later? Of course, what's easy/pleasant
depends on who's doing the work...

> 1) single-sig adaptors vs MuSig2

I think:

  https://github.com/BlockstreamResearch/scriptless-scripts/pull/24

describes (w/ proof sketch) how to do a single-signer adaptor with musig2;
might need some updates, to match the final musig2 API, but I think it's
fundamentally okay: ie, you get the "single-sig adaptor" approach, but
can just use the musig2 api, so best of both worlds.

> Hopefully this is a useful refresher and can perhaps start the discussion
> of where on the performance/engineering lift curve we want to end up, maybe
> even leading up to a standardization effort. Or maybe all this is wrong,
> let me know!

If I'm understanding your page correctly, you have to either add or
re-order peer messages when updating the commitment txs; so in that case,
I guess I'd go with "single-sig adaptor (via musig2 api), sync updates,
switch up commitment ordering". Should be comparable efficiency on-chain
to HTLCs today, fairly easy to think about/implement, and keeps the 1.5
RTT forwarding delay so adoption shouldn't make UX worse.

After there's something working, I could imagine:

 * adding a feature flag for upgrading a single-sig adaptor to being
   claimable via a musig2 double-sig key path -- you can still forward
   after 1.5 RTT, but after 3.5/4.5 RTT you can claim on-chain more
   cheaply if needed
 * adding a feature flag for APO support so you only have to sign each
   PTLC once, not once-per-update
 * adding a feature flag for 0.5 RTT fast-forwards
 * adding a feature flag for supporting async updates (maybe useful
   if you're a high bandwidth node doing fast-forwards?)
 * etc

but unless whoever's implementing is really excited about some of that,
seems better to defer it, and keep things simple to start.

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Scaling Lightning With Simple Covenants

2023-09-11 Thread Anthony Towns
On Fri, Sep 08, 2023 at 06:54:46PM +, jlspc via Lightning-dev wrote:
> TL;DR
> =

I haven't really digested this, but I think there's a trust vs
capital-efficiency tradeoff here that's worth extracting.

Suppose you have a single UTXO, that's claimable by "B" at time T+L,
but at time T that UTXO holds funds belonging not only to B, but also
millions of casual users, C_1..C_100. If B cheats (eg by not signing
any further lightning updates between now and time T+L), then each
casual user needs to drop their channel to the chain, or else lose all
their funds. (Passive rollovers doesn't change this -- it just moves the
responsibility for dropping the channel to the chain to some other
participant)

That then faces the "thundering herd" problem -- instead of the single
one-in/one-out tx that we expected when B is doing the right thing,
we're instead seeing between 1M and 2M on-chain txs as everyone recovers
their funds (the number of casual users multiplied by some factor that
depends on how many outputs each internal tx has).

But whether an additional couple of million txs is a problem depends
on how long a timeframe they're spread over -- if it's a day or two,
then it might simply be impossible; if it's over a year or more, it
may not even be noticable; if it's somewhere in between, it might just
mean you're paying a modest amount in additional fees than you'd have
normally expected.

Suppose that casual users have a factor in mind, eg "If worst comes to
worst, and everyone decides to exit at the same time I do, I want to be
sure that only generates 100 extra transactions per block if everyone
wants to recover their funds prior to B being able to steal everything".

Then in that case, they can calculate along the following lines: 1M users
with 2-outputs per internal tx means 2M transactions, divide that by 100
gives 20k blocks, at 144 blocks per day, that's 5 months. Therefore,
I'm going to ensure all my funds are rolled over to a new utxo while
there's at least 5 months left on the timeout.

That lowers B's capital efficiency -- if all the causal users follow
that policy, then B is going to own all the funds in Fx for five whole
months before it can access them. So each utxo here has its total
lifetime (L) actually split into two phases: an active lifetime LA of
some period, and an inactive lifetime of LI=5 months, which would have
been used by everyone to recover their funds if B had attempted to block
normal rollover. The capital efficiency is then reduced by a factor of
1/(1+LA/LI). (LI is dependent on the number of users, their willingness
to pay high fees to recover their funds, and global blockchain capacity,
LA is L-LI, L is your choice)

Note that casual users can't easily reduce their LI timeout just by
having the provider split them into different utxos -- if the provider
cheats/fails, that's almost certainly a correlated across all their
utxos, and all the participants across each of those utxos will need
to drop to the chain to preserve their funds, each competing with each
other for confirmation.

Also, if different providers collude, they can cause problems: if you
expected 2M transactions over five months due to one provider failing,
that's one thing; but if a dozen providers fail simultaneously, then that
balloons up to perhaps 24M txs over the same five months, or perhaps 25%
of every block, which may be quite a different matter.

Ignoring that caveat, what do numbers here look like? If you're a provider
who issues a new utxo every week (so new customers can join without too
much delay), have a million casual users as customers, and target LA=16
weeks (~3.5 months), so users don't need to rollover too frequently,
and each user has a balanced channel with $2000 of their own funds,
and $2000 of your funds, so they can both pay and be paid, then your
utxos might look like:

   active_1 through active_16: 62,500 users each; $250M balance each
   inactive_17 through inactive_35: $250M balance each, all your funds,
  waiting for timeout to be usable

That's:
  * $2B of user funds
  * $2B of your funds in active channels
  * $4.5B of your funds locked up, waiting for timeout

In that case, only 30% of the $6.5B worth of working capital that you've
dedicated to lightning is actually available for routing.

Optimising that formula by making LA as large as possible doesn't
necessarily work -- if a casual user spends all their funds and
disappears prior to the active lifetime running out, then those
funds can't be easily spent by B until the total lifetime runs out,
so depending on how persistent your casual users are, I think that's
another way of ending up with your capital locked up unproductively.
(There are probably ways around this with additional complexity: eg,
you could peer with a dedicated node, and have the timeout path be
"you+them+timeout", so that while you could steal from casual users who
don't rollover, you can't steal from your dedicated peer, so that $4.5B
could be 

Re: [Lightning-dev] LN Summit 2023 Notes

2023-08-03 Thread Anthony Towns
On Mon, Jul 31, 2023 at 02:42:29PM -0400, Clara Shikhelman wrote:
> > A different way of thinking about the monetary approach is in terms of
> > scaling rather than deterrance: that is, try to make the cost that the
> > attacker pays sufficient to scale up your node/the network so that you
> > continue to have excess capacity to serve regular users.

Just to clarify, my goal for these comments was intended to be mostly
along the lines of:

 "I think monetary-based DoS deterrence is still likely to be a fruitful
  area for research if people are interested, even if the current
  implementation work is focussed on reputation-based methods"

At least the way I read the summit notes, I could see people coming away
with the alternative impression; ie "we've explored monetary approaches
and think there's nothing possible there; don't waste your time", and
mostly just wanted to provide a counter to that impression.

The scheme I outlined was mostly provided as a rough proof-of-work to
justify thinking that way and as perhaps one approach that could be
researched further, rather than something people should be actively
working on, let alone anything that should distract from working on the
reputation-based approach.

After talking with harding on irc, it seems that was not as obvious in
text as it was in my head, so just thought I'd spell it out...

> As for liquidity DoS, the “holy grail” is indeed charging fees as a
> function of the time the HTLC was held. As for now, we are not aware of a
> reasonable way to do this. 

Sure.

> There is no universal clock,

I think that's too absolute a statement. The requirement is either that
you figure out a way of using the chain tip as a clock (which I gave a
sketch of), or you setup local clocks with each peer and have a scheme
for dealing with them being slightly out of sync (and probably use the
chain tip as a way of ensuring they aren't very out of sync).

> and there is no way
> for me to prove that a message was sent to you, and you decided to pretend
> you didn't.

All the messages in the scheme I suggested involve commitment tx updates
-- either introducing/closing a HTLC or making a payment for keeping a
HTLC active and tying up your counterparty's liquidity. You don't need to
prove that messages were/weren't sent -- if they were, your commitment
tx is already updated to deal with it, if they weren't but should have
been, your channel is in an invalid state, and you close it onchain.

To me, proving things seems like something that comes up in reputation
based approaches, where you need to reference a hit on someone else's
reputation to avoid taking a hit on yours, rather than a monetary based
one, where all you should need to do is check you got paid for whatever
service you were providing, and conversely pay for whatever services
you've been requiring.

> It can easily happen that the fee for a two-week unresolved
> HTLC is higher than the fee for a quickly resolving one.

That should be the common case, yes, and it's problematic if you can have
both a high percentage fee (or a high amount), and a distant timeout.
But that may be a situation you can avoid, and I gave a sketch of one
way you could do that.

> I think this is another take on time-based fees. In this variation, the
> victim is trying to take a fee from the attacker. If the attacker is not
> willing to pay the fee (and why would they?), then the victim has to force
> close. There is no way for the victim to prove that it is someone
> downstream holding the HTLC and not them.

The point is that you get paid for your liquidity beind held hostage;
whether the channel is closed or stays open. If that works, there's
no victim in this scenario -- you set a price for your liquidity to be
reserved over time in the hope that the payment will eventually succeed,
and you get paid that fee, until whoever currently holds the HTLC the
chance of success isn't worth the ongoing cost anymore.

The point of force closing it the same as any force close -- your
counterparty stops following the protocol you both agreed to. That can
happy any time, even just due to cosmic rays.

> > >  - They’re not large enough to be enforceable, so somebody always has
> > > to give the money back off chain.
> > If the cap is 500ppm per block, then the liquidity fees for a 2000sat
> > payment ($0.60) are redeemable onchain.
> This heavily depends on the on-chain fees, and so will need to be
> updated as a function of that, and adds another layer of complication.

I don't think that's true -- this is just a direct adjustment to the
commitment tx balance outputs, so doesn't change the on-chain size/cost
of the commitment tx.

The link to on-chain fees (at least in the scheme I outlined) is via
the cap (for which I gave an assumed value above) -- you don't want the
extra profit your counterparty would get from from that adjustment to
outweigh something like sum(their liquidity value of locking their funds
up due to a unilateral close;

Re: [Lightning-dev] LN Summit 2023 Notes

2023-07-26 Thread Anthony Towns
On Wed, Jul 19, 2023 at 09:56:11AM -0400, Carla Kirk-Cohen wrote:
> Thanks to everyone who traveled far, Wolf for hosting us in style in
> NYC and to Michael Levin for helping out with notes <3

Thanks for the notes!

Couple of comments:

> - What is the “top of mempool” assumption?

FWIW, I think this makes much more sense if you think about this as a
few related, but separate goals:

 * transactors want their proposed txs to go to miners
 * pools/miners want to see the most profitable txs asap
 * node operators want to support bitcoin users/businesses
 * node operators also want to avoid wasting too much bandwidth/cpu/etc
   relaying txs that aren't going to be mined, both their own and that
   of other nodes'
 * people who care about decentralisation want miners to get near-optimal
   tx selection with a default bitcoind setup, so there's no secret
   sauce or moats that could encourage a mining monopoly to develop

Special casing lightning unilateral closes [0] probably wouldn't be
horrible. It's obviously good for the first three goals. As far as the
fourth, if it was lightning nodes doing the relaying, they could limit
each unilateral close to one rbf attempt (based on to_local/to_remote
outputs changing). And for the fifth, provided unilateral closes remain
rare, the special config isn't likely to cause much of a profit difference
between big pools and small ones (and maybe that's only a short term
issue, and a more general solution will be found and implemented, where
stuff that would be in the next block gets relayed much more aggressively,
even if it replaces a lot of transactions).

[0] eg, by having lightning nodes relay the txs even when bitcoind
doesn't relay them, and having some miners run special configurations
to pull those txs in.

> - Is there a future where miners don’t care about policy at all?

Thinking about the different goals above seems like it gives a clear
answer to this: as far as mining goes, no there's no need to care
about policy restricitions -- policy is just there to meet other goals:
making it possible to run a node without wasting bandwidth, and to help
decentralisation by letting miners just buy hardware and deploy it,
without needing to do a bunch of protocol level trade secret/black magic
stuff in order to be competitive.

>   - It must be zero fee so that it will be evicted.

The point of making a tx with ephemeral outputs be zero fee is to
prevent it from being mined in non-attack scenarios, which in turn avoids
generating a dust utxo. (An attacking miner can just create arbitrary
dust utxos already, of course)

> - Should we add trimmed HTLCs to the ephemeral anchor?
>   - You can’t keep things in OP_TRUE because they’ll be taken.
>   - You can also just put it in fees as before.

The only way value in an OP_TRUE output can be taken is by confirming
the parent tx that created the OP_TRUE output, exactly the same as if
the value had been spent to fees instead.

Putting the value to fees directly would violate the "tx must be zero
fee if it creates ephemeral outputs" constraint above.

> ### Hybrid Approach to Channel Jamming
> - Generally when we think about jamming, there are three “classes” of
> mitigations:
>   - Monetary: unconditional fees, implemented in various ways.
> - The problem is that none of these solutions work in isolation.
>   - Monetary: the cost that will deter an attacker is unreasonable for an
> honest user, and the cost that is reasonable for an honest user is too low
> for an attacker.

A different way of thinking about the monetary approach is in terms of
scaling rather than deterrance: that is, try to make the cost that the
attacker pays sufficient to scale up your node/the network so that you
continue to have excess capacity to serve regular users.

In that case, if people are suddenly routing their netflix data and
nostr photo libraries over lightning onion packets, that's fine: you
make them pay amazon ec2 prices plus 50% for the resources they use,
and when they do , you deploy more servers. ie, turn your attackers and
spammers into a profit centre.

I've had an email about this sitting in my drafts for a few years now,
but I think this could work something like:

 - message spam (ie, onion traffic costs): when you send a message
   to a peer, pay for its bandwidth and compute. Perhaps something
   like 20c/GB is reasonable, which is something like 1msat per onion
   packet, so perhaps 20msat per onion packet if you're forwarding it
   over 20 hops.

 - liquidity DoS prevention: if you're in receipt of a HTLC/PTLC and
   aren't cancelling or confirming it, you pay your peer a fee for
   holding their funds. (if you're forwarding the HTLC, then whoever you
   forwarded to pays you a slightly higher fee, while they hold your
   funds) Something like 1ppm per hour matches a 1% pa return, so if
   you're an LSP holding on to a $20 payment waiting for the recipient to
   come online and claim it, then you might be paying out $0.0004

Re: [Lightning-dev] Resizing Lightning Channels Off-Chain With Hierarchical Channels

2023-04-23 Thread Anthony Towns
On Tue, Apr 18, 2023 at 07:17:34PM +, jlspc wrote:
> > One thing that confuses me about the paper is how to think about routing
> > to a "channel" rather than a node -- ie the payment from "E->FG->A" where
> > "FG" isn't "F" or "G", but "both of them".
> Yes, I found it very difficult to think about, and I kept getting confused 
> between concepts like "user", "node", "channel", and "factory".
> The thing that works best for me is to create a clear definition of each of 
> these terms, along with the term "party".

Okay, that makes sense. I think it might work better to treat "node" as
synonymous with "user" rather than "party" though -- that way you can say
"you create a lightning node by running lightning node software such as
lnd/cln/eclair/etc". That means not all vertices in the payment routing
network are nodes; but all vertices in the *gossip* network are nodes,
so that seems okay.

Just saying "channel" (instead of "logical channel") and "utxo/off-chain
utxo" (instead of "physical channel") might also work okay.

> I also think it's best to imagine a world in which there are hierarchical 
> channels, but there are no "factories" per se.

Personally, I use "channel factory" to mean "anything that lets a
single utxo contain multiple channels between different users, that
can be reorganised without going on-chain", so I think once you've got
hierarchial channels, you've implicitly got (a variation of) channel
factories.

(I'm not sure "channel factories" is really the most evocative way of
describing them -- at least when I think of a factory, I think the product
should be accessible to everyone; but for channel factories you have to
be involved in the factory's original mutlisig to be able to use one of
its channels. Maybe better to call them "channel coops", where you're
creating a little commune of friends/allies to work together with each
other. Could be pronounced like "workers' co-op" or like "chicken coop",
either way :)

> * Logical Channel: a layer 2 construct that consists of all of the physical 
> channels owned by a specific pair of parties
>   - the size (capacity) of a logical channel is the sum of the sizes of their 
> physical channels
>   - (Footnote: It's possible, with a significant amount of software 
> development work that I in no way discount, to route a payment through a 
> logical channel where the payment traverses multiple physical channels at the 
> same time. This is done by using separate HTLCs, all sharing the same secret, 
> in each of the physical channels that the payment traverses. I can write more 
> about this if that would be helpful.)

I think it might already be interesting to write a BOLT/BLIP for that?
Having a single channel backed by multiple on-chain utxos is probably
interesting for splicing (adding or spending a utxo while keeping the
channel open on the other utxos might be able to be done more simply than
splicing in general), and having multiple utxos might let you increase
some of your channel limits, eg `max_accepted_htlcs` might be able to
be increased to 483*U where U is the number of UTXOs backing the channel.

> > It feels like there's a whole
> > mass of complications hidden in there from a routing perspective; like how
> > do you link "FG" back with "F" and "G", how do you decide fees, how do
> > you communicate fees/max_htlc/etc.
> Regarding the specific issues you raised:
> Q: How do you link "FG" back with "F" and "G"?
> A: In terms of gossiping and the channel graph, you don't

Yeah, I think that simplifies things substantially.

I think the main thing that misled me here was the "CD->E->FG" payment
chain -- it doesn't make sense to me why E would want to devote funds
that can only be used for rebalancing channels, but not normal payments.
Having that be CD->DE->FG seems like it would make much more sense in that
model. (Though, obviously, no one except D and E could necessarily tell
the difference between those two scenarios in practice, and just because
something doesn't make sense, doesn't mean nobody will actually do it)

The other thing was that going from N nodes to C*N channels, then
re-considering each of the C*N channels (A->B, etc) as also potentially
being nodes and adding an additional K*C*N channels (AB->CD, etc) seemed
like it might be quadratic to me. But it's probably not -- C (channels per
node) and K (utxos per channel) are probably constant or logarithmic in
N, so it's probably okay? 

On the other hand, I could see the rebalancing channels not actually
being very useful for routing payments (they require 3+ signatures,
and may not even be publicly connected to any of the level-1 nodes),
so it could make sense to just treat them as two different networks,
where regular people doing payments only see the base channels, but
high-availability nodes also find out about the rebalancing channels.
If so, then the extra nodes/channels in the rebalancing graph only affect
people who can afford to dedicate the resources to stor

Re: [Lightning-dev] Resizing Lightning Channels Off-Chain With Hierarchical Channels

2023-04-23 Thread Anthony Towns
On Sat, Apr 08, 2023 at 10:26:45PM +, jlspc via Lightning-dev wrote:
> From my perspective, this paper makes two contributions (which, to be fair, 
> may only be "interesting" :)

One thing that confuses me about the paper is how to think about routing
to a "channel" rather than a node -- ie the payment from "E->FG->A" where
"FG" isn't "F" or "G", but "both of them". It feels like there's a whole
mass of complications hidden in there from a routing perspective; like how
do you link "FG" back with "F" and "G", how do you decide fees, how do
you communicate fees/max_htlc/etc. I think it also implies that channel
capacity is no longer really something you can gossip very sensibly --
if you have a factory ((A,B),C,D,E) then every payment through AB to C
or D or E will decrease AB's channel capacity. You could still gossip the
capacity of the overall factory, and sum that to get an overall lightning
network capacity, of course. But a lot of the ways of simplifying it
also make it harder to do all the nice rebalancing.

Anyway, I've tried a few times now to put some thoughts together on that
and come up with nothing that I'm happy with, so figured it might be at
least worth posing explicitly as a problem...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Resizing Lightning Channels Off-Chain With Hierarchical Channels

2023-04-03 Thread Anthony Towns
On Tue, Apr 04, 2023 at 12:00:32AM +1000, Anthony Towns wrote:
> On Sat, Mar 18, 2023 at 12:41:00AM +, jlspc via Lightning-dev wrote:
> > TL;DR
> 
> Step 1: Tunable penalties;
>   https://github.com/JohnLaw2/ln-tunable-penalties
>   
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-October/003732.html
> 
> This is a clever constructions that lets you do a 2-party lightning
> channel with existing opcodes where cheating doesn't result in you
> losing all your funds (or, in fact, any of your in-channel funds).

Ah, a significant difference between this and eltoo is in the game
theory of what happens if you lose access to the latest state.

In eltoo, how things would work in that case, is that you would attempt
to close the channel to an old state that you do still remember (from a
backup), at which point either (a) your counterparty publishes a later
state, and you settle with that (possibly with you paying some modest
penalty if you're using a Daric-like protocol), or (b) your counterparty
does nothing, and you settle at the old state.

With tunable penalties, you are in more of a quandry. If you broadcast
an old "St" transaction to attempt to close to an old state, then your
counterparty will simply claim those funds and penalise you; however
there is nothing forcing them to publish any newer state as well. At
that point your counterparty can hold your share of the channel funds
hostage indefinitely.

Holding your funds hostage is probably an improvement on simply losing
them altogether, of course, so I think this is still a strict improvement
on ln-penalty (modulo additional complexity etc).

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Resizing Lightning Channels Off-Chain With Hierarchical Channels

2023-04-03 Thread Anthony Towns
On Sat, Mar 18, 2023 at 12:41:00AM +, jlspc via Lightning-dev wrote:
> TL;DR

Even with Harding's optech write ups, and the optech space, I barely
follow all this, so I'm going to try explaining it too as a way of
understanding it myself; hopefully maybe that helps someone. Corrections
welcome, obviously!

I think understanding all this requires going through each of the four
steps.

Step 1: Tunable penalties;
  https://github.com/JohnLaw2/ln-tunable-penalties
  
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-October/003732.html

This is a clever constructions that lets you do a 2-party lightning
channel with existing opcodes where cheating doesn't result in you
losing all your funds (or, in fact, any of your in-channel funds). It
also retains the ability to do layered commit transactions, that is you can
immediately commit to claiming an HTLC or that it's already timed out,
even while you're waiting for the to_self_delay to expire to ensure
you're not cheating.

The way that it works is by separating the flow of channels funds, from
the control flow. So instead of managing the channel via a single utxo,
we instead manage it via 3 utxos: F (the channel funds), InA (control
flow for a unilateral close by A), InB (control flow for a unilateral
close by B).

For each update to a new state "i", which has "k" HTLCs, we create 4 primary 
txs, and 8k HTLC claims.

  StAi which spends InA, and has k+1 outputs. The first output is used
  for controlling broadcast of the commitment tx, the remaining k are for
  controlling the resolution of each HTLC.

  ComAi is the commitment for the state. It spends the funding output
  F, and the first output of StAi. In order to spend StAi, it requires
  a to_self_delay (and signature by A), giving B time to object that i
  is a revoked state. If B does object, he is able to immediately spend
  the first output of StAi directly using the revocation information,
  and these funds form the penalty. It has k+2 outputs, one for the
  balance of each participant, and one for each HTLC.

  For each of the k HTLCs, we construct two success and two timeout
  transactions: (HAi-j-s, HAi-j-p); (HAi-j-t, HAi-j-r). HAi-j-s and
  HAi-j-t both spend the jth output of StAi, conditional either on a
  preimage reveal or a timeout respectively; HAi-j-p and HAi-j-r spend
  the output of HAi-j-s and HAi-j-t respectively, as well as the output
  of ComAi. (s=success commitment, t=timeout commitment, p=payment on
  success, r=refund)

  And Bob has similar versions of all of these.

So if Alice is honest, the process is:

  * Alice publishes StAi
  * Alice publishes HAi-j-{s,t} for any HTLCs she is able to resolve
immediately; as does Bob.
  * Alice waits for to_self_delay to complete
  * Alice publishes ComAi, and any HAi-j-{r,p} transactions she is able
to, and if desired consolidates her funds.
  * As any remaining HTLCs resolve, those are also claimed.
  * Bob's InB output is available to do whatever he wants with.

If Alice is dishonest, the process is:

  * Alice publishes StAi, and perhaps publishes some HAi-j-{s,t}
transactions.
  * Bob spends the first output of StAi unilaterally claiming the
  * penalty, meaning ComAi can now never be confirmed.
  * Bob publishes StBi', and continues with the honest protocol.

Bob only needs the usual O(log(n)) state in order to be able to
reconstruct the key to spend the first output of revoked StAi txs.
Because that prevents the corresponding ComAi from ever being published,
no revoked HTLC-related state can make it on chain in any way that Bob
needs to care about.

If both Alice and Bob are dishonest (Alice tries to cheat, but Bob
restored from an old backup and also publishes a revoked state) then
both the StAi and StBi' may have their first output claimed by the other
party, in which case the channels funds are lost (unless Alice and Bob
manage to agree to a cooperative close somehow, even after all the
attempts to cheat each other).

While 4+8k transactions per state is a lot, I think you only actually
need 2+4k signatures in advance (StAi and HAi-j-{s,t} only need to be
signed when they're broadcast). Perhaps using ANYPREVOUT would let you
reduce the number of HTLC states?

Step 2: Efficient Factories for Lightning Channels
 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2023-January/003827.html
 https://github.com/JohnLaw2/ln-efficient-factories

This generalizes the tunable penalties setup for more than two
participants.

The first part of this is a straightforward generalisation, and
doesn't cover HTLCs. Where we had 2(2+4k) transactions previously, we
presumably would now have P(2+4k) transactions, where P is the number
of participants.

The second part of this aims to avoid that factor P. It does this by
introducing Trigger and Mold transactions.

To do this, we first establish the number of states that the factory
will support; perhaps 2**40. In that case, the trigger transaction will
spend the fundi

Re: [Lightning-dev] Taro: A Taproot Asset Representation Overlay

2023-02-06 Thread Anthony Towns
On Mon, Apr 11, 2022 at 02:59:16PM -0400, Olaoluwa Osuntokun wrote:

Thread necromancy, but hey.

> > anything about Taro or the way you plan to implement support for
> > transferring fungible assets via asset-aware LN endpoints[1] will address
> > the "free call option" problem, which I think was first discussed on this
> > list by Corné Plooy[2] and was later extended by ZmnSCPxj[3], with Tamas
> > Blummer[4] providing the following summary
> 
> I agree w/ Tamas' quote there in that the problem doesn't exist for
> transfers using the same asset. Consider a case of Alice sending to Bob,
> with both of them using a hypothetical asset, USD-beef: if the final/last
> hop withholds the HTLC, then they risk Bob not accepting the HTLC either due
> to the payment timing out, or exchange rate fluctuations resulting in an
> insufficient amount delivered to the destination (Bob wanted 10 USD-beef,
> but the bound BTC in the onion route is only now 9 USD-beef), in either case
> the payment would be cancelled.

I don't think this defense actually works. If you have:

 Alice -> Bob -> Carol -> Dave -> Elizabeth

with Alice/Bob and Dave/Elizabeth having USD channels, but
Bob/Carol and Carol/Dave being BTC channels, then Dave has
a reasonable opportunity to cheat:

 - he can be pretty confident that Elizabeth is the final recipient
   (since USD is meant to be at the edges, and this is a BTC to USD
   conversion)

 - he knows the expected USD value of the payment to Elizabeth

 - he knows what the on-chain timeout of the USD payment to Elizabeth
   will be, because he shares the channel, so can likely be confident
   Elizabeth won't cancel the tx as long as he forwards it to her by then

 - he can hold up the outbound USD payment while holding onto the
   inbound BTC payment, only forwarding the payment on to Elizabeth if
   the price of BTC stays the same or increases.

I'm not an expert, but I tried a Black Scholes calculator with an
estimate for Bitcoin's volatility, and it suggests that the fair price
of an option like that that lasts an hour is about 0.3% of the par value
(ie, for a $1000 payment, the ability to hold up the BTC/USD conversion
for an hour and only do it when it's profitable, is worth about $3). That
seems substantial compared to normal lightning fee rates, which I think
are often in the 0.01% to 0.1% range?

(Note that this way of analysing the free option problem means it's
only an issue when the two assets have high volatility -- if they're
sufficiently correlated, like USDT and USDC, or perhaps even USD and EUR,
then the value of the free option is minimised, perhaps to the point
where it's not worth trying to exploit)

Bob may have a similar ability to interfere with the payment, but is
much more constrained: he probably doesn't know Elizabeth's timeout;
and if he's making a profit because the price of BTC has gone down,
then Dave is likely to cancel the transaction rather than forwarding it
to Elizabeth, since he'd be making a lock when converting the BTC amount
to its pre-drop USD value. However, if there wasn't a followup conversion
back from BTC to USD, and Bill was willing to guess at the final timeout
of the payment, he could still make a profit from delaying payments.
(Though it's also less harmful here: only the Alice/Bob funds are being
held up indefinitely, not the funds from random channels)

I think maybe a better approach might be:

 Alice -> Bob -BTC-> Carol -BTC-> Elizabeth -BTC-> Dave -USD-> Elizabeth

That is, Alice sends $100 to Bob who forwards 0.004 BTC (or whatever) to
Carol and then Elizabeth; then, before accepting the payment, Elizabeth
extends the path with a BTC/USD exchange with Dave via a short loop. If
Dave doesn't immediately forward the USD to Elizabeth, she can cancel
the transaction, refunding Carol all the way back to Alice, even while
waiting for Dave. She doesn't need to be concerned that Dave could
claim funds from her, as all the transfers are conditional on a secret
only Elizabeth knows, and that she has not yet revealed. If Dave tries
exploiting the free option, Elizabeth can see he doesn't reliably finish
the loop quickly, and try finding another, better, exchange.

That approach also means Alice doesn't need to know what Elizabeth's
currency preference is; she's just sending BTC, so she only needs to
know about the exchange rate between BTC and her own currency, which
seems like it means there's one less thing that could go wrong.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Async payments proof-of-payment: a wishlist for researchers]

2023-01-29 Thread Anthony Towns
es "m", "R1", "R2" onto Bob once he's online and sends
the payment to Bob.
 
 5) Bob checks that R1/R2 were what he generated and haven't already
been used; Bob checks that "m" is something he's willing to sign;
Bob calculates s and S, and accepts the payment for S, provided it's
the correct amount as specified in "m", by revealing s.

 6) Alice already calculated R and now receives s from Louise when
Louise claims her funds, and (R,s) is a BIP340 signature of m by
Bob, satisfying s*G = R + H(R,P,m)*P, and that signature serves as
her payment receipt from Bob.

Cheers,
aj

On Thu, Jan 26, 2023 at 11:04:12AM +1000, Anthony Towns wrote:
> On Tue, Jan 10, 2023 at 07:41:09PM +, vwallace via Lightning-dev wrote:
> > The open research question relates to how the sender will get an invoice 
> > from the receiver, given that they are offline at sending-time.
> 
> Assuming the overall process is:
> 
>  * Alice sends a payment to Bob, who has provided a reusable address
>AddrBob
>  * Bob is offline at the time the payment is sent, but his semi-trusted
>LSP Larry is online
>  * Alice is willing/able to do bidirectional communication with Larry
>  * The payment does not complete until Bob is online (at which point
>Alice may be offline)
> 
> I think in this case you want to aim for the receipt to be a BIP340
> signature of the message "Alice has paid me $50 -- signed Bob".
> 
> Given Bob's public signature nonce, R, Alice (and Larry) can calculate
> S = R + H(R,P,m)*P (m is the receipt message, P is Bob's public key),
> and then Alice can send a PTLC conditional on revealing the log of S, ie
> s where s*G=S; and at that point (s, R) is a valid signature by Bob of a
> message confirming payment to Bob, which then serves as the final receipt.
> 
> However for this to work, Alice needs to discover "R" while Bob is
> offline. I think this is only doable if Bob pre-generates a set of
> nonces and shares the public part with Larry, who can then share them
> with potential payers.  I think to avoid attacks via Wagner's algorithm,
> you probably need to do a similar setup as musig2 does, ie share (R1,R2)
> pairs, and calculate R = H(P,R1,R2,m)*R1+R2.
> 
> So a setup like:
> 
>   Alice gets AddrBob. Decodes Bob's pubkey, Larry's pubkey, and the
>   route to Larry.
> 
>   Alice -> Larry: "Hi, I want to send Bob $50, and get a receipt"
>   Larry -> Alice: "The nonce for that will be R"
>   Alice: calculates m = "Hash("Alice paid Bob $50"), S = R+H(R,P,m)*P
>   Alice -> Larry(for Bob): PTLC[$50, S]
> 
>   Larry -> Bob: PTLC[$50, S]
> Alice wants to pay you $50, using nonce pair #12345
>   Bob: verifies nonce #12345 has not been previously used, calculates R,
>calculates m, calculates s, and checks that s*G = S, checks
>there's a $50 PTLC conditional on S waiting for confirmation.
>   Bob -> Alice: claims $50 from PTLC by revealing s
> 
>   Alice: receives s; (R,s) serves as Bob's signature confirming payment
> 
> seems plausible?
> 
> Every "S" here commits to a value chosen by the sender (ie, their
> "identity"), so there's no way for Larry to get two different payers
> to use the same S. Using the same nonce twice will just mean Bob has to
> reject the payment (and find a new LSP).
> 
> It may make sense to require Alice to make a micropayment to Larry in
> order to claim a nonce. You'd want a standard template for "m" so that
> it's easy to generate and parse consistently, of course.
> 
> I think you could even have separate LSPs if you wanted: one to issue
> nonces while you're offline, and the other to actually hold onto incoming
> PTLCs while you're offline.
> 
> FWIW, some previous discussion, which didn't focus on offline recipients:
> 
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001034.html
> 
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001490.html

- End forwarded message -
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Async payments proof-of-payment: a wishlist for researchers

2023-01-25 Thread Anthony Towns
On Tue, Jan 10, 2023 at 07:41:09PM +, vwallace via Lightning-dev wrote:
> The open research question relates to how the sender will get an invoice from 
> the receiver, given that they are offline at sending-time.

Assuming the overall process is:

 * Alice sends a payment to Bob, who has provided a reusable address
   AddrBob
 * Bob is offline at the time the payment is sent, but his semi-trusted
   LSP Larry is online
 * Alice is willing/able to do bidirectional communication with Larry
 * The payment does not complete until Bob is online (at which point
   Alice may be offline)

I think in this case you want to aim for the receipt to be a BIP340
signature of the message "Alice has paid me $50 -- signed Bob".

Given Bob's public signature nonce, R, Alice (and Larry) can calculate
S = R + H(R,P,m)*P (m is the receipt message, P is Bob's public key),
and then Alice can send a PTLC conditional on revealing the log of S, ie
s where s*G=S; and at that point (s, R) is a valid signature by Bob of a
message confirming payment to Bob, which then serves as the final receipt.

However for this to work, Alice needs to discover "R" while Bob is
offline. I think this is only doable if Bob pre-generates a set of
nonces and shares the public part with Larry, who can then share them
with potential payers.  I think to avoid attacks via Wagner's algorithm,
you probably need to do a similar setup as musig2 does, ie share (R1,R2)
pairs, and calculate R = H(P,R1,R2,m)*R1+R2.

So a setup like:

  Alice gets AddrBob. Decodes Bob's pubkey, Larry's pubkey, and the
  route to Larry.

  Alice -> Larry: "Hi, I want to send Bob $50, and get a receipt"
  Larry -> Alice: "The nonce for that will be R"
  Alice: calculates m = "Hash("Alice paid Bob $50"), S = R+H(R,P,m)*P
  Alice -> Larry(for Bob): PTLC[$50, S]

  Larry -> Bob: PTLC[$50, S]
Alice wants to pay you $50, using nonce pair #12345
  Bob: verifies nonce #12345 has not been previously used, calculates R,
   calculates m, calculates s, and checks that s*G = S, checks
   there's a $50 PTLC conditional on S waiting for confirmation.
  Bob -> Alice: claims $50 from PTLC by revealing s

  Alice: receives s; (R,s) serves as Bob's signature confirming payment

seems plausible?

Every "S" here commits to a value chosen by the sender (ie, their
"identity"), so there's no way for Larry to get two different payers
to use the same S. Using the same nonce twice will just mean Bob has to
reject the payment (and find a new LSP).

It may make sense to require Alice to make a micropayment to Larry in
order to claim a nonce. You'd want a standard template for "m" so that
it's easy to generate and parse consistently, of course.

I think you could even have separate LSPs if you wanted: one to issue
nonces while you're offline, and the other to actually hold onto incoming
PTLCs while you're offline.

FWIW, some previous discussion, which didn't focus on offline recipients:

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001034.html

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-November/001490.html

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Two-party eltoo w/ punishment

2023-01-05 Thread Anthony Towns
On Thu, Jan 05, 2023 at 06:59:42PM -0500, Antoine Riard wrote:
> > A simple advantage to breaking the symmetry is that if A does a unilateral
> > close, then B can immediately confirm that closure releasing all funds
> > for both parties. Without breaking the symmetry, you can't distinguish
> > that case from B attempting to confirm his own unilateral close, which
> > would allow B to cheat.
> Yes, IIUC the proposed flow is UA.n -> CB.n -> money, and in this
> optimistic case, there is only one CLTV delay to respect the spend of the
> UA.

The only delay in the UA.n/CB.n case is if someone's trying to redeem
an HTLC that times out in the future, in which case you might have UA.n,
CB.n, (CB.n -> A's balance), (CB.n -> B's balance), (CB.n -> A claiming
HTLC x with preimage x') all in the same block, but (CB.n -> A claiming
HTLC y at timeout) happening 100 blocks later, when y times out.

> (Note on the gist, the UA state description includes a Pa or tapscript "IF
> CODESEP n OP_CLTV DROP ENDIF 1 CHECKSIG" as spendable paths and the CA.n
> state description nSequence = 0, so I'm not sure how the update/justice
> delay is enforced)

(Note that the CLTV here is just for eltoo state ratcheting, and is
always in the past so doesn't imply an actual delay)

CA.n is only able to spend UB.n, not UA.n. (Or it can spend WA.n, but
WA.n can only spend UB.k or prior WA.k, so it means much the same
thing).

That's achievable by having the CA.n signature use ANYPREVOUT rather than
ANYPREVOUTANYSCRIPT (thus committing to UB.n/WA.n's shared scriptPubKey)
and having different scriptPubKey's between UA.n and UB.n (which breaks
the symmetry).

SA.n spends UA.n or WB.n in the same way, except also sets nSequence to
force a relative timelock.

> > If Alice is dishonest, and posts a very old state (n-x), then Bob could
> > post up to x watchtower txs (WB.(n-x+1) .. WB.n) causing Alice to be
> > unable to access her funds for up to (x+1)*to_self_delay blocks. But
> > that's just a reason for her to not be dishonest in the first place.
> So I think there still is the case of Bob broadcasting a very old state and
> Alice's watchtowers colluding to prevent Alice's honest funds access,
> potentially preventing the HTLC-timeout, IIUC.

Alice was the dishonest one here, so it'd be Alice broadcasting an old
state, preventing Bob from accessing funds.

If you're not online and have no honest watchtowers, then Alice can just
broadcast an old state, wait for the delay, and confirm the old state
(UA.k -> SA.k), and steal as much as she wants.

If you are online, or have honest watchtowers, then your honest CB.n
or WB.n can be confirmed in the same block as 2000 dishonest WB.(k+1),
WA.(k+2) txs. The point of having a watchtower helping you out is that the
watchtower can do fancier things than your lightning node on your phone,
like observe the mempool and potentially have direct relationships with
mining pools to overcome things like the 25 tx ancestor/descendant limit.

> I don't know if we're not
> introducing some changes in the trust assumptions towards watchtowers where
> with vanilla eltoo a single compromised watchtower can be corrected by the
> honest channel holder or another watchtower, iirc.

The same scenario applies in traditional eltoo, except in that case
Alice doesn't need to compromise any of Bob's watchtowers, she can
just broadcast multiple states herself -- since the txs are symmetric,
there's no difference between Alice.1 -> Alice.2 and Alice.1 -> Bob.2;
so you can't allow the latter while preventing the former (and there's
likewise no difference between those and Alice.1 -> Watchtower.2).

> > No -- the RB.n transactions immediately release A's funds after applying
> > the penalty, so if the watchtower colludes with A and has an old RB.y
> > transaction, Alice can steal funds by posting UA.x and RB.y, provided that
> > her balance now is sufficiently less than her balance then (ie bal.n <
> > bal.y - penalty).
> >
> > In this model, Bob shouldn't be signing RB.n or CB.n txs until Alice
> > has already started a unilateral close and posted UA.n/UA.k.
> So the penalty transactions should not be delegated to untrusted
> watchtowers. 

Yes.

> In case of RB.n signing key compromise, the whole channel
> funds might be lost.

Compromise of pretty much any of the signing keys allows all the channel
funds to be lost; this is always true of the key used for signing
cooperative closes, for instance.

If you do want to delegate punishment, you could probably have an
alternative setup where every watchtower transaction implies punishment.

(I assume watchtower punishment needs to be all or nothing, otherwise a
compromised watchtower would just rbf any attempts to punish, switching
them over to non-punishment, which then encourages attackers to compromise
watchtowers (and prioritise attacking people who use their compromised
watchtowers), and you'd end up with "nothing" anyway...)

Something like:

no-punishment:
  UA.n -> delay -> SA.

Re: [Lightning-dev] Swap-in-Potentiam: Moving Onchain Funds "Instantly" To Lightning

2023-01-03 Thread Anthony Towns
On Wed, Jan 04, 2023 at 01:06:36PM +1100, Lloyd Fournier wrote:
> The advantage of using a covenant
> is that the channel would not have an expiry date and therefore be a first
> class citizen among channels.

I think the approach here would be:

 * receive funds on the in-potentiam address with 8000 block CSV
 * LSP tracks immediately
 * user's wallet wakes up, LSP reports address to user, user signs
   a funding tx to establish the channel, and the state 0 close tx
 * LSP considers it immediately active
 * LSP broadcasts the tx, targeting confirmation within 3 days
 * if the funding tx confirms promptly, you just have an ordinary channel
 * after 800 blocks, if the tx hasn't confirmed, LSP fee bumps or
   closes the channel (relying on the high-feerate unilateral close
   tx to do the fee bumping)

ie:

 day 0: someone -> in-potentiam address (payment made on-chain, confirmed)

 day 7: in-potentiam -> funding (wallet wakes up, tx signed and broadcast,
  not necessarily confirmed, channel active)

 day 12: in-potentiam -> funding (confirmed)

 day : funding -> unilateral/cooperative close

or:

 day 0: someone -> in-potentiam address (payment made on-chain, confirmed)

 day 14: LSP forgets about in-potentiam utxo as its expiry is only 1000
 blocks away

 day 420: in-potentiam -> wherever (payment made on-chain by user)

So while the tx introspection approach you advocate *would* allow the
setup phase to skip the "expiry on day 14" restriction, I think the
*bigger* benefit is that you also wouldn't need the on-chain "in-potentiam
-> funding" transaction, but could instead just leave the in-potentiam
tx on chain indefinitely, until it was time to close the channel (which,
if it was a cooperative close, could just be a musig key path spend).

Either approach probably implies that you either have multiple channels
with your LSP (one for each in-potentiam payment you receive), or
that your single channel with your LSP is backed by multiple UTXOs
(maybe you choose an order for them, so that Alice owns 100% of the
balance in utxos 1..(k-1) and Bob owns 100% of the balance in utxos
(k+1)..n?). Otherwise you'd need an on-chain tx anyway to splice the new
funds into your existing channel; and that seems both annoying of itself,
and probably bad for privacy.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] "Updates Overflow" Attacks against Two-Party Eltoo ?

2022-12-13 Thread Anthony Towns
On Tue, Dec 13, 2022 at 08:22:55PM -0500, Antoine Riard wrote:
> >  prior to (1): UA.k (k <= n) -- However this allows Bob to immediately
> >  broadcast one of either CA.n or RA.n, and will then have ~150 blocks
> >  to claim the HTLC before its timeout
> From my understanding, with two party eltoo w/punihsment, UA.k has a
> tapscript path with "1 CHECKSIGVERIFY k substituted is "musig(A,B)/1. Mallory should receive Bob's signature for
> UA.k, though also UA.k+1, UA.k+2, UAk+3, until k=n.

Yes, Mallory can be assumed to be able to generate signatures for UA.0
through UA.n. They all spend the funding transaction (only) though,
so she can only choose one of them, which I called UA.k above.

More particular, I'm imagining scriptPubKeys something like:

  F: taproot(AB)

  UA.n: taproot(AB/1, "IF CODESEP  CLTV DROP ENDIF OP_1 CHECKSIG")
  WB.n: taproot(AB/1, "IF CODESEP  CLTV DROP ENDIF OP_1 CHECKSIG")

  UB.n: taproot(AB/2, "IF CODESEP  CLTV DROP ENDIF OP_1 CHECKSIG")
  WA.n: taproot(AB/2, "IF CODESEP  CLTV DROP ENDIF OP_1 CHECKSIG")

where AB=musig(A,B) and AB/1 and AB/2 are unhardened HD subkeys of AB.
(The outputs of SA/RA/CA and SB/RB/CB are the balances and active HTLCs)

Then I think the following setup works to allow each transaction to only
spend from the transactions that it's supposed to:

  UA.n have ALL or SINGLE|ANYONECANPAY signatures spending F with key
AB.

  CA.n/WA.n have ANYPREVOUTANYSCRIPT signatures with codesep_pos=2
against AB/2, with locktime set to n

  RA.n has an ANYPREVOUTANYSCRIPT signature with codesep_pos=2
against AB/2, with locktime set to n-1

  SA.n has an ANYPREVOUT signature with codesep_pos=
against AB/1, with nSequence enforcing to_self_delay

B's signatures are similar, swapping AB/2 and AB/1.

(In order to do the fast forward stuff via scriptless scripts, you also
need F to have an "A CHECKSIGVERIFY B CHECKSIG" tapscript path as well,
and there's probably other things I've glossed over)

> Or is this a tapscript only existing for the dual-funding case ? I think
> this a bit unclear from the gist construction, how Mallory is restrained to
> use the tapscript path on UA.k, with UA.k+1 as she should be in possession
> of Bob's signature for this state.

You lock Mallory into using a particular signature with a particular
script template by only using the key for that signature within that
script template, and you lock them into using a particular path through
that script via use of OP_CODESEPARATOR.

> While update transaction 1 could spend update transaction 0 immediately,
> there is no reliable knowledge by U*.1 transaction broadcaster of the state
> of the network mempools.

That doesn't need to be true; we can easily have lightning nodes
gossip mempool state for channel closes by pattern matching on the
close transaction, including offering "catchup" info for nodes that
were offline, even if that isn't something we do for regular mempool
transactions.

I don't really think getting into the weeds on that now is very productive
though; it's still an open question whether we can get eltoo working in
a laboratory environment, let alone in the wild.

> While I think this solution of eltoo nodes quickly replacing any state K
> previous to the latest state N, there is no guarantee the latest state K
> doesn't offer a higher feerate than state N, making it more attractive to
> the miners.

I think there's really two situations here: one is where miners are
just running friendly mempool software that tries to do right by the
network, in which case "always update to the newest state, even if the
fee rate goes down" is probably workable; the other is where miners want
to profit maximise on every single block and will run MEV software; in
which case all we need is for the final state to be relayed -- provided
its at a reasonable feerate, the MEV miner will include it on top of the
high-fee paying chain of earlier states, even if that would mean it has
"too many" in-mempool descendants.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] "Updates Overflow" Attacks against Two-Party Eltoo ?

2022-12-12 Thread Anthony Towns
On Mon, Dec 12, 2022 at 08:38:43PM -0500, Antoine Riard wrote:
> The attack purpose is to delay the confirmation of the final settlement
> transaction S, to double-spend a HTLC forwarded by a routing hop.
> The cltv_expiry_delta requested by Ned is equal to N=144.

I believe what you're suggesting here is:

  Mallory has two channels with Bob, M1 and M2. Both have a to_self_delay
  of 144 blocks. In that case cltv_expiry_delay should include some slack,
  I'm going to assume it's 154 blocks in total.

  Mallory forwards a large payment, M1->Bob->M2.

  Mallory claims the funds on M2 just prior to the timeout, but
  goes offline on M1.

  Bob chose the timeout for M2 via cltv_expiry_delay, so now has 154
  blocks before the CLTV on the M1->Bob payment expires.

In this scenario, under the two-party eltoo scheme, Bob should:

  1) immediately broadcast the most recent UB.n state for M1/Bob,
 aiming for this to be confirmed within 5 blocks

  2) wait 144 blocks for the relative timelock to expire

  3) broadcast SB.n to finalise the funds, and immediately claim the
 large HTLC. providing this confirms within 5 blocks, it will confirm
 before the HTLC timelock expires, and Mallory will have been unable
 to claim the funds.

The only transactions Mallory could broadcast are:

  prior to (1): UA.k (k <= n) -- However this allows Bob to immediately
  broadcast one of either CA.n or RA.n, and will then have ~150 blocks
  to claim the HTLC before its timeout

  during (2): CA.n -- Again, this allows Bob to claim the HTLC
  immediately, prior to its timeout

The only delaying attack with repeated transactions comes if Bob
broadcasts an old state UB.k (k < n), in which case Mallory can broadcast
(n-k) WA.i watchtower transactions prior to finalising the state. However
if Bob *only* has old state, Mallory can simply broadcast WA.n, at which
point Bob can do nothing, as (by assumption) he doesn't have access
to current state and thus doesn't have SB.n to broadcast it.

> The attack scenario works in the following way: Malicia updates the Eltoo
> channel N time, getting the possession of N update transactions. At block
> A, she breaks the channel and confirms the update transaction 0 by
> attaching a feerate equal to or superior to top mempool block space + 1
> sat. At each new block, she iterates by confirming the next update
> transaction, i.e update transaction 1 at block A+1, update transaction
> transaction 2 at block A+2, update transaction 3 at block A+3, ...

I think traditional eltoo envisages being able to spend update transaction
1 immediately, without having to wait for the next block.  This might
not be compatible with the version 3 relay rules that are being thought
about, though, and presumably would hit ancestor limits.

I think a simple way to avoid that problem would be for eltoo nodes
to have a priority tx relay network -- if they see a channel close to
state N, always replace any txs closing to an earlier state K From Ned's viewpoint, there is limited rationality of the network mempools,
> as such each punishment transaction R, as it's confirmation could have been
> delay due to "honest" slow propagation on the network is likely to be
> pre-signed with top mempool block space feerate, but not more to save on
> fees. Therefore, transaction RN.0 should fail to punish update transaction
> 0 as it's double-spent by update transaction 1, transaction RN.1 should
> fail to punish update transaction 1 as it's double-spent by update
> transaction 2, transaction RN.2 should fail to punish update transaction 2
> as it's double-spent by update transaction 3...

In the two-party scheme, the only transaction Mallory can broadcast
after sending UA.k and having it confirmed on chain is SA.k, and that
only after a 144 block relative timelock. UA.(k+1) etc only spend the
funding output, but that has already been spent by UA.k.

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Two-party eltoo w/ punishment

2022-12-08 Thread Anthony Towns
On Thu, Dec 08, 2022 at 02:14:06PM -0500, Antoine Riard wrote:
> >  - 2022-10-21, eltoo/chia:
> https://twitter.com/bramcohen/status/1583122833932099585
> On the eltoo/chia variant, from my (quick) understanding, the main
> innovation aimed for is 

I'd say the main innovation aimed for is just doing something like
lightning over the top of chia (rather than bitcoin, liquid, ethereum
etc), and making it simple enough to be easily implemented.

> the limitation of the publication of eltoo states
> more than once by a counterparty, by introducing a cryptographic puzzle,
> where the witness can be produced once and only once ? I would say you
> might need the inheritance of the updated scriptpubkey across the chain of
> eltoo states, with a TLUV-like mechanism.

Chia uses different terminology to bitcoin; "puzzle" is just what we call
"scriptPubKey" in bitcoin, more or less. Since its scripting capabilities
are pretty powerful, you can rig up a TLUV/OP_EVICT like mechanism, but
for a two-party setup, in practice I think that mostly just means you
can encode the logic directly as script, and when updating the state you
then only need to exchange CHECKSIGFROMSTACK-like signatures along the
lines of "state N implies outputs of A,B,C,... -- Alice", rather than
signing multiple transactions.

> > The basic idea is "if it's a two party channel with just Alice and Bob,
> > then if Alice starts a unilateral close, then she's already had her say,
> > so it's only Bob's opinion that matters from now on, and he should be
> > able to act immediately", and once it's only Bob's opinion that matters,
> > you can simplify a bunch of things.
> From my understanding, assuming Eltoo paper terminology, Alice can publish
> an update K transaction, and then after Bob can publish an update
> transaction K can publish an update transaction N. The main advantage of this
> construction I can see is a strict bound on the shared_delay encumbered in
> the on-chain publication of the channel ?

If you have fully symmetric transactions, then you could have the
situation where Alice broadcasts update K, then attacks Bob and when
he attempts to post update N, she instead does a pinning attack by
broadcasting update K+1 (spending update K), which then forces Bob to
generate a new version update N, which she then blocks with update K+2,
etc. An attack like that is presumably pretty difficult to pull off in
practice, but it makes it pretty hard to reason about many of the limits.

A simple advantage to breaking the symmetry is that if A does a unilateral
close, then B can immediately confirm that closure releasing all funds
for both parties. Without breaking the symmetry, you can't distinguish
that case from B attempting to confirm his own unilateral close, which
would allow B to cheat.

> > fast forwards: we might want to allow our channel partner
> > to immediately rely on a new state we propose without needing a
> > round-trip delay -- this potentially makes forwarding payments much
> > faster (though with some risk of locking the funds up, if you do a
> > fast forward to someone who's gone offline)
> 
> IIRC, there has already been a "fast-forward" protocol upgrade proposal
> based on update-turn in the LN-penalty paradigm [0]. I think reducing the
> latency of HTLC propagation across payment paths would constitute a UX
> improvement, especially a link-level update mechanism upgrade deployment
> might be incentivized by routing algorithms starting to penalize routing
> hops HTLC relay latency. What is unclear is the additional risk of locking
> the funds up. If you don't receive acknowledgement the fast forward state
> has been received, you should still be able to exit with the state N-1 ?

Yes, you can unilaterally close the channel with state N-1; but even
then they might respond by bumping to state N anyway. If that happens,
then the funds can remain locked up until the timeout, as you can no
longer time the htlc out off-chain.

Still, if it's one hung per htlc for the channel's entire lifetime
(because you close it "immediately" when it happens), that's probably
not going to cause problems frequently...

> > doubled delays: once we publish the latest state we can, we want to
> > be able to claim the funds immediately after to_self_delay expires;
> > however if our counterparty has signatures for a newer state than we
> > do (which will happen if it was fast forwarded), they could post that
> > state shortly before to_self_delay expires, potentially increasing
> > the total delay to 2*to_self_delay.
> 
> While the 2*to_self_delay sounds the maximum time delay in the state
> publication scenario where the cheating counterparty publishes a old state
> then the honest counterparty publishes the latest one, there could be the
> case where the cheating counterparty broadcast chain of old states, up to
> mempool's `limitancestorcount`. However, this chain of eltoo transactions
> could be replaced by the honest party paying a higher-feerate (a

[Lightning-dev] Two-party eltoo w/ punishment

2022-12-06 Thread Anthony Towns
Hi all,

On the eltoo irc channel we discussed optimising eltoo for the 2-party
scenario; figured it was probably worth repeating that here.

This is similar to:

 - 2018-07-18, simplified eltoo: 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-July/001363.html
 - 2021-09-17, IID 2Stage, 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-September/019470.html
 - 2022-09-29, Daric: https://eprint.iacr.org/2022/1295
 - 2022-10-21, eltoo/chia: 
https://twitter.com/bramcohen/status/1583122833932099585

The basic idea is "if it's a two party channel with just Alice and Bob,
then if Alice starts a unilateral close, then she's already had her say,
so it's only Bob's opinion that matters from now on, and he should be
able to act immediately", and once it's only Bob's opinion that matters,
you can simplify a bunch of things.

A "gist" for this idea is 
https://gist.github.com/ajtowns/53e0f735f4d5c06a681429d937200aa5 (it goes into 
a little more detail in places, though doesn't cover trustless watchtowers at 
all).



In particular, there are a few practical constraints that we might like
to consider for 2-party channels with eltoo:

 - fast forwards: we might want to allow our channel partner
   to immediately rely on a new state we propose without needing a
   round-trip delay -- this potentially makes forwarding payments much
   faster (though with some risk of locking the funds up, if you do a
   fast forward to someone who's gone offline)

 - doubled delays: once we publish the latest state we can, we want to
   be able to claim the funds immediately after to_self_delay expires;
   however if our counterparty has signatures for a newer state than we
   do (which will happen if it was fast forwarded), they could post that
   state shortly before to_self_delay expires, potentially increasing
   the total delay to 2*to_self_delay.

 - penalties: when you do a unilateral close, attempting to cheat comes
   with no cost to you and a possible benefit if you succeed, but
   potentially does cost your channel partner (either in forcing them
   to spend on-chain fees to update to the correct state, or in the risk
   of loss if their node malfunctions occassionally) -- a penalty could
   reduce this incentive to cheat

 - trustless watchtowers: we may want to consider the possibility of a
   watchtower holding onto obsolete states and colluding with an
   attacker to attempt to cheat us

What follows is a rough approach for dealing with all those issues for
two-party channels. It's spelled out in a little more detail in the gist.

(I think for initial eltoo experimentation it doesn't make sense to try to
deal with all (or perhaps any) of those constraints; simple and working
is better than complex and theoretical. But having them written down so
the ideas can be thought about and looked up later still seems useful)

In more detail: unilateral closes are handled by each channel participant
maintaining five transactions, which we'll call:

 * UA.n, UB.n : unilaterally propose closing at state n
   - this is for Alice or Bob to spend the funding tx for a unilater
 close to state n. Spends the funding transaction.

 * WA.n, WB.n : watchtower update to state n
   - this is for an untrusted watchtower to correct attempted cheating
 by Bob on behalf of Alice (or vice-versa). Spends UB.k or WA.k
 (or UA.k/WB.k) respectively, provided k < n.

 * CA.n, CB.n : cooperatively claim funds according to state n
   - this is for Alice to confirm Bob's unilateral close (or vice-versa).
 Spends UB.k, WA.k (or UA.k/WB.k respectively), provided k <= n

 * SA.n, SB.n : slowly claim funds according to state n
   - this is for Alice to claim her funds if Bob is completely offline
 (or vice-versa). Spends UA.n, UB.n, WA.n or WB.n with relative
 timelock of to_self_delay.

 * RA.n, RB.n : claim funds with penalty after unilateral close to
   revoked state
   - this is for Alice to update the state if Bob attempted to cheat
 (or vice-versa). Spends UB.k or WA.k (or UA.k/WB.k respectively)
 conditional on k < n - 1; outputs are adjusted to transfer a fixed
 penalty of penalty_msat from Bob's balance to Alice's (or vice-versa)

Each of these "transactions" requires a pre-signed signature; however
the actual transaction/txid will vary in cases where a transaction has
the possibility of spending different inputs (eg "Spends UB.k or WA.k").
In particular UA.n/UB.n can be constructed with known txids and non-APO
signatures but WA.n/WB.n/CA.n/CB.n/SA.n/SB.n/RA.n/RB.n all require
APO signatures.

They're named such that Alice can immediately broadcast all the *A.n
transactions (provided a tx that it can spend has been broadcast) and
Bob can likewise immediately broadcast all the *B.n transactions.

Scenarios where Alice decides to unilaterally close the channel might
look like:

 * if Alice/Bob can't communicate directly, but both are online:

 F -> UA.n -> CB.n -> money

   (balances and h

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-28 Thread Anthony Towns
On Thu, Sep 29, 2022 at 12:41:44AM +, ZmnSCPxj wrote:
> > I get what you're saying, but I don't think a "stock of liquidity"
> > is a helpful metaphor/mental model here.
> > "Liquidity" usually means "how easy it is to exchange X for Y" -- assets
> > for cash, etc; but for lightning, liquidity is guaranteed by being
> > able to drop to chain. Likewise, "running out of stock" isn't usually
> > something that gets automatically fixed by someone else coming in and
> > buying something different.
> Semantics.
> You got what I am saying anyway.

Semantics are important. If you choose the wrong analogies, you'll jump
to the wrong conclusions, which I think you're doing here.

> So let me invent a completely new term derived from my local `/dev/random`, 
> "IPpvHg".

If you're going to make up words, at least make them pronouncable...
apt-get install pwgen; pwgen -0A maybe. But there's no need to make
up words; these aren't completely novel concepts, and existing terms
describe the concepts pretty well.

> A patient and rich forwarding node can buy out the IPpvHg stock of many 
> cheaper nodes,

I just spent a lot of words explaining why I disagree with that claim.
Restating it doesn't really seem constructive.

> and that I think is what we are mostly seeing in the network.

I don't really agree. I think we're seeing a combination of unbalanced
overall flows due to an insufficiently circular economy (which would
perhaps be eased by more custodial wallets/exchanges supporting lightning)
and the combination of a lack of any way to limit channel flow other
than raising fees and an inability to dynamically change fees on a
minute-by-minute timescale.

> > (Also, you don't earn 0 profit on an imbalanced channel; you're just
> > forced to stop accepting some txs. Every time you forward $1 in the
> > available direction, you become able to forward another $1 back in the
> > saturated direction; and you earn fees on both those $1s)
> But that is based on the existence of a stock of IPpvHg in another channel.

No, it's not. It applies even if there is only one channel in the
entire network (though I guess that channel would have to be between two
custodial entities, or there wouldn't be any point charging fees in the
first place).

> Actual forwarding node operators classify their peers as "mostly a source" 
> and "mostly a drain" and "mostly balanced", they want CLBOSS to classify 
> peers similarly.

"mostly a source" should trigger rate limiting in one direction, "mostly
a drain" should trigger rate limiting in the other. Both should only be
true briefly, until the rate limiting kicks in and the channel becomes
"mostly balanced".

That's still the case even if the rate limiting is "oops, one side of
the channel has ~0 balance".

> > I think it's better to think in terms of "payment flow" -- are you
> > forwarding $5/hour in one direction, but $10/hour in the other? Is
> > that an ongoing imbalance, or something that evens itself out over time
> > ($120/day in both directions)?
> It is helpful to notice that the channel balance is the integral of the sum 
> of payment flows in both directions.

The channel balance is the sum of the initial balance and all payments,
sure. No need to add an integral in there as well. For a successful,
long lasting channel, sum(incoming payments) and sum(outgoing payments)
will be much greater than the balance, to the point where the balance
is just a rounding error by comparison.

> This is why actual forwarding node operators are obsessed with channel 
> balance.
> They already *are* thinking in terms of payment flow, and using an analytical 
> technique to keep track of it: the channel balance itself.

This is exactly backwards: you don't monitor your profits by looking at
the rounding errors, you monitor your profits by looking at your sales.

If you've forwarded $100,000 in one direction, and $100,200 in the other
direction, you care about the $200,200 total that you were charging fees
on, not the $200 net delta that it's made to your channel balance.

> > Once you start in that direction, there's also a few other questions
> > you can ask:
> > 
> > * can I get make more revenue by getting more payment flow at a
> > lower fee, or by charging a higher fee over less payment flow?
> 
> As I pointed out, if you sell your stock of IPpvHg at too low a price point, 
> other forwarding nodes will snatch up the cheap IPpvHg, buying out that stock.
> They can then form an effective cartel, selling the stock of IPpvHg at a 
> higher price later.

No; changing your fee rate isn't about messing with other people's
channels, it's about encouraging more use of lightning overall.  For
example, if you're charging a base_fee of 1sat per HTLC, then dropping
that to 0sat might reduce fees from existing traffic, but maybe it will
allow you to forward so many micropayments or AMP payments that it's
wortwhile anyway.

The lightning network is tiny; if you're constantly thinking about how
to steal what 

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-28 Thread Anthony Towns
On Tue, Sep 27, 2022 at 12:23:38AM +, ZmnSCPxj via Lightning-dev wrote:
> All monetisation is fee-based; the question is who pays the fees.

This isn't true. For example, if you can successfully track the payments
you route, you can monetize by selling data about who's buying what
from whom. (Unless you meant it in some trivial sense, I guess, like
"all monetisation is money-based; the question is who pays the money")

> In particular, discussing with actual forwarding node operators reveals
> that most of them think that CLBOSS undercuts fees too much searching
> a short-term profit, quickly depleting its usable liquidity in the
> long term.
> In short, they want CLBOSS modified to raise fees and preserve the
> liquidity supply.
> This suggests to me that channel saturation due to being cheaper by
> 0.0001% is not something that will occur often,

That seems a bit of a backwards conclusion: "undercutting fees depletes
liquidity" therefore "channel saturation due to offering cheaper fees
seems unlikely" -- channel saturation *is* depleted liquidity...

On Wed, Sep 28, 2022 at 02:07:51AM +, ZmnSCPxj via Lightning-dev wrote:
> Forwarding nodes sell liquidity.
> If a forwarding node runs out of stock of liquidity (i.e. their channel is 
> unbalanced against the direction a payment request fails) they earn 0 profit.

I get what you're saying, but I don't think a "stock of liquidity"
is a helpful metaphor/mental model here.

"Liquidity" usually means "how easy it is to exchange X for Y" -- assets
for cash, etc; but for lightning, liquidity is guaranteed by being
able to drop to chain. Likewise, "running out of stock" isn't usually
something that gets automatically fixed by someone else coming in and
buying something different. 

(Also, you don't earn 0 profit on an imbalanced channel; you're just
forced to stop accepting some txs. Every time you forward $1 in the
available direction, you become able to forward another $1 back in the
saturated direction; and you earn fees on both those $1s)

I think it's better to think in terms of "payment flow" -- are you
forwarding $5/hour in one direction, but $10/hour in the other? Is
that an ongoing imbalance, or something that evens itself out over time
($120/day in both directions)?

Once you start in that direction, there's also a few other questions
you can ask:

 * can I get make more revenue by getting more payment flow at a
   lower fee, or by charging a higher fee over less payment flow?

 * if I had a higher capacity channel, would that let me tolerate
   a temporarily imbalanced flow over a longer period, allowing me
   to forward more payments and make more fee revenue?

If you want to have a long running lightning channel, your payment flows
will *always* be balanced. That might be through luck, it might be through
clever management of channel parameters, but if it's not through those,
it'll be because your channel's saturated, and you're forced to fail
payments.

Ultimately, over the *entire* lifetime of a lightning channel, the only
imbalance you can have is to either lose the funds that you've put in,
or gain the funds your channel partner put in.

That *is* something you could sensibly model as a stock that gets depleted
over time, if your payment flows are reliably unbalanced in a particular
direction. For example, consider a channel that starts off with $100k in
funds and has a $5k imbalance every day: after 20 days, you'll have to
choose between failing that $5k imbalance (though you could still route
the remaining balanced flows), or between rebalancing your channels,
possibly via on-chain transactions. Does the fee income from an additional
$100k of imbalanced transactions justify the cost of rebalancing?

You can calculate that simply enough: if the on-chain/rebalance cost is
$300, then if you were getting a fee rate of more than 0.3% ($300/$100k),
then it's worth paying for the rebalance.

But if "lifetime drain" is the dominant factor, you're reducing
lightning to the same performance as one-way payment channels: you move
the aggregate payments up to the channel capacity, and then close the
channel. If you get balanced payment flows, that allows you to cancel
out 30,000 $1 transactions against 1,000 $30 transactions, and maintain
the channel indefinitely, with all the off-chain scaling that implies.

> If a forwarding node finds a liquidity being sold at a lower price than they 
> would be able to sell it, they will buy out the cheaper stock and then resell 
> it at a higher price.
> This is called rebalancing.

All that does is move the flow imbalance to someone else's channel;
it doesn't improve the state of the network.

There definitely are times when that makes sense:

 * maybe the channel's run by "dumb money" that will eat the fee for
   closing on-chain, so you don't have to

 * maybe you have secret information about the other channel, allowing
   you to route through it for cheaper than the general public

 * maybe you and they hav

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-26 Thread Anthony Towns
On Mon, Sep 26, 2022 at 01:26:57AM +, ZmnSCPxj via Lightning-dev wrote:
> > > * you're providing a way of throttling payment traffic independent of
> > > fees -- since fees are competitive, they can have discontinuous effects
> > > where a small change to fee can cause a large change to traffic volume;
> > > but this seems like it should mostly have a proportional response,
> > > with a small decrease in htlc_max_msat resulting in a small decrease in
> > > payment volume, and conversely. Much better for stability/optimisation!

> > This may depend on what gets popular for sender algorithms.
> > Senders may quantize their payments, i.e. select a "standard" value and 
> > divide all payments into multipath sub-payments of this value.

I don't think that's really the case. 

One option is that you quantize based on the individual payment -- you
want to send $100, great, your software splits it into 50x $2 payments,
and routes them. But that doesn't have an all or nothing effect: if you
reject anything over $1.99, then instead of routing 1/50th of payments
up to $100, you're routing 1/50th of payments up to $99.50.

The other approach is to quantize by some fixed value no matter what the
payment is (maybe for better privacy?). I don't think that's a good idea
in the first place -- it trades off maybe a small win for your privacy
for using up everyone else's HTLC slots -- but if it is, it'll need to be
quite a small value so as not to force you to round up the overall payment
too much, and to allow small payments in the first place. But in that case
most channels will have their html_max_msat well above that value anyway.

> Basically, the intuition "small decrease in `htlc_max_msat` == small decrease 
> in payment volume" inherently assumes that HTLC sizes have a flat 
> distribution across all possible sizes.

The intuition is really the other way around: if you want a stable,
decentralised network, then you need the driving decision on routing to
be something other than just "who's cheaper by 0.0001%" -- otherwise
everyone just chooses the same route at all times (which becomes
centralised towards the single provider who can best monetise forwarding
via something other than fees), and probably that route quickly becomes
unusable due to being drained (which isn't stable).

(But of course, I hadn't had any ideas on what such a thing could be,
otherwise I'd have suggested something like this earlier!)

So, to extend the intuition further: that means that if using
htlc_max_msat as a valve/throttle can fill that role, then that's a reason
to not do weird things like force every HTLC to be 2**n msats or similar.

If there is a conflict, far better to have a lightning network that's
decentralised, stable, and doesn't require node operators to spy on
transactions to pay for their servers.

It's not quite as bad as you suggest though -- the payment sizes
don't need to have a flat distribution, they only need to have a
smooth/continuous distribution.

> * Coffee or other popular everyday product may settle on a standard price, 
> which again implies a spike around that standard price.

Imagine the price of coffee is $5, and you find three potential paths 
to pay for that coffee:

  Z -> A -> X
  Z -> B -> C -> X
  Z -> B -> D -> X

(I think you choose both the fee and max_msat for Z->A and Z->B hops,
so we'll assume they're 0%/infinite, respectively)

Suppose the fee on AX is 0.01%, and the total fee for BCX is 0.02%
and the total fee for BDX is 0.1%.

If AX's max_msat is $5, they'll get the entire transaction. If it's
$4.99, you might instead optimise fees by doing AMP: send $4.99 through
AX and $0.01 through BCX, for a total fee rate of 0.01002%.

If everyone quantizes at 10c (500sat?) instead of 1c (50sat?) or lower
then that just means instead of getting maybe a 0.2% reduction in payment
flow, AX gets a 2% reduction in payment flow.

Likewise, if AX's max_msat is $1, BCX's max_msat is $3, and BDX's max_msat
is $20, then you split your payment up as $1/$3/$1 and pay a fee of
0.034%. Meanwhile AX's payment flow has been reduced by perhaps 80%
(if everyone's buying $5 coffees), and BCX's by perhaps 25% (from $4 to
$3), allowing them to maintain balanced channels.

> So the reliability of `htlc_max_msat` as a valve is dependent on market 
> forces, and may be as non-linear as feerates, which *are* the sum total of 
> the market force.

No: without some sort of external throttle, fees have a tendency to be all
or nothing. If there's no metric other than fees, why would I ever choose
to pay 0.02% (let alone 0.1%!) in fees? And if a new path comes along
offering a fee rate of 0.00999% fees, why would I continue paying 0.01%?

Even if everyone does start quantizing their payments -- and does so with
an almost 6 order of magnitude jump from 1msat to 500sats -- you're only
implying traffic bumps of perhaps 2% when tweaking parameters that are
near important thresholds, rather than 100%.

> Feerates on the other hand are

Re: [Lightning-dev] `htlc_maximum_msat` as a valve for flow control on the Lightning Network

2022-09-24 Thread Anthony Towns
On Thu, Sep 22, 2022 at 08:40:30AM +0200, René Pickhardt via Lightning-dev 
wrote:
> While trying to estimate the expected liquidity distribution in depleted
> channels due to drain via Markov Models I realized that we can exploit the
> `htlc_maxium_msat` setting to act as a control valve and regulate the
> "pressure" coming from the drain and mitigate the depletion of channels.

This is really neat!

I think "channel drain" confounds two issues (or, at least, I do when
I think about it):

 1) one is you're trying to collect as many forwarding fees as you can,
and since a drained channel prevents you from forwarding txs, that
feels like a hit on profits

 2) the other is that a drained channel *can't* forward a payment even
for no profit, so even attempting to forward a payment over a drained
channel wastes everyone's time, increases payment latency, and may
increase payment failures if you go through too many failures without
finding a successful path

This seems like a great idea for solving (2) -- if you make lightning
nodes look at htlc_max_msat and throttle their use of a channel based
on its value, then channels can set that value so that their payment
flow is balanced on average, at which point depletion becomes rare,
and payments usually succeed.

I think a simple way of thinking about it is: suppose people are
forwarding X BTC per hour through a channel in one direction, and 2X BTC
through it in the other direction, with all payments being 1000 sats
exactly. Then if you set htlc_max_msat to 500sats on the overloaded
direction, and everyone then triggers their AMP paths and sends half
their payments through a slightly more expensive path, you'll be at
X-vs-X BTC per hour, with balanced flows and stable channel balances.

OTOH, it is relying on senders doing things that are slightly less optimal
in the short term (pay higher fees) for things that benefit them only in
the long term (avoid payment latency/failures due to depleted channels),
and only if most people cooperate. Perhaps there's some privacy-preserving
way that channel operators could throttle payments based on htlc_max_msat
(and channel depletion percentage?) as well, so that cheaters are less
likely to prosper?



But as far as (1) goes -- this isn't actually an improvement: instead
of rejecting X BTC per hour from the overloaded direction because
your channel's depleted, you're now not even getting the opportunity
to forward those payments and collect the corresponding fees. It's no
worse for your profit margins, but it's not any better. (And it could
be worse if you're throttling both sides, and only getting 0.95*X BTC
per hour in both directions.

But there aren't many ways you can actually do better with (1).

One way is if you have a cheap way to rebalance your channels -- in that
case, rebalance your channel, let it drain again, collecting fees all the
while, and repeat. If rebalancing is cheaper than the fees you collect,
this works great!

The other way is if fees rates are expected to change -- if they're likely
to go down later, then you might as well deplete your channel now, since
you'll collect more fees for it now than you would later; likewise if you
expect fees to up up later, then you might want to retain some balance
now, so you can deplete it later. But that's a very dynamic situation,
and the profits are limited -- you can only drain your channel once while
waiting for fee rates to be ready to change, and your profit is going to
be capped by your channel capacity times the difference in the fee rates.



This approach seems *much* better than the fee rate cards idea:

 * you're not decreasing your channel profitability half the time in
   order to avoid your channel depleting

 * you're making routing decisions *less* dependent on internal/private
   state, rather than more

 * you're not adding much gossip/probing traffic -- you might need
   to refine your htlc_max_msat a few times each time you change fees,
   but it shouldn't be often, and this should be reducing the frequency
   you have to change fees anyway

 * you're providing a way of throttling payment traffic independent of
   fees -- since fees are competitive, they can have discontinuous effects
   where a small change to fee can cause a large change to traffic volume;
   but this seems like it should mostly have a proportional response,
   with a small decrease in htlc_max_msat resulting in a small decrease in
   payment volume, and conversely. Much better for stability/optimisation!

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Fee Ratecards (your gateway to negativity)

2022-09-24 Thread Anthony Towns
On Fri, Sep 23, 2022 at 03:13:53PM -0500, lisa neigut wrote:
> Some interesting points here. Will try to respond to some of them.
> > pathfinding algorithms which depend on unscalable data collection
> Failed payment attempts are indistinguishable from data collection probing.

Even so, data collection probing is *preferable* -- it can happen out
of band, and doesn't need to cause latency when you're trying to finish
paying for your coffee so you can sit down and get back to doomscrolling.

In general: if you need to know channel capacities to efficiently make
payments, doesn't that fundamentally mean that that information should
be gossipped?

For instance, in a world where everyone's doing rate cards, maybe every
channel is advertising fees at -0.05, +0.01, +0.1, +1.0 bps because that's
just what turns out to be "best". But then when you're trying to find a
route, it becomes critically important to know which channels are at which
capacity quartile. If you're not gossipping that information, then someone
making a payment needs to either probe every plausible path, or subscribe
to an information provider that is regularly probing every channel.

I still think what I wrote in June applies; from [0], what you want
to maintain is a balanced flow over time, not any particular channel
balance -- so collecting less fees at 25% balance than at 75% balance
is generally a false optimisation; and from [1], having fee rate cards
that just depend on time of day/week is probably a much better method of
optimising for what actually matters -- "these are the times my channel
is in high demand in this direction, so fees are high; these are the
times demand is low, so fees are low".

[0] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-June/003624.html
[1] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-June/003627.html

> I like to think that the introduction of negative fees make channel
> balance data a competitive advantage and will actually cause node
> operators to more closely guard their balances / the balance data
> they've collected about peers, which should hopefully reduce the current
> trend of sharing this information with centralized parties.

Having fees depend on the channel balance makes the data a competitive
advantage to the people trying to use the channel; for the channel owner,
the optimal situation is everyone knows the balance, so that more payments
get routed over the channel (because people don't overestimate the fee
rate). That encourages channel owners to broadcast the information,
not keep it private. If they can't broadcast it, that just creates a
market for centralised information brokers...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread Anthony Towns
On Wed, Jun 29, 2022 at 12:38:17PM +, ZmnSCPxj wrote:
> > > ### Inverting The Filter: Feerate Cards
> > > Basically, a feerate card is a mapping between a probability-of-success 
> > > range and a feerate.
> > > * 00%->25%: -10ppm
> > > * 26%->50%: 1ppm
> > > * 51%->75%: 5ppm
> > > * 76%->100%: 50ppm
> The economic insight here is this:
> * The payer wants to pay because it values a service / product more highly 
> than the sats they are spending.

> * If payment fails, then the payer incurs an opportunity cost, as it is 
> unable to utilize the difference in subjective value between the service / 
> product and the sats being spent.

(If payment fails, the only opportunity cost they incur is that they
can't use the funds that they locked up for the payment. The opportunity
cost is usually considered to occur when the payment succeeds: at that
point you've lost the ability to use those funds for any other purpose)

>   * Thus, the subjective difference in value between the service / product 
> being bought, and the sats to be paid, is the cost of payment failure.

If you couldn't successfully route the payment at any price, you never
had the opportunity to buy whatever the thing was.

> We can now use the left-hand side of the feerate card table, by multiplying 
> `100% - middle_probability_of_success` (i.e. probability of failure) by the 
> fee budget (i.e. cost of failure), and getting the 
> cost-of-failure-for-this-entry.

I don't think that makes much sense; your expected gain if you just try
one option is:

 (1-p)*0 + p*cost*(benefit/cost - fee)
 
where p is the probability of success that corresponds with the fee.

I don't think you can do that calculation with a range; if I fix the
probabilities as:

  12.5%  -10ppm
  27.5%1ppm
  62.5%5ppm
  87.5%   50ppm

then that approach chooses:

  -10 ppm if the benefit/cost is in (-10ppm, 8.77ppm)
5 ppm if the benefit/cost is in [8.77ppm, 162.52ppm)
   50 ppm if the benefit/cost is >= 162.52ppm

so for that policy, one of those entries is already irrelevant.

But that just feels super unrealistic to me. If your benefit is 8ppm,
and you try at -10ppm, and that fails, why wouldn't you try again at
5ppm? That means the real calculation is:

   p1*(benefit/cost - fee1) 
   + (p2-p1)*(benefit/cost - fee2 - retry_delay)
   - (1-p2)*(2*retry_delay)

Which is:

   p2*(benefit/cost)
 - p1*fee1 - (p2-p1)*fee2
 - (2-p1-p2)*retry_delay

My feeling is that the retry_delay factor's going to dominate...

That's also only considering one hop; to get the entire path, you
need them all to succeed, giving an expected benefit (for a particular
combination of rate card entries) of:

  (p1*p2*p3*p4*p5)*cost*(benefit/cost - (fee1 + fee2 + fee3 + fee4 + fee5)

And (p1*..*p5) is going to be pretty small in most cases -- 5 hops at
87.5% each already gets you down to only a 51% total chance of success.
And there's an exponential explosion of combinations, if each of the
5 hops has 4 options on their rate card, that's up to 1024 different
options to be evaluated...

> We then evaluate the fee card by plugging this in to each entry of the 
> feerate card, and picking which entry gives the lowest total fee.

I don't think that combines hops correctly. For example, if the rate
cards for hop1 and hop2 are both:

   10%  10ppm
  100%  92ppm

and your expected benefit/cost is 200ppm (so 100ppm per hop), then
treated individually you get:

   10%*(100ppm - 10ppm) = 9ppm  <-- this one!
  100%*(100ppm - 92ppm) = 8ppm

but treated together, you get:

1%*(200ppm -  20ppm) =  1.8ppm
   10%*(200ppm - 102ppm) =  9.8ppm (twice)
  100%*(200ppm - 184ppm) = 16ppm <-- this one!

> This is then added as a fee in payment algorithms, thus translated down to 
> "just optimize for low fee".

You're not optimising for low fee though, you're optimising for
maximal expected value, assuming you can't retry. But you can retry,
and probably in reality also want to minimise the chance of failure up
to some threshold.

For example: if I buy a coffee with lightning every week day for a year,
that's 250 days, so maybe I'd like to choose a fee so that my payment
failure rate is <0.4%, to avoid embarassment and holding up the queue.

> * Nodes utilizing wall strategies and doing lots of rebalancing put low 
> limits on the fee budget of the rebalancing cost.
>   * These nodes are willing to try lots of possible routes, hoping to nab the 
> liquidity of a low-fee node on the cheap in order to resell it later.
>   * Such nodes are fine with low probability of success.

Sure. But in that case, they don't care about delays, so why wouldn't they
just try the lowest fee rates all the time, no matter what their expected
value is? They can retry once an hour indefinitely, and eventually they
should get lucky, if the rate card's even remotely accurate. (Though
chances are they won't get -10ppm lucky for the entire path)

Finding out that you're paying 50ppm at the exact same time someone else
is "payin

Re: [Lightning-dev] Solving the Price Of Anarchy Problem, Or: Cheap AND Reliable Payments Via Forwarding Fee Economic Rationality

2022-06-29 Thread Anthony Towns
On Sun, Jun 05, 2022 at 02:29:28PM +, ZmnSCPxj via Lightning-dev wrote:

Just sharing my thoughts on this.

> Introduction
> 
>   Optimize for reliability+
>uncertainty+fee+drain+uptime...
>  .--~~--.
> /\
>/  \
>   /\
>  /  \
> /\
> _--'  `--_
> Just  Just
>   optimize  optimize
> for   for
>   low fee   low fee

I think ideally you want to optimise for some combination of fee, speed
and reliability (both liklihood of a clean failure that you can retry
and of generating stuck payments). As Matt/Peter suggest in another
thread, maybe for some uses you can accept low speed for low fees,
while in others you'd rather pay more and get near-instant results. I
think drain should just go to fee, and uncertainty/uptime are just ways
of estimating reliability.

It might be reasonable to generate local estimates for speed/reliability
by regularly sending onion messages or designed-to-fail htlcs.

Sorry if that makes me a midwit :)

> Rene Pickhardt also presented the idea of leaking friend-of-a-friend 
> balances, to help payers increase their payment reliability.

I think foaf (as opposed to global) gossip of *fee rates* is a very
interesting approach to trying to give nodes more *current* information,
without flooding the entire network with more traffic than it can
cope with.

> Now we can consider that *every channel is a marketplace*.
> What is being sold is the sats inside the channel.

(Really, the marketplace is a channel pair (the incoming channel and
the outgoing channel), and what's being sold is their relative balance)

> So my concrete proposal is that we can do the same friend-of-a-friend balance 
> leakage proposed by Rene, except we leak it using *existing* mechanisms --- 
> i.e. gossiping a `channel_update` with new feerates adjusted according to the 
> supply on the channel --- rather than having a new message to leak 
> friend-of-a-friend balance directly.

+42

> Because we effectively leak the balance of channels by the feerates on the 
> channel, this totally leaks the balance of channels.

I don't think this is true -- you ideally want to adjust fees not to
maintain a balanced channel (50% on each side), but a balanced *flow*
(1:1 incoming/outgoing payment volume) -- it doesn't really matter if
you get the balanced flow that results in an average of a 50:50, 80:20
or 20:80 ratio of channel balances (at least, it doesn't as long as your
channel capacity is 10 or 100 times the payment size, and your variance
is correspondingly low).

Further, you have two degrees of freedom when setting fee rates: one
is how balanced the flows are, which controls how long your channel can
remain useful, but the other is how *much* flow there is -- if halving
your fee rate doubles the flow rate in sats/hour, then that will still
increase your profit. That also doesn't leak balance information.

> ### Inverting The Filter: Feerate Cards
> Basically, a feerate card is a mapping between a probability-of-success range 
> and a feerate.
> * 00%->25%: -10ppm
> * 26%->50%: 1ppm
> * 51%->75%: 5ppm
> * 76%->100%: 50ppm

Feerate cards don't really make sense to me; "probability of success"
isn't a real measure the payer can use -- naively, if it were, they could
just retry at 1ppm 10 times and get to 95% chances of success. But if
they can afford to retry (background rebalancing?), they might as well
just try at -10ppm, 1ppm, 5ppm, 10ppm (or perhaps with a binary search?),
and see if they're lucky; but if they want a 1s response time, and can't
afford retries, what good is even a 75% chance of success if that's the
individual success rate on each hop of their five hop path?

And if you're not just going by odds of having to retry, then you need to
get some current information about the channel to plug into the formula;
but if you're getting *current* information, why not let that information
be the feerate directly?

> More concretely, we set some high feerate, impose some kind of constant 
> "gravity" that pulls down the feerate over time, then we measure the relative 
> loss of outgoing liquidity to serve as "lift" to the feerate.

If your current fee rate is F (ppm), and your current volume (flow) is V
(sats forwarded per hour), then your profit is FV. If dropping your fee
rate by dF (<0) results in an increase of V by dV (>0), then you want:

   (F+dF)(V+dV) > FV
   FV + VdF + FdV + dFdV > FV
   FdV > -VdF
   dV/dF < -V/F (flip the inequality because dF is negative)

   (dV/V)/(dF/F) < -1  (fee-elasticity of volume is in the elastic
region)

(<-1 == elastic == flow changes more than the fee does == drop the fee
rate; >-1 == ineleastic == flow changes less than the fee does == raise
the fee rate; =-1 == unit elastic == yo

Re: [Lightning-dev] PTLCs early draft specification

2021-12-21 Thread Anthony Towns
On Tue, Dec 21, 2021 at 04:25:41PM +0100, Bastien TEINTURIER wrote:
> The reason we have "toxic waste" with HTLCs is because we commit to the
> payment_hash directly inside the transaction scripts, so we need to
> remember all the payment_hash we've seen to be able to recreate the
> scripts (and spend the outputs, even if they are revoked).

I think "toxic waste" refers to having old data around that, if used,
could cause you to lose all the funds in your channel -- that's why it's
toxic. This is more just regular landfill :)

> *_anchor: dust, who cares -- might be better if local_anchor used key =
> > revkey
> I don't think we can use revkey, 

musig(revkey, remote_key) 
  --> allows them to spend after you've revealed the secret for revkey
  you can never spend because you'll never know the secret for
  remote_key

but if you just say:

(revkey)

then you can spend (because you know revkey) immediately (because it's
an anchor output, so intended to be immediately spent) or they can spend
if it's an obsolete commitment and you've revealed the revkey secret.

> this would prevent us from bumping the
> current remote commitment if it appears on-chain (because we don't know
> the private revkey yet if this is the latest commitment). Usually the
> remote peer should bump it, but if they don't, we may want to bump it
> ourselves instead of publishing our own commitment (where our main
> output has a long CSV).

If we're going to bump someone else's commitment, we'll use the
remote_anchor they provided, not the local_anchor, so I think this is
fine (as long as I haven't gotten local/remote confused somewhere along
the way).

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] PTLCs early draft specification

2021-12-19 Thread Anthony Towns
On Wed, Dec 08, 2021 at 04:02:02PM +0100, Bastien TEINTURIER wrote:
> I updated my article [0], people jumping on the thread now may find it
> helpful to better understand this discussion.
> [0] https://github.com/t-bast/lightning-docs/pull/16

Since merged, so 
https://github.com/t-bast/lightning-docs/blob/master/taproot-updates.md

So imagine that this proposal is finished and widely adopted/deployed
and someone adds an additional feature bit that allows a channel to
forward PTLCs only, no HTLCs.

Then suppose that you forget every old PTLC, because you don't like
having your channel state grow without bound. What happens if your
counterparty broadcasts an old state?

 * the musig2 channel funding is irrelevant -- the funding tx has been
   spend at this point
 
 * the unspent commitment outputs pay to:
 to_local: ipk = musig(revkey, mykey) -- known ; scripts also known
 to_remote: claimable in 1 block, would be better if ipk was also musig
 *_anchor: dust, who cares -- might be better if local_anchor used
key = revkey
 *_htlc: irrelevant by definition
 local_ptlc: ipk = musig(revkey, mykey) -- known; scripts also known

 * commitment outputs may be immediately spent via layered txs. if so,
   their outputs are: ipk = musig(revkey, mykey); with fixed scripts,
   that include a relative timelock

So provided you know the revocation key (which you do, because it's an
old transaction and that only requires log(states) data to reconstruct)
and your own private key, you can reconstruct all the scripts and use
key path spends for every output immediately (excepting the local_anchor,
and to_remote is delayed by a block).

So while this doesn't achieve eltoo's goal of "no toxic waste", I believe
it does achieve the goal of "state information is bounded no matter
how long you leave the channel open / how many transactions travel over
the channel".

(Provided you're willing to wait for the other party to attempt to claim
a htlc via their layered transaction, you can use this strategy for
htlcs as well as ptlcs -- however this leaves you the risk that they
never attempt to claim the funds, which may leave you out of pocket,
and may give them the opportunity to do an attack along the lines of
"you don't get access to the $10,000 locked in old HTLCs unless you pay
me $1,000".  So I don't think that's really a smart thing to do)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Anthony Towns
On Thu, Dec 09, 2021 at 12:34:00PM +1100, Lloyd Fournier wrote:
> I wanted to add a theoretical note that you might be aware of. The final
> message "Bob -> Alice: revoke_and_ack" is not strictly necessary. Alice
> does not care about Bob revoking a commit tx that gives her strictly more
> coins.

That's true if Alice is only sending new tx's paying Bob; and Rusty's
examples in the `option_simplified_update` proposal do only include new
HTLCs...

But I think it's intended to cover *all* update messages, and if Alice is
also including any `update_fulfill_htlc` or `update_fail_htlc` messages in
the commitment, she's potentially gaining funds, both for the amount of
fees she saves by avoiding extra transactions, but for the fulfill case,
potentially also because she doesn't need to worry about the fulfilled
htlc reaching its timeout.

Actually, as an alternative to the `option_simplified_update` approach,
has anyone considered an approach more like this:

 * each node can unilaterally send various messages that always update
   the state, eg:
 + new htlc/ptlc paying the other node (update_add_htlc)
 + secret reveal of htlc/ptlc paying self (update_fulfil_htlc)
 + rejection of htlc/ptlc paying self (update_fail_htlc)
 + timeout of htlc/ptlc paying the other node (not currently allowed?)
 + update the channel fee rate (update_fee)

 * continue to allow these to occur at any time, asynchronously, but
   to make it easier to keep track of them, add a uint64_t counter
   to each message, that each peer increments by 1 for each message.

 * each channel state (n) then corresponds to the accumulation of
   updates from each each peer, up to message (a) for Alice, and message
   (b) for Bob.

 * so when updating to a new commitment (n+1), the proposal message
   should just include both update values (a') and (b')

 * nodes can then track the state by having a list of
   htlcs/ptlcs/balances, etc for state (n), and a list of unapplied
   update messages for themselves and the other party (a+1,...,a') and
   (b+1,...,b'), and apply them in order when constructing the new state
   (n+1) for a new commitment signing round

I think that retains both the interesting async bits (anyone can queue
state updates immediately) but also makes it fairly simple to maintain
the state?

> Bob's new commit tx can use the same revocation key as the previous
> one

That's a neat idea, but I think the fail/fulfill messages break it.
_But_ I think that means it would still be an interesting technique to
use for fast forwards which get updated for every add message...

> Not sending messages you don't need to is usually
> both more performant and simpler 

The extra message from Bob allows Alice to discard the adaptor sigs
associated with the old state, which I think is probably worthwhile
anyway?

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] PTLCs early draft specification

2021-12-08 Thread Anthony Towns
On Tue, Dec 07, 2021 at 11:52:04PM +, ZmnSCPxj via Lightning-dev wrote:
> Alternately, fast-forwards, which avoid this because it does not change 
> commitment transactions on the payment-forwarding path.
> You only change commitment transactions once you have enough changes to 
> justify collapsing them.

I think the problem t-bast describes comes up here as well when you
collapse the fast-forwards (or, anytime you update the commitment
transaction even if you don't collapse them).

That is, if you have two PTLCs, one from A->B conditional on X, one
from B->A conditional on Y. Then if A wants to update the commitment tx,
she needs to

  1) produce a signature to give to B to spend the funding tx
  2) produce an adaptor signature to authorise B to spend via X from his
 commitment tx
  3) produce a signature to allow B to recover Y after timeout from his
 commitment tx spending to an output she can claim if he cheats
  4) *receive* an adaptor signature from B to be able to spend the Y output
 if B posts his commitment tx using A's signature in (1)

The problem is, she can't give B the result of (1) until she's received
(4) from B.

It doesn't matter if the B->A PTLC conditional on Y is in the commitment
tx itself or within a fast-forward child-transaction -- any previous
adaptor sig will be invalidated because there's a new commitment
transaction, and if you allowed any way of spending without an adaptor
sig, B wouldn't be able to recover the secret and would lose funds.

It also doesn't matter if the commitment transaction that A and B will
publish is the same or different, only that it's different from the
commitment tx that previous adaptor sigs committed to. (So ANYPREVOUT
would fix this if it were available)

So I think this is still a relevant question, even if fast-forwards
make it a rare problem, that perhaps is only applicable to very heavily
used channels.

(I said the following in email to t-bast already)

I think doing a synchronous update of commitments to the channel state,
something like:

   Alice -> Bob: propose_new_commitment
   channel id
   adaptor sigs for PTLCs to Bob

   Bob -> Alice: agree_new_commitment
   channel id
   adaptor sigs for PTLCs to Alice
   sigs for Alice to spend HTLCs and PTLCs to Bob from her own
 commitment tx
   signature for Alice to spend funding tx

   Alice -> Bob: finish_new_commitment_1
   channel id
   sigs for Bob to spend HTLCs and PTLCs to Alice from his own
 commitment tx
   signature for Bob to spend funding tx
   reveal old prior commitment secret
   new commitment nonce

   Bob -> Alice: finish_new_commitment_2
   reveal old prior commitment secret
   new commitment nonce

would work pretty well.

This adds half a round-trip compared to now:

   Alice -> Bob: commitment_signed
   Bob -> Alice: revoke_and_ack, commitment_signed
   Alice -> Bob: revoke_and_ack

The timings change like so:

  Bob can use the new commitment after 1.5 round-trips (previously 0.5)

  Alice can be sure Bob won't use the old commitment after 2 round-trips
  (previously 1)

  Alice can use the new commitment after 1 round-trip (unchanged)

  Bob can be sure Alice won't use the old commitment after 1.5 round-trips
  (unchanged -- note: this is what's relevant for forwarding)

Making the funding tx a musig setup would mean also supplying 64B
of musig2 nonces along with the "adaptor sigs" in one direction,
and providing the other side's 64B of musig2 nonces back along with the
(now partial) signature for spending the funding tx (a total of 256B of
nonce data, not 128B).

Because it keeps both peers' commitments synchronised to a single channel
state, I think the same protocol should work fine with the revocable
signatures on a single tx approach too, though I haven't tried working
through the details.

Fast forwards would then be reducing the 2 round-trip protocol to
update the state commitment to a 0.5 round-trip update, to reduce
latency when forwarding by the same amount as before (1.5 round-trips
to 0.5 round-trips).

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-18 Thread Anthony Towns
On Sat, Oct 09, 2021 at 11:12:07AM +1000, Anthony Towns wrote:
> Here's my proposal for replacing BOLT#2 and BOLT#3 to take advantage of
> taproot and implement PTLCs. 

I think the conclusion from the discussions at the in-person LN summit
was to split these features up an implement them gradually. I think that
would look like:

 1) taproot funding/anchor output
benefits:
 * LN utxos just look normal, so better privacy
 * mutual closes also look normal, and only need one sig and no
   script, better privacy and lower fees
 * doesn't require updating any HTLC scripts
complexities:
 * requires implementing musig/musig2/similar for mutual
   closes and signing commitment txs
 * affects gossip, which wants to link channels with utxos so needs
   to understand the new utxo format
 * affects splicing -- maybe it's literally an update to the
   splicing spec, and takes effect only when you open new channels
   or splice existing ones?

 2) update commitment outputs to taproot
benefits:
 * slightly cheaper unilateral closes, maybe more private?
complexities:
 * just need to support taproot script path spends

 3) PTLC outputs
benefits:
 * has a different "hash" at every hop, arguably better privacy
 * can easily do cool things with points/secrets that would require
   zkp's to do with hashes/secrets
 * no need to remember PTLCs indefinitely in case of old 
complexities:
 * needs a routing feature bit
 * not usable unless lots of the network upgrades to support PTLCs
 * requires implementing adaptor signatures

 4) symmetric commitment tx (revocation via signature info)
benefits:
 * reduces complexity of layered txs?
 * reduces gamesmanship of who posts the commitment tx?
 * enables low-latency/offline payments?
complexities:
 * requires careful nonce management?

 5) low-latency payments?
benefits:
 * for payments that have no problems, halves the time to complete
 * the latency introduced by synchronous commitment updates doesn't
   matter for successful payments, so peer protocol can be simplified
complexities:
 * ?

 6) offline receipt?

 7) eltoo channels?

 8) eltoo factories?

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-18 Thread Anthony Towns
On Wed, Oct 13, 2021 at 03:15:14PM +1100, Lloyd Fournier wrote:
> If you're willing to accept that "worst case" happening more often, I
> think you could then retain the low latency forwarding, by having the
> transaction structure be:

So the idea here is that we have two channel parameters:

   PD - the payment delay or payment timeout delta, say 40 blocks
   RD - the channel recovery delay, say 2016 blocks

and the idea is that if you publish an old state, I have the longer delay
(RD) to correct that; *but* if the currently active state includes a
payment that I've forwarded to you, I may only have the shorter delay
(PD) in order to forward the payment claim details back in order to
avoid being out of pocket.

The goal is to keep that working while also allowing me to tell you about
a payment to you in such a way that you can safely forward it on *without*
an additional round-trip back to me (to acknowledge that you've received
it and that I've received your acknowledgement).

It's not really a super-important goal; it could shave off 50% of
the time to accept a ln tx when everything goes right, and there's no
bottlenecks elsewhere in the implementation, but it can't do anything
more than that, and doesn't help the really slow cases when things go
wrong. Mostly, I just find it interesting.

Suppose that a payment is forwarded from Alice to Bob, Carol and finally
reaches Dave. Alice/Bob and Carol/Dave are both colocated in a data centre
and have high bandwidth and have 1ms (rtt) latency, but Bob/Carol are
on different continents (but not via tor) and have 100ms (rtt) latency.

With 1.5 round-trips before forwarding, we'd get:

  t=0 Alice tells Bob
  t=1.5   Bob tells Carol
  t=151.5 Carol tells Dave
  t=153   Dave reveals the secret to Carol
  t=153.5 Carol reveals the secret to Bob
  t=203.5 Bob reveals the secret to Alice
  t=204   Alice knows the secret!

That's how things work now, with "X tells Y" being:

  X->Y: update_add_htlc, commitment_signed
  Y->X: commitment_signed, revoke_and_ack
  X->Y: revoke_and_ack

and "X reveals the secret to Y" being:

  X->Y: update_fulfill_htlc

However, if we could do it optimally we would have:

  t=0 Alice tells Bob about the payment
  t=0.5   Bob tells Carol about the payment
  t=50.5  Carol tells Dave about the payment
  t=51Dave accepts the payment and tells Carol the secret
  t=51.5  Carol accepts the payment and tells Bob the secret
  t=101.5 Bob accepts the payment and tells Alice the secret
  t=102   Alice knows the secret!

Looking just at Bob/Carol we might also have the underlying commitment
state updates:

  t=50.5  Carol acks the payment to Bob (commitment_signed,
  revoke_and_ack)
  t=100.5 Bob acks Carol's ack, revoking old state (revoke_and_ack)
  t=150.5 Carol's safe with the new state including the payment

  t=51.5  Carol reveals the secret and signs a new updated state
  (update_fulfill_htlc, commitmnt_signed)
  t=101.5 Bob acks receipt of the secret (commitment_signed,
  revoke_and_ack)
  t=151.5 Carol's safe with the new state with an increased balance
  (revoke_and_ack)
  t=201.5 Bob's state is up to date

Note that the first of those doesn't complete until well after Alice
would know the secret in an optimal construction; and that as described
the second upate overlaps the first, which might not be particularly
desirable.

> In my mind your "update the base channel state" idea seems to fix everything 
> by
> itself.

Yeah -- if you're willing to do 1.5 round-trips (and thanks to musig2 this
doesn't blow out to 2.5 (?) round-trips) that does solve everything. The
challenge is to do it in 0.5 round-trips. :)

> So at T - to_self_delay (or a bit before) you say to your counterparty
> "can we lift this HTLC out of your in-flight tx into the 'balance tx' (which
> will go back to naming a 'commitment tx' since it doesn't just have balance
> outputs anymore) so I can use it too? -- otherwise I'll have to close the
> channel on chain now to force you to reveal it to me on time?". If they agree,
> after the revocation and new commit tx everything is back to (tx symmetric)
> Poon-Dryja so no need for extra CSVs.

Maybe? So the idea is that:

 1) Bob gets a "low-latency" tx that spends Alice's balance and has a
bunch of outputs for really recent payments
 2) In normal conditions, in 5 or 10 or 30 seconds, Alice/Bob renegotiate
the base commitment to move those payments out of the "low-latency"
tx
 3) In abnormal conditions, with an active forwarded "low-latency" tx and
communications failure of length up to "PD", Alice closes the channel
on chain.
 4) Bob then has "PD" period to post the "low-latency" tx, if he
doesn't, Alice can do a layered claim of her balance preventing Bob
from oing so.
 5) If Bob does post his "low-latency" tx, then he'll also need to reveal
secrets prior to the payment timeout.

So taking the payment timeout as T, then he'll have to post t

Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-11 Thread Anthony Towns
On Tue, Oct 12, 2021 at 04:18:37AM +, ZmnSCPxj via Lightning-dev wrote:
> > A+P + max(0, B'-B)*0.1 to Alice
> > B-f - max(0, B'-B)*0.1 to Bob

> So, if what you propose is widespread, then a theft attempt is costless: 

That's what the "max" part prevents -- if your current balance is B and
you try to claim an old state with B' > B for a profit of B'-B, Alice
will instead take 10% of that value.

(Except maybe all the funds they were trying to steal were in P' rather
than B'; so better might have been "A+P + max(0, min(B'+P'-B)*0.1, B)")

Eltoo would enable costless theft attempts (ignoring fees), particularly
for multiparty channels/factories, of course, so getting the game theory
right in advance of that seems worth the effort anyway.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-11 Thread Anthony Towns
On Mon, Oct 11, 2021 at 05:05:05PM +1100, Lloyd Fournier wrote:
> ### Scorched earth punishment
> Another thing that I'd like to mention is that using revocable signatures
> enables scorched earth punishments [2]. 

I kind-of think it'd be more interesting to simulate eltoo's behaviour.
If Alice's current state has balances (A, B) and P in in-flight
payments, and Bob posts an earlier state with (A', B') and P' (so A+B+P
= A'+B'+P'), then maybe Alice's justice transaction should pay:

   A+P + max(0, B'-B)*0.1 to Alice
   B-f - max(0, B'-B)*0.1 to Bob

(where "f" is the justice transaction fees)

Idea being that in an ideal world there wouldn't be a hole in your pocket
that lets all your coins fall out, but in the event that there is such
a hole, it's a *nicer* world if the people who find your coins give them
back to you out of the kindness of their heart.

> Note that we number each currently inflight transaction by "k",
> starting at 0. The same htlc/ptlc may have a different value for k
> between different inflight transactions.
> Can you expand on why "k" is needed in addition to "n" and "i". k sounds like
> the same thing as i to me.

"k" is used to distinguish the inflight payments (htlcs/ptlcs), not the
inflight state (which is "i").

> Also what does RP/2/k notation imply given the definition of RP you gave 
> above?

I defined earlier that if P=musig(A,B) then P/x/y = musig(A/x/y,B/x/y);
so RP/2/k = musig(A/2/n/i/2/k,RB2(n,i)/2/k).

>  * if the inflight transaction contains a ptlc output, [...]
> What about just doing a scriptless PTLC to avoid this (just CSV input of
> presigned tx)? The cost is pre-sharing more nonces per PTLC message.

Precisely that reason. Means you have to share "k+1" nonce pairs in
advance of every inflight tx update. Not a show stopper, just seemed
like a headache. (It's already a scriptless-script, this would let you
use a key path spend instead of a script path spend)

> This does not support option_static_remotekey, but compensates for that
> by allowing balances to be recovered with only the channel setup data
> even if all revocation data is lost.
> This is rather big drawback but is this really the case? Can't "in-flight"
> transactions send the balance of the remote party to their unencumbered static
> remote key?

They could, but there's no guarantee that there is an inflight
transaction, or that the other party will post it for you. In those case,
you have to be able to redeem your output from the balance tx directly,
and if you can do that, might as well have every possible address be
derived differently to minimise the amount of information any third
parties could glean.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-11 Thread Anthony Towns
On Mon, Oct 11, 2021 at 09:23:19PM +1100, Lloyd Fournier wrote:
> On Mon, 11 Oct 2021 at 17:30, Anthony Towns  wrote:
> I don't think the layering here quite works: if Alice forwarded a payment
> to Bob, with timeout T, then the only way she can be sure that she can
> either reclaim the funds or know the preimage by time T is to close the
> channel on-chain at time T-to_self_delay.
> This problem may not be as bad as it seems.

Maybe you can break it down a little bit further. Consider *three*
delays:

 1) refund delay: how long you have before a payment attempt starts
getting refunded

 2) channel recovery delay: how long you have to recover from node
failure to prevent an old state being committed to, potentially losing
your entire channel balance

 3) payment recovery delay: how long you have to recover from node
failure to prevent losing funds due to a forwarded payment (eg,
Carol claimed the payment, while Alice claimed the refund, leaving
Bob out of pocket)

(Note that if you allow payments up to the total channel balance, there's
not really any meaningful distinction between (2) and (3), at least in
the worst case)

With layered transactions, (2) and (3) are different -- if Bob's node
fails near the timeout, then both Alice and Carol drop to the blockchain,
and Carol knows the preimage, Bob may have as little as the channel
"delay" parameter to extract the preimage from Carol's layered commitment
tx to be able to post a layered commitment on top of Alice's unilateral
close to avoid being out of pocket.

(Note that that's a worst case -- Carol would normally reveal the preimage
onchain earlier than just before the timeout, giving Bob more time to
recover his node and claim the funds from Alice)

If you're willing to accept that "worst case" happening more often, I
think you could then retain the low latency forwarding, by having the
transaction structure be:

commitment tx
  input:
 funding tx
  outputs:
 Alice's balance
 (others)

low-latency inflight tx:
  input:
Alice's balance
  output:
(1) or (2)
Alice's remaining balance

Bob claim:
  input:
(1) [ CSV bob CHECKSIG]
  output:
[ checksigverify  checksig
 ifdup notif  csv endif]

Too-slow:
  input:
(2) [ CLTV alice CHECKSIG]
  output:
Alice

The idea being:

 * Alice sends the low-latency inflight tx which Bob then forwards
   immediately.

 * Bob then tries to update the base channel state with Alice, so both
   sides have a commitment to the new payment, and the low-latency
   inflight tx is voided (since it's based on a revoked channel state)
   If this succeeds, everything is fine as usual.

 * If Alice is unavailable to confirm that update, Bob closes the
   channel prior to (payment-timeout - payment-recover-delay), and posts
   "Bob claim". After an additional pyment recovery delay (and prior
   to payment-timeout) Bob posts Bob claim, ensuring that the only way
   Alice can claim the funds is if he had posted a revoked state.

 * In this case, Alice has at least one payment-recovery-delay period
   prior to the payment-timeout to notice the transaction onchain and
   recover the preimage.

 * If Bob posted the low-latency inflight tx later than
   (payment-timeout - payment-recovery-delay) then Alice will have
   payment-recovery-delay time to notice and post the "too-slow" tx and
   claim the funds via the timeout path.

 * If Bob posted a revoked state, Alice can also claim the funds via
   Bob claim, provided she notices within the channel-recovery-delay

That only allows one low-latency payment to be inflight though, which I'm
not sure is that interesting... It's also kinda complicated, and doesn't
cover both the low-latency and offline cases, which is disappointing...

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-10 Thread Anthony Towns
On Sat, Oct 09, 2021 at 11:12:07AM +1000, Anthony Towns wrote:
>  2. The balance transaction - tracks the funding transaction, contains
> a "balance" output for each of the participants.
>  3. The inflight transactions - spends a balance output from the balance
> transaction and provides outputs for any inflight htlc/ptlc transactions.
>  4. Layered transactions - spends inflight htlc/ptlc outputs by revealing
> the preimage, while still allowing for the penalty path.

I don't think the layering here quite works: if Alice forwarded a payment
to Bob, with timeout T, then the only way she can be sure that she can
either reclaim the funds or know the preimage by time T is to close the
channel on-chain at time T-to_self_delay.

Any time later than that, say T-to_self_delay+x+1, would allow Bob to
post the inflight tx at T+x (prior to Alice being able to claim her
balance directly due to the to_self_delay) and then immediately post the
layered transaction (4, above) revealing the preimage, and preventing
Alice from claiming the refund.

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-09 Thread Anthony Towns
On Sat, Oct 09, 2021 at 12:21:03PM +, Jonas Nick wrote:
> it seems like parts of this proposal rely on deterministic nonces in MuSig.

The "deterministic" nonces are combined with "recoverable" nonces via
musig2, so I think that part is a red-herring?

They're "deterministic" in the sense that the person who generated the
nonce needs to be able to recover the secret/dlog for the nonce later,
without having to store unique randomness for it. Thinking about it,
I think you could make the "deterministic" nonce secrets be

   H( private-key, msg, other-party's-nonce-pair, 1 )
   H( private-key, msg, other-party's-nonce-pair, 2 )

because you only need to recover the secret if the other party posts a
sig for a revoked transaction, in which case you can lookup their nonce
directly anyway. And you're choosing your "deterministic" nonce after
knowing what their ("revocable") nonce is, so can include it in the hash.

As far as the revocable nonce goes, you should only be generating a
single signature based on that, since that's used to finish things off
and post the tx on chain.

> Generally, this is insecure unless combined with heavy machinery that proves
> correctness of the nonce derivation in zero knowledge. If one signer uses
> deterministic nonces and another signer uses random nonces, then two signing
> sessions will have different challenge hashes which results in nonce reuse by
> the first signer [0]. Is there a countermeasure against this attack in the
> proposal? What are the inputs to the function that derive DA1, DA2? Is the
> assumption that a signer will not sign the same message more than once?

I had been thinking DA1,DA2 = f(seed,n) where n increases each round, but I
think the above would work and be an improvement. ie:

   Bob has a shachain based secret generator, producing secrets s_0 to
   s_(2**48). If you've seen s_0 to s_n, you only need to keep O(log(n))
   of those values to regenerate all of them.

   Bob generates RB1_n and RB2_n as H(s_n, 1)*G and H(s_n, 2)*G and sends
   those values to Alice.

   Alice determines the message (ie, the transaction), and sets da1_n
   and da2_n as H(A_priv, msg, RB1_n, RB2_n, 1) and H(A_priv, msg, RB1_n,
   RB2_n, 2). She then calculates k=H(da1_n, da2_n, RB1_n, RB2_n), and
   signs for her nonce which da1_n+k*da2_n, and sends da1_n*G and
   da2_n*G and the partial signature to Bob.

   Bob checks and records Alice's musig2 derivation and partial signature,
   but does not sign himself.

   _If_ Bob wants to close the channel and publish the tx, he completes
   the signature by signing with nonce RB1_n + k*RB2_n.

If you can convince Bob to close the channel repeatedly, using the
same nonce pair, then he'll have problems -- but if you can do that,
you can probably trick him into closing the channel with old state,
which gives him the same problems by design... Or that's my take.

> It may be worth pointing out that an adaptor signature scheme can not treat
> MuSig2 as a black box as indicated in the "Adaptor Signatures" section [1].

Hmm, you had me panicking that I'd been describing how to combine the
two despite having decided it wasn't necessary to combine them... :)

(I figured doing musig for k ptlcs for every update would get old fast --
if you maxed the channel out with ~400 inflight ptlcs you'd be exchanging
~800 nonces for every update. OTOH, I guess that's the only thing you'd
be saving, and the cost is ~176 bytes of extra witness data per ptlc...
Hmm...)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Lightning over taproot with PTLCs

2021-10-08 Thread Anthony Towns
On Sat, Oct 09, 2021 at 01:49:38AM +, ZmnSCPxj wrote:
> A transaction is required, but I believe it is not necessary to put it 
> *onchain* (at the cost of implementation complexity in the drop-onchain case).

The trick with that is that if you don't put it on chain, you need
to calculate the fees for it in advance so that they'll be sufficient
when you do want to put it on chain, *and* you can't update it without
going onchain, because there's no way to revoke old off-chain funding
transactions.

> This has the advantage of maintaining the historical longevity of the channel.
> Many pathfinding and autopilot heuristics use channel lifetime as a positive 
> indicator of desirability,

Maybe that's a good reason for routing nodes to do shadow channels as
a matter of course -- call the currently established channel between
Alice and Bob "C1", and leave it as bolt#3 based, but establish a new
taproot based channel C2 also between Alice and Bob. Don't advertise C2
(making it a shadow channel), just say that C1 now supports PTLCs, but
secretly commit to those PTLCs to C2 instead C1. Once the C2 funding tx
is buried enough, start advertising C2 instead taking advantage of its
now sufficiently buried funding transaction, and convert C1 to a shadow
channel instead.

In particular, that setup allows you to splice funds into or out of the
shadow channel while retaining the positive longevity heuristics of the
public channel.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Lightning over taproot with PTLCs

2021-10-08 Thread Anthony Towns
Hi all,

Here's my proposal for replacing BOLT#2 and BOLT#3 to take advantage of
taproot and implement PTLCs. 

It's particularly inspired by ZmnSCPxj's thoughts from Dec 2019 [0], and
some of his and Lloyd Fournier's posts since then (which are listed in
references) -- in particular, I think those allow us to drop the latency
for forwarding a payment *massively* (though refunding a payment still
requires roundtrips), and also support receiving via a mostly offline
lightning wallet, which seems kinda cool.

I somehow hadn't realised it prior to a conversation with @brian_trollz
via twitter DM, but I think switching to PTLCs, even without eltoo,
means that there's no longer any need to permanently store old payment
info in order to recover the entirety of the channel's funds. (Some brute
force is required to recover the timeout info, but in my testing I think
that's about 0.05 seconds of work per ptlc output via python+libsecp256k1)

This doesn't require any soft-forks, so I think we could start work on
it immediately, and the above benefits actually seem pretty worth it,
even ignoring any privacy/efficiency benefits from doing taproot key
path spends and forwarding PTLCs.

I've sketched up the musig/musig2 parts for the "balance" transactions
in python [1] and tested it a little on signet [2], which I think is
enough to convince me that this is implementable. There'll be a bit of
preliminary work needed in actually defining specs/BIPs for musig and
musig2 and adaptor signatures, I think.

Anyway, details follow. They're also up on github as a gist [3] if that
floats your boat.

[0] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-December/002375.html

[1] 
https://github.com/ajtowns/bitcoin/blob/202109-ptlc-lnpenalty/test/functional/feature_ln_ptlc.py

[2] The balance transaction (spending the funding tx and with outputs
being Alice's and Bob's channel balance is at):

https://explorer.bc-2.jp/tx/ba58d99dfaad83e105a0de1a9becfcf8eaf897da54bd7b08134ff579997c?input:0&expand

[3] https://gist.github.com/ajtowns/12f58fa8a4dc9f136ed04ca2584816a2/

Goals
=

1. Support HTLCs
2. Support PTLCs
3. Minimise long-term data storage requirements
4. Minimise latency when forwarding payments
5. Minimise latency when refunding payments
6. Support offline receiving
7. Minimise on-chain footprint
8. Minimise ability for third-parties to analyse

Setup
=

We have two participants in the channel, Alice and Bob. They each have
bip32 private keys, a and b, and share the corresponding xpubs A and B
with each other.

Musig
-

We will use musig to combine the keys, where P = musig(A,B) = H(A,B,1)*A
+ H(A,B,2)*B. We'll talk about subkeys of P, eg P/4/5/6, which are
calculated by taking subkeys of the input and then applying musig,
eg P/4/5/6 = musig(A/4/5/6, B/4/5/6). (Note that we don't use hardened
paths anywhere)

Musig2
--

We'll use musig2 to sign for these keys, that is both parties will
pre-share two nonce points each, NA1, NA2, NB1, NB2, and the nonce will be
calculated as: R=(NA1+NB1)+k(NA2+NB2), where k=Hash(P,NA1,NA2,NB1,NB2,m),
where P is the pubkey that will be signing and m is the message to be
signed. Note that NA1, NA2, NB1, NB2 can be calculated and shared prior
to knowing what message will be signed.

The partial sig by A for a message m with nonce R as above is calculated as:

sa = (na1+k*na2) + H(R,A+B,m)*a

where na1, na2, and a are the secrets generating NA1, NA2 and A respectively.
Calculating the corresponding partial signature for B,

sb = (nb1+k*nb2) + H(R,A+B,m)*b

gives a valid signature (R,sa+sb) for (A+B):

(sa+sb)G = R + H(R,A+B,m)*(A+B)

Note that BIP340 sepcifies x-only pubkeys, so A+B and R implicitly have
even y, however since those values are caluclated via musig and musig2
respectively, this cannot be ensured in advance. Instead, if we find:

H(A,B,1)*A + H(A,B,2)*B

does not have even y, we calculate:

P = (-H(A,B,1))*A + (-H(A,B,2))*B

instead, which will have even y. Similarly, if (NA1+NB1+k(NA2+NB2)) does
not have even y, when signing, we replace each partial nonce by its negation,
eg: sa = -(na1+k*na2) + H(R,A+B,m).

Adaptor Sigs


An adaptor signature for P for secret X is calculated as:

s = r + H(R+X, P, m)*p

which gives:

(s+x)G = (R+X) + H(R+X, P, m)*P

so that (R+X,s+x) is a valid signature by P of m, and the preimage for
X can be calculated as the difference between the published sig and the
adaptor sig, x=(s+x)-(s).

Note that if R+X does not have even Y, we will need to negate both R and X,
and the recovered secret preimage will be -x instead of x.

Revocable Secrets
-

Alice and Bob have shachain secrets RA(n) and RB(n) respectively,
and second level shachain secrets RA2(n,i) and RB2(n,i), with n and i
counting up from 0 to a maximum.

Summary
===

We'll introduce four layers of transactions:

 1. The funding transaction - used to establish the channel, provides
the utxo backi

Re: [Lightning-dev] [bitcoin-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-18 Thread Anthony Towns
On Fri, Sep 17, 2021 at 09:58:45AM -0700, Jeremy via bitcoin-dev wrote,
on behalf of John Law:

> I'd like to propose an alternative to BIP-118 [1] that is both safer and more
> powerful. The proposal is called Inherited IDs (IIDs) and is described in a
> paper that can be found here [2]. [...]

Pretty sure I've skimmed this before but hadn't given it a proper look.
Saying "X is more powerful" and then saying it can't actually do the
same stuff as the thing it's "more powerful" than always strikes me as
a red flag. Anyhoo..

I think the basic summary is that you add to each utxo a new resettable
"structural" tx id called an "iid" and indetify input txs that way when
signing, so that if the details of the transaction changes but not the
structure, the signature remains valid.

In particular, if you've got a tx with inputs tx1:n1, tx2:n2, tx3:n3, etc;
and outputs out1, out2, out3, etc, then its structual id is hash(iid(tx1),
n1) if any of its outputs are "tagged" and it's not a coinbase tx, and
otherwise it's just its txid.  (The proposed tagging is to use a segwit
v2 output in the tx, though I don't think that's an essential detail)

So if you have a tx A with 3 outputs, then tx B spends "A:0, A:1" and
tx C spends "B:0" and tx D spends "C:0", if you replace B with B',
then if both B and B' were tagged, and the signatures for C (and D,
assuming C was tagged) will still be valid for spending from B'.

So the question is what you can do with that.

The "2stage" protocol is proposed as an alternative to eltoo is
essentially just:

 a) funding tx gets dropped to the chain
 b) closing state is proposed by one party
 c) other party can immediately finalise by confirming a final state
that matches the proposed closing state, or was after it
 d) if the other party's not around for whatever delay, the party that
proposed the close can finalise it

That doesn't work for more than two participants, because two of
the participants could collude to take the fast path in (c) with some
earlier state, robbing any other participants. That said, this is a fine
protocol for two participants, and might be better than doing the full
eltoo arrangement if you only have a two participant channel.

To make channel factories work in this model, I think the key step is
using invalidation trees to allow updating the split of funds between
groups of participants. I think invalidation trees introduce a tradeoff
between (a) how many updates you can make, and (b) how long you have to
notice a close is proposed and correct it, before an invalidated state
can be posted, and (c) how long it will take to be able to extract your
funds from the factory if there are problems initially. You reduce those
delays substantially (to a log() factor) by introducing a hierarchy of
update txs (giving you a log() number of txs), I think.

That's the "multisig factories" section anyway, if I'm
following correctly. The "timeout trees", "update-forest" and
"challenge-and-response" approaches both introduce a trusted user ("the
operator"), I think, so are perhaps more comparable to statechains
than eltoo?

So how does that compare, in my opinion?

If you consider special casing two-party channels with eltoo, then I
think eltoo-2party and 2stage are equally effective. Comparing
eltoo-nparty and the multisig iid factories approach, I think the
uncooperative case looks like:

 ms-iid:
   log(n) txs (for the invalidation tree)
   log(n) time (?) (for the delays to ensure invalidated states don't
get published)

 eltoo: 1 tx from you
1 block after you notice, plus the fixed csv delay

A malicious counterparty can post many old update states prior to you
poisting the latest state, but those don't introduce extra csv delays
and you aren't paying the fees for those states, so I don't think it
makes sense to call that an O(n) delay or cost.

An additional practical problem with lightning is dealing with layered
commitments; that's a problem both for the delays while waiting for a
potential rejection in 2stage and for the invalidation tree delays in the
factory construction. But it's not a solved problem for eltoo yet, either.

As far as implementation goes, introducing the "iid" concept would mean
that info would need to be added to the utxo database -- if every utxo
got an iid, that would be perhaps a 1.4GB increase to the utxo db (going
by unique transaction rather than unique output), but presumably iid txs
would end up being both uncommon and short-lived, so the cost is probably
really mostly just in the additional complexity. Both iid and ANYPREVOUT
require changes to how signatures are evaluated and apps that use the
new feature are written, but ANYPREVOUT doesn't need changes beyond that.

(Also, the description of OP_CODESEPARATOR (footnote 13 on page 13,
ominous!) doesn't match its implementation in taproot. It also says BIP
118 introduces a new address type for floating transactions, but while
this was floated on the list, t

Re: [Lightning-dev] Do we really want users to solve an NP-hard problem when they wish to find a cheap way of paying each other on the Lightning Network?

2021-08-30 Thread Anthony Towns
On Thu, Aug 26, 2021 at 04:33:23PM +0200, René Pickhardt via Lightning-dev 
wrote:
> As we thought it was obvious that the function is not linear we only explained
> in the paper how the jump from f(0)=0 to f(1) = ppm+base_fee breaks convexity.

(This would make more sense to me as "f(0)=0 but f(epsilon)->b as
epsilon->0, so it's discontinuous")

> "Do we really want users to solve an NP-hard problem when
> they wish to find a cheap way of paying each other on the Lightning Network?" 

FWIW, my answer to this is "sure, if that's the way it turns out".

Another program which solves an NP-hard problem every time it runs is
"apt-get install" -- you can simulate 3SAT using Depends: and Conflicts:
relationships between packages. I worked on a related project in Debian
back in the day that did a slightly more complicated variant of that
problem, namely working out if updating a package in the distro would
render other packages uninstallable (eg due to providing a different
library version) -- as it turned out, that even actually managed to hit
some of the "hard" NP cases every now and then. But it was never really
that big a deal in practice: you just set an iteration limit and consider
it to "fail" if things get too complicated, and if it fails too often,
you re-analyse what's going on manually and add a new heuristic to cope
with it.

I don't see any reason to think you can't do roughly the same for
lightning; at worst just consider yourself as routing on log(N) different
networks: one that routes payments of up to 500msat at (b+0.5ppm), one
that routes payments of up to 1sat at (b+ppm), one that routes payments
of up to 2sat at (b+2ppm), one that routes payments of up to 4sat at
(b+4ppm), etc. Try to choose a route for all the funds; if that fails,
split it; repeat. In some case that will fail despite there being a
possible successful multipath route, and in other cases it will choose a
moderately higher fee path than necessary, but if you're talking a paying
a 0.2% fee vs a 0.1% fee when the current state of the art is a 1% fee,
that's fine.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] #zerobasefee

2021-08-25 Thread Anthony Towns
On Tue, Aug 24, 2021 at 08:50:42PM -0700, Matt Corallo wrote:
> I feel like we're having two very, very different conversations here. On one
> hand, you're arguing that the base fee is of marginal use, and that maybe we
> can figure out how to average it out such that we can avoid needing it.

I'm not sure about the "we" in that sentence -- I'm saying node operators
shouldn't bother with it, not that lightning software devs shouldn't offer
it as a config option or take it into account when choosing routes. The
only software change that /might/ make sense is changing defaults from
1sat to 0msat, but it seems a bit early for that too, to me.

(I'm assuming comments like "We'll most definitely support #zerobasefee"
[0] just means "you can set it to zero if you like" which seems like a
weird thing to have to say explicitly...)

[0] https://twitter.com/Snyke/status/1418109408438063104

> On
> the other hand, I'm arguing that, yes, maybe you can, but ideally you
> wouldn't have to, because its still pretty nice to capture those costs
> sometimes.

I don't really think it captures costs at all, but I do agree it could
be nice (at least in theory) to have it available since then you might
be able to better optimise your fee income based on whatever demand
happens to be. That's to increase profits, not match costs though, and
I'm not convinced the theory will play out in practice presuming AMP is
often useful/necessary.

> Also, even if we can maybe do away with the base fee, that still
> doesn't mean we should start relying on the lack of any
> not-completely-linear-in-HTLC-value fees in our routing algorithms,

I mean, exprimental/research routing algorithms should totally rely
on that if they feel like it? I just don't see any evidence that
anyone's thinking of moving that out of research and into production
until there's feedback from operators and a lot more results from the
research in general...

> as maybe
> we'll want to do upfront payments or some other kind of anti-DoS payment in
> the future to solve the gaping, glaring, giant DoS hole that is HTLCs taking
> forever to time out.

Until we've got an even vaguely workable scheme for that, I don't
think it's relevant to consider. (If my preferred scheme turns out
to be workable, I don't think it needs to be taken into account when
(multi)pathfinding at all)

> I'm not even sure that you're trying to argue, here, that we should start
> making key assumptions about the only fee being a proportional one in our
> routing algorithms, but that is what the topic at hand is, so I can't help
> but assume you are?

No, that's not the topic at hand, at all?

I mean, it's related, and interesting to talk about, but it's a digression
into "wild ideas that might happen in the future", not the topic... I
don't think anyone's currently advocating for node software to work that
way? (I do think having many/most channels have a zero base fee will make
multipath routing algos work better even when they *don't* assume the
base fee is zero)

I think I'm arguing for these things:

 a) "everyone" should drop their base fee msat from the default,
probably to 0 because that's an easy fixed point that you don't need
to think about again as the price of btc changes, but anything at
or below 10msat would be much more reasonable than 1000msat.

 b) if people are concerned about wasting resources forwarding super
small payments for correspondingly super small fees, they should
raise min_htlc_amount from 0 (or 1000) to compensate, instead of
raising their base fee.

 c) software should dynamically increase min_htlc_amount as the
number of available htlc slots decreases, as a DoS mitigation measure.
(presuming this is a temporary increase, probably this wouldn't
be gossiped, and possibly not even communicated to the channel
counterparty -- just a way of immediately rejecting htlcs? I think
if something along these lines were implemented, (b) would almost
never be necessary)

 d) the default base fee should be changed to 0, 1, or 10msat instead
of 1000msat

 e) trivially: (I don't think anyone's saying otherwise)
 - 0 base fee should be a supported config option
 - research/experimental routing algorithms are great and should
   be encouraged
 - deploying new algorithms in production should only be done with
   a lot of care
 - changing the protocol should only be done with even more care
 - proportional fees should be rounded up to the next msat and never
   rounded down to 0
 - research/experiments on charging for holding htlcs open should
   continue (likewise research on other DoS prevention measures)

I'm not super sure about (c) or (d), and the "everyone" in (a) could
easily not really be everyone.

> If you disagree with the above characterization I'm happy to go line-by-line
> tit-for-tat, but usually those kinds of tirades aren't exactly useful and
> end up being more about semantics than 

Re: [Lightning-dev] #zerobasefee

2021-08-20 Thread Anthony Towns
On Mon, Aug 16, 2021 at 12:48:36AM -0400, Matt Corallo wrote:
> > The base+proportional fees paid only on success roughly match the *value*
> > of forwarding an HTLC, they don't match the costs particularly well
> > at all.
> Sure, indeed, there's some additional costs which are not covered by failed
> HTLCs, [...]
> Dropping base fee makes the whole situation a good chunk *worse*.

Can you justify that quantitatively?

Like, pick a realistic scenario, where you can make things profitable
with some particular base_fee, prop_fee, min_htlc_amount combination,
but can't reasonably pick another similarly profitable outcome with
base_fee=0?  (You probably need to have a bimodal payment distribution
with a micropayment peak and a regular payment peak, I guess, or perhaps
have particularly inelastic demand and highly competitive supply?)

> > And all those costs can be captured equally well (or badly) by just
> > setting a proportional fee and a minimum payment value. I don't know why
> > you keep ignoring that point.
> I didn't ignore this, I just disagree, and I'm not entirely sure why you're 
> ignoring the points I made to that effect :).

I don't think I've seen you explicitly disagree with that previously,
nor explain why you disagree with it? (If I've missed that, a reference
appreciated; explicit re-explanation also appreciated)

> In all seriousness, I'm entirely unsure why you think proportional is just
> as good?

In principle, because fee structures already aren't a good match, and
a simple approximation is better that a complicated approximation.
Specifically, because you can set
 
 min_htlc_msat=old_base_fee_msat * 1e6 / prop_fee_millionths

which still ensures every HTLC you forward offers a minimum fee of
old_base_fee_msat, and your fees still increase as the value transferred
goes up, which in the current lightning environment seems like it's just
as good an approximation as if you'd actually used "old_base_fee_msat".

For example, maybe you pay $40/month for your node, which is about 40msat
per second [0], and you really can only do one HTLC per second on average
[1]. Then instead of a base_fee of 40msat, pick your proportional rate,
say 0.03%, and calculate your min_htlc amount as above, ie 133sat. So if
someone sends 5c/133sat through you, they'll pay 40msat, and for every
~3 additional sats, they'll pay you an additional 1msat. Your costs are
covered, and provided your fee rate is competitive and there's traffic
on the network, you'll make your desired profit.

If your section of the lightning network is being used mainly for
microtransactions, and you're not competitive/profitable when limiting
yourself to >5c transactions, you could increase your proportional fee
and lower your min_htlc amount, eg to 1% and 4sat so that you'll get
your 40msat from a 4sat/0.16c HTLC, and increase at a rate of 10msat/sat
after that.

That at least matches the choices you're probably actually making as a
node operator: "I'm trying to be cheap at 0.03% and focus on relatively
large transfers" vs "I'm focussing on microtransactions by reducing the
minimum amount I'll support and being a bit expensive". I don't think
anyone's setting a base fee by calculating per-tx costs (and if they
were, per the footnote, I'm not convinced it'd even justify 1msat let
alone 1sat per tx).

OTOH, if you want to model an arbitrary concave fee function (because
you have some scheme that optimises fee income by discriminating against
smaller payments), you could do that by having multiple channels between
the same nodes, which is much more effective with (base, prop) fee pairs
than with (prop, min) pairs. (With (prop, min) pairs, you end up with
large ranges of the domain that would prefer to pay prop2*min2 rather
than prop1*x when x As you note, the cost for nodes is a function of the opportunity
> cost of the capital, and opportunity cost of the HTLC slots. Lets say as a
> routing node I decide that the opportunity cost of one of my HTLC slots is
> generally 1 sat per second, and the average HTLC is fulfilled in one second.
> Why is it that a proportional fee captures this "equally well"?!

If I send an HTLC through you, I can pay your 1 sat fee, then keep the
HTLC open for a day, costing you 86,400 sats by your numbers. So I don't
think that's even remotely close to capturing the costs of the individual
HTLC that's paying the fee.

But if your averages are right, and enough people are nice despite me
being a PITA, then you can get the same minimum with a proportional fee;
if you're charging 0.1% you set the minimum amount to be 1000 sats.

(But 1sat per HTLC is ridiculously expensive, like at least 20x over
what your actual costs would be, even if your hardware is horribly slow
and horribly expensive)

> Yes, you could amortize it, 

You're already amortizing it: that's what "generally 1 sat per second"
and "average HTLC is fulfilled in one second" is capturing.

> but that doesn't make it "equally" good, and
> there are semi-ser

Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread Anthony Towns
On Sun, Aug 15, 2021 at 10:21:52PM -0400, Matt Corallo wrote:
> On 8/15/21 22:02, Anthony Towns wrote:
> > > In
> > > one particular class of applicable routing algorithms you could use for
> > > lightning routing having a base fee makes the algorithm intractably slow,
> > I don't think of that as the problem, but rather as the base fee having
> > a multiplicative effect as you split payments.
> Yes, matching the real-world costs of forwarding an HTLC.

Actually, no, not at all. 

The base+proportional fees paid only on success roughly match the *value*
of forwarding an HTLC, they don't match the costs particularly well
at all.

Why not? Because the costs are incurred on failed HTLCs as well, and
also depend on the time a HTLC lasts, and also vary heavily depending
on how many other simultaneous HTLCs there are.

> Yes. You have to pay the cost of a node. If we're really worried about this,
> we should be talking about upfront fees and/or refunds on HTLC fulfillment,
> not removing the fees entirely.

(I don't believe either of those are the right approach, but based on
previous discussions, I don't think anyone's going to realise I'm right
until I implement it and prove it, so *shrug*)

> > Being denominated in sats, the base fee also changes in value as the
> > bitcoin price changes -- c-lightning dropped the base fee to 1sat (from
> > 546 sat!) in Jan 2018, but the value of 1sat has increased about 4x
> > since then, and it seems unlikely the fixed costs of a successful HTLC
> > payment have likewise increased 4x.  Proportional fees deal with this
> > factor automatically, of course.
> This isn't a protocol issue, implementations can automate this without issue.

I don't think anyone's proposing the protocol be changed; just that node
operators set an option to a particular value?

Well, except that Lisa's maybe proposing that 0 not be allowed, and a
value >= 0.001 sat be required? I'm not quite sure.

> > > There's real cost to distorting the fee structures on the network away 
> > > from
> > > the costs of node operators,
> > That's precisely what the base fee is already doing.
> Huh? For values much smaller than a node's liquidity, the cost for nodes is
> (mostly) a function of HTLCs, not the value. 

Yes, the cost for nodes is a function of the requests that come in, not
how many succeed. The fees are proportional to how many succeed, which
is at best a distorted reflection of the number of requests that come in.

> The cost to nodes is largely [...]

The cost to nodes is almost entirely the opportunity cost of not being
able to accept other txs that would come in afterwards and would pay
higher fees.

And all those costs can be captured equally well (or badly) by just
setting a proportional fee and a minimum payment value. I don't know why
you keep ignoring that point.

> so I'd argue for many HTLCs forwarded
> today per-payment costs mirror the cost to a node much, much, much, much
> better than some proportional fees?

You're talking yourself into a *really* broken business model there.

> > Additionally, I don't think HTLC slot usage needs to be kept as a
> > limitation after we switch to eltoo;
> The HTLC slot limit is to keep transactions broadcastable. I don't see why
> this would change, you still get an output for each HTLC on the latest
> commitment in eltoo, AFAIU.

eltoo gives us the ability to have channel factories, where we divide
the overall factory balance amongst different channels, all updated
off-chain. It seems likely we'll want to do factories from day one,
so that we don't implicitly limit either the lifetime of the channel
or its update rate (>1 update/sec ~= <4 year lifetime otherwise if I
did the maths right). Once we're doing factories, if we have more than
however many htlcs for a channel, we can re-divide the factory balance
and add a new channel. If the limit is 500 HTLCs per tx, you'd have to
amortize 0.2% of the new tx across each HTLC, in addition to the cost
of the HTLC itself, but that seems trivial.

> > and in the meantime, I think it can
> > be better managed via adjusting the min_htlc_amount -- at least for the
> > scenario where problems are being caused by legitimate payment attempts,
> > which is also the only place base fee can help.
> Sure, we could also shift towards upfront fees or similar solutions,

Upfront fees seem extremely vulnerable to attacks, and are certainly a
(pretty large) protocol change.

> > > Instead, we should investigate how we can
> > > apply the ideas here with the more complicated fee structures we have.
> > Fee structures should be *simple* not complicated.
> > I mean, it's kind

Re: [Lightning-dev] #zerobasefee

2021-08-15 Thread Anthony Towns
On Sun, Aug 15, 2021 at 07:19:01AM -0500, lisa neigut wrote:
> My suggestion would be that, as a compromise, we set a network wide minimum 
> fee
> at the protocol level of 1msat.

Is that different in any meaningful way to just saying "fees get rounded
up to the nearest msat" ? If the fee is 999.999msat, expecting to get
away with paying less than 1sat seems kinda buggy to me.

On Sun, Aug 15, 2021 at 08:04:52PM -0400, Matt Corallo wrote:
> I'm frankly still very confused why we're having these conversations now.

Because it's what people are thinking about. The bar for having a
conversation about something is very low...

> In
> one particular class of applicable routing algorithms you could use for
> lightning routing having a base fee makes the algorithm intractably slow,

I don't think of that as the problem, but rather as the base fee having
a multiplicative effect as you split payments.

If every channel has the same (base,proportional) fee pair, and send a
payment along a single path, you're paying n*(base+k*proportional). If
you split the payment, and send half of it one way, and half the other
way, you're paying n*(2*base+k*proportional). If you split the payment
four ways, you're paying n*(4*base+k*proportional). Where's the value
to the network in penalising payment splitting?

Being denominated in sats, the base fee also changes in value as the
bitcoin price changes -- c-lightning dropped the base fee to 1sat (from
546 sat!) in Jan 2018, but the value of 1sat has increased about 4x
since then, and it seems unlikely the fixed costs of a successful HTLC
payment have likewise increased 4x.  Proportional fees deal with this
factor automatically, of course.

> There's real cost to distorting the fee structures on the network away from
> the costs of node operators, 

That's precisely what the base fee is already doing. Yes, we need some
other way of charging fees to prevent using up too many slots or having
transactions not fail in a timely manner, but the base fee does not
do that.

> Imagine we find some great way to address HTLC slot flooding/DoS attacks (or
> just chose to do it in a not-great way) by charging for HTLC slot usage, now
> we can't fix a critical DoS issue because the routing algorithms we deployed
> can't handle the new costing.

I don't think that's true. The two things we don't charge for that can
be abused by probing spam are HTLC slot usage and channel balance usage;
both are problems only in proportion to the amount of time they're held
open, and the latter is also only a problem proportional to the value
being reserved. [0]

Additionally, I don't think HTLC slot usage needs to be kept as a
limitation after we switch to eltoo; and in the meantime, I think it can
be better managed via adjusting the min_htlc_amount -- at least for the
scenario where problems are being caused by legitimate payment attempts,
which is also the only place base fee can help.

[0] (Well, ln-penalty's requirement to permanently store HTLC information
 in order to apply the penalty is in some sense a constant
 cost, however the impact is also proportional to value, and for
 sufficiently low value HTLCs can be ignored entirely if the HTLC
 isn't included in the channel commitment)

> Instead, we should investigate how we can
> apply the ideas here with the more complicated fee structures we have.

Fee structures should be *simple* not complicated.

I mean, it's kind of great that we started off complicated -- if it
turns out base fee isn't necessary, it's easy to just set it to zero;
if we didn't have it, but needed it, it would be much more annoying to
add it in later.

> Color me an optimist, but I'm quite confident with sufficient elbow grease
> and heuristics we can get 95% of the way there. We can and should revisit
> these conversations if such exploration is done and we find that its not
> possible, but until then this all feels incredibly premature.

Depends; I don't think it makes sense to try to ban nodes that don't have
a base fee of zero or anything, but random people on twitter advocating
that node operators should set it to zero and just worry about optimising
via the proportional fee and the min htlc amount seems fine.

For an experimental plugin that aggressively splits payments up, I think
either ignoring channels with >0 base fee entirely, or deciding that
you're happy to spend a total of X sats on base fees, and then ignoring
channels whose base fee is greater than X/paths/path-length sats is fine.

But long term, I also think that the base fee is an entirely unhelpful
complication that will eventually just be hardcoded to zero by everyone,
and eventually channels that propose non-zero base fees won't even be
gossiped. I don't expect that to happen any time soon though.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] #zerobasefee

2021-08-14 Thread Anthony Towns
Hey *,

There's been discussions on twitter and elsewhere advocating for
setting the BOLT#7 fee_base_msat value [0] to zero. I'm just writing
this to summarise my understanding in a place that's able to easily be
referenced later.

Setting the base fee to zero has a couple of benefits:

 - it means you only have one value to optimise when trying to collect
   the most fees, and one-dimensional optimisation problems are
   obviously easier to write code for than two-dimensional optimisation
   problems

 - when finding a route, if all the fees on all the channels are
   proportional only, you'll never have to worry about paying more fees
   just as a result of splitting a payment; that makes routing easier
   (see [1])

So what's the cost? The cost is that there's no longer a fixed minimum
fee -- so if you try sending a 1sat payment you'll pay 0.1% of the fee
to send a 1000sat payment, and there may be fixed costs that you have
in routing payments that you'd like to be compensated for (eg, the
computational work to update channel state, the bandwith to forward the
tx, or the opportunity cost for not being able to accept another htlc if
you've hit your max htlcs per channel limit).

But there's no need to explicitly separate those costs the way we do
now; instead of charging 1sat base fee and 0.02% proportional fee,
you can instead just set the 0.02% proportional fee and have a minimum
payment size of 5000 sats (htlc_minimum_msat=5e6, ~$2), since 0.02%
of that is 1sat. Nobody will be asking you to route without offering a
fee of at least 1sat, but all the optimisation steps are easier.

You could go a step further, and have the node side accept smaller
payments despite the htlc minimum setting: eg, accept a 3000 sat payment
provided it pays the same fee that a 5000 sat payment would have. That is,
treat the setting as minimum_fee=1sat, rather than minimum_amount=5000sat;
so the advertised value is just calculated from the real settings,
and that nodes that want to send very small values despite having to
pay high rates can just invert the calculation.

I think something like this approach also makes sense when your channel
becomes overloaded; eg if you have x HTLC slots available, and y channel
capacity available, setting a minimum payment size of something like
y/2/x**2 allows you to accept small payments (good for the network)
when you're channel is not busy, but reserves the last slots for larger
payments so that you don't end up missing out on profits because you
ran out of capacity due to low value spam.

Two other aspects related to this:

At present, I think all the fixed costs are also incurred even when
a htlc fails, so until we have some way of charging failing txs for
incurring those costs, it seems a bit backwards to penalise successful
txs who at least pay a proportional fee for the same thing. Until we've
got a way of handling that, having zero base fee seems at least fair.

Lower value HTLCs don't need to be included in the commitment transaction
(if they're below the dust level, they definitely shouldn't be included,
and if they're less than 1sat they can't be included), and as such don't
incur all the same fixed costs that HTLCs that are committed too do.
Having different base fees for microtransactions that incur fewer costs
would be annoying; so having that be "amortised" into the proportional
fee might help there too.

I think eltoo can help in two ways by reducing the fixed costs: you no
longer need to keep HTLC information around permanently, and if you do
a multilevel channel factory setup, you can probably remove the ~400
HTLCs per channel at any one time limit. But there's still other fixed
costs, so I think that would just lower the fixed costs, not remove them
altogether and isn't a fundamental change.

I think the fixed costs for forwarding a HTLC are very small; something
like:

   0.02sats -- cost of permanently storing the HTLC info
   (100 bytes, $500/TB/year, 1% discount rate)
   0.04sats -- compute and bandwidth cost for updating an HTLC ($40/month
   at linode, 1 second of compute)

The opportunity cost of having HTLC slots or Bitcoin locked up until
the HTLC succeeds/fails could be much more significant, though.

Cheers,
aj

[0] 
https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md#the-channel_update-message
[1] https://basefee.ln.rene-pickhardt.de/

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-12 Thread Anthony Towns
On Tue, Aug 10, 2021 at 06:37:48PM -0400, Antoine Riard via bitcoin-dev wrote:
> Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime of
> the HTLC.

Right: but that just means it's not something you should determine once
for the HTLC, but something you should determine each time you update the
channel commitment -- if fee rates are at 1sat/vb, then a 10,000 sat HTLC
that's going to cost 100 sats to create the utxo and eventually claim it
might be worth committing to, but if fee rates suddenly rise to 75sat/vb,
then the combined cost of 7500 sat probably isn't worthwhile (and it
certainly isn't worthwhile if fees rise to above 100sat/vb).

That's independent of dust limits -- those only give you a fixed size
lower limit or about 305sats for p2wsh outputs.

Things become irrational before they become uneconomic as well: ie the
100vb is perhaps 40vb to create then 60vb to spend, so if you create
the utxo anyway then the 40vb is a sunk cost, and redeeming the 10k sats
might still be marginally wortwhile up until about 167sat/vb fee rate.

But note the logic there: it's an uneconomic output if fees rise above
167sat/vb, but it was already economically irrational for the two parties
to create it in the first place when fees were at or above 100sat/vb. If
you're trying to save every sat, dust limits aren't your problem. If
you're not trying to save every sat, then just add 305 sats to your
output so you avoid the dust limit.

(And the dust limit is only preventing you from creating outputs that
would be irrational if they only required a pubkey reveal and signature
to spend -- so a HTLC that requires revealing a script, two hashes,
two pubkeys, a hash preimage and two signatures with the same dust
threshold value for p2wsh of ~305sats would already be irrational at
about 2.1sat/vb and unconomic at 2.75 sat/vb).

> (From a LN viewpoint, I would say we're trying to solve a price discovery
> issue, namely the cost to write on the UTXO set, in a distributed system, 
> where
> any deviation from the "honest" price means you trust more your LN
> counterparty)

At these amounts you're already trusting your LN counterparty to not just
close the channel unilaterally at a high fee rate time and waste your
funds in fees, vs doing a much for efficient mutual/cooperative close.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Impact of eltoo loss of state

2021-07-20 Thread Anthony Towns
On Wed, Jul 14, 2021 at 04:44:24PM +0200, Christian Decker wrote:
> Not quite sure if this issue is unique to eltoo tbh. While in LN-penalty
> loss-of-state equates to loss-of-funds, in eltoo this is reduced to
> impact only funds that are in a PTLC at the time of the loss-of-state.

Well, the idea (in my head at least) is it should be "safe" to restore
an eltoo channel from a backup even if it's out of date, so the question
is what "safe" can actually mean. LN-penalty definitely isn't safe in
that scenario.

>  2) Use the peer-storage idea, where we deposit an encrypted bundle with
>  our peers, and which we expect the peers to return. by hiding the fact
>  that we forgot some state, until the data has been exchanged we can
>  ensure that peers always return the latest snapshot of whatever we gave
>  them.

I don't think you can reliably hide that you forgot some state? If you
_did_ forget your state, you'll have forgotten their latest bundle too,
and it seems like there's at least a 50/50 chance you'd have to send
them their bundle before they sent you yours?

Sharing with other peers has costs too -- if you can't commit to an
updated state with peer A until you've sent the updated data to peers
B and C as backup, then you've got a lot more latency on each channel,
for example. And if you commit first, then you've got the problem of
what happens if you crash before the update has made it to either B or C?

But I guess what I'm saying is sure -- those are great ideas, but they
only reduce the chance that you'll not have the latest state, they don't
eliminate it.

But it seems like it can probably be reduced enough that it's fine that
you're risking the balances in live HTLCs (or perhaps HTLCs that have
been initiated since your last state backup), as long as you're at least
able to claim your channel balance from whatever more recent state your
peers may have.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-13 Thread Anthony Towns
On Mon, Jul 12, 2021 at 03:07:29PM -0700, Jeremy wrote:
> Perhaps there's a more general principle -- evaluating a script should
> only return one bit of info: "bool tx_is_invalid_script_failed"; every
> other bit of information -- how much is paid in fees (cf ethereum gas
> calculations), when the tx is final, if the tx is only valid in some
> chain fork, if other txs have to have already been mined / can't have
> been mined, who loses funds and who gets funds, etc... -- should already
> be obvious from a "simple" parsing of the tx.
> I don't think we have this property as is.
> E.g. consider the transaction:
> TX:
>    locktime: None
>    sequence: 100
>    scriptpubkey: 101 CSV

That tx will never be valid, no matter the state of the chain -- even if
it's 420 blocks after the utxo it's spending: it fails because "top stack
item is greater than the transaction input sequence" rule from BIP 112.

> How will you tell it is able to be included without running the script?

You have to run the script at some point, but you don't need to run the
script to differentiate between it being valid on one chain vs valid on
some other chain.

> What's nice is the transaction in this form cannot go from invalid to valid --
> once invalid it is always invalid for a given UTXO.

Huh? Timelocks always go from invalid to valid -- they're invalid prior
to some block height (IsFinal() returns false), then valid after.

Not going from valid to invalid is valuable because it limits the cases
where you have to remove txs (and their descendents) from the mempool.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Impact of eltoo loss of state

2021-07-12 Thread Anthony Towns
Hello world,

Suppose you have some payments going from Alice to Bob to Carol with
eltoo channels. Bob's lightning node crashes, and he recovers from an
old backup, and Alice and Carol end up dropping newer channel states
onto the blockchain.

Suppose the timeout for the payments is a few hours away, while the
channels have specified a week long CSV delay to rectify any problems
on-chain.

Then I think that that means that:

 1) Carol will reveal the point preimages on-chain via adaptor
signatures, but Bob won't be able to decode those adaptor signatures
because those signatures will need to change for each state

 2) Even if Bob knows the point preimages, he won't be able to
claim the PTLC payments on-chain, for the same reason: he needs
newer adaptor signatures that he'll have lost with the state update

 3) For any payments that timeout, Carol doesn't have any particular
incentive to make it easy for Bob to claim the refund, and Bob won't
have the adaptor signatures for the latest state to do so

 4) But Alice will be able to claim refunds easily. This is working how
it's meant to, at least!

I think you could fix (3) by giving Carol (who does have all the adaptor
signatures for the latest state) the ability to steal funds that are
meant to have been refunded, provided she gives Bob the option of claiming
them first.

However fixing (1) and (2) aren't really going against Alice or Carol's
interests, so maybe you can just ask: Carol loses nothing by allowing
Bob to claim funds from Alice; and Alice has already indicated that
knowing P is worth more to her than the PTLC's funds -- otherwise she
wouldn't have forwarded the PTLC to Bob in the first place.

Likewise, everyone's probably incentivised to negotiate cooperative
closes instead of going on-chain -- better privacy, less fees, and less
delay before the funds can be used elsewhere.

FWIW, I think a similar flaw exists even in the original eltoo spec --
Alice could simply decline to publish the settlement transaction until
the timeout has been reached, preventing Bob from revealing the HTLC
preimage before Alice can claim the refund.

So I think that adds up to:

 a) Nodes should share state on reconnection; if you find a node that
doesn't do this, close the channel and put the node on your enemies
list. If you disagree on what the current state is, share your most
recent state, and if the other guy's state is more recent, and all
the signatures verify, update your state to match theirs.

 b) Always negotiate a mutual/cooperative close if possible, to avoid
actually using the eltoo protocol on-chain.

 c) If you want to allow continuing the channel after restoring an old
state from backup, set the channel state index based on the real time,
eg (real_time-start_time)*(max_updates_per_second). That way your
first update after a restore from backup will ensure that any old
states that your channel partner may not have told you about are
invalidated.

 d) Accept that if you lose connectivity to a channel partner, you will
have to pay any PTLCs that were going to them, and won't be able
to claim the PTLCs that were funding them. Perhaps limit the total
value of inbound PTLCs for forwarding that you're willing to accept
at any one itme?

Also, layered commitments seem like they make channel factories
complicated too. Nobody came up with a way to avoid layered commitments
while I wasn't watching did they?

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-11 Thread Anthony Towns
On Thu, Jul 08, 2021 at 08:48:14AM -0700, Jeremy wrote:
> This would disallow using a relative locktime and an absolute locktime
> for the same input. I don't think I've seen a use case for that so far,
> but ruling it out seems suboptimal.
> I think you meant disallowing a relative locktime and a sequence locktime? I
> agree it is suboptimal.

No? If you overload the nSequence for a per-input absolute locktime
(well in the past for eltoo), then you can't reuse the same input's
nSequence for a per-input relative locktime (ie CSV).

Apparently I have thought of a use for it now -- cut-through of PTLC
refunds when the timeout expires well after the channel settlement delay
has passed. (You want a signature that's valid after a relative locktime
of the delay and after the absolute timeout)

> What do you make of sequence tagged keys?

I think we want sequencing restrictions to be obvious from some (simple)
combination of nlocktime/nsequence/annex so that you don't have to
evaluate scripts/signatures in order to determine if a transaction
is final.

Perhaps there's a more general principle -- evaluating a script should
only return one bit of info: "bool tx_is_invalid_script_failed"; every
other bit of information -- how much is paid in fees (cf ethereum gas
calculations), when the tx is final, if the tx is only valid in some
chain fork, if other txs have to have already been mined / can't have
been mined, who loses funds and who gets funds, etc... -- should already
be obvious from a "simple" parsing of the tx.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Eltoo Burst Mode & Continuations

2021-07-10 Thread Anthony Towns
On Sat, Jul 10, 2021 at 02:07:02PM -0700, Jeremy wrote:
> Let's say you're about to hit your sequence limits on a Eltoo channel... Do 
> you
> have to go on chain?
> No, you could do a continuation where for your *final* update, you sign a move
> to a new update key. E.g.,

That adds an extra tx to the uncooperative path every 2**30 states.

> Doing layers like this inherently adds a bunch of CSV layers, so it increases
> resolution time linearly.

I don't think that's correct -- you should be using the CLTV path for
updating the state, rather than the CSV path; so CSV shouldn't matter.

On Sat, Jul 10, 2021 at 04:25:06PM -0700, Jeremy wrote:
> [...] signing a eltoo "trampoline".
> essentially, begin a burst session at pk1:N under pk2, but always include a
> third branch to go to any pk1:N+1.

I think this is effectively reinventing/special casing channel
factories? That is you start an eltoo channel factory amongst group
{A,B,C,...}, then if {A,B} want an eltoo channel, that's a single update
to the factory; that channel can get updated independently until A and
B get bored and want to close their channel, which is then a single
additional update to the factory. In this case, the factory just doesn't
include the additional members {C,...}.

On Sat, Jul 10, 2021 at 05:02:35PM -0700, Jeremy wrote:
> suppose you make a Taproot tree with N copies (with different keys) of the
> state update protocol.

This feels cleverer/novel to me -- but as you point out it's actually
more costly than the trampoline/factory approach so perhaps it's not
that great.

I think what you'd do is change from a single tapscript of "OP_1
CHECKSIG <500e6+i> CLTV" to a tree of tapscripts:

  " CHECKSIG <500e6+j+1> CLTV"

so if your state is (i*2**30 + j) you're spending using  with a
locktime of 500e6+j, and you're allowing later spends with the above script
filled in with (i,j) or (i',0) for i You can take a random path through which leaf you are using which, if you're
> careful about how you construct your scripts (e.g., keeping the trees the same
> size) you can be more private w.r.t. how many state updates you performed
> throughout the protocol (i.e., you can see the low order bits in the CLTV
> clause, but the high order bits of A, B, C's relationship is not revealed if
> you traverse them in a deterministically permuted order).

Tapscript trees are shuffled randomly based on the hashes of their
scripts, so I think that's a non-issue. You could keep the trees the
same size by adding scripts " CHECKSIG <500e6+j+1> RETURN".

> The space downside of this approach v.s. the approach presented in the prior
> email is that the prior approach achieves 64 bits with 2 txns one of which
> should be like 150 bytes, a similar amount of data for the script leaves may
> only gets you 5 bits of added sequence space. 

You'd get 2**34 states (4 added bits of added sequence space) for
about 161 extra bytes (4 merkle branches at 32B each and revealing the
pubkey for 33B), compared to about 2**60 states (2**30 states for the
second tx, with a different second tx for each of the 2**30 states of
the first tx). Haven't done the math to check the 150 byte estimate,
but it seems the right ballpark.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Eltoo / Anyprevout & Baked in Sequences

2021-07-08 Thread Anthony Towns
On Wed, Jul 07, 2021 at 06:00:20PM -0700, Jeremy via bitcoin-dev wrote:
> This means that you're overloading the CLTV clause, which means it's 
> impossible
> to use Eltoo and use a absolute lock time,

It's already impossible to simultaneously spend two inputs if one
requires a locktime specified by mediantime and the other by block
height. Having per-input locktimes would satisfy both concerns.

> 1) Define a new CSV type (e.g. define (1<<31 && 1<<30) as being dedicated to
> eltoo sequences). This has the benefit of giving a per input sequence, but the
> drawback of using a CSV bit. Because there's only 1 CSV per input, this
> technique cannot be used with a sequence tag.

This would disallow using a relative locktime and an absolute locktime
for the same input. I don't think I've seen a use case for that so far,
but ruling it out seems suboptimal.

Adding a per-input absolute locktime to the annex is what I've had in
mind. That could also be used to cheaply add a commitment to an historical
block hash (eg "the block at height 650,000 ended in cc6a") in order to
disambiguate which branch of a chain split or reorg your tx is valid for.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Lightning dice

2021-01-25 Thread Anthony Towns
Hey all,

Here's a rough design for doing something like satoshi dice (ie, gambling
on "guess the number I'm thinking of" but provably fair after the fact
[0]) on lighting, at least once PTLCs exist.

[0] 
https://bitcoin.stackexchange.com/questions/4609/how-can-a-wager-with-satoshidice-be-proven-to-be-fair

The security model is that if the casino cheats, you can prove they
cheated, but turning that proof into a way of getting your just rewards
is out of scope. (You could use the proof to discourage other people
from losing their money at the casino, or perhaps use it as evidence to
get a court or the police to act in your favour)

That we don't try to cryptographically guarantee the payout means we
can send both bets over lightning, but don't need to reserve the funds
for the bet payout for the lifetime of the bet. The idea is that's much
friendlier to the lightning network (you're not holding up funds of the
intermediary routing nodes) and it requires less capital to run the casino
than other approaches.

So the first thing to do is to set up a wager, between Bob the better
and Carol the casino, say. Carol offers a standard bet to anyone, say
1.8% chance of winning and a 50x payout, with up to 200 satoshi stake
(so 10k satoshi max payout).

We assume the bet is implemented as Bob and Carol both picking random
numbers (b and c, respectively), and who wins being decided based on
the relationship between those numbers.

We start off with two messages:

  m1: "C owes B ${amount}, provided values b and c are given where
   0 <= (b+c)%500 < 9 and b*G = ${Pb} and c*G = ${Pc}"

  m2: "C has paid B ${amount} for the ${b} ${c} bet"

The first message, if signed by C, and accompanied by consisent values
for b and c, serves as proof that Bob took the bet and won. The second
message, if signed by B, serves as proof that Carol didn't cheat Bob.

So the idea then is that Bob should get a signature for the first message
as soon as he pays the lightning invoice for the bet, and Carol should
get a signature for the latter, as soon as she's gotten the payout
after winning.

PTLCs make this possible, because when verifying a Schnorr signature,
you want:

  s*G = R + H(R,P,m)*P

but if you provide (R,P,m) initially, then you can calculate the right
hand side of the equation as the point, and then use a PTLC on that
point to pay for its preimage "s", at which point you have both s,R
which is the signature you were hoping for.

But you want to be even cleverer than this -- because as soon as Bob pays
Carol, Bob needs to not only have the signature but also have Carol's
"c". He can't have "c" before he pays, because that would allow him to
cheat (he could choose to bet only when the value of c guarantees he
wins). We can do that by making it an adaptor signature conditional on
c. That is, provide R,(s-c) as the adaptor signature instead of R,s.
Bob can verify "s-c" is correct, by verifying:

  (s-c)*G = R + H(R,P,m)*P - C

So the protocol becomes:

1 -- Setup)
  Bob has a pubkey B; picks random number b, calculates Pb = b*G.
  Sends bet, B, Pb to Carol.

  Carol decides she wants to accept the bet.
  Carol picks c, calculates Pc = c*G.
  Carol calculates m1(amount=50*bet, C, B, Pb, Pc), and generates a
   signature R1,s1 for it.
  Carol sends Pc,R1,(s1-c) to Bob, and a PTLC invoice for (bet,Pc)

  Bob checks the adaptor signature -- (s1-c)*G = R1 + H(C,R1,m1)*C - Pc

2 -- Bet)
  Bob pays the invoice, receiving "c".
  Bob checks if (b+c)%500 < 9, and if it isn't stops, having lost the
bet.
  Bob calculates m2(amount=50*bet, b, c) and produces a signature for
it, namely R2,s2.
  Bob calculates S2=s2*G.
  Bob sends b, R2 to Carol, and a PTLC invoice for (50*bet, S2)

3 -- Payout)
  Carol checks b,c complies with the bet parameters.
  Carol checks the signature -- S2 = R2 + H(R2,B,m2)*B
  Carol pays the invoice, receiving s2
 
I think it's pretty straightforward to see how this meets the goals:
as soon as Bob puts up the bet money, he can prove to anyone whether or
not he won the bet; and as soon as Carol pays, she has proof that she
paid.

Note that Bob could abort the protocol with a winning bet before
requesting the payout from Carol -- he already has enough info to prove
he's won and claim Carol isn't paying him out at this point. 

One way of dealing with this is to vet Bob's claim by sending b,R2 and a
PTLC invoice of (50*bet,S2) to Carol with yourself as the recipient -- you
can construct all that info from Bob's claim that Carol is cheating. If
Carol isn't cheating, she won't be able to tell you're not Bob and
will try paying the PTLC; at which point you know Carol's not cheating.
This protocol does't work without better spam defenses in lightning --
PTLC payments have to be serialised or Carol risks sending the payout
to Bob multiple times, and if many people want to verify Carol is(n't)
cheating, they can be delayed by just one verifier forcing Carol to wait
for the PTLC timeout to be reached.

Another way of

Re: [Lightning-dev] A proposal for up-front payments.

2020-02-27 Thread Anthony Towns
On Mon, Feb 24, 2020 at 01:29:36PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> > On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> >> And if there is a grace period, I can just gum up the network with lots
> >> of slow-but-not-slow-enough HTLCs.
> > Well, it reduces the "gum up the network for  blocks" to "gum
> > up the network for  seconds", which seems like a pretty
> > big win. I think if you had 20 hops each with a 1 minute grace period,
> > and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
> > second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
> > so at the very least, successfully performing this attack would be
> > demonstrating lightning's solved bitcoin's transactions-per-second
> > limitation?
> But the comparison here is not with the current state, but with the
> "best previous proposal we have", which is:
> 
> 1. Charge an up-front fee for accepting any HTLC.
> 2. Will hang-up after grace period unless you either prove a channel
>close, or gain another grace period by decrypting onion.

In general I don't really like comparing ideas that are still in
brainstorming mode; it's never clear whether there are unavoidable
pitfalls in one or the other that won't become clear until they're
actually implemented...

Specifically, I'm not a fan of either channel closes or peeling the onion
-- the former causes problems if you're trying to route across sidechains
or have lightning as a third layer above channel factories or similar,
and I'm not convinced even within Bitcoin "proving a channel close"
is that meaningful, and passing around decrypted onions seems like it
opens up privacy attacks.

Aside from those philosophical complaints, seems to me the simplest
attack would be:

  * route 1000s of HTLCs from your node A1 to your node A2 via different,
long paths, using up the total channel capacity of your A1/A2 nodes,
with long timeouts
  * have A2 offer up a transaction claiming that was the channel
close to A3; make it a real thing if necessary, but it's probably
fake-able
  * then leave the HTLCs open until they time out, using up capacity
from all the nodes in your 1000s of routes. For every satoshi of
yours that's tied up, you should be able to tie up 10-20sat of other
people's funds

That increases the cost of the attack by one on-chain transaction per
timeout period, and limits the attack surface by how many transactions
you can get started/completed within whatever the grace period is, but
it doesn't seem a lot better than what we have today, unless onchain
fees go up a lot.

(If the up-front fee is constant, then A1 paid a fee, and A2 collected a
fee so it's a net wash; if it's not constant then you've got a lot of
hassle making it work with any privacy I think)

> >   A->B: here's a HTLC, locked in
> >   B->C: HTLC proposal
> >   C->B: sure: updated commitment with HTLC locked in
> >   B->C: great, corresponding updated commitment, plus revocation
> >   C->B: revocation
> Interesting; this adds a trip, but not in latency (since C can still
> count on the HTLC being locked in at step 3).
> I don't see how it helps B though?  It still ends up paying A, and C
> doesn't pay anything?

The updated commitment has C paying B onchain; if B doesn't receive that
by the time the grace period's about over, B can cancel the HTLC with A,
and then there's statemachine complexity for B to cancel it with C if
C comes alive again a little later.

> It forces a liveness check of C, but TBH I dread rewriting the state
> machine for this when we can just ping like we do now.

I'd be surprised if making musig work doesn't require a dread rewrite
of the state machine as well, and then there's PTLCs and eltoo...

> >> There's an old proposal to fast-fail HTLCs: Bob sends an new message "I
> >> would fail this HTLC once it's committed, here's the error" 
> > Yeah, you could do "B->C: proposal, C->B: no way!" instead of "sure" to
> > fast fail the above too. 
> > And I think something like that's necessary (at least with my view of how
> > this "keep the HTLC open" payment would work), otherwise B could send C a
> > "1 microsecond grace period, rate of 3e11 msat/minute, HTLC for 100 sat,
> > timeout of 2016 blocks" and if C couldn't reject it immediately would
> > owe B 50c per millisecond it took to cancel.
> Well, surely grace period (and penalty rate) are either fixed in the
> protocol or negotiated up-front, not per-HTLC.

I think the "kee

Re: [Lightning-dev] A proposal for up-front payments.

2020-02-23 Thread Anthony Towns
On Fri, Feb 21, 2020 at 12:35:20PM +1030, Rusty Russell wrote:
> > I think the way it would end up working
> > is that the further the route extends, the greater the payments are, so:
> >   A -> B   : B sends A 1msat per minute
> >   A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
> >   A -> B -> C -> D : D sends C 3 msat, etc
> >   A -> B -> C -> D -> E : E sends D 4 msat, etc
> > so each node is receiving +1 msat/minute, except for the last one, who's
> > paying n msat/minute, where n is the number of hops to have gotten up to
> > the last one. There's the obvious privacy issue there, with fairly
> > obvious ways to fudge around it, I think.
> Yes, it needs to scale with distance to work at all.  However, it has
> the same problems with other upfront schemes: how does E know to send
> 4msat per minute?

D tells it "if you want this HTLC, you'll need to pay 4msat/minute after
the grace period of 65 seconds". Which also means A as the originator can
also choose whatever fees they like. The only consequence of choosing too
high a fee is that it's more likely one of the intermediate nodes will
say "screw that!" and abort the HTLC before it gets to the destination.

> > I think it might make sense for the payments to have a grace period --
> > ie, "if you keep this payment open longer than 20 seconds, you have to
> > start paying me x msat/minute, but if it fulfills or cancels before
> > then, it's all good".
> But whatever the grace period, I can just rely on knowing that B is in
> Australia (with a 1 second HTLC commit time) to make that node bleed
> satoshis.  I can send A->B->C, and have C fail the htlc after 19
> seconds for free.  But B has to send 1msat to A.  B can't blame A or C,
> since this attack could come from further away, too.

So A gives B a grace period of 35 seconds, B deducts 5 seconds
processing time and 10 seconds for latency, so gives C a grace period of
20 seconds; C rejects after 19 seconds, and B still has 15 seconds to
notify A before he has to start paying fees. Same setup as decreasing
timelocks when forwarding HTLCs.

> And if there is a grace period, I can just gum up the network with lots
> of slow-but-not-slow-enough HTLCs.

Well, it reduces the "gum up the network for  blocks" to "gum
up the network for  seconds", which seems like a pretty
big win. I think if you had 20 hops each with a 1 minute grace period,
and each channel had a max_accepted_htlcs of 30, you'd need 25 HTLCs per
second to block 1000 channels (so 2.7% of the 36k channels 1ml reports),
so at the very least, successfully performing this attack would be
demonstrating lightning's solved bitcoin's transactions-per-second
limitation?

I think you could do better by having the acceptable grace period be
dynamic: both (a) requiring a shorter grace period the more funds a HTLC
locks up, which stops a single HTLC from gumming up the channel, and (b) 
requiring a shorter grace period the more active HTLCs you have (or, the
more active HTLCs you have that are in the grace period, perhaps). That
way if the network is loaded, you're prioritising more efficient routes
(or at least ones that are willing to pay their way), and if it's under
attack, you're dynamically increasing the resources needed to maintain
the attack.

Anyway, that's my hot take; not claiming it's a perfect solution or
final answer, rather that this still seems worth brainstorming out.

My feeling is that this might interact nicely with the sender-initiated
upfront fee. Like, you could replace a grace period of 30 seconds at
2msat/minute by always charging 2msat/minute but doing a forward payment
of 1msat. But at this point I can't keep it all in my head at once to
figure out something that really makes sense.

> > Maybe this also implies a different protocol for HTLC forwarding,
> > something like:
> >   1. A sends the HTLC onion packet to B
> >   2. B decrypts it, makes sure it makes sense
> >   3. B sends a half-signed updated channel state back to A
> >   4. A accepts it, and forwards the other half-signed channel update to B
> > so that at any point before (4) Alice can say "this is taking too long,
> > I'll start losing money" and safely abort the HTLC she was forwarding to
> > Bob to avoid paying fees; while only after (4) can she start the time on
> > expecting Bob to start paying fees that she'll forward back. That means
> > 1.5 round-trips before Bob can really forward the HTLC on to Carol;
> > but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
> > as Alice/Bob has finished (2).
> We added a ping-before-commit[1] to avoid the case where B has disconnected
> and we don't know yet; we have to assume an HTLC is stuck once we send
> commitment_signed.  This would be a formalization of that, but I don't
> think it's any better?

I don't think it's any better as things stand, but with the "B pays A
holding fees" I think it becomes necessary. If you've got a route
A->B->C then from B's perspective I think it cur

Re: [Lightning-dev] A proposal for up-front payments.

2020-02-19 Thread Anthony Towns
On Thu, Feb 20, 2020 at 03:42:39AM +, ZmnSCPxj wrote:
> A thought that arises here is, what happens if I have forwarded a payment, 
> then the outgoing channel is dropped onchain and that peer disconnects from 
> me?
> 
> Since the onchain HTLC might have a timelock of, say, a few hundred blocks 
> from now, the outgoing peer can claim it up until the timelock.
> If the peer does not claim it, I cannot claim it in my incoming as well.
> I also cannot safely fail my incoming, as the outgoing peer can still claim 
> it until the timelock expires.

Suppose the channel state looks like:

  Bob's balance:   $150
  Carol's balance: $500
  Bob to Carol: $50, hash X, timelock +2016 blocks

The pre-signed close transaction will have already deducted maybe $1 in
fees, say 50c from each balance.

At 5% pa, that's $50*0.05*2/52, so about 10 cents worth of "holding"
fees, so that seems like it's worth just committing to up-front, ie:

  Bob's balance:   $149.60 (-.50+.10)
  Carol's balance: $499.40 (-.50-.10)
  Bob to Carol: $50, hash X, timelock +2016 blocks
  Fees:  $1

And that seems necessary anyway: if the channel does drop to the chain,
then the HTLC can't be cancelled, so if it never confirms, Bob will have
had to pay, say, 9.5c to Alice waiting for the timeout, and can then
immediately cancel the HTLC with Alice allowing it to finish unwinding.

So I think the idea would be not to accept a (rate, amount, timelock)
tuple for an incoming HTLC unless the rate*amount*timelock product
is substantially less than what you're putting towards the blockchain
fees anyway, as otherwise you've got bad incentives for the other guy to
drop to the chain.

Note the rate increases with number of hops, so if it's 1% pa per hop,
the 11th peer will be emitting 10% pa. I think that's probably okay,
because BTC's deflationary nature probably means you don't need to earn
much interest on it, and you can naturally choose the rate dynamically
based on how many HTLCs you currently have open and how much of your
channel funds are being used up by the HTLC?

Also, you'd presumably update your channel state every hundred blocks,
reducing the 10c by half a cent or so each time, so you could have your
risk reduce. Maybe there could be some way of bumping the timelock across
a HTLC path so that the risk is capped, but if the HTLC is still being
paid for it doesn't have to be cancelled?

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A proposal for up-front payments.

2020-02-19 Thread Anthony Towns
On Tue, Feb 18, 2020 at 10:23:29AM +0100, Joost Jager wrote:
> A different way of mitigating this is to reverse the direction in which the
> bond is paid. So instead of paying to offer an htlc, nodes need to pay to
> receive an htlc. This sounds counterintuitive, but for the described jamming
> attack there is also an attacker node at the end of the route. The attacker
> still pays.

I think this makes a lot of sense. I think the way it would end up working
is that the further the route extends, the greater the payments are, so:

  A -> B   : B sends A 1msat per minute
  A -> B -> C : C sends B 2msat per minute, B forwards 1msat/min to A
  A -> B -> C -> D : D sends C 3 msat, etc
  A -> B -> C -> D -> E : E sends D 4 msat, etc

so each node is receiving +1 msat/minute, except for the last one, who's
paying n msat/minute, where n is the number of hops to have gotten up to
the last one. There's the obvious privacy issue there, with fairly
obvious ways to fudge around it, I think.

But that's rational, because that last node can either (a) collect the
payment, covering their cost; or (b) forward the payment, at which point
they'll start collecting funds rather than paying them; or (c) cancel
the payment releasing all the locked up funds all the way back.

I think it might make sense for the payments to have a grace period --
ie, "if you keep this payment open longer than 20 seconds, you have to
start paying me x msat/minute, but if it fulfills or cancels before
then, it's all good".

I'm not sure if there needs to be any enforcement for this beyond "this
peer isn't obeying the protocol, so I'm going to close the channel"; not
even sure it's something that needs to be negotiated as part of payment
routing -- it could just be something each peer does for HTLCs on their
channels? If that can be made to work, it doesn't need much crypto or
bitcoin consensus changes, or even much deployment coordination, all of
which would be awesome.

I think at $10k/BTC then 1msat is about the fair price for locking up $5
worth of BTC (so 50k sat) for 1 minute at a 1% pa interest rate, fwiw.

Maybe this opens up some sort of an attack where a peer lies about the
time to make the "per minute" go faster, but if msats-per-minute is the
units, not sure that really matters.

Maybe this also implies a different protocol for HTLC forwarding,
something like:

  1. A sends the HTLC onion packet to B
  2. B decrypts it, makes sure it makes sense
  3. B sends a half-signed updated channel state back to A
  4. A accepts it, and forwards the other half-signed channel update to B

so that at any point before (4) Alice can say "this is taking too long,
I'll start losing money" and safely abort the HTLC she was forwarding to
Bob to avoid paying fees; while only after (4) can she start the time on
expecting Bob to start paying fees that she'll forward back. That means
1.5 round-trips before Bob can really forward the HTLC on to Carol;
but maybe it's parallelisable, so Bob/Carol could start at (1) as soon
as Alice/Bob has finished (2).

Cheers
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Layered commitments with eltoo

2020-01-21 Thread Anthony Towns
Hi all,

At present, BOLT-3 commitment transactions provide a two-layer
pay-to-self path, so that you can reduce the three options:

  1) pay-to-them due to revoked commitment
  2) pay-to-me due to timeout (or: preimage known)
  3) pay-to-them due to preimage known (or: timeout)

to just the two options:

  1) pay-to-them due to revoked commitment
  2) pay-to-me due to timeout (or: preimage known)

This allows the `to_self_delay` and the HTLC timeout (and hence the
`cltv_expiry_delta`) to be chosen independently.

As it stands, both the original eltoo proposal [0] and the
ANYPREVOUT-based sketch [1] don't have this property, which means that
either the `cltv_expiry_delta` needs to take the `to_self_delay` into
account, or you risk not being able to claim funds to cover payments
you forward.

[0] https://blockstream.com/eltoo.pdf
[1] 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-May/001996.html

I think if we drop the commitment to the input value from
ANYPREVOUTANYSCRIPT signatures, it's possible to come up with a scheme
that preserves the other benefits of eltoo while also having the same
benefits BOLT-3 currently achieves. I think for eltoo it needs to be a
channel-wide "shared_delay" rather than a "to_self" delay, so I'll use
that.

Here's the setup. We have four types of transaction:

 * funding transaction, posted on-chain as part of channel setup
 * update transaction, posted on-chain to close the channel at a
   given state
 * revocable claim transaction, posted on-chain to reveal a preimage
   or establish a timeout has passed
 * settlement transaction, to actually claim funds

As with eltoo, if a stale update transaction is posted, it can be spent
via any subsequent update transaction with no penalty. The revocable
claim transactions have roughly the same goal as the second layer BOLT-3
transactions, that is going from:

  1) spent by a later update transaction
  2) pay-to-me due to timeout (or: preimage known)
  3) pay-to-them due to preimage known (or: timeout)

to

  1) spent by a later update transaction
  2) pay-to-me due to timeout (or: preimage known)

In detail:

 * Get a pubkey from each peer (A, B), and calculate P=MuSig(A,B).

 * Each state update involves constructing and calculating signatures
   for new update transactions, revocable claim transactions and
   settlement transactions.

 * The update transaction has k+2 outputs, where k is the number of open
   PTLCs. Each PTLC output pays to P as the internal key, and:

 IF CODESEP [i] ELSE [500e6+n+1] CLTV ENDIF DROP OP_1 CHECKSIG

   as the script. i varies from 1..k; n is the state counter, starting
   at 1 and counting up.

   Each balance output pays to P as the internal key and:

 IF CODESEP IF [balance_pubkey_n] [shared_delay] CSV ELSE OP_1 OP_0 ENDIF
 ELSE OP_1 [500e6+n+1] CLTV ENDIF 
 DROP CHECKSIG

   as the script.

   The signature that allows an update tx to spend a previous update tx
   is calculated using ALL|ANYPREVOUTANYSCRIPT, a locktime of 500e6+n,
   with the key P, and codesep_pos=0x_.

 * For each output of the update tx and each party that can spend it,
   we also construct a revocable claim transaction. These are designed
   to update a single output of each PTLC, and their output pays to P
   as the internal key, and the script:

 IF [i*P+p] CODESEP ELSE [500e6+n+1] CLTV ENDIF DROP OP_1 CHECKSIG

   (swapping the position of the CODESEP opcode, and encoding both i and
   p in the script -- P is the number of peers in the channel, so 2
   here, and p is an identifier for each peer so either 0 or 1; i=1..k
   for HTLCs, i=0 for the balances)

   The signature that allows this tx to be applied to the update tx
   is calculated as SINGLE|ANYPREVOUT, with the script committed and
   codesep_pos=1. This signature should be made conditional for each
   PTLC, either by being an adaptor signature requiring the point preimage
   to be added, or by having a locktime given.

 * For each revocable claim transaction, we also construct a settlement
   transaction. The outputs of the settlement transactions are just
   an address for whichever peer receives the funds.

   These are also done by SINGLE|ANYPREVOUT signatures, with nSequence
   set to the shared_delay. There's no locktime or adaptor signatures
   needed here, since they were taken care of for the revocable claim
   transaction.  The signatures commit to the respective scripts, and
   set codesep_pos to either 1 or 2 depending on whether a revocable
   claim is being spent or not.

 * The funding transaction pays to internal key P, with tapscript:

 "OP_1 CHECKSIGVERIFY 500e6 CLTV"

Then: to spend from the funding transaction cooperatively, you make a
new SIGHASH_ALL signature based the output key Q for the funding tx.

If you can't do that, you post two transactions: the latest update tx,
and another tx that includes any revocable claim tx's you can already
claim and an input to cover fees, and any change from th

Re: [Lightning-dev] Lightning in a Taproot future

2019-12-17 Thread Anthony Towns
On Sun, Dec 15, 2019 at 03:43:07PM +, ZmnSCPxj via Lightning-dev wrote:
> For now, I am assuming the continued use of the existing Poon-Dryja update 
> mechanism.
> Decker-Russell-Osuntokun requires `SIGHASH_NOINPUT`/`SIGHASH_ANYPREVOUT`, and 
> its details seem less settled for now than taproot details.

Supporting PTLCs instead of HTLCs is a global upgrade in that you need
all nodes along your payment path to support it; moving from Poon-Dryja
to Decker-Russell-Osuntokun is only relevant to individual peers. So I
think it makes sense to do PTLCs first if the required features aren't
both enabled at the same time.

> Poon-Dryja with Schnorr
> ---

I think MuSig between the two pairs is always superior to a NUMS point
for the taproot internal key; you definitely want to calculate a point
rather than use a constant, or you're giving away that it's lightning,
and if you're calculating you might as well calculate something that can
be used for a cooperative key path spend if you ever want to.

> A potential issue with MuSig is the increased number of communication rounds 
> needed to generate signatures.

I think you can reduce this via an alternative script path. In
particular, if you want a script that the other guy can spend if they
reveal the discrete log of point X, with musig you do:

   P = H(H(A,B),1)*A + H(H(A,B),2)*B
   [exchange H(RA),H(RB),RA,RB]

   [send X]

   sb = rb + H(RA+RB+X,P,m)*H(H(A,B),2)*b

   [wait for sb]

   sa = ra + H(RA+RB+X,P,m)*H(H(A,B),1)*a

   [store RA+RB+X, sa+sb, supply sa, watch for sig]

   sig = (RA+RB+X, sa+sb+x)

So the 1.5 round trips are "I want to do a PTLC for X", "okay here's
sb", "great, here's sa".

But with taproot you can have a script path as well, so you could have a
script:

   A CHECKSIGVERIFY B CHECKSIG

and supply a partial signature:

   R+X,s,X where s = r + H(R+X,A,m)*a

to allow them to satisfy "A CHECKSIGVERIFY" if they know the discrete
log of X, and of course they can sign with B at any time. This is only
half a round trip, and can be done at the same time as sending the "I
want to do a PTLC for X" message to setup the (ultimately cheaper) MuSig
spend. It's an extra signature on the sender's side and an extra verification 
on the receiver's side, but I think it works out fine.

> Pointlocked Timelocked Contracts
> 
> First, I will discuss how to create a certain kind of PTLCs, which I call 
> "purely scriptless" PTLCs.
> In particular, I would like to point out that we *actually* use in current 
> Poon-Dryja Lightning Network channels is *revocable* HTLCs, thus we need to 
> have *revocable* PTLCs to replace them.
> * First, we must have a sender A, who is buying a secret scalar, and knows 
> the point equivalent to that scalar.
> * Second, we have a receiver B, who knows this secret scalar (or can somehow 
> learn this secret scalar).
> * A and B agree on the specifications of the PTLC: the point, the future 
> absolute timelock, the value.
> * A creates (but *does not* sign or broadcast) a transaction that pays to a 
> MuSig of A and B and shares the txid and output number with the relevant 
> MuSig output.
> * A and B create a backout transaction.
>   * This backout has an `nLockTime` equal to the agreed absolute timelock.
>   * It spends the above MuSig output (this input must enable `nLockTime`, 
> e.g. by setting `nSequence` to `0xFFFE`).
>   * It creates an output that is solely controlled by A.
> * A and B perform a MuSig ritual to sign the backout transaction.
> * A now signs and broadcast the first transaction, the one that has an output 
> that represents the PTLC.
> * A and B wait for the above transaction to confirm deeply.
>   This completes the setup phase for the PTLC.
> * After this point, if the agreed-upon locktime is reached, A broadcasts the 
> backout transaction and aborts the ritual.
> * A and B create a claim transaction.
>   * This has an `nLockTime` of 0, or a present or past blockheight, or 
> disabled `nLockTime`.
>   * This spends the above MuSig output.
>   * This creates an output that is solely controlled by B.
> * A and B generate an adaptor signature for the claim transaction, which 
> reveals the agreed scalar.
>   * This is almost entirely a MuSig ritual, except at `s` exchange, B 
> provides `t + r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, B) * b` first, 
> then demands `r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, A) * a` from A, 
> then reveals `r + h(R | MuSig(A,B) | m) * MuSigTweak(A, B, B) * b` (or the 
> completed signature, by publishing onchain), revealing the secret scalar `t` 
> to A.
> * A is able to learn the secret scalar from the above adaptor signature 
> followed by the full signature, completing the ritual.

(I think it makes more sense to provide "r + H(R+T, P, m)*b" instead of
"r+t + H(R,P,m)*b" -- you might not know "t" at the point you need to
start the signature exchange)

I think the setup can be similar to BOLT-3:

  Funding 

Re: [Lightning-dev] eltoo towers and implications for settlement key derivation

2019-12-02 Thread Anthony Towns
On Tue, Nov 26, 2019 at 03:41:14PM -0800, Conner Fromknecht wrote:
> I recently revisited the eltoo paper and noticed some things related
> watchtowers that might affect channel construction.
> In order to spend, however, the tower must also produce a witness
> script which when hashed matches the witness program of the input. To
> ensure settlement txns can only spend from exactly one update txn,
> each update txn uses unique keys for the settlement clause, meaning
> that each state has a _unique_ witness program.

I don't believe that's necessary with the ANYPREVOUT design, see

https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-May/001996.html

The design I'm thinking of use a common taproot internal key
P=muSig(A,B) for update transactions. The tapscript paths are
(with the chaperone sigs dropped):

  Update n: [nLockTime = 500e6+n]
script: OP_1 CHECKSIGVERIFY [500e6+n+1] CLTV
witness: [ANYPREVOUTANYSCRIPT sig]

  Settlement n: [nSequence = delay; nLockTime=500e6+n+1]
witness: [ANYPREVOUT sig]  

(This relies on having the two variants of ANYPREVOUT, one of which
commits to the state number via commiting to the [500e6+n+1] value in
the update tx's script, so that you don't need unique keys to ensure
settlement tx n can't spend settlement tx n+k)

With this you can tell which update was posted by subtracting 500e6 from
the nLocktime, and use that to calculate the tapscript the update tx used,
and the internal key is constant.

The watchtower only needs to post the update tx -- as long as the latest
update is posted, the only tx that can spend it is the correct settlement,
so you can post that whenever you're back online, even if that's weeks
or months later, and likewise for actually claiming your funds from the
settlement tx's outputs.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-08 Thread Anthony Towns
On Fri, Nov 08, 2019 at 01:08:04PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> [ Snip summary, which is correct ]

Huzzah!

This correlates all the hops in a payment when the route reaches its end
(due to the final preimage getting propogated back for everyone to justify
the funds they claim). Maybe solvable by converting from hashes to ECC
as the trapdoor function?

The refund amount propogating back also reveals the path, probably.
Could that be obfusticated by somehow paying each intermediate node
both as the funds go out and come back, so the refund decreases on the
way back?

Oh, can we make the amounts work like the onion, where it stays constant?
So:

  Alice wants to pay Dave via Bob, Carol. Bob gets 700 msat, Carol gets
  400 msat, Dave gets 300 msat, and Alice gets 100 msat refunded.

  Success:
Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
Carol forwards 1500 msat to Dave  (-1500, 0, 0, +1500)
Dave refunds 1200 msat to Carol   (-1500, 0, +1200, +300)
Carol refunds 800 msat to Bob (-1500, +800, +400, +300)
Bob refunds 100 msat to Alice (-1400, +700, +400, +300)

  Clean routing failure at Carol/Dave:
Alice forwards 1500 msat to Bob   (-1500, +1500, 0, 0)
Bob forwards 1500 msat to Carol   (-1500, 0, +1500, 0)
Carol says Dave's not talking
Carol refunds 1100 msat to Bob(-1500, +1100, +400, 0)
Bob refunds 400 msat to Alice (-1100, +700, +400, 0)

I think that breaks the correlation pretty well, so you just need a
decent way of obscuring path length?

In the uncooperative routing failure case, I wonder if using an ECC
trapdoor and perhaps scriptless scripts, you could make it so Carol
doesn't even get an updated state without revealing the preimage...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-07 Thread Anthony Towns
On Thu, Nov 07, 2019 at 02:56:51PM +1030, Rusty Russell wrote:
> > What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
> > field: it says 14, ." but Alice doesn't know anything other than
> >  so can't put  in the onion?
> Alice created the onion.  Alice knows all the preimages, since she
> created the chain AZ.

In your reply to Zmn, it was Rusty (Bob) preparing the nonce and creating
the chain ... -- so I was lost as to what you were proposing...

Here's what I now think you're saying, which I think mostly hangs
together:

Alice sends a HTLC with hash X and value V to Dave via Bob then Carol

Alice generates a nonce , and calculates H^25() = .

Alice creates an onion, and sends the HTLC to Bob, revealing  and
6, to Bob, along with 2500 msat (25 for the hashing ops between
 and , and *100 for round numbers). Bob calculates "6" is a
fair price.

Bob checks H^6()=. If not, Bob refunds the 2500 msat, and fails
the HTLC immediately. Otherwise, Bob passes the onion on to Carol, with
1900 msat and ; Carol unwraps the onion revealing 15,. Carol
calcualtes "15" is a fair price.

Carol checks H^15()=, and fails the route if not, refunding
1900msat to Bob. Otherwise, Carol passes the onion on to Dave, with 400
msat and .  Dave unwraps the onion, revealing 2,, so can claim
200 msat as well as the HTLC amount, etc.

After the successful route, Dave passes 2, and 200msat back to Carol,
who validates and continues passing things back.

If Carol instead passes, say, 3, back, then she also has to refund
300msat to avoid Bob closing the channel, which would be fine, because
Bob can just pass that back too -- Carol's the only one losing money in
that case.

If Carol wants to close the channel anyway and collect the HTLC on
chain, then Bob's situation is:

   channel with Alice: +2500 msat
   channel with Carol: -1900 msat , -fees , -HTLC funds

If Carol isn't cooperative, Bob only definitely knows , so to keep
the channel open with Alice, has to refund 1900msat, so:

   channel with Alice:  +600 msat , +HTLC funds
   channel with Carol: -1900 msat , -fees , -HTLC funds

(or Bob could keep the 2500 msat at the cost of Alice closing the channel
too:

   channel with Alice: +2500 msat , -fees , +HTLC funds
   channel with Carol: -1900 msat , -fees , -HTLC funds
)

So Bob and either keep the channel open but is out 1300 msat because of
Carol, or can gain 600 msat at the cost of closing the channel with
Alice?

As far as the "fair price" goes, the spitballed formula is "16 - X/4"
where X is number of zero bits in some PoW-y thing. The proposal is
the thing is SHA256(blockhash|revealedonion) which works, and (I think)
means each step is individually grindable.

I think an alternative would be to use the prepayment hashes themselves,
so you generate the nonce  as the value you'll send to Dave then
hash it repeatedly to get .., then check if pow(,) has
60 leading zero bits or pow(,) has 56 leading zero bits etc.
If you made pow(a,b) be SHA256(a,b,shared-onion-key) I think it'd
preserve privacy, but also mean you can't meaningfully grind unfairly
cheap routing except for very short paths?

If you don't grind and just go by luck, the average number of hashes
per hop is ~15.93 (if I got my maths right), so you should be able to
estimate path length pretty accurate by dividing claimed prepaid funds by
15.93*25msat or whatever. If everyone grinds at each level independently,
I think you'd just subtract maybe 6 hops from that, but the maths would
mostly stay the same?

Though I think you could obfusticate that pretty well by moving
some of the value from the HTLC into the prepayment -- you'd risk losing
that extra value if the payment made it all the way to the recipient but
they declined the HTLC that way though.

> >> Does Alice lose everything on any routing failure?
> > That was my thought yeah; it seems weird to pay upfront but expect a
> > refund on failure -- the HTLC funds are already committed upfront and
> > refunded on failure.
> AFAICT you have to overpay, since anything else is very revealing of
> path length.  Which kind of implies a refund, I think.

I guess you'd want to pay for a path length of about 20 whether the
path is actually 17, 2, 10 or 5. But a path length of 20 is just paying
for bandwidth for maybe 200kB of total traffic which at $1/GB is 2% of
1 cent, which doesn't seem that worth refunding (except for really tiny
micropayments, where paying for payment bandwidth might not be feasible
at all).

If you're buying a $2 coffee and paying 500ppm in regular fees per hop
with 5 hops, then each routing attempt increases your fees by 4%, which
seems pretty easy to ignore to me.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] A proposal for up-front payments.

2019-11-06 Thread Anthony Towns
On Wed, Nov 06, 2019 at 10:43:23AM +1030, Rusty Russell wrote:
> >> Rusty prepares a nonce, A and hashes it 25 times = Z.
> >> ZmnSCPxj prepares the onion, but adds extra fields (see below).  
> > It would have made more sense to me for Alice (Zmn) to generate
> > the nonce, hash it, and prepare the onion, so that the nonce is
> > revealed to Dave (Rusty) if/when the message ever actually reaches its
> > destination. Otherwise Rusty has to send A to Zmn already so that
> > Zmn can prepare the onion?
> The entire point is to pay *up-front*, though, to prevent spam.

Hmm, I'm not sure I see the point of paying upfront but not
unconditionally -- you already commit the funds as part of the HTLC,
and if you're refunding some of them, you kind-of have to keep them
reserved or you risk finalising the HTLC causing a failure because you
don't have enough msats spare to do the refund?

If you refund on routing failure, why wouldn't a spammer just add a fake
"Ezekiel" at the end of the route after Dave, so that the HTLCs always
fail and all the fees are returned?

> Bob/ZmnSCPxj doesn't prepare anything in the onion.  They get handed the
> last hash directly: Alice is saying "I'll pay you 50msat for each
> preimage you can give me leading to this hash".

So my example was Alice paying Dave via Bob and Carol (so Alice/Bob,
Bob/Carol, Carol/Dave being the individual channels).

What you wrote to Zmn says "Rusty decrypts the onion, reads the prepay
field: it says 14, ." but Alice doesn't know anything other than
 so can't put  in the onion?

Are you using Hornet so that every intermediary can communicate a nonce
back to the source of the route? If not, "Rusty" generating the nonce
seems like you're informing Rusty that you're actually the origin of the
HTLC, and not just innocently forwarding it along; if so, it seems like
you have independent nonces at each step, rather than
/// in a direct chain.

> > I'm not sure why lucky hashing should result in a discount?
> Because the PoW adds noise to the amounts, otherwise the path length is
> trivially exposed, esp in the failure case.  It's weak protection
> though.

With a linear/exponential relationship you just get "half the time it's
1 unit, 25% of the time it's 2 units, 12% of the time it's 3 units", so
I don't think that's adding much noise?

> > You've only got two nonce choices -- the initial  and the depth
> > that you tell Bob and Carol to hash to as steps in the route;
> No, the sphinx construction allows for grinding, that was my intent
> here.  The prepay hashes are independent.

Oh, because you're also xoring with the onion packet, right, I see.

> > I think you could just make the scheme be:
> >   Alice sends HTLC(k,v) + 1250 msat to Bob
> >   Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
> >   Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
> >   Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to 
> > Carol

The math here doesn't add up. Let's assume I meant:

  Bob keeps 500 sat, forwards 750 sat
  Carol keeps 250 sat, forwards 500 sat
  Dave keeps 300 sat, refunds 200 sat

> >   Carol redeems the HTLC and refunds 200 msat to Bob
> >   Bob redeems the HTLC and refunds 200 msat to Alice
> >
> > If there's a failure, Alice loses the 1250 msat, and someone in the
> > path steals the funds.
> This example confuses me.

Well, that makes us even at least? :)

> So, you're charging 250msat per hop?  Why is Bob taking 750?  Does Carol
> now know Dave is the last hop?

No, Alice is choosing to pay 500, 250 and 300 msat to Bob, Carol and
Dave respectively, as part of setting up the onion, and picks those
numbers via some magic algo trading off privacy and cost.

> Does Alice lose everything on any routing failure?

That was my thought yeah; it seems weird to pay upfront but expect a
refund on failure -- the HTLC funds are already committed upfront and
refunded on failure.

> If so, that is strong incentive for Alice to reduce path-length privacy
> by keeping payments minimal, which I was really trying to avoid.

Assuming v is much larger than 1250msat, and 1250 msat is much lower than
the cost to Bob of losing the channel with Alice, I don't think that's
a problem. 1250msat pays for 125kB of bandwdith under your assumptions
I think?

> > Does that miss anything that all the hashing achieves?
> It does nothing if Carol is the one who can't route.

If Carol can't route, then ideally she just refunds all the money and
everyone's happy.

If Carol tries to steal, then she can keep 750 msat instead of 250 msat.
This doesn't give any way for Bob to prove Carol cheated on him though;
but Bob could just refund the 1250 msat and write the 750 msat off as a
loss of dealing with cheaters like Carol.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/ligh

Re: [Lightning-dev] A proposal for up-front payments.

2019-11-05 Thread Anthony Towns
On Tue, Nov 05, 2019 at 07:56:45PM +1030, Rusty Russell wrote:
> Sure: for simplicity I'm sending a 0-value HTLC.
> ZmnSCPxj has balance 1msat in channel with Rusty, who has 1000msat
> in the channel with YAIjbOJa.

Alice, Bob and Carol sure seem simpler than Zmn YAI and Rusty...

> Rusty prepares a nonce, A and hashes it 25 times = Z.
> ZmnSCPxj prepares the onion, but adds extra fields (see below).  

It would have made more sense to me for Alice (Zmn) to generate
the nonce, hash it, and prepare the onion, so that the nonce is
revealed to Dave (Rusty) if/when the message ever actually reaches its
destination. Otherwise Rusty has to send A to Zmn already so that
Zmn can prepare the onion?

> He then
> sends the HTLC to Rusty, but also sends Z, and 25x50 msat (ie. those
> fields are in the update_add_htlc msg).  His balance with Rusty is now
> 8750msat (ie. 25x50 to Rusty).
> 
> Rusty decrypts the onion, reads the prepay field: it says 14, L.
> Rusty checks: the hash of the onion & block (or something) does indeed
> have the top 8 bits clear, so the cost is in fact 16 - 8/2 == 14.  He
> then hashes L 14 times, and yes, it's Z as ZmnSCPxj said it
> should be.

I'm not sure why lucky hashing should result in a discount? You're
giving a linear discount for exponentially more luck in hashing which
also seems odd.

You've only got two nonce choices -- the initial  and the depth
that you tell Bob and Carol to hash to as steps in the route; so the
incentive there seems to be to do a large depth, so you might hash
 1000 times, and figure that you'll find a leading eight 0's once
in the first 256 entries, then another by the time you get up to 512,
and another by the time you get to 768, which gets you discounts on
three intermediaries. But the cost there is that your intermediaries
collectively have to do the same amount of hashing you did, so it's not
proof-of-work, because it's as hard to verify as it is to generate.

I think you could just make the scheme be:

  Alice sends HTLC(k,v) + 1250 msat to Bob
  Bob unwraps the onion and forwards HTLC(k,v) + 500 msat to Carol
  Carol unwraps the onion and forwards HTLC(k,v) + 250 msat to Dave
  Dave redeems the HTLC, claims an extra 300 msat and refunds 200 msat to Carol
  Carol redeems the HTLC and refunds 200 msat to Bob
  Bob redeems the HTLC and refunds 200 msat to Alice

If there's a failure, Alice loses the 1250 msat, and someone in the
path steals the funds. You could make the accountable by having Alice
also provide "Hash(, refund=200)" to everyone, encoding  in the
onion to Dave, and then each hop reveals  and refunds 200msat to
demonstrate their honesty.

Does that miss anything that all the hashing achieves?

I think the idea here is that you're paying tiny amounts for the
bandwidth, which when it's successful does in fact pay for the bandwidth;
and when it's unsuccessful results in a channel closure, which makes it
unprofitable to cheat the system, but doesn't change the economics of
lightning much overall because channel closures can happen anytime anyway.
I think that approach makes sense.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-05 Thread Anthony Towns
On Thu, Oct 03, 2019 at 01:08:29PM +0200, Christian Decker wrote:
> >  * anyprevout signatures make the address you're signing for less safe,
> >which may cause you to lose funds when additional coins are sent to
> >the same address; this can be avoided if handled with care (or if you
> >don't care about losing funds in the event of address reuse)
> Excellent points, I had missed the hidden nature of the opt-in via
> pubkey prefix while reading your proposal. I'm starting to like that
> option more and more. In that case we'd only ever be revealing that we
> opted into anyprevout when we're revealing the entire script anyway, at
> which point all fungibility concerns go out the window anyway.
>
> Would this scheme be extendable to opt into all sighash flags the
> outpoint would like to allow (e.g., adding opt-in for sighash_none and
> sighash_anyonecanpay as well)? That way the pubkey prefix could act as a
> mask for the sighash flags and fail verification if they don't match.

For me, the thing that distinguishes ANYPREVOUT/NOINPUT as warranting
an opt-in step is that it affects the security of potentially many
UTXOs at once; whereas all the other combinations (ALL,SINGLE,NONE
cross ALL,ANYONECANPAY) still commit to the specific UTXO being spent,
so at most you only risk somehow losing the funds from the specific UTXO
you're working with (apart from the SINGLE bug, which taproot doesn't
support anyway).

Having a meaningful prefix on the taproot scriptpubkey (ie paying to
"[SIGHASH_SINGLE][32B pubkey]") seems like it would make it a bit easier
to distinguish wallets, which taproot otherwise avoids -- "oh this address
is going to be a SIGHASH_SINGLE? probably some hacker, let's ban it".

> > I think it might be good to have a public testnet (based on Richard Myers
> > et al's signet2 work?) where we have some fake exchanges/merchants/etc
> > and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
> > can think of, and just work out if we need any extra code/tagging/whatever
> > to keep those fake exchanges/merchants from losing money (and write up
> > the weird cases we've found in a wiki or a paper so people can easily
> > tell if we missed something obvious).
> That'd be great, however even that will not ensure that every possible
> corner case is handled [...]

Well, sure. I'm thinking of it more as a *necessary* step than a
*sufficient* one, though. If we can't demonstrate that we can deal with
the theoretical attacks people have dreamt up in a "laboratory" setting,
then it doesn't make much sense to deploy things in a real world setting,
does it?

I think if it turns out that we can handle every case we can think of
easily, that will be good evidence that output tagging and the like isn't
necessary; and conversely if it turns out we can't handle them easily,
it at least gives us a chance to see how output tagging (or chaperone
sigs, or whatever else) would actually work, and if they'd provide any
meaningful protection at all. At the moment the best we've got is ideas
and handwaving...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-02 Thread Anthony Towns
On Wed, Oct 02, 2019 at 02:03:43AM +, ZmnSCPxj via Lightning-dev wrote:
> So let me propose the more radical excision, starting with SegWit v1:
> * Remove `SIGHASH` from signatures.
> * Put `SIGHASH` on public keys.
>   OP_SETPUBKEYSIGHASH

I don't think you could reasonably do this for key path spends -- if
you included the sighash as part of the scriptpubkey explicitly, that
would lose some of the indistinguishability of taproot addresses, and be
more expensive than having the sighash be in witness data. So I think
that means sighashes would still be included in key path signatures,
which would make the behaviour a little confusingly different between
signing for key path and script path spends.

> This removes the problems with `SIGHASH_NONE` `SIGHASH_SINGLE`, as they are 
> allowed only if the output specifically says they are allowed.

I don't think the problems with NONE and SINGLE are any worse than using
SIGHASH_ALL to pay to "1*G" -- someone may steal the money you send,
but that's as far as it goes. NOINPUT/ANYPREVOUT is worse in that if
you use it, someone may steal funds from other UTXOs too -- similar
to nonce-reuse. So I think having to commit to enabling NOINPUT for an
address may make sense; but I don't really see the need for doing the
same for other sighashes generally.

FWIW, one way of looking at a transaction spending UTXO "U" to address
"A" is something like:

 * "script" lets you enforce conditions on the transaction when you
   create "A" [0]
 * "sighash" lets you enforce conditions on the transaction when
   you sign the transaction
 * nlocktime, nsequence, taproot annex are ways you express conditions
   on the transaction

In that view, "sighash" is actually an *extremely* simple scripting
language itself (with a total of six possible scripts).

That doesn't seem like a bad design to me, fwiw.

Cheers,
aj

[0] "graftroot" lets you update those conditions for address "A" after
the fact
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Anthony Towns
On Mon, Sep 30, 2019 at 03:23:56PM +0200, Christian Decker via bitcoin-dev 
wrote:
> With the recently renewed interest in eltoo, a proof-of-concept implementation
> [1], and the discussions regarding clean abstractions for off-chain protocols
> [2,3], I thought it might be time to revisit the `sighash_noinput` proposal
> (BIP-118 [4]), and AJ's `bip-anyprevout` proposal [5].

Hey Christian, thanks for the write up!

> ## Open questions
> The questions that remain to be addressed are the following:
> 1.  General agreement on the usefulness of noinput / anyprevoutanyscript /
> anyprevout[?]
> 2.  Is there strong support or opposition to the chaperone signatures[?]
> 3.  The same for output tagging / explicit opt-in[?]
> 4.  Shall we merge BIP-118 and bip-anyprevout. This would likely reduce the
> confusion and make for simpler discussions in the end.

I think there's an important open question you missed from this list:
(1.5) do we really understand what the dangers of noinput/anyprevout-style
constructions actually are?

My impression on the first 3.5 q's is: (1) yes, (1.5) not really,
(2) weak opposition for requiring chaperone sigs, (3) mixed (weak)
support/opposition for output tagging.

My thinking at the moment (subject to change!) is:

 * anyprevout signatures make the address you're signing for less safe,
   which may cause you to lose funds when additional coins are sent to
   the same address; this can be avoided if handled with care (or if you
   don't care about losing funds in the event of address reuse)

 * being able to guarantee that an address can never be signed for with
   an anyprevout signature is therefore valuable; so having it be opt-in
   at the tapscript level, rather than a sighash flag available for
   key-path spends is valuable (I call this "opt-in", but it's hidden
   until use via taproot rather than "explicit" as output tagging
   would be)

 * receiving funds spent via an anyprevout signature does not involve any
   qualitatively new double-spending/malleability risks.
   
   (eltoo is unavoidably malleable if there are multiple update
   transactions (and chaperone signatures aren't used or are used with
   well known keys), but while it is better to avoid this where possible,
   it's something that's already easily dealt with simply by waiting
   for confirmations, and whether a transaction is malleable is always
   under the control of the sender not the receiver)

 * as such, output tagging is also unnecessary, and there is also no
   need for users to mark anyprevout spends as "tainted" in order to
   wait for more confirmations than normal before considering those funds
   "safe"

I think it might be good to have a public testnet (based on Richard Myers
et al's signet2 work?) where we have some fake exchanges/merchants/etc
and scheduled reorgs, and demo every weird noinput/anyprevout case anyone
can think of, and just work out if we need any extra code/tagging/whatever
to keep those fake exchanges/merchants from losing money (and write up
the weird cases we've found in a wiki or a paper so people can easily
tell if we missed something obvious).

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] Continuing the discussion about noinput / anyprevout

2019-10-01 Thread Anthony Towns
On Mon, Sep 30, 2019 at 11:28:43PM +, ZmnSCPxj via bitcoin-dev wrote:
> Suppose rather than `SIGHASH_NOINPUT`, we created a new opcode, 
> `OP_CHECKSIG_WITHOUT_INPUT`.

I don't think there's any meaningful difference between making a new
opcode and making a new tapscript public key type; the difference is
just one of encoding:

   3301AC   [CHECKSIG of public key type 0x01]
   32B3 [CHECKSIG_WITHOUT_INPUT (replacing NOP4) of key]

> This new opcode ignores any `SIGHASH` flags, if present, on a signature,

(How sighash flags are treated can be redefined by new public key types;
if that's not obvious already)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Selling timestamps (via payment points and scalars + Pedersen commitments ) [try2]

2019-09-25 Thread Anthony Towns
On Wed, Sep 25, 2019 at 01:30:39PM +, ZmnSCPxj wrote:
> > Since it's off chain, you could also provide R and C and a zero knowledge
> > proof that you know an r such that:
> > R = SHA256( r )
> > C = SHA256( x || r )

> > in which case you could do it with lightning as it exists today.
> I can insist on paying only if the server reveals an `r` that matches some 
> known `R` such that `R = SHA256(r)`, as currently in Lightning network.
> However, how would I prove, knowing only `R` and `x`, and that there exists 
> some `r` such that `R = SHA256(r)`, that `C = SHA256(x || r)`?

If you know x and r, you can generate C and R and a zero knowledge proof
of the relationship between x,C,R that doesn't reveal r (eg, I think
you could do that with bulletproofs). Unfortunately that zkp already
proves that C was generated based on x, so you get your timestamp for
free. Ooops. :(

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Selling timestamps (via payment points and scalars + Pedersen commitments ) [try2]

2019-09-25 Thread Anthony Towns
On Wed, Sep 25, 2019 at 11:01:28AM +0200, Konstantin Ketterer wrote:
> Motivation: If I had to timestamp multiple messages I could simply aggregate
> them in a merkle tree and pay relatively low fees per message. However, if I
> only need to timestamp something once in a while I need to rely on free
> services or pay high fees.

Maybe model the timestamping service as having fixed and floating users,
in which case the fixed users pay a subscription fee that covers the costs
and get placed relatively high in the merkle tree, while the floating
users are placed low in the merkle tree and are basically free money?

Your merkle tree might then have 2**N-1 fixed slots, all at height N,
then 2**K floating slots, all at height N+K, but you don't need to charge
the floating slots anything up front, because your fixed costs are all
paid for by subscription income from the fixed slots.

You might still want to charge some up front fee to prevent people
spamming you with things to timestamp that they're never going to pay
for though.

> Solution: buy a place in a merkle tree "risk-free"
> 1. send hash x of my message (or the merkle root of another tree) to the
> timstamping server
> 2. server calculates Pedersen commit: C = x*H + r*G, hashes it, builds merkle
> tree with other commits in it and publishes a valid transaction containing the
> merkle root to the Bitcoin blockchain
> 3. after a certain number of block confirmations and with the given proof I 
> can
> confirm that the commitment C is indeed part of the Bitcoin blockchain
> 4. I now have to send a lightning payment with C - x*H = r*G as the payment
> point  to the timestamping server and as a proof of payment the server must
> reveal r to receive the money.

Nice.

Since it's off chain, you could also provide R and C and a zero knowledge
proof that you know an r such that:

   R = SHA256( r )
   C = SHA256( x || r )

in which case you could do it with lightning as it exists today.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Eltoo, anyprevout and chaperone signatures

2019-05-16 Thread Anthony Towns
On Thu, May 16, 2019 at 09:55:57AM +0200, Bastien TEINTURIER wrote:
> Thanks for your answers and links, the previous discussions probably happened
> before I joined this list so I'll go dig into the archive ;)

The discussion was on a different list anyway, I think, this might be
the middle of the thread:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2019-March/016777.html

> > Specifically we can't make make use of the collaborative path where
> > we override an `update_tx` with a newer one in taproot as far as I can
> > see, since the `update_tx` needs to be signed with noinput (for
> > rebindability) but there is no way for us to specify the chaperone key
> > since we're not revealing the committed script.
> Can you expand on that? Why do we need to "make use of the collaborative path"
> (maybe it's unclear to me what you mean by collaborative path here)?

I think Christian means the "key path" per the terminology in the
taproot bip. That's the path that just provides a signature, rather than
providing an internal key, a script, and signatures etc for the script.

> I feel like there will be a few other optimizations that are unlocked by
> taproot/tapscript, it will be interesting to dig into that.

I had a go at drafting up scripts, and passed them around privately to
some of the people on this list already. They're more "thought bubble"
than even "draft" yet, but for the sake of discussion:

---
FWIW, the eltoo scripts I'm imaginging with this spec are roughly:

UPDATE TX n:
  nlocktime: 500e6+n
  nsequence: 0
  output 0:
P = muSig(A,B)
scripts = [
  "OP_1 CHECKSIGVERIFY X CHECKSIGVERIFY 500e6+n+1 CLTV"
]
 witness:
sig(P,hash_type=SINGLE|ANYPREVOUTANYSCRIPT=0xc3)
sig(X,hash_type=0)

SETTLEMENT TX n:
  nlocktime: 500e6+n+1
  nsequence: [delay]
  output 0: A
  output 1: B
  output n: (HTLC)
P = muSig(A,B)
scripts = [
  "OP_1 CHECKSIGVERIFY X CHECKSIG"
  "A CHECKSIGVERIFY  CLTV"
]
  witness:
sig(P,hash_type=ALL|ANYPREVOUT=0x41)
sig(X,hash_type=0)

HTLC CLAIM (reveal secp256k1 preimage R):
  witness:
hash-of-alternative-script
sig(P,hash_type=SINGLE|ANYPREVOUT,reveal R)
sig(X,hash_type=0)

HTLC REFUND (timeout):
  witness:
hash-of-alternative-script
sig(A,hash_type=ALL)

Because "n" changes for each UPDATE tx, each ANYPREVOUT signature
(for the SETTLEMENT tx) commits to a specific UPDATE tx via both the
scriptPubKey commitment and the tapleaf_hash commitment.

So the witness data for both txs involve revealing:

33 byte control block
43 byte redeem script
65 byte anyprevout sig
64 byte sighash all sig

Compared to a 65 byte key path spend (if ANYPREVOUT worked for key paths),
that's an extra 143 WU or 35.75 vbytes, so about 217% more expensive. The
update tx script proposed in eltoo.pdf is (roughly):

"IF 2 Asi Bsi ELSE <500e6+n+1> CLTV DROP 2 Au Bu ENDIF 2 OP_CHECKMULTISIG"

148 byte redeem script
65 byte anyprevout sig by them
64 byte sighash all sig by us
"1" or "0" to control the IF

which I think would be about 282 WU total, or an extra 216 WU/54 vbytes
over a 65 byte key path spend, so about 327% more expensive. So at least
we're a lot better than where we were with BIP 118, ECDSA and p2wsh.

Depending on if you can afford generating a bunch more signatures you
could also have a SIGHASH_ALL key path spend for the common unilateral
case where only a single UPDATE TX is published.

UPDATE TX n (alt):
  input: FUNDING TX
  witness: sig(P,hash_type=0)
  output 0:
P = muSig(A,B)
scripts = [
  "OP_1 CHECKSIGVERIFY X CHECKSIGVERIFY 500e6+n+1 CLTV"
]

SETTLEMENT TX n (alt):
  nsequence: [delay]
  input: UPDATE TX n (alt)
  witness: sig(P+H(P||scripts)*G,hash_type=0)
  outputs: [as above]

(This approach can either use the same ANYPREVOUT sigs for the HTLC
claims, or could include an additional sig for each active HTLC for each
channel update to allow HTLC claims via SIGHASH_ALL scriptless scripts...)

Despite using SIGHASH_SINGLE, I don't think you can combine two UPDATE txs
generally, because the nlocktime won't match (this could possibly be fixed
in a future soft-fork by using the annex to support per-input absolute
locktimes). You can't combine SETTLEMENT tx, because the ANYPREVOUT
signature needs to commit to multiple outputs (one for my balance, one
for yours, one for each active HTLC). Combining HTLC refunds is kind-of
easy, but only possible in the first place if you've got a bunch expiring
at the same time, which might not be that likely. Combining HTLC claims
should be easy enough since they just need scriptless-script signatures.

For fees, because of ALL|ANYPREVOUT, you can add a new input and new
change output to bring-your-own-fees for the UPDATE tx; and while you
can't do that for the SETTLEMENT tx, you can immediately spend your
channel-balance output to add fees via CPFP.

As far as "X" goes, calculating the private key as a HD key using ECDH
between the peers t

Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-21 Thread Anthony Towns
On Fri, Mar 22, 2019 at 01:59:14AM +, ZmnSCPxj wrote:
> > If codeseparator is too scary, you could probably also just always
> > require the locktime (ie for settlmenet txs as well as update txs), ie:
> > OP_CHECKLOCKTIMEVERIFY OP_DROP
> >  OP_CHECKDLSVERIFY  OP_CHECKDLS
> > and have update txs set their timelock; and settlement txs set a absolute
> > timelock, relative timelock via sequence, and commit to the script code.
> 
> I think the issue I have here is the lack of `OP_CSV` in the settlement 
> branch.

You can enforce the relative timelock in the settlement branch simply
by refusing to sign a settlement tx that doesn't have the timelock set;
the OP_CSV is redundant.

> Consider a channel with offchain transactions update-1, settlement-1, 
> update-2, and settlement-2.
> If update-1 is placed onchain, update-1 is also immediately spendable by 
> settlement-1.

settlement-1 was signed by you, and when you signed it you ensured that
nsequence was set as per BIP-68, and NOINPUT sigs commit to nsequence,
so if anyone changed that after the fact the sig isn't valid. Because
BIP-68 is enforced by consensus, update-1 isn't immediately spendable
by settlement-1.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-21 Thread Anthony Towns
On Thu, Mar 21, 2019 at 10:05:09AM +, ZmnSCPxj wrote:
> > IF OP_CODESEPARATOR  OP_CHECKLOCKTIMEVERIFY OP_DROP ENDIF
> >  OP_CHECKDLSVERIFY  OP_CHECKDLS
> > Signing with NOINPUT,NOSCRIPT and codeseparatorpos=1 enforces CLTV
> > and allows binding to any prior update tx -- so works for an update tx
> > spending previous update txs; while signing with codeseparatorpos=-1
> > and NOINPUT but committing to the script code and nSequence (for the
> > CSV delay) allows binding to only that update tx -- so works for the
> > settlement tx. That's two pubkeys, two sigs, and the taproot point
> > reveal.
> 
> Actually, the shared keys are different in the two branches above.

Yes, if you're not committing to the script code you need the separate
keys as otherwise any settlement transaction could be used with any
update transaction. 

If you are committing to the script code, though, then each settlement
sig is already only usable with the corresponding update tx, so you
don't need to roll the keys. But you do need to make it so that the
update sig requires the CLTV; one way to do that is using codeseparator
to distinguish between the two cases.

> Also, I cannot understand `OP_CODESEPARATOR`, please no.

If codeseparator is too scary, you could probably also just always
require the locktime (ie for settlmenet txs as well as update txs), ie:

  OP_CHECKLOCKTIMEVERIFY OP_DROP
   OP_CHECKDLSVERIFY  OP_CHECKDLS

and have update txs set their timelock; and settlement txs set a absolute
timelock, relative timelock via sequence, and commit to the script code.

(Note that both those approaches (with and without codesep) assume there's
some flag that allows you to commit to the scriptcode even though you're
not committing to your input tx (and possibly not committing to the
scriptpubkey). BIP118 doesn't have that flexibility, so the A_s_i and
B_s_i key rolling is necessary)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-21 Thread Anthony Towns
On Wed, Mar 20, 2019 at 08:07:00AM +, ZmnSCPxj via Lightning-dev wrote:
> Re-reading again, I think perhaps I was massively confused by this:
> > that commits to the input. In that case, you could do eltoo with a
> > script like either:
> >  CHECKSIGVERIFY  CHECKSIG
> > or  CHECKSIGVERIFY  CHECKSIG
> Do you mean that *either* of the above two scripts is OK, *or* do you mean 
> they are alternatives within a single MAST or `OP_IF`?

I meant "either of the two scripts is okay".

> In the blob sent to Watchtower, A (or B) includes the `SIGHASH_NOINPUT` as 
> well as the `q` private key.
> Would it be safe for Watchtower to know that?

I think so. From Alice/Bob's point-of-view, the NOINPUT sig ensures they
control their money; and from the network's point-of-view (or at least
that part of the network that thinks NOINPUT is unsafe) the Q private
key being shared makes the tx no worse than a 1-of-n multisig setup,
which has to be dealt with anyway.

> Then each update transaction pays out to:
> OP_IF
>  OP_CSV OP_DROP
>  OP_CHECKSIGVERIFY  OP_CHECKSIG
> OP_ELSE
>  OP_CHECKLOCKTIMEVERIFY OP_DROP
>  OP_CHECKSIGVERIFY  OP_CHECKSIG
> OP_ENDIF

Yeah.

I think we could potentially make that shorter still:

   IF OP_CODESEPARATOR  OP_CHECKLOCKTIMEVERIFY OP_DROP ENDIF
OP_CHECKDLSVERIFY  OP_CHECKDLS

Signing with NOINPUT,NOSCRIPT and codeseparatorpos=1 enforces CLTV
and allows binding to any prior update tx -- so works for an update tx
spending previous update txs; while signing with codeseparatorpos=-1
and NOINPUT but committing to the script code and nSequence (for the
CSV delay) allows binding to only that update tx -- so works for the
settlement tx. That's two pubkeys, two sigs, and the taproot point
reveal.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-17 Thread Anthony Towns
On Thu, Mar 14, 2019 at 01:00:56PM +0100, Christian Decker wrote:
> Anthony Towns  writes:
> > I'm thinking of tagged outputs as "taproot plus" (ie, plus noinput),
> > so if you used a tagged output, you could do everything normal taproot
> > address could, but also do noinput sigs for them.
> > So you might have:
> >funding tx -> cooperative claim
> >funding tx -> update 3 [TAGGED] -> settlement 3 -> claim
> >funding tx -> update 3 [TAGGED] -> 
> >  update 4 [TAGGED,NOINPUT] -> 
> >  settlement 4 [TAGGED,NOINPUT] -> 
> >  claim [NOINPUT]
> > In the cooperative case, no output tagging needed.
> I might be missing something here, but how do you bind update 3 to the
> funding tx output, when that output is not tagged? Do we keep each
> update in multiple separate states, one bound to the funding tx output
> and another signed with noinput?

I don't know that "separate states" is a great description -- until it
hits the blockchain "update N" is a template that can be filled out in a
variety of ways -- in the above the ways are:
 - with a NOINPUT sig and a previous "update" tx as its input
 - or with a SINGLE|ANYONECANPAY sig and the funding tx as input

The important thing is that approach means two sigs for each update tx.
The above also has two sigs for each settlement tx (and likewise two sigs
for each HTLC claim if using scriptless scripts) -- one using NOINPUT
in case multiple update tx's make it to the blockchain, and one assuming
everything works as expected that can just use direct key path spending.

I think you can do SINGLE,ANYCANPAY and combine multiple channel closures
if you're directly spending the funding tx, but can't do that if you're
using a NOINPUT sig, because the NOINPUT sig would commit to the tx's
locktime and different channel's states will generally have different
locktimes. You still probably want SINGLE,ANYCANPAY in that case so you
can bump fees though.

> If that's the case we just doubled our
> storage and communication requirements for very little gain.

There's three potential gains:
 * it lets us make a safer version of NOINPUT
 * it makes the common paths give fewer hints that you're using eltoo
 * it puts less data/computation load on the blockchain

With tagged outputs your update tx already indicates you're maybe going
to use NOINPUT, so that probably already gives away that you're using
eltoo, so, at least with output tagging, the second benefit probably
doesn't exist. Using a key path spend (without a script) is probably
going to be cheaper on the blockchain though.

But while I think output tagging is probably better than nothing,
requiring a non-NOINPUT signature seems a better approach to me. With
that one, having a dedicated sig for the normal "publish the latest
state spending the funding tx" case, reduces a unilateral close to only
being special due to the settlement tx having a relative timelock, and
the various tx's using SINGLE|ANYCANPAY, which seems like a win. In that
scenario, just using a single sig is much cheaper than revealing a taproot
point, a pubkey or two, and using two sigs and a CLTV check of course.

It does goes from 1+n signatures per update today to 4+n signatures,
if you're using scriptless scripts. If you don't mind revealing the
HTLCs are HTLCs, and could do them with actual scripts, that reduces to
4 signatures. You could reduce it to 2 signatures by also always posting
"funding tx -> update 0 -> update N -> settlement N", or you could reduce
it to 2+2/k signatures by only doing the non-NOINPUT sigs for every k'th
state (or no more often than every t seconds or similar).

> An
> alternative is to add a trigger transaction that needs to be published
> in a unilateral case, but that'd increase our on-chain footprint.

(The above essentially uses update tx's as optional trigger tx's)

Also, I'd expect the extra latency introduced by the interactive signing
protocol for muSig would be more of a hit (share the nonce hash, share
the nonce, calculate the sig). Particularly if you're doing multiparty
channels with many participants, rather than just two.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-14 Thread Anthony Towns
On Thu, Mar 14, 2019 at 05:22:59AM +, ZmnSCPxj via Lightning-dev wrote:
> When reading through your original post I saw you mentioned something about 
> output tagging somehow conflicting with Taproot, so I assumed Taproot is not 
> useable in this case.

I'm thinking of tagged outputs as "taproot plus" (ie, plus noinput),
so if you used a tagged output, you could do everything normal taproot
address could, but also do noinput sigs for them.

So you might have:

   funding tx -> cooperative claim

   funding tx -> update 3 [TAGGED] -> settlement 3 -> claim

   funding tx -> update 3 [TAGGED] -> 
 update 4 [TAGGED,NOINPUT] -> 
 settlement 4 [TAGGED,NOINPUT] -> 
 claim [NOINPUT]

In the cooperative case, no output tagging needed.

For the unilateral case, you need to tag all the update tx's, because
they *could* be spend by a later update with a NOINPUT sig, and if
that actually happens, then the settlement tx also needs to use a
NOINPUT sig, and if you're using scriptless scripts to resolve HTLCs,
claiming/refunding the HTLCs needs a partially-pre-signed tx which also
needs to be a NOINPUT sig, meaning the settlement tx also needs to be
tagged in that case.

You'd only need the script path for the last case where there actually
are multiple updates, but because you have to have a tagged output in the
second case anyway, maybe you've already lost privacy and always using
NOINPUT and the script path for update and settlement tx's would be fine.

> However, it is probably more likely that I simply misunderstood what you 
> said, so if you can definitively say that it would be possible to hide the 
> clause "or a NOINPUT sig from A with a non-NOINPUT sig from B" behind a 
> Taproot then I am fine.

Yeah, that's my thinking.

> Minor pointless reactions:
> > 5.  if you're using scriptless scripts to do HTLCs, you'll need to
> > allow for NOINPUT sigs when claiming funds as well (and update
> > the partial signatures for the non-NOINPUT cases if you want to
> > maximise privacy), which is a bit fiddly
> If I remember accurately, we do not allow bilateral/cooperative close when 
> HTLC is in-flight.
> However, I notice that later you point out that a non-cheating unilateral 
> close does not need NOINPUT, so I suppose. the above thought applies to that 
> case.

Yeah, exactly.

Trying to maximise privacy there has the disadvantage that you have to
do a new signature for every in-flight HTLC every time you update the
state, which could be a lot of signatures for very active channels.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] More thoughts on NOINPUT safety

2019-03-13 Thread Anthony Towns
On Wed, Mar 13, 2019 at 06:41:47AM +, ZmnSCPxj via Lightning-dev wrote:
> > -   alternatively, we could require every script to have a valid signature
> > that commits to the input. In that case, you could do eltoo with a
> > script like either:
> >  CHECKSIGVERIFY  CHECKSIG
> > or  CHECKSIGVERIFY  CHECKSIG
> > where A is Alice's key and B is Bob's key, P is muSig(A,B) and Q is
> > a key they both know the private key for. In the first case, Alice
> > would give Bob a NOINPUT sig for the tx, and when Bob wanted to publish
> > Bob would just do a SIGHASH_ALL sig with his own key. In the second,
> > Alice and Bob would share partial NOINPUT sigs of the tx with P, and
> > finish that when they wanted to publish.
> At my point of view, if a NONINPUT sig is restricted and cannot be
> used to spend an "ordinary" 2-of-2, this is output tagging regardless
> of exact mechanism.

With taproot, you could always do the 2-of-2 spend without revealing a
script at all, let alone that it was meant to be NOINPUT capable. The
setup I'm thinking of in this scenario is something like:

  0) my key is A, your key is B, we want to setup an eltoo channel

  1) post a funding tx to the blockchain, spending money to an address
 P = muSig(A,B)

  2) we cycle through a bunch of states from 0..N, with "0" being the
 refund state we establish before publishing the funding tx to
 the blockchain. each state essentially has two corresponding tx's,
 and update tx and a settlement tx.

  3) the update tx for state k spends to an output Qk which is a
 taproot address Qk = P + H(P,Sk)*G where Sk is the eltoo ratchet
 condition:
Sk = (5e8+k+1) CLTV A CHECKDLS_NOINPUT B CHECKDLS_NOINPUT_VERIFY

 we establish two partial signatures for update state k, one which
 is a partial signature spending the funding tx with key P and
 SIGHASH_ALL, the other is a NOINPUT signature via A (for you) and
 via B (for me) with locktime set to (k+5e8), so that we can spend
 any earlier state's update tx's, but not itself or any later
 state's update tx's.

  4) for each state we have also have a settlement transaction,
 Sk, which spends update tx k, to outputs corresponding to the state
 of the channel, after a relative timelock delay.

 we have two partial signatures for this transaction too, one with
 SIGHASH_ALL assuming that we directly spent the funding tx with
 update state k (so the input txid is known), via the key path with
 key Qk; the other SIGHASH_NOINPUT via the Sk path. both partially
 signed tx's have nSequence set to the required relative timelock
 delay.

  5) if you're using scriptless scripts to do HTLCs, you'll need to
 allow for NOINPUT sigs when claiming funds as well (and update
 the partial signatures for the non-NOINPUT cases if you want to
 maximise privacy), which is a bit fiddly

  6) when closing the channel the process is then:

   - if you're in contact with the other party, negotiate a new
 key path spend of the funding tx, publish it, and you're done.

   - otherwise, if the funding tx hasn't been spent, post the latest
 update tx you know about, using the "spend the funding tx via
 key path" partial signature

   - otherwise, trace the children of the funding tx, so you can see
 the most recent published state:
   - if that's newer than the latest state you know about, your
 info is out of date (restored from an old backup?), and you
 have to wait for your counterparty to post the settlement tx
   - if it's equal to the latest state you know about, wait
   - if it's older than the latest state, post the latest update
 tx (via the NOINPUT script path sig), and wait

   - once the CSV delay for the latest update tx has expired, post
 the corresponding settlement tx (key path if the update tx
 spent the funding tx, NOINPUT if the update tx spent an earlier
 update tx)

   - once the settlement tx is posted, claim your funds

So the cases look like:

   mutual close:
 funding tx -> claimed funds

 -- only see one key via muSig, single signature, SIGHASH_ALL
 -- if there are active HTLCs when closing the channel, and they
timeout, then the claiming tx will likely be one-in, one-out,
SIGHASH_ALL, with a locktime, which may be unusual enough to
indicate a lightning channel.

   unilateral close, no cheating: 
 funding tx -> update N -> settlement N -> claimed funds

 -- update N is probably SINGLE|ANYONECANPAY, so chain analysis
of accompanying inputs might reveal who closed the channel
 -- settlement N has relative timelock
 -- claimed funds may have timelocks if they claim active HTLCs via
the refund path
 -- no NOINPUT signatures needed, and all signatures use the key path
so don't reveal any scripts

  

[Lightning-dev] More thoughts on NOINPUT safety

2019-03-12 Thread Anthony Towns
Hi all,

The following has some more thoughts on trying to make a NOINPUT
implementation as safe as possible for the Bitcoin ecosystem.

One interesting property of NOINPUT usage like in eltoo is that it
actually reintroduces the possibility of third-party malleability to
transactions -- ie, you publish transactions to the blockchain (tx A,
which is spent by tx B, which is spent by tx C), and someone can come
along and change A or B so that C is no longer valid). The way this works
is due to eltoo's use of NOINPUT to "skip intermediate states". If you
publish to the blockchain:

  funding tx -> state 3 -> state 4[NOINPUT] -> state 5[NOINPUT] -> finish

then in the event of a reorg, state 4 could be dropped, state 5's
inputs adjusted to refer to state 3 instead (the sig remains valid
due to NOINPUT, so this can be done by anyone not just holders of some
private key), and finish would no longer be a valid tx (because the new
"state 5" tx has different inputs so a different txid, and finish uses
SIGHASH_ALL for the signature so committed to state 5's original txid).

There is a safety measure here though: if the "finish" transaction is
itself a NOINPUT tx, and has a a CSV delay (this is the case in eltoo;
the CSV delay is there to give time for a hypothetical state 6 to be
published), then the only way to have a problem is for some SIGHASH_ALL tx
that spends finish, and a reorg deeper than the CSV delay (so that state
4 can be dropped, state 5 and finish can be altered). Since the CSV delay
is chosen by the participants, the above is still a possible scenario
in eltoo, though, and it means there's some risk for someone accepting
bitcoins that result from a non-cooperative close of an eltoo channel.


Beyond that, I think NOINPUT has two fundamental ways to cause problems
for the people doing NOINPUT sigs:

 1) your signature gets applied to a unexpectedly different
script, perhaps making it look like you've being dealing
with some blacklisted entity. OP_MASK and similar solves
this.

 2) your signature is applied to some transaction and works
perfectly; but then someone else sends money to the same address
and reuses your prior signature to forward it on to the same
destination, without your consent

I still like OP_MASK as a solution to (1), but I can't convince myself that
the problem it solves is particularly realistic; it doesn't apply to
address blacklists, because for OP_MASK to make the signature invalid
the address has to be different, and you could just short circuit the
whole thing by sending money from a blacklisted address to the target's
personal address directly. Further, if the sig's been seen on chain
before, that's probably good evidence that someone's messing with you;
and if it hasn't been seen on chain before, how is anyone going to tell
it's your sig to blame you for it?

I still wonder if there isn't a real problem hiding somewhere here,
but if so, I'm not seeing it.

For the second case, that seems a little more concerning. The nightmare
scenario is maybe something like:

 * naive users do silly things with NOINPUT signatures, and end up
   losing funds due to replays like the above

 * initial source of funds was some major exchange, who decide it's
   cheaper to refund the lost funds than deal with the customer complaints

 * the lost funds end up costing enough that major exchanges just outright
   ban sending funds to any address capable of NOINPUT, which also bans
   all taproot/schnorr addresses

That's not super likely to happen by chance: NOINPUT sigs will commit
to the value being spent, so to lose money, you (Alice) have to have
done a NOINPUT sig spending a coin sent to your address X, to someone
(Bob) and then have to have a coin with the exact same value sent from
someone else again (Carol) to your address X (or if you did a script
path NOINPUT spend, to some related address Y with a script that uses the same
key). But because it involves losing money to others, bad actors might
trick people into having it happen more often than chance (or well
written software) would normally allow.

That "nightmare" could be stopped at either the first step or the
last step:

 * if we "tag" addresses that can be spent via NOINPUT then having an
   exchange ban those addresses doesn't also impact regular
   taproot/schnorr addresses, though it does mean you can tell when
   someone is using a protocol like eltoo that might need to make use
   of NOINPUT signatures.  This way exchanges and wallets could simply
   not provide NOINPUT capable addresses in the first place normally,
   and issue very large warnings when asked to send money to one. That's
   not a problem for eltoo, because all the NOINPUT-capable address eltoo
   needs are internal parts of the protocol, and are spent automatically.

 * or we could make it so NOINPUT signatures aren't replayable on
   different transactions, at least by third parties. one way of doing
   this might be to require NOIN

Re: [Lightning-dev] Base AMP

2018-11-16 Thread Anthony Towns
On Thu, Nov 15, 2018 at 11:54:22PM +, ZmnSCPxj via Lightning-dev wrote:
> The improvement is in a reduction in `fee_base_msat` in the C->D path.

I think reliability (and simplicity!) are the biggest things to improve
in lightning atm. Having the flag just be incuded in invoices and not
need to be gossiped seems simpler to me; and I think endpoint-only
merging is better for reliability too. Eg, if you find candidate routes:

  A -> B -> M -- actual directed capacity $6
  A -> C -> M -- actual directed capacity $5.50
  M -> E -> F -- actual directed capacity $6
  A -> X -> F -- actual directed capacity $7

and want to send $9 form A to F, you might start by trying to send
$5 via B and $4 via C.

With endpoint-only merging you'd do:

   $5 via A,B,M,E,F -- partial success
   $4 via A,C,M,E -- failure
   $4 via A,X,F -- payment completion

whereas with in-route merging, you'd do:

   $5 via A,B,M -- held
   $4 via A,C,M -- to be continued
   $9 via M,E -- both partial payments fail

which seems a fair bit harder to incrementally recover from.

> Granted, current `fee_base_msat` across the network is very low currently.
> So I do not object to restricting merge points to ultimate payees.
> If fees rise later, we can revisit this.

So, while we already agree on the approach to take, I think the above
provides an additional rationale :)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Packet switching via intermediary rendezvous node

2018-11-16 Thread Anthony Towns
On Thu, Nov 15, 2018 at 07:24:29PM -0800, Olaoluwa Osuntokun wrote:
> > If I'm not mistaken it'll not be possible for us to have spontaneous
> > ephemeral key switches while forwarding a payment
> If this _was_ possible, then it seems that it would allow nodes to create
> unbounded path lengths (looks to other nodes as a normal packet), possibly
> by controlling multiple nodes in a route, thereby sidestepping the 20 hop
> limit all together.

If you control other nodes in the route you can trivially create a "path"
of more than 20 hops -- go 18 hops from your first node to your second
node, and have the second node trigger on the payment hash to create
an entirely new onion to go another 18 hops, repeating if necessary to
create an arbitrarily long route.

> This would be undesirable many reasons, the most dire of
> which being the ability to further amplify null-routing attacks.

That doesn't really *amplify* null-routing attacks -- even if its
circular, you're still locking additional funds up each time you
route through yourself.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Probe cancellation

2018-11-09 Thread Anthony Towns
PING,

It seems like ensuring reliability is going to involve nodes taking
active measures like probing routes fairly regularly. Maybe it would
be worth making that more efficient? For example, a risk of probing is
that if the probe discovers a failing node/channel, the probe HTLC will
get stuck, and have to gradually timeout, which at least uses up HTLC
slots and memory for each of the well-behaved nodes, but if the probe
has a realistic value rather than just a few (milli)satoshis, it might
lock up real money too.

It might be interesting to allow for cancelling stuck probes from
the sending direction as well as the receiving direction. eg if the
payment hash wasn't generated as SHA256("something") but rather as
SHA256("something") XOR 0xFF..FF or similar, then everyone can safely drop
the incoming transaction because they know that even if they forwarded
the tx, it will be refunded eventually anyway (or otherwise sha256 is
effectively broken and they're screwed anyway). So all I have to do is
send a packet saying this was a probe, and telling you the "something"
to verify, and I can free up the slot/funds from my probe, as can everyone
else except for the actual failing nodes.

>From the perspective of the sending node:

  generate 128b random number X
  calculate H=bitwise-not(SHA256(X))
  make probe payment over path P, hash H, amount V
  wait for response:
- success: Y, s.t. SHA256(Y)=H=not(SHA256(X)) -- wtf, sha is broken
- error, unknown hash: path works
- routing failed:  mark failing node, reveal X cancelling HTLC
- timeout: mark path as failed (?), reveal X cancelling HTLC

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-08 Thread Anthony Towns
On Thu, Nov 08, 2018 at 05:32:01PM +1030, Olaoluwa Osuntokun wrote:
> > A node, via their node_announcement,
> Most implementations today will ignore node announcements from nodes that
> don't have any channels, in order to maintain the smallest routing set
> possible (no zombies, etc). It seems for this to work, we would need to undo
> this at a global scale to ensure these announcements propagate?

Having incoming capacity from a random node with no other channels doesn't
seem useful though? (It's not useful for nodes that don't have incoming
capacity of their own, either)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Proposal for Advertising Channel Liquidity

2018-11-07 Thread Anthony Towns
On Wed, Nov 07, 2018 at 06:40:13PM -0800, Jim Posen wrote:
> can simply close the channel. So if I'm charging for liquidity, I'd actually
> want to charge for the amount (in mSAT/BTC) times time. 

So perhaps you could make a market here by establishing a channel saying
that

  "I'll pay 32 msat per 500 satoshi per hour for the first 3 days"

When you open the channel with 500,000 satoshi donated by the other guy,
you're then obliged to transfer 32 satoshi every hour to the other guy
for three days (so a total of 14c or so).

If the channel fails beforehand, they don't get paid; if you stop
paying you can still theoretically do a mutual close.

Maybe a complicated addition to the protocol though?

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-07 Thread Anthony Towns
On Wed, Nov 07, 2018 at 02:26:29AM +, Gert-Jaap Glasbergen wrote:
> > Otherwise, if you're happy accepting 652 satoshis, I don't see why you
> > wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
> > you're no worse off, in any event.
> I wouldn’t be worse off when accepting the payment, I agree. I can safely 
> ignore whatever fraction was sent if I don’t care about it anyway. The 
> protocol is however expecting (if not demanding) me to also route payments 
> with fractions, provided they are above the set minimum. In that case I’m 
> also expected to send out fractions. Even though they don’t exist on-chain, 
> if I send a fraction of a satoshi my new balance will be 1 satoshi lower 
> on-chain since everything is rounded down.

But that's fine: suppose you want everything divided up into lots of
1 satoshi, and you see 357.719 satoshis coming in and 355.715 satoshis
going out. Would you have accepted 357 satoshis going in (rounded down)
and 356 satoshis going out (rounded up)? If so, you're set. If not,
reject the HTLC as not having a high enough fee.

Yes, you're still expected to send fractions of a satoshi around, but
that doesn't have to affect your accounting (except occassionally to
your benefit when you end up with a thousand millisatoshis).

I think you can set your fee_base_msat to 2000 msat to make sure every
HTLC you route pays you at least a satoshi, even with losses from
rounding. If you're willing to find yourself having routed payments for
free (after rounding), then setting it to 1000 msat should work too.

> > Everything in open source is configurable by end users: at worst, either
> > by them changing the code, or by choosing which implementation to use…
> Well, yes, in that sense it is. But the argument was made that it’s too 
> complex for average users to understand: I agree there, [...]

Then it's not really a good thing for different implementations to have
as a differentiator...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.

2018-11-06 Thread Anthony Towns
On Tue, Nov 06, 2018 at 10:22:56PM +, Gert-Jaap Glasbergen wrote:
> > On 6 Nov 2018, at 14:10, Christian Decker  
> > wrote:
> > It should be pointed out here that the dust rules actually prevent us
> > from creating an output that is smaller than the dust limit (546
> > satoshis on Bitcoin). By the same logic we would be forced to treat the
> > dust limit as our atomic unit, and have transferred values and fees
> > always be multiples of that dust limit.
> I don’t follow the logic behind this.

I don't think it quite makes sense either fwiw.

> > 546 satoshis is by no means a tiny amount anymore, i.e., 546'000 times
> > the current minimum fee and value transferred. I think we will have to
> > deal with values that are not representable / enforceable on-chain
> > anyway, so we might as well make things more flexible by keeping
> > msatoshis.
> I can see how this makes sense. If you deviate from the realms of what is 
> possible to enforce on chain,

What's enforcable on chain will vary though -- as fees rise, even if the
network will still relay your 546 satoshi output, it may no longer be
economical to claim it, so you might as well save fees by not including
it in the first place.

But equally, if you're able to cope with fees rising _at all_ then
you're already okay with losing a few dozen satoshis here and there, so
how much difference does it make if you're losing them because fees
rose, or because there was a small HTLC that you could've claimed in
theory (or off-chain) but just can't claim on-chain?

> Again, I am not advocating mandatory limitations to stay within base layer 
> enforcement, I am advocating _not_ making it mandatory to depart from it.

That seems like it adds a lot of routing complexity for every node
(what is the current dust level? does it vary per node/channel? can I
get a path that accepts my microtransaction HTLC? do I pay enough less
in fees that it's better to bump it up to the dust level?), and routing
is already complex enough...

You could already get something like this behaviour by setting a high
"fee_base_msat" and a low "fee_proportional_millionths" so it's just
not economical to send small transactions via your channel, and a
corresponding "htlc_maximum_msat" to make sure you aren't too cheap at
the top end.

Otherwise, if you're happy accepting 652 satoshis, I don't see why you
wouldn't be happy accepting an off-chain balance of 652.003 satoshis;
you're no worse off, in any event.

> I would not envision this to be even configurable by end users. I am just 
> advocating the options in the protocol so that an implementation can choose 
> what security level it prefers. 

Everything in open source is configurable by end users: at worst, either
by them changing the code, or by choosing which implementation to use...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

2018-11-04 Thread Anthony Towns
On Sun, Nov 04, 2018 at 08:04:20PM +1030, Rusty Russell wrote:
> >> >  - just send multiple payments with the same hash:
> >> > works with sha256
> >> > privacy not improved much (some intermediary nodes no longer know
> >> >   full invoice value)
> >> > can claim partial payments as soon as they arrive
> >> > accepting any partial payment provides proof-of-payment
> >> Interestingly, if vendor takes part payment, rest can be stolen by
> >> intermediaries.
> > Or you could just see a $5 bill, send $0.50 through, and wait to see
> > if the take the partial payment immediately before even trying the
> > remaining $4.50.
> Sure, that's true today, too?

Yeah, exactly. So to get correct behaviour vendors/payees need to check
the HTLC amount matches what they expect already... They could just
automatically pause instead of rejecting here to see if more payments
come through in the next n seconds via (presumably) different paths,
with no extra message bit required. (A bit in the invoice indicating
you'll do this would probably be useful though)

> >  Vendor -> *:"I sell widgets for 0.01 BTC, my pubkey is P"
> >  Customer -> Vendor: "I want to buy a widget"
> >  Vendor -> Customer: "Here's an R value"
> >  Customer: calculates S = R + H(P,R,"send $me a widget at $address")*P
> >  Customer -> Vendor: "here's 0.01 BTC for s corresponding to S, my
> >   details are R, $me, $address"
> >  Vendor: looks up r for R=r*G, calculates s = r + H(P,R,"send $me a
> >  widget at $address")*p, checks S=s*G
> >  Vendor -> Customer: 
> >
> >  Customer -> Court: reveals the invoice ("send $me a widget...") and the
> > signature by Vendor's pubkey P, (s,R)
> >
> > I think the way to do secp256k1 AMP with that is that when sending
> > through the payment is for the customer to send three payments to the
> > Vendor conditional on preimages for A,B,C calculated as:
> >
> >A = S + H(1,secret)*G
> >B = S + H(2,secret)*G
> >C = S + H(3,secret)*G
> Note: I prefer the construction H(,)
> which doesn't require an explicit order.

Yes, you're quite right.

> I'm not sure I see the benefit over just treating them independently,
> so I also think we should defer.

If you've got a path that merges, then goes for a few hops, you'd save
on the fee_base_msat fees, and allow the merged hops to have smaller
commitment transactions. Kinda neat, but the complexity in doing the
onion stuff means it definitely makes sense to defer IMO.

> >> [1] If we're not careful we're going to implement HORNET so we can pass
> >> arbitrary messages around, which means we want to start charging for
> >> them to prevent spam, which means we reopen the pre-payment debate, and
> >> need reliable error messages...
> > Could leave the interactivity to the "web store" layer, eg have a BOLT
> > 11 v1.1 "offer" include a url for the website where you go an enter your
> > name and address and whatever other info they need, and get a personalised
> > BOLT 11 v1.1 "invoice" back with payment-hash/nonce/signature/whatever?
> I think that's out-of-scope, and I generally dislike including a URL
> since it's an unsigned externality and in practice has horrible privacy
> properties.

Maybe... I'm not sure that it'll make sense to try to negotiate postage
and handling fees over lightning, rather than over https, though?

BTW, reviewing contract law terminology, I think the way lawyers would
call it is:

   "invitation to treat" -- advertising that you'll sell widgets for $x
   "offer" -- I'll pay you $3x for delivery of 3 widgets to my address
   "acceptance" -- you agree, take my $3x and give me a receipt
   "consideration" -- you get my $3x, I get 3 widgets

So it might be better to have the terms be "advertisment", "invoice",
"receipt", because the "advertisement" isn't quite an offer in
contract-law terms. In any event, I think that would mean the BOLT-11
terms and lightning payment process would map nicely into contract law
101, which seems helpful.

Oh! Post-Schnorr I think there's a good reason for the payee to
include their own crypto key in the invoice; so you can generate an
scriptless-script address for an on-chain fallback payment directly
between the payer/payee that reveals proof-of-payment on acceptance
(and allow refund on timeout via taproot I guess). At least, I think
all that might be theoretically feasible.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

2018-11-04 Thread Anthony Towns
On Mon, Nov 05, 2018 at 01:05:17AM +, ZmnSCPxj via Lightning-dev wrote:
> > And it just doesn't work unless you give over uniquely identifying
> > information. AJ posts to r/bitcoin demonstrating payment, demanding his
> > goods. Sock puppet says "No, I'm the AJ in Australia" and cut & pastes
> > the same proof.
> Technically speaking, all that AJ in Australia needs to show is that he or 
> she knows, the private key behind the public key that is indicated on the 
> invoice.

Interesting. I think what you're saying is that with secp256k1 preimages
(with decorrelation), if you have the payment hash Q, then the payment
preimage q (Q=q*G) is only known to the payee and the payer (and not
any intermediaries thanks to decorrelation), so if you see a statement

  m="This invoice has been paid but not delivered as at 2018-11-05"

signed by "Q" (so, some s,R s.t. s*G = R + H(Q,R,m)*Q) then that means
either the payee signed it, in which case there's no dispute, or the
payer signed it... And that's publicly verifiable with only the original
invoice information (ie "Q").

(I don't think there's any need for multiple rounds of signatures)


FWIW, I don't see reddit as a particularly viable "court"; there's
no way for reddit to tell who's actually right in a dispute, eg if I
say blockstream didn't send stickers I paid for, and blockstream says
they did; ie there's no need for a sock puppet in the above scenario,
blockstream can just say "according to our records you signed for
delivery, stop whinging". (And if we both agree that it did or didn't
arrive, there's no need to post cryptographic proofs to reddit afaics)

I think there's maybe four sorts of "proof of payment" people might
desire:

  0) no proof: "completely" deniable payments (donations?)

  1) shared secret: ability to prove directly to the payee that an
 invoice was paid (what we have now)

  2) signed payment: ability to prove to a different business unit of
 the payee that payment was made, so that you can keep all the 
 secrets in the payment-handling part, and have the service-delivery
 part not be at risk for losing all your money

  3) third-party verifiable: so you can associate a payment with real
 world identity information, and take them to court (or reddit) as a
 contract dispute; needs PKI infrastructure so you can be confident
 the pubkey maps to the real world people you think it does, etc

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

2018-11-03 Thread Anthony Towns
On Sun, Nov 04, 2018 at 01:30:48PM +1030, Rusty Russell wrote:
> I'm not sure.  Jonas Nick proposed a scheme, which very much assumes
> Schnorr AFAICT:
> Jonas Nick wrote:
> > How I thought it would work is that the invoice would contain a
> > Schnorr nonce R.

(Note this means the "invoice" must be unique for each payment)

> > Then the payer would construct s*G = R +
> > H(payee_pubkey,R,"I've bought 5 shirts shipped to Germany")*G. Then
> > the payer builds the scriptless script payment path such that when the
> > payee claims, the payer learns s and thus has a complete
> > signature. However, that doesn’t work with recurrent payments because
> > the payee can use the nonce only once.

So that's totally fine to do however you receive the "s" value -- the
message that's getting the Schnorr signature isn't a valid bitcoin
transaction, so it's something that only needs to be validated by
BOLT-aware courts.


I also think you can get recurrent payments easily by extending the
verification algorithm. Basically instead of Verify(m,P,sig) have
Verify(m,P,n,sig) to verify you've made n payments of the invoice "m".

Construct "m" to include the postimage X = H(pre,1000) which indicates
"pre" has been hashed 1000 times, so X = H(H(pre,1000-n),n).

Calculate the original signature as:

   s = r + H(P,R,m+X)*p

and verify that n payments have been made by checking:

   Verify(m,P,n,(s,R,rcpt)) :: s*G = R + H(P,R,m+H(rcpt,n))*P

You'd provide s,R,X when setting up the subscription, then reveal the
preimage to X, the preimage to the preimage of X etc on each payment.
(Maybe shachain would work here?)

I think that approach is independent of using sha256/secp256k1 for
preimages over lightning too.

> I would probably enhance this to include a nonce, which allows for AMP
> (you have to xor the AMP payments to get the nonce):
> R + H(payee_pubkey,R,"I've bought 5 shirts shipped to Germany",NONCE)*G

R is already a unique nonce under the hash here, so I don't think a
second one adds any value fwiw.

> > I think it makes sense to think of proof-of-payment in terms of a
> > verification algorithm (that a third party court could use), that takes:
> >
> >   m - the invoice details, eg
> >   "aj paid $11 for stickers to be delivered to Australia"
> >   P - the pubkey of the vendor
> >   sig - some signature
> >
> > With the current SHA256 preimages, you can make sig=(R,s,pre)
> > where the sig is valid if:
> >
> >   s*G = R + H(P,R,m+SHA256(pre))*P
> >
> > If you share R,s,SHA256(pre) beforehand, the payer can tell they'll have
> > a valid signature if they pay to SHA256(pre). That's a 96B signature,
> > and it requires "pre" be different for each sale, and needs pre-payment
> > interactivity to agree on m and communicate R,s back to the payer.
> For current-style invoices (no payer-supplied data), the payee knows
> 'm', so no interactivity needed, which is nice.

I'm looking at it as needing interactivity to determine m prior to
the payment going through -- the payer needs to send through "aj" and
"Australia" in the example above, before the payee can generate s,R to
send back, at which point the payer can make the payment knowing they'll
either get a cryptographic proof of payment or a refund.

> In the payer-supplied data case, I think 'm' should include a signature
> for a key only the payer knows: this lets them prove *they* made the
> payment.

I don't object to that, but I think it's unnecessary; as long as there
was a payment for delivery of the widget to "aj" in "Australia" does it
matter if the payment was technically made by "aj" by "Visa on behalf
of aj" or by "Bank of America on behalf of Mastercard on behalf of aj's
friend who owed him some money" ?

> How does this interact with AMP, however?

The way I see it is they're separate: you have a way of getting the
preimage back over lightning (which is affected by AMP), and you have a
way of turning a preimage into a third-party-verifiable PoP (with
Schnorr or whatever).

(That might not be true if there's a clever way of safely feeding the
nonce R back, so that you can go straight from a generic offer to an
accepted payment with proof of payment)

> > With seckp256k1 preimages, it's easy to reduce that to sig=(R,s),
> > and needing to communicate an R to the payer initially, who can then
> > calculate S and send "m" along with the payment.
> OK, I buy that.

Crap, do I need to give you proof of payment for it now? :)

> > Maybe it makes sense to disambiguate the term "invoice" -- when you don't
> > know who you might be giving the goods/service to, call it an "offer",
> > which can be a write-once/accept-by-anyone deal that you just leave on
> > a webpage or your email signature; but an "invoice" should be specific
> > to each individual payment, with a "receipt" provided once an invoice
> > is paid.
> "offer" is a good name, since I landed on the same one while thinking
> about this too :)

Yay!

> > It seems to me like there are three levels that could be im

Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

2018-11-02 Thread Anthony Towns
On Fri, Nov 02, 2018 at 03:45:58PM +1030, Rusty Russell wrote:
> Anthony Towns  writes:
> > On Fri, Nov 02, 2018 at 10:20:46AM +1030, Rusty Russell wrote:
> >> There's been some discussion of what the lightning payment flow
> >> might look like in the future, and I thought I'd try to look forwards so
> >> we can avoid painting ourselves into a corner now.  I haven't spent time
> >> on concrete design and implementation to be sure this is correct,
> >> however.
> > I think I'd like to see v1.1 of the lightning spec include
> > experimental/optional support for using secp256k1 public/private keys
> > for payment hashes/preimages. That assumes using either 2-party ECDSA
> > magic or script magic until it's viable to do it via Schnorr scriptless
> > scripts, but that seems like it's not totally infeasible?
> Not totally infeasible, but since every intermediary needs to support
> it, I think we'd need considerable buy-in before we commit to it in 1.1.

"every intermediary" just means "you have to find a path where every
channel supports it"; nodes/channels that aren't in the route you choose
aren't a problem, and can still pass on the gossiped announcements,
I think?

> > I think the
> > components would need to be:
> >  - invoices: will the preimage for the hash be a secp256k1 private key
> >or a sha256 preimage? (or payer's choice?)
> From BOLT11:
>The `p` field supports the current 256-bit payment hash, but future
>specs could add a new variant of different length, in which case
>writers could support both old and new, and old readers would ignore
>the one not the correct length.
> So the plan would be you provide two `p` fields in transition.

Yeah, that sounds workable.

> >  - channel announcements: do you support secp256k1 for hashes or just
> >sha256?
> Worse, it becomes "I support secp256k1 with ECDSA" then a new "I support
> secp256k1 with Schnorr".  You need a continuous path of channels with
> the same feature.

I don't think that's correct: whether it's 2p-ecdsa, Schnorr or script
magic only matters for the two nodes directly involved in the channel
(who need to be able to understand the commitment transactions they're
signing, and extract the private key from the on-chain tx if the channel
gets unilaterally closed). For everyone else, they just need to know that
they can put in a public key based HTLC, and get back the corresponding
private key when the HTLC goes through.

It's also (theoretically) upgradable afaics: if two nodes have a channel
that supports 2p-ecdsa, and eventually both upgrade to support segwit
v1 scriptless schnorr sigs or whatever, they just need to change the
addresses they use in new commitment txs, even for existing HTLCs.

> > Even if you calculate r differently, I don't think you can do this
> > without Bob and Alice interacting to get the nonce R prior to sending
> > the transaction, which seems effectively the same as having dynamic
> > invoice hashes, though.
> I know Andrew Poelstra thought it was possible, so I'm going to leave a
> response to him :)

AFAICT, in general, if you're going to have n signatures with a public
key P, you need to generate the n R=r*G values from n*32B worth of random data,
that's previously unknown to the signature recipients. If you've got
less than that, then you will have calculated each R by doing something
like based on  I think a general scheme is: payer creates a random group-marker, sends
> <32-byte-randomness>[encrypted data...] in each payment.
> Receipient collects payments by , xoring the
> <32-byte-randomness>; if that xor successfully decrypts the data, you've
> got all the pieces.
> 
> (For low-AMP, you use payment_hash as , and just use
> SHA256(<32-byte-randomness>) as the per-payment
> preimage so no [encrypted data] needed).

Hmm, right, I've got decorrelation and AMP combined in my head. I'm also
a bit confused about what exactly you mean by "low-AMP"...

Rereading through the AMP threads, Christian's post makes a lot of sense
to me:

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001023.html

I'm not really seeing the benefits in complicated AMP schemes without
decorrelation...

It seems to me like there are three levels that could be implemented:

 - laolu/conner: ("low AMP" ?)
works with sha256
some privacy improvement
loses proof-of-payment
can't claim unless all payments arrive

 - just send multiple payments with the same hash:
works with sha256
privacy not improved much (some intermediary nodes no longer know
  full i

Re: [Lightning-dev] BOLT11 In the World of Scriptless Scripts

2018-11-01 Thread Anthony Towns
On Fri, Nov 02, 2018 at 10:20:46AM +1030, Rusty Russell wrote:
> There's been some discussion of what the lightning payment flow
> might look like in the future, and I thought I'd try to look forwards so
> we can avoid painting ourselves into a corner now.  I haven't spent time
> on concrete design and implementation to be sure this is correct,
> however.

I think I'd like to see v1.1 of the lightning spec include
experimental/optional support for using secp256k1 public/private keys
for payment hashes/preimages. That assumes using either 2-party ECDSA
magic or script magic until it's viable to do it via Schnorr scriptless
scripts, but that seems like it's not totally infeasible? I think the
components would need to be:

 - invoices: will the preimage for the hash be a secp256k1 private key
   or a sha256 preimage? (or payer's choice?)
 - channel announcements: do you support secp256k1 for hashes or just
   sha256?
 - node features: how do you support secp256k1? not at all (default),
   via 2p-ecdsa, via script magic, (eventually) via schnorr, ...?

I think this is (close to) a necessary precondition for payment
decorrelation, AMP, and third-party verifiable proof-of-payment.

> Desired Status
> --
> Ideally, you could create one invoice which could be paid arbitrary many
> times, by different individuals.  eg. "My donation invoice is on my web
> page", or "I've printed out the invoice for a widget and stuck it to the
> widget", or "Pay this invoice once a month please".
> 
> Also, you should be able to prove you've paid, in a way I can't just
> copy the proof and claim I paid, too, even if I'm the merchant, and that
> you agreed to my terms, eg. "I'm paying for 500 widgets to be shipped to
> Rusty in Australia".

So, I think at a high level the logic here goes:

  1. Alice: "Buy a t-shirt from me for $5!"
  2. Bob: "Alice, I want to buy a t-shirt from you, here's $5"
  3. Alice: "Receipt: Bob bought a t-shirt from me"
  4. Bob: "Your Honour, here's my receipt from Alice for a t-shirt, please
 make her deliver on it!"

Going backwards; for the last step to be useful, the receipt has to be
a signature with the Alice's public key -- if it were anything short of
that, Alice will claim Bob could have just made up all the numbers. For a
Schnorr sig, that means (R,s) with the vendor choosing R and not revealing
R's preimage as that would reveal their private key.

If both vendor and customer know R, then to get the signature, you need
the private key holder to reveal s which is just revealing the secp256k1
private key corresponding to S, calculated as:

S = R + H(P,R,"Bob bought a $5 t-shirt from me")*P

where P is Alice's public key. If R is calculated via the Schnorr BIP's
recommendation, then r = H(p, "Bob bought a $5 t-shirt from me") -- ie,
based on the private key and the message being signed.

Even if you calculate r differently, I don't think you can do this
without Bob and Alice interacting to get the nonce R prior to sending
the transaction, which seems effectively the same as having dynamic
invoice hashes, though.

Maybe querying for a nonce through the lightning network would make
sense though, which would allow the "invoice" to be static, and all the
dynamic things would be via lightning p2p? That step could perhaps be
combined with the 0 satoshi payment probes that Fabrice proposes in

 
https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-October/001484.html

but I think replying with a public nonce value would need a new message
type of some sort?



I think AMP is independent, other than also using secp256k1 preimages
rather than SHA256. I think AMP splits and joins are just:

 - if you're joining incoming payments, don't forward until you've
   got all the HTLCs, and ensure you can generate the secret for each
   incoming payment from the single outgoing payment

 - if you're splitting an incoming payment into many outgoing payments,
   ensure you can claim the incoming payment from *any* outgoing
   payments' secret

Which I think in practice just means knowing x_i for each input, and
y_j for each output other than the first, and verifying:

I_i = O_1 + x_i*G
O_j = O_1 + y_j*G

(this gives I_i = O_j + (x_i-y_j)*G and the corresponding secret being
i_i = o_j + x_i - y_j) allowing you to claim all incoming HTLCs given
the secret from any outgoing HTLC)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-10-13 Thread Anthony Towns
On 13 October 2018 7:12:03 pm GMT+09:00, Christian Decker 
 wrote:
>Great find ZmnSCPxj, we can also have an adaptive scheme here, in which
>we
>start with a single update transaction, and then at ~90% of the
>available
>range we add a second. This is starting to look a bit like the DMC
>invalidation tree :-)
>But realistically speaking I don't think 1B updates is going to be
>exhausted any time soon, but the adaptive strategy gets the best of
>both worlds.
>
>Cheers,
>Christian
>
>On Fri, Oct 12, 2018 at 5:21 AM ZmnSCPxj 
>wrote:
>
>> Another way would be to always have two update transactions,
>effectively
>> creating a larger overall counter:
>>
>> [anchor] -> [update highbits] -> [update lobits] -> [settlement]
>>
>> We normally update [update lobits] until it saturates.  If lobits
>> saturates we increment [update highbits] and reset [update lobits] to
>the
>> lowest valid value.
>>
>> This will provide a single counter with 10^18 possible updates, which
>> should be enough for a while even without reanchoring.
>>
>> Regards,
>> ZmnSCPxj
>>
>>
>> Sent with ProtonMail <https://protonmail.com> Secure Email.
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Friday, October 12, 2018 1:37 AM, Christian Decker <
>> decker.christ...@gmail.com> wrote:
>>
>> Thanks Anthony for pointing this out, I was not aware we could
>> roll keypairs to reset the state numbers.
>>
>> I basically thought that 1billion updates is more than I would
>> ever do, since with splice-in / splice-out operations we'd be
>> re-anchoring on-chain on a regular basis anyway.
>>
>>
>>
>> On Wed, Oct 10, 2018 at 10:25 AM Anthony Towns 
>wrote:
>>
>>> On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
>>> > eltoo is a drop-in replacement for the penalty based invalidation
>>> > mechanism that is used today in the Lightning specification. [...]
>>>
>>> Maybe this is obvious, but in case it's not, re: the locktime-based
>>> sequencing in eltoo:
>>>
>>>  "any number above 0.500 billion is interpreted as a UNIX timestamp,
>and
>>>   with a current timestamp of ~1.5 billion, that leaves about 1
>billion
>>>   numbers that are interpreted as being in the past"
>>>
>>> I think if you had a more than a 1B updates to your channel (50
>updates
>>> per second for 4 months?) I think you could reset the locktime by
>rolling
>>> over to use new update keys. When unilaterally closing you'd need to
>>> use an extra transaction on-chain to do that roll-over, but you'd
>save
>>> a transaction if you did a cooperative close.
>>>
>>> ie, rather than:
>>>
>>>   [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
>>> or
>>>   [funding] -> [coop close / re-fund] -> [coop close]
>>>
>>> you could have:
>>>   [funding] -> [update 1B] -> [update 23,310,561 with key2] ->
>[HTLCs]
>>> or
>>>   [funding] -> [coop close]
>>>
>>> You could repeat this when you get another 1B updates, making
>unilateral
>>> closes more painful, but keeping cooperative closes cheap.
>>>
>>> Cheers,
>>> aj
>>>
>>>
>>

Hmm - the range grows by one every second though, so as long as you don't go 
through a billion updates per second, you can go to 100% of the range, knowing 
that by the time you have to increment, you'll have 115% of the original range 
available, meaning you never need more than two transactions (until locktime 
overflows anyway) for the commitment, even at 900MHz transaction rates...

Cheers,
aj

-- 
Sent from my phone.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-10-10 Thread Anthony Towns
On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
> eltoo is a drop-in replacement for the penalty based invalidation
> mechanism that is used today in the Lightning specification. [...]

Maybe this is obvious, but in case it's not, re: the locktime-based
sequencing in eltoo:

 "any number above 0.500 billion is interpreted as a UNIX timestamp, and
  with a current timestamp of ~1.5 billion, that leaves about 1 billion
  numbers that are interpreted as being in the past"

I think if you had a more than a 1B updates to your channel (50 updates
per second for 4 months?) I think you could reset the locktime by rolling
over to use new update keys. When unilaterally closing you'd need to
use an extra transaction on-chain to do that roll-over, but you'd save
a transaction if you did a cooperative close.

ie, rather than:

  [funding] -> [coop close / re-fund] -> [update 23M] -> [HTLCs etc]
or
  [funding] -> [coop close / re-fund] -> [coop close]

you could have:
  [funding] -> [update 1B] -> [update 23,310,561 with key2] -> [HTLCs]
or
  [funding] -> [coop close]

You could repeat this when you get another 1B updates, making unilateral
closes more painful, but keeping cooperative closes cheap.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] eltoo: A Simplified update Mechanism for Lightning and Off-Chain Contracts

2018-07-18 Thread Anthony Towns
(bitcoin-dev dropped from cc)

On Mon, Apr 30, 2018 at 05:41:38PM +0200, Christian Decker wrote:
> eltoo is a drop-in replacement for the penalty based invalidation
> mechanism that is used today in the Lightning specification. [...]

I think you can simplify eltoo further, both in the way the transactions
work and in the game theory ensuring people play fair.

In essence: rather than having a funding transaction spending to address
"X", and a set of ratcheting states that spend from-and-to the same
address "X", I think it's feasible to have a simpler ratchet mechanism:

  (1) funding address: multisig by A and B as usual

  (2) commit to state >=N by A

  (3a) commit to state N by A after delay D; or
  (3b) commit to state M (M>=N) by B

I believe those transactions (while partially signed, before posting to
the blockchain) would look like:

  (1) pay to "2 A1 B1 2 OP_CHECKMULTISIG"

  (2) signed by B1, nlocktime set to (N+E)
  pay to "(N+E) OP_CLTV OP_DROP 2 A2a B2a 2 OP_CHECKMULTISIG"

  (3a) signed by B2a, nSequence set to the channel pay to self delay,
   nlocktime set to (N+E)
   pays to the channel balances / HTLCs, with no delays or
   revocation clauses

  (3b) signed by A2a with SIGHASH_NOINPUT_UNSAFE, nlocktime set to (M+E)
   pays to the channel balances / HTLCs, with no delays or
   revocation clauses

You spend (2)+delay+(3a)+[claim balance/HTLC] if your counterparty
goes away.  You spend (2) and your counterparty spends (3b) if you're
both monitoring the blockchain. (3a) and (3b) should have the same tx
size, fee rate and outputs.

(A1, A2a are keys held by A; B1, B2a are keys held by B; E is
LOCKTIME_THRESHOLD; N is the current state number)

That seems like it has a few nice features:

 - txes at (3a) and (3b) can both pay current market fees with minimal
   risk, and can be CPFPed by a tx spending your own channel balance

 - txes at (2) can pay a non-zero fee, provided it's constant for the
   lifetime of the channel (to conform with the NOINPUT rules)

 - if both parties are monitoring the blockchain, then the channel
   can be fully closed in a single block, by (2)+(3b)+[balance/HTLC
   claims], and the later txes can do CPFP for tx (2).

 - both parties can claim their funds as soon as the other can, no
   matter who initiates the close

 - you only need 3 pre-signed txes for the current state; the txes
   for claiming HTLCs/balances don't need to be half-signed (unless
   you're doing them via schnorr scriptless scripts etc)

The game theory looks fine to me. If you're posting transaction (2), then
you can choose between a final state F, paying you f and your counterparty
b-f, or some earlier state N, paying you n, and your counterparty b-n. If
f>n, it makes sense for you to choose F, in which case your counterparty
is also forced to choose state F for (3b) and you're forced to choose F
for (3a). If n>f, then if you choose N, your counterparty will either
choose state F because b-f>b-n and you will receive f as before, or
will choose some other state M>N, where b-m>b-f, and you will receive
m eltoo addresses some of the issues we encountered while speficying and
> implementing the Lightning Network. For example outsourcing becomes very
> simple since old states becoming public can't hurt us anymore.

The scheme above isn't great for (untrusted) outsourcing, because if
you reveal enough for an adversary to post tx (3b) for state N, then
they can then collaborate with your channel counterparty to roll you
back from state N+1000 back to state N.

With eltoo if they do the same, then you have the opportunity to catch
them at it, and play state N+1000 to the blockchain -- but if you're
monitoring the blockchain carefully enough to catch that, why are you
outsourcing in the first place? If you're relying on multiple outsourcers
to keep each other honest, then I think you run into challenges paying
them to publish the txes for you.

Thoughts? Apart from still requiring NOINPUT and not working with
adversarial outsourcing, this seems like it works nicely to me, but
maybe I missed something...

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [bitcoin-dev] BIP sighash_noinput

2018-05-14 Thread Anthony Towns
On Thu, May 10, 2018 at 08:34:58AM +0930, Rusty Russell wrote:
> > The big concern I have with _NOINPUT is that it has a huge failure
> > case: if you use the same key for multiple inputs and sign one of them
> > with _NOINPUT, you've spent all of them. The current proposal kind-of
> > limits the potential damage by still committing to the prevout amount,
> > but it still seems a big risk for all the people that reuse addresses,
> > which seems to be just about everyone.
> If I can convince you to sign with SIGHASH_NONE, it's already a problem
> today.

So, I don't find that very compelling: "there's already a way to lose
your money, so it's fine to add other ways to lose your money". And
again, I think NOINPUT is worse here, because a SIGHASH_NONE signature
only lets others take the coin you're trying to spend, messing up when
using NOINPUT can cause you to lose other coins as well (with caveats).

> [...]
> In a world where SIGHASH_NONE didn't exist, this might be an argument :)

I could see either dropping support for SIGHASH_NONE for segwit
v1 addresses, or possibly limiting SIGHASH_NONE in a similar way to
limiting SIGHASH_NOINPUT. Has anyone dug through the blockchain to see
if SIGHASH_NONE is actually used/useful?

> That was also suggested by Mark Friedenbach, but I think we'll end up
> with more "magic key" a-la Schnorr/taproot/graftroot and less script in
> future.

Taproot and graftroot aren't "less script" at all -- if anything they're
the opposite in that suddenly every address can have a script path.
I think NOINPUT has pretty much the same tradeoffs as taproot/graftroot
scripts: in the normal case for both you just use a SIGHASH_ALL
signature to spend your funds; in the abnormal case for NOINPUT, you use
a SIGHASH_NOINPUT (multi)sig for unilateral eltoo closes or watchtower
penalties, in the abnormal case for taproot/graftroot you use a script.

> That means we'd actually want a different Segwit version for
> "NOINPUT-can-be-used", which seems super ugly.

That's backwards. If you introduce a new opcode, you can use the existing
segwit version, rather than needing segwit v1. You certainly don't need
v1 segwit for regular coins and v2 segwit for NOINPUT coins, if that's
where you were going?

For segwit v0, that would mean your addresses for a key "X", might be:

   [pubkey]  X
- not usable with NOINPUT
   [script]  2 X Y 2 CHECKMULTISIG
- not usable with NOINPUT
   [script]  2 X Y 2 CHECKMULTISIG_1USE_VERIFY
- usable with NOINPUT (or SIGHASH_ALL)

CHECKMULTISIG_1USE_VERIFY being soft-forked in by replacing an OP_NOP,
of course. Any output spendable via a NOINPUT signature would then have
had to have been deliberately created as being spendable by NOINPUT.

For a new segwit version with taproot that likewise includes an opcode,
that might be:

   [taproot]  X
- not usable with NOINPUT
   [taproot]  X or: X CHECKSIG_1USE
- usable with NOINPUT

If you had two UTXOs (with the same value), then if you construct
a taproot witness script for the latter address it will look like:

X [X CHECKSIG_1USE] [sig_X_NOINPUT]

and that signature can't be used for addresses that were just intending
to pay to X, because the NOINPUT sig/sighash simply isn't supported
without a taproot path that includes the CHECKSIG_1USE opcode.

In essence, with the above construction there's two sorts of addresses
you generate from a public key X: addresses where you spend each coin
individually, and different addresses where you spend the wallet of
coins with that public key (and value) at once; and that remains the
same even if you use a single key for both.

I think it's slightly more reasonable to worry about signing with NOINPUT
compared to signing with SIGHASH_NONE: you could pretty reasonably setup
your (light) bitcoin wallet to not be able to sign (or verify) with
SIGHASH_NONE ever; but if you want to use lightning v2, it seems pretty
likely your wallet will be signing things with SIGHASH_NOINPUT. From
there, it's a matter of having a bug or a mistake cause you to
cross-contaminate keys into your lightning subsystem, and not be
sufficiently protected by other measures (eg, muSig versus checkmultisig).

(For me the Debian ssh key generation bug from a decade ago is sufficient
evidence that people you'd think are smart and competent do make really
stupid mistakes in real life; so defense in depth here makes sense even
though you'd have to do really stupid things to get a benefit from it)

The other benefit of a separate opcode is support can be soft-forked in
independently of a new segwit version (either earlier or later).

I don't think the code has to be much more complicated with a separate
opcode; passing an extra flag to TransactionSignatureChecker::CheckSig()
is probably close to enough. Some sort of flag remains needed anyway
since v0 and pre-segwit signatures won't support NOINPUT.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lis

Re: [Lightning-dev] [bitcoin-dev] BIP sighash_noinput

2018-05-08 Thread Anthony Towns
On Mon, May 07, 2018 at 09:40:46PM +0200, Christian Decker via bitcoin-dev 
wrote:
> Given the general enthusiasm, and lack of major criticism, for the
> `SIGHASH_NOINPUT` proposal, [...]

So first, I'm not sure if I'm actually criticising or playing devil's
advocate here, but either way I think criticism always helps produce
the best proposal, so

The big concern I have with _NOINPUT is that it has a huge failure
case: if you use the same key for multiple inputs and sign one of them
with _NOINPUT, you've spent all of them. The current proposal kind-of
limits the potential damage by still committing to the prevout amount,
but it still seems a big risk for all the people that reuse addresses,
which seems to be just about everyone.

I wonder if it wouldn't be ... I'm not sure better is the right word,
but perhaps "more realistic" to have _NOINPUT be a flag to a signature
for a hypothetical "OP_CHECK_SIG_FOR_SINGLE_USE_KEY" opcode instead,
so that it's fundamentally not possible to trick someone who regularly
reuses keys to sign something for one input that accidently authorises
spends of other inputs as well.

Is there any reason why an OP_CHECKSIG_1USE (or OP_CHECKMULTISIG_1USE)
wouldn't be equally effective for the forseeable usecases? That would
ensure that a _NOINPUT signature is only ever valid for keys deliberately
intended to be single use, rather than potentially valid for every key.

It would be ~34 witness bytes worse than being able to spend a Schnorr
aggregate key directly, I guess; but that's not worse than the normal
taproot tradeoff: you spend the aggregate key directly in the normal,
cooperative case; and reserve the more expensive/NOINPUT case for the
unusual, uncooperative cases. I believe that works fine for eltoo: in
the cooperative case you just do a SIGHASH_ALL spend of the original
transaction, and _NOINPUT isn't needed.

Maybe a different opcode maybe makes sense at a "philosophical" level:
normal signatures are signing a spend of a particular "coin" (in the
UTXO sense), while _NOINPUT signatures are in some sense signing a spend
of an entire "wallet" (all the coins spendable by a particular key, or
more accurately for the current proposal, all the coins of a particular
value spendable by a particular key). Those are different intentions,
so maybe it's reasonable to encode them in different addresses, which
in turn could be done by having a new opcode for _NOINPUT.

A new opcode has the theoretical advantage that it could be deployed
into the existing segwit v0 address space, rather than waiting for segwit
v1. Not sure that's really meaningful, though.

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] New form of 51% attack via lightning's revocation system possible?

2018-03-13 Thread Anthony Towns
On Tue, Mar 13, 2018 at 06:07:48PM +0100, René Pickhardt via Lightning-dev 
wrote:
> Hey Christian,
> I agree with you on almost anything you said. however I disagree that in the
> lightning case it produces just another double spending. I wish to to 
> emphasize
> on my statement that the in the case with lightning such a 51% attack can 
> steal
> way more BTC than double spending my own funds.

I think you can get a simpler example:

 * I setup a channel, funding it with 10 BTC (ie, balance is 100% on my side)

 * Someone else sets up a channel with me, funding it with 5 BTC
   (balance is 100% on their side)

 * I route 5 BTC to myself from the first channel through the second:
aj -> X -> ... -> victim -> aj
 * I save the state that says I own all 5BTC in the victim <-> aj channel

 * I route 5 BTC to myself from the second channel throught the first:
aj -> victim -> ... -> X -> aj
 * At this point I'm back to having 10 BTC (minus some small amont
   of lightning fees) in the first channel

 * I use 51% hashing power to mine a secret chain that uses the saved
   state to close the victim<->aj channel. Once that chain is long enough
   that I can claim the funds I do so. Once I have claimed the funds on
   my secret chain and the secret chain has more work than the public
   chain, I publish it, causing a reorg.

 * At this point I still have 10 BTC in the original channel, and I have
   the victim's 5 BTC.

I can parallelise this attack as well: before doing any private mining or
closing the victim's channel, I can do the same thing with another victim,
allowing me to collect old states worth many multiples of up to 10 BTC, and
mine them at once, leaving with my original 10BTC minus fees, plus n*10BTC
stolen from victims.

This becomes more threatening if you add in conspiracy theories about
there already being a miner with >51% hashpower, who has financial
interests in seeing lightning fail...

The main limitation is that it still only allows a 51% miner to steal
funds from channels they participate in, so creating channels with
identifiable entities with whom you have an existing relationship (as
opposed to picking random anonymous nodes) is a defense against this
attack. Also, if 51% of hashpower is mining in secret for an extended
period, that may be detectable, which may allow countermeasures to
be taken?

You could also look at this the other way around: at the point when
lightning is widely deployed, this attack vector seems like it gives an
immediate, personal, financial justification for large economic actors
to ensure that hash rate is very decentralised.

> In particular I could run for a decade on stable payment channels
> storing old state and at some point realizing it would be a really big
> opportunity secretly cashing in all those old transactions which can't be
> revoked.

(I'd find it surprising if many channels stayed open for a decade; if
nothing else, I'd expect deflation over that time to cause people to
want to close channels)

Cheers,
aj

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Post-Schnorr lightning txes

2018-02-22 Thread Anthony Towns
On Tue, Feb 20, 2018 at 08:59:07AM +1000, Anthony Towns wrote:
> My understanding of lightning may be out of date, so please forgive
> (or at least correct :) any errors on my behalf.

> I'm not 100% sure how this approach works compared to the current one
> for the CSV/CLTV overlap problem. I think any case you could solve by
> obtaining a HTLC-Timeout or HTLC-Success transaction currently, you could
> solve in the above scenario by just updating the channel state to remove
> the HTLC.

So, I didn't understand the HTLC-Timeout/HTLC-Success transactions (you
don't have to obtain them separately, they're provided along with every
commitment tx), and the current setup works better than what I suggest
unless to_self_delay is very small.

It could be possible to make that a tradeoff: choose a small to_self_delay
because you're confident you'll monitor the chain and quickly penalise any
cheating, with the bonus that that makes monitoring cheaply outsourcable
even for very active channels; or choose a large to_self_delay and have
it cost a bit more to outsource monitoring.

Anyway.

You can redo all the current txes with Schnorr/muSig/scriptless-scripts
fine, I think:

 - funding tx is 2-of-2 muSig

 - the commitment tx I hold has outputs for:
  your balance - payable to A(i)
  my balance - payable to A(i)+R(B,i)
  each in-flight HTLC - payable to A(i)+R(B,i)+X(j)
   where
  A(i) is your pubkey for commitment i
  R(B,i) is my revocation hash for commitment i
  X(j) is a perturbation for the jth HTLC to make it hard to know
which output is a HTLC and which isn't
   spends the funding tx
   locktime and sequence of the funding tx input encode i
   partially signed by you

 - the HTLC-Success/HTLC-Timeout txes need to have two phases, one that
   can immediately demonstrate the relevent condition has been met, and
   a second with a CSV delay to ensure cheating can be penalised.

   so:
 HTLC-Success: pays A(i)+R(B,i)+Y(j), partially signed by you
   with scriptless script requirement of revealing preimage for
   corresponding payment hash
 HTLC-Timeout: pays A(i)+R(B,i)+Y(j), partially signed by you
   with locktime set to enforce timeout

 - you also need a claim transaction for each output you can possibly
   spend:
 Balance-Claim: pays B(i), funded by my balance output, partially
   signed by you, with sequence set to enforce relative timelock of
   to_self_delay
 HTLC-Claim: pays B(i)+Z(j), funded by the j'th
   HTLC-Success/HTLC-Timeout transaction, partially signed by you,
   with sequence set to enforce relative timelock of to_self_delay

   where Y(j) and Z(j) are similar to X(j) and are just to make it hard
   for third parties to tell the relationship between outputs

Each of those partial signatures require me to have sent you a unique ECC
point J, for which I know the corresponding secret. I guess you'd just
need to include those in the revoke_and_ack and update_add_htlc messages.

The drawback with this approach is that to outsource claiming funds
(without covenants or SIGHASH_NOINPUT), you'd need to send signatures
for 2+2N outputs for every channel update, rather than just 1, and the
claiming transactions would be a lot larger.

This retains the advantage that you don't have to store any info about
outdated HTLCs if you're monitoring for misbehaviour yourself; you just
need to send an extra two signatures for every in-flight HTLC for every
channel update if you're outsourcing channel monitoring.

Posting a penalty transaction in this scheme isn't as cheap as just
being 1-in-1-out, but if you're doing it yourself, it's still cheaper
than trying to claim the funds while misbehaving: you can do it all in a
single transaction, and if cross-input signature aggregation is supported,
you can do it all with a single signature; while they will need to supply
at least two separate transactions, and 1+2N signatures.

> If your channel updates 100 times a second for an entire year, that's
> 200GB of data, which seems pretty feasible.

If you update the channel immediately whenever a new HTLC starts or
ends, that's 50 HTLCs per second on average; if they last for 20 seconds
on average, it's 1000 HTLCs at any one time on average, so trustless
outsourcing would require storing about 2000 signatures per update,
which at 64B per signature, is 13MB/second, or about a terabyte per
day. Not so feasible by comparison.

The channel update rate is contributing quadratically to that calculation
though, so reducing the rate of incoming HTLCs to 2 per second on average,
but capping channel updates at 1 per second, gives an average of 40
HTLCs at any one time and 81 signatures per update, for 450MB per day
or 163GB per year, which isn't too bad.

(I guess if you want the priva

Re: [Lightning-dev] Proof of payment (Re: AMP: Atomic Multi-Path Payments over Lightning)

2018-02-21 Thread Anthony Towns
On Tue, Feb 13, 2018 at 09:23:37AM -0500, ZmnSCPxj via Lightning-dev wrote:
> Good morning Corne and Conner,
> Ignoring the practical matters that Corne rightly brings up, I think,
> it is possible to use ZKCP to provide a "stronger" proof-of-payment in
> the sense that Conner is asking for.

I think Schnorr scriptless scripts work for this (assuming HTLC payment
hashes are ECC points rather than SHA256 hashes). In particular:

 - Alice agrees to pay Bob $5 for a coffee.

 - Bob calculates a lightning payment hash preimage r, and payment hash
   R=r*G. Bob also prepares a receipt message, saying "I've been paid $5
   to give Alice a coffee", and calculates a partial Schnorr signature
   of this receipt (n a signature nonce, N=n*G, s=n+H(R+N,B,receipt)*b),
   and sends Alice (R, N, s)

 - Alice verfies the partial signature:
  s*G = N + H(R+N,B,receipt)*B

 - Alice pays over lightning conditional on receiving the preimage r of R.

 - Alice then has a valid signature of the receipt, signed by Bob:
  (R+N, r+s)

The benefit over just getting a hash preimage, is that you can use this to
prove that you paid Bob, rather than Carol or Dave, at some later date,
including to a third party (a small-claims court, tax authorities,
a KYC/AML audit?).

The nice part is you get that just by doing some negotiation at the
start, it's not something the lightning protocol needs to handle at all
(beyond switching to ECC points for payment hashes).

>  Original Message 
>  On February 13, 2018 10:33 AM, Corné Plooy via Lightning-dev 
>  wrote:
> >Hi Conner,
> > I do believe proof of payment is an important feature to have,
> > especially for the use case of a payer/payee pair that doesn't
> > completely trust each other, but does have the possibility to go to court.

Cheers,
aj
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


  1   2   >