Re: [Lightning-dev] [bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2024-01-01 Thread Johan Torås Halseth
>
> > That sounds possible, but how would you deal with the exponential
> > blowup in the number of combinations?
>
> In a taproot world, we would "swallow the bullet" in terms of witness-size
> growth in case of non-cooperative closure.
> I think this is where introducing an accumulator at the Script level to 
> efficiently test partial set membership would make sense.
> Note, exponential blowup is an issue for mass non-coordinated withdrawals of 
> a payment pool too.
>
> Best,
> Antoine
>
>
> On Mon, Dec 11, 2023 at 09:17, Johan Torås Halseth wrote:
>>
>> Hi, Antoine.
>>
>> > The attack works on legacy channels: if the holder (or local) commitment
>> > transaction confirms first, the second-stage HTLC claim transaction is
>> > fully malleable by the counterparty.
>>
>> Yes, correct. Thanks for pointing that out!
>>
>> > I think one of the weaknesses of this approach is the level of 
>> > malleability still left to the counterparty, where one might burn in 
>> > miners fees all the HTLC accumulated value promised to the counterparty, 
>> > and for which the preimages have been revealed off-chain.
>>
>> Is this a concern though, if we assume there's no revoked state that
>> can be broadcast (Eltoo)? Could you share an example of how this would
>> be played out by an attacker?
>>
>> > I wonder if a safer approach, eliminating a lot of competing-interests
>> > style mempool games, wouldn't be to segregate HTLC claims into two
>> > separate outputs, with full replication of the HTLC lockscripts in both
>> > outputs, and let a covenant accept or reject aggregated claims with a
>> > satisfying witness and chain-state condition for the timelock.
>>
>> I'm not sure what you mean here, could you elaborate?
>>
>> > I wonder if in a PTLC world, you can generate an aggregate curve point for
>> > all the plausible sub-combinations of scalars. Unrevealed curve points in a
>> > taproot branch are cheap. It might make claiming an offered HTLC
>> > near-constant size too.
>>
>> That sounds possible, but how would you deal with the exponential
>> blowup in the number of combinations?
>>
>> Cheers,
>> Johan
>>
>>
>> On Tue, Nov 21, 2023 at 3:39 AM Antoine Riard  
>> wrote:
>> >
>> > Hi Johan,
>> >
>> > A few comments.
>> >
>> > ## Transaction recycling
>> > The transaction recycling attack is made possible by the change made
>> > to HTLC second level transactions for the anchor channel type[8];
>> > making it possible to add fees to the transaction by adding inputs
>> > without violating the signature. For the legacy channel type this
>> > attack was not possible, as all fees were taken from the HTLC outputs
>> > themselves, and had to be agreed upon by channel counterparties during
>> > signing (of course this has its own problems, which is why we wanted
>> > to change it).
>> >
>> > The attack works on legacy channels: if the holder (or local) commitment
>> > transaction confirms first, the second-stage HTLC claim transaction is
>> > fully malleable by the counterparty.
>> >
>> > See 
>> > https://github.com/lightning/bolts/blob/master/03-transactions.md#offered-htlc-outputs
>> >  (only remote_htlcpubkey required)
>> >
>> > Note a replacement cycling attack works in a future package-relay world 
>> > too.
>> >
>> > See test: 
>> > https://github.com/ariard/bitcoin/commit/19d61fa8cf22a5050b51c4005603f43d72f1efcf
>> >
>> > > The idea of HTLC output aggregation is to collapse all HTLC outputs on
>> > > the commitment to a single one. This has many benefits (that I’ll get
>> > > to), one of them being the possibility to let the spender claim the
>> > > portion of the output that they have a right to, deciding how much should
>> > > go to fees. Note that this requires a covenant to be possible.
>> >
>> > Another advantage of HTLC output aggregation is the reduction of
>> > fee-bumping reserve requirements on channel counterparties, as
>> > second-stage HTLC transactions' common fields (nVersion, nLocktime,
>> > ...) *could* be shared.
>> >
>> > > ## A single HTLC output
>> > > Today, every forwarded HTLC results in an output that needs to be
>> > > manifested on the commitment transaction in order to claw back money
>> > > in case of an uncooperative channel 

Re: [Lightning-dev] [bitcoin-dev] HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-12-11 Thread Johan Torås Halseth
> I wonder if in a PTLC world, you can generate an aggregate curve point for
> all the plausible sub-combinations of scalars. Unrevealed curve points in a
> taproot branch are cheap. It might make claiming an offered HTLC near-constant
> size too.
>
> > ## The bad news
> > The most obvious problem is that we would need a new covenant
> > primitive on L1 (see below). However, I think it could be beneficial
> > to start exploring these ideas now in order to guide the L1 effort
> > towards something we could utilize to its fullest on L2.
>
> > As mentioned, even with a functioning covenant, we don’t escape the
> > fact that a preimage needs to go on-chain, pricing out HTLCs at
> > certain fee rates. This is analogous to the dust exposure problem
> > discussed in [6], and makes some sort of limit still required.
>
> Ideally such covenant mechanisms would generalize to the withdrawal phase of 
> payment pools, where dozens or hundreds of participants wish to confirm their 
> non-competing withdrawal transactions concurrently. While unlocking preimages
> or scalars can be aggregated in a single witness, there will still be a need
> to verify that each withdrawal output associated with an unlocking secret is 
> present in the transaction.
>
> Maybe a few other L2s are facing this N-inputs-to-M-outputs pattern, with
> advanced locking-script conditions to satisfy.
>
> > ### Open question
> > With PTLCs, could one create a compact proof showing that you know the
> > preimage for m-of-n of the satoshis in the output? (some sort of
> > threshold signature).
>
> > If we could do this we would be able to remove the slot jamming issue
> > entirely; any number of active PTLCs would not change the on-chain
> > cost of claiming them.
>
> See comments above; I think there is a plausible scheme here: you just
> generate all the possible point combinations, and only reveal the one you
> need at broadcast.
>
> > ## Covenant primitives
> > A recursive covenant is needed to achieve this. Something like OP_CTV
> > and OP_APO seems insufficient, since the number of ways the set of
> > HTLCs could be claimed would cause combinatorial blowup in the number
> > of possible spending transactions.
>
> > Personally, I’ve found the simple yet powerful properties of
> > OP_CHECKCONTRACTVERIFY [4] together with OP_CAT and amount inspection
> > particularly interesting for the use case, but I’m certain many of the
> > other proposals could achieve the same thing. More direct inspection
> > like you get from a proposal like OP_TX[9] would also most likely have
> > the building blocks needed.
>
> As pointed out during the CTV drama and payment pool public discussion years 
> ago, what would be very useful to tie-break among all covenant constructions 
> would be an efficiency simulation framework. Even if the same semantics can be
> achieved independently by multiple covenants, they certainly do not have the
> same performance trade-offs (e.g. average and worst-case witness size).
>
> I don't think the blind approach of activating many complex covenants at the
> same time is conservative enough for Bitcoin, where one might design
> "malicious" L2 contracts whose game theory is not fully understood.
>
> See e.g https://blog.bitmex.com/txwithhold-smart-contracts/
>
> > ### Proof-of-concept
> > I’ve implemented a rough demo** of spending an HTLC output that pays
> > to a script with OP_CHECKCONTRACTVERIFY to achieve this [5]. The idea
> > is to commit to all active HTLCs in a merkle tree, and have the
> > spender provide merkle proofs for the HTLCs to claim, claiming the sum
> > into a new output. The remainder goes back into a new output with the
> > claimed HTLCs removed from the merkle tree.
>
> > An interesting trick one can do when creating the merkle tree is
> > sorting the HTLCs by expiry. This means that one can, in the timeout
> > case, claim a subtree of HTLCs using a single merkle proof (and RBF this
> > batched timeout claim as more and more HTLCs expire), reducing the
> > timeout case to a constant-size witness (or rather logarithmic in the
> > total number of HTLCs).
>
> > **Consider it an experiment, as it is missing a lot before it could be
> > usable in any real commitment setting.
>
> I think it is an interesting question whether more advanced cryptosystems,
> based on assumptions other than the DL problem, could scale LN payment
> throughput by orders of magnitude, by decoupling the number of off-chain
> payments from the growth of the on-chain witness size needed to claim them,
> without lowering security as with trimmed HTLC
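
A minimal sketch of the expiry-sorted merkle tree idea quoted above (a plain SHA-256 pairwise tree with duplicate-last padding; the leaf encoding and helper names are hypothetical, not taken from the proof-of-concept):

```python
from hashlib import sha256

def htlc_leaf(amount_sat: int, payment_hash: bytes, expiry_height: int) -> bytes:
    # Hypothetical leaf encoding committing to expiry, amount and payment hash.
    # Expiry comes first so that leaves sorted lexicographically are also
    # sorted by expiry.
    return sha256(expiry_height.to_bytes(4, "big")
                  + amount_sat.to_bytes(8, "big")
                  + payment_hash).digest()

def merkle_root(leaves: list) -> bytes:
    # Simple pairwise SHA-256 merkle tree, duplicating the last node on odd
    # levels. With leaves sorted by expiry, all timed-out HTLCs occupy a
    # contiguous prefix, so a batched timeout claim can cover them with a
    # single logarithmic-size proof for the enclosing subtree.
    if not leaves:
        return sha256(b"").digest()
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

The tree shape here is the textbook construction; a real commitment scheme would need domain-separated hashing and a proof-verification opcode on the covenant side.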

[Lightning-dev] Fwd: HTLC output aggregation as a mitigation for tx recycling, jamming, and on-chain efficiency (covenants)

2023-10-27 Thread Johan Torås Halseth
Cross-posting this to the lightning-dev list, as was my intended destination.

-- Forwarded message -
From: Johan Torås Halseth 
Date: Thu, Oct 26, 2023 at 12:52 PM
Subject: HTLC output aggregation as a mitigation for tx recycling,
jamming, and on-chain efficiency (covenants)
To: Bitcoin Protocol Discussion 


Hi all,

After the transaction recycling attack spurred some discussion over the last
week or so, I figured it could be worth sharing some research I’ve
done into HTLC output aggregation, as it could be relevant for how to
avoid this problem in a future channel type.

TLDR; With the right covenant we can create HTLC outputs that are much
more chain-efficient, not prone to tx recycling, and harder to jam.

## Transaction recycling
The transaction recycling attack is made possible by the change made
to HTLC second level transactions for the anchor channel type[8];
making it possible to add fees to the transaction by adding inputs
without violating the signature. For the legacy channel type this
attack was not possible, as all fees were taken from the HTLC outputs
themselves, and had to be agreed upon by channel counterparties during
signing (of course this has its own problems, which is why we wanted
to change it).

The idea of HTLC output aggregation is to collapse all HTLC outputs on
the commitment to a single one. This has many benefits (that I’ll get
to), one of them being the possibility to let the spender claim the
portion of the output that they have a right to, deciding how much should
go to fees. Note that this requires a covenant to be possible.
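
As a toy illustration of the amount bookkeeping this implies (hypothetical names; scripts and the covenant check itself are out of scope here):

```python
def split_aggregate_claim(total_sat: int, claim_sat: int, fee_sat: int):
    """Bookkeeping for spending the aggregated HTLC output: the spender
    takes the portion they can prove a right to, chooses how much of it
    goes to fees, and the remainder returns to a covenant-restricted
    output holding the still-unclaimed HTLCs."""
    assert 0 <= fee_sat <= claim_sat <= total_sat
    to_spender = claim_sat - fee_sat       # claimed value minus chosen fee
    to_covenant = total_sat - claim_sat    # leftover value, re-encumbered
    return to_spender, to_covenant
```

For example, claiming 4,000 sat out of a 10,000 sat aggregate and paying 500 sat in fees leaves 3,500 sat to the spender and 6,000 sat back under the covenant.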

## A single HTLC output
Today, every forwarded HTLC results in an output that needs to be
manifested on the commitment transaction in order to claw back money
in case of an uncooperative channel counterparty. This puts a limit on
the number of active HTLCs (in order for the commitment transaction to
not become too large) which makes it possible to jam the channel with
small amounts of capital [1]. It also turns out that having this limit
be large makes it expensive and complicated to sweep the outputs
efficiently [2].

Instead of having new HTLC outputs manifest for each active
forwarding, with covenants on the base layer one could create a single
aggregated output on the commitment, the output amount being the sum
of the active HTLCs (offered and received); alternatively, one output
for received and one for offered. When spending this output, you would
only be entitled to the fraction of the amount corresponding to the
HTLCs you know the preimage for (received), or that have timed out
(offered).
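
A minimal sketch of that entitlement rule, under the assumption of one aggregated output per direction (struct and function names are hypothetical):

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Htlc:
    amount_sat: int
    payment_hash: bytes   # sha256(preimage)
    expiry_height: int

def claimable_amount(htlcs, preimages, chain_height, offered):
    """Sum the HTLC value a spender is entitled to claim from the
    aggregated output: received HTLCs whose preimage is known, or
    offered HTLCs that have timed out."""
    known = {sha256(p).digest() for p in preimages}
    total = 0
    for h in htlcs:
        if not offered and h.payment_hash in known:
            total += h.amount_sat          # success path: preimage revealed
        elif offered and chain_height >= h.expiry_height:
            total += h.amount_sat          # timeout path: HTLC expired
    return total
```

The covenant would have to enforce exactly this sum on the spending transaction's outputs; the sketch only shows the arithmetic the spender performs.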

## Impacts to transaction recycling
Depending on the capabilities of the covenant available (e.g.
restricting the number of inputs to the transaction) the transaction
spending the aggregated HTLC output can be made self-sustained: the
spender will be able to claim what is theirs (preimage or timeout) and
send it to whatever output they want, or to fees. The remainder will
go back into a covenant restricted output with the leftover HTLCs.
Note that this most likely requires Eltoo in order to not enable fee
siphoning[7].

## Impacts to slot jamming
With the aggregated output being a reality, it changes the nature of
“slot jamming” [1] significantly. While channel capacity must still be
reserved for in-flight HTLCs, one no longer needs to allocate a
commitment output for each one, up to some hardcoded limit.

In today’s protocol this limit is 483, and I believe most
implementations default to an even lower limit. This leads to channel
jamming being quite inexpensive, as one can quickly fill a channel
with small HTLCs, without needing a significant amount of capital to
do so.

The origin of the 483 slot limit is the worst-case commitment size
before getting into non-standard territory [3]. With an aggregated
output this would no longer be the case, as adding HTLCs would no
longer affect commitment size. Instead, the full on-chain footprint of
an HTLC would be deferred until claim time.

Does this mean one could lift, or even remove, the limit on the number of
active HTLCs? Unfortunately, the obvious approach doesn’t seem to get
rid of the problem entirely, but mitigates it quite a bit.

### Slot jamming attack scenario
Consider the scenario where an attacker sends a large number of
non-dust* HTLCs across a channel, and the channel parties enforce no
limit on the number of active HTLCs.

The number of payments would not affect the size of the commitment
transaction at all, only the size of the witness that must be
presented when claiming or timing out the HTLCs. This means that there
is still a point at which chain fees get high enough for the HTLC to
be uneconomical to claim. This is no different than in today’s spec,
and such HTLCs will just be stranded on-chain until chain fees
decrease, at which point there is a race between the success and
timeout spends.

There seems to be no way around this; if you want to claim an HTLC
on-chain, you need to put the preimage on-chain.

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-11-07 Thread Johan Torås Halseth
Hi Laolu,

Yeah, that is definitely the main downside, as Ruben also mentioned:
tokens are "burned" if they get sent to an already spent UTXO, and
there is no way to block those transfers.

And I do agree with your concern about losing the blockchain as the
main synchronization point, that seems indeed to be a prerequisite for
making the scheme safe in terms of re-orgs and asynchronicity.

I do think the scheme itself is sound though (maybe not off-chain, see
below): it prevents double spending and as long as the clients adhere
to the "rule" of not sending to a spent UTXO you'll be fine (if not
your tokens will be burned, the same way as if you don't satisfy the
Taro script when spending).

Thinking more about the examples you gave, I think you are right it
won't easily be compatible with LN channels though:
If you want to refill an existing channel with tokens, you need the
channel counterparties to start signing new commitments that include
spending the newly sent tokens. A problem arises however, if the
channel is force-closed with a pre-existing commitment from before the
token transfer took place. Since this commitment will be spending the
funding UTXO, but not the new tokens, the tokens will be burned. And
that seems to be harder to deal with (Eltoo style channels could be an
avenue to explore, if one could override the broadcasted commitment).

Tl;dr: I think you're right, the scheme is not compatible with LN.

- Johan


On Sat, Nov 5, 2022 at 1:36 AM Olaoluwa Osuntokun  wrote:
>
> Hi Johan,
>
> I haven't really been able to find a precise technical explanation of the
> "utxo teleport" scheme, but after thinking about your example use cases a
> bit, I don't think the scheme is actually sound. Consider that the scheme
> attempts to target transmitting "ownership" to a UTXO. However, by the time
> that transaction hits the chain, the UTXO may no longer exist. At that
> point, what happens to the asset? Is it burned? Can you retry it again? Does
> it go back to the sender?
>
> As a concrete example, imagine I have a channel open, and give you an
> address to "teleport" some additional assets to it. You take that addr, then
> make a transaction to commit to the transfer. However, the block before you
> commit to the transfer, my channel closes for w/e reason. As a result, when
> the transaction committing to the UTXO (blinded or not), hits the chain, the
> UTXO no longer exists. Alternatively, imagine the things happen in the
> expected order, but then a re-org occurs, and my channel close is mined in a
> block before the transfer. Ultimately, as a normal Bitcoin transaction isn't
> used as a serialization point, the scheme seems to lack a necessary total
> ordering to ensure safety.
>
> If we look at Taro's state transition model in contrast, everything is fully
> bound to a single synchronization point: a normal Bitcoin transaction with
> inputs consumed and outputs created. All transfers, just like Bitcoin
> transactions, end up consuming assets from the set of inputs, and
> re-creating them with a different distribution with the set of outputs. As a
> result, Taro transfers inherit the same re-org safety traits as regular
> Bitcoin transactions. It also isn't possible to send to something that won't
> ultimately exist, as sends create new outputs just like Bitcoin
> transactions.
>
> Taro's state transition model also means anything you can do today with
> Bitcoin/LN also apply. As an example, it would be possible for you to
> withdrawn from your exchange into a Loop In address (on chain to off chain
> swap), and have everything work as expected, with you topping off your
> channel. Stuff like splicing, and other interactive transaction construction
> schemes (atomic swaps, MIMO swaps, on chain auctions, etc) also just work.
>
> Ignoring the ordering issue I mentioned above, I don't think this is a great
> model for anchoring assets in channels either. With Taro, when you make the
> channel, you know how many assets are committed since they're all committed
> to in the funding output when the channel is created. However, let's say we
> do teleporting instead: at which point would we recognize the new asset
> "deposits"? What if we close before a pending deposits confirms, how can one
> regain those funds? Once again you lose the serialization of events/actions
> the blockchain provides. I think you'd also run into similar issues when you
> start to think about how these would even be advertised on a hypothetical
> gossip network.
>
> I think one other drawback of the teleport model iiuc is that: it either
> requires an OP_RETURN, or additional out of band synchronization to complete
> the transfer. Since it needs to commit to w/e hash description of the
> teleport, it either needs to use an OP_RETURN (so the receiver can see the
> on chain action), or the sender needs to contact the receiver to initiate
> the resolution of the transfer (details committed to in a change addr or
> w/e).
>
> With 

Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay

2022-11-03 Thread Johan Torås Halseth
Hi,

I wanted to chime in on the "teleport" feature explained by Ruben, as I
think exploring something similar for Taro could be super useful in an LN
setting.

In today's Taro, to transfer tokens you have to spend a UTXO, and present a
proof showing that there are tokens committed to in the output you are
spending. Let's say this UTXO is 'utxo:0'.

In contrast, to spend teleported tokens, you would still spend utxo:0, but
you would only have to present a proof that _some txout_ on-chain has
committed tokens to utxo:0.

As Ruben points out, this makes it possible to send tokens to an already
spent TXO, essentially burning the tokens.

However, it opens up some exciting possibilities IMO. You can in essence
use this to "re-fill" UTXOs with tokens, which is very interesting for LN
channels:

- You could "add" tokens to your already open channels. The only thing
needed is for the channel participants to be presented with the proof that
tokens were sent to the funding output, and they can update their
commitment transaction to start spending these tokens.
- You can "top-up" all your channels in a single on-chain tx. Since a
single output can commit tokens to several UTXOs, you could with a single
on-chain transaction add tokens to many channels without opening and
closing them.

RGB also has the ability to "blind" the UTXO that tokens get teleported to,
hiding the recipient UTXO. This is cool, since I could withdraw tokens from
an exchange directly into my LN channel, without revealing my channel UTXO.

I found the explanation of the teleport feature in this blog post pretty
good:
https://medium.com/@FedericoTenga/understanding-rgb-protocol-7dc7819d3059

- Johan

On Sun, Apr 10, 2022 at 6:52 PM Ruben Somsen  wrote:

> Hi Laolu,
>
> >happy to hear that someone was actually able to extract enough details
> from the RGB devs/docs to be able to analyze it properly
>
> Actually, even though I eventually puzzled everything together, this did
> not go well for me either. There is a ton of documentation, but it's a maze
> of unhelpful details, and none of it clearly maps out the fundamental
> design. I was also disappointed by the poor response I received when asking
> questions, and I ended up getting chastised for helping others understand
> it and pointing out potential flaws[1][2][3]. Given my experience, I think
> the project is not in great shape, so the decision to rebuild from scratch
> seems right to me.
>
> That said, in my opinion the above should not factor into the decision of
> whether RGB should be credited in the Taro documentation. The design
> clearly precedes (and seems to have inspired) Taro, so in my opinion this
> should be acknowledged. Also, the people that are responsible for the
> current shape of RGB aren't the people who originated the idea, so it would
> not be fair to the originators either (Peter Todd, Alekos Filini, Giacomo
> Zucco).
>
> >assets can be burnt if a user doesn't supply a valid witness
>
> I am in agreement with what you said, but it is not clear to me whether we
> are on the same page. What I tried to say was that it does not make sense
> to build scripting support into Taro, because you can't actually do
> anything interesting with it due to this limitation. The only type of smart
> contract you can build is one where you limit what the owner (as defined by
> Bitcoin's script) can do with their own Taro tokens, or else he will burn
> them – not very useful. Anything involving a conditional transfer of
> ownership to either A or B (i.e. any meaningful type of script) won't work.
> Do you see what I mean, or should I elaborate further?
>
> >TAPLEAF_UPDATE_VERIFY can actually be used to further _bind_ Taro transitions
> at the Bitcoin level, without Bitcoin explicitly needing to be aware
>
> That is conceptually quite interesting. So theoretically you could get
> Bitcoin covenants to enforce certain spending conditions on Taro assets.
> Not sure how practical that ends up being, but intriguing to consider.
>
> >asset issuer to do a "re-genesis"
>
> Yes, RGB suggested the same thing, and this can work under some
> circumstances, but note that this won't help for tokens that aim to have a
> publicly audited supply, as the proof that a token was legitimately
> re-issued is the history of the previous token (so you'd actually be making
> things worse, as now everyone has to verify it). And of course the idea
> also requires the issuer to be active, which may not always be the case.
>
> >I'm not familiar with how the RGB "teleport" technique works [...] Can
> you point me to a coherent explanation of the technique
>
> To my knowledge no good explanation exists. "Teleporting" is just what I
> thought was a good way of describing it. Basically, in your design when
> Alice wants to send a Taro token to Bob, Alice has to spend her own output,
> make a new output for Bob, and make a change output for herself. Inside the
> Taro tree you'll then point to the index of Bob's output in order 

Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-10-28 Thread Johan Torås Halseth
Hi, Matt.

You're correct, I made the suggestion mainly because it would open up for
PTLCs (or other future features) in today's channels.

Having the existing network close and reopen new channels would really slow
the adoption of new channel features I reckon.

And I don't think it adds much complexity compared to the adapter approach.

- Johan

On Thu, Oct 27, 2022 at 4:54 PM Matt Corallo 
wrote:

> I’m not sure I understand this - is there much reason to want taproot
> commitment outputs? I mean they’re cool, and witnesses are a bit smaller,
> which is nice I guess, but they’re not providing materially new features,
> AFAIU. Taproot funding, on the other hand, provides a Bitcoin-wide privacy
> improvement as well the potential future ability of channel participants to
> use multisig for their own channel funds transparently.
>
> Sure, if we’re doing taproot funding outputs we should probably just do it
> for the commitment outputs as well, because why not (and it’s a prereq for
> PTLCs). But trying to split them up seems like added complexity “just
> because”? I suppose it tees us up for eventual PTLC support in todays
> channels, but we can also consider that separately when we get to that
> point, IMO.
>
> Am I missing some important utility of taproot commitment transaction
> outputs?
>
> Matt
>
> On Oct 27, 2022, at 02:17, Johan Torås Halseth  wrote:
>
> 
> Hi, Laolu.
>
> I think it could be worth considering dividing the taprootyness of a
> channel into two:
> 1) taproot funding output
> 2) taproot commitment outputs
>
> That way we could upgrade existing channels only on the commitment level,
> not needing to close or re-anchor the channels using an adapter in order to
> get many of the taproot benefits.
>
> New channels would use taproot multisig (musig2) for the funding output.
>
> This seems to be less disruptive to the existing network, and we could get
> features enabled by taproot to larger parts of the network quicker. And to
> me this seems to carry less complexity (and closing fees) than an adapter.
>
> One caveat is that this wouldn't work (I think) for Eltoo channels, as the
> funding output would not be plain multisig anymore.
>
> - Johan
>
> On Sat, Mar 26, 2022 at 1:27 AM Antoine Riard 
> wrote:
>
>> Hi Laolu,
>>
>> Thanks for the proposal, quick feedback.
>>
>> > It *is* still the case that _ultimately_ the two transactions to close
>> the
>> > old segwit v0 funding output, and re-open the channel with a new segwit
>> v1
>> > funding output are unavoidable. However this adapter commitment lets
>> peers
>> > _defer_ these two transactions until closing time.
>>
>> I think there is one downside coming with adapter commitment, which is
>> the uncertainty of the fee overhead at the closing time. Instead of closing
>> your segwit v0 channel _now_ with known fees, when your commitment is empty
>> of time-sensitive HTLCs, you're taking the risk of closing during fee
>> spikes, due to a move triggered by your counterparty, when you might have
>> HTLCs at stake.
>>
>> It might be more economically rational for a LN node operator to pay the
>> upgrade cost now if they wish to benefit from the taproot upgrade early,
>> especially if long-term we expect block fees to increase, or wait when
>> there is a "normal" cooperative closing.
>>
>> So it's unclear to me what the economic gain of adapter commitments is?
>>
>> > In the remainder of this mail, I'll describe an alternative
>> > approach that would allow upgrading nearly all channel/commitment
>> related
>> > values (dust limit, max in flight, etc), which is inspired by the way
>> the
>> > Raft consensus protocol handles configuration/member changes.
>>
>> Long-term, I think we'll likely need a consensus protocol anyway for
>> multi-party constructions (channel factories/payment pools). AFAIU this
>> proposal doesn't aim to roll out a full-fledged consensus protocol *now*
>> though it could be wise to ensure what we're building slowly moves in this
>> direction. Less critical code to maintain across bitcoin
>> codebases/toolchains.
>>
>> > The role of the signature is to prevent "spoofing" by one of the parties
>> > (authenticate the param change), and also it serves to convince a party
>> that
>> > they actually sent a prior commitment propose update during the
>> > retransmission phase.
>>
>> What's the purpose of data origin authentication if we assume only
>> two-parties running over Noise_XK ?
>>
>> I think it's already a security property we have.

Re: [Lightning-dev] Dynamic Commitments Part 2: Taprooty Edition

2022-10-27 Thread Johan Torås Halseth
Hi, Laolu.

I think it could be worth considering dividing the taprootyness of a
channel into two:
1) taproot funding output
2) taproot commitment outputs

That way we could upgrade existing channels only on the commitment level,
not needing to close or re-anchor the channels using an adapter in order to
get many of the taproot benefits.

New channels would use taproot multisig (musig2) for the funding output.

This seems to be less disruptive to the existing network, and we could get
features enabled by taproot to larger parts of the network quicker. And to
me this seems to carry less complexity (and closing fees) than an adapter.

One caveat is that this wouldn't work (I think) for Eltoo channels, as the
funding output would not be plain multisig anymore.

- Johan

On Sat, Mar 26, 2022 at 1:27 AM Antoine Riard 
wrote:

> Hi Laolu,
>
> Thanks for the proposal, quick feedback.
>
> > It *is* still the case that _ultimately_ the two transactions to close
> the
> > old segwit v0 funding output, and re-open the channel with a new segwit
> v1
> > funding output are unavoidable. However this adapter commitment lets
> peers
> > _defer_ these two transactions until closing time.
>
> I think there is one downside coming with adapter commitment, which is the
> uncertainty of the fee overhead at the closing time. Instead of closing
> your segwit v0 channel _now_ with known fees, when your commitment is empty
> of time-sensitive HTLCs, you're taking the risk of closing during fees
> spikes, due a move triggered by your counterparty, when you might have
> HTLCs at stake.
>
> It might be more economically rational for a LN node operator to pay the
> upgrade cost now if they wish to benefit from the taproot upgrade early,
> especially if long-term we expect block fees to increase, or wait when
> there is a "normal" cooperative closing.
>
> So it's unclear to me what the economic gain of adapter commitments is?
>
> > In the remainder of this mail, I'll describe an alternative
> > approach that would allow upgrading nearly all channel/commitment related
> > values (dust limit, max in flight, etc), which is inspired by the way the
> > Raft consensus protocol handles configuration/member changes.
>
> Long-term, I think we'll likely need a consensus protocol anyway for
> multi-party constructions (channel factories/payment pools). AFAIU this
> proposal doesn't aim to roll out a full-fledged consensus protocol *now*
> though it could be wise to ensure what we're building slowly moves in this
> direction. Less critical code to maintain across bitcoin
> codebases/toolchains.
>
> > The role of the signature is to prevent "spoofing" by one of the parties
> > (authenticate the param change), and also it serves to convince a party
> that
> > they actually sent a prior commitment propose update during the
> > retransmission phase.
>
> What's the purpose of data origin authentication if we assume only
> two parties running over Noise_XK?
>
> I think it's already a security property we have. Though if we think we're
> going to reuse these dynamic upgrades for N counterparties communicating
> through a coordinator, yes I think it's useful.
>
> > In the past, when ideas like this were brought up, some were concerned
> that
> > it wouldn't really be possible to do this type of updates while existing
> > HTLCs were in flight (hence some of the ideas to clear out the commitment
> > beforehand).
>
> The dynamic upgrade might serve in an emergency context where we don't
> have the leisure to wait for the settlement of the pending HTLCs. The
> timing of those might be beyond the coordination of link
> counterparties. Thus, we have to allow upgrade of non-empty commitments
> (and if there are undesirable interferences between new commitment types
> and HTLCs/PTLCs present, deal case-by-case).
>
> Antoine
>
> On Thu, Mar 24, 2022 at 6:53 PM Olaoluwa Osuntokun
> wrote:
>
>> Hi y'all,
>>
>> ## Dynamic Commitments Retrospective
>>
>> Two years-ish ago I made a mailing list post on some ideas re dynamic
>> commitments [1], and how the concept can be used to allow us to upgrade
>> channel types on the fly, and also remove pesky hard coded limits like the
>> 483 HTLC in-flight limit that's present today. Back then my main target
>> was
>> upgrading all the existing channels over to the anchor output commitment
>> variant, so the core internal routing network would be more resilient in a
>> persistent high fee environment (which hasn't really happened over the
>> past
>> 2 years for various reasons tbh). Fast forward to today, and with taproot
>> now active on mainnet, and some initial design work/sketches for
>> taproot-native channels underway, I figure it would be good to bump this
>> concept as it gives us a way to upgrade all 80k+ public channels to
>> taproot
>> without any on chain transactions.
>>
>> ## Updating Across Witness Versions w/ Adapter Commitments
>>
>> In my original mail, I incorrectly concluded that the dynamic commitments

Re: [Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-11-23 Thread Johan Torås Halseth
Hi,

Posting an update to this thread, as we are inching closer to an
implementation in lnd that handles this scenario.

I put up a proposed PR today that attempts to solve this in a
backwards compatible manner:
https://github.com/lightningnetwork/lnd/pull/4795

The gist is that in every state we check that the "worst case fee
leak" is at most the channel reserve. The idea is that there should be
no incentive to perform the attack as described, as the cheating party
will gain at most the channel reserve, but at the same time lose its
channel reserve.

Since this makes small channels unusable at high fee rates (the leaked
fee would exceed the channel reserve for just a few, even a single
HTLC) we also clamp the maximum update_fee we'll send at 10 sat/b (a
configurable value). As an example a 1,000,000 sat channel with a 1%
channel reserve would have space for 6 HTLCs at this fee rate.
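The clamp described above can be sketched as follows. This is an illustrative model only, not lnd's actual implementation: it assumes (hypothetically) a second-stage HTLC transaction of roughly 166 vbytes, so at `feerate` sat/vbyte an attacker can burn about `166 * feerate` sats of promised HTLC value per pending HTLC.

```python
# Sketch (not lnd's code): accept a state only if the worst-case fee
# leak stays within the channel reserve, and derive how many HTLCs a
# channel can carry at a clamped update_fee rate.

HTLC_TX_VBYTES = 166  # assumed vsize of a second-stage HTLC transaction


def worst_case_fee_leak(num_htlcs: int, feerate_sat_vb: int) -> int:
    """Upper bound on sats an attacker can divert to miner fees."""
    return num_htlcs * HTLC_TX_VBYTES * feerate_sat_vb


def max_htlcs(channel_reserve_sat: int, feerate_sat_vb: int) -> int:
    """Largest HTLC count whose worst-case leak the reserve still covers."""
    return channel_reserve_sat // (HTLC_TX_VBYTES * feerate_sat_vb)


# Example from the text: 1,000,000 sat channel, 1% reserve, 10 sat/vbyte.
reserve = 1_000_000 // 100
print(max_htlcs(reserve, 10))                  # -> 6
print(worst_case_fee_leak(6, 10) <= reserve)   # -> True
```

Under these assumed numbers a seventh HTLC would push the worst-case leak past the 10,000 sat reserve, matching the "space for 6 HTLCs" figure above.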

> Completely adhering to the bring-your-own-fee model for HTLC-txn sounds 
> better as it splits the fee burden more fairly between channel participants.

I totally agree that this sounds like the best solution! AFAICT this
would require a (simple) spec change, but would definitely be a big
win and a simple implementation change when we are doing BYOF on the
HTLC transactions anyway :)

- Johan

On Mon, Sep 14, 2020 at 1:30 AM Antoine Riard  wrote:
>
> Hi Johan,
>
> > I would be open to patching the spec to disallow update_fee for anchor
> > channels, but maybe we can just add a warning and discourage it.
>
> My initial thinking was just to restrain it for the commitment-level only.
>
> Completely adhering to the bring-your-own-fee model for HTLC-txn sounds 
> better as it splits the fee burden more fairly between channel participants. The 
> initiator won't have to pay for the remote's HTLC-txn, especially in periods 
> of high-congestion. A participant shouldn't have to bear the cost of the 
> counterparty choosing to go onchain, as it's mostly a client security 
> parameter ("how many blocks will it take me to confirm?") or an economic 
> decision ("is this HTLC worth claiming/expiring?").
>
> One could argue it's increasing the blockspace footprint as you will use one 
> more pair of input-output but if you're paying the feerate that's lawful 
> usage.
>
> Antoine
>
> On Fri, Sep 11, 2020 at 4:15 AM Johan Torås Halseth
> wrote:
>>
>> Hi,
>>
Very good observation, most definitely not a type of attack I had foreseen!
>>
>> Luckily, it was the plan to phase out update_fee all along, in favor
>> of only accepting the minimum relay fee (zero fee if/when package
>> relay is a reality). If I understand the scenario correctly, that
>> should mitigate this attack completely, as the attacker cannot impact
>> the intended miner fees on the HTLCs, and could only siphon off the
>> minimal miner fee if anything at all.
>>
>> I would be open to patching the spec to disallow update_fee for anchor
>> channels, but maybe we can just add a warning and discourage it.
>>
>> Johan
>>
>>
>> On Thu, Sep 10, 2020 at 8:13 PM Olaoluwa Osuntokun  wrote:
>> >
>> > Hi Antoine,
>> >
>> > Great findings!
>> >
>> > I think an even simpler mitigation is just for the non-initiator to 
>> > _reject_
>> > update_fee proposals that are "unreasonable". The non-initiator can run a
>> > "fee leak calculation" to compute the worst-case leakage of fees in the
>> > revocation case. This can be done today without any significant updates to
>> > implementations, and some implementations may already be doing this.
>> >
>> > One issue is that we don't have a way to do a "soft reject" of an 
>> > update_fee
>> > as is. However, depending on the implementations, it may be possible to 
>> > just
>> > reconnect and issue a co-op close if there're no HTLCs on the commitment
>> > transaction.
>> >
>> > As you mentioned by setting proper values for max allowed htlcs, max in
>> > flight, reserve, etc, nodes are able to quantify this fee leak risk ahead 
>> > of
>> > time, and set reasonable parameters based on their security model. One 
>> > issue
>> > is that these values are set in stone rn when the channel is opened, but
>> > future iterations of dynamic commitments may allow us to update them on the
>> > fly.
>> >
>> > In the mid-term, implementations can start to phase out usage of update_fee
>> > by setting a minimal commitment fee when the channel is first opened, then
>> > relying on CPFP to bump up the commitment and any HTLCs if needed. Th

Re: [Lightning-dev] Why should funders always pay on-chain fees?

2020-10-16 Thread Johan Torås Halseth
Many good thoughts here.

Personally I think we should design any changes for a package-relay
future, where the commitment can be 0-fee, update_fee no longer
exists, and fees are only decided upon at channel close.

- johan


On Wed, Oct 14, 2020 at 10:25 AM Bastien TEINTURIER via Lightning-dev
 wrote:
>
> I totally agree with the simplicity argument, I wanted to raise this because 
> it's (IMO) an issue
> today because of the way we deal with on-chain fees, but it's less impactful 
> once update_fee is
> scoped to some min_relay_fee.
>
> Let's put this aside for now then and we can revisit later if needed.
>
> Thanks for the feedback everyone!
> Bastien
>
> On Mon, Oct 12, 2020 at 8:49 PM Olaoluwa Osuntokun wrote:
>>
>> > It seems to me that the "funder pays all the commit tx fees" rule exists
>> > solely for simplicity (which was totally reasonable).
>>
>> At this stage, I've learned that simplicity (when doing anything that
>> involves multi-party on-chain fee negotiation/verification/enforcement)
>> can really go a long way. Just think about all the edge cases w.r.t _allocating
>> enough funds to pay for fees_ we've discovered over the past few years in
>> the state machine. I fear adding a more elaborate fee splitting mechanism
>> would only blow up the number of obscure edge cases that may lead to a
>> channel temporarily or permanently being "borked".
>>
>> If we're going to add a "fairer" way of splitting fees, we'll really need to
>> dig down pre-deployment to ensure that we've explored any resulting edge
>> cases within our solution space, as we'll only be _adding_ complexity to fee
>> splitting.
>>
>> IMO, anchor commitments in their "final form" (fixed fee rate on commitment
>> transaction, only "emergency" use of update_fee) significantly simplifies
>> things as it shifts from "funder pays fees" to "broadcaster/confirmer pays
>> fees". However, as you note this doesn't fully distribute the worst-case
>> cost of needing to go to chain with a "fully loaded" commitment transaction.
>> Even with HTLCs, they could only be signed at 1 sat/byte from the funder's
>> perspective, once again putting the burden on the broadcaster/confirmer to
>> make up the difference.
>>
>> -- Laolu
>>
>>
>> On Mon, Oct 5, 2020 at 6:13 AM Bastien TEINTURIER via Lightning-dev 
>>  wrote:
>>>
>>> Good morning list,
>>>
>>> It seems to me that the "funder pays all the commit tx fees" rule exists 
>>> solely for simplicity
>>> (which was totally reasonable). I haven't been able to find much discussion 
>>> about this decision
>>> on the mailing list nor in the spec commits.
>>>
>>> At first glance, it's true that at the beginning of the channel lifetime, 
>>> the funder should be
>>> responsible for the fee (it's his decision to open a channel after all). 
>>> But as time goes by and
>>> both peers earn value from this channel, this rule becomes questionable. 
>>> We've discovered since
>>> then that there is some risk associated with having pending HTLCs 
>>> (flood-and-loot type of attacks,
>>> pinning, channel jamming, etc).
>>>
>>> I think that *in some cases*, fundees should be paying a portion of the 
>>> commit-tx on-chain fees,
>>> otherwise we may end up with a web-of-trust network where channels would 
>>> only exist between peers
>>> that trust each other, which is quite limiting (I'm hoping we can do 
>>> better).
>>>
>>> Routing nodes may be at risk when they *receive* HTLCs. All the attacks 
>>> that steal funds come from
>>> the fact that a routing node has paid downstream but cannot claim the 
>>> upstream HTLCs (correct me
>>> if that's incorrect). Thus I'd like nodes to pay for the on-chain fees of 
>>> the HTLCs they offer
>>> while they're pending in the commit-tx, regardless of whether they're 
>>> funder or fundee.
>>>
>>> The simplest way to do this would be to deduce the HTLC cost (172 * 
>>> feerate) from the offerer's
>>> main output (instead of the funder's main output, while keeping the base 
>>> commit tx weight paid
>>> by the funder).
>>>
>>> A more extreme proposal would be to tie the *total* commit-tx fee to the 
>>> channel usage:
>>>
>>> * if there are no pending HTLCs, the funder pays all the fee
>>> * if there are pending HTLCs, each node pays a proportion of the fee 
>>> proportional to the number of
>>> HTLCs they offered. If Alice offered 1 HTLC and Bob offered 3 HTLCs, Bob 
>>> pays 75% of the
>>> commit-tx fee and Alice pays 25%. When the HTLCs settle, the fee is 
>>> redistributed.
>>>
>>> This model uses the on-chain fee as collateral for usage of the channel. If 
>>> Alice wants to forward
>>> HTLCs through this channel (because she has something to gain - routing 
>>> fees), she should be taking
>>> on some of the associated risk, not Bob. Bob will be taking the same risk 
>>> downstream if he chooses
>>> to forward.
>>>
>>> I believe it also forces the fundee to care about on-chain feerates, which 
>>> is a healthy incentive.
>>> It may create a feedback loop between 

Re: [Lightning-dev] SIGHASH_SINGLE + update_fee Considered Harmful

2020-09-11 Thread Johan Torås Halseth
Hi,

Very good observation, most definitely not a type of attack I had foreseen!

Luckily, it was the plan to phase out update_fee all along, in favor
of only accepting the minimum relay fee (zero fee if/when package
relay is a reality). If I understand the scenario correctly, that
should mitigate this attack completely, as the attacker cannot impact
the intended miner fees on the HTLCs, and could only siphon off the
minimal miner fee if anything at all.

I would be open to patching the spec to disallow update_fee for anchor
channels, but maybe we can just add a warning and discourage it.

Johan


On Thu, Sep 10, 2020 at 8:13 PM Olaoluwa Osuntokun  wrote:
>
> Hi Antoine,
>
> Great findings!
>
> I think an even simpler mitigation is just for the non-initiator to _reject_
> update_fee proposals that are "unreasonable". The non-initiator can run a
> "fee leak calculation" to compute the worst-case leakage of fees in the
> revocation case. This can be done today without any significant updates to
> implementations, and some implementations may already be doing this.
>
> One issue is that we don't have a way to do a "soft reject" of an update_fee
> as is. However, depending on the implementations, it may be possible to just
> reconnect and issue a co-op close if there're no HTLCs on the commitment
> transaction.
>
> As you mentioned by setting proper values for max allowed htlcs, max in
> flight, reserve, etc, nodes are able to quantify this fee leak risk ahead of
> time, and set reasonable parameters based on their security model. One issue
> is that these values are set in stone rn when the channel is opened, but
> future iterations of dynamic commitments may allow us to update them on the
> fly.
>
> In the mid-term, implementations can start to phase out usage of update_fee
> by setting a minimal commitment fee when the channel is first opened, then
> relying on CPFP to bump up the commitment and any HTLCs if needed. This
> discovery might very well hasten the demise of update_fee in the protocol
> altogether as well.  I don't think we need to depend entirely on a
> theoretical package relay Bitcoin p2p upgrade assuming implementations are
> willing to make an assumption that say 20 sat/byte or w/e has a good chance
> of widespread propagation into mempools.
>
> From the perspective of channel safety, and variations of attacks like
> "flood & loot", imo it's absolutely critical that nodes are able to update
> the fees on their second-level HTLC transactions. As this is where the real
> danger lies: if nodes aren't able to get 2nd level HTLCs in the chain in
> time, then the incoming HTLC expiry will expire, creating a race condition
> across both commitments which can potentially cascade.
>
> In lnd today, anchors is still behind a build flag, but we plan to enable
> it by default for our upcoming 0.12 release. The blockers on our end were to
> add support for towers, and add basic deadline aware bumping, both of which
> are currently on track. We'll now also look into setting clamps on the
> receiver end to just not accept unreasonable values for the fee rate of a
> commitment, as this ends up eating into the true HTLC values for both sides.
>
> -- Laolu
>
>
> On Thu, Sep 10, 2020 at 9:28 AM Antoine Riard  wrote:
>>
>> Hi,
>>
>> In this post, I would like to expose a potential vulnerability introduced by 
>> the recent anchor output spec update related to the new usage of 
>> SIGHASH_SINGLE for HTLC transactions. This new malleability combined with 
>> the currently deployed mechanism of `update_fee` is likely harmful for funds 
>> safety.
>>
>> This has been previously shared with deployed implementations devs, as 
>> anchor channels are flagged as experimental it's better to discuss and solve 
>> this publicly. That said, if you're currently running experimental anchor 
>> channels with non-trusted parties on mainnet, you might prefer to close them.
>>
>> # SIGHASH_SINGLE and `update_fee` (skip it if you're familiar)
>>
>> First, let's start with a quick reminder of the data set committed to by the 
>> signature digest algorithm of Segwit transactions (BIP 143):
>> * nVersion
>> * hashPrevouts
>> * hashSequence
>> * outpoint
>> * scriptCode of the input
>> * value of the output spent by this input
>> * nSequence of the input
>> * hashOutputs
>> * nLocktime
>> * sighash type of the signature
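A toy illustration of which of the fields above stay committed under the flags discussed here. This is not a full BIP 143 implementation; the byte strings are placeholders standing in for the real double-SHA256 hashes.

```python
# Sketch: how SIGHASH_SINGLE and SIGHASH_ANYONECANPAY blank out
# the BIP 143 digest fields (placeholder values, not real hashes).

ZERO32 = b"\x00" * 32


def digest_fields(fields: dict, sighash_single: bool = False,
                  anyonecanpay: bool = False) -> dict:
    """Return the digest inputs with non-committed fields zeroed out."""
    f = dict(fields)
    if anyonecanpay:
        # Only this input is committed: prevouts/sequence hashes blanked.
        f["hashPrevouts"] = ZERO32
        f["hashSequence"] = ZERO32
    if sighash_single:
        # Only the output at the same index as this input is committed.
        f["hashSequence"] = ZERO32
        f["hashOutputs"] = b"hash(output at input's index)"
    return f


base = {"hashPrevouts": b"H(all prevouts)",
        "hashSequence": b"H(all sequences)",
        "hashOutputs": b"H(all outputs)"}

# SIGHASH_SINGLE | SIGHASH_ANYONECANPAY, as used for anchor HTLC sigs:
print(digest_fields(base, sighash_single=True, anyonecanpay=True))
```

The result shows why the counterparty can freely add inputs and re-bind the signature: neither the other prevouts nor the other outputs are covered anymore.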
>>
>> Anchor output switched the sighash type from SIGHASH_ALL to SIGHASH_SINGLE | 
>> SIGHASH_ANYONECANPAY for HTLC signatures sent to your counterparty. Thus it 
>> can non-cooperatively spend its HTLC outputs on its commitment transactions. 
>> I.e when Alice broadcasts her commitment transaction, every Bob's signatures 
>> on Alice's HTLC-Success/Timeout transactions are now flagging the new 
>> sighash type.
>>
>> Thus `hashPrevouts`, `hashSequence` (ANYONECANPAY) and `hashOutputs` 
>> (SINGLE) aren't committed anymore. SINGLE only enforces commitment to the 
>> output scriptpubkey/amount at the same index that
>> the 

Re: [Lightning-dev] Sphinx and Push Notifications

2020-02-04 Thread Johan Torås Halseth
2) lnd is getting the API you need in the next release (v0.10), which
lets you subscribe to HTLC events. See PR
https://github.com/lightningnetwork/lnd/pull/3848. The notification
won't be signed (but the stream uses TLS), but that can easily be
added using the `signmessage` API:
https://api.lightning.community/#signmessage

Cheers,
Johan

On Sun, Feb 2, 2020 at 1:46 PM Pavol Rusnak via Lightning-dev
 wrote:
>
> Hi all!
>
> I have a couple of unrelated questions, hope you can give me some pointers.
>
> 1) Is c-lightning going to support Sphinx or other form of spontaneous 
> payments?
>
> 2) Can a lightning node (such as lnd or c-lightning) send a push notification 
> (e.g. to a webhook) when it receives or routes a payment? If yes, is this 
> notification cryptographically signed (for example with the node's private 
> key)? Is this documented somewhere?
>
> Thanks!
>
> --
> Best Regards / S pozdravom,
>
> Pavol "stick" Rusnak
> CTO, SatoshiLabs
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-30 Thread Johan Torås Halseth
On Mon, Oct 28, 2019 at 6:16 PM David A. Harding  wrote:

> A parent transaction near the limit of 100,000 vbytes could have almost
> 10,000 outputs paying OP_TRUE (10 vbytes per output).  If the children
> were limited to 10,000 vbytes each (the current max carve-out size),
> that allows relaying 100 mega-vbytes or nearly 400 MB data size (larger
> than the default maximum mempool size in Bitcoin Core).
>

Thanks, Dave, I wasn't aware the limits would allow this many outputs. And
as your calculation shows, this opens up the potential for free relay of
large amounts of data.

We could start special casing to only allow this for "LN commitment-like"
transactions, but this would be application specific changes, and your
calculation shows that even with the BOLT2 numbers there still exists cases
with a large number of children.

We are moving forward with adding a 1 block delay to all outputs to utilize
the current carve-out rule, and the changes aren't that bad. See Joost's
post in "[PATCH] First draft of option_simplfied_commitment"

- Johan


Re: [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-28 Thread Johan Torås Halseth
>
>
> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> Carve-Out), and now Party B can’t do anything.

Matt: With the proposed change, party B would always be able to add a child
to its output, regardless of what games party A is playing.


Thanks for the explanation, Jeremy!


> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.


Relay cost is the obvious problem with just naively removing all limits.
Relaxing the current rules by allowing to add a child to each output as
long as it has a single unconfirmed parent would still only allow free
relay of O(size of parent) extra data (which might not be that bad? Similar
to the carve-out rule we could put limits on the child size). This would be
enough for the current LN use case (increasing fee of commitment tx), but
not for OP_SECURETHEBAG I guess, as you need the tree of children, as you
mention.

I imagine walking the mempool wouldn't change much, as you would only have
one extra child per output. But here I'm just speculating, as I don't know
the code well enough to know what the diff would look like.


> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".


This is interesting for an LN commitment! You could really hide every
output of the commitment within OP_STB, which could either allow bypassing
the fee-pinning attack entirely (if the output cannot be spent unconfirmed)
or adding fees to the commitment using SIGHASH_SINGLE|ANYONECANPAY.

- Johan

On Sun, Oct 27, 2019 at 8:13 PM Jeremy  wrote:

> Johan,
>
> The issues with mempool limits for OP_SECURETHEBAG are related, but have
> distinct solutions.
>
> There are two main categories of mempool issues at stake. One is relay
> cost, the other is mempool walking.
>
> In terms of relay cost, if an ancestor can be replaced, it will invalidate
> all its children, meaning that no one paid for that broadcasting. This can
> be fixed by appropriately assessing Replace By Fee update fees to
> encapsulate all descendants, but there are some tricky edge cases that make
> this non-obvious to do.
>
> The other issue is walking the mempool -- many of the algorithms we use in
> the mempool can be N log N or N^2 in the number of descendants. (simple
> example: an input chain of length N to a fan out of N outputs that are all
> spent, is O(N^2) to look up ancestors per-child, unless we're caching).
>
> The other sort of walking issue is where the indegree or outdegree for a
> transaction is high. Then when we are computing descendants or ancestors we
> will need to visit it multiple times. To avoid re-expanding a node, we
> currently cache it with a set. This uses O(N) extra memory and makes O(N
> Log N) (we use std::set not unordered_set) comparisons.
>
> I just opened a PR which should help with some of the walking issues by
> allowing us to cheaply cache which nodes we've visited on a run. It makes a
> lot of previously O(N log N) stuff O(N) and doesn't allocate as much new
> memory. See: https://github.com/bitcoin/bitcoin/pull/17268.
>
>
> Now, for OP_SECURETHEBAG we want a particular property that is very
> different from with lightning htlcs (as is). We want that an unlimited
> number of child OP_SECURETHEBAG txns may extend from a confirmed
> OP_SECURETHEBAG, and then at the leaf nodes, we want the same rule as
> lightning (one dangling unconfirmed to permit channels).
>
> OP_SECURETHEBAG can help with the LN issue by putting all HTLCS into a
> tree where they are individualized leaf nodes with a preceding CSV. Then,
> the above fix would ensure each HTLC always has time to close properly as
> they would have individualized lockpoints. This is desirable for some
> additional reasons and not for others, but it should "work".
>
>
>
> --
> @JeremyRubin <https://twitter.com/JeremyRubin>
> <https://twitter.com/JeremyRubin>
>
>
> On Fri, Oct 25, 2019 at 10:31 AM Matt Corallo 
> wrote:
>
>> I don’t see how? Let’s imagine Party A has two spendable outputs, now
>> they stuff the package size on one of their spendable outputs until it i

Re: [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-25 Thread Johan Torås Halseth
It essentially changes the rule to always allow CPFP-ing the commitment as
long as there is an output available without any descendants. It changes
the commitment requirement from "you always need at least, and exactly, one
non-CSV output per party" to "you always need at least one non-CSV output
per party".

I realize these limits are there for a reason though, but I'm wondering if
we could relax them. Also now that jeremyrubin has expressed problems with the
current mempool limits.

On Thu, Oct 24, 2019 at 11:25 PM Matt Corallo 
wrote:

> I may be missing something, but I'm not sure how this changes anything?
>
> If you have a commitment transaction, you always need at least, and
> exactly, one non-CSV output per party. The fact that there is a size
> limitation on the transaction that spends for carve-out purposes only
> effects how many other inputs/outputs you can add, but somehow I doubt
> its ever going to be a large enough number to matter.
>
> Matt
>
> On 10/24/19 1:49 PM, Johan Torås Halseth wrote:
> > Reviving this old thread now that the recently released RC for bitcoind
> > 0.19 includes the above mentioned carve-out rule.
> >
> > In an attempt to pave the way for more robust CPFP of on-chain contracts
> > (Lightning commitment transactions), the carve-out rule was added in
> > https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on
> > an implementation of a new commitment format for utilizing the Bring
> > Your Own Fees strategy using CPFP, I’m wondering if the special case
> > rule should have been relaxed a bit, to avoid the need for adding a 1
> > CSV to all outputs (in case of Lightning this means HTLC scripts would
> > need to be changed to add the CSV delay).
> >
> > Instead, what about letting the rule be
> >
> > The last transaction which is added to a package of dependent
> > transactions in the mempool must:
> >   * Have no more than one unconfirmed parent.
> >
> > This would of course allow adding a large transaction to each output of
> > the unconfirmed parent, which in effect would allow an attacker to
> > exceed the MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is
> > this a problem with the current mempool acceptance code in bitcoind? I
> > would imagine evicting transactions based on feerate when the max
> > mempool size is met handles this, but I’m asking since it seems like
> > there has been several changes to the acceptance code and eviction
> > policy since the limit was first introduced.
> >
> > - Johan
> >
> >
> > On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell wrote:
> >
> > Matt Corallo writes:
> > >>> Thus, even if you imagine a steady-state mempool growth, unless
> the
> > >>> "near the top of the mempool" criteria is "near the top of the
> next
> > >>> block" (which is obviously *not* incentive-compatible)
> > >>
> > >> I was defining "top of mempool" as "in the first 4 MSipa", ie.
> next
> > >> block, and assumed you'd only allow RBF if the old package wasn't
> > in the
> > >> top and the replacement would be.  That seems incentive
> > compatible; more
> > >> than the current scheme?
> > >
> > > My point was, because of block time variance, even that criteria
> > doesn't hold up. If you assume a steady flow of new transactions and
> > one or two blocks come in "late", suddenly "top 4MWeight" isn't
> > likely to get confirmed until a few blocks come in "early". Given
> > block variance within a 12 block window, this is a relatively likely
> > scenario.
> >
> > [ Digging through old mail. ]
> >
> > Doesn't really matter.  Lightning close algorithm would be:
> >
> > 1.  Give bitcoind unilateral close.
> > 2.  Ask bitcoind what current expedited fee is (or survey your
> mempool).
> > 3.  Give bitcoind child "push" tx at that total feerate.
> > 4.  If next block doesn't contain unilateral close tx, goto 2.
> >
> > In this case, if you allow a simplified RBF where 'you can replace if
> > 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3.
> > old tx isnt',
> > it works.
> >
> > It allows someone 100k of free tx spam, sure.  But it's simple.
> >
> > We could further restrict it by marking the unilateral close somehow
> to
> > say "gonna be pushed" and further limiting the child tx weight (say,
> > 5kSipa?) in that case.
> >
> > Cheers,
> > Rusty.
>


Re: [Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)

2019-10-24 Thread Johan Torås Halseth
Reviving this old thread now that the recently released RC for bitcoind
0.19 includes the above mentioned carve-out rule.

In an attempt to pave the way for more robust CPFP of on-chain contracts
(Lightning commitment transactions), the carve-out rule was added in
https://github.com/bitcoin/bitcoin/pull/15681. However, having worked on an
implementation of a new commitment format for utilizing the Bring Your Own
Fees strategy using CPFP, I’m wondering if the special case rule should
have been relaxed a bit, to avoid the need for adding a 1 CSV to all
outputs (in case of Lightning this means HTLC scripts would need to be
changed to add the CSV delay).

Instead, what about letting the rule be

The last transaction which is added to a package of dependent
transactions in the mempool must:
  * Have no more than one unconfirmed parent.
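To make the comparison concrete, here is a toy model of the current carve-out rule versus the proposed relaxation. The constants and structure are illustrative only, not bitcoind's actual policy code.

```python
# Sketch contrasting bitcoind's CPFP carve-out exemption with the
# relaxed rule proposed above (illustrative, not real policy code).

MAX_DESCENDANTS = 25          # default descendant-count package limit
CARVE_OUT_MAX_VSIZE = 10_000  # max vsize of the carve-out child


def accepted_today(parent_descendants: int, child_vsize: int,
                   unconfirmed_parents: int) -> bool:
    """Current policy sketch: normal package limits, plus the carve-out:
    one small child with exactly one unconfirmed parent may exceed them."""
    if parent_descendants < MAX_DESCENDANTS:
        return True
    return unconfirmed_parents == 1 and child_vsize <= CARVE_OUT_MAX_VSIZE


def accepted_proposed(unconfirmed_parents: int) -> bool:
    """Proposed relaxation: the last tx added to a package just needs to
    have no more than one unconfirmed parent."""
    return unconfirmed_parents <= 1


# A CPFP child on a pinned commitment: rejected by the normal limits,
# but allowed by both the carve-out and the proposed rule.
print(accepted_today(25, 2_000, 1), accepted_proposed(1))  # -> True True
```

Note that the proposed rule drops the size cap entirely, which is exactly why the MAX_PACKAGE_VIRTUAL_SIZE concern above arises.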

This would of course allow adding a large transaction to each output of the
unconfirmed parent, which in effect would allow an attacker to exceed the
MAX_PACKAGE_VIRTUAL_SIZE limit in some cases. However, is this a problem
with the current mempool acceptance code in bitcoind? I would imagine
evicting transactions based on feerate when the max mempool size is met
handles this, but I’m asking since it seems like there has been several
changes to the acceptance code and eviction policy since the limit was
first introduced.

- Johan


On Wed, Feb 13, 2019 at 6:57 AM Rusty Russell  wrote:

> Matt Corallo  writes:
> >>> Thus, even if you imagine a steady-state mempool growth, unless the
> >>> "near the top of the mempool" criteria is "near the top of the next
> >>> block" (which is obviously *not* incentive-compatible)
> >>
> >> I was defining "top of mempool" as "in the first 4 MSipa", ie. next
> >> block, and assumed you'd only allow RBF if the old package wasn't in the
> >> top and the replacement would be.  That seems incentive compatible; more
> >> than the current scheme?
> >
> > My point was, because of block time variance, even that criteria doesn't
> hold up. If you assume a steady flow of new transactions and one or two
> blocks come in "late", suddenly "top 4MWeight" isn't likely to get
> confirmed until a few blocks come in "early". Given block variance within a
> 12 block window, this is a relatively likely scenario.
>
> [ Digging through old mail. ]
>
> Doesn't really matter.  Lightning close algorithm would be:
>
> 1.  Give bitcoind unilateral close.
> 2.  Ask bitcoind what current expedited fee is (or survey your mempool).
> 3.  Give bitcoind child "push" tx at that total feerate.
> 4.  If next block doesn't contain unilateral close tx, goto 2.
>
> In this case, if you allow a simplified RBF where 'you can replace if
> 1. feerate is higher, 2. new tx is in first 4Msipa of mempool, 3. old tx
> isn't', it works.
>
> It allows someone 100k of free tx spam, sure.  But it's simple.
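The simplified RBF rule above can be modelled as a predicate. A toy Python sketch, not bitcoind code: it assumes a greedy feerate-ordered next-block selection, and the `fee`/`weight` dict fields are illustrative.

```python
def in_top_block(tx, mempool, block_weight=4_000_000):
    """Would `tx` land in the first 4 Msipa (the next block) if blocks
    were filled greedily by feerate?  `mempool` holds the other txs."""
    ordered = sorted(mempool + [tx], key=lambda t: t["fee"] / t["weight"],
                     reverse=True)
    used = 0
    for t in ordered:
        if t is tx:
            return used + t["weight"] <= block_weight
        used += t["weight"]

def rbf_allowed(old_tx, new_tx, mempool):
    """Rusty's simplified RBF: higher feerate, new tx would be in the
    next block, and the old one wouldn't."""
    return (new_tx["fee"] / new_tx["weight"] > old_tx["fee"] / old_tx["weight"]
            and in_top_block(new_tx, mempool)
            and not in_top_block(old_tx, mempool))
```

With a 3.9 Msipa backlog at 2 sat/sipa, a 200 ksipa child at 1 sat/sipa is stuck, but replacing it with one at 3 sat/sipa passes all three conditions.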
>
> We could further restrict it by marking the unilateral close somehow to
> say "gonna be pushed" and further limiting the child tx weight (say,
> 5kSipa?) in that case.
>
> Cheers,
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Base AMP

2018-11-25 Thread Johan Torås Halseth
This shouldn't be a problem, as the invoice will already indicate that the
node supports BaseAMP. If you have a reason to not reveal that you support
BAMP for certain invoices, you'll just not specify it in the invoice, and
act non-BAMPy when receiving payments to this payment hash.

Of course, this will also be opt-in for both sides and won't affect
existing nodes in any way.

Cheers,
Johan

On Wed, Nov 21, 2018 at 11:54 PM Rusty Russell 
wrote:

> Johan Torås Halseth  writes:
> > Seems like we can restrict the changes to BOLT11 by having the receiver
> > assume NAMP for incoming payments < invoice_amount. (with some timeout of
> > course, but that would need to be the case even when the sender is
> > signalling NAMP).
>
> This would effectively become a probe for Base AMP; if you get a partial
> payment error, it's because the recipient didn't support Base AMP.
>
> Seems cleaner to have a flag, both on BOLT11 and inside the onion.  Then
> it's explicitly opt-in for both sides and doesn't affect existing nodes
> in any way.
>
> Cheers,
> Rusty.
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Base AMP

2018-11-21 Thread Johan Torås Halseth
Seems like we can restrict the changes to BOLT11 by having the receiver
assume NAMP for incoming payments < invoice_amount. (with some timeout of
course, but that would need to be the case even when the sender is
signalling NAMP).

Cheers,
Johan

On Wed, Nov 21, 2018 at 3:55 AM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Rusty,
>
> > And do not play with `amount_to_forward`, as it's an important
> > signal to the final node that the previous node did not offer less value
> > for the HTLC than it was supposed to. (You could steal the top bit to
> > signal partial payment if you really want to).
>
> I do not view this as playing with the existing `amt_to_forward`, but
> rather retaining its previous use.
>
> If it helps, we can rewrite the *current* pre-AMP spec as below:
>
> 2. data:
> ...
> * [`8` : `amt_to_forward` / `amt_to_pay`]
>
> ...
>
> * `amt_to_forward` - for **non-final** nodes, this is the value to forward
> to the next node.
>   Non-final nodes MUST check:
>
> incoming_htlc_amt - fee >= amt_to_forward
>
> * `amt_to_pay` - for **final** nodes, this is the value that is intended
> to reach it.
>   Final nodes MUST check:
>
> incoming_htlc_amt >= amt_to_pay
>
> Then for Base AMP:
>
> * `amt_to_pay` - for **final** nodes, this is the total value that is
> intended to reach it.
>   If `incomplete_payment` flag is not set, final nodes MUST check:
>
> incoming_htlc_amt >= amt_to_pay
>
>   If `incomplete_payment` flag is set, then final nodes must claim HTLCs
> only if:
>
> sum(incoming_htlc_amt) >= amt_to_pay
>
>   Where `sum(incoming_htlc_amt)` is the total `incoming_htlc_amt` for all
> incoming HTLCs terminating at this final node with the same `payment_hash`.
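The non-final and final checks ZmnSCPxj spells out above can be sketched as two small predicates; function and parameter names are hypothetical, and amounts are plain integers (msat).

```python
def nonfinal_check(incoming_htlc_amt, fee, amt_to_forward):
    # Non-final nodes: the incoming HTLC minus this node's fee must
    # cover what is forwarded to the next hop.
    return incoming_htlc_amt - fee >= amt_to_forward

def final_check(incoming_htlc_amts, amt_to_pay, incomplete_payment):
    if not incomplete_payment:
        # Single-path case: the one incoming HTLC must cover the amount.
        return incoming_htlc_amts[0] >= amt_to_pay
    # Base AMP: claim only once all HTLCs with this payment_hash sum
    # to at least the total intended payment.
    return sum(incoming_htlc_amts) >= amt_to_pay
```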
>
>
>
> Now perhaps we can argue that for AMP we should have two fields
> `amt_to_pay_for_this_partial_payment` and `amt_to_pay_for_total_payment`
> instead.
>
>
> Regards,
> ZmnSCPxj
>
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Base AMP

2018-11-13 Thread Johan Torås Halseth
Good evening Z and list,

I'm wondering: since these payments are no longer atomic, should we name them
accordingly?

Cheers,
Johan

On Tue, Nov 13, 2018 at 1:28 PM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> I propose the below to support Base AMP.
>
> The below would allow arbitrary merges of paths, but not arbitrary
> splits.  I am uncertain about the safety of arbitrary splits.
>
> ### The `multipath_merge_per_hop` type (`option_base_amp`)
>
> This indicates that payment has been split by the sender using Base AMP,
> and that the receiver should wait for the total intended payment before
> forwarding or claiming the payment.
> In case the receiving node is not the last node in the path, then
> succeeding hops MUST be the same across all splits.
>
> 1. type: 1 (`termination_per_hop`)
> 2. data:
>   * [`8` : `short_channel_id`]
>   * [`8` : `amt_to_forward`]
>   * [`4` : `outgoing_cltv_value`]
>   * [`8` : `intended_total_payment`]
>   * [`4` : `zeros`]
>
> The contents of this hop will be the same across all paths of the Base AMP.
> The `payment_hash` of the incoming HTLCs will also be the same across all
> paths of the Base AMP.
>
> `intended_total_payment` is the total amount of money that this node
> should expect to receive in all incoming paths to the same `payment_hash`.
>
> This may be the last hop of a payment onion, in which case the `HMAC` for
> this hop will be `0` (the same rule as for `per_hop_type` 0).
>
> The receiver:
>
> * MUST impose a reasonable timeout for waiting to receive all component
> paths, and fail all incoming HTLC offers for the `payment_hash`  if they
> have not totalled equal to `intended_total_payment`.
> * MUST NOT forward (if an intermediate node) or claim (if the final node)
> unless it has received a total greater or equal to `intended_total_payment`
> in all incoming HTLCs for the same `payment_hash`.
>
> The sender:
>
> * MUST use the same `payment_hash` for all paths of a single multipath
> payment.
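The receiver rules above (the timeout and the claim threshold) can be sketched as a small state machine; the class, method names, and return values are hypothetical, not BOLT wire types.

```python
import time

class BaseAmpReceiver:
    """Toy per-payment_hash receiver state for the rules above."""

    def __init__(self, intended_total_payment, timeout_s=60.0):
        self.intended_total = intended_total_payment
        self.deadline = time.time() + timeout_s
        self.received = 0

    def on_htlc(self, amt, now=None):
        now = time.time() if now is None else now
        if now > self.deadline:
            return "fail_all"   # MUST fail incomplete sets on timeout
        self.received += amt
        if self.received >= self.intended_total:
            return "claim_all"  # MUST NOT claim/forward before the total
        return "wait"
```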
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Link-level payment splitting via intermediary rendezvous nodes

2018-11-09 Thread Johan Torås Halseth
Neat! I think this is similar to what has been briefly discussed earlier
about hybrid packet-switched/circuit-switched payment routing.

B doesn't have to care about how the payment gets from C to D, but she
knows it must go through D, keeping privacy intact. This would be exactly
equivalent to how Tor works today, I would think.

C must also make sure the detour route stays within the fee limit of course.

Cheers,
Johan

On Fri, Nov 9, 2018 at 7:02 AM ZmnSCPxj via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning list,
>
> As was discussed directly in summit, we accept link-level payment splitting
> (scid is not binding), and provisionally accept rendez-vous routing.
>
> It strikes me, that even if your node has only a single channel to the
> next node (c-lightning), it is possible, to still perform link-level
> payment splitting/re-routing.
>
> For instance, consider this below graph:
>
>   E<---D--->C<---B
>        ^   /
>        |  /
>        | /
>        A
>
> In the above, B requests a route from B->C->D->E.
>
> However, C cannot send to D, since the channel direction is saturated in
> favor of D.
>
> Alternately, C can route to D via A instead.  It holds the (encrypted)
> route from D to E.  It can take that sub-route and treat it as a partial
> route-to-payee under rendez-vous routing, as long as node A supports
> rendez-vous routing.
>
> This can allow re-routing or payment splitting over multiple hops.
>
> Even though C does not know the number of remaining hops between D and the
> destination, its alternative is to earn nothing anyway as its only
> alternative is to fail the routing.  At least with this, there is a chance
> it can succeed to send the payment to the final destination.
>
> Regards,
> ZmnSCPxj
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Recovering protocol for Lightning network

2018-11-01 Thread Johan Torås Halseth
Hi, Margherita,

If you haven't already, look into "data loss protection" as defined in the
BOLTs. It is similar to to what you are suggesting, with A being able to
tell B that it lost state for the A<->B channel, and if B is cooperative it
will help A recover its funds.

You can also look into watchtowers, and static and dynamic channel backups,
as they touch onto what you are describing.

Cheers,
Johan

On Thu, Nov 1, 2018 at 12:59 AM Margherita Favaretto 
wrote:

> Good morning dev-lightning community,
>
> Last weekend, I had the opportunity to take part in "LightningHackdayNYC".
> It was an incredible event; thanks to all the organizers, the speakers, and
> everyone willing to share their knowledge.
> Discussing with people in the community, I was able to better define the
> problem I'm focusing on and form some ideas for the solution.
> I've created a project on GitHub:
> https://github.com/margheritafav/LightningNetworkProject , where you
> could find a draft of my research, and also you are welcome to add your
> comments and feedback.
>
>
> To recap, the aim of the project is to build a recovery protocol for
> Lightning Network. As some of you suggested in previous e-mails, Eltoo
> already solves this problem in part. With Eltoo, nodes are able to share
> the latest state of the channel, so if one of the two nodes loses some
> information, it can simply ask the other node to share the most recent
> state. Unfortunately, this mechanism doesn't cover the case where the
> other node shares not the last transaction but an older one that is more
> favourable to its own balance. My project aims to solve this particular
> issue and make the protocol robust enough to fully handle the case of a
> false-positive node.
>
> Idea: The main idea of the design is using the other connected nodes as a
> back-up of own recent status.
> *I**f a node A is connected to a node B and a node C, for each
> transaction between A and B, A sends an encrypted information, regarding
> the last commitment transaction with B, to C. For each commitment
> transaction with C, A sends an encrypted information, regarding the
> last commitment transaction with C, to B.*
> In this way, if A loses the last transactions, she may ask the information
> to the other connected node and update the status.
>
> I think that this idea can solve the current lack in Eltoo and I'm
> planning to analyze more this solution in the next days. Any
> thoughts/suggestions are really appreciated to proceed in the wisest way
> and find a solution that can also cover all your needs. If any of you
> are interested in this research, I'm available to share more details about
> the design of my idea and I'm open to discussion. Moreover, if someone is
> already working on this research topic, please do not hesitate to write me
> about possibly collaborating.
>
>
> Thank you in advance,
> Margherita Favaretto
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Splicing Proposal: Feedback please!

2018-10-16 Thread Johan Torås Halseth
>
> This is one of the cases where a simpler solution (relatively
> speaking ^^) is to be preferred imho, allowing for future
> iterations.
>

I think we should strive to splice in one on-chain tx, as otherwise the biggest
benefit really is lost compared to just closing and reopening the channel.

Complexity-wise, I don't think there is much to gain from the 2-tx
proposal, as (if I understand the proposal correctly) there will be even
more transaction types with new scripts to code up and maintain.

On Tue, Oct 16, 2018 at 5:38 AM Christian Decker 
wrote:

> ZmnSCPxj via Lightning-dev 
> writes:
>
> >> One thing that I think we should lift from the multiple funding output
> >> approach is the "pre seating of inputs". This is cool as it would allow
> >> clients to generate addresses, that others could deposit to, and then
> have
> >> be spliced directly into the channel. Public derivation can be used,
> along
> >> with a script template to do it non-interactively, with the clients
> picking
> >> up these deposits, and initiating a splice in as needed.
> >
> > I am uncertain what this means in particular, but let me try to
> > restate what you are talking about in other terms:
> >
> > 1.  Each channel has two public-key-derivation paths (BIP32) to create
> onchain addresses.  One for each side of the channel.
> > 2.  When somebody sends to one of the onchain addresses in the path,
> their client detects this.
> > 3.  The client initiates a splice-in automatically from this UTXO paying
> to that address into the channel.
> >
> > It seems to me naively that the above can be done by the client
> > software without any modifications to the Lightning Network BOLT
> > protocol, as long as the BOLT protocol is capable of supporting *some*
> > splice-in operation, i.e. it seems to be something that a client
> > software can implement as a feature without requiring a BOLT change.
> > Or is my above restatement different from what you are talking about?
> >
> > How about this restatement?
> >
> > 1.  Each channel has two public-key-derivation paths (BIP32) to create
> onchain addresses.  One for each side of the channel.
> > 2.  The base of the above is actually a combined private-public keypair
> of both sides (e.g. created via MuSig or some other protocol).  Thus the
> addresses require cooperation of both parties to spend.
> > 3.  When somebody sends to one of the onchain addresses in the path,
> their client detects this.
> > 4.  The client updates the current transaction state, such that the new
> commit transaction has two inputs ( the original channel transaction and
> the new UTXO).
> >
> > The above seems unsafe without trust in the other peer, as, the other
> > peer can simply refuse to create the new commit transaction.  Since
> > the address requires both parties to spend, the money cannot be spent
> > and there is no backoff transaction that can be used.  But maybe you
> > can describe some mechanism to ensure this, if this is what is meant
> > instead?
>
> This could easily be solved by making the destination address a Taproot
> address, which by default is just a 2-of-2, but in the uncooperative
> case it can reveal the script it commits to, which is just a timelocked
> refund that requires a single-sig. The only problem with this is that
> the refund would be non-interactive, and so the entirety of the funds,
> that may be from a third-party, need to be claimed by one endpoint,
> i.e., there is no splitting the funds in case of an uncollaborative
> refund. Not sure how important that is though, since I don't think
> third-party funds will come from unrelated parties, e.g., most of these
> funds will come from an on-chain wallet that is under the control of
> either parties so the refund should go back to that party anyway.
>
> Cheers,
> Christian
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] RouteBoost: Adding 'r=' fields to BOLT 11 invoices to flag capacity

2018-09-20 Thread Johan Torås Halseth
Any reason not to include _all_ (up to a limit) incoming channels with
sufficient capacity?

Cheers,
Johan

On Thu, Sep 20, 2018 at 4:12 AM Rusty Russell  wrote:

> Hi all,
>
> I'm considering a change to c-lightning, where `invoice` would
> automatically append an 'r' field for a channel which has sufficient
> *incoming* capacity for the amount (using a weighted probability across
> our peers).
>
>  This isn't quite what 'r' was for, but it would be a useful
> hint for payment routing and also potentially for establishing an
> initial channel.  This is an issue for the Blockstream Store which
> deliberately doesn't advertise an address any more to avoid
> centralization.
>
> Thoughts welcome!
> Rusty.
> ___
> Lightning-dev mailing list
> Lightning-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Rebalancing argument

2018-07-10 Thread Johan Torås Halseth
A simple way to see how rebalancing could be beneficial is to observe that you
only know the channel capacity (not the balance!) of channels you don't
belong to.
If everybody is a good steward and rebalances their channels, then picking a
route to send a payment over is more likely to succeed than if everybody's
channels are depleted in one direction (the extreme case).

On Thu, Jul 5, 2018 at 12:06, Dmytro Piatkivskyi  
wrote:
Hi Olaoluwa,

I'm glad we're thinking in the same direction.

Generally, I think we (the community) worry too much about equilibrium.
There is really no proof that it improves the network. On the other hand,
money being locked in channels is a major issue. Some nodes may be used
mostly for sending payments, whereas others mostly for receiving. Therefore,
the distribution of funds in channels should be dictated not by
equilibrium but by nodes' plans to send and receive.

> In this case, equilibrium means being able to recv as much as you can
> send on all your channels.

Now it seems there are two ways to define equilibrium. You have described
one where each node is trying to keep its ability to send and receive at
the same level. I'll repeat the above argument: some nodes may be used
mostly for sending payments, whereas others mostly for receiving. Therefore,
such a definition is unjustified from the individual viewpoint. Another
definition of equilibrium is when a node distributes its available balance
equally amongst the channels it has. Your argument still stands here, as
such an equilibrium minimises the number of depleted channels.

> The argument here is that by maintaining this balance, one is likely to
> reduce the number of routing failures from failed HTLC forwarding, as the
> channel is equally likely to be able to carry an HTLC in either direction.

If a node has no balance, its channels are depleted. There is nothing we
can do with this. Such nodes are bad for topology and should be
discouraged. Possibly by the autopilot.

> However if a few sources/sinks dominate, then channels may regularly
> become biased, requiring more maintenance.

Now you're bringing up an even more important question. If we had balanced
payments, LN could function without ever touching the blockchain again,
indefinitely. Skewed traffic is a big problem. Re-balancing won't be of
use here because, given fixed node balances, there is only a limited
max flow that can be pushed in a particular direction. I believe the
autopilot could mitigate the problem. Please find my argument at the bottom
of [1].

[1] https://github.com/lightningnetwork/lnd/issues/677


Best,
Dmytro

From: Olaoluwa Osuntokun 
Date: Tuesday, 3 July 2018 at 22:13
To: Dmytro Piatkivskyi 
Cc: "lightning-dev@lists.linuxfoundation.org"

Subject: Re: [Lightning-dev] Rebalancing argument


Hi Dmytro,

Thanks for bringing this up! Sometime last year, at a developer meetup that
cdecker also attended, we briefly discussed a similar question: whether
"active rebalancing" is ever really necessary.
From my PoV, active rebalancing is rebalancing done for the purpose of being
able to send/recv a particular payment. On a whiteboard, cdecker sketched
out a similar argument to Lemma 2 in the paper you linked: namely that there
will exist an alternative path, and therefore active rebalancing isn't
necessary.

IMO, in a world pre-AMP, it is indeed necessary. Consider a node Bob that
wishes to receive a payment of 0.5 BTC. Bob has two channels, one with 2 BTC
max capacity, and the other with 1 BTC max capacity. If channel 1 only has
0.2 available for him to receive, and channel 2 only has 0.3 available for
him to receive, then without active rebalancing, he's unable to receive the
payment. However, if he completes a circular payment from channel 1 to
channel 2 (or the other way around), then he's able to receive the payment
(under ideal conditions).
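The arithmetic of Bob's example can be checked directly, here in satoshis. This assumes, as the text says, ideal conditions: Bob has enough local balance in the outgoing channel to make the circular payment; channel names are illustrative.

```python
def can_receive(inbound, amount):
    # Pre-AMP: a single payment must fit entirely within ONE channel's
    # inbound (receive) capacity.
    return any(c >= amount for c in inbound.values())

def circular_rebalance(inbound, out_chan, in_chan, amount):
    # Pay yourself: sending out of `out_chan` frees inbound capacity
    # there, receiving on `in_chan` consumes inbound capacity there.
    inbound[out_chan] += amount
    inbound[in_chan] -= amount

# Bob's situation: 0.2 and 0.3 BTC of inbound capacity.
inbound = {"chan1": 20_000_000, "chan2": 30_000_000}
assert not can_receive(inbound, 50_000_000)       # cannot receive 0.5 BTC

circular_rebalance(inbound, "chan1", "chan2", 30_000_000)
assert can_receive(inbound, 50_000_000)           # chan1 now has 0.5 BTC inbound
```

Post-AMP, `sum(inbound.values()) >= amount` is the relevant condition instead, which holds from the start.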

In a world post-AMP, this would seem to no longer apply. Alice the sender
may be able to utilize the aggregate bandwidth of the network to send
minimally two payment flows (one 0.2 and one 0.3) to satisfy the payment.
As a result, active rebalancing isn't needed, as payments can be split up
to fully utilize the payment bandwidth at both the sender and the receiver.

> total balances of nodes in the network define the feasibility of a
> particular transaction, not the distribution of balances.

With multi-path payments this is precisely the case!

However, there might be an argument in favor of "passive rebalancing". I
define passive rebalancing as rebalancing a node carries out independently
of needing to send/receive payments itself, in order to ensure an
equilibrium state amongst the available balances of its channels. In this
case, equilibrium
means being able to recv as much as you can send on all your channels. The
argument here is that by maintaining this balance, one is likely to reduce
the
number of routing failures from failed HTLC forwarding, as the channel is

Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning

2018-02-08 Thread Johan Torås Halseth
Yeah, that is true, it would only give you the atomicity, not the
decorrelation. I don't see how you could get all the same properties using only
one hash, though. I guess the receiver has no incentive to claim any of the
payments before all of them have arrived, but you get no guarantee that partial
payments cannot be made. Seems hard to do without introducing new primitives.
- Johan

On Thu, Feb 8, 2018 at 12:44, Jim Posen <jim.po...@gmail.com> wrote:
If using two hashes to deliver the payment while still getting a proof, I'm not 
sure what that provides above just sending regular lightning payments over 
multiple routes with one hash. Firstly, if there is a second hash, it would 
presumably be the same for all routes, making them linkable again, which AMP 
tries to solve. And secondly, the receiver has no incentive to claim any of the 
HTLCs before all of them are locked in, because in that case they are releasing 
the transaction receipt before fully being paid.

On Thu, Feb 8, 2018 at 8:41 AM, Johan Torås Halseth <joha...@gmail.com> wrote:
An obvious way to make this compatible with proof-of-payment would be to
require two hashes to claim the HTLC: the preimage of the invoice payment hash
(as today) + the new hash introduced here. This would give the sender a receipt
after only one of the HTLCs was claimed. Would require changes to the scripts
of course.
With Schnorr/EC operations this could probably be made more elegant, as 
mentioned.
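The two-hash claim condition above can be modelled in a few lines with hashlib. This models the script check in Python rather than actual Bitcoin Script, and the names are illustrative.

```python
import hashlib

def sha256(b):
    return hashlib.sha256(b).digest()

def htlc_claimable(invoice_preimage, amp_preimage, payment_hash, amp_hash):
    # The HTLC can be claimed only with BOTH preimages: the invoice one
    # (whose revelation is the sender's proof of payment) and the AMP one
    # (reconstructable only once all partial payments have arrived).
    return (sha256(invoice_preimage) == payment_hash
            and sha256(amp_preimage) == amp_hash)
```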

- Johan
On Wed, Feb 7, 2018 at 18:21, Rusty Russell <ru...@rustcorp.com.au> wrote:
Olaoluwa Osuntokun <laol...@gmail.com> writes:
> Hi Y'all,
>
> A common question I've seen concerning Lightning is: "I have five $2
> channels, is it possible for me to *atomically* send $6 to fulfill a
> payment?". The answer to this question is "yes", provided that the receiver

This is awesome! I'm kicking myself for not proposing it :)

Unfortunately, your proposal defines a way to make multipath donations,
not multipath payments :(

In other words, you've lost proof of payment, which IMHO is critical.

Fortunately, this can be fairly trivially fixed when we go to scriptless
scripts or other equivalent decorrelation mechanism, when I think this
mechanism becomes extremely powerful.

> - Potential fee savings for larger payments, contingent on there being a
> super-linear component to routed fees. It's possible that with
> modifications to the fee schedule, it's actually *cheaper* to send
> payments over multiple flows rather than one giant flow.

This is a stretch. I'd stick with the increased reliability/privacy
arguments which are overwhelmingly compelling IMHO.

If I have any important feedback on deeper reading (and after a second
coffee), I'll send a separate email.

Thanks!
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning

2018-02-08 Thread Johan Torås Halseth
An obvious way to make this compatible with proof-of-payment would be to 
require two hashes to claim the HTLC: the preimage of the invoice payment
hash (as today) + the new hash introduced here. This would give the sender 
a receipt after only one of the HTLCs was claimed. Would require changes to 
the scripts of course.
With Schnorr/EC operations this could probably be made more elegant, as 
mentioned.


- Johan
On Wed, Feb 7, 2018 at 18:21, Rusty Russell  wrote:
Olaoluwa Osuntokun  writes:
> Hi Y'all,
>
> A common question I've seen concerning Lightning is: "I have five $2
> channels, is it possible for me to *atomically* send $6 to fulfill a
> payment?". The answer to this question is "yes", provided that the
> receiver


This is awesome! I'm kicking myself for not proposing it :)

Unfortunately, your proposal defines a way to make multipath donations,
not multipath payments :(

In other words, you've lost proof of payment, which IMHO is critical.

Fortunately, this can be fairly trivially fixed when we go to scriptless
scripts or other equivalent decorrelation mechanism, when I think this
mechanism becomes extremely powerful.

> - Potential fee savings for larger payments, contingent on there being a
> super-linear component to routed fees. It's possible that with
> modifications to the fee schedule, it's actually *cheaper* to send
> payments over multiple flows rather than one giant flow.

This is a stretch. I'd stick with the increased reliability/privacy
arguments which are overwhelmingly compelling IMHO.

If I have any important feedback on deeper reading (and after a second
coffee), I'll send a separate email.

Thanks!
Rusty.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] [Question] Unilateral closing during fee increase.

2018-01-16 Thread Johan Torås Halseth
Hi Jonathan,
This is definitely a problem! I have a mainnet channel I force-closed 2 weeks
ago that is still not mined :(
With the current spec, I guess not much can be done other than crossing
fingers. For future specs, maybe someone could come up with some SIGHASH-flag
magic to either (1) allow adding an extra input that could go to fees, or (2)
add both a new input and output, where the difference goes to fees. I believe
this would change the TXID of the commitment transaction, but I'm not sure if
that's a problem? (Watchtowers come to mind.)
If keeping the TXID intact is important, one solution would be to always add a 
small (1 satoshi?) output to every commitment transaction, that anyone can 
spend. So if the commitment transaction has a fee too low, you could do CPFP on 
this small output, making it confirm. You could even make a batch sweep of many 
such outputs, and they would be (un)fairly cheap since they don’t need a 
signature. Do you think this would work?
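The fee such a CPFP child would need is simple arithmetic: the combined package feerate must reach the target. An illustrative helper (units are satoshis and vbytes; the numbers below are hypothetical, not from the spec):

```python
def cpfp_child_fee(parent_fee, parent_vsize, child_vsize, target_feerate):
    # The child spending the anyone-can-spend output must bring the
    # package up to the target:
    #   (parent_fee + child_fee) / (parent_vsize + child_vsize) >= target
    needed = target_feerate * (parent_vsize + child_vsize) - parent_fee
    return max(0, needed)
```

For example, a 700 vB commitment that paid 40 sat/vB (28,000 sats), bumped to 100 sat/vB by a 150 vB child, needs the child to pay 57,000 sats.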
- Johan

On Sun, Jan 14, 2018 at 2:30, Jonathan Underwood  
wrote:
Hey everybody.
Say that the last time we updated channel state, we assumed 40 satoshi/byte was 
enough to get confirmed, then I leave the channel for a few weeks, come back to 
find my partner fell off the face of the internet.
I perform a unilateral close with my output on a CSV timelock... but it turns
out there's 500 MB of txes at around 100 satoshi/byte, and let's say my
transaction will never get confirmed at 40 sat/byte.
What course of action can I take?
1. to_local output can't be redeemed until the commitment transaction (which 
will "never confirm") is confirmed + the CSV timeout.
2. to_remote output probably won't be redeemed as the other person is offline.
The only remedy I can think of is hope that the other person comes back online 
and CPFPs your to_remote output for you... but at that point it would be better 
for them to just amicably close with normal outputs... so basically your only 
hope is wait for other person to come online.

Since CSV will cause script verification to fail, a CPFP transaction will not 
be propagated.
If we can't CPFP, the CSV timer won't start (it starts once the CSV containing 
output is confirmed).
Seems like a problem.
Anyone have any solutions?
Thanks, Jon
--
-
Jonathan Underwood
Chief Bitcoin Officer, bitbank, Inc.
If you would like to send an encrypted message, please use the public key below.
Fingerprint: 0xCE5EA9476DE7D3E45EBC3FDAD998682F3590FEA3
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Fee disentanglement for 1.1 spec?

2018-01-16 Thread Johan Torås Halseth
Hi Rusty,
This is something I’ve been thinking a bit about, as I’ve stumbled into some of 
the edge cases you mention. Just to get on the same page: does the other side 
(non-funder) pay any fees in the current implementation? [1] suggests that the 
funder pays everything atm (on both sides’ commitment), so I reckon you are 
talking about dual-funder from here on.
[1] 
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md#fee-payment
Problem #1: If we can make each side pay the fee for the HTLCs they are adding
(on both commitments!), in addition to your suggestion of having them set fees
independently, we get a nice bonus: any update Alice makes can only decrease
*her* balance, not Bob's. She can add HTLCs (she must pay HTLC + fee), do a fee
update (potentially increasing the fees she must pay), or settle/fail HTLCs
(which cannot decrease her balance). This is in contrast to the current spec,
where an added HTLC can make Bob pay the fee (if he's the funder), and a fee
update can make Bob unable to afford the new fee (the race you mentioned).
I think this will work without the check (2) you mention, since if Alice sends 
a fee update, it will apply only to her HTLCs, and as mentioned can only 
decrease her balance. It doesn't matter if Bob adds `max_accepted_htlcs` HTLCs at the 
same time, since those fees will then be subtracted from his balance, and are 
not affected by the fee update.
This would make a lot of the edge cases go away, and would make it a lot easier 
to verify that a node is not violating its channel reserve. Let me know if I’m 
missing something.
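A toy sketch of this "adder pays" accounting, assuming a flat per-HTLC fee for simplicity (the numbers and the flat-fee model are illustrative only, not from any implementation):

```python
FEE_PER_HTLC = 1_000  # sat; stand-in for the real per-HTLC commitment fee

def add_htlc(balances, sender, amount_sat, fee_sat=FEE_PER_HTLC):
    """Under the 'adder pays' rule, an update can only decrease the
    sender's own balance: the HTLC amount and its fee both come out
    of the sender, never the other party."""
    cost = amount_sat + fee_sat
    if balances[sender] < cost:
        raise ValueError("cannot afford HTLC + fee")
    balances[sender] -= cost
    return balances

balances = {"alice": 5_000, "bob": 5_000}
add_htlc(balances, "alice", 2_000)
assert balances == {"alice": 2_000, "bob": 5_000}  # Bob is untouched
```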
- Johan

On Tue, Jan 16, 2018 at 0:55, Rusty Russell  wrote:
Hi all,

Wanted to post some thinking on this for everyone to mull over,
though I know we're all busy.

Your consideration and feedback gratefully accepted!

Problem #1
==
For simplicity, one side (funder) sets fees, but other side
needs to check they're sufficient, and not insanely high (as *they* end
up paying for HTLC txs). If they disagree, they close channel; this
happens a fair bit on testnet, for example, and will be worse with
different backends (eg. different bitcoind versions, or btcd, etc).

Solution

Have both sides set fees independently. I specify what fees my
commitment tx and HTLC txs will have, you do the same. This ties in
well with the dual-funder proposals.

Implementation:
--
c-lightning did something similar pre-spec. The way it works is both
sides set an initial fee in `open_channel`: this is the only point at
which the counterparty checks it's reasonable.

You send an `update_fee` message like now, but it has no effect on the
other side: it's applied to *you* when they 'revoke_and_ack', like any
other change.

You disallow any `update_fee` which the other side could not afford with
(1) all their current HTLCs, AND (2) the `max_accepted_htlcs` from you.
That covers the race where one side sets fees and the other adds a heap
of HTLCs and the two cross over.

Also disallow any new HTLCs you can't pay fees for (given the same
worst-case calculation as above), and if one side can't afford fees,
pull from its reserve and the other side as necessary (this can only happen
during the initial phase, and is the same logic as now).
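The worst-case affordability check above can be sketched roughly as follows, using the BOLT 3 weights (724 weight units for the base commitment transaction, 172 per HTLC output) and a feerate in sat per kiloweight; the function shape is illustrative, not from any implementation:

```python
BASE_COMMIT_WEIGHT = 724   # BOLT 3 base commitment tx weight
HTLC_OUTPUT_WEIGHT = 172   # BOLT 3 weight added per HTLC output

def update_fee_allowed(payer_balance_sat, current_htlcs,
                       max_accepted_htlcs, feerate_per_kw):
    """Reject an update_fee the payer could not afford in the worst
    case: all current HTLCs plus the full max_accepted_htlcs the
    other side may still add before the fee change is acked."""
    worst_case = current_htlcs + max_accepted_htlcs
    weight = BASE_COMMIT_WEIGHT + worst_case * HTLC_OUTPUT_WEIGHT
    fee_sat = weight * feerate_per_kw // 1000
    return payer_balance_sat >= fee_sat

# 2 live HTLCs, up to 3 more allowed, 2500 sat/kw -> worst-case fee 3960 sat
assert update_fee_allowed(4_000, 2, 3, 2_500)
assert not update_fee_allowed(3_900, 2, 3, 2_500)
```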


Problem #2
==
Predicting the future is hard. Today's sufficient fee may be a gross
overpayment or insufficient tomorrow.

Solution

Allow multiple simultaneous fee levels.

Implementation:
---
This means multiple signatures in each `commitment_signed` message,
which is more work and more storage. But since our nSequence is <
0xFFFFFFFE anyway, all transactions are RBF-ready except the
closing_transaction. We might want to change that; we already allow
re-negotiation of closing tx by reconnecting anyway.
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Insufficient funder balance for paying fees

2018-01-12 Thread Johan Torås Halseth
Hi Pierre,
You’re right, that looks very much like the same kind of situation!
I agree, it looks from [2] that a node may fail the channel in this case, and 
that it probably should, so as not to risk ending up with all funds in an 
unpublishable tx. Otherwise this seems like something that could be used as a 
DoS attack vector by a malicious counterparty.

Relevant to this: We use a node’s resulting output (that is, after subtracting 
fees) when checking that the channel_reserve is met. In these cases we can 
therefore end up violating the reserve, even though none of the nodes are 
actually violating the protocol. When this happens we don’t really end up with 
an unpublishable tx, as the fees are still high enough, and I guess each node 
can choose what to do. I think we will just fail the channel to not have to 
deal with this as a special case.
Anyway, I think these are cases that are not very likely to occur, especially 
with the right choice of parameters as you mention. And because of this it 
might be less error-prone to just fail the channel instead of trying to recover 
from it.
Thanks! - Johan
On Fri, Jan 12, 2018 at 12:56, Pierre  wrote:
Hi Johan,

That's an interesting corner case. I think it shares some similarities
with the race condition described in BOLT 2 [1], whose handling is
specified in BOLT 3 [2].

Note that what matters really is the timing of the
`commit_sig`/`revoke_and_ack` messages, not the `update_add_htlc`s,
because of the acknowledgment logic that excludes remote's unsigned
updates. A side effect is that there can be multiple HTLCs on each
side.

Each party will end up receiving a commitment tx which has
insufficient (possibly zero) fees. At that point according to [2] it
may decide to fail the channel, using its previous commitment (which
it hasn't yet revoked). Currently eclair won't fail the channel, but I
think we probably should, especially if we are the fundee and would
end up with all funds in an unpublishable tx. The funder could face
the same situation if the pending htlcs have a high value (at this
point its main output is zero anyway).

An appropriate choice of channel parameters (mainly
`max_htlc_value_in_flight_msat`, `channel_reserve_satoshis`,
`max_accepted_htlcs`) could probably reduce the probability of this
happening.

Hope that helps,

Pierre

[1] 
https://github.com/lightningnetwork/lightning-rfc/blob/master/02-peer-protocol.md#updating-fees-update_fee
[2] 
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md#fee-payment
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


[Lightning-dev] Insufficient funder balance for paying fees

2018-01-12 Thread Johan Torås Halseth
Hi all,
I am wondering how Eclair and c-lightning are handling the following case, as I 
wasn't able to derive exactly how to handle it from the BOLTs. If it is 
described somewhere, please point me to it; if not, let's agree on a strategy, 
and get it in :)
Let's say Alice is the funder of the channel (meaning she is paying all fees) 
between Alice and Bob. She wants to add an HTLC, and has just enough balance 
available for the HTLC + the extra fee for adding it to the commitment 
transaction. At the same time Bob wants to add an HTLC, and sees that Alice has 
enough balance to be able to pay the fee for receiving this HTLC (add it to her 
commitment tx).
They both send the AddHTLC at the same time, thinking Alice has enough balance 
available, but she doesn't have enough to cover her own HTLC+fee, in addition 
to the fee for adding Bob's. Adding both the HTLCs to her commitment 
transaction will either 1) violate the channel reserve requirement, or 2) 
deplete her channel completely if the channel reserve is set to 0.
What is the expected way of handling this case, from both Alice and Bob’s point 
of view?
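With made-up numbers, the race arithmetic looks roughly like this (all values are illustrative; real fees depend on the feerate and commitment weight):

```python
# Alice (the funder) pays all commitment fees. Hypothetical numbers, in sat:
fee_per_htlc = 200     # extra commitment fee per added HTLC output
alice_balance = 1_200

# Alice adds an HTLC using everything she can afford:
alice_balance -= 1_000 + fee_per_htlc   # her HTLC amount + its fee -> 0 left

# Concurrently Bob adds an HTLC; as funder, Alice must pay its fee too:
alice_balance -= fee_per_htlc           # -> -200

assert alice_balance == -200  # reserve violated / commitment unpayable
```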
Cheers, Johan
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Invoice without amount

2018-01-11 Thread Johan Torås Halseth
Hi, Cezary,
I initially read the issue you created as a bug filing, my bad. I’ve updated it 
to reflect that it is a feature request.
Cheers! Johan

On Thu, Jan 11, 2018 at 18:16, Cezary Dziemian  
wrote:
I made an issue 5 days ago: https://github.com/lightningnetwork/lnd/issues/564
2018-01-11 0:21 GMT+01:00 Olaoluwa Osuntokun <laol...@gmail.com>:

Hi Cezary,
I invite you to make an issue on lnd's issue tracker: 
https://github.com/lightningnetwork/lnd/issues. This list isn't the place for 
support requests/features for individual implementations.
On Tue, Jan 9, 2018 at 5:21 AM Cezary Dziemian <cezary.dziem...@gmail.com> wrote:
Good news, thanks.

Do Lightning Labs also have plan to introduce this soon? I prefer to stay with 
lnd, as we already know this implementation better, but if this option will not 
be introduced soon, we have to switch to use c-lightning.

Cheers,
Cezary
2018-01-09 5:31 GMT+01:00 Rusty Russell <ru...@rustcorp.com.au>:
ZmnSCPxj via Lightning-dev <lightning-dev@lists.linuxfoundation.org> writes:

> Good morning Cezary,
>
> Currently, c-lightning can PAY amountless invoices via the "pay" command, but 
> cannot CREATE them via the c-lightning "invoice" command.

Good point, I've filed an issue for this, so we don't lose it:

https://github.com/ElementsProject/lightning/issues/534

> To pay an amountless invoice lntb... 4 satoshis in c-lightning: lightning-cli 
> pay lntb.. 4000

Note that I'd prefer the "msatoshi" field to be a magic string
(eg. "any") rather than omitting the parameter, since it's too easy for
a bug to omit the parameter.

Cheers,
Rusty.

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] SegWit and LN

2018-01-02 Thread Johan Torås Halseth
That’s correct :)

On Tue, Jan 2, 2018 at 15:34, Praveen Baratam <praveen.bara...@gmail.com> wrote:
Thank you for explaining @Hafeez & @Johan
Now that all the BIPs necessary for LN, including SegWit (soft fork), are active 
on the mainnet, are we just waiting for the LN implementations to mature, or are 
there any other issues?
On Tue, Jan 2, 2018 at 8:01 PM, Johan Torås Halseth <joha...@gmail.com> wrote:
Hi,
Before you can safely broadcast the funding transaction, the two parties 
involved in a channel must have signed a commitment transaction spending the 
output from the funding transaction. Without segwit, the funding transaction 
can be malleated, leaving the commitment transaction invalid, and funds locked 
up if one of the parties stops cooperating.

Cheers, Johan
On Tue, Jan 2, 2018 at 15:11, Hafeez Bana <hafeez.b...@gmail.com> wrote:
to fix transaction malleability

On Tue, Jan 2, 2018 at 1:53 PM, Praveen Baratam <praveen.bara...@gmail.com> wrote:
Why is SegWit required for LN? If we wait for the funding transaction to be 
confirmed, we can then safely create and update unconfirmed commitment 
transactions...
I don't see how SegWit is important here... Am I missing something?

--
Dr. Praveen Baratam
about.me [http://about.me/praveen.baratam]
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev



--
Dr. Praveen Baratam
about.me [http://about.me/praveen.baratam]
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Comments on BOLT#11

2017-12-12 Thread Johan Torås Halseth
Just a few quick comments, as any improvements to BOLT#11 are very much 
appreciated :)

* I think we should set a reasonable max length for invoices, that MUST be met. 
This would simplify internal database logic (since you don't have to plan for 
invoices of unbounded size), and could make sure the error detection is fair for 
all supported lengths. Not sure if 1023 is enough, considering possibly 
multiple `r` tags and a juicy description.
* Agree with UTF-8 support, and up to 640 bytes length (this can be made 
explicit in the Bolt, as now it is limited by the 5-bit length field) :)
* Why must the description hash URL be part of the invoice? I always imagined 
this would be used between clients that already had agreed on payment for some 
kind of data, and that this hash would just ensure you were paying the correct one.
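For reference, the tagged-field length in BOLT 11 is encoded as two 5-bit bech32 characters (10 bits total), which caps a single field at 1023 five-bit groups, i.e. 639 whole bytes; the sketch below just spells out that arithmetic:

```python
# data_length = two 5-bit bech32 chars -> at most 1023 five-bit groups
max_data_length = 2 ** 10 - 1             # 1023
max_field_bytes = max_data_length * 5 // 8

assert max_data_length == 1023
assert max_field_bytes == 639             # just under the "640 bytes" above
```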
- Johan (please explain the Japanese lightning meme plz)
On Tue, Dec 12, 2017 at 6:15, Jonathan Underwood  
wrote:
I made a payment request using UTF-8 description here: 
lntb1pdz7e9epp5qqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqypqdpquwpc4curk03c9wlrswe78q4eyqc7d8d0e5l2jffz6amujxz82mtagde82dv8jku2jac79k0yxnjmr0l3f4y5x0jxt46vmcrc0ukzh7l99vdmkezsettfwr4gqnhs2ndx8wdqgfsp82rnvp

using this code (I just separated encode from sign):

ln.sign(
  ln.encode({
    tags: [
      { tagName: 'payment_hash',
        data: '0001020304050607080900010203040506070809000102030405060708090102' },
      { tagName: 'description', data: 'ナンセンス 1杯' }
    ]
  }, false),
  Buffer.from('e126f68f7eafcc8b74f54d269fe206be715000f94dac067d1c04a8ca3b2db734', 'hex')
).paymentRequest


Full results:
{
  "coinType": "testnet",
  "payeeNodeKey": "03e7156ae33b0a208d0744199163177e909e80176e55d97a2f221ede0f934dd9ad",
  "paymentRequest": "lntb1pdz7e9epp5qqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqqqsyqcyq5rqwzqfqypqdpquwpc4curk03c9wlrswe78q4eyqc7d8d0e5l2jffz6amujxz82mtagde82dv8jku2jac79k0yxnjmr0l3f4y5x0jxt46vmcrc0ukzh7l99vdmkezsettfwr4gqnhs2ndx8wdqgfsp82rnvp",
  "recoveryFlag": 1,
  "satoshis": null,
  "signature": "cd3ea92522d777c9184756d7d437275358795b8a9771e2d9e434e5b1bff14d49433e465d74cde0787f2c2bfbe52b1bbb6450cad6970ea804ef054da63b9a0426",
  "tags": [
    { "tagName": "payment_hash",
      "data": "0001020304050607080900010203040506070809000102030405060708090102" },
    { "tagName": "description", "data": "ナンセンス 1杯" }
  ],
  "timestamp": 1513055417,
  "timestampString": "2017-12-12T05:10:17.000Z"
}

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev


Re: [Lightning-dev] Directionality of the transaction fees

2017-12-07 Thread Johan Torås Halseth
Hi, Edward! Welcome to the mailing list :)
The fees can indeed be set for each direction of the channel, check out 
https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md#the-channel_update-message
Basically each node in the channel can announce the fee it will take to route 
a payment in the direction leading "away" from it. We also have something 
called "channel reserves" ensuring that each node always has some balance at 
stake in case an old state is broadcast.
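The per-direction fee advertised in `channel_update` is computed from the two fields `fee_base_msat` and `fee_proportional_millionths` (BOLT 7); a minimal sketch:

```python
def routing_fee_msat(amount_msat, fee_base_msat, fee_proportional_millionths):
    """Fee a node charges for forwarding amount_msat in one direction,
    per its channel_update for that direction (BOLT 7)."""
    return fee_base_msat + amount_msat * fee_proportional_millionths // 1_000_000

# Forwarding 100,000 sat with base fee 1 sat and 0.01% proportional fee:
assert routing_fee_msat(100_000_000, 1_000, 100) == 11_000  # msat
```

Since each node publishes its own `channel_update` per direction, exactly the ramp-up described in the quoted message is possible with today's gossip.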
Cheers! Johan

On Wed, Dec 6, 2017 at 12:04, Edziu Marynarz  wrote:
I tried to find this information in the BOLT documents but I couldn't find it. 
Is it possible to set up the channel so that the fee depends on the direction, 
i.e. a different fee on the receive direction and different on the send one? 
Why such functionality?
Imagine that you start with a bidirectional channel to Alice and a channel to 
Bob with 1000 satoshi each. For some reasons, the network routes most of the 
transactions from Alice to you and then to Bob and you end up with 1900 satoshi 
in the Alice channel and only 100 satoshi in the Bob one. The route will stop 
working and if there is little traffic in the other direction you will have to 
close the channels to rebalance them or wait a very long time for the 
rebalancing.
If the fee could depend on the direction, one could start to ramp up fees on 
the receiving end of the channel that is getting large and lower the one that 
is empty to prevent the imbalance.
There is also a risk factor involved. The lightning network channels get 
riskier on the receive side the more the channel value deviates from the 
original state since the counterparty may try to broadcast the old state so you 
may want to regulate this imbalance with fees. It would be best if the LN 
applications could do it automatically.
Regards,
Edward
___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev