[Lightning-dev] CVE-2020-26895: LND Low-S Tx-Relay Standardness
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256

Hi all,

Today we are writing to disclose the details of CVE-2020-26895 as a follow-up to the partial disclosure sent to lightning-dev [1].

## Abstract

Prior to v0.10.0-beta, a malicious peer could force an lnd node to accept a high-S ECDSA signature when updating new off-chain states. Though such signatures are valid according to consensus rules, standard mempool policy would reject transactions containing high-S values, potentially leading to loss of funds if time-sensitive transactions cannot be relayed and confirmed. We have no evidence of the bug being exploited in the wild. It affects all classes of lnd nodes: routing, merchant, mobile, etc. The vulnerability was reported privately to the lnd team by Antoine Riard.

## Background

The lightning-rfc specifies a fixed-width, 64-byte encoding used to transmit ECDSA signatures in the Lightning protocol, which differs from the DER-encoding used at the consensus layer. For regular, on-chain transactions, signature serialization is handled by the btcec library's Signature.Serialize() method [2]. This method always normalizes signatures to their low-S variant before performing the DER-encoding, ensuring that the btcec library can't _produce_ high-S signatures.

Early in lnd's history, however, serialization code modeled on btcec was added to produce DER-encoded signatures directly from the fixed-width representation, bypassing the conversion into the big.Int representation used internally by btcec. In doing so, the low-S normalization behavior was overlooked, and so Sig.ToSignatureBytes() [3] would return a high-S DER signature whenever the fixed-size signature was encoded with a high-S value.

During unilateral closure, this can be exploited by an attacker to prevent a second-level HTLC-success transaction from being accepted into the mempool. If the victim is unable to patch before the HTLC's CLTV expires, the attacker can then broadcast their HTLC-timeout transaction and recover the full value of the HTLC minus fees. lnd's cooperative close, on the other hand, verifies the remote party's signatures using full, policy-aware verification. As a result, the only exploitation vector occurs during the force close scenario.

## Updates to Lightning RFC

As noted by Riard during the process, the lightning-rfc is lacking in terms of specifying how nodes should validate signatures accepted off-chain. Notably, the signatures should be checked for conformance to both consensus _and_ tx-relay standardness rules, and rejected otherwise. Riard has confirmed that he is planning to submit an update to the specification incorporating these recommendations.

## Patch

This vulnerability was fixed in v0.10.0-beta by converting all witness construction methods in lnd to accept signatures according to the input.Signature interface introduced in PR 4172 [4], which requires the passed object to have a Serialize() method. lnwire.Sig does not have a Serialize() method, and so cannot satisfy the interface. As a result, the relevant call sites were updated to pass in a btcec.Signature, forcing witness signature serialization through btcec's Serialize() method, which includes low-S normalization.

Note: A high-S signature can be converted to a low-S one manually without software changes, or by a third party, assuming one is aware of the reason for rejection.
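For illustration, a minimal sketch of the low-S normalization that btcec's Serialize() applies internally, written against the btcec v1 API; this is illustrative only, not the lnd patch itself:

```go
// Illustrative only: low-S normalization as applied by btcec's
// Signature.Serialize(), written against the btcec v1 API.
package sketch

import (
	"math/big"

	"github.com/btcsuite/btcd/btcec"
)

// halfOrder is N/2 for secp256k1, the threshold above which S is "high".
var halfOrder = new(big.Int).Rsh(btcec.S256().N, 1)

// normalizeLowS returns a copy of sig with S replaced by N-S when S > N/2.
// Both variants are valid under consensus rules, but tx-relay standardness
// only accepts the low-S form.
func normalizeLowS(sig *btcec.Signature) *btcec.Signature {
	s := new(big.Int).Set(sig.S)
	if s.Cmp(halfOrder) > 0 {
		s.Sub(btcec.S256().N, s)
	}
	return &btcec.Signature{
		R: new(big.Int).Set(sig.R),
		S: s,
	}
}
```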
Though the above recommendation to the spec by Riard also mitigates the issue, this approach was chosen because it could retroactively patch affected nodes if they are upgraded before the HTLC deadline expires, as well as for its covertness. After upgrading, any outstanding broadcasts would be reattempted, this time normalizing any previously persisted high-S signatures into their low-S variant. Following the disclosure, lnd will also introduce the full tx-relay standardness checks that are to be added to the lightning-rfc, as this offers a more general and complete approach to ensuring lnd always adheres to standardness rules.

## Timeline

04/03/2020 - Initial report from Antoine Riard
04/10/2020 - PR 4172 merged into master
04/29/2020 - lnd v0.10.0-beta released
08/20/2020 - lnd v0.11.0-beta released
10/08/2020 - Partial Disclosure sent to lightning-dev and lnd mailing list [1]
10/20/2020 - Full Disclosure sent to lightning-dev and lnd mailing list

## References

[1] https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002819.html
[2] https://github.com/btcsuite/btcd/blob/ba530c4abb35ea824a1d3c01d74969b5564c3b08/btcec/signature.go#L47
[3] https://github.com/lightningnetwork/lnd/blob/0f94b8dc624cf0e96ddc8fe1b8e3bf4b3fc4c074/lnwire/signature.go#L92
[4] https://github.com/lightningnetwork/lnd/pull/4172
[5] https://gist.github.com/ariard/fb432a9d2cd3ba24fdc18ccc8c5c6eb4

Huge thanks to Antoine Riard for the responsible disclosure and for helping to make lnd more safu. More information can be found in Antoine's disclosure [5].

Regards,
Conner Fromknecht
[Lightning-dev] CVE-2020-26896: LND Invoice Preimage Extraction
ctim, limiting control of timing and the amount that can be siphoned. Malice must also somehow infer or guess that Bob has the corresponding invoice being paid. If Malice runs the same attack without intercepting a real HTLC, she pays routing fees, and possibly chain fees, in exchange for the invoice preimage and the identity of the receiver. However, it is possible for her to indirectly profit from this if the service provider releases tangible goods or services to anyone with knowledge of the invoice preimage, which is not recommended in practice.

The upstream attacker does not need to be adjacent; they only need to know which channel to target and watch for closure. Being adjacent increases the certainty of pulling off an exploit, but is not strictly required. Similarly, the downstream attacker (possibly distinct from Malice) does not need to be adjacent; they can settle the malicious HTLC further downstream to the same effect, at the cost of more routing fees.

## Patch

This vulnerability was patched in lnd v0.11.0-beta by properly isolating the preimage database from the invoice database according to the HTLC's next_hop field, in commit cf739f3f [3] of PR 4157 [4]. The isolation ensures that we can only claim forwarded HTLCs as a result of learning the preimage from an outgoing HTLC. It also fixes the privacy leak by not revealing invoice preimages unless the node is the final destination.

Due to the complexities involved in describing vulnerabilities over textual mediums, the full nature of the issue wasn't understood until after v0.10.0-beta had been released. Additionally, the covert fix contained in the v0.11.0-beta release was pushed back due to a concurrent investigation into network instabilities resulting in unexpected channel closures.

Note that although the above patch fixes the issue, it could also have been avoided by having receivers require payment secrets (BOLT 11 `s` field), since the attacker would be unable to guess the payment secret. However, while payment secrets remain optional, an attacker can always downgrade to using malicious HTLCs that omit the payment secret. For some time we have debated flipping the switch on requiring payment secrets across the three major implementations. This vulnerability is further evidence of their additional safety and privacy benefits. Now, almost a year since the initial deployment of payment secrets in lnd, the upcoming v0.12.0-beta release of lnd is likely to make payment secrets required by default. We would welcome other implementations doing the same.

## Timeline

04/19/2020 - Initial report from Antoine Riard
04/29/2020 - lnd v0.10.0-beta released
07/07/2020 - PR 4157 merged into master
08/20/2020 - lnd v0.11.0-beta released
10/08/2020 - Partial Disclosure sent to lightning-dev and lnd mailing list [1]
10/20/2020 - Full Disclosure sent to lightning-dev and lnd mailing list

## References

[1] https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-October/002819.html
[2] https://github.com/lightningnetwork/lnd/blob/9f32942a90bcd91cc37a4a9c6c2fb454f534a65d/invoices/update.go#L229
[3] https://github.com/lightningnetwork/lnd/pull/4157/commits/cf739f3f87fdcb28ab45dfd48e3d18adf26e45b3
[4] https://github.com/lightningnetwork/lnd/pull/4157
[5] https://gist.github.com/ariard/6bdeb995565d1cc292753e1ee4ae402d

A big thank you to Antoine for the responsible disclosure and for helping to make lnd more safu. More information can be found in Antoine's disclosure [5].
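For illustration only, here is a hypothetical sketch of the next_hop-based isolation described in the Patch section; the types and names are invented for the example and do not mirror lnd's actual code:

```go
// Hypothetical sketch of isolating forwarded-HTLC preimages from invoice
// preimages; invented types, not lnd's actual implementation.
package sketch

type paymentHash [32]byte

// preimageCache holds preimages learned from outgoing (forwarded) HTLCs.
type preimageCache map[paymentHash][32]byte

// invoiceRegistry holds preimages for invoices this node itself issued.
type invoiceRegistry map[paymentHash][32]byte

// incomingHTLC carries the fields relevant to settlement.
type incomingHTLC struct {
	hash       paymentHash
	isFinalHop bool // derived from the onion payload's next_hop field
}

// settlePreimage returns the preimage an incoming HTLC may be settled with.
// Forwarded HTLCs are only ever resolved against the preimage cache, so
// invoice preimages cannot be extracted by forwarding HTLCs that reuse a
// victim invoice's payment hash.
func settlePreimage(h incomingHTLC, pc preimageCache, ir invoiceRegistry) ([32]byte, bool) {
	if !h.isFinalHop {
		p, ok := pc[h.hash]
		return p, ok
	}
	p, ok := ir[h.hash]
	return p, ok
}
```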
Regards, Conner Fromknecht -BEGIN PGP SIGNATURE- iQIzBAEBCAAdFiEEnI1hhop8SSADsnRO59c3tn+lkscFAl+PWzMACgkQ59c3tn+l kseIJw//UwswUyh6BNgmi4D8NoC6olelW0dRmecqcZF7JBQa619kVFm/D7rixp33 J1YsXvZC2OLTpqmaJcJ3OvBKLVcW7CxheDp3Pm0JjrfVnmOl1NGX4CSymL6Zpou7 nFqh+nqOZ2n6o4OIv+mx0y2YANKjAVtAcr9LakubMn/3LgYzqvKKu39QGqrtz9vZ lYGAAPU3zlAjIjFNv56xWpF0Pj9VE2mQB27w2QmbSuNtR21feOSJhJimEvmXhk6d O0Ze78Fea+eaS+d1uyRkB7aaEKBRAA5WCtDKgSOwfEY+mHC7u5+LRasyegjlc8Ie hYBNOsjEZqVjwIgr+lqMDbQ8B5RtW4LVro/LMYGCbVRnGuF16gHu/lkDnVgz/sY7 sbsPVG11wfVFH0U/TyJoBC8qOmeHMJoVsvGbY9I2XQiFw7yAbWxEdU+7mMhQZA2Z Zd9pl0ATByLFPyg58gA6G4JV+F45DvYrG3jj6cdkUvL2nQST08IZtTjnDxAnkDTk HwnJo0fd7vsixEyssTMuSCjbGSaPDMPCkmNQg8PAhhoIK8MeKUlylCKJuM6gMWeW YypzGBmE6O7OtoMTOYFWysU67edVXgQTV2dD/PE6abTYOfS79gvkNekU4BvW9NDE af0JBywXovzNVshdqpijPBleOT8/QSyTdLvI78ev+zpJMsKMugU= =PY6W -END PGP SIGNATURE- ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
Re: [Lightning-dev] Partial LND Vulnerability Disclosure, Upgrade to 0.11.x
Hi all, For those looking to verify the gpg signature, please be sure the support email is formatted correctly. For example, the archive replaces "@" with " at ", and apparently google groups trims "support" to "sup...". If you run into issues, please double check the plaintext matches verbatim with what was sent on lightning-dev. Cheers, Conner On Thu, Oct 8, 2020 at 5:19 PM Conner Fromknecht wrote: > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Hi all, > > We are writing to let the Lightning community know about the existence of > vulnerabilities that affect lnd versions 0.10.x and below. The full details of > these vulnerabilities will be disclosed on October 20, 2020. The circumstances > surrounding the discovery resulted in a compressed disclosure timeline > compared > to our usual timeframes. We will be publishing more details about this in the > coming weeks along with a comprehensive bug bounty program. > > While we have no reason to believe these vulnerabilities have been exploited > in > the wild, we strongly urge the community to upgrade to lnd 0.11.0 or above > ASAP. > Please ping us on the #lnd IRC channel, the LND Slack, or at > support@lightning.engineering if you need any assistance in doing so. Upgrade > instructions can be found in our installation docs: > https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md#installing-lnd. > > Regards, > Conner Fromknecht > -BEGIN PGP SIGNATURE- > > iQIzBAEBCAAdFiEEnI1hhop8SSADsnRO59c3tn+lkscFAl9/ozwACgkQ59c3tn+l > kscVvBAAk21z6tlHPkOSwfj1lBE0pqc65A6Qa927WEjN5hdUpjjof4Xo2j+GzbnN > Uoj4HGZu+koakzoVpJ4mzN+vg086zAnv+K668hhl7bbPHsQu6FqA1ALiAyy0nH6H > 1yukXxpRflq53RTIVPjrEnFVdt6FCLhkCm9LuOk0a/SUf8D4b/N6OaB1Bxupeceu > QFSCIkb9kvW/Eplwkv7PEnx/IZNGIQP9F11DaKLTAjWY5RnIxmCw/oamvlP8Mxt8 > /AqlzWVtPVqvwgJLhbMziraXNVV05naHrIXvbXrOI2Q7FZjdaxF+S4EKT4feuq1w > iW7NYSS/u5N2FP3yK8YIdoX0I/nwYQQcpsfbAv2dS4Ql2Td/dyREId4NcchmaKSV > N3w1jByMPWrgUtinl5WEDDOJdUKS2PHkQ95t3s/1uYDFsPz1kXJR2x37a/1AVz/K > 6zQ45wFvHEopFR49hu/CV6MUvsvn4XKzPa46Ii7puaBaNqygx0RwuwlxbxCNxPNQ > v45CaCUEq2Tj3stu7YoYGntFvrXVkxXJocn51eK6D+g0bIEXxaGlPJeTuvifKMTO > 3T3ZEEbCe9UhDUT8Ja2boP2IIi8wAyExGS59k0tndQGzMSjkzWZ0fzgYyyf+y4nt > r3nTCGi5WWe4y1i2KpiYZTRrQkbrNkRf+fnVdlnTS4lcgEWFFiY= > =8t9Q > -END PGP SIGNATURE- ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
[Lightning-dev] Partial LND Vulnerability Disclosure, Upgrade to 0.11.x
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Hi all, We are writing to let the Lightning community know about the existence of vulnerabilities that affect lnd versions 0.10.x and below. The full details of these vulnerabilities will be disclosed on October 20, 2020. The circumstances surrounding the discovery resulted in a compressed disclosure timeline compared to our usual timeframes. We will be publishing more details about this in the coming weeks along with a comprehensive bug bounty program. While we have no reason to believe these vulnerabilities have been exploited in the wild, we strongly urge the community to upgrade to lnd 0.11.0 or above ASAP. Please ping us on the #lnd IRC channel, the LND Slack, or at support@lightning.engineering if you need any assistance in doing so. Upgrade instructions can be found in our installation docs: https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md#installing-lnd. Regards, Conner Fromknecht -BEGIN PGP SIGNATURE- iQIzBAEBCAAdFiEEnI1hhop8SSADsnRO59c3tn+lkscFAl9/ozwACgkQ59c3tn+l kscVvBAAk21z6tlHPkOSwfj1lBE0pqc65A6Qa927WEjN5hdUpjjof4Xo2j+GzbnN Uoj4HGZu+koakzoVpJ4mzN+vg086zAnv+K668hhl7bbPHsQu6FqA1ALiAyy0nH6H 1yukXxpRflq53RTIVPjrEnFVdt6FCLhkCm9LuOk0a/SUf8D4b/N6OaB1Bxupeceu QFSCIkb9kvW/Eplwkv7PEnx/IZNGIQP9F11DaKLTAjWY5RnIxmCw/oamvlP8Mxt8 /AqlzWVtPVqvwgJLhbMziraXNVV05naHrIXvbXrOI2Q7FZjdaxF+S4EKT4feuq1w iW7NYSS/u5N2FP3yK8YIdoX0I/nwYQQcpsfbAv2dS4Ql2Td/dyREId4NcchmaKSV N3w1jByMPWrgUtinl5WEDDOJdUKS2PHkQ95t3s/1uYDFsPz1kXJR2x37a/1AVz/K 6zQ45wFvHEopFR49hu/CV6MUvsvn4XKzPa46Ii7puaBaNqygx0RwuwlxbxCNxPNQ v45CaCUEq2Tj3stu7YoYGntFvrXVkxXJocn51eK6D+g0bIEXxaGlPJeTuvifKMTO 3T3ZEEbCe9UhDUT8Ja2boP2IIi8wAyExGS59k0tndQGzMSjkzWZ0fzgYyyf+y4nt r3nTCGi5WWe4y1i2KpiYZTRrQkbrNkRf+fnVdlnTS4lcgEWFFiY= =8t9Q -END PGP SIGNATURE- ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
Re: [Lightning-dev] eltoo towers and implications for settlement key derivation
Good evening, > I didn't think this was the design. The update transaction can spend any prior, with a fixed script, due to NOINPUT. >From my reading of the final construction, each update transaction has a unique script to bind settlement transactions to exactly one update. > My understanding is that this is not logically possible? The update transaction has no fixed txid until it commits to a particular output-to-be-spent, which is either the funding/kickoff txout, or a lower-`nLockTime` update transaction output. > Thus a settlement transaction *must* use `NOINPUT` as well, as it has no txid it can spend, if it is constrained to spend a particular update transaction. This is also my understanding. Any presigned descendants of a NOINPUT txn must also use NOINPUT as well. This chain must continue until a signer is online to bind a txn to a confirmed input. The unique settlement keys thus prevent rebinding of settlement txns since NOINPUT with a shared script would be too liberal. Cheers, Conner On Mon, Dec 2, 2019 at 18:55 ZmnSCPxj wrote: > Good morning Rusty, > > > > Hi all, > > > I recently revisited the eltoo paper and noticed some things related > > > watchtowers that might affect channel construction. > > > Due to NOINPUT, any update transaction can spend from any other, so > > > in theory the tower only needs the most recent update txn to resolve > > > any dispute. > > > In order to spend, however, the tower must also produce a witness > > > script which when hashed matches the witness program of the input. To > > > ensure settlement txns can only spend from exactly one update txn, > > > each update txn uses unique keys for the settlement clause, meaning > > > that each state has a unique witness program. > > > > I didn't think this was the design. The update transaction can spend > > any prior, with a fixed script, due to NOINPUT. > > > > The settlement transaction does not use NOINPUT, and thus can only > > spend the matching update. > > My understanding is that this is not logically possible? > The update transaction has no fixed txid until it commits to a particular > output-to-be-spent, which is either the funding/kickoff txout, or a > lower-`nLockTime` update transaction output. > Thus a settlement transaction *must* use `NOINPUT` as well, as it has no > txid it can spend, if it is constrained to spend a particular update > transaction. > > Unless I misunderstand how update transactions work, or what settlement > transactions are. > > Regards, > ZmnSCPxj > -- —Sent from my Spaceship ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
[Lightning-dev] eltoo towers and implications for settlement key derivation
Hi all,

I recently revisited the eltoo paper and noticed some things related to watchtowers that might affect channel construction. Due to NOINPUT, any update transaction _can_ spend from any other, so in theory the tower only needs the most recent update txn to resolve any dispute. In order to spend, however, the tower must also produce a witness script which, when hashed, matches the witness program of the input. To ensure settlement txns can only spend from exactly one update txn, each update txn uses unique keys for the settlement clause, meaning that each state has a _unique_ witness program.

Naively, then, a tower could store settlement keys for all states, permitting it to reconstruct arbitrary witness scripts for any given sequence of confirmed update txns. So far, the only workaround I've come up with to avoid this is to give the tower an extended parent pubkey for each party, and then derive non-hardened settlement keys on demand given the state numbers that get confirmed. It's not the most satisfactory solution though, since leaking one hot settlement key now compromises all sibling settlement keys.

Spending the unique witness programs is mentioned somewhat in section 4.1.4, which refers to deriving keys via state numbers, but to me it reads mostly from the PoV of the counterparties and not a third-party service. Is requiring non-hardened keys a known consequence of the construction? Are there any alternative approaches folks are aware of?

Cheers, Conner ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
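As a rough illustration of the non-hardened derivation workaround described in the message above, here is a sketch using btcsuite's hdkeychain package; indexing children directly by state number is an assumption made for the example, not something specified by the eltoo paper:

```go
// Sketch only: derive a per-state settlement pubkey from an extended public
// key by state number. Because the tower only holds the xpub, the child
// index must be non-hardened, which is also why leaking one child private
// key (plus the chain code) compromises its siblings.
package sketch

import "github.com/btcsuite/btcutil/hdkeychain"

// settlementKeyForState returns the extended public key used in the
// settlement clause of the update transaction at the given state number.
func settlementKeyForState(partyXpub *hdkeychain.ExtendedKey, state uint32) (*hdkeychain.ExtendedKey, error) {
	// Indices below hdkeychain.HardenedKeyStart are non-hardened and can
	// be derived from the public key alone.
	return partyXpub.Child(state)
}
```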
Re: [Lightning-dev] type,len,value standard
Hi ZmnSCPxj,

Precisely, something like that is what I had in mind. Since the max message size is 65KB, one option could be to only allow the varint to be max 2 bytes where:
- if the 8th bit is not set, the lowest 7 bits of the first byte translate to 0-127
- if the 8th bit is set, this implies that the second byte is also treated as part of the length, and the total value is 0x7f & first byte + second byte << 7

This would be fairly straightforward to implement, at the cost of limiting a particular field to 2^15 - 1 bytes. I wonder, is this too restrictive?

At the same time, we could offer a varint that could extend up to three bytes. The third byte would only come into play if the length of the field is greater than 2^14 - 1. The argument could be made that for values of this size, one extra byte is irrelevant compared to the size of these larger fields.

Cheers, Conner

On Thu, Nov 15, 2018 at 1:45 AM ZmnSCPxj wrote: > > Good morning Conner et al, > > > > > 5. `len` - one byte or two? I believe we tend to use two bytes for > > > > various > > > > lengths. > > > > > > > > > > Maybe varint? One byte is not enough for all lengths, but two seems > > > excessive > > > for uint8 or even uint32. > > > > Given that messages are currently only up to 65536 bytes total, is that not > > a bit much? > > Sorry, I misunderstood. > > This is varint, correct? http://learnmeabitcoin.com/glossary/varint > > If so, I think this is good idea. > It seems we do not have varint currently in Lightning (at least the parts I > am familiar with). > I suppose the t-l-v being in a different BOLT would let us make some section > or part for describing `varint`. > > Regards, > ZmnSCPxj ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
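To make the two-byte option floated in the message above concrete, here is a rough sketch of the encoding and decoding; it is illustrative only, not a spec proposal, and the function names are invented:

```go
// Sketch of the two-byte length encoding discussed above: lengths 0-127 use
// one byte; setting the high bit of the first byte pulls in a second byte,
// covering lengths up to 2^15 - 1.
package sketch

import "errors"

// encodeLen encodes length n into 1 or 2 bytes.
func encodeLen(n uint16) ([]byte, error) {
	switch {
	case n <= 0x7f:
		return []byte{byte(n)}, nil
	case n <= 0x7fff:
		// Low 7 bits in the first byte (high bit set as a continuation
		// flag), remaining 8 bits in the second byte.
		return []byte{0x80 | byte(n&0x7f), byte(n >> 7)}, nil
	default:
		return nil, errors.New("length exceeds 2^15 - 1")
	}
}

// decodeLen reverses encodeLen, returning the length and bytes consumed.
func decodeLen(b []byte) (uint16, int, error) {
	if len(b) == 0 {
		return 0, 0, errors.New("empty input")
	}
	if b[0]&0x80 == 0 {
		return uint16(b[0]), 1, nil
	}
	if len(b) < 2 {
		return 0, 0, errors.New("truncated length")
	}
	return uint16(b[0]&0x7f) | uint16(b[1])<<7, 2, nil
}
```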
Re: [Lightning-dev] type,len,value standard
Hi ZmnSCPxj, Thanks for writing this up! I had started an email, but you beat me to it :) > 1. For a sequence of `type,len,value`, each `type` must be unique. -- > accepted. To add to this, it seemed that there was some agreement that repeated fields should be serialized under a single root key, since a receiver can't know if a field is allowed to have duplicates if they don't understand the field. > For a sequence of `type,len,value`, the `type`s must be in ascending order > -- not explicitly accepted or rejected. It would be easier to check > uniqueness > (the previous rule we accepted) here for a naive parser (keep > track of some "minimum allowed type" that initializes at zero, check current > type >= this, update to current type + 1) if `type`s are in ascending order. Yep ascending makes sense to me, for the reasons you stated. > 1, `type` - one byte or two? I'd lean towards one, if a message has 256 optional fields, it might be time to consider a new message type altogether. > 3. `type` - does "it's OK to be odd" apply? i.e. if an even `type` that is > not known is found, crash and burn. But intent of this system is for future > expansion for optional fields, so...? Perhaps this depends on context: - for gossip messages, I think the primary concern is not breaking signature validation, and that these would need to remain optional for backwards compatibility. - for link-level messages, we have a little more control. I imagined the fields would be gated by feature bit negotiation, and deviating from unsupported/required would result in being disconnected. > 5. `len` - one byte or two? I believe we tend to use two bytes for various > lengths. Maybe varint? One byte is not enough for all lengths, but two seems excessive for uint8 or even uint32. > 6. BOLT - I propose making a separate BOLT for `type,len,value`, which other > messages and so on simply refer to. Indeed, are you thinking we'd use this to add new fields proposed in 1.1? In addition to the above, do we also want to flesh out what sub-TLV structures would look like? Or perhaps that isn't necessary, if we can continue adding more root-level keys. --Conner On Wed, Nov 14, 2018 at 8:54 PM ZmnSCPxj via Lightning-dev wrote: > > Good morning list, > > An item added discussed in the summit was the proposed "type,len,value", > which is added to the end of messages and other intercommunication structures > (invoices and so on). > This would allow some transition to future additional fields while > maintaining backward compatibility. > > I believe these were brought up: > > 1. For a sequence of `type,len,value`, each `type` must be unique. -- > accepted. > 2. For a sequence of `type,len,value`, the `type`s must be in ascending > order -- not explicitly accepted or rejected. It would be easier to check > uniqueness (the previous rule we accepted) here for a naive parser (keep > track of some "minimum allowed type" that initializes at zero, check current > type >= this, update to current type + 1) if `type`s are in ascending order. > > Now for bikeshedding: > > 1, `type` - one byte or two? > 2. `type` - maybe some other name, since we already use `type` for messages? > How about, `key` instead? > 3. `type` - does "it's OK to be odd" apply? i.e. if an even `type` that is > not known is found, crash and burn. But intent of this system is for future > expansion for optional fields, so...? > 4. `len` - measures bytes of `value`, obviously since if the receiver does > not know the `type` then it cannot know what unit is used for the `value`. 
> 5. `len` - one byte or two? I believe we tend to use two bytes for various > lengths. > 6. BOLT - I propose making a separate BOLT for `type,len,value`, which other > messages and so on simply refer to. > > Regards, > ZmnSCPxj > > ___ > Lightning-dev mailing list > Lightning-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
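As an illustration of the parsing rules discussed in this thread (one-byte `type`, two-byte `len`, types unique and in ascending order, tracked with the running "minimum allowed type" ZmnSCPxj describes), here is a minimal sketch; it is illustrative only and is not the TLV format that was ultimately adopted in the spec:

```go
// Minimal sketch of a type-len-value parser per the discussion in this
// thread; illustrative only.
package sketch

import (
	"encoding/binary"
	"errors"
)

type record struct {
	typ   uint8
	value []byte
}

// parseTLV walks the stream, enforcing the "minimum allowed type" rule: each
// type must be >= the running minimum, which is then bumped to type+1, giving
// uniqueness and ascending order in one pass.
func parseTLV(buf []byte) ([]record, error) {
	var (
		records []record
		minTyp  int
	)
	for len(buf) > 0 {
		if len(buf) < 3 {
			return nil, errors.New("truncated record header")
		}
		typ := buf[0]
		if int(typ) < minTyp {
			return nil, errors.New("types must be unique and ascending")
		}
		length := int(binary.BigEndian.Uint16(buf[1:3]))
		if len(buf[3:]) < length {
			return nil, errors.New("truncated value")
		}
		records = append(records, record{typ: typ, value: buf[3 : 3+length]})
		minTyp = int(typ) + 1
		buf = buf[3+length:]
	}
	return records, nil
}
```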
Re: [Lightning-dev] Trustless Watchtowers
Quick correction: > Thus, the cost to perform the attack would be many orders of > magnitude greater than the cost to back up one channel. This was written assuming the attacker was trying to upload multiple encrypted blobs for the same txid, which seems like an unlikely attack vector if the tower inherently defends against it. If instead they are just trying to fill up the tower, the cost is linear in the amount of blobs they send. --Conner On Tue, Nov 13, 2018 at 4:12 PM Conner Fromknecht wrote: > > Hi ZmnSCPxj, > > I haven't yet gotten around to writing up everything documenting in the > working > watchtower design. However, I think we are nearing that phase where things > seem > mostly solidified and would welcome feedback before attempting to formalize > it. > Expect some follow up posts on the ML :) > > > From my bare knowledge of go, it seems data structures and messages so far, > > without actual logic, but please inform me if I am incorrect. > > Much of the server side has been implemented, which accepts encrypted blobs > from > watchtower clients and stores them. The functionality related to scanning > blocks > and publishing justice txns has also been implemented, but has not been merged > yet. The big remaining task is to integrate the client such that it properly > backs up states after receiving revocations from the remote peer. > > > Note however that watchtowers would require to keep all encrypted blobs that > > are keyed to the same partial txid. I.e. watchtowers need to store the pair > > in a set with the set looking at the entire txid+blob as the identity of the > > object. Otherwise it would be possible, if your watchtower is identified by > > your counterparty, for the counterparty to give the commitment transaction's > > txid with a randomly-generated blob to your watchtower before it gives the > > revocation key to you. > > > > I have described the above problem before here: > > https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001203.html > > with an unsatisfactory solution. > > Indeed, this was great observation! The tower can't be sure which client is > uploading the "real" blob either. In light of that, the chosen design uses a > two level bucketing structure that maps: > >-> client_pubkey1 : encrypted_blob1 > -> client_pubkey2 : encrypted_blob2 > > ensuring that different client's can't overwrite each other. Further, the > tower > will only store one blob for a given txid per client. Upon decryption, the > tower > would learn that only one of this a valid update (and possibly delete state > for > the offender). > > > However, this remains your counterparty best avenue of attack, is to simply > > spam your watchtower until it runs out of resources and crashes. > > The client pubkeys described above are tied to what we've been referring to > as a > session. In order for a client to facilitate the attack described above, they > would have to pay the tower for multiple sessions tied to different ephemeral > session keys. > > A session grants the client the ability to store up to N blobs, where N would > be > several thousand. Thus, the cost to perform the attack would be many orders of > magnitude greater than the cost to back up one channel. In the private tower > case, there isn't necessarily payment, though it's more or less assumed that > one > wouldn't DOS their own tower. 
> > In practice, the tower should only ever accept sessions if it can be certain > it > has the appropriate disk-space to facilitate them, so I don't think > there is much > risk in the node crashing due to this. Someone could still pay to fill > up my tower, > but the tower would be compensated appropriately. The tower could also raise > it's price point if it detects such behavior. > > > And if the watchtower identifies the user, then this leaks the privacy of > > the > > user to the watchtower, and what would then be the point of encrypted blob? > > I believe the same session-based, encrypted-blob approach would work eltoo > towers as well, if the concern is primarily about the channel partner > presuming > the valid blob. The general design should be readily able to serve > eltoo clients, > with some slight modifications to breach detection and justice txn > construction. > > My greater concern with the update-and-replace model is that it leaks timing > information about a particular channel to the tower, since the tower must know > which prior state needs replacing. So even though it is possible to make eltoo > towers constant-space per channel, IMO we're better off storing all prior > enc
Re: [Lightning-dev] Trustless Watchtowers
Hi ZmnSCPxj,

I haven't yet gotten around to writing up everything documenting the working watchtower design. However, I think we are nearing that phase where things seem mostly solidified and would welcome feedback before attempting to formalize it. Expect some follow-up posts on the ML :)

> From my bare knowledge of go, it seems data structures and messages so far, > without actual logic, but please inform me if I am incorrect.

Much of the server side has been implemented, which accepts encrypted blobs from watchtower clients and stores them. The functionality related to scanning blocks and publishing justice txns has also been implemented, but has not been merged yet. The big remaining task is to integrate the client such that it properly backs up states after receiving revocations from the remote peer.

> Note however that watchtowers would require to keep all encrypted blobs that > are keyed to the same partial txid. I.e. watchtowers need to store the pair > in a set with the set looking at the entire txid+blob as the identity of the > object. Otherwise it would be possible, if your watchtower is identified by > your counterparty, for the counterparty to give the commitment transaction's > txid with a randomly-generated blob to your watchtower before it gives the > revocation key to you. > > I have described the above problem before here: > https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-April/001203.html > with an unsatisfactory solution.

Indeed, this was a great observation! The tower can't be sure which client is uploading the "real" blob either. In light of that, the chosen design uses a two-level bucketing structure that maps:

  -> client_pubkey1 : encrypted_blob1
  -> client_pubkey2 : encrypted_blob2

ensuring that different clients can't overwrite each other. Further, the tower will only store one blob for a given txid per client. Upon decryption, the tower would learn that only one of these is a valid update (and possibly delete state for the offender).

> However, this remains your counterparty best avenue of attack, is to simply > spam your watchtower until it runs out of resources and crashes.

The client pubkeys described above are tied to what we've been referring to as a session. In order for a client to facilitate the attack described above, they would have to pay the tower for multiple sessions tied to different ephemeral session keys.

A session grants the client the ability to store up to N blobs, where N would be several thousand. Thus, the cost to perform the attack would be many orders of magnitude greater than the cost to back up one channel. In the private tower case, there isn't necessarily payment, though it's more or less assumed that one wouldn't DOS their own tower.

In practice, the tower should only ever accept sessions if it can be certain it has the appropriate disk space to facilitate them, so I don't think there is much risk in the node crashing due to this. Someone could still pay to fill up my tower, but the tower would be compensated appropriately. The tower could also raise its price point if it detects such behavior.

> And if the watchtower identifies the user, then this leaks the privacy of the > user to the watchtower, and what would then be the point of encrypted blob?

I believe the same session-based, encrypted-blob approach would work for eltoo towers as well, if the concern is primarily about the channel partner presuming the valid blob.
The general design should be readily able to serve eltoo clients, with some slight modifications to breach detection and justice txn construction.

My greater concern with the update-and-replace model is that it leaks timing information about a particular channel to the tower, since the tower must know which prior state needs replacing. So even though it is possible to make eltoo towers constant-space per channel, IMO we're better off storing all prior encrypted blobs to maintain adequate privacy. On private towers, perhaps this privacy/space tradeoff may be acceptable, but I'm not sure the tradeoff makes sense on public towers.

Cheers, Conner

On Mon, Nov 12, 2018 at 1:18 AM ZmnSCPxj via Lightning-dev wrote: > > Good morning list, > > We were not able to discuss this topic much at recent summit, but I noticed > that lnd has some code related to watchtowers already. From my bare > knowledge of go, it seems data structures and messages so far, without actual > logic, but please inform me if I am incorrect. > > I assume much of the watchtowers code and design in lnd is by Conner, simply > because, he discussed this on this list earlier this year. > > I have seen recently, some paper about paying watchtowers by actually > simulating breaches. You would give a watchtower some txid+blob pair, then > send that txid and see if the watchtower claims it. If it does, then you > have evidence of liveness and correct behavior, and have also paid for and > incentivized the watchtower to
Re: [Lightning-dev] Link-level payment splitting via intermediary rendezvous nodes
Good morning all, Taking a step back—even if key switching can be done mathematically, it seems dubious that we would want to introduce re-routing or rendezvous routing in this manner. If the example provided _could_ be done, it would directly violate the wrap-resistance property of the ideal onion routing scheme defined in [1]. This property is proven for Sphinx in section 4.3 of [2]. Schemes like HORNET [3] support rendezvous routing and are formally proven in this model. Seems this would be the obvious path forward, given that we've already done a considerable amount of work towards implementing HORNET via Sphinx. Cheers, Conner [1] A Formal Treatment of Onion Routing: https://www.iacr.org/cryptodb/archive/2005/CRYPTO/1091/1091.pdf [2] Sphinx: https://cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf [3] HORNET: https://arxiv.org/pdf/1507.05724.pdf On Mon, Nov 12, 2018 at 8:47 PM ZmnSCPxj via Lightning-dev wrote: > > Good morning Christian, > > I am nowhere near a mathematician, thus, cannot countercheck your expertise > here (and cannot give a counterproposal thusly). > > But I want to point out the below scenarios: > > 1. C is the payer. He is in contact with an unknown payee (who in reality > is E). E provides the onion-wrapped route D->E with ephemeral key and other > data necessary, as well as informing C that D is the rendez-vous point. Then > C creates a route from itself to D (via channel C->D or via C->A->D). > > 2. B is the payer. He knows the entire route B->C->D->E and knows that > payee is C. Unfortunately the C<->D channel is low capacity or down or etc > etc. At C, B has provided the onion-wrapped route D->E with ephemeral key > and other data necessary, as well as informing to C that D is the next node. > Then C either pays via C->D or via C->A->D. > > Even if there is an off-by-one error in our thinking about rendez-vous nodes, > could it not be compensated also by an off-by-one in the link-level payment > splitting via intermediary rendez-vous node? > In short, D is the one that switches keys instead of A. > > The operation of processing a hop would be: > > 1. Unwrap the onion with current ephemeral key. > 2. Dispatch based on realm byte. > 2.1. If realm byte 0: > 2.1.1. Normal routing behavior, extract HMAC, etc etc > 2.2. If realm byte 2 "switch ephemeral keys": > 2.2.1. Set current ephemeral key to bytes 1 -> 32 of packet. > 2.2.2. Shift onion by one hop packet. > 2.2.3. Goto 1. > > Would that not work? > (I am being naive here, as I am not a mathist and I did not understand half > what you wrote, sorry) > > Then at C, we have the onion from D->E, we also know the next ephemeral key > to use (we can derive it since we would pass it to D anyway). > It rightshifts the onion by one, storing the next ephemeral key to the new > hop it just allocated. > Then it encrypts the onion using a new ephemeral key that it will use to > generate the D<-A<-C part of the onion. > > Regards, > ZmnSCPxj > > > Sent with ProtonMail Secure Email. > > ‐‐‐ Original Message ‐‐‐ > On Tuesday, November 13, 2018 11:45 AM, Christian Decker > wrote: > > > Great proposal ZmnSCPxj, but I think I need to raise a small issue with > > it. While writing up the proposal for rendez-vous I came across a > > problem with the mechanism I described during the spec meeting: the > > padding at the rendez-vous point would usually zero-padded and then > > encrypted in one go with the shared secret that was generated from the > > previous ephemeral key (i.e., the one before the switch). 
That ephemeral > > key is not known to the recipient (barring additional rounds of > > communication) so the recipient would be unable to compute the correct > > MACs. There are a number of solutions to this, basically setting the > > padding to something that the recipient could know when generating its > > half onion. > > > > My current favorite goes like this: > > > > 1. Rendez-vous RV receives an onion, performs ECDH like normal to get > > the shared secret, decrypts its payload, simultaneously encrypts > > the padding. > > > > 2. It extracts its per-hop payload and shifts the entire packet over > > (shift its payload out and the newly generated padding in) > > > > 3. It then notices that it should perform an ephemeral key switch, now > > deviating from the normal protocol (which would just be to generate > > the new ephemeral key, serialize and forward) > > 3.1. It zero-fills the padding that it just added (so we are in a > > state that the recipient knew when generating its partial onion > > 3.2 It performs ECDH with the switched in ephemeral key to get a new > > shared secret that which is then used to unwrap one additional > > layer of encryption, and most importantly encrypt the padding so > > the next hop doesn't see the zero-filled padding. > > 3.3 Only then will it generate the new ephemeral key for the next > > hop, based on the switched
Re: [Lightning-dev] Base AMP
Good morning all, > MUST NOT forward (if an intermediate node) or claim (if the final node) unless > it has received a total greater or equal to `intended_total_payment` in all > incoming HTLCs for the same `payment_hash`. I was under the impression that this would not require changes on behalf of the intermediaries, and only need to be implemented by the sender and receiver? If not, then nodes would need to advertise that they support this so that the sender can be sure to route through the subset of nodes that support it. Either way, it would seem that this constraint can only be accurately enforced by the receiver. If any partial payments fail, then the `intended_total_payment` through an intermediary may never arise and the payment would be held. This would also seem to exclude the possibility of iterative path finding, since the entire payment flow must be known up front during onion packet construction. Seems the proposal still works without the intermediaries needing to know this? We may want to add that the receiver: * SHOULD fail the payment if `intended_total_payment` is less than the invoice amount > I'm wondering, since these payments are no longer atomic, should we name it > accordingly? Indeed this true. Perhaps NAMP or CPHR (Concurrent Payment Hash Re-use) are more accurate and may avoid confusion? Cheers, Conner On Tue, Nov 13, 2018 at 8:33 AM Johan Torås Halseth wrote: > > Good evening Z and list, > > I'm wondering, since these payments are no longer atomic, should we name it > accordingly? > > Cheers, > Johan > > On Tue, Nov 13, 2018 at 1:28 PM ZmnSCPxj via Lightning-dev > wrote: >> >> Good morning list, >> >> I propose the below to support Base AMP. >> >> The below would allow arbitrary merges of paths, but not arbitrary splits. >> I am uncertain about the safety of arbitrary splits. >> >> ### The `multipath_merge_per_hop` type (`option_base_amp`) >> >> This indicates that payment has been split by the sender using Base AMP, and >> that the receiver should wait for the total intended payment before >> forwarding or claiming the payment. >> In case the receiving node is not the last node in the path, then succeeding >> hops MUST be the same across all splits. >> >> 1. type: 1 (`termination_per_hop`) >> 2. data: >> * [`8` : `short_channel_id`] >> * [`8` : `amt_to_forward`] >> * [`4` : `outgoing_cltv_value`] >> * [`8` : `intended_total_payment`] >> * [`4` : `zeros`] >> >> The contents of this hop will be the same across all paths of the Base AMP. >> The `payment_hash` of the incoming HTLCs will also be the same across all >> paths of the Base AMP. >> >> `intended_total_payment` is the total amount of money that this node should >> expect to receive in all incoming paths to the same `payment_hash`. >> >> This may be the last hop of a payment onion, in which case the `HMAC` for >> this hop will be `0` (the same rule as for `per_hop_type` 0). >> >> The receiver: >> >> * MUST impose a reasonable timeout for waiting to receive all component >> paths, and fail all incoming HTLC offers for the `payment_hash` if they >> have not totalled equal to `intended_total_payment`. >> * MUST NOT forward (if an intermediate node) or claim (if the final node) >> unless it has received a total greater or equal to `intended_total_payment` >> in all incoming HTLCs for the same `payment_hash`. >> >> The sender: >> >> * MUST use the same `payment_hash` for all paths of a single multipath >> payment. 
>> >> Regards, >> ZmnSCPxj >> ___ >> Lightning-dev mailing list >> Lightning-dev@lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > ___ > Lightning-dev mailing list > Lightning-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
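For concreteness, here is a rough receiver-side sketch of the Base AMP behavior discussed in this thread; the types, field names, and timeout handling are hypothetical and do not correspond to any implementation's actual code:

```go
// Rough receiver-side sketch of Base AMP: partial HTLCs sharing a
// payment_hash are held until their sum reaches the intended total (and at
// least the invoice amount), or failed back on timeout. Hypothetical types.
package sketch

import "time"

// partialSet tracks the incoming HTLCs that share one payment_hash.
type partialSet struct {
	receivedMsat uint64    // sum of incoming HTLC amounts so far
	intendedMsat uint64    // intended_total_payment from the per-hop payload
	invoiceMsat  uint64    // amount requested by the invoice being paid
	firstArrival time.Time // when the first partial HTLC arrived
}

type action int

const (
	hold action = iota
	settleAll
	failAll
)

// onPartialHTLC records a newly arrived partial HTLC of amt msat and decides
// whether to keep holding, settle every partial, or fail them all back.
func (p *partialSet) onPartialHTLC(amt uint64, now time.Time, timeout time.Duration) action {
	p.receivedMsat += amt
	switch {
	case p.intendedMsat < p.invoiceMsat:
		// SHOULD fail if the sender committed to less than the invoice.
		return failAll
	case p.receivedMsat >= p.intendedMsat:
		return settleAll
	case now.Sub(p.firstArrival) > timeout:
		return failAll
	default:
		return hold
	}
}
```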
Re: [Lightning-dev] RFC: simplifications and suggestions on open/accept limits.
> How do i unsubscribe from this email list? Could someone help me please. There’s a link in the footer to the linux list, there you can enter your email to unsubscribe Cheers, Conner -- Sent from my Spaceship On Fri, Nov 9, 2018 at 17:19 alexis petropoulos wrote: > How do i unsubscribe from this email list? Could someone help me please. > > Kindly, > > Alex > -- > *From:* lightning-dev-boun...@lists.linuxfoundation.org < > lightning-dev-boun...@lists.linuxfoundation.org> on behalf of Gert-Jaap > Glasbergen > *Sent:* Monday, November 5, 2018 3:48:56 PM > *To:* lightning-dev@lists.linuxfoundation.org; Rusty Russell > *Subject:* Re: [Lightning-dev] RFC: simplifications and suggestions on > open/accept limits. > > > Op 1 nov. 2018 om 03:38 heeft Rusty Russell het > volgende geschreven: > > > I believe this would render you inoperable in practice; fees are > frequently sub-satoshi, so you would fail everything. The entire > network would have to drop millisatoshis, and the bitcoin maximalist in > me thinks that's unwise :) > > > I can see how not wanting to use millisatoshis makes you less compatible > with other people that do prefer using that unit of account. But in this > case I think it's important to allow the freedom to choose. > > I essentially feel we should be allowed to respect the confines of the layer > we're building upon. There's already a lot of benefits to achieve from second > layer scaling whilst still respecting the limits of the base layer. Staying > within those limits means optimally benefit form the security it offers. > > Essentially by allowing to keep satoshi as the smallest fraction, you ensure > that everything you do off-chain is also valid and enforced by the chain when > you need it to. It comes at trade offs though: it would mean that if someone > routes your payment, you can only pay fees in whole satoshis - essentially > meaning if someone wants to charge a (small) fee, you will be overpaying to > stay within your chosen security parameters. Which is a consequence of your > choice. > > I would be happy to make a further analysis on what consequences allowing this > choice would have for the specification, and come up with a proposal on how to > add support for this. But I guess this discussion is meant to "test the > waters" > to see how much potential such a proposal would have to eventually be > included. > > I guess what I'm searching for is a way to achieve the freedom of choice, > without negatively impacting other clients or users that decide to accept some > level of trust. In my view, this would be possible - but I think working it > out > in a concrete proposal/RFC to the spec would be a logical next step. > > Gert-Jaap > > ___ > Lightning-dev mailing list > Lightning-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
Re: [Lightning-dev] Splicing Proposal: Feedback please!
Good evening lightning-dev,

> If we later receive two `channel_update`s whose `short_channel_id`s > reference the spending transaction (and the node pubkeys are the same), we > assume the splice was successful and that this channel has been subsumed. I > think this works so long as the spending transaction doesn't contain multiple > funding outputs, though I think the current proposal is fallible to this as > well.

Thought about this some more. The main difference seems to be whether the gossiped data is forward or backward looking. By forward looking, I mean that we gossip where the splice will move to, and backward looking gossips where the splice moved from.

If we want to make the original proposal work w/ multiple funding outputs on one splice, I think it can be accomplished by sending the funding outpoint as opposed to just the txid. For the backward looking proposal, the `channel_update` could be modified to include the `short_channel_id` of the prior funding output. IMO we probably want to include the extra specificity even if we don't plan to have multiple funding outputs on a commitment implemented tomorrow, since outputs are what we truly care about.

Of the two, it still seems like the backward looking approach results in less gossiped data, since we are able to reference a single confirmed output by location (8 bytes), instead of N unconfirmed outputs by outpoint (N*34 bytes).

Another advantage I see with the backward looking splice announcements is that they can be properly verified before forwarding to the network by examining the channel lineage. In contrast, one can't be sure if the outpoint in a forward looking announcement will ever confirm, or even if it spends from the original channel point unless one also has the transaction. Until a splice does confirm, a node has to store multiple potential splice outpoints. Seeing this, it seems to me that backward looking announcements are less susceptible to abuse and DOS in this regard.

Thoughts?

Cheers, Conner

On Thu, Oct 18, 2018 at 8:04 PM Conner Fromknecht wrote: > Good evening all, > > Thank you Rusty for starting us down this path :) and to ZmnSCPxj and Lisa > for > your thoughts. I think this narrows down the design space considerably! > > In light of this, and if I'm following along, it seems our hand is forced > in > splicing via a single on-chain transaction. In my book, this is preferable > anyway. I'd much rather push complexity off-chain than having to do a > mutli-stage splicing pipeline. > > > To add some context to this, if you start accepting HTLC's for the new > balance > > after the parallel commitment is made, but before the re-anchor is > buried, > > there's the potential for a race condition between a unilateral close > (or any > > revoked commitment transaction) and the re-anchoring commitment > transaction, > > that spends the 'pre-committed' UTXO of splicing in funds and the > original > > funding transaction > > Indeed, I'm not aware of any splicing mechanism that enables off-chain use > of > spliced-in funds before the new funding output confirms. Even in the async, > single-txn case, the new funds cannot be spent until the new funding output > confirms sufficiently. > > From my POV, the desired properties of a splice are: > 1. non-blocking (asynchronous) usage of the channel > 2. single on-chain txn > 3. ability to RBF (have multiple pending splices) > > Of these, it seems we've solidified 1 and 2. I understand the desire to not > tackle RBF on the first attempt given the additional complexity. 
However, > I > do believe there are ways we can proceed in which our first attempt largely > coincides with supporting it in the future. > > With that in mind, here are some thoughts on the proposals above. > > ## RBF and Multiple Splices > > > 1. type: 132 (`commitment_signed`) > > 2. data: > >* [`32`:`channel_id`] > >* [`64`:`signature`] > >* [`2`:`num_htlcs`] > >* [`num_htlcs*64`:`htlc_signature`] > >* [`num_htlcs*64`:`htlc_splice_signature`] (`option_splice`) > > This will overflow the maximum message size of 65535 bytes for num_htlcs > > 511. > > I would propose sending a distinct message, which references the > `active_channel_id` and a `splice_channel_id` for the pending splice: > > 1. type: XXX (`commitment_splice_signed`) (`option_splice`) > 2. data: >* [`32`:`active_channel_id`] >* [`32`:`splice_channel_id`] >* [`64`:`signature`] >* [`2`:`num_htlcs`] >* [`num_htlcs*64`:`htlc_signature`] > > This more directly addresses handling multiple pending splices, as well as > preventing us from running into any size constraints. The purpose of > including the `active_channel_id` would be to remote node loca
Re: [Lightning-dev] Commitment Transaction Format Update Proposals?
Good morning everyone,

> We could also use SIGHASH_ANYONECANPAY|SIGHASH_SINGLE > for HTLC txs, without adding the "OP_TRUE" > output to the commitment transaction

Doesn't this require a non-zero number of HTLCs on the commitment txn? We would still require the OP_TRUE if there are no HTLCs, right?

From my recollection, HTLC txns with an absolute timeout won't be accepted in the mempool until the expiry has matured. So the commitment would have to be held until that time before its descendants can bump the fee rate, I think.

I agree that we should probably modify the HTLC sighashes regardless, though I wonder if it is a standalone replacement for OP_TRUE.

> 3. The CLTV timeout should be symmetrical to avoid > trying to game the peer into closing. (Connor IIRC?).

I believe Jimpo proposed this :)

Best, Conner

On Fri, Oct 19, 2018 at 03:43 Rusty Russell wrote: > Fabrice Drouin writes: > > Hello, > > > >> 1. Rather than trying to agree on what fees will be in the future, we > > > should use an OP_TRUE-style output to allow CPFP (Roasbeef) > > > > We could also use SIGHASH_ANYONECANPAY|SIGHASH_SINGLE for HTLC txs, > without > > adding the "OP_TRUE" output to the commitment transaction. We would still > > need the update_fee message to manage onchain fees for the commit tx (but > > not the HTLC txs) but there would be no reason anymore to refuse fee > rates > > that are too high and channels would not get closed anymore when there's > a > > spike in onchain fees. > > Agreed, that was in the details below: > > - HTLC-timeout and HTLC-success txs sigs are > SIGHASH_ANYONECANPAY|SIGHASH_SINGLE, so you can Bring Your Own Fees. > > The only problem with these proposals is that it requires you have an > available UTXO to make the CPFP etc. > > Cheers, > Rusty. > > ___ > Lightning-dev mailing list > Lightning-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > ___ Lightning-dev mailing list Lightning-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
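For reference, signing a second-level HTLC input with SIGHASH_SINGLE|SIGHASH_ANYONECANPAY might look roughly like the following. This is a sketch written against the btcd txscript API as it existed around the time of this thread, and the transaction, witness script, amount, and key are assumed to be prepared elsewhere:

```go
// Sketch: sign an HTLC input so that extra inputs and outputs can later be
// attached to bring your own fees.
package sketch

import (
	"github.com/btcsuite/btcd/btcec"
	"github.com/btcsuite/btcd/txscript"
	"github.com/btcsuite/btcd/wire"
)

func signHTLCInput(tx *wire.MsgTx, idx int, amt int64,
	witnessScript []byte, key *btcec.PrivateKey) ([]byte, error) {

	hashType := txscript.SigHashSingle | txscript.SigHashAnyOneCanPay
	sigHashes := txscript.NewTxSigHashes(tx)

	// The resulting signature commits only to this input and the output at
	// the same index, leaving the rest of the transaction malleable for
	// fee-bumping.
	return txscript.RawTxInWitnessSignature(
		tx, sigHashes, idx, amt, witnessScript, hashType, key,
	)
}
```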
Re: [Lightning-dev] Trustless WatchTowers?
The ability for a watchtower to spend them independently seems to resolve this* On Tue, Apr 17, 2018 at 01:30 Conner Fromknecht <conner@lightning.engineering> wrote: > Hi ZmnSCPxj, > > > > I understand. For myself, I will also wait for comment from other > c-lightning > > developers: this seems to require a bit of surgery on our code I think > > (currently construction of justice transactions is done in a separate > process, > > and we always generate a justice transaction that claims all claimable > outputs > > of the revoked commitment transaction), and we might decide to defer this > > feature for later (leaking revocation basepoint secret is easy and > requires > > maybe a few dozen sloc, but that requires a trusted WatchTower). > > Certainly, it will require changes to ours as well. Would also love to > hear what the > other implementations think of such a proposal. As of now, we detect if the > > commitment outputs have been spent, and if so, attempt to spend an > aggregate of > the commitment outputs and second-level outputs conditioned on which are > reported as spent. To realize this fully, we would need to also detect the > case > in which the second-level txns have already been spent, and then forgo > sweeping > them entirely (on the assumption that it has already been done by a > watchtower). > > > > > Ah, I thought you wanted to impose some kind of contract on > > HTLC-timeout/HTLC-success to enforce this behavior, you are referring > to a > > technique that the attacker might attempt to use if we use only a single > > justice transaction that claims all HTLC outputs. > > Precisely, if the attacker knows that we can only sweep a particular sets > of > outputs when batched, they can choose other sets that the watchtower can't > act > on. Spending them independently seems to resolve this. > > > > -Conner > > On Tue, Apr 17, 2018 at 8:02 AM ZmnSCPxj <zmnsc...@protonmail.com> wrote: > >> Good morning Conner, >> >> > I understand. It would be good to know what you have, and perhaps >> consider >> > planning a new BOLT document for such. >> Yes, that is the ultimate goal. I think it might be a little to soon to >> have a >> full-on BOLT spec. There are still some implementation details that we >> would >> like to address before formalizing everything. I am working to have >> something >> written up in the short-term documenting the approach[es] that ends up >> being >> solidified. Hopefully that can get some eyes during development, and >> perhaps >> serve as working document/rough draft. >> >> >> I understand. For myself, I will also wait for comment from other >> c-lightning developers: this seems to require a bit of surgery on our code >> I think (currently construction of justice transactions is done in a >> separate process, and we always generate a justice transaction that claims >> all claimable outputs of the revoked commitment transaction), and we might >> decide to defer this feature for later (leaking revocation basepoint secret >> is easy and requires maybe a few dozen sloc, but that requires a trusted >> WatchTower). >> >> > Sorry, I seem confused this idea. Can you give example for commitment >> with 2x >> > HTLC? >> >> Sure thing! The confirmation of second level HTLC txns can be separated >> by >> arbitrary delays. This is particularly true if the CLTVs have already >> expired, >> offering an attacker total control over when the txns appear on the >> network. 
One >> way this can happen is if the attacker iteratively broadcasts a single >> second-level txn, waits for confirmation and CSV to expire, then repeat >> with >> another second-level txn. >> >> Since the CSVs begin ticking as soon as they are included in the chain, >> the >> attacker could try to sweep each one immediately after its CSV expires. >> If the >> watchtower doesn't have the ability to sweep outputs independently, it >> would >> have no way to intercept this behavior, and prevent the breacher from >> sweeping >> individual HTLCs sequentially. >> >> Ah, I thought you wanted to impose some kind of contract on >> HTLC-timeout/HTLC-success to enforce this behavior, you are referring to a >> technique that the attacker might attempt to use if we use only a single >> justice transaction that claims all HTLC outputs. >> >> >> > 5. 0 or 1 or 2 signatures for the main outputs. These sign a single >> > transaction that claims only the main outputs. >
Re: [Lightning-dev] Trustless WatchTowers?
Hi ZmnSCPxj,

> I understand. For myself, I will also wait for comment from other c-lightning developers: this seems to require a bit of surgery on our code I think (currently construction of justice transactions is done in a separate process, and we always generate a justice transaction that claims all claimable outputs of the revoked commitment transaction), and we might decide to defer this feature for later (leaking revocation basepoint secret is easy and requires maybe a few dozen sloc, but that requires a trusted WatchTower).

Certainly, it will require changes to ours as well. Would also love to hear what the other implementations think of such a proposal. As of now, we detect if the commitment outputs have been spent, and if so, attempt to spend an aggregate of the commitment outputs and second-level outputs conditioned on which are reported as spent. To realize this fully, we would need to also detect the case in which the second-level txns have already been spent, and then forgo sweeping them entirely (on the assumption that it has already been done by a watchtower).

> Ah, I thought you wanted to impose some kind of contract on HTLC-timeout/HTLC-success to enforce this behavior, you are referring to a technique that the attacker might attempt to use if we use only a single justice transaction that claims all HTLC outputs.

Precisely, if the attacker knows that we can only sweep a particular set of outputs when batched, they can choose other sets that the watchtower can't act on. Spending them independently seems to resolve this.

-Conner

On Tue, Apr 17, 2018 at 8:02 AM ZmnSCPxj wrote:

> Good morning Conner,
>
> > I understand. It would be good to know what you have, and perhaps consider planning a new BOLT document for such.
>
> Yes, that is the ultimate goal. I think it might be a little too soon to have a full-on BOLT spec. There are still some implementation details that we would like to address before formalizing everything. I am working to have something written up in the short term documenting the approach[es] that ends up being solidified. Hopefully that can get some eyes during development, and perhaps serve as a working document/rough draft.
>
> I understand. For myself, I will also wait for comment from other c-lightning developers: this seems to require a bit of surgery on our code I think (currently construction of justice transactions is done in a separate process, and we always generate a justice transaction that claims all claimable outputs of the revoked commitment transaction), and we might decide to defer this feature for later (leaking revocation basepoint secret is easy and requires maybe a few dozen sloc, but that requires a trusted WatchTower).
>
> > Sorry, I seem confused this idea. Can you give example for commitment with 2x HTLC?
>
> Sure thing! The confirmation of second-level HTLC txns can be separated by arbitrary delays. This is particularly true if the CLTVs have already expired, offering an attacker total control over when the txns appear on the network. One way this can happen is if the attacker iteratively broadcasts a single second-level txn, waits for confirmation and CSV to expire, then repeats with another second-level txn.
>
> Since the CSVs begin ticking as soon as they are included in the chain, the attacker could try to sweep each one immediately after its CSV expires. If the watchtower doesn't have the ability to sweep outputs independently, it would have no way to intercept this behavior, and prevent the breacher from sweeping individual HTLCs sequentially.
>
> Ah, I thought you wanted to impose some kind of contract on HTLC-timeout/HTLC-success to enforce this behavior, you are referring to a technique that the attacker might attempt to use if we use only a single justice transaction that claims all HTLC outputs.
>
> > 5. 0 or 1 or 2 signatures for the main outputs. These sign a single transaction that claims only the main outputs.
>
> Yes, it seems necessary to separate the commitment outpoints from the HTLC outpoints in case the commitment txn is broadcasted before the CLTVs expire. You could try to batch these with the HTLCs, but then we get back into exponential territory.
>
> Agreed.
>
> > Is that approximately what is needed? Have I missed anything?
>
> Nope, I think your understanding is on point. IMO this seems to be a reasonable compromise of the tradeoffs at hand, and definitely something that could serve as an initial iteration due to its simplicity. In the future, there are definitely ways to improve on this and make it even more efficient! Though having a solid/workable v0 is important if it is to be deployed. I enjoy hearing your thoughts on this, thank you for your responses!
>
> Thank you for this confirmation.
>
> Regards,
> ZmnSCPxj
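To make the "exponential territory" point concrete, here is a rough, illustrative Go sketch (not taken from any implementation) comparing how many pre-signed revocation signatures a tower would need if HTLC sweeps must be batched, using the n*2^n estimate discussed elsewhere in this thread, versus signing each HTLC sweep independently:

package main

import "fmt"

// batchedSigs is the n*2^n estimate: 2^n ways the n HTLCs can straddle the
// first and second stages, times n signatures per combination.
func batchedSigs(n int) int { return n * (1 << uint(n)) }

// independentSigs is one pre-signed sweep signature per HTLC output.
func independentSigs(n int) int { return n }

func main() {
	for _, n := range []int{1, 2, 5, 10, 20} {
		fmt.Printf("n=%2d  batched=%10d  independent=%d\n",
			n, batchedSigs(n), independentSigs(n))
	}
}

Even at 20 HTLCs the batched figure is already in the tens of millions of signatures, while the independent approach stays at 20.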
Re: [Lightning-dev] Trustless WatchTowers?
Good evening ZmnSCPxj,

> Also, thank you for the link.

Definitely! I had to do some digging myself to recover these hidden gems.

> I understand. It would be good to know what you have, and perhaps consider planning a new BOLT document for such.

Yes, that is the ultimate goal. I think it might be a little too soon to have a full-on BOLT spec. There are still some implementation details that we would like to address before formalizing everything. I am working to have something written up in the short term documenting the approach[es] that ends up being solidified. Hopefully that can get some eyes during development, and perhaps serve as a working document/rough draft.

> Sorry, I seem confused this idea. Can you give example for commitment with 2x HTLC?

Sure thing! The confirmation of second-level HTLC txns can be separated by arbitrary delays. This is particularly true if the CLTVs have already expired, offering an attacker total control over when the txns appear on the network. One way this can happen is if the attacker iteratively broadcasts a single second-level txn, waits for confirmation and CSV to expire, then repeats with another second-level txn. Since the CSVs begin ticking as soon as they are included in the chain, the attacker could try to sweep each one immediately after its CSV expires. If the watchtower doesn't have the ability to sweep outputs independently, it would have no way to intercept this behavior, and prevent the breacher from sweeping individual HTLCs sequentially.

> When the commitment txid is found onchain, the WatchTower creates a single main output claim transaction using the 1 or 2 signatures for the main outputs. And for each HTLC outpoint on the commitment transaction, if it gets spent, the WatchTower creates one HTLC justice transaction from the second-stage HTLC transaction.

Yes, this is how it would work in the context of what I was suggesting. Certainly, there are other ways to accomplish the same thing. I don't wish to claim that this is the best solution available; there are a lot of tradeoffs that need to be evaluated. I'm hoping that you and others can bring any shortcomings to light and help us sift through them.

> 5. 0 or 1 or 2 signatures for the main outputs. These sign a single transaction that claims only the main outputs.

Yes, it seems necessary to separate the commitment outpoints from the HTLC outpoints in case the commitment txn is broadcasted before the CLTVs expire. You could try to batch these with the HTLCs, but then we get back into exponential territory.

> Is that approximately what is needed? Have I missed anything?

Nope, I think your understanding is on point. IMO this seems to be a reasonable compromise of the tradeoffs at hand, and definitely something that could serve as an initial iteration due to its simplicity. In the future, there are definitely ways to improve on this and make it even more efficient! Though having a solid/workable v0 is important if it is to be deployed. I enjoy hearing your thoughts on this, thank you for your responses!

Best,
Conner

On Tue, Apr 17, 2018 at 6:14 AM ZmnSCPxj wrote:

> Good morning Conner,
>
> > Hi ZmnSCPxj,
> >
> > Can you describe the "encrypted blob" approach to me? Or point me to materials?
> >
> > There's an awesome watchtower thread on the mailing list from 2016 that starts here [1]. It covers a broader range of possibilities than just the encrypted blob approach, and also considers other revocation schemes, e.g. elkrem.
> >
> > Similar to what you described, one encrypted blob approach discussed in that thread is:
> > 1. hint = txid[:16]
> > 2. blob = Enc(data, txid[16:])
> > 3. Send (hint, blob) to watchtower.
> >
> > Whenever a new block is mined, the watchtower checks if it has an entry for each txid[:16]. If so, it decrypts using txid[16:], assembles the justice txn, and broadcasts (assuming the reward output matches what was negotiated).
>
> Thank you, that is indeed similar to what I was thinking given the name "encrypted blob".
>
> Also, thank you for the link. I have not had much time to back-read anything older than 2017 in the archives. I observe that neither Poon nor Dryja seem to strongly participate in this list from 2017 onwards.
>
> > Do you have a description of the WatchTower protocol used in lnd? It may be useful to be intercompatible.
> >
> > We don't have anything written up formally, though what we have currently operates on the design above.
>
> I understand. It would be good to know what you have, and perhaps consider planning a new BOLT document for such.
>
> Nicolas Dorier mentioned plans for BTCPay to somehow host "merchant support networks" where merchants may expose WatchTower endpoints, which other merchants may post revocation information for their channels to.
>
> > I'll also take this time to brain dump some recent investigations I've been doing on watchtowers. TL;DR @ fin.
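As an illustration of the dispatch rule quoted above ("When the commitment txid is found onchain..."), here is a hypothetical Go sketch; the types and function names are invented for this example and are not taken from lnd or any specification:

package tower

// Outpoint identifies an output on the revoked commitment transaction.
type Outpoint struct {
	TxID  [32]byte
	Index uint32
}

// BreachState is the (hypothetical) data a tower holds for one revoked state.
type BreachState struct {
	MainOutputSigs   [][]byte            // 0, 1, or 2 signatures for the main outputs
	HTLCSweepSigs    map[Outpoint][]byte // one pre-signed sweep per HTLC output
	SecondStageSpent map[Outpoint]bool   // whether a second-stage spend was observed
}

// PlanJusticeTxns decides what to broadcast once the breach is detected:
// one claim transaction for the main outputs, plus an independent sweep for
// each HTLC whose second-stage transaction has confirmed.
func PlanJusticeTxns(s *BreachState) (broadcastMain bool, htlcSweeps []Outpoint) {
	broadcastMain = len(s.MainOutputSigs) > 0
	for op := range s.HTLCSweepSigs {
		if s.SecondStageSpent[op] {
			// Only act once the attacker's HTLC-timeout/HTLC-success confirms;
			// otherwise the output can be left for the victim to claim later.
			htlcSweeps = append(htlcSweeps, op)
		}
	}
	return broadcastMain, htlcSweeps
}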
Re: [Lightning-dev] Trustless WatchTowers?
Hi ZmnSCPxj,

> Can you describe the "encrypted blob" approach to me? Or point me to materials?

There's an awesome watchtower thread on the mailing list from 2016 that starts here [1]. It covers a broader range of possibilities than just the encrypted blob approach, and also considers other revocation schemes, e.g. elkrem.

Similar to what you described, one encrypted blob approach discussed in that thread is:
1. hint = txid[:16]
2. blob = Enc(data, txid[16:])
3. Send (hint, blob) to watchtower.

Whenever a new block is mined, the watchtower checks if it has an entry for each txid[:16]. If so, it decrypts using txid[16:], assembles the justice txn, and broadcasts (assuming the reward output matches what was negotiated).

> Do you have a description of the WatchTower protocol used in lnd? It may be useful to be intercompatible.

We don't have anything written up formally, though what we have currently operates on the design above.

There are more complex proposals discussed allowing an encrypted blob to reference data stored in a prior encrypted blob. The primary advantage would be reducing the storage costs of HTLCs present on multiple successive commitment transactions; the primary disadvantage is that it's significantly more complex, in addition to the other points brought up by Laolu. I'm not positive as to the extent this approach was implemented/fleshed out, or if any other pros/cons may have been realized in the process. I haven't done nearly as much research as Tadge on that front; he's probably got some extensive thoughts on the tradeoffs.

===

I'll also take this time to brain dump some recent investigations I've been doing on watchtowers. TL;DR @ fin.

FWIW, I've been thinking about this in the context of the simple encrypted blob approach, though the observations can generalize to other schemes.

As Laolu mentioned, the storage requirement for the watchtower is dominated by the number of HTLC signatures included in the encrypted blob. Due to the independence of the second-stage transactions, there is a combinatoric blowup in the number of signatures that would need to be pre-signed under the revocation private key _if sweeping of HTLC outputs is batched_.

If we want to batch sweep without more liberal sighash flags, I think we'd need to pre-sign n*2^n signatures. There are 2^n possible ways that n HTLCs can straddle the first and second stages, and each permutation would require n distinct signatures since the set of inputs is unique to each permutation. Needless to say, this isn't feasible with the maximum number of HTLCs allowed in the protocol.

However, I have some observations that might inform an efficient set of signatures we can choose to include in the encrypted blobs.

The first is that the HTLC-timeout or HTLC-success transaction _must_ be broadcast before the attacker can move funds back into their wallet. If these transactions are never mined, it is actually fine to do nothing and leave those outputs in the breached state. If/when the victim comes back online, they themselves can sign and broadcast a justice transaction that executes the revocation clause of either the offered or received HTLC scripts, based on the observed spentness of the various commitment HTLC outputs at that time. So, we can save on signature data by only requiring the watchtower to act if second-stage transactions are confirmed.
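As a side note, here is a minimal sketch of the hint/blob construction described at the top of this message. It is purely illustrative and not lnd's actual watchtower format: AES-128-GCM stands in for the unspecified Enc, and justiceData is a placeholder for whatever the tower needs to assemble the justice txn.

package blob

import (
	"crypto/aes"
	"crypto/cipher"
)

// MakeHintAndBlob builds the (hint, blob) pair the client hands to the
// watchtower: hint = txid[:16], and the blob is encrypted under txid[16:].
func MakeHintAndBlob(txid [32]byte, justiceData []byte) (hint [16]byte, blob []byte, err error) {
	copy(hint[:], txid[:16])

	block, err := aes.NewCipher(txid[16:]) // 16-byte key => AES-128
	if err != nil {
		return hint, nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return hint, nil, err
	}
	// A fixed all-zero nonce is tolerable here only because each key
	// (txid[16:]) encrypts exactly one blob; a real scheme would specify
	// this more carefully.
	nonce := make([]byte, aead.NonceSize())
	blob = aead.Seal(nil, nonce, justiceData, nil)
	return hint, blob, nil
}

// DecryptBlob is the tower side: once a breach txid whose first 16 bytes
// match a stored hint appears in a block, recover the justice data using
// the remaining 16 bytes as the key.
func DecryptBlob(txid [32]byte, blob []byte) ([]byte, error) {
	block, err := aes.NewCipher(txid[16:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	return aead.Open(nil, nonce, blob, nil)
}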
One reallyyy nice thing about not having the watchtower sweep the HTLC outputs on the commitment txn directly is that it doesn't need to know how to reconstruct the more complex HTLC redeem scripts. It only needs to reconstruct commitment to-local and second-stage to-local scripts and witnesses. This means the blob primarily contains:
- 1 revocation pubkey
- 1 local delay pubkey
- 1 CSV delay
- 2 commitment signatures
- n HTLC signatures
and we don't have to bother sending CLTVs, local/remote htlc pubkeys, or payment hashes at all. The storage for this ends up being something like ~100 + 64*(2+nhtlcs) when you include other things like the sweep address.

The second observation is that the second-stage transactions could be broadcast sequentially such that the CSV delays don't overlap at all. In this event, the watchtower needs to sweep the HTLCs iteratively to prevent the attacker from sweeping any of the outputs as the relative timelocks expire. One minimal solution could be to send signatures for independent sweep transactions, allowing the watchtower to sweep each HTLC output individually. This is nice because it permits the watchtower to sweep exactly the subset of HTLCs that ever transition into the second stage, and under any permutation wrt. the ordering of confirmed second-stage transactions. With the single-transaction-per-HTLC approach, the total number of signatures that are sent to the watchtower remains linear in the number of HTLCs on the commitment transaction. This approach does have the downside of consuming slightly more fees, since each output is swept with a
Re: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
Hi everyone,

I've seen some discussions over losing proofs of payment in the AMP setting, and wanted to address some lingering concerns I have regarding the soundness of using the current invoicing system to prove payments. In general, I think we are ascribing too much weight to simply having a preimage and BOLT 11 invoice.

The structure of non-interactive payments definitely poses some interesting challenges in adapting the existing invoicing scheme. However, I believe there exist stronger and better means of doing proofs of payment, and would prefer not to tie our hands by assuming this is the best way to approach the problem.

IMHO, the current signed invoice + preimage is a very weak proof of payment. It's the hash equivalent to proving you own a public key by publishing the secret key. There is an assumption that the only way someone could get that preimage is by having made a payment, but this assumption is broken most directly by the proving mechanism. Similarly, any intermediary who acquires an invoice with the appropriate hash could also make this claim since they also have the preimage.

Further, I think it's a mistake to conflate 1) me being able to present a valid preimage/invoice pair, with 2) me having received the correct preimage in response to an onion packet that I personally crafted for the receiving node in the invoice. The main issue is that the proof does not bind a specific sender, making statement 1 producible by multiple individuals.

I think it would be potentially worthwhile to explore proofs of stronger statements, such as 2, that could utilize the ephemeral keys in the onion packets, or even the onion as a witness, which is more rigidly coupled to having actually completed a payment.

Without any modification to the spec, we can always use something like ZKBoo to prove (w/o trusted setup) knowledge of a preimage without totally revealing it to the verifier. This isn't perfect, but at least gives the sender the option to prove the statement without necessarily giving up the preimage.

TL;DR: I'm not convinced the signed invoice + hash is really a good yardstick by which to measure provability, and I think doing some research into proofs of payment on stronger statements would be incredibly valuable. Therefore, I'm not sure if AMPs really lose this, so much as force us to reconsider what it actually requires to soundly prove a payment to an external verifier.

Best,
Conner

On Mon, Feb 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev <lightning-dev@lists.linuxfoundation.org> wrote:

> Good morning Christian and Corne,
>
> Another idea to consider, is techniques like ZKCP and ZKCSP, which provide atomic access to information in exchange for monetary compensation. Ensuring atomicity of the exchange can be done by providing the information encrypted, a hash of the encryption key, and proofs that the encrypted data is the one desired and that the data was encrypted with the given key; the proof-of-payment is the encryption key, and possession of the encryption key is sufficient to gain access to the information, with no need to bring in legal structures.
>
> (admittedly, ZKCP and ZKCSP are dependent on new cryptography...)
>
> (also, AMP currently cannot provide a proof-of-payment, unlike current payment routing that has proof-of-payment, but that is an eventual design goal that would enable use of ZKC(S)P on-Lightning, assuming we eventually find out that zk-SNARKs and so on are something we can trust)
>
> Regards,
> ZmnSCPxj
>
> Sent with ProtonMail Secure Email.
>
> Original Message
> On February 13, 2018 2:05 AM, Christian Decker <decker.christ...@gmail.com> wrote:
>
> > Honestly I don't get why we are complicating this so much. We have a system that allows atomic multipath payments using a single secret, and future decorrelation mechanisms allow us to vary the secret in such a way that multiple paths cannot be collated, why introduce a whole set of problems by giving away the atomicity? The same goes for the overpaying and trusting the recipient to only claim the owed amount, there is no need for this. Just pay the exact amount, by deriving secrets from the main secret and make the derivation reproducible by intermediate hops.
> >
> > Having proof-of-payment be presentable in a court is a nice feature, but it doesn't mean we need to abandon all guarantees we have worked so hard to establish in LN.
> >
> > Corné Plooy via Lightning-dev <lightning-dev@lists.linuxfoundation.org> writes:
> >
> > > I was thinking that, for that use case, a different signed invoice could be formulated, stating
> > > - several payment hashes with their corresponding amounts
> > > - the obligation of signer to deliver Z if all corresponding payment keys are shown
> > > - some terms to handle the case where only a part of the payments was successful, e.g. an obligation to refund
> > > The