Re: [Lightning-dev] Dual Funding Proposal
> A 128-bit seed in `open_channel2` could be added, with sorting by
> SHA(seed | input) and SHA(seed | output)?

`open_channel2` contains a good amount of entropy --- temporary channel
ID, various basepoints. Would not hashing `open_channel2` to get this
`seed` be sufficient?

Regards,
ZmnSCPxj

> Phew!
> Rusty.

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
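The ordering rule under discussion could look roughly like the following sketch. It assumes SHA256 and assumes the raw `open_channel2` bytes serve as the entropy source; the helper names are mine, not from any spec:

```python
import hashlib

def derive_seed(open_channel2_bytes: bytes) -> bytes:
    # Hash the whole open_channel2 message; per the mail above it already
    # carries entropy (temporary channel ID, various basepoints).
    return hashlib.sha256(open_channel2_bytes).digest()

def sort_puts(seed: bytes, puts: list) -> list:
    # Deterministically order serialized inputs/outputs by
    # SHA256(seed || put), so both peers derive the same ordering
    # without revealing which peer contributed which put.
    return sorted(puts, key=lambda p: hashlib.sha256(seed + p).digest())
```

Both peers running this over the same set of serialized puts arrive at the same transaction ordering.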
Re: [Lightning-dev] Base AMP
Good morning Rusty,

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, November 30, 2018 7:46 AM, Rusty Russell wrote:

> ZmnSCPxj zmnsc...@protonmail.com writes:
>
> > Good morning all,
> >
> > > I initially suggested we could just have a 2-byte "number of total
> > > pieces", but it turns out there's a use-case where that doesn't work
> > > well: splitting the bill. There each payer is unrelated, so doesn't
> > > know how the others are paying.
> >
> > This would also not work well in case of a dynamic algorithm that
> > greedily tries to pay the whole amount at once, then splits it if it
> > does not fit, with each split also being liable to splitting.
> > Such a dynamic algorithm would not know in the first place how many
> > splits it will take, but it will know the total amount it intends to
> > deliver.
>
> Well, that would have worked because the receiver takes *max* of the
> values received, ie, sender starts with A and B, both with
> "numpieces=2", then splits B into BA and BB, both with "numpieces=3".

Consider a network where there are 4 paths between payer and payee. 3
paths have low capacity but negligible feerate, while the 4th has high
capacity but ridiculously high feerates measurable in whole
microbitcoins. The rational thing to try, when paying a somewhat large
amount but trying to minimize fees, would be to split among the three
low-cost paths. But what if 2 of those paths fail? It would be better to
merge them into a single payment along the expensive 4th path. However,
the remaining succeeding path has already given `numpaths`=3. Using
`numpaths` overcommits to what you will do in the future, and is
unnecessary anyway. The payee is interested in the total value, not the
details of the split.

Regards,
ZmnSCPxj

> But it's bad for the separate-payer case anyway, so...
>
> Thanks,
> Rusty.
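ZmnSCPxj's point — that the payee only cares about total delivered value — reduces the receiver's completion check to something like this sketch (the function name and the list-of-msat representation are my own illustration):

```python
def base_amp_complete(partials, invoice_total):
    # The payee cares only that the summed partial payments cover the
    # invoice total, not how many pieces any sender(s) chose to split
    # into -- so no numpaths/numpieces field is needed.
    return sum(partials) >= invoice_total
```

A split that later re-splits (or re-merges onto an expensive path) needs no renegotiation with the payee under this rule.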
Re: [Lightning-dev] Dual Funding Proposal
lisa neigut writes:
> Hello fellow Lightning devs!
>
> What follows is a draft for the new dual funding flow. Note that the
> `option_will_fund_for_food` specification has been omitted for this
> draft.

Hi!

Wow, my mailer really mangled this! I've liberally demangled below as I
quote. The proposal is great, but intense, so I've bikeshedded the
language. My only objection is that I'd love to simplify RBF.

> = Proposal
>
> Create a new channel open protocol set (v2), with three new message
> types: `funding_puts2`, `commitment_signed2`, and `funding_signed2`,
> plus two for negotiating RBF, `init_rbf` and `accept_rbf`.
>
> Quick overview of the message exchange for v2:
>
>     +-----+                               +-----+
>     |     |--(1)--- open_channel2 ------->|     |
>     |     |<-(2)--- accept_channel2 ------|     |
>     |     |                               |     |
>     |     |--(3)--- funding_puts2 ------->|     |
>     |     |<-(4)--- funding_puts2 --------|     |
>     |     |                               |     |
>     |     |--(5)--- commitment_signed2 -->|     |
>     |     |<-(6)--- commitment_signed2 ---|     |
>     |  A  |                               |  B  |
>     |     |--(7)--- funding_signed2 ----->|     |
>     |     |<-(8)--- funding_signed2 ------|     |
>     |     |                               |     |
>     |     |--(a)--- init_rbf ------------>|     |
>     |     |<-(b)--- accept_rbf -----------|     |
>     |     |                               |     |
>     |     |--(9)--- funding_locked2 ----->|     |
>     |     |<-(10)-- funding_locked2 ------|     |
>     +-----+                               +-----+
>
> where node A is the ‘initiator’ and node B is the ‘dual-funder’

We currently use the terms funder and fundee, which are now inaccurate
ofc. Perhaps 'opener' and 'accepter' are not great English, but they map
to the messages well?

> Willingness to use v2 is flagged in init via `option_dual_fund`.
>
> `init`
>
> local channel feature flag, `option_dual_fund`
>
> == Channel establishment with dual_funding
>
> `open_channel2`:
> [32:chain_hash]
> … // unchanged
> [1:channel_flags]
> [?:options_tlv]

Always prefix variable fields by length, even this one. Otherwise we can
never extend, and you never know...

    [2:tlv_len]
    [tlv_len:opening_tlv]

I think we can remove `funding_satoshis` here; we'll know when they add
their inputs, so it's redundant.
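The length-prefixed layout suggested above could be serialized along these lines. This is a sketch with illustrative field widths (1-byte type, 2-byte length) rather than the normative BOLT encoding; the point is the outer `[2:tlv_len]` prefix that keeps the field skippable and extensible:

```python
import struct

def encode_opening_tlv(records):
    # records: list of (type, value_bytes) pairs. A reader that does not
    # understand the stream can still skip it wholesale, because the
    # outer 2-byte length says exactly how many bytes to consume.
    body = b"".join(struct.pack(">BH", t, len(v)) + v
                    for t, v in records)
    return struct.pack(">H", len(body)) + body
```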
Another subtle point is the feerate_per_kw field; in the old scheme it
applied to the first commitment tx, but here it applies to both the
first commitment tx and the funding tx itself (unless
option_simplified_commitment, though roasbeef has suggested further
splitting that option, in which case we'll want another fee field here).

> options_tlv:

Let's call this `opening_tlv` since there are other TLVs coming?

> 1. Type: 1 `option_upfront_shutdown_script`
> 2. Value:
>    * [`2`:`len`]
>    * [`len`:`shutdown_scriptpubkey`]
>
> If nodes have negotiated `option_dual_fund`
> The sending node:
> - MAY begin channel establishment using `open_channel2`
> - MUST NOT send `open_channel`.
>
> Otherwise the receiving node:
> - MUST return an error.

This is a requirement for receiving `open_channel` IIUC? ie.

    The receiving node MUST fail the channel if:
    ...
    - `option_dual_fund` has been negotiated.

> `accept_channel2`:
>
> [32:temporary_channel_id]
> … // unchanged
> [33:first_per_commitment_point]
> [?:options_tlv]

If we call this `opening_tlv` we can just reuse the definition from
before.

> `funding_puts2`

We can probably drop the 2 here and call it, um.. `funding_compose`?
(Thanks thesaurus.com). I get where you're going with 'puts', but it
took me a while :)

> This message exchanges the input and output information necessary to
> compose the funding transaction.
>
> [32:temporary_channel_id]
> [`2`:`num_inputs`]
> [`num_inputs`*`input_info`]
> [`2`:`num_outputs`]
> [`num_outputs`*`ouput_info`]

typo: output_info

> 1. subtype: `input_info`
> 2. data:
>    * [`8`:`satoshis`]
>    * [`32`:`prevtxid`]
>    * [`4`:`prevtxoutnum`]
>    * [`2`:`scriptlen`]
>    * [`scriptlen`:`script`]
>    * [`2`:`max_extra_witness_len`]
>    * [`2`:`wscriptlen`]
>    * [`wscriptlen`:`wscript`]
>
> 1. subtype: `output_info`
> 2. data:
>    * [`8`:`satoshis`]
>    * [`2`:`scriptlen`]
>    * [`scriptlen`:`script`]
>
> Requirements:
>
> The sending node:
>
> - MUST ensure each `input_info` refers to an existing UTXO
> - MUST ensure the `output_info`.`script` is a standard script
> - MUST NOT spend any UTXOs specified in funding_puts2 until/unless the
>   channel establishment has failed
>
> If is the initiator (A):
> - MUST NOT send an empty message (`num_inputs` + `num_outputs` = 0)
>
> If is the dual-funder (B):
> - consider the `put_limit` the total number of `num_inputs` plus
>   `num_outputs` from `funding_puts2`, with minimum 2.
> - MUST NOT send a number of `input_data` and/or `output_data` which
>   exceeds the `put_limit`

Side note:
Re: [Lightning-dev] Dual Funding Proposal
lisa neigut writes:
>> * [`2`:`scriptlen`]
>> * [`scriptlen`:`script`]
>> * [`2`:`max_extra_witness_len`]
>> * [`2`:`wscriptlen`]
>> * [`wscriptlen`:`wscript`]
>>
>> `script` here is the `scriptPubKey`? This is needed for `hashPrevouts`
>> in BIP143 I believe.
>>
>> What is the `wscript`? Is this the `scriptCode` in BIP143?

I was thinking the BIP141 witnessScript; this mixes weirdly with P2WPKH
though, where the witnessScript does not really exist. So I guess an
empty witnessScript means P2WPKH.

>> Are non-SegWit inputs disallowed?

No, since we'd have a malleability problem :(

Cheers,
Rusty.
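For concreteness, the `input_info` record discussed in this thread could be serialized as below. This is purely illustrative (big-endian, length-prefixed scripts, an empty `wscript` signalling P2WPKH per Rusty's reading); it is not a normative encoding:

```python
import struct

def encode_input_info(satoshis, prevtxid, prevtxoutnum, script,
                      max_extra_witness_len, wscript):
    # Fields in the order given by the draft: 8-byte amount, 32-byte
    # prevout txid, 4-byte prevout index, length-prefixed scriptPubKey,
    # 2-byte witness-size bound, length-prefixed witnessScript.
    assert len(prevtxid) == 32
    return (struct.pack(">Q", satoshis)
            + prevtxid
            + struct.pack(">I", prevtxoutnum)
            + struct.pack(">H", len(script)) + script
            + struct.pack(">H", max_extra_witness_len)
            + struct.pack(">H", len(wscript)) + wscript)
```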
Re: [Lightning-dev] Base AMP
ZmnSCPxj writes:
> Good morning all,
>
>> I initially suggested we could just have a 2-byte "number of total
>> pieces", but it turns out there's a use-case where that doesn't work
>> well: splitting the bill. There each payer is unrelated, so doesn't
>> know how the others are paying.
>
> This would also not work well in case of a dynamic algorithm that
> greedily tries to pay the whole amount at once, then splits it if it
> does not fit, with each split also being liable to splitting.
> Such a dynamic algorithm would not know in the first place how many
> splits it will take, but it *will* know the total amount it intends to
> deliver.

Well, that would have worked because the receiver takes *max* of the
values received, ie, sender starts with A and B, both with
"numpieces=2", then splits B into BA and BB, both with "numpieces=3".

But it's bad for the separate-payer case anyway, so...

Thanks,
Rusty.
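Rusty's "receiver takes max" rule can be sketched as follows (function name and the (amount, numpieces) pair representation are my own illustration):

```python
def pieces_expected(received):
    # received: list of (amount_msat, numpieces) pairs seen so far.
    # The receiver takes the *max* of the advertised numpieces values,
    # so a sender who splits further just bumps the count on the later
    # pieces: A and B arrive at numpieces=2, then B is re-split into
    # BA and BB carrying numpieces=3, and the receiver now expects 3.
    return max(numpieces for _, numpieces in received)
```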
[Lightning-dev] CPFP Carve-Out for Fee-Prediction Issues in Contracting Applications (eg Lightning)
(cross-posted to both lists to make lightning-dev folks aware, please
take lightning-dev off CC when responding).

As I'm sure everyone is aware, Lightning (and other similar systems)
work by exchanging pre-signed transactions for future broadcast. Of
course in many cases this requires either (a) predicting what the
feerate required for timely confirmation will be at some (or, really,
any) point in the future, or (b) utilizing CPFP and dependent
transaction relay to allow parties to broadcast low-feerate transactions
with children created at broadcast-time to increase the effective
feerate. Ideally transactions could be constructed to allow for
after-the-fact addition of inputs to increase fee without CPFP, but it
is not always possible to do so.

Option (a) is rather obviously intractable, and implementation
complexity has led to channel failures in lightning in practice (as both
sides must agree on a reasonable-in-the-future feerate). Option (b) is a
much more natural choice (assuming some form of as-yet-unimplemented
package relay on the P2P network) but is made difficult due to
complexity around RBF/CPFP anti-DoS rules.

For example, if we take a simplified lightning design with pre-signed
commitment transaction A with one 0-value anyone-can-spend output
available for use as a CPFP output, a counterparty can prevent
confirmation of/significantly increase the fee cost of confirming A by
chaining a large-but-only-moderate-feerate transaction off of this
anyone-can-spend output. This transaction, B, will have a large absolute
fee while making the package (A, B) have a low-ish feerate, placing it
solidly at the bottom of the mempool but without significant risk of it
getting evicted during memory limiting. This large absolute fee forces a
counterparty which wishes to have the commitment transaction confirm to
pay more than this absolute fee in order to meet RBF rules.
For this reason (and many other similar attacks utilizing the package
size limits), in discussing the security model around CPFP, we've
generally considered it too difficult to prevent third parties which are
able to spend an output of a transaction from delaying its confirmation,
at least until/unless the prevailing feerates decline and some of the
mempool backlog gets confirmed.

You'll note, however, that this attack doesn't have to be permanent to
work - Lightning's (and other contracting/payment channel systems')
security model assumes the ability to get such commitment transactions
confirmed in a timely manner, as otherwise HTLCs may time out and
counterparties can claim the timeout-refund before we can claim the HTLC
using the hash-preimage.

To partially address the CPFP security model considerations, a next step
might involve tweaking Lightning's commitment transaction to have two
small-value outputs which are immediately spendable, one by each channel
participant, allowing them to chain children off without allowing
unrelated third parties to chain children. Obviously this does not
address the specific attack, so we need a small tweak to the anti-DoS
CPFP rules in Bitcoin Core/BIP 125:

The last transaction which is added to a package of dependent
transactions in the mempool must:
* Have no more than one unconfirmed parent,
* Be of size no greater than 1K in virtual size.

(for implementation sanity, this would effectively reduce all mempool
package size limits by one 1K-virtual-size transaction, and the last
would be "allowed to violate the limits" as long as it meets the above
criteria).
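The proposed carve-out condition can be sketched as a simple predicate. This is a hypothetical illustration of the rule as stated in this post, not Bitcoin Core's actual implementation:

```python
MAX_CARVE_OUT_VSIZE = 1000  # "no greater than 1K in virtual size"

def carve_out_applies(unconfirmed_parents, tx_vsize):
    # The last transaction added to a mempool package may exceed the
    # usual package limits iff it has at most one unconfirmed parent
    # and is itself small.
    return (unconfirmed_parents <= 1
            and tx_vsize <= MAX_CARVE_OUT_VSIZE)
```

An attacker's chain of children off the other anchor output fails the one-unconfirmed-parent test, so the honest party's single small CPFP child still gets in.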
For contracting applications like lightning, this means that as long as
the transaction we wish to confirm (in this case the commitment
transaction)
* has only two immediately-spendable (ie non-CSV) outputs,
* where each immediately-spendable output is only spendable by one
  counterparty,
* and is no larger than MAX_PACKAGE_VIRTUAL_SIZE - 1001 vsize,
each counterparty will always be able to independently CPFP the
transaction in question. ie because if the "malicious" (ie
transaction-delaying) party broadcasts A with a child, it can never meet
the "last transaction" carve-out, as its transaction cannot both meet
the package limit and have only one unconfirmed ancestor. Thus, the
non-delaying counterparty can always independently add its own CPFP
transaction, increasing the (A, Tx2) package feerate and confirming A
without having to concern themselves with the (A, Tx1) package.

As an alternative proposal, at various points there have been
discussions around solving the "RBF-pinning" problem by allowing
transactors to mark their transactions as "likely-to-be-RBF'ed", which
could enable a relay policy where children of such transactions would be
rejected unless the resulting package would be "near the top of the
mempool". This would theoretically imply such attacks are not possible
to pull off consistently, as any "transaction-delaying" channel
participant will have to place the
Re: [Lightning-dev] Reason for having HMACs in Sphinx
Hi Corne,

the HMACs are necessary in order to make sure that a hop cannot modify
the packet before forwarding, with the next node not detecting that
modification. One potential attack that this could facilitate is that an
attacker could learn the path length by messing with different per-hop
payloads: starting with n=0, the attacker flips bits in the nth per-hop
payload and forwards it. If the next node doesn't return an error, it
was the final recipient; if it returns an error, increment n and flip
bits in the (n+1)th per-hop payload, until no error is returned.
Congratulations, you just learned the path length after you.

The same can probably be done with the error packet, meaning you can
learn the exact position in the route. Add to that the information you
already know about the network (cltv_deltas, amounts, fees, ...) and you
can probably detect sender and recipient.

Adding HMACs solves this by ensuring that the next hop will return an
error if anything was changed, i.e., removing the leak about which node
would have failed the route.

Cheers,
Christian

Corné Plooy via Lightning-dev writes:
> Hi,
>
> Is there a reason why we have HMACs in Sphinx? What could go wrong if
> we didn't?
>
> A receiving node doesn't know anyway what the origin node is; I don't
> see any attack mode where an attacker wouldn't be able to generate a
> valid HMAC.
>
> A receiving node only knows which peer sent it a Sphinx packet;
> verification that this peer really sent this Sphinx packet is (I
> think) already done on a lower protocol layer.
>
> AFAICS, the only real use case of the HMAC value is the special case
> of a 0-valued HMAC, indicating the end of the route. But that's just
> silly: it's essentially a boolean, not any kind of cryptographic
> verification.
>
> CJP
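Christian's probing attack can be sketched as a loop. All names here are hypothetical; `forward_with_corruption(n)` stands for "corrupt the nth per-hop payload, forward the onion, and report whether an error came back":

```python
def probe_path_length(forward_with_corruption, max_hops=20):
    # Without per-hop HMACs, each intermediate hop forwards a corrupted
    # payload without noticing, and only the node whose payload was
    # mangled misbehaves. The first n producing *no* error marks the
    # final recipient, revealing the path length after the attacker.
    for n in range(max_hops):
        if not forward_with_corruption(n):
            return n
    return None
```

With HMACs, the very first hop after the attacker rejects any corruption, so every n returns an error and the loop learns nothing.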
Re: [Lightning-dev] [PATCH] First draft of option_simplfied_commitment
For the low low cost of 3 witness bytes, I think the simplification of
analysis/separation of concerns is worth it, though I agree it is
probably not strictly required.

On 11/26/18 3:12 AM, Rusty Russell wrote:
> Matt Corallo writes:
> > Hmm, are we willing to consider CLTV sufficient? In case you have
> > two HTLCs, one of medium-small value that has a low CLTV and one of
> > high value that has a higher CLTV, you could potentially use the
> > soon-CLTV to delay the commitment transaction somewhat further if
> > you broadcast it right as the sooner HTLC expires.
>
> I think if you haven't got the commitment tx onchain by the time the
> HTLC expires, you're already in trouble.
>
> But since there's no script length difference, it *is* simpler to
> prepend `1 OP_CHECKSEQUENCEVERIFY OP_DROP` to the start of each
> script.
>
> Cheers,
> Rusty.
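The 3 witness bytes in question are the three opcodes Rusty names. A minimal sketch of the prepend (opcode values are the standard Bitcoin Script encodings; the helper name is mine):

```python
# Standard Bitcoin Script opcode encodings:
OP_1 = 0x51
OP_CHECKSEQUENCEVERIFY = 0xB2
OP_DROP = 0x75

def add_one_block_csv(script: bytes) -> bytes:
    # Prepend `1 OP_CHECKSEQUENCEVERIFY OP_DROP`, requiring the spending
    # input's nSequence to encode a relative lock of at least 1 block,
    # at a cost of exactly 3 extra bytes.
    return bytes([OP_1, OP_CHECKSEQUENCEVERIFY, OP_DROP]) + script
```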
Re: [Lightning-dev] Reason for having HMACs in Sphinx
Hey CJP,

I am still not 100% through the Sphinx paper, so it would be great if at
least another pair of eyes could look at this. However, from the
original Sphinx paper I quote:

"Besides extracting the shared key, each mix has to be provided with
authentic and confidential routing information to direct the message to
the subsequent mix, or to its final destination. We achieve this by a
simple encrypt-then-MAC mechanism. A secure stream cipher or AES in
counter mode is used for encryption, and a secure MAC (with some strong
but standard properties) is used to ensure no part of the message header
containing routing information has been modified. Some padding has to be
added at each mix stage, in order to keep the length of the message
invariant at each hop."

At first I thought this would mean that the HMAC ensures that the
previous hop cannot change the routing information, which was the first
answer that I wanted to give. However, I am confused now too. The HMAC
commits to the next onion. So if the entire onion was exchanged and a
new HMAC was provided (as you suggest), the processing hop would not
know this. Such a use case would obviously lead to a routing scenario
which would not succeed and would hardly be useful (unless the previous
hop plans a reverse DoS attack from error messages or some other
sabotage attack, which are referenced in the Sphinx paper but not
discussed explicitly).

On second thought, I reviewed chapter 2.1 of the Sphinx paper, in which
the threat model for attackers is described. As far as I understand that
section, one attack vector for which the HMAC shall help is
man-in-the-middle attacks. If HMACs are being used, some bit flipping by
a man in the middle would be detected. However, I think if a man in the
middle speaks the BOLT protocol, they could exchange the entire package
and provide a new HMAC as a previous hop could do. Also, the threat
model only speaks about the security of the message, not so much about
the reliability of the protocol.
I believe it is quite clear that if a routing node wants to manipulate
the onion, they can do so, in the same way that they can decide not to
forward the onion.

--> So while the mix network itself can make sure that no wrong messages
are delivered, it cannot make sure that messages (which are unseen and
unknown from where they came) are not intercepted.

Besides the bit-flipping use case that I mentioned, I agree with your
criticism and also don't see the necessity of the HMAC anymore. The
message is encrypted anyway, and if bits are flipped, the decrypted
version will just be badly formatted. If the header was manipulated, the
next hop would not be able to decrypt.

Best regards
Rene

On Thu, Nov 29, 2018, 16:31 Corné Plooy via Lightning-dev <
lightning-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> Is there a reason why we have HMACs in Sphinx? What could go wrong if
> we didn't?
>
> A receiving node doesn't know anyway what the origin node is; I don't
> see any attack mode where an attacker wouldn't be able to generate a
> valid HMAC.
>
> A receiving node only knows which peer sent it a Sphinx packet;
> verification that this peer really sent this Sphinx packet is (I
> think) already done on a lower protocol layer.
>
> AFAICS, the only real use case of the HMAC value is the special case
> of a 0-valued HMAC, indicating the end of the route. But that's just
> silly: it's essentially a boolean, not any kind of cryptographic
> verification.
>
> CJP
[Lightning-dev] Reason for having HMACs in Sphinx
Hi,

Is there a reason why we have HMACs in Sphinx? What could go wrong if we
didn't?

A receiving node doesn't know anyway what the origin node is; I don't
see any attack mode where an attacker wouldn't be able to generate a
valid HMAC.

A receiving node only knows which peer sent it a Sphinx packet;
verification that this peer really sent this Sphinx packet is (I think)
already done on a lower protocol layer.

AFAICS, the only real use case of the HMAC value is the special case of
a 0-valued HMAC, indicating the end of the route. But that's just silly:
it's essentially a boolean, not any kind of cryptographic verification.

CJP