Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay
Hi Ruben,

> Also, the people that are responsible for the current shape of RGB aren't
> the people who originated the idea, so it would not be fair to the
> originators either (Peter Todd, Alekos Filini, Giacomo Zucco).

Sure, I have no problem acknowledging them in the current BIP draft. Both
protocols build on ideas around client-side validation, but end up exploring
different parts of the large design space. Peter Todd is already mentioned,
but I can add the others you've listed. I might even expand that section into
a longer "Related Work" section along the way.

> What I tried to say was that it does not make sense to build scripting
> support into Taro, because you can't actually do anything interesting with
> it due to this limitation. can do with their own Taro tokens, or else he
> will burn them – not very useful

I agree that the usage will be somewhat context specific, and dependent on
the security properties one is after. In the current purposefully simplified
version, it's correct that ignoring the rules leads to assets being burnt,
but in most cases IMO that's a sufficient incentive to maintain and validate
the relevant set of witnesses.

I was thinking about the scripting layer a bit over the weekend, and came up
with an "issuance covenant" design sketch that may or may not be useful. At a
high level, let's say we extend the system to allow a specified (so a new
asset type) or generalized script to be validated when an asset issuance
transaction is being validated. If we add some new domain-specific covenant
opcodes at the Taro level, then we'd be able to validate issuance events
like:

* "Issuing N units of this asset can only be done if 1.5*N units of BTC are
  present in the nth output of the minting transaction. In addition, the
  output created must commit to a NUMs point for the internal key, meaning
  that only a script path is possible.
  The script paths must be revealed, with the only acceptable unlocking leaf
  being a time lock of 9 months."

I don't yet have a concrete protocol that would use something like that, but
it was an attempt to express certain collateralization requirements for
issuing certain assets. Verifiers would only recognize the asset if the
issuance covenant script passes, and (perhaps) the absolute timelock on those
coins hasn't expired yet. This seems like a useful primitive for creating
assets that are somehow backed by on-chain BTC collateralization. However,
this is just a design sketch that still needs to answer questions like:

* Are the assets still valid after that timeout period, or are they
  considered to be burnt?

* Assuming that the "asset key family" (used to authorize issuance of
  related assets) is jointly owned, and maintained in a canonical Universe,
  would it be possible for third parties to verify the level of
  collateralization on-chain, with the joint parties maintaining the pool of
  collateralized assets accordingly?

* Continuing with the above, is it feasible to use a DLC script within one
  of these fixed tapscript leaves to allow more collateral to be
  added/removed from the pool backing those assets?

I think it's too early to conclude that the scripting layer isn't useful.
Over time I plan to add more concrete ideas like the above to the section
tracking the types of applications that can be built on Taro.

> So theoretically you could get Bitcoin covenants to enforce certain
> spending conditions on Taro assets. Not sure how practical that ends up
> being, but intriguing to consider.

Exactly! How practical it ends up being would depend on the types of
covenants deployed in the future. With something like TLUV and OP_CAT (as
they're sufficiently generalized, vs adding opcodes to verify the proofs
directly), a script would be able to re-create the set of commitments to
restrict the set of outputs that can be created after spending.
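As a rough illustration of the OP_CAT half of that idea: a verifier (or, conceptually, a future Script program) rebuilds a commitment root by concatenating and hashing sibling nodes along a merkle inclusion path. This is a minimal Python sketch under assumed hashing and ordering conventions, not the actual Taro commitment scheme:

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


# OP_CAT in Script would concatenate the two sibling hashes on the stack
# before hashing; here we mirror that operation to recompute a commitment
# root from a merkle inclusion path. The hash function and child ordering
# are assumptions, not taken from the Taro BIPs.
def recompute_root(leaf: bytes, path: list[tuple[bytes, bool]]) -> bytes:
    """path is a list of (sibling_hash, current_is_left) pairs, leaf first."""
    node = sha256(leaf)
    for sibling, current_is_left in path:
        if current_is_left:
            node = sha256(node + sibling)
        else:
            node = sha256(sibling + node)
    return node
```

A covenant script doing this on-stack could then compare the recomputed root against the commitment carried in the spending transaction's outputs.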
One would use OP_CAT to handle re-creating the Taro asset root, and TLUV (or
something similar) to handle the Bitcoin tapscript part (swap out leaf index
0 where the Taro commitment is, etc).

> The above also reminds me of another potential issue which you need to be
> aware of, if you're not already. Similar to my comment about how the
> location of the Taro tree inside the taproot tree needs to be
> deterministic for the verifier, the output in which you place the Taro
> tree also needs to be

Yep, the location needs to be fully specified, which includes factoring in
the output index as well. A simple way to restrict this would just be to say
it's always the first output. Otherwise, you could lift the output index
into the asset ID calculation.

-- Laolu

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
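Lifting the output index into the asset ID calculation, as suggested above, could look something like this sketch. The field set and serialization here are purely illustrative, not the BIP's actual asset ID derivation:

```python
import hashlib


def asset_id(genesis_txid: bytes, output_index: int, asset_tag: bytes) -> bytes:
    """Bind the asset ID to the exact output carrying the Taro tree: moving
    the commitment to a different output changes the derived ID, so proofs
    built against the original ID no longer validate.

    Illustrative serialization only (txid || 4-byte index || tag)."""
    payload = genesis_txid + output_index.to_bytes(4, "big") + asset_tag
    return hashlib.sha256(payload).digest()
```

With the index folded in this way, the verifier no longer needs a fixed "always the first output" rule, at the cost of carrying the index in the proof.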
Re: [Lightning-dev] Taro: A Taproot Asset Representation Overlay
Hi Harding,

Great questions!

> anything about Taro or the way you plan to implement support for
> transferring fungible assets via asset-aware LN endpoints[1] will address
> the "free call option" problem, which I think was first discussed on this
> list by Corné Plooy[2] and was later extended by ZmnSCPxj[3], with Tamas
> Blummer[4] providing the following summary

I agree with Tamas' quote there in that the problem doesn't exist for
transfers using the same asset. Consider a case of Alice sending to Bob,
with both of them using a hypothetical asset, USD-beef: if the final/last
hop withholds the HTLC, then they risk Bob not accepting the HTLC, either
due to the payment timing out, or exchange rate fluctuations resulting in an
insufficient amount delivered to the destination (Bob wanted 10 USD-beef,
but the bound BTC in the onion route is now only worth 9 USD-beef). In
either case the payment would be cancelled.

> I know several attempts at mitigation have previously been discussed on
> this list, such as barrier escrows[5], so I'm curious whether it's your
> plan to use one of those existing mitigations, suggest a new mitigation,
> or just not worry about it at this point (as Blummer mentioned, it
> probably doesn't matter for swaps where price volatility is lower than fee
> income).

I'd say our current plan is a combination of: not worrying about it at this
point, relying on proper pricing of the spread/fee rate that exists at the
first/last mile, and potentially introducing an upfront payment as well if
issues pop up (precise option pricing would still need to be worked out).

One side benefit of introducing this upfront payment at the edges (the idea
is that the asset channels are all private channels from the LN perspective,
so a hop hint/blinded path is needed to route to them) is that it presents a
controlled experiment where we can toy with the mechanics of such upfront
payment proposals (which are a lot simpler since there's just one hop to
factor in).
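The exchange-rate failure mode described above amounts to a simple check at the final hop. A toy sketch, with hypothetical names and units (the asset, rate source, and amounts are all illustrative):

```python
def accept_htlc(btc_in_onion: float, usd_beef_per_btc: float,
                invoiced_usd_beef: float) -> bool:
    """Bob's side of the example: accept the incoming HTLC only if the BTC
    bound in the onion still converts to at least the invoiced amount of
    the asset. Otherwise the payment is cancelled, which is what removes
    the free-option value for same-asset transfers."""
    delivered = btc_in_onion * usd_beef_per_btc
    return delivered >= invoiced_usd_beef
```

If the rate moves so that the onion's BTC is now worth only 9 USD-beef against a 10 USD-beef invoice, the check fails and the HTLC is rejected.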
Another difference here vs past attempts/proposals is that since all the
assets are at the edges, identifying a party creating long-lived HTLCs that
cross an asset boundary is much simpler: the origin party is likely the one
sending those payments. This makes it easier to detect abuse and stop
forwarding those HTLCs (or close the channel), as unlike the prior
generalized LN-DEX ideas, the origin will always be that first hop.

I think another open question is exactly how a nuisance party would take
advantage of this opportunity:

* Do they close out the channel and instead go to a traditional exchange to
  make that arbitrage trade? What guarantee do they have that their deposit
  gets there in time and that they're able to profit?

* Do they instead attempt to re-route the swap to use some other market
  maker elsewhere in the network? In this case, won't things just recurse,
  with each party in the chain attempting to exploit the arbitrage trade?

IMO, as long as the spread/fees make sense at the last/first mile, then the
parties are incentivized to carry out the transfers, as they have more
certainty w.r.t revenues from the fees vs needing to rely on an arbitrage
trade that may or may not exist when they go to actually exploit it.

> I'd also be curious to learn what you and others on this list think will
> be different about using Taro versus other attempts to get LN channels to
> cross assets, e.g. allowing payments to be routed from a BTC-based channel
> to a Liquid-BTC-based channel through an LN bridge node. I believe a fair
> amount of work in LN's early design and implementation was aimed at
> allowing cross-chain transfers but either because of the complexity, the
> free call option problem, or some other problem, it seemed to me that work
> on the problem was largely abandoned.

I think the main difference with our approach is that the LN Bitcoin
backbone won't explicitly be aware of the existence of any of the assets.
As a result, we won't need core changes to the channel_update message, nor a
global value carved out in the "realm" field (instead, the scid alias
feature can be used to identify which channel should be used to complete the
route), which was meant to be used to identify public LN routes that crossed
chains.

One other difference with our approach is that, given all the assets are
represented on Bitcoin itself, we don't need to worry about things like the
other chain being down, translating time lock values, navigating forks
across several chains, etc. As a result, the software can be a lot simpler,
as everything is anchored in the Bitcoin chain, and we don't actually need
to build in N different wallets, which would really blow up the complexity.

I think most of the other attempts were also focused on being able to
emulate DEX-like functionality over the network. In contrast, we're
concerned mainly with payments, though I can see others attempting to tackle
building out off-chain DEX systems on this new protocol base.

-- Laolu
Re: [Lightning-dev] [bitcoin-dev] Taro: A Taproot Asset Representation Overlay
https://twitter.com/dr_orlovsky/status/1513555717218873355?s=21&t=NbHfD-n1NEu8Gdh-dOPifA

You do not deserve any other form of answer.

On Tue, Apr 5, 2022 at 5:06 PM, Olaoluwa Osuntokun via bitcoin-dev wrote:

> Hi y'all,
>
> I'm excited to publicly publish a new protocol I've been working on over
> the past few months: Taro. Taro is a Taproot Asset Representation Overlay
> which allows the issuance of normal and also collectible assets on the
> main Bitcoin chain. Taro uses the Taproot script tree to commit to extra
> structured asset metadata based on a hybrid merkle tree I call a Merkle
> Sum Sparse Merkle Tree, or MS-SMT. An MS-SMT combines the properties of a
> merkle sum tree with a sparse merkle tree, enabling things like easily
> verifiable asset supply proofs and also efficient proofs of non-existence
> (eg: you prove to me you're no longer committing to the 1-of-1 holographic
> beefzard card during our swap). Taro asset transfers are then embedded in
> a virtual/overlay transaction graph which uses a chain of asset witnesses
> to provably track the transfer of assets across taproot outputs. Taro
> also has a scripting system, which allows for programmatic
> unlocking/transfer of assets. In the first version, the scripting system
> is actually a recursive instance of the Bitcoin Script Taproot VM, meaning
> anything that can be expressed in the latest version of Script can be
> expressed in the Taro scripting system. Future versions of the scripting
> system can introduce new functionality on the Taro layer, like covenants
> or other updates.
>
> The Taro design also supports integration with the Lightning Network
> (BOLTs), as the scripting system can be used to emulate the existing HTLC
> structure, which allows for multi-hop transfers of Taro assets.
> Rather than modify the internal network, the protocol proposes to instead
> only recognize "assets at the edges", which means that only the
> sender+receiver actually need to know about and validate the assets. This
> deployment route means that we don't need to build up an entirely new
> network and liquidity for each asset. Instead, all asset transfers will
> utilize the Bitcoin backbone of the Lightning Network, which means that
> the internal routers just see Bitcoin transfers as normal, and don't even
> know about the assets at the edges. As a result, increased demand for
> transfers of these assets at the edges (say, a USD stablecoin) will in
> turn generate increased demand for LN capacity, resulting in more
> transfers and also more routing revenue for the Bitcoin backbone nodes.
>
> The set of BIPs are a multi-part suite, with the following breakdown:
>
> * The main Taro protocol:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro.mediawiki
> * The MS-SMT structure:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-ms-smt.mediawiki
> * The Taro VM:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-vm.mediawiki
> * The Taro address format:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-addr.mediawiki
> * The Taro Universe concept:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-universe.mediawiki
> * The Taro flat file proof format:
>   https://github.com/Roasbeef/bips/blob/bip-taro/bip-taro-proof-file.mediawiki
>
> Rather than post them all in line (as the text wouldn't fit in the allowed
> size limit), all the BIPs can be found above.
>
> -- Laolu

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
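For intuition on the MS-SMT mentioned in the quoted announcement: each branch node commits to its children's hashes and to the sum of asset units beneath them, which is what makes total-supply proofs cheap to verify. A minimal sketch; the node serialization here is an assumption, not the encoding defined in bip-taro-ms-smt:

```python
import hashlib


def ms_smt_branch(left: tuple[bytes, int],
                  right: tuple[bytes, int]) -> tuple[bytes, int]:
    """Combine two (hash, sum) child nodes into a parent (hash, sum).

    The parent's hash commits to both child hashes *and* both child sums,
    so the root of the tree attests to the total number of asset units
    below it; tampering with any sum changes the root. Serialization
    (8-byte big-endian sums) is illustrative only."""
    (left_hash, left_sum), (right_hash, right_sum) = left, right
    total = left_sum + right_sum
    digest = hashlib.sha256(left_hash + left_sum.to_bytes(8, "big") +
                            right_hash + right_sum.to_bytes(8, "big")).digest()
    return digest, total
```

A supply proof then reduces to checking that the sums along a merkle path add up consistently to the root's committed total.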
Re: [Lightning-dev] [bitcoin-dev] [Pre-BIP] Fee Accounts
> nonsense marketing

I'm sure the people who are confused about "blockchain schemes as \"world
computers\" and other nonsense marketing" are avid and regular readers of
the bitcoin-dev mailing list, so I offer my sincerest apologies to all
members of the intersection of those sets who were confused by the
description given.

> useless work

Progress is not useless work, it *is* useful work in this context. You have
committed to some subset of data that you requested -- if it was "useless",
why did you *ever* bother to commit it in the first place? However, it is
not "maximally useful" in some sense. But progress is progress -- suppose
you only confirmed 50% of the commitments, is that not progress? If you
just happened to observe 50% of the commitments confirm because of proximity
to the time a block was mined and natural tx propagation, would you call it
useless?

> Remember that OTS simply proves data in the past. Nothing more.
> OTS doesn't have a chain of transactions

Gotcha -- I've not been able to find an actual spec of OpenTimestamps
anywhere, so I suppose I just assumed based on how I think it *should* work.
Having a chain of transactions would serve to linearize the history of OTS
commitments, which would let you prove, given reorgs, that knowledge of
commit A came before B a bit more robustly.

> I'd rather do one transaction with all pending commitments at a particular
> time rather than waste money on mining two transactions for a given set of
> commitments

This sounds like a personal preference vs. a technical requirement. You
aren't doing any extra transactions in the model I showed; what you're
doing is selecting the window for the next transaction based on the prior
confirmation. See the diagram below: you would have to (if OTS is correct)
support this sort of "attempt/confirm" head that tracks attempted
commitments and confirmed ones, and "rewinds" after a confirm so that the
next commit contains the prior attempts that didn't make it.

[.]
--^ confirm head tx 0 at height 34
^ attempt head after tx 0
---^ confirm head tx 1 at height 35
--^ attempt head after tx 1
^ confirm head tx 2 at height 36
---^ attempt head after tx 2
---^ confirm head tx 3 at height 37

You can compare this to a "spherical cow" model where RBF is always perfect
and inclusion is guaranteed:

[.]
--^ confirm head tx 0 at height 34
-^ confirm head tx 1 at height 35
---^ confirm head at tx 1 height 36
-^ confirm head tx 3 at height 37

The same number of transactions gets used over the time period.

___
Lightning-dev mailing list
Lightning-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
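The attempt/confirm-head bookkeeping in the diagram can be simulated in a few lines. This is a toy model only (one tx broadcast per block, confirming exactly one block later, no failed attempts), written just to show the transaction count matches the spherical-cow RBF model:

```python
def attempt_confirm_tx_count(arrivals: list[list[str]]) -> int:
    """arrivals[i] is the list of commitments arriving in block i.

    Each block we broadcast one tx committing to everything pending (the
    attempt head); it confirms a block later, at which point the confirmed
    commitments drop out (the confirm head advances) and the next tx covers
    newer arrivals plus anything that missed the previous window."""
    txs = 0
    pending: list[str] = []
    in_flight = 0  # how many pending commitments the last tx covered
    for new_commitments in arrivals:
        # the previous tx confirms: rewind past what it committed
        pending = pending[in_flight:] + new_commitments
        in_flight = len(pending)
        if pending:
            txs += 1
    return txs
```

Under these assumptions both models land one transaction per block over the window, matching the closing point that the same number of transactions gets used over the time period.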