Hi all,
Most light wallets will want to download the minimum amount of data required to
operate, which means they would ideally download the smallest possible filters
containing the subset of elements they need.
What if instead of trying to decide up front which subset of elements will be
most u
Maybe I didn't make it clear, but the distinction is that the current track
allocates one service bit for each "filter type", where it has to be agreed
upon up front what elements such a filter type contains.
My suggestion was to advertise a bitfield for each filter type the node
serves, where the
Thanks, Jimpo!
This is very encouraging, I think. I sorta assumed that separating the
elements into their own sub-filters would hurt the compression a lot more.
Can the compression ratio/false positive rate be tweaked with the
sub-filters in mind?
With the total size of the separated filters bein
Reviving this old thread now that the recently released RC for bitcoind
0.19 includes the above mentioned carve-out rule.
In an attempt to pave the way for more robust CPFP of on-chain contracts
(Lightning commitment transactions), the carve-out rule was added in
https://github.com/bitcoin/bitcoin
It essentially changes the rule to always allow CPFP-ing the commitment as
long as there is an output available without any descendants. It changes
the commitment from "you always need at least, and exactly, one non-CSV
output per party" to "you always need at least one non-CSV output per
party."
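As a rough sketch of the rule as described above (this is a simplified model, not bitcoind's actual acceptance code; the constants mirror the commonly cited default policy limits):

```python
# Simplified model of CPFP carve-out acceptance: a child that would
# exceed the descendant limit is still accepted if it is small and has
# exactly one unconfirmed ancestor (the commitment transaction).
MAX_DESCENDANTS = 25
CARVE_OUT_MAX_VSIZE = 10_000  # vbytes

def accept_child(parent_descendant_count: int,
                 child_vsize: int,
                 child_ancestor_count: int) -> bool:
    if parent_descendant_count < MAX_DESCENDANTS:
        return True  # within the normal package limits
    # Carve-out: one extra small child, no other unconfirmed ancestors.
    return (child_vsize <= CARVE_OUT_MAX_VSIZE
            and child_ancestor_count == 1)
```

This is why a counterparty filling the package on one output cannot block a small anchor spend of the other, descendant-free output.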
>
>
> I don’t see how? Let’s imagine Party A has two spendable outputs, now
> they stuff the package size on one of their spendable outputs until it is
> right at the limit, add one more on their other output (to meet the
> carve-out), and now Party B can’t do anything.
Matt: With the proposed ch
On Mon, Oct 28, 2019 at 6:16 PM David A. Harding wrote:
> A parent transaction near the limit of 100,000 vbytes could have almost
> 10,000 outputs paying OP_TRUE (10 vbytes per output). If the children
> were limited to 10,000 vbytes each (the current max carve-out size),
> that allows relaying
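Working through the arithmetic of the quoted example (assuming roughly 10 vbytes per OP_TRUE output and a 10,000-vbyte cap per carve-out child, as stated above):

```python
# Worst-case relay volume for one package under the quoted assumptions.
PARENT_VSIZE = 100_000    # near the max standard transaction size
OUTPUT_VSIZE = 10         # approx. cost of one OP_TRUE output
MAX_CHILD_VSIZE = 10_000  # current max carve-out child size

n_outputs = PARENT_VSIZE // OUTPUT_VSIZE            # ~10,000 outputs
worst_case = PARENT_VSIZE + n_outputs * MAX_CHILD_VSIZE

print(n_outputs)   # 10000
print(worst_case)  # 100100000 vbytes, i.e. ~100 MvB for one package
```

So if every output could take its own carve-out child, a single parent could pull in roughly a thousand times its own vsize in descendants.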
Hi, Greg.
I find this proposal super interesting, and IIUC something that seems
fairly "safe" to allow (assuming V3).
For LN having the commitment transaction pay a non-zero fee is a cause for
a lot of complexity in the channel state machine. Something like this would
remove a lot of edge cases a
Hi,
I wanted to chime in on the "teleport" feature explained by Ruben, as I
think exploring something similar for Taro could be super useful in an LN
setting.
In today's Taro, to transfer tokens you have to spend a UTXO, and present a
proof showing that there are tokens committed to in the output
Hi Laolu,
Yeah, that is definitely the main downside, as Ruben also mentioned:
tokens are "burned" if they get sent to an already spent UTXO, and
there is no way to block those transfers.
And I do agree with your concern about losing the blockchain as the
main synchronization point, that seems in
Hi, Salvatore.
I find this proposal very interesting, especially since such powerful
capabilities can seemingly be achieved with such simple opcodes.
I'm still trying to grok what this would look like on-chain (forget
about the off-chain part for now), if we were to play out such a
computation.
Let
Thank you for the example.
It sounds like we can generalize the description of the construct to:
Access to (the hash of) embedded data of inputs and outputs, and the
enforcement of output keys and (static) taptrees. In other words, as
long as you can dynamically compute the output embedded data in
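As a toy model of the generalized construct (this deliberately replaces the real taproot key tweaking with plain hashes, so the structure is illustrative only, not the actual opcode semantics):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Toy commitment: an output "key" commits to an internal key, some
# embedded contract data, and a static taptree root.
def output_commitment(internal_key: bytes, data: bytes,
                      taptree_root: bytes) -> bytes:
    return H(internal_key, H(data), taptree_root)

# A COCV-style check: the script supplies the expected embedded data
# and taptree, and verification passes only if the output in question
# commits to exactly those values.
def check_contract(output_key: bytes, internal_key: bytes,
                   data: bytes, taptree_root: bytes) -> bool:
    return output_key == output_commitment(internal_key, data, taptree_root)
```

The essential property is the one described above: as long as the script can dynamically compute the embedded data, it can enforce what the next output commits to.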
Hi, Salvatore.
As a further exploration of this idea, I implemented a
proof-of-concept of OP_CICV and OP_COCV in btcd[1] that together with
OP_CAT enables a set of interesting use cases.
One such use case is, as mentioned earlier, CoinPools[2]. The opcodes
let you easily check the "dynamically co
I should clarify: the current proposal already achieves the first part
needed for coin pools: removing some data from the merkle tree (I was
indeed referring to the embedded data, not the taptree).
The thing that is missing is removal of a public key from the taproot
internal key, but as mentioned
Hi,
It was briefly mentioned in the original post, but wanted to show how
simple it is to use COCV as an alternative to CTV, removing that
dependency.
> In particular, it also inherits the choice of using OP_CTV as a primitive,
> building on top of the bitcoin-inquisition's current branch that ha
Hi, Salvatore.
Thanks for the update! I like the fact that taptree verification now
can be done on both input and outputs, and having them be symmetrical
also makes the learning curve a bit easier.
I have implemented the updated opcodes in btcd (very rough
implementation)[1] as well as updated th
Hi, all!
I've been working on an implementation of the original MATT challenge
protocol[0], with a detailed description of how we go from a
"high-level arbitrary program" to something that can be verified
on-chain in Bitcoin Script.
You can find the write-up here, which also includes instructions
ld read "0|0|2 -> 0|1|4"
Yes, fixed! Thanks :)
- Johan
On Mon, Oct 2, 2023 at 5:10 PM Anthony Towns wrote:
>
> On Fri, Sep 29, 2023 at 03:14:25PM +0200, Johan Torås Halseth via bitcoin-dev
> wrote:
> > TLDR; Using the proposed opcode OP_CHECKCONTRACTVERIFY and OP_CAT,
Hi, Antoine.
It sounds like perhaps OP_CHECKCONTRACTVERIFY can achieve what you are
looking for:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-May/021719.html
By committing the participants' pubkeys and balances in the dynamic
data instead of the taptree one can imagine a subset o
Hi,
Yes, one would need to have the dynamic data be a merkle root of all
participants' keys and balances. Then, as you say, the scripts would
have to enforce that one correctly creates new merkle roots according
to the coin pool rules when spending it.
- Johan
On Thu, Oct 5, 2023 at 3:13 AM Antoine Riard w
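A minimal sketch of committing the participants' keys and balances as a merkle root, as described above (the leaf format is hypothetical; the duplicate-last-node padding mirrors Bitcoin's block merkle trees):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Hypothetical leaf: a participant's pubkey and balance in satoshis.
def leaf(pubkey: bytes, balance_sat: int) -> bytes:
    return sha256(pubkey + balance_sat.to_bytes(8, "big"))

def merkle_root(leaves: list[bytes]) -> bytes:
    level = list(leaves)
    if not level:
        return b"\x00" * 32
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels, bitcoin-style
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Spending the pool then amounts to proving membership of the exiting participants and committing the updated root in the new output's dynamic data.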
Hi, Antoine.
A brief update on this:
I created a demo script for the unilateral exit of 2-of-4 participants in a
Coinpool using OP_CCV:
https://github.com/halseth/tapsim/tree/matt-demo/examples/matt/coinpool/v2.
It shows how pubkeys and balances can be committed, how traversal and
modification of
Hi all,
After the transaction recycling has spurred some discussion over the
last week or so, I figured it could be worth sharing some research I’ve
done into HTLC output aggregation, as it could be relevant for how to
avoid this problem in a future channel type.
TLDR; With the right covenant we can c
l active HTLCs in a merkle tree, and have the
> > spender provide merkle proofs for the HTLCs to claim, claiming the sum
> > into a new output. The remainder goes back into a new output with the
> > claimed HTLCs removed from the merkle tree.
>
> > An interesting tric
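A small sketch of the claim step quoted above (the proof format is hypothetical; in the real construction the covenant script, not ordinary code, would perform these checks):

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Verify an inclusion proof for one HTLC leaf against the committed
# root. Each proof step is (sibling_hash, current_node_is_left).
def verify_proof(leaf: bytes, proof, root: bytes) -> bool:
    h = leaf
    for sibling, is_left in proof:
        h = sha256(h + sibling) if is_left else sha256(sibling + h)
    return h == root

# The sum of the proven HTLC amounts goes to the spender; the remainder
# goes back into a new output whose tree has those HTLCs removed.
def split_claim(total_sat: int, claimed_amounts) -> tuple[int, int]:
    claimed = sum(claimed_amounts)
    return claimed, total_sat - claimed
```

The covenant would additionally have to verify that the new output's root is the old tree minus exactly the claimed leaves, which is the harder part.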
l the point combinations possible, and only reveal the one you
>> > need at broadcast.
>> >
>> > > ## Covenant primitives
>> > > A recursive covenant is needed to achieve this. Something like OP_CTV
>> > > and OP_APO seems insufficient, since the n