Re: [bitcoin-dev] Ark: An Alternative Privacy-preserving Second Layer Solution

2023-05-24 Thread adiabat via bitcoin-dev
Hi - thanks for the Ark write up; I have a bunch of questions but here's 2:

---
Q1:
"Pool transactions are created by ark service providers perpetually
every 5 seconds"

What exactly happens every 5 seconds?  From the 15.44.21-p-1080.png
diagram [1], a pool transaction is a bitcoin transaction, with all the
inputs coming from the ASP.  My understanding is that every 5 seconds,
we progress from PoolTx(N) to PoolTx(N+1).  Does the ASP sign a new
transaction which spends the same ASP funding inputs as the previous
pool transaction, which is a double spend or fee bump?  Or does it
spend the outputs from the previous PoolTx?

In other words, does PoolTx(2) replace PoolTx(1) RBF-style, spending
the same inputs (call this method A), or does PoolTx(2) spend an
output of PoolTx(1), such that PoolTx(1) must be confirmed in order for
PoolTx(2) to become valid (method B)?  Or are they completely separate
transactions with unconflicting inputs (method C)?
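
To make the question concrete, here's a toy sketch of the three
methods (made-up field names, nothing to do with Ark's actual
transaction format):

    asp_funding = ["asp_utxo_1", "asp_utxo_2"]

    # Method A: PoolTx(2) double-spends the same ASP inputs as PoolTx(1),
    # RBF-style; only one of them can ever confirm.
    pool_tx_1          = {"inputs": asp_funding,        "outputs": ["vtxo_tree_1"]}
    pool_tx_2_method_a = {"inputs": asp_funding,        "outputs": ["vtxo_tree_2"]}

    # Method B: PoolTx(2) spends an output of PoolTx(1), so PoolTx(1)
    # has to confirm before PoolTx(2) is valid.
    pool_tx_2_method_b = {"inputs": ["pool_tx_1:change"], "outputs": ["vtxo_tree_2"]}

    # Method C: PoolTx(2) spends entirely separate ASP inputs, so the
    # two pool transactions don't conflict and can confirm independently.
    pool_tx_2_method_c = {"inputs": ["asp_utxo_3"],     "outputs": ["vtxo_tree_2"]}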

When the ASP creates a pool transaction, what do they do with it?  Do
they broadcast it to the gossip network?  Or share it with other pool
participants?

With method A, if the ASP shares pool transactions with other people,
there doesn't seem to be any way to control which PoolTx gets
confirmed, invalidating all the others.  They're all valid, so
whichever gets into a block first wins.

With method B, there seems to be a large on-chain load, with ~120
chained transactions trying to get into every block.  This wouldn't
play nicely with mempool standardness rules, and it doesn't seem like
you could ever "catch up".

With method C, ASPs would need a pretty large number of inputs but
could recycle them as blocks confirm.  It would cost a lot but maybe
could work.

---
Q2:

The other part I'm missing is: what prevents the ASP from taking all
the money?  Before even getting to vTXOs and connector outputs, from
the diagram there are only ASP inputs funding the pool transaction.
If the pool transaction is confirmed, the vTXOs are locked in place,
since the vTXO output cannot be changed and commits to all
"constrained outs" via OP_CTV.  If the pool transaction is
unconfirmed, the ASP can create & sign a transaction spending all ASP
funding inputs sending the money back to the ASP, or anywhere else.
In this case, users don't have any assurance that their vTXO can ever
turn into a real UTXO; the ASP can "rug-pull" at any time, taking all
the money in the pool.  Adding other inputs not controlled by the ASP
to the transaction wouldn't seem to fix the problem, because then any
user removing their inputs would cancel the whole transaction.
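
For reference, a simplified sketch of why the vTXOs are locked in once
the pool transaction confirms: the OP_CTV output commits to a hash of
the spending transaction's outputs, so the constrained outs can't be
changed afterwards.  (This is a rough stand-in, not the exact BIP-119
template hash, which covers more fields.)

    import hashlib

    def simplified_ctv_hash(outputs, nlocktime=0):
        # Simplified stand-in for BIP-119's template hash: commit to the
        # serialized outputs (value and scriptPubKey) plus the locktime.
        # The real DefaultCheckTemplateVerifyHash covers more fields
        # (version, sequences, number of inputs, input index, ...).
        h = hashlib.sha256()
        h.update(nlocktime.to_bytes(4, "little"))
        for value, script_pubkey in outputs:
            h.update(value.to_bytes(8, "little"))
            h.update(script_pubkey)
        return h.digest()

    # The vTXO tree is fixed by the hash baked into the pool tx output;
    # change any constrained out and the hash changes, so the spend no
    # longer satisfies OP_CTV.  Note that nothing here commits to the
    # ASP's *inputs*: before confirmation the ASP is free to sign a
    # conflicting tx spending those inputs somewhere else.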

More detail about how these transactions work would be appreciated, thanks!

-Tadge

[1] 
https://uploads-ssl.webflow.com/645ae2e299ba34372614141d/6467d1f1bf91e0bf2c2eddef_Screen%20Shot%202023-05-19%20at%2015.44.21-p-1080.png


Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-19 Thread adiabat via bitcoin-dev
This has been brought up several times in the past, and I agree with
Jonas' comments about users being unaware of the privacy losses due to
BIP37.  One thing also mentioned before, but not in the current thread,
is that the entire concept of SPV is not applicable to unconfirmed
transactions.  SPV uses the fact that miners have committed to a
transaction with work to give the user an assurance that the
transaction is valid; if the transaction were invalid, it would be
costly for the miner to include it in a block with valid work.

Transactions in the mempool have no such assurance, and are costlessly
forgeable by anyone, including your ISP.  I wasn't involved in any
debate over BIP37 when it was being written up, so I don't know how
mempool filtering got in, but it never made any sense to me.  The fact
that lots of light clients are using this is a problem, as it gives
false assurance to users that there is a valid but yet-to-be-confirmed
transaction sending them money.
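
To put the same point another way: the only assurance a light client
can check is a Merkle proof against a header with work on it, and a
mempool transaction has neither.  A rough sketch (not any particular
wallet's API):

    from hashlib import sha256

    def sha256d(b):
        return sha256(sha256(b).digest()).digest()

    def merkle_root_from_proof(txid, branch):
        # branch: list of (sibling_hash, sibling_is_on_right) pairs.
        h = txid
        for sibling, sibling_on_right in branch:
            h = sha256d(h + sibling) if sibling_on_right else sha256d(sibling + h)
        return h

    def has_spv_assurance(txid, branch, header, best_chain_headers):
        # A confirmed tx ties back to a header whose proof of work (plus
        # the headers built on top of it) is what makes forgery expensive.
        if branch is None or header is None:
            # Mempool-only tx: no work behind it at all, costlessly
            # forgeable by anyone on the path, including your ISP.
            return False
        return (merkle_root_from_proof(txid, branch) == header["merkle_root"]
                and int.from_bytes(header["hash"], "big") <= header["target"]
                and header["hash"] in best_chain_headers)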

-Tadge


Re: [bitcoin-dev] Per-block non-interactive Schnorr signature aggregation

2017-05-10 Thread adiabat via bitcoin-dev
I messed up and only replied to Russell O'Connor; my response is copied below.
And then there's a bit more.

-
Aha, Wagner's generalized birthday attack, the bane of all clever tricks!
I didn't realize it applied in this case, but it looks like it in fact
does.  It would have to be a miner performing the attack, as the
s-values are only aggregated in the coinbase tx, but that's hardly an
impediment.

In fact, sketching it out, it doesn't look like the need to know m1,
m2... m_n is a big problem.  Even if the m's are fixed after being
chosen based on the P1... Pn's (in bitcoin, m always commits to P, so
I'm not sure why it's needed in the hash), there is still freedom to
collide the hashes.  The R values can be anything, so getting h(m1,
R1, P1) + h(m2, R2, P2)... to equal -h(m0, R0, P0) is doable with
Wagner's attack by varying R1, R2... to get different hashes.
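
To sketch the attack target in code (toy scalar arithmetic mod the
group order, not real EC operations):

    from hashlib import sha256
    import os

    # secp256k1 group order
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def challenge(m, R, P):
        # h(m, R, P) as a scalar mod the group order (schematic encoding).
        return int.from_bytes(sha256(m + R + P).digest(), "big") % N

    def candidate_hashes(m, P, count):
        # The attacker is free to choose R, so for each input they
        # control they can build a large list of candidate challenges.
        return [challenge(m, os.urandom(33), P) for _ in range(count)]

    # Wagner's k-list algorithm then picks one entry from each list so
    # that their sum is -h(m0, R0, P0) mod N, in far less than 2^128
    # work, which lets the attacker solve for a forged aggregate s.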

I *think* there is a viable defense against this attack, but it does
make the whole aggregation setup less attractive.  The miner who
calculates s-aggregate could also aggregate all the public keys from
all the aggregated signatures in the block (P0, P1...), sort them and
hash the concatenated list of pubkeys.  They could then multiply s by
this combo-pubkey hash (call it h(c)).  Then when nodes verify the
aggregate signature, they need to go through all the pubkeys in the
block, create the same combo-pubkey hash, and multiply s by the
multiplicative inverse of the h(c) they calculate, then verify s.  I
believe this breaks the Wagner generalized birthday attack because
every h(m_i, R_i, P_i)*h(c) included or omitted affects the c part of
h(m0, R0, P0)*h(c).
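
A rough sketch of that defense, again with toy scalar arithmetic mod
the group order rather than actual curve operations:

    from hashlib import sha256

    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order

    def combo_pubkey_hash(pubkeys):
        # h(c): hash of the sorted, concatenated pubkeys of every
        # aggregated signature in the block.
        return int.from_bytes(sha256(b"".join(sorted(pubkeys))).digest(), "big") % N

    def miner_commit(s_values, pubkeys):
        # Miner sums the s-values and multiplies by h(c) before putting
        # the result in the coinbase.
        return (sum(s_values) * combo_pubkey_hash(pubkeys)) % N

    def node_recover_s(committed, pubkeys):
        # Verifiers rebuild h(c) from the pubkeys they see in the block
        # and multiply by its modular inverse to recover the plain
        # aggregate s, which they then check against the R's and
        # challenge hashes as usual.
        hc = combo_pubkey_hash(pubkeys)
        return (committed * pow(hc, -1, N)) % N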

I'm not sure how badly this impacts the verification speed.  It might
not be too bad for verification as it's amortized over the whole
block.  For the miner doing the aggregation it's a bit slower as they
need to re-sort and hash all the pubkeys every time a new signature is
added.  Might not be too slow.

I'm not super confident that this actually prevents the generalized
birthday attack though.  I missed that attack in the previous post so
I'm 0 for 1 against Wagner so far :)

-

Andrew: Right, committing to all the R values would also work; is there
an advantage to using the R's instead of the P's?  At first glance it
seems about the same.

Another possible optimization: instead of sorting, concatenate all the
R's or P's in the order they appear in the block.  Then have the miner
commit to s*h(c)^-1, where h(c)^-1 is the multiplicative inverse of the hash of all
those values.  Then when nodes are verifying in IBD, they can just
multiply by h(c) and they don't have to compute the inverse.  A bit
more work for the miner and a bit less for the nodes.
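
Roughly, that variant would look like this (same caveats as the sketch
above; the helper name is just illustrative):

    from hashlib import sha256

    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order

    def hc_in_block_order(values):
        # h(c) over the R's (or P's) concatenated in the order they
        # appear in the block; no sorting needed.
        return int.from_bytes(sha256(b"".join(values)).digest(), "big") % N

    def miner_commit(s_values, values):
        # Miner does the one modular inversion and commits s * h(c)^-1.
        return (sum(s_values) * pow(hc_in_block_order(values), -1, N)) % N

    def node_recover_s(committed, values):
        # Nodes in IBD just multiply by h(c); no inversion on their side.
        return (committed * hc_in_block_order(values)) % N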

-Tadge


[bitcoin-dev] Requesting BIP assignment; Flexible Transactions.

2016-09-21 Thread adiabat via bitcoin-dev
Hi-

One concern is that this doesn't seem compatible with Lightning as
currently written.  Most relevant is that non-cooperative channel close
transactions in Lightning use OP_CHECKSEQUENCEVERIFY, which references the
sequence field of the txin; if the txin doesn't have a sequence number,
OP_CHECKSEQUENCEVERIFY can't work.
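
For context, here's a simplified sketch of the BIP-112 check (omitting
the on-stack disable flag and script-number encoding details); every
branch reads the spending input's nSequence, which is exactly the
field that goes away:

    SEQ_DISABLE_FLAG = 1 << 31   # relative locktime disabled for this input
    SEQ_TYPE_FLAG    = 1 << 22   # set = time-based, clear = block-based
    SEQ_VALUE_MASK   = 0x0000ffff

    def checksequenceverify_ok(stack_value, tx_version, input_nsequence):
        # Simplified BIP-112 logic; every check uses input_nsequence.
        if tx_version < 2:
            return False
        if input_nsequence & SEQ_DISABLE_FLAG:
            return False
        if (stack_value & SEQ_TYPE_FLAG) != (input_nsequence & SEQ_TYPE_FLAG):
            return False
        return (input_nsequence & SEQ_VALUE_MASK) >= (stack_value & SEQ_VALUE_MASK)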

LockByBlock and LockByTime aren't described and there doesn't seem to be
code for them in the PR (186).  If there's a way to make OP_CLTV and OP_CSV
work with this new format, please let us know, thanks!

-Tadge