[bitcoin-dev] WabiSabi Inside Batched CoinSwap

2020-06-11 Thread ZmnSCPxj via bitcoin-dev
Introduction
============

THIS ENTIRE PROTOCOL IS NOVEL CRYPTO AND HAS NO PROOF THAT IT IS SECURE AND 
PRIVATE AND WHY WOULD YOU TRUST SOME RANDOM PSEUDONYM ON THE INTERNET SRSLY.

While 
[WabiSabi](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017969.html)
 is planned for some kind of CoinJoin operation, a limitation is that the use 
of CoinJoin creates a transaction where the inputs are known to be linked to 
the outputs, as the generated transaction directly consumes the inputs.

It would be better if the WabiSabi server created the outputs from independent 
outputs it already owns, acquired from previous clients.
Then the outputs would, onchain, be linked to previous clients of the server 
instead of the current clients.
This is precisely the issue that CoinSwap, and the new swap scheme [Succinct 
Atomic 
Swaps](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017846.html),
 can be used to solve.
By using [Batched 
CoinSwap](https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017967.html),
 makers can act as WabiSabi servers, and batched takers can act as WabiSabi 
clients.

Of course, WabiSabi has the advantage that payments between the clients are 
obscured from the server.
But a naive CoinSwap requires that outputs from the maker be linkable, at least 
by the maker, to inputs given to the maker, which is precisely the information 
that WabiSabi seeks to hide from the server.

However, by instead using [Signature 
Selling](https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-July/002077.html)
 in combination with standard Scriptless Script adaptor signatures, it is 
possible to arrange for a CoinSwap to occur without the maker being able to link 
outputs to inputs.

Signature Selling
=================

The final output of the Schnorr signing process is a pair (R, s) for a pubkey A 
= a * G and ephemeral nonce R = r * G, where:

s = r + h(A | R | m) * a

Now, instead of the pair (R, s), the signer can provide (R, s * G).
The receiver of (R, s * G) can validate that s * G is correct using the same 
validation as for Schnorr signatures.

s * G = R + h(A | R | m) * A

The receiver of (R, s * G) can then offer a standard Scriptless Script adaptor 
signature, which when completed, lets them learn s.
The receiver may incentivize this by having the completed signature authorize a 
transaction to the sender of the original (R, s * G), so that the completed 
signature atomically gives the receiver the correct signature.

This can be used as a basis for an atomic CoinSwap, and it is what we will use 
in this proposal.

Note that even in a MuSig case, it is possible for a participant to sell its 
share of the final signature, after the R exchange phase in MuSig.
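
For concreteness, here is a toy Python sketch of the point-form check above.
This is illustration only, not BIP-340-conformant (the challenge hash and point
serialization are simplified placeholders), and the adaptor-signature step that
actually reveals s to the buyer is omitted:

    import hashlib

    # secp256k1 parameters
    P = 2**256 - 2**32 - 977
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def inv(x): return pow(x, P - 2, P)

    def ec_add(p, q):
        if p is None: return q
        if q is None: return p
        if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
        lam = ((3 * p[0] * p[0]) * inv(2 * p[1]) if p == q
               else (q[1] - p[1]) * inv(q[0] - p[0])) % P
        x = (lam * lam - p[0] - q[0]) % P
        return (x, (lam * (p[0] - x) - p[1]) % P)

    def ec_mul(k, pt):
        r = None
        while k:
            if k & 1: r = ec_add(r, pt)
            pt = ec_add(pt, pt)
            k >>= 1
        return r

    def ser(pt): return pt[0].to_bytes(32, 'big') + pt[1].to_bytes(32, 'big')

    def h(*parts):  # simplified challenge hash h(A | R | m)
        return int.from_bytes(hashlib.sha256(b''.join(parts)).digest(), 'big') % N

    # Signer side: secret key a, secret nonce r, message m.
    a, r, m = 0x1111, 0x2222, b'coinswap tx sighash'
    A, R = ec_mul(a, G), ec_mul(r, G)
    s = (r + h(ser(A), ser(R), m) * a) % N

    # Instead of revealing s, the signer publishes only s*G ("sells" s).
    sG = ec_mul(s, G)

    # Buyer side: check s*G = R + h(A | R | m)*A before paying to learn s.
    assert sG == ec_add(R, ec_mul(h(ser(A), ser(R), m), A))
    print('point-form signature verifies; the scalar s is still withheld')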

WabiSabi
========

WabiSabi replaces blind signatures with credentials.
The primary advantage of credentials is that credentials can include a 
homomorphic value.
We use this homomorphic value to represent a blinded amount.

WabiSabi has a single server that issues credentials, and multiple clients that 
the server serves.
Clients can exchange value by swapping credentials, then claiming the credentials 
they received by exchanging them with the server for fresh credentials.
Clients hold multiple credentials at a time, and the server consumes (and 
destroys) a set of credentials and outputs another set of fresh credentials, 
ensuring that the output value is the same as the input value (minus any fees 
the server wants to charge for the operation).
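
For intuition, the sketch below isolates this homomorphic value check, using
toy Pedersen-style commitments in a multiplicative group. This is not the
actual WabiSabi credential scheme (which uses keyed-verification anonymous
credentials plus range and sum proofs); it only illustrates how a
consume-reissue operation can be checked for value conservation without the
server seeing individual amounts:

    import secrets

    # Toy group parameters (illustration only; a real system uses an EC group
    # and generators with an unknown discrete-log relation).
    q = 2**127 - 1          # a Mersenne prime; we work in the units mod q
    g, h = 5, 7             # two "independent" generators (toy choice)

    def commit(value, blind):
        return (pow(g, value, q) * pow(h, blind, q)) % q

    # Client consumes two credentials worth 40 and 60 and asks for 70 and 30.
    in_vals, out_vals = [40, 60], [70, 30]
    in_blinds = [secrets.randbelow(q - 1) for _ in in_vals]
    # The client picks output blindings whose sum matches the input blindings'
    # sum (mod the group order), so the commitment products match exactly.
    out_blinds = [secrets.randbelow(q - 1)]
    out_blinds.append((sum(in_blinds) - out_blinds[0]) % (q - 1))

    in_commits = [commit(v, b) for v, b in zip(in_vals, in_blinds)]
    out_commits = [commit(v, b) for v, b in zip(out_vals, out_blinds)]

    def product(cs):
        acc = 1
        for c in cs:
            acc = (acc * c) % q
        return acc

    # Server-side check: the hidden total value is conserved.
    assert product(in_commits) == product(out_commits)
    print('consume-reissue conserves the hidden total value')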

From a high enough vantage point, the WabiSabi process is:

1.  Server issues 0-valued credentials to all clients.
2.  Clients put money into the server by providing onchain inputs to the 
server plus their existing credentials, getting back credentials whose value is 
their input credential value plus the onchain input value.
3.  Clients swap credentials with each other to perform payments between 
themselves, then claim the credentials by asking the server to reissue them.
* Receiving clients move their amounts among all the credentials they own 
(via server consume-reissue credential operations) so as to make one of their 
multiple credentials into a 0-value credential.
* Sending clients move their amounts among all the credentials they own so 
that one of their multiple credentials has the sent value.
* The receiving client exchanges its 0-value credential for the sent-value 
credential from the sending client, by cooperatively making a consume-reissue 
operation with the server.
4.  Clients then claim the value in their credentials by providing the server 
with pubkeys to pay to and amounts, plus their existing credentials, getting 
back credentials whose total value is the input credential value minus the 
onchain output value.
5.  The server generates the final output set.
6.  The clients check that the final output set is indeed what they expected 
(their claimed outputs exist) and ratify the overall transaction.
* In the CoinJoin case, the overall transaction is ratified by 

[bitcoin-dev] WabiSabi: a building block for coordinated CoinJoins

2020-06-11 Thread Yuval Kogman via bitcoin-dev
Hi,

As part of research into how CoinJoins in general and Wasabi in
particular can be improved, we'd like to share a new building block
we're calling WabiSabi, which utilizes keyed verification anonymous
credentials instead of blind signatures to verify the honest
participation of users in centrally coordinated CoinJoin protocols.

Blind signatures have been used to facilitate centrally coordinated
CoinJoins, but require standard denominations, each associated with a
key, because blind signatures can only convey a single bit of
information from the signer to the verifier (both roles are the
coordinator in this setting). Anonymous credentials carry attributes,
and in our case these are homomorphic value commitments as in
Confidential Transactions.

Note that this is an early draft with a deliberately narrow scope, and
only introduces this abstract building block. At this stage we'd like
to solicit feedback and criticism about our scheme and inputs with
regards to its potential applications before proceeding. We do not
(yet) address the structure of the CoinJoin transactions, fee
structures, or other implementation details, but discussion of these
aspects is welcome.

The repository is https://github.com/zkSNACKs/WabiSabi, and the latest
version is available here:
https://github.com/zkSNACKs/WabiSabi/releases/latest/download/WabiSabi.pdf


Re: [bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-11 Thread Jeremy via bitcoin-dev
Stellar work Antoine and Gleb! Really excited to see designs come out on
payment pools.

I've also been designing some payment pools (I have some not ready code I
can share with you guys off list), and I wanted to share what I learned
here in case it's useful.

In my design of payment pools, I don't think the following requirement: "A
CoinPool must satisfy the following *non-interactive any-order withdrawal*
property: at any point in time and any possible sequence of previous
CoinPool events, a participant should be able to move their funds from the
CoinPool to any address the participant wants without cooperation with
other CoinPool members." is desirable in O(1) space. I think it's much
better to set the requirement to O(log(n)), and this isn't just because of
wanting to use CTV, although it does help.

Let me describe a quick CTV based payment pool:

Build a payment pool for N users as N/2 channels between participants
created in a payment tree with a radix of R, where every node has a
multisig path for being used as a multi-party channel and the CTV branch
has a preset timeout. E.g., with radix 2:

                 Channel(a,b,c,d,e,f,g,h)
                /                        \
       Channel(a,b,c,d)            Channel(e,f,g,h)
       /             \              /             \
Channel(a,b)   Channel(c,d)   Channel(e,f)   Channel(g,h)


All of these channels can be constructed and set up non-interactively using
CTV, and updated interactively. By default payments can happen with minimal
coordination of parties by standard lightning channel updates at the leaf
nodes, and channels can be rebalanced at higher layers with more
participation.
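
To make the tree bookkeeping concrete, here is an illustrative Python sketch
(the "template" hash below is only a stand-in for a real CTV/BIP-119 output
commitment, and leaves are 2-party channels):

    import hashlib, math

    def build_node(members, radix):
        if len(members) <= 2:                      # leaf: a 2-party channel
            return {'members': members, 'children': []}
        chunk = math.ceil(len(members) / radix)    # split into <= radix groups
        kids = [build_node(members[i:i + chunk], radix)
                for i in range(0, len(members), chunk)]
        tmpl = hashlib.sha256(repr([k['members'] for k in kids]).encode()).hexdigest()
        return {'members': members, 'children': kids, 'template': tmpl}

    def depth(node):
        return 1 + max((depth(k) for k in node['children']), default=0)

    pool = build_node(list('abcdefgh'), radix=2)
    print('levels to fully unroll:', depth(pool))  # -> 3 for 8 users, radix 2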


Now let's compare the first-person non-cooperative exit scenario across pools:

CTV-Pool:
Wait time: Log(N). At each branch, you must wait for a timeout, and you
have to go through log N levels to make sure there are no updated states. You
can trade off wait time/fees by picking different radixes.
TXN Size: Log(N). E.g., 1000 people with radix 4 --> 5 wait periods, 5*4 txn
size; radix 20 --> 2 wait periods, 2*20 txn size.

Accumulator-Pool:
Wait Time: O(1)
TXN Size: Depending on accumulator: O(1), O(log N), O(N) bits. Let's be
favorable to Accumulators and assume O(1), but keep in mind the constant may
be somewhat large/operations might be expensive in validation for updates.


This *seems* like a clear win for Accumulators. But not so fast. Let's look
at the case where *everyone* exits non cooperatively from a payment pool.
What is the total work and time?

CTV Pool:
Wait time: Log(N)
Txn Size: O(N) (no worse than a 2x factor overhead with radix 2; higher
radixes have dramatically less overhead)

Accumulator Pool:
Wait time: O(N)
Txn Size: O(N) (bear in mind *maybe* O(N^2) or O(N log N) if we use a
sub-optimal accumulator, or validation work may be expensive depending on
the new primitive)


So in this context, CTV Pool has a clear benefit. The last recipient can
always clear in Log(N) time whereas in the accumulator pool, the last
recipient has to wait much much longer. There's no asymptotic difference in
Tx Size, but I suspect that CTV is at least as good or cheaper since it's
just one tx hash and doesn't depend on implementation.
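
As a rough sanity check on the size claim, here is a sketch that counts the
expansion transactions needed to fully unroll a radix-R tree, one transaction
per internal node (bookkeeping only, ignoring fees and tx weights):

    import math

    def unroll_tx_count(leaves, radix):
        txs, level = 0, leaves
        while level > 1:
            level = math.ceil(level / radix)   # number of parent nodes
            txs += level                       # one expansion tx per parent
        return txs

    for r in (2, 4, 20):
        print(r, unroll_tx_count(500, r))      # 500 leaf channels (~1000 users)
    # radix 2 -> 501 txs (about 2x overhead over the 500 leaf channels),
    # radix 4 -> 168, radix 20 -> 28: higher radixes, much less overhead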

Another property that is nice about the CTV pool style is the bisecting
property. Every time you have to do an uncooperative withdrawal, you split
the group into R groups. If your group is not cooperating because one
person is permanently offline, then Accumulator pools *guarantee* you need
to go through a full on-chain redemption. Not so with a CTV-style pool, as
if you have a single failure among [1,2,3,4,5,6,7,8,9,10] channels (let's
say channel 8 fails), then with a radix 4 setup your next steps are:
[1,2,3,4,5,6,7,8,9,10]
[1,2,3,4,5,6,7,X,9,10]
[1,2,3,4] [5,6,7,X] [9,10]
[1,2,3,4] 5 6 7 X [9,10]

So you only need to do Log(N) chain work to exit the bad actor, but then it
amortizes! A future failure (let's say of 5) only causes 5 to have to close
their channel, and does not affect anyone else.
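
The split sequence above can be reproduced with a few lines of bookkeeping
(sketch only, no transaction logic, just chunking the channel list into
radix-sized groups and descending toward the failed channel):

    def evict(channels, bad, radix):
        steps, group = [], list(channels)
        while len(group) > 1:
            if len(group) <= radix:            # leaf level: individual channels
                chunks = [[c] for c in group]
            else:
                chunks = [group[i:i + radix] for i in range(0, len(group), radix)]
            steps.append(chunks)               # one on-chain split per level
            group = next(c for c in chunks if bad in c)
        return steps

    for level in evict(channels=list(range(1, 11)), bad=8, radix=4):
        print(level)
    # [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]
    # [[5], [6], [7], [8]]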

With an accumulator based pool, if you re-pool after one failure, a second
failure causes another O(N) work. So then total work in that case is
O(N^2). You can improve the design by adding an evict-in-any-order option,
so that you can *kick out* a member in any order (rather than waiting for them
to opt to leave); that helps solve some of this nastiness. But I'm unclear how
to make this safe w.r.t. updated states. You could also allow, perhaps, any
number of operators to simultaneously leave in a tx. Also not sure how to
do that.



Availability:
With CTV Pools, you can make a payment if just your immediate counterparty
is online in your channel. Opportunistically, if people above you are
online, you can make channel updates higher up in the tree which have
better timeout properties. You can also create new channels, binding
yourself to different parties if 

[bitcoin-dev] Hiding CoinSwap Makers Among Custodial Services

2020-06-11 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris, and bitcoin-dev (but mostly Chris),


I made a random comment regarding taint on bitcoin-dev recently: 
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/017961.html

> For CoinSwap as well, we can consider that a CoinSwap server could make 
> multiple CoinSwaps with various clients.
> This leads to the CoinSwap server owning many small UTXOs, which it at some 
> point aggregates into a large UTXO that it then uses to service more clients 
> (for example, it serves many small clients, then has to serve a single large 
> client that wants a single large UTXO for its own purposes).
> This aggregation again leads to spreading of taint.

I want to propose some particular behaviors a SwapMarket maker can engage in, 
to improve the privacy of its customers.

Let us suppose that individual swaps use some variant of Succinct Atomic Swap.
Takers take on the role of Alice in the SAS description, makers take on the 
role of Bob.
We may be able to tweak the SAS protocol or some of its parameters for our 
purposes.

Now, what we will do is to have the maker operate in rounds.

Suppose two takers, T1 and T2, contact the sole maker M in its first ever round.
T1 and T2 have some coins they want to swap.
They arrange things all the way to confirmation of the Alice-side funding tx, 
and pause just before Bob creates its own funding tx for their individual swaps.
The chain now shows these txes/UTXOs:

 42 of T1 --->  42 of T1 & M
 50 of T2 --->  50 of T2 & M
100 of T1 ---> 100 of T1 & M

200 of M

Now the entire point of operating in rounds is precisely so that M can service 
multiple clients at the same time with a single transaction, i.e. batching.
So now M provides its B-side tx and completes the SAS protocols with each of the 
takers.
SAS gives unilateral control of the outputs directly to the takers, so we elide 
the fact that they are really 2-of-2s below:

 42 of T1 --->  42 of T1 & M
 50 of T2 --->  50 of T2 & M
100 of T1 ---> 100 of T1 & M

200 of M  -+->  11 of M
           +-> 140 of T1
           +->  49 of T2

(M extracted 1 unit from each incoming coin as fee; they also live in a 
fictional universe where miners mine transactions out of the goodness of their 
hearts.)
Now in fact the outputs of the previous transactions are, after the SAS, solely 
owned by M the maker.
Now suppose on the next round, we have 3 new takers, T3, T4, and T5, who offer 
some coins to M to CoinSwap, leading to more blockchain data:

 42 of T1 --->  42 of T1 & M
 50 of T2 --->  50 of T2 & M
100 of T1 ---> 100 of T1 & M

200 of M  -+->  11 of M
           +-> 140 of T1
           +->  49 of T2

 22 of T3 --->  22 of T3 & M
 90 of T3 --->  90 of T3 & M
 11 of T4 --->  11 of T4 & M
 50 of T4 --->  50 of T4 & M
 15 of T5 --->  15 of T5 & M

In order to service all the new takers of this round, M takes the coins that it 
got from T1 and T2, and uses them to fund a new combined CoinSwap tx:

 42 of T1 --->  42 of T1 & M -+--+-> 110 of T3
 50 of T2 --->  50 of T2 & M -+  +->  59 of T4
100 of T1 ---> 100 of T1 & M -+  +->  14 of T5
                                 +->   9 of M
200 of M  -+->  11 of M
           +-> 140 of T1
           +->  49 of T2

 22 of T3 --->  22 of T3 & M
 90 of T3 --->  90 of T3 & M
 11 of T4 --->  11 of T4 & M
 50 of T4 --->  50 of T4 & M
 15 of T5 --->  15 of T5 & M

That transaction, we can observe, looks very much like a batched transaction 
that a custodial service might produce.
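
To make the round bookkeeping concrete, here is a toy accounting sketch that
reproduces the numbers in the diagrams above (pure arithmetic, no SAS
cryptography or transaction construction; the 1-unit-per-incoming-coin fee is
taken from the fictional example):

    def run_round(maker_pool, taker_coins, fee=1):
        """maker_pool: amounts the maker already owns and can spend this round.
           taker_coins: {taker: [amounts]} offered for swapping this round."""
        owed = {t: sum(cs) - fee * len(cs) for t, cs in taker_coins.items()}
        outputs = dict(owed)
        outputs['maker change'] = sum(maker_pool) - sum(owed.values())
        # The maker now owns the takers' A-side coins for use in later rounds.
        new_pool = [c for cs in taker_coins.values() for c in cs]
        return outputs, new_pool

    # Round 1: T1 and T2 swap with the maker, funded from the maker's 200.
    out1, pool = run_round([200], {'T1': [42, 100], 'T2': [50]})
    print(out1)   # {'T1': 140, 'T2': 49, 'maker change': 11}

    # Round 2: T3, T4, T5 are funded from the coins absorbed in round 1.
    out2, pool = run_round(pool, {'T3': [22, 90], 'T4': [11, 50], 'T5': [15]})
    print(out2)   # {'T3': 110, 'T4': 59, 'T5': 14, 'maker change': 9}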

Now imagine more rounds, and I think you can begin to see that the magic of 
transaction batching, ported into SwapMarket, would help mitigate the 
blockchain size issues that CoinSwap has.

Makers are expected to adopt this technique, as it reduces the overall cost of 
the transactions they produce and thus increases their profitability.

At the same time, it spreads taint around and increases the effort that chain 
analysis must go through to identify what really happened.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Tainting, CoinJoin, PayJoin, CoinSwap

2020-06-11 Thread nopara73 via bitcoin-dev
Thank you all for your replies. I think everyone here agrees on how it "should
be", and indeed my post and the terminology I used risked further legitimizing
the thinking of adversaries.
I'd have one clarification to my original post. It may not be clear why I
put PJ/CS in the same box. One way of thinking of CoinSwap is to swap coin
histories and PayJoin is to share coin histories. For the purposes of this
attack the consequences are roughly the same so that's why I think it's ok
to put them under the same umbrella in this discussion, but I wouldn't die
for it :)

And indeed I perhaps wrongly called this the "Taint Issue"; maybe it should
be called the "Coin Discrimination Issue" or something like that. I'm not sure
we have a term for this, but I'm sure we should have one, because unlike some
other, so far theoretical attacks on Bitcoin's fungibility, it is currently
being applied in practice.


On Thu, Jun 11, 2020 at 7:24 AM Mr. Lee Chiffre via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Thought provoking. In my opinion bitcoin should be designed in a way to
> where there is no distinction between "clean" bitcoins and "dirty"
> bitcoins. If one bitcoin is considered dirty then all bitcoins should be
> considered dirty. Fungibility is important. And bitcoin or its users
> should not be concerned with pleasing governments. Bitcoin should be or
> remain neutral. The term "clean" or "dirty" is defined by whatever
> government is in power. Bitcoin is not to please government but to be
> independent of government control and reliance on government or any other
> centralized systems. To act as censorship resistant money to give people
> freedom from tyranny. I'm just saying that if anyone can determine if a
> bitcoin is clean or dirty then I think we are doing something wrong. What
> is great with certain protocols like coinjoin coinswap and payjoin there
> is that plausible deniability that hopefully would spread the entire
> "taint" of bitcoin collectively either for real or just as a possibility
> to any blockchain analysis entities (with no real way to tell or interpret
> with accuracy).
>
> Bitcoin should be designed in a way where the only way to stop "dirty"
> bitcoins is to reject all bitcoins.
>
> If "dirty" bitcoins is actually a real thing then I guess I could have fun
> by polluting random peoples bitcoin addresses with "dirty" coins right? No
> way to prove if it is a self transfer or an unsolicited "donation".  I
> just do not see how any bitcoin UTXO censorship could work because of
> plausible deniability.
>
> If any company actually used UTXO censorship then customers can just use
> services that are respecting of freedom and do not use censorship.
>
>


-- 
Best,
Ádám


[bitcoin-dev] CoinPool, exploring generic payment pools for Fun and Privacy

2020-06-11 Thread Antoine Riard via bitcoin-dev
Hi list,

We (Gleb Naumenko + I) think that a wide range of second-layer protocols
(LN, vaults, inheritance, etc) will be used by average Bitcoin users. We
are interested in finding and addressing the privacy issues coming from the
unique fingerprints these protocols bring.

More specifically, we are interested in answering the following questions:
1. How bad are the privacy leaks from on-chain txns of second-layer protocols,
and how much is leaked via protocol-specific metadata (LN domain names,
watchtowers, ...)?
2. How to establish a list of Bitcoin fingerprints and their severity, to
inform protocol designers and clarify threat models?
3. What kinds of sophisticated heuristics might spies use in the future?
4. How to mitigate privacy leaks? Should each protocol adopt a common
toolbox (scriptless scripts, taproot, ...) in its own way, or should we
design a confidential layer to wrap around all of them?
5. How to make the solution usable (cheaper, easier to integrate, safer)
for daily use?

We suggest CoinPool, a generic payment pool [0], as a solution to those
problems. Although the design we propose is somewhat of a scaling solution, we
won't focus on this aspect. This work is rather an exploration of *how a
pool construction could serve as a TLS for Bitcoin, enhancing both on-chain
and off-chain privacy*.

### Motivation: cross-protocols privacy

It has always been a challenge to make the on-chain UTXO graph more
private. We all know the issues with cleartext amounts, the linkability of
inputs/outputs, and other metadata. Combined with p2p-level spying
(transaction-to-IP mapping) or other patterns leading to real-world
identities, this enables serious spying.

Protocols on top of Bitcoin (LN, vaults [1], complicated spending conditions
based on Miniscript, DLC [2]) are even more vulnerable to spying because:
- each of them brings new unique fingerprints/metadata [3]
- known spying techniques against second-layer protocols are currently limited
to trivial heuristics, but we can't assume spies will always be this
unsophisticated

There is already a wiki list [4] attempting to cover all issues like that,
although maintaining it would be challenging considering privacy is a
moving target.

Let's consider this example: Alice is a well-known LN merchant with a node
tied to a domain name. She always directs the output of channel closing to
her vault address. If she has another vault address on-chain with the same
unique unlocking script (like a CSV timelock with a specific delta) this
can be leveraged to cluster her transactions. And since one of her
addresses is tied to a domain name, all her funds can now be linked to a
real-world identity.

In theory, one may use CoinJoin-like solutions to mask cross-protocol
on-chain transfers. Unfortunately, robust designs like CoinSwap depend on
timelocking coins, extensive use of the on-chain space, and paying fees to
provide sufficient privacy, as we explain further. These properties imply
we can't expect users to be using strong CoinSwaps by default.

That's why, instead of specialized high-latency, high-chain-use
CoinJoin-style protocols, we propose CoinPool: a low-latency, generic
off-chain protocol meant to be wrapped around any other protocol. CoinPool
is based on shared UTXO ownership. It may reasonably improve on-chain
privacy while avoiding latency and locked liquidity issues. CoinPool may
also reduce the on-chain use (thus, help to scale Bitcoin) if participants
cooperate sufficiently.

We do believe that CoinSwap and other CoinJoins are of interest, but we
have to consider the trade-offs and choose the best tool for the job to make
privacy usable with regard to user resources. We will compare CoinPool to
CoinSwap in more detail later in this write-up.

### Extra-motivation: on-chain scalability

Even though it's not the main focus of this proposal, we also want to
mention that since CoinPool is a payment pool, it helps with on-chain
scalability. More specifically:
1. Shared UTXO ownership allows many outputs to be represented as one, reducing
the size of the UTXO set.
2. The CoinPool design enables off-chain transfers within the pool, helping
to save the block space by committing fewer transactions on-chain.
3. CoinPool provides decent support for batching activities from different
users, also helping to have fewer individual transactions on-chain.

Since CoinPool provides scalability benefits, users will even be
incentivized to join CoinPools due to the conservative use of chain
resources, and thus enjoy privacy as a side-effect.

### CoinPool design

A CoinPool must satisfy the following *non-interactive any-order
withdrawal* property: at any point in time and any possible sequence of
previous CoinPool events, a participant should be able to move their funds
from the CoinPool to any address the participant wants without cooperation
with other CoinPool members.

The state of a CoinPool is represented by one on-chain UTXO (a funding
multisig of all pool participants) and a set of 

Re: [bitcoin-dev] Time-dilation Attacks on the Lightning Network

2020-06-11 Thread Antoine Riard via bitcoin-dev
Hi ZmnSCPxj

Well your deeclipser is already WIP ;)

See my AltNet+Watchdog proposals in Core:
https://github.com/bitcoin/bitcoin/pull/18987
https://github.com/bitcoin/bitcoin/pull/18988

It's almost covering what you mention: a driver framework to plug in
alternative transport protocols: radio, DNS, even LN Noise, Tor's
Snowflake... The proposal is a PoC with a multi-threaded process, but yes, I
want the production design to be multi-process for the reasons you mentioned.
Drivers should be developed out-of-tree but with an interface to plug them
in smoothly (tm).

The proposal is more generic than pure LN; e.g. some privacy-concerned users
may want to broadcast their transactions over radio by default. But for LN
support it should a) detect network/block-issuance anomalies, b) dynamically
react by closing channels, c) fetch headers/blocks through redundant
communication channels, and d) provide emergency transaction broadcast if
your time-sensitive transactions are censored.

It's long-term work so be patient, but getting opt-in support in Core would
make it far easier for any LN routing/vaulting node to deploy it. In the
meantime you can run multiple nodes on different infrastructures to serve
as a backend for your LN node.

Bonus: if LN nodes are incentivized to deploy such strong anti-eclipsing
measures to mitigate time-dilation, it would benefit base-layer p2p security
network-wise. In case of a network partition, your node with link-layer
redundancy will keep its connected peers on the same side of the
partition in sync, even if they don't deploy anything.

I'm sure you have improvements to suggest!

Best,
Antoine


On Wed, Jun 10, 2020 at 19:35, ZmnSCPxj  wrote:

> Good morning Antoine and Gleb,
>
> One thing I have been idly thinking about would be to have a *separate*
> software daemon that performs de-eclipsing for your Bitcoin fullnode.
>
> For example, you could run this deeclipser on the same hardware as your
> Bitcoin fullnode, and have the deeclipser bind to port 8334.
> Then you set your Bitcoin fullnode with `addnode=localhost:8334` in your
> `bitcoind.conf`.
>
> Your Bitcoin fullnode would then connect to the deeclipser using normal
> P2P protocol.
>
> The deeclipser would periodically, every five minutes or so, check the
> latest headers known by your fullnode, via the P2P protocol connection your
> fullnode makes.
> Then it would attempt to discover any blocks with greater blockheight.
>
> The reason why we have a separate deeclipser process is so that the
> deeclipser can use a plugin system, and isolate the plugins from the main
> fullnode software.
> For example, the deeclipser could query a number of plugins:
>
> * One plugin could just try connecting to some random node, in the hopes
> of getting a new connection that is not eclipsed.
> * Another plugin could try polling known blockchain explorers and using
> their APIs over HTTPS, possibly over Tor as well.
> * Another plugin could try connecting to known Electrum servers.
> * New plugins can be developed for new mitigations, such as sending
> headers over DNS or blocks over mesh or etc.
>
> Then if any plugin discovers a block later than that known by your
> fullnode, the deeclipser can send an unsolicited `block` or `header`
> message to your fullnode to update it.
>
> The advantage of using a plugin system is that it becomes easier to
> prototype, deploy, and maybe even test new de-eclipsing mitigations.
>
> At the same time, by running a separate daemon from the fullnode, we
> provide some amount of process isolation in case some problem with the
> plugin system exists.
> The deeclipser could be run by a completely different user, for example,
> and you might even run multiple deeclipser daemons in the same hardware,
> with different non-overlapping plugins, so that an exploit of one plugin
> will only bring down one deeclipser, with other deeclipser daemons
> remaining functional and still protecting your fullnode.
>
> Finally, by using the P2P protocol, the fullnode you run could be a
> non-Bitcoin-Core fullnode, such as btcd or rust-bitcoin or whatever other
> fullnode implementations exist, assuming you actually want to use them for
> some reason.
>
> What do you think?
>
> Regards,
> ZmnSCPxj
>
>
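
To make the quoted plugin architecture concrete, here is a minimal sketch of
such a deeclipser's polling loop. All names here (the plugin objects, their
best_height()/headers_since() methods, the fullnode handle's feed_headers())
are hypothetical and invented for illustration; none of this is an existing
Bitcoin Core, btcd, or rust-bitcoin interface:

    import time

    class Deeclipser:
        def __init__(self, fullnode, plugins, interval=300):
            self.fullnode = fullnode    # P2P connection to the local fullnode
            self.plugins = plugins      # isolated alternative header/block sources
            self.interval = interval    # poll every five minutes or so

        def poll_once(self):
            local = self.fullnode.best_height()
            for plugin in self.plugins:
                try:
                    remote = plugin.best_height()
                except Exception:
                    continue            # a failing plugin must not stall the rest
                if remote > local:
                    # Push the missing headers unsolicited over the P2P connection.
                    self.fullnode.feed_headers(plugin.headers_since(local))

        def run(self):
            while True:
                self.poll_once()
                time.sleep(self.interval)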