Good morning shymaa,

> The suggested idea I was replying to is to make all dust TXs invalid by some 
> nodes.

Is this supposed to be a consensus change or not?
Why "some" nodes and not all?

I think the important bit is for full nodes.
Non-full-nodes already work at reduced security; what is important is the 
security-efficiency tradeoff.

> I suggested a compromise by keeping them in secondary storage for full nodes, 
> and in a separate Merkle Tree for bridge servers.
> -In bridge servers they won't increase any worst case; on the contrary, this 
> will enhance the performance, even if slightly.
> -In full nodes, and since they will usually appear in clusters, they will be 
> fetched rarely (either by a dust sweeping action, or by a malicious attacker), 
> in both cases as a batch.
> -To not exhaust the node with DoS (as the reply mentioned), one may think of 
> uploading the whole dust partition if they were called more than a certain 
> threshold (say more than 1 Tx in a block),
> -and then keep them there for "a while", but as a separate partition too, to 
> exclude them from any caching mechanism after that block.
> -The "while" could be a tuned parameter.

Assuming you meant that dust txs are considered invalid by all nodes, the cases 
break down as follows (a rough worst-case sketch follows this list):

* Block has no dust sweep
  * With dust rejected: only non-dust outputs are accessed.
  * With dust in secondary storage: only non-dust outputs are accessed.
* Block has some dust sweeps
  * With dust rejected: only non-dust outputs are accessed; the block is rejected.
  * With dust in secondary storage: some data is loaded from secondary storage.
* Block is composed of only dust sweeps
  * With dust rejected: only non-dust outputs are accessed; the block is rejected.
  * With dust in secondary storage: significant increase in processing to load 
    the large secondary storage into memory.
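
To make the comparison concrete, here is a minimal accounting sketch (plain
Python, not Bitcoin Core code) of worst-case UTXO accesses for a block made
entirely of dust sweeps; MAX_BLOCK_TXS, the policy names, and the access counts
are all assumptions for illustration, not measurements of any real node:

    # Rough worst-case accounting sketch; every number here is an assumption.
    MAX_BLOCK_TXS = 10_000  # assumed upper bound on 1-input txs per block

    def worst_case_accesses(policy):
        """UTXO accesses for a block composed only of dust sweeps."""
        if policy == "reject_dust":
            # The first dust-spending input already makes the block invalid,
            # so validation can stop almost immediately.
            return {"fast_reads": 1, "secondary_reads": 0}
        if policy == "dust_in_secondary":
            # Every dust input must be fetched from the slow secondary store
            # before the block can be accepted or rejected.
            return {"fast_reads": 0, "secondary_reads": MAX_BLOCK_TXS}
        raise ValueError(policy)

    for policy in ("reject_dust", "dust_in_secondary"):
        print(policy, worst_case_accesses(policy))

The only point of the sketch is that the secondary-storage variant pays its
full cost precisely in the adversarial case, while outright rejection pays
almost nothing there.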

So I fail to see how the proposal ever reduces processing compared to the idea 
of just outright making all dust txs invalid and rejecting the block.
Perhaps you are trying to explain some other mechanism than what I understood?

It is helpful to always think in terms of worst-case behavior when considering 
resistance against attacks.

> -Take care that the more dust is swept, the less dust remains in the UTXO 
> set, as users are already strongly disincentivised to create more.

But creating dust is just as easy as sweeping it, and nothing really prevents 
a block from *both* creating *and* sweeping dust, e.g. a block composed of 
1-input-1-output transactions, unless you want to describe some kind of 
restriction here?

Such a degenerate block would hit your secondary storage twice: once to read, 
and once to overwrite and add new entries; if the storage is large, then the 
index structure you use is also large, and updates there can be expensive as 
well.
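
As a rough illustration of that double hit (same caveats as the sketch above:
the names and numbers are assumptions, not measurements):

    # Hypothetical block of 1-input-1-output txs, each spending one dust
    # output and creating one new dust output.
    MAX_BLOCK_TXS = 10_000  # assumed upper bound, as above

    def degenerate_block_secondary_io():
        """Secondary-storage operations for a sweep-and-recreate block."""
        reads = MAX_BLOCK_TXS   # fetch every swept dust output
        writes = MAX_BLOCK_TXS  # insert every newly created dust output
        # Each read and write also touches whatever index covers the dust
        # partition; with a large partition, each touch may itself cost I/O.
        index_touches = reads + writes
        return {"reads": reads, "writes": writes,
                "index_touches": index_touches}

    print(degenerate_block_secondary_io())

So the secondary-storage cost of such a block is linear in the block size in
both directions at once, reads and writes, plus whatever the index updates cost.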


Again, I am looking solely at fullnode efficiency here, meaning all rules and 
all transactions are validated; not validating and simply accepting some 
transactions as valid is a degradation of security from full validation to SPV 
validation.
Now of course, in practice, modern Bitcoin is hard to attack with *only* mining 
hashpower, as there are so many fullnodes that an SPV node would easily be able 
to find the "True" history of the chain.
However, as I understand it, that property of fullnodes protecting against 
attacks on SPV nodes only exists because fullnodes are cheap to keep online; 
if the cost of fullnodes in the **worst case** (***not*** average, please stop 
talking about average case) increases, then it may become feasible for miners 
to attack SPV nodes.

Regards,
ZmnSCPxj