Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via Lightning-dev
Good morning all,

Thinking a little more: if the dust limit is intended to help keep the UTXO set 
small, then on the LN side this could also be achieved by using channel 
factories (including "one-shot" factories, which do not allow changing the 
topology of the subgraph inside the factory but have the advantage of requiring 
neither `SIGHASH_NOINPUT` nor an extra CSV constraint that is difficult to 
weigh in routing algorithms), where multiple channels are backed by a single 
UTXO.
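
As a rough back-of-the-envelope illustration (a hypothetical sketch, not tied
to any particular factory construction), the UTXO savings come from a single
funding output backing every channel in the group:

    # Hypothetical sketch: with n participants in one factory, a single
    # funding UTXO can back every pairwise channel among them, instead of
    # one funding UTXO per channel as today.
    def pairwise_channels(n_participants: int) -> int:
        return n_participants * (n_participants - 1) // 2

    for n in (2, 5, 10):
        k = pairwise_channels(n)
        print(f"{n} participants: {k} channels from 1 UTXO "
              f"(vs {k} separate funding UTXOs without a factory)")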

Of course, with channel factories there is now a greater set of participants 
who will have differing opinions on appropriate feerate.

So I suppose one can argue that the dust limit is less material to higher 
layers than actual onchain feerates are.


Regards,
ZmnSCPxj


Re: [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Antoine Riard
> As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.

Right, as of today we trim to dust any commitment output whose value is below
the transaction owner's `dust_limit_satoshis` plus the HTLC-claim (success or
timeout) fee at the agreed-upon feerate. So the feerate is the most significant
variable in defining what counts as an LN *uneconomical output*.
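
To make the dependence on feerate concrete, here is a minimal sketch of that
trim-to-dust check; the HTLC-claim weights (663/703) follow the non-anchor
BOLT 3 formulas and all the numbers are purely illustrative:

    # Minimal sketch of the trim-to-dust rule described above.
    def htlc_is_trimmed(htlc_sat: int, offered: bool,
                        feerate_per_kw: int, dust_limit_sat: int) -> bool:
        weight = 663 if offered else 703          # HTLC-timeout vs HTLC-success tx
        claim_fee_sat = feerate_per_kw * weight // 1000
        return htlc_sat < dust_limit_sat + claim_fee_sat

    # The same 1000 sat HTLC flips between kept and trimmed purely on feerate:
    print(htlc_is_trimmed(1000, True, feerate_per_kw=253, dust_limit_sat=546))   # False
    print(htlc_is_trimmed(1000, True, feerate_per_kw=5000, dust_limit_sat=546))  # True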

IMO this approach has annoying limitations. First, you still need to reach an
agreement among channel operators on the mempool feerate. Such an agreement
might be hard to find: on one side you would like to leave your counterparty
free to pick a feerate they judge efficient for getting their transactions
confirmed, but at the same time not one so high that it burns to fees the
low-value HTLCs that *your* fee estimator judged sane to claim.

Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime of
the HTLC. An HTLC might be considered dust at block 100, when mempools are
full, even though it only expires at block 200, when mempools are empty and
the HTLC would again be economical to claim. I think this inaccuracy will only
get worse with wider deployment of long-lived routed payments over LN, such as
DLCs or hodl invoices.

All this to say: if for those reasons LN devs replace feerate negotiation in
the trim-to-dust definition with a static feerate, it would likely put higher
pressure on full-node operators, as the number of uneconomical outputs might
increase.

(From an LN viewpoint, I would say we're trying to solve a price-discovery
problem, namely the cost of writing to the UTXO set, in a distributed system,
where any deviation from the "honest" price means placing more trust in your
LN counterparty.)

> They could also use trustless probabilistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178

Thanks for bringing probabilistic payments to the surface; yes, that's a
worthy alternative approach for low-value payments to keep in mind.

On Tue, Aug 10, 2021 at 02:15, David A. Harding wrote:

> On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> > I'm pretty conservative about increasing the standard dust limit in any
> > way. This would convert a higher percentage of LN channels capacity into
> > dust, which is coming with a lowering of funds safety [0].
>
> I think that reasoning is incomplete.  There are two related things here:
>
> - **Uneconomical outputs:** outputs that would cost more to spend than
>   the value they contain.
>
> - **Dust limit:** an output amount below which Bitcoin Core (and other
>   nodes) will not relay the transaction containing that output.
>
> Although raising the dust limit can have the effect you describe,
> increases in the minimum necessary feerate to get a transaction
> confirmed in an appropriate amount of time also "converts a higher
> percentage of LN channel capacity into dust".  As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.
>
> (Related to your linked thread, that seems to be about the risk of
> "burning funds" by paying them to a miner who may be a party to the
> attack.  There's plenty of other alternative ways to burn funds that can
> change the risk profile.)
>
> > the standard dust limit [...] introduces a trust vector
>
> My point above is that any trust vector is introduced not by the dust
> limit but by the economics of outputs being worth less than they cost to
> spend.
>
> > LN node operators might be willing to compensate for this "dust" trust vector
> > by relying on a side-trust model
>
> They could also use trustless probabilistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
>
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178
>
> (Probabilistic payments were discussed in the general context of Bitcoin
> well before LN was proposed, and Elements even includes an opcode for
> creating them.)
>
> > smarter engineering such as utreexo on the base-layer side
>
> Utreexo doesn't solve this problem.  Many nodes (such as miners) will
> still want to store the full UTXO set and access it quickly. Utreexo
> proofs will grow in size with UTXO set size (though, at best, only
> log(n)), so full node operators will still not want their bandwidth
> wasted by people who create UTXOs they have no reason to spend.
>
> > I think the status quo is good enough for now
>
> I agree.
>
> -Dave
>

Re: [Lightning-dev] [bitcoin-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via Lightning-dev
Good morning Billy, et al.,

> For sure, CT can be done with computational soundness. The advantage of 
> unhidden amounts (as with current bitcoin) is that you get unconditional 
> soundness.

My understanding is that it should be possible to have unconditional soundness 
with the use of an El-Gamal commitment scheme; am I wrong?

Alternately, one possible softforkable design would be for Bitcoin to maintain 
a non-CT block (the current scheme) and a separately-committed CT block (i.e. 
similar to how SegWit has a "separate" "block"/Merkle tree that includes 
witnesses).
To transfer funds from the legacy non-CT block, on the legacy block you put 
them into a "burn" transaction that magically causes the same amount to be 
created (with a trivial/publicly known salt) in the CT block.
Then, to move from the CT block back to the legacy non-CT block, you would 
match one of those "burn" TXOs and spend it, with a proof that the amount you 
are removing from the CT block is exactly the same value as the "burn" TXO you 
are now spending.
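
A toy model of that two-way flow, just to make the accounting explicit (the
names and structures here are stand-ins, not a concrete consensus design):

    # Toy model of the "burn" peg sketched above; purely illustrative.
    legacy_burn_txos = {}   # txid -> amount in sats, publicly visible on the legacy side

    def burn_to_ct(txid: str, amount_sat: int) -> None:
        # Legacy -> CT: the "burn" output shows its amount in the clear, and the
        # same amount is created (with a publicly known salt) on the CT side.
        legacy_burn_txos[txid] = amount_sat

    def unburn_from_ct(txid: str, claimed_amount_sat: int) -> int:
        # CT -> legacy: spend a matching "burn" TXO with a proof that the value
        # leaving the CT side equals that TXO's value.
        assert legacy_burn_txos[txid] == claimed_amount_sat, "amount proof failed"
        del legacy_burn_txos[txid]
        return claimed_amount_sat

    burn_to_ct("a", 50_000)
    burn_to_ct("b", 70_000)
    # Every full node can see the total pegged into the CT side (120000 sats
    # here), but not how that total is split among individual CT outputs.
    print(sum(legacy_burn_txos.values()))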

(For additional privacy, the values of the "burn" TXOs might be restricted to 
a single fixed allowed value, so that transfers passing through the CT portion 
would have fewer identifying features.)

The "burn" TXOs would be some trivial anyone-can-spend, such as ` 
<0> OP_EQUAL OP_NOT` with `` being what is used in the CT to cover 
the value, and knowledge of the scalar behind this point would allow the CT 
output to be spent (assuming something very much like MimbleWimble is used; 
otherwise it could be the hash of some P2WSH or similar analogue on the CT 
side).
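
A tiny stack walkthrough (simplified Script semantics, using the `<point>`
reading above) of why such an output is anyone-can-spend: no signature check
ever runs, and the final stack top is true regardless of who spends it.

    # Simplified evaluation of `<point> <0> OP_EQUAL OP_NOT`; illustrative only.
    def run_burn_script(point_bytes: bytes) -> bool:
        stack = [point_bytes]                         # <point> (data embedded in the script)
        stack.append(b"")                             # <0> pushes an empty (false) item
        b, a = stack.pop(), stack.pop()
        stack.append(b"\x01" if a == b else b"")      # OP_EQUAL: point != 0, so false
        top = stack.pop()
        stack.append(b"\x01" if top == b"" else b"")  # OP_NOT: false -> true
        return stack[-1] != b""                       # succeeds with no witness data at all

    print(run_burn_script(b"\x02" + bytes(32)))       # True: anyone can spend it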

Basically, this is "CT as a 'sidechainlike' that every fullnode runs".

In the legacy non-CT block, the total amount of funds that are in all CT 
outputs is known (it is the sum of all the "burn" TXOs) and has a known upper 
limit that cannot be higher than the supply limit of the legacy non-CT block, 
i.e. 21 million BTC.
At the same time, the values of *individual* CT-block TXOs cannot be known; 
what is learnable is only how many BTC are in all CT-block TXOs combined, 
which should be sufficient privacy if there are a large enough number of users 
of the CT block.

This allows the CT block to use a scheme with unconditional privacy and 
computational soundness, and if somehow the computational soundness is broken 
then the first one to break it would be able to steal all the CT coins, but 
not *all* Bitcoin coins, as they could never withdraw more than the total 
value of the "burn" TXOs on the legacy non-CT blockchain.

This may be sufficient for practical privacy.


On the other hand, I think the dust limit still makes sense to keep for now.

Regards,
ZmnSCPxj


Re: [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread David A. Harding via Lightning-dev
On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> I'm pretty conservative about increasing the standard dust limit in any
> way. This would convert a higher percentage of LN channels capacity into
> dust, which is coming with a lowering of funds safety [0]. 

I think that reasoning is incomplete.  There are two related things here:

- **Uneconomical outputs:** outputs that would cost more to spend than
  the value they contain.

- **Dust limit:** an output amount below which Bitcoin Core (and other
  nodes) will not relay the transaction containing that output.
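
To make the distinction concrete, a small sketch (the threshold and size
numbers roughly match Bitcoin Core's defaults for a P2WPKH output, but treat
them as illustrative):

    # Sketch contrasting the two notions above; numbers are approximate.
    P2WPKH_DUST_SAT = 294       # Core's default dust threshold for a P2WPKH output
    P2WPKH_INPUT_VBYTES = 68    # rough cost in vbytes of spending such an output

    def is_uneconomical(value_sat: int, feerate_sat_per_vb: float) -> bool:
        # Depends on whatever feerate prevails when you try to spend.
        return value_sat < P2WPKH_INPUT_VBYTES * feerate_sat_per_vb

    value = 1_000                         # a 1000 sat P2WPKH output
    print(value < P2WPKH_DUST_SAT)        # False: above the dust limit, relayable
    print(is_uneconomical(value, 5.0))    # False: ~340 sat to spend at 5 sat/vB
    print(is_uneconomical(value, 50.0))   # True: ~3400 sat to spend at 50 sat/vB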

Although raising the dust limit can have the effect you describe, 
increases in the minimum necessary feerate to get a transaction
confirmed in an appropriate amount of time also "converts a higher
percentage of LN channel capacity into dust".  As developers, we have no
control over prevailing feerates, so this is a problem LN needs to deal
with regardless of Bitcoin Core's dust limit.

(Related to your linked thread, that seems to be about the risk of
"burning funds" by paying them to a miner who may be a party to the
attack.  There's plenty of other alternative ways to burn funds that can
change the risk profile.)

> the standard dust limit [...] introduces a trust vector 

My point above is that any trust vector is introduced not by the dust
limit but by the economics of outputs being worth less than they cost to
spend.

> LN node operators might be willing to compensate for this "dust" trust vector
> by relying on a side-trust model

They could also use trustless probabilistic payments, which have been
discussed in the context of LN for handling the problem of payments too
small to be represented onchain since early 2016:
https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2=1#slide=id.g85f425098_0_178
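
(A minimal sketch of the idea, ignoring how the coin flip is made trustless:
a value too small to represent onchain is paid as a larger amount with
proportional probability, so the expected value comes out right.)

    # Illustration only; real proposals make the randomness verifiable by both sides.
    import random

    def probabilistic_pay(target_msat: int, unit_sat: int) -> int:
        # Pay `unit_sat` with probability target_msat / (unit_sat * 1000).
        return unit_sat if random.random() < target_msat / (unit_sat * 1000) else 0

    # Paying 100 msat (0.1 sat) as a 1-in-10000 chance of a 1000 sat payment:
    paid = sum(probabilistic_pay(100, 1000) for _ in range(1_000_000))
    print(paid / 1_000_000, "sat average per payment (expect ~0.1)")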

(Probabilistic payments were discussed in the general context of Bitcoin
well before LN was proposed, and Elements even includes an opcode for
creating them.)

> smarter engineering such as utreexo on the base-layer side 

Utreexo doesn't solve this problem.  Many nodes (such as miners) will
still want to store the full UTXO set and access it quickly. Utreexo
proofs will grow in size with UTXO set size (though, at best, only
log(n)), so full node operators will still not want their bandwidth
wasted by people who create UTXOs they have no reason to spend.

> I think the status quo is good enough for now

I agree.

-Dave

