Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via bitcoin-dev
Good morning all,

Thinking a little more, if the dust limit is intended to help keep UTXO sets 
down, then on the LN side, this could be achieved as well by using channel 
factories (including "one-shot" factories which do not allow changing the 
topology of the subgraph inside the factory, but have the advantage of not 
requiring either `SIGHASH_NOINPUT` or an extra CSV constraint that is difficult 
to weigh in routing algorithms), where multiple channels are backed by a single 
UTXO.

Of course, with channel factories there is now a greater set of participants 
who will have differing opinions on appropriate feerate.

So I suppose one can argue that the dust limit becomes less material to higher 
layers, than actual onchain feerates.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Antoine Riard via bitcoin-dev
>  As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.

Right, as of today we trim-to-dust any commitment output whose value is
below the transaction owner's `dust_limit_satoshis` plus the HTLC-claim
(success or timeout) fee at the agreed-on feerate. So the feerate is the
most significant variable in defining what counts as an LN *uneconomical
output*.
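For concreteness, the BOLT 3 trim rule can be sketched as follows (a minimal
sketch using the pre-anchor claim-transaction weights; the feerates and HTLC
value in the example are made up):

```python
# Sketch of the BOLT 3 trim-to-dust check (pre-anchor HTLC weights).
HTLC_TIMEOUT_WEIGHT = 663   # weight of the HTLC-timeout transaction
HTLC_SUCCESS_WEIGHT = 703   # weight of the HTLC-success transaction

def is_trimmed(htlc_sat: int, offered: bool,
               feerate_per_kw: int, dust_limit_sat: int) -> bool:
    """An HTLC is trimmed (paid to fees) when its value cannot cover
    the owner's dust limit plus the fee of the claiming transaction."""
    weight = HTLC_TIMEOUT_WEIGHT if offered else HTLC_SUCCESS_WEIGHT
    claim_fee = feerate_per_kw * weight // 1000
    return htlc_sat < dust_limit_sat + claim_fee

# The same 2,000-sat offered HTLC flips between dust and non-dust as the
# agreed feerate moves:
print(is_trimmed(2_000, True, feerate_per_kw=253, dust_limit_sat=546))    # False
print(is_trimmed(2_000, True, feerate_per_kw=5_000, dust_limit_sat=546))  # True
```

This is exactly why the feerate, not the dust limit alone, decides which
outputs are uneconomical.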

IMO this approach presents annoying limitations. First, you still need to
come to an agreement among channel operators on the mempool feerate. Such
an agreement might be hard to reach: on one side you would like to leave
your counterparty free to pick a feerate they gauge as efficient for
confirming their transactions, but at the same time not so high that it
burns to fees the low-value HTLCs that *your* fee estimator judged sane to
claim.

Secondly, the trim-to-dust evaluation doesn't correctly match the lifetime
of the HTLC. An HTLC might be considered dust at block 100, when mempools
are full, even though it only expires at block 200, when mempools are empty
and the HTLC would be economical to claim again. I think this inaccuracy
will only get worse with wider deployment of long-lived routed packets over
LN, such as DLCs or hodl invoices.

All this to say: if for those reasons LN devs replace feerate negotiation
in the trim-to-dust definition with a static feerate, it would likely put
higher pressure on full-node operators, as the number of uneconomical
outputs might increase.

(From an LN viewpoint, I would say we're trying to solve a price-discovery
issue, namely the cost of writing to the UTXO set, in a distributed system,
where any deviation from the "honest" price means trusting your LN
counterparty more.)

> They could also use trustless probabilistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2&pli=1#slide=id.g85f425098

Thanks for bringing probabilistic payments to the surface; yes, that's a
worthy alternative approach for low-value payments to keep in mind.
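As a toy illustration (not the scheme from the linked slides): a payment too
small to enforce onchain can be replaced by a small chance of a larger,
enforceable payment with the same expected value:

```python
import random

# Toy model of a probabilistic payment: instead of paying 1,000 sats
# (too small to enforce onchain), pay 100,000 sats with probability 1/100.
# The expected value is unchanged, but each realized payment is economical.
def probabilistic_payment(value_sat: int, min_onchain_sat: int,
                          rng: random.Random) -> int:
    p = value_sat / min_onchain_sat          # win probability
    return min_onchain_sat if rng.random() < p else 0

rng = random.Random(1)
trials = 100_000
total = sum(probabilistic_payment(1_000, 100_000, rng) for _ in range(trials))
print(total / trials)   # averages to roughly 1,000 sats per payment
```

The trustless variants discussed for LN make the coin flip verifiable by
both parties, so neither side can bias the probability.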

On Tue, Aug 10, 2021 at 02:15, David A. Harding  wrote:

> On Mon, Aug 09, 2021 at 09:22:28AM -0400, Antoine Riard wrote:
> > I'm pretty conservative about increasing the standard dust limit in any
> > way. This would convert a higher percentage of LN channels capacity into
> > dust, which is coming with a lowering of funds safety [0].
>
> I think that reasoning is incomplete.  There are two related things here:
>
> - **Uneconomical outputs:** outputs that would cost more to spend than
>   the value they contain.
>
> - **Dust limit:** an output amount below which Bitcoin Core (and other
>   nodes) will not relay the transaction containing that output.
>
> Although raising the dust limit can have the effect you describe,
> increases in the minimum necessary feerate to get a transaction
> confirmed in an appropriate amount of time also "converts a higher
> percentage of LN channel capacity into dust".  As developers, we have no
> control over prevailing feerates, so this is a problem LN needs to deal
> with regardless of Bitcoin Core's dust limit.
>
> (Related to your linked thread, that seems to be about the risk of
> "burning funds" by paying them to a miner who may be a party to the
> attack.  There's plenty of other alternative ways to burn funds that can
> change the risk profile.)
>
> > the standard dust limit [...] introduces a trust vector
>
> My point above is that any trust vector is introduced not by the dust
> limit but by the economics of outputs being worth less than they cost to
> spend.
>
> > LN node operators might be willing to compensate for this "dust" trust
> > vector by relying on a side-trust model
>
> They could also use trustless probabilistic payments, which have been
> discussed in the context of LN for handling the problem of payments too
> small to be represented onchain since early 2016:
>
> https://docs.google.com/presentation/d/1G4xchDGcO37DJ2lPC_XYyZIUkJc2khnLrCaZXgvDN0U/edit?pref=2&pli=1#slide=id.g85f425098_0_178
>
> (Probabilistic payments were discussed in the general context of Bitcoin
> well before LN was proposed, and Elements even includes an opcode for
> creating them.)
>
> > smarter engineering such as utreexo on the base-layer side
>
> Utreexo doesn't solve this problem.  Many nodes (such as miners) will
> still want to store the full UTXO set and access it quickly. Utreexo
> proofs will grow in size with UTXO set size (though, at best, only
> log(n)), so full node operators will still not want their bandwidth
> wasted by people who create UTXOs they have no reason to spend.
>
> > I think the status quo is good enough for now
>
> I agree.
>
> -Dave
>

[bitcoin-dev] Fwd: NLnet cryptoprimitives grant approved

2021-08-10 Thread Luke Kenneth Casson Leighton via bitcoin-dev
with many thanks to NLnet, the EUR 50,000 grant to research and
develop draft cryptographic primitives and instructions for the
newly-opened Power ISA has been approved.

unlike RISC-V, where full transparency and trust are problematic and
there are many participants whose interests may not necessarily align,
the OpenPOWER initiative, which has been in careful planning for
nearly 10 years, is a much less crowded space and, crucially, does not
require non-transparent membership of OPF in order to submit ISA RFCs
(Requests for Change).

[non-OPF members cannot participate in actual ISA WG meetings and
certainly cannot vote on RFCs, but they can at least submit them.
whereas whilst the RISC-V Foundation's Commercial Confidence
Requirements are perfectly reasonable, the blanket secrecy even for
submitting RFCs is not]

we at Libre-SOC aim to use this process, based on taking apart key
strategic cryptographic algorithms back to their mathematical roots,
then applying Vector ISA design analysis and seeing what can be
created.

examples include going back to the fundamental basis of Rijndael and,
instead of creating hardcoded custom silicon for MixColumns as is the
"normal" practice, adding a generic Galois Field ALU and a generic
Matrix Multiply system.  another is to design instructions suitable
for "big integer math".
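for illustration, the GF(2^8) multiply at the heart of MixColumns (the
operation a generic Galois Field ALU would expose directly, rather than
hardcoding the MixColumns matrix) can be sketched as:

```python
# Sketch of the GF(2^8) arithmetic underlying AES MixColumns.
AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, the Rijndael reduction polynomial

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply of two GF(2^8) elements, reduced mod AES_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shifted copy of a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY   # reduce back into 8 bits
        b >>= 1
    return r

# MixColumns multiplies column bytes by the constants 2 and 3:
print(hex(gf_mul(0x57, 0x02)))  # 0xae
print(hex(gf_mul(0x57, 0x03)))  # 0xf9
```

a generic Galois Field ALU would let the reduction polynomial itself be a
parameter, covering Rijndael and other GF-based algorithms with one unit.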

this in turn means that the resultant ISA would be ideally suited to
the experimental development of future cryptographic algorithms for
use in securing wallets and other purposes related to blockchain
management.

[as bitcoin stands we cannot possibly hope to compete with custom
silicon dedicated to SHA hash production, however we would very much
like to see a future version of bitcoin that uses far less power yet
retains its high strategic value, and, at the same time, like e.g.
monero RandomX, is better suited to a general-purpose Vector
Supercomputer ISA, which is what we are developing]

OpenPOWER's commitment to a transparent RFC process allows us to do
that without compromising trust: no discussions that we participate in
will ever be behind closed doors.

if anyone would be interested to participate or collaborate on this,
we have funding available, and welcome involvement in designing and
testing an ISA suitable for securing bitcoin for end-users in a fully
transparent fashion.

l.

---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy, et al.,

> For sure, CT can be done with computational soundness. The advantage of
> unhidden amounts (as with current bitcoin) is that you get unconditional
> soundness.

My understanding is that it should be possible to have unconditional
soundness with the use of an El-Gamal commitment scheme; am I wrong?

Alternately, one possible softforkable design would be for Bitcoin to maintain 
a non-CT block (the current scheme) and a separately-committed CT block (i.e. 
similar to how SegWit has a "separate" "block"/Merkle tree that includes 
witnesses).
When transferring funds from the legacy non-CT block, on the legacy block you 
put it into a "burn" transaction that magically causes the same amount to be 
created (with a trivial/publicly known salt) in the CT block.
Then to move from the CT block back to legacy non-CT you would match one of 
those "burn" TXOs and spend it, with a proof that the amount you are removing 
from the CT block is exactly the same value as the "burn" TXO you are now 
spending.

(for additional privacy, the values of the "burn" TXOs might be made into some 
fixed single allowed value, so that transfers passing through the CT portion 
would have fewer identifying features)

The "burn" TXOs would be some trivial anyone-can-spend, such as `<point>
<0> OP_EQUAL OP_NOT` with `<point>` being what is used in the CT to cover
the value, and knowledge of the scalar behind this point would allow the CT
output to be spent (assuming something very much like MimbleWimble is used;
otherwise it could be the hash of some P2WSH or similar analogue on the CT
side).

Basically, this is "CT as a 'sidechainlike' that every fullnode runs".

In the legacy non-CT block, the total amount of funds that are in all CT 
outputs is known (it would be the sum total of all the "burn" TXOs) and will 
have a known upper limit, that cannot be higher than the supply limit of the 
legacy non-CT block, i.e. 21 million BTC.
At the same time, *individual* CT-block TXOs cannot have their values known; 
what is learnable is only how many BTC are in all CT block TXOs, which should 
be sufficient privacy if there are a large enough number of users of the CT 
block.

This allows the CT block to use an unconditional privacy and computational 
soundness scheme, and if somehow the computational soundness is broken then the 
first one to break it would be able to steal all the CT coins, but not *all* 
Bitcoin coins, as there would not be enough "burn" TXOs on the legacy non-CT 
blockchain.

This may be sufficient for practical privacy.


On the other hand, I think the dust limit still makes sense to keep for now, 
though.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Billy Tetrud via bitcoin-dev
For sure, CT can be done with computational soundness. The advantage of
unhidden amounts (as with current bitcoin) is that you get unconditional
soundness. My understanding is that there is a fundamental tradeoff between
unconditional soundness and unconditional privacy. I believe Monero has
taken this alternate tradeoff path with unconditional privacy but only
computational soundness.

> old things that never move more or less naturally "fall leftward"

Ah yes, something like that would definitely be interesting to basically
make dust a moot point. Sounds like the tradeoff mentioned is that proofs
would be twice as big? Except newer UTXOs would have substantially shorter
proofs. It sounds like the kind of thing where there's some point where
there would be so many old UTXOs that proofs would be smaller on average in
the swap tree version vs the dead-leaf version. Maybe someone smarter than
me could estimate where that point is.

On Mon, Aug 9, 2021 at 10:04 PM Jeremy  wrote:

> You might be interested in https://eprint.iacr.org/2017/1066.pdf which
> claims that you can make CT computationally hiding and binding, see section
> 4.6.
>
> with respect to utreexo, you might review
> https://github.com/mit-dci/utreexo/discussions/249?sort=new which
> discusses tradeoffs between different accumulator designs. With a swap
> tree, old things that never move more or less naturally "fall leftward",
> although there are reasons to prefer alternative designs.
>
>


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-10 Thread Billy Tetrud via bitcoin-dev
>  By explicitly specifying the start and end block of an epoch, the user
> has more flexibility in shifting the epoch

Ok I see. I think I understand your proposal better now. If the output is
spent within the range epochStart - epochEnd, the limit holds; if it is
spent outside that range, the change output must also have a range of the
same length (or shorter?). So you want the user to be able to precisely
define the length and starting block of the rate-limiting period (epoch).
I'd say it'd be clearer to specify the window length and the starting block
in that case. The same semantics can be kept.

> This would require the system to bookkeep how much was spent since the
> first rate-limited output

Yes, for the length of the epoch, after which the bookkeeping can be
discarded/reset until a new transaction is sent. Your proposal also
requires bookkeeping though: it needs to store the 'remain' value with the
UTXO as well, because it's not efficient to go back and re-execute the
script just to grab that value.

> using an address as input for a transaction will always spend the full
> amount at that address

Using a UTXO will spend the full UTXO. The address may contain many UTXOs.
I'm not suggesting that a change address isn't needed - I'm suggesting that
the *same* address be used as the change address for the change output.
E.g. consider the following UTXO info:

Address X: rateLimit(windowSize = 144 blocks, limit = 100k sats)
* UTXO 1: 100k sats, 50k spent by ancestor inputs since epochStart 800100
* UTXO 2: 200k sats, 10k spent since epochStart

When sending a transaction using UTXO 2, a node would look up the list of
UTXOs in Address X, add up the amount spent since epochStart (60k), and
ensure that at most 40k goes to an address that isn't Address X. So a valid
transaction might look like:

Input: UTXO 2
Output 1: 30k -> Address A
Output 2: 170k -> Address X
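A minimal sketch of that validation rule (the function name and the
bookkeeping inputs are hypothetical; a real design would track per-address
spending inside consensus code):

```python
# Hypothetical check for the rate-limit scheme sketched above: all UTXOs
# on the rate-limited address 'X' share one window, and a spend is valid
# only if the total leaving X within the window stays under the limit.
def validate_spend(limit_sat: int, spent_this_epoch_sat: int,
                   input_sat: int, outputs: dict) -> bool:
    """outputs maps address -> amount; 'X' is the rate-limited address."""
    if sum(outputs.values()) > input_sat:
        return False                      # cannot create value from nothing
    leaving = sum(amt for addr, amt in outputs.items() if addr != 'X')
    return spent_this_epoch_sat + leaving <= limit_sat

# The example from the text: 60k already spent this epoch, limit 100k,
# so at most 40k may leave Address X.
print(validate_spend(100_000, 60_000, 200_000,
                     {'A': 30_000, 'X': 170_000}))   # True
print(validate_spend(100_000, 60_000, 200_000,
                     {'A': 50_000, 'X': 150_000}))   # False
```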

On Thu, Aug 5, 2021 at 7:22 AM Zac Greenwood  wrote:

> Hi Billy,
>
> > It sounds like you're proposing an opcode
>
> No. I don’t have enough knowledge of Bitcoin to be able to tell how (and
> if) rate-limiting can be implemented as I suggested. I am not able to
> reason about opcodes, so I kept my description at a more functional level.
>
> > I still don't understand why it's useful to specify those as absolute
> > block heights
>
> I feel that this is a rather uninteresting data-representation aspect
> that's not worth going back and forth about. Sure, specifying the length
> of the epoch may also be an option, although at the price of giving up
> some functionality, and without much if any gain.
>
> By explicitly specifying the start and end block of an epoch, the user has
> more flexibility in shifting the epoch (using alternate values for
> epochStart and epochEnd) and simultaneously increasing the length of an
> epoch. These seem rather exotic features, but there’s no harm in retaining
> them.
>
> > if you have a UTXO encumbered by rateLimit(epochStart = 800100,
> epochEnd = 800200, limit = 100k, remain = 100k), what happens if you don't
> spend that UTXO before block 800200?
>
> The rate limit remains in place. So if this UTXO is spent in block 90,
> then at most 100k may be spent. Also, the new epoch must be at least 100
> blocks and remain must correctly account for the actual amount spent.
>
> > This is how I'd imagine creating an opcode like this:
>
> > rateLimit(windowSize = 144 blocks, limit = 100k sats)
>
> This would require the system to bookkeep how much was spent since the
> first rate-limited output. It is a more intuitive way of rate-limiting but
> it may be much more difficult to implement, which is why I went with the
> epoch-based rate limiting solution. In terms of functionality, I believe
> the two solutions are nearly identical for all practical purposes.
>
> Your next section confuses me. As I understand it, using an address as
> input for a transaction will always spend the full amount at that address.
> That's why change addresses are required, no? If Bitcoin were able to pay
> exact amounts then there wouldn't be any need for change outputs.
>
> Zac
>
>
> On Thu, 5 Aug 2021 at 08:39, Billy Tetrud  wrote:
>
>> >   A maximum amount is allowed to be spent within EVERY epoch.
>>
>> It sounds like you're proposing an opcode that takes in epochStart and
>> epochEnd as parameters. I still don't understand why it's useful to specify
>> those as absolute block heights. You mentioned that this enables more
>> straightforward validation logic, but I don't see how. Eg, if you have a
>> UTXO encumbered by rateLimit(epochStart = 800100, epochEnd = 800200, limit
>> = 100k, remain = 100k), what happens if you don't spend that UTXO before
>> block 800200? Is the output no longer rate limited then? Or is the opcode
>> calculating 800200-800100 = 100 and applying a rate limit for the next
>> epoch? If the first, then the UTXO must be spent within one epoch to remain
>> rate limited. If the second, then it seems nearly identica

Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread Billy Tetrud via bitcoin-dev
> 5) should we ever do confidential transactions we can't prevent it
> without compromising privacy / allowed transfers

I wanted to mention the dubiousness of adding confidential transactions to
bitcoin. Because adding CT would eliminate the ability for users to audit
the supply of Bitcoin, I think it's incredibly unlikely ever to happen. I'm
in the camp that we shouldn't do anything that prevents people from
auditing the supply, and I think that camp is probably pretty large.
Regardless of what I think should happen there, and even if CT were
eventually to happen in bitcoin, I don't think that future possibility is a
good reason to change the dust limit today.

It seems like dust is a scalability problem regardless of whether we
eventually use Utreexo, though an accumulator would help a ton. One idea
would be to destroy/delete dust at some point in the future. However, even
if we were to plan to do this, I still don't think the dust limit should be
removed. But it should probably be lowered a bit: given that the 546-sat
limit is about 7 cents and it's very doable to send 1 sat/vbyte
transactions, lowering it to 200 sats seems reasonable.
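For reference, the familiar 546-sat figure falls out of Bitcoin Core's dust
arithmetic for a P2PKH output (a sketch; Core's actual code computes this
from serialized sizes at a dust relay feerate of 3000 sat/kvB):

```python
# Sketch of Bitcoin Core's dust threshold for a P2PKH output: an output is
# dust if its value is below (output size + size of the input that would
# spend it) times the dust relay feerate.
DUST_RELAY_FEE_SAT_PER_VB = 3   # Core default, 3000 sat/kvB
P2PKH_OUTPUT_VBYTES = 34        # 8 value + 1 script length + 25 scriptPubKey
P2PKH_INPUT_VBYTES = 148        # outpoint + scriptSig (sig + pubkey) + sequence

dust_threshold = ((P2PKH_OUTPUT_VBYTES + P2PKH_INPUT_VBYTES)
                  * DUST_RELAY_FEE_SAT_PER_VB)
print(dust_threshold)   # 546
```

The same arithmetic at 1 sat/vbyte gives 182 sats, which is roughly why a
lower limit in the low hundreds of sats looks defensible.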


On Mon, Aug 9, 2021 at 6:24 AM Antoine Riard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I'm pretty conservative about increasing the standard dust limit in any
> way. This would convert a higher percentage of LN channels capacity into
> dust, which is coming with a lowering of funds safety [0]. Of course, we
> can adjust the LN security model around dust handling to mitigate the
> safety risk in case of adversarial settings, but ultimately the standard
> dust limit creates a "hard" bound, and as such it introduces a trust
> vector in the reliability of your peer not to go
> onchain with a commitment heavily loaded with dust-HTLCs you own.
>
> LN node operators might be willing to compensate for this "dust" trust
> vector by relying on a side-trust model, such as PKI to authenticate their
> peers or API tokens (LSATs, PoW tokens), probably not free from
> consequences for the "openness" of the LN topology...
>
> Further, I think any authoritative setting of the dust limit presents the
> risk of becoming ill-adjusted w.r.t. market realities after a few months
> or years, and would need periodic reevaluations. Those reevaluations, if
> not automated, would become a vector of endless dramas and bikeshedding as
> the L2s ecosystems grow bigger...
>
> Note, this would also constrain the design space of newer fee schemes,
> such as negotiated-with-mining-pool fees and discounted consolidation
> during low-feerate periods deployed by such producers of low-value outputs.
>
> Moreover as an operational point, if we proceed to such an increase on the
> base-layer, e.g to 20 sat/vb, we're going to severely damage the
> propagation of any LN transaction, where a commitment transaction is built
> with less than 20 sat/vb outputs. Of course, core's policy deployment on
> the base layer is gradual, but we should first give a time window for the
> LN ecosystem to upgrade and as of today we're still devoid of the mechanism
> to do it cleanly and asynchronously (e.g dynamic upgrade or quiescence
> protocol [1]).
>
> That said, as raised by other commentators, I don't deny we have a
> long-term tension between L2 nodes and full-nodes operators about the UTXO
> set growth, but for now I would rather solve this with smarter engineering
> such as utreexo on the base-layer side or multi-party shared-utxo or
> compressed colored coins/authentication smart contracts (e.g
> opentimestamp's merkle tree in OP_RETURN) on the upper layers rather than
> altering the current equilibrium.
>
> I think the status quo is good enough for now, and I believe we would be
> better off to learn from another development cycle before tweaking the dust
> limit in any sense.
>
> Antoine
>
> [0]
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002714.html
> [1] https://github.com/lightningnetwork/lightning-rfc/pull/869
>
> On Sun, Aug 8, 2021 at 14:53, Jeremy  wrote:
>
>> We should remove the dust limit from Bitcoin. Five reasons:
>>
>> 1) it's not our business what outputs people want to create
>> 2) dust outputs can be used in various authentication/delegation smart
>> contracts
>> 3) dust sized htlcs in lightning (
>> https://bitcoin.stackexchange.com/questions/46730/can-you-send-amounts-that-would-typically-be-considered-dust-through-the-light)
>> force channels to operate in a semi-trusted mode which has implications
>> (AFAIU) for the regulatory classification of channels in various
>> jurisdictions; agnostic treatment of fund transfers would simplify this
>> (like getting a 0.01 cent dividend check in the mail)
>> 4) thinly divisible colored coin protocols might make use of sats as
>> value markers for transactions.
>> 5) should we ever do confidential transactions we can't prevent it
>> without compromising privacy / allowed transfers
>>
>> The main