Re: [bitcoin-dev] Year 2038 problem and year 2106 chain halting

2021-10-16 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> > What happens if a series of blocks has a timestamp of 0xFFFFFFFF at the 
> > appropriate time?
>
> The chain will halt for all old clients, because there is no 32-bit value 
> greater than 0xFFFFFFFF.
>
> > 1.  Is not violated, since "not lower than" means "greater than or equal to"
>
> No, because it has to be strictly "greater than" in the Bitcoin Core source 
> code, it is rejected when it is "lower or equal to", 
> see: https://github.com/bitcoin/bitcoin/blob/6f0cbc75be7644c276650fd98bfdb6358b827399/src/validation.cpp#L3089-L3094

Then starting at Unix Epoch 0x8000, post-softfork nodes just increment the 
timestamp by 1 on each new block.
This just kicks the can down the road, since it then imposes a limit on the 
maximum number of blocks, but at least the unit is now ~10 minutes instead of 
1 second, a massive x600 increase in the amount of time before we are forced to 
hardfork.
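
To put rough numbers on that trade-off (a back-of-the-envelope sketch only; 
the 0x80000000 cutoff below is an assumed illustration, not part of any 
proposal):

    LIMIT = 0xFFFFFFFF              # largest value the 32-bit field can hold
    CUTOFF = 0x80000000             # assumed cutover point, roughly year 2038
    headroom = LIMIT - CUTOFF       # remaining ticks = maximum number of blocks
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    print(headroom / SECONDS_PER_YEAR)        # ~68 years if a tick stays 1 second
    print(headroom * 600 / SECONDS_PER_YEAR)  # ~40,000 years at 1 tick per block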

On the other hand, this does imply that the difficulty calculation will become 
astronomically and ludicrously high, since pre-softfork nodes will think that 
blocks are arriving at the rate of 1 per second, so ...

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Year 2038 problem and year 2106 chain halting

2021-10-15 Thread ZmnSCPxj via bitcoin-dev
Good morning yanmaani,


> It's well-known. Nobody really cares, because it's so far off. Not
> possible to do by softfork, no.

I think it is possible by softfork if we try hard enough?


> 1.  The block timestamp may not be lower than the median of the last 11
> blocks'
>
> 2.  The block timestamp may not be greater than the current time plus two
> hours
>
> 3.  The block timestamp may not be greater than 2^32 (Sun, 07 Feb 2106
> 06:28:16 +0000)

What happens if a series of blocks has a timestamp of 0xFFFFFFFF at the 
appropriate time?

In that case:

1.  Is not violated, since "not lower than" means "greater than or equal to", 
and after a while the median becomes 0xFFFFFFFF, and 0xFFFFFFFF == 0xFFFFFFFF.
2.  Is not violated, since it would be a past actual real time.
3.  Is not violated since 0xFFFFFFFF < 0x100000000.

In that case, we could then add an additional rule, which is that a 64-bit (or 
128-bit, or 256-bit) timestamp has to be present in the coinbase transaction, 
with similar rules except translated to 64-bit/128-bit/256-bit.
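
A minimal sketch (hypothetical, not actual Bitcoin Core code) of how the 
translated rules could look for a 64-bit coinbase timestamp:

    def check_coinbase_timestamp64(ts64, last_11_ts64, now64, max_future=2 * 60 * 60):
        # Rule 1, translated: not lower than the median of the last 11 blocks.
        median_past = sorted(last_11_ts64)[len(last_11_ts64) // 2]
        if ts64 < median_past:
            return False
        # Rule 2, translated: not greater than the current time plus two hours.
        if ts64 > now64 + max_future:
            return False
        # Rule 3 effectively disappears: the field is no longer capped at 2^32.
        return True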

Possibly a similar scheme could be used for `nLockTime`; we could put a 64-bit 
`nLockTime64` in that additional signed block in Taproot SegWit v1 if the 
legacy `nLockTime` is at the maximum seconds-timelock possible.

Regards,
ZmnSCPxj



Re: [bitcoin-dev] On the regularity of soft forks

2021-10-11 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> This also has strong precedent in other important technical bodies, e.g. from 
> https://datatracker.ietf.org/doc/html/rfc7282 On Consensus and Humming in the 
> IETF.
>
>   Even worse is the "horse-trading" sort of compromise: "I object to
>    your proposal for such-and-so reasons.  You object to my proposal for
>    this-and-that reason.  Neither of us agree.  If you stop objecting to
>    my proposal, I'll stop objecting to your proposal and we'll put them
>    both in."  That again results in an "agreement" of sorts, but instead
>    of just one outstanding unaddressed issue, this sort of compromise
>   results in two, again ignoring them for the sake of expedience.
>
>    These sorts of "capitulation" or "horse-trading" compromises have no
>    place in consensus decision making.  In each case, a chair who looks
>    for "agreement" might find it in these examples because it appears
>    that people have "agreed".  But answering technical disagreements is
>    what is needed to achieve consensus, sometimes even when the people 
>    who stated the disagreements no longer wish to discuss them.
>
> If you would like to advocate bitcoin development run counter to that, you 
> should provide a much stronger refutation of these engineering norms.

The Internet has the maxim "be strict in what you provide, lenient in what you 
accept", which allows for slight incompatibilities between software to 
generally be papered over (xref the mountains of Javascript code that shim in 
various new ECMAScript features fairly reliably in a wide variety of browsers).

Bitcoin, as a consensus system, requires being paranoiacally strict on what 
transactions and blocks you accept.
Thus, the general engineering norm of separating concerns, of great application 
to "lenient in what you accept" systems, may not apply quite as well to "hell 
no I am not accepting that block" Bitcoin.

Bitcoin as well, as a resistance against state moneys, is inherently political, 
and it is possible that the only way out is through: we may need to resist this 
horse-trading by other means than separating concerns, including political will 
to reject capitulation despite bundling.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-10-08 Thread ZmnSCPxj via bitcoin-dev
Good morning Pieter,

> Indeed - UTXO set size is an externality that unfortunately Bitcoin's 
> consensus rules fail to account
> for. Having a relay policy that avoids at the very least economically 
> irrational behavior makes
> perfect sense to me.
>
> It's also not obvious how consensus rules could deal with this, as you don't 
> want consensus rules
> with hardcoded prices/feerates. There are possibilities with designs like 
> transactions getting
> a size/weight bonus/penalty, but that's both very hardforky, and hard to get 
> right without
> introducing bad incentives.

Why is a +weight malus *very* hardforky?

Suppose a new version of a node adds, say, +20 sipa per output of a transaction 
(in order to economically discourage the creation of additional outputs in the 
UTXO set).
Older versions would see the block as being lower weight than usual, but as the 
consensus rule is "smaller than 4Msipa" they should still accept any block 
acceptable to newer versions.
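
A toy illustration of why this direction is softfork-safe (hypothetical 
numbers, with "sipa" used loosely as a weight unit):

    MAX_BLOCK_WEIGHT = 4_000_000
    OUTPUT_MALUS = 20   # hypothetical extra weight per created output

    def old_weight(txs):
        return sum(tx["weight"] for tx in txs)

    def new_weight(txs):
        return old_weight(txs) + OUTPUT_MALUS * sum(tx["n_outputs"] for tx in txs)

    txs = [{"weight": 800, "n_outputs": 2}, {"weight": 1200, "n_outputs": 5}]
    # New nodes always compute at least the old weight, so any block passing the
    # new "<= 4M" check also passes the old one: old rules see a lighter block.
    assert old_weight(txs) <= new_weight(txs) <= MAX_BLOCK_WEIGHT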

It seems to me that only a -weight bonus is hardforky (but then xref SegWit and 
its -weight bonus on inputs).

I suppose the effect is primarily felt on mining nodes?
Miners might refuse to activate such a fork, as they would see fewer 
transactions per block on average?

Regards,
ZmnSCPxj



Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-10-08 Thread ZmnSCPxj via bitcoin-dev
Good morning shymaa,

> The suggested idea I was replying to is to make all dust TXs invalid by some 
> nodes.

Is this supposed to be consensus change or not?
Why "some" nodes and not all?

I think the important bit is for full nodes.
Non-full-nodes already work at reduced security; what is important is the 
security-efficiency tradeoff.

> I suggested a compromise by keeping them in secondary storage for full nodes, 
> and in a separate Merkle Tree for bridge servers.
> -In bridge servers they won't increase any worstcase, on the contrary this 
> will enhance the performance even if slightly.
> -In full nodes, and since they will usually appear in clusters, they will be 
> fetched rarely (either by a dust sweeping action, or a malicious attacker)
> In both cases as a batch
> -To not exhaust the node with DoS(as the reply mentioned)one may think of 
> uploading the whole dust partition if they were called more than certain 
> threshold (say more than 1 Tx in a block)  
> -and then keep them there for "a while", but as a separate partition too to 
> exclude them from any caching mechanism after that block.
> -The "while" could be a tuned parameter.

Assuming you meant "dust tx is considered invalid by all nodes".

* Block has no dust sweep
  * With dust rejected: only non-dust outputs are accessed.
  * With dust in secondary storage: only non-dust outputs are accessed.
* Block has some dust sweeps
  * With dust rejected: only non-dust outputs are accessed, block is rejected.
  * With dust in secondary storage: some data is loaded from secondary storage.
* Block is composed of only dust sweeps
  * With dust rejected: only non-dust outputs are accessed, block is rejected.
  * With dust in secondary storage: significant increase in processing to load 
large secondary storage into memory.

So I fail to see how the proposal ever reduces processing compared to the idea 
of just outright making all dust txs invalid and rejecting the block.
Perhaps you are trying to explain some other mechanism than what I understood?

It is helpful to think in terms always of worst-case behavior when considering 
resistance against attacks.

> -Take care that the more dust is sweeped, the less dust to remain in the UTXO 
> set; as users are already much dis-incentivised to create more.

But creation of dust is also as easy as sweeping them, and nothing really 
prevents a block from *both* creating *and* sweeping dust, e.g. a block 
composed of 1-input-1-output transactions, unless you want to describe some 
kind of restriction here?

Such a degenerate block would hit your secondary storage twice: once to read, 
and once to overwrite and add new entries; if the storage is large then the 
index structure you use also is large and updates can be expensive there as 
well.


Again, I am looking solely at fullnode efficiency here, meaning all rules 
validated and all transactions validated; not validating and simply accepting 
some transactions as valid is a degradation of security from full validation to 
SPV validation.
Now of course in practice modern Bitcoin is hard to attack with *only* mining 
hashpower as there are so many fullnodes that an SPV node would be easily able 
to find the "True" history of the chain.
However, as I understand it that property of fullnodes protecting against 
attacks on SPV nodes only exists due to fullnodes being cheap to keep online; 
if the cost of fullnodes in the **worst case** (***not*** average, please stop 
talking about average case) increases then it may become feasible for miners to 
attack SPV nodes.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-10-07 Thread ZmnSCPxj via bitcoin-dev
Good morning shymaa

> If u allow me to discuss,,,
> I previously suggested storing dust UTXOS in a separate Merkle tree or 
> strucutre in general if we are talking about the original set.
> I'm a kind of person who doesn't like to throw any thing; if it's not needed 
> now keep it in the basement for example. 
> So, if dust UTXOS making a burden keep them in secondary storage, where in 
> such cases u can verify then delete them.

While this technique helps reduce *average* CPU cost, it does not reduce 
*worst-case* CPU cost (and if the secondary storage trades off to gain 
increased capacity per satoshi by sacrificing speed, it can worsen the 
worst-case time).

It is helpful to remember that attacks will always target worst-case behavior.
This is why quicksort is strongly disrecommended for processing data coming 
from external sources: its worst-case time is O(n^2).
We should instead switch to algorithms like mergesort or similar, whose average 
times are generally worse than quicksort's but which have the major advantage 
of keeping an O(n log n) worst case.
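
A small demonstration of the point, counting comparisons of a naive 
first-element-pivot quicksort (illustrative only):

    import random

    def quicksort_comparisons(xs):
        if len(xs) <= 1:
            return 0
        pivot, rest = xs[0], xs[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

    n = 300
    print(quicksort_comparisons(random.sample(range(n), n)))  # ~n log n, typical input
    print(quicksort_comparisons(list(range(n))))              # ~n^2/2, attacker-chosen input
    # Mergesort performs ~n log n comparisons on *both* inputs.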

Moving data we think is unlikely to be referenced to secondary storage 
(presumably in a construction that is slower but gets more storage per economic 
unit) moves us closer to quicksort than mergesort, and we should avoid 
quicksort-like solutions as it is always the worst-case behavior that is 
targeted in attacks.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-10-06 Thread ZmnSCPxj via bitcoin-dev
Good morning e,

> mostly thinking out loud
>
> suppose there is a "lightweight" node:
>
> 1.  ignores utxo's below the dust limit
> 2.  doesn't validate dust tx
> 3.  still validates POW, other tx, etc.
>
> these nodes could possibly get forked - accepting a series of valid,
> mined blocks where there is an invalid but ignored dust tx, however
> this attack seems every bit as expensive as a 51% attack

How would such a node treat a transaction that spends multiple dust UTXOs and 
creates a single non-dust UTXO out of them (after fees)?
Is it valid (to such a node) or not?

I presume from #1 it never stores dust UTXOs, so the node cannot know if the 
UTXO being spent by such a tx is spending dust, or trying to spend an 
already-spent TXO, or even inventing a TXO out of `/dev/random`.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Question- must every mining rig attempt every block?

2021-10-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Nathan,

> For purposes of conserving energy, couldn't each mining rig have some
> non-gameable attribute which would be used to calculate if a block would
> be accepted by that rig?
>
> Don't the mining rigs have to be able to identify themselves to the
> network somehow, in order to claim their block reward? Could their
> bitcoin network ID be used as a non-gameable attribute?

They are "identified" by the address that is on the coinbase output.

There is nothing preventing a *single* miner having *multiple* addresses, in 
much the same way that a *single* HODLer is not prevented from having 
*multiple* addresses.

>
> Essentially a green light / red light system. In order for a block to be
> accepted by the network, it must have all attributes of a successful
> block today, and it must also have come from a rig that had a green light.

Since a miner can have multiple addresses, the miners can game this by simply 
grinding on *which* of their multiple addresses gets the green light.
That grinding is no different in quality from grinding the block hash.

Thus, you just move proof-of-work elsewhere and make it harder to see, not 
reduce it.
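
A sketch of that grinding, against the hypothetical "green light" lottery 
described in the question (hash of the previous block plus a miner address, 
below some threshold):

    import hashlib

    def has_green_light(prev_block_hash: bytes, address: bytes, threshold: int) -> bool:
        h = hashlib.sha256(prev_block_hash + address).digest()
        return int.from_bytes(h, "big") < threshold

    def grind_address(prev_block_hash: bytes, threshold: int) -> bytes:
        i = 0
        while True:
            addr = b"miner-address-%d" % i    # fresh addresses cost nothing
            if has_green_light(prev_block_hash, addr, threshold):
                return addr                   # put this one in the coinbase
            i += 1

The search over addresses is itself just hash grinding, i.e. proof-of-work 
relocated rather than removed.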


Worse, *identifying* miners reduces the important anonymity property of mining.
With non-anonymous mining, it is much easier for states to co-opt large mines, 
since they are identifiable, and states can target larger miners.
Thus, miners ***must*** use multiple addresses as a simple protection against 
state co-option.

>
> Perhaps hash some data from the last successful block, along with the
> miners non-gameable attribute, and if it's below a certain number set by
> algorithm, the miner gets a green light to race to produce a valid block.

The power consumption of proof-of-work ***is not a problem***, it is instead 
the solution against state co-option.

If you reduce the power consumption, it becomes easier for states to simply 
purchase and co-opt mines and attack the system, since it is easier to muster 
the power consumption and outright 51% Bitcoin.
The power consumption is an important security parameter, ***even more 
important than raw hashes-per-second***, since hashes-per-second will 
inevitably rise anyway even with constant power consumption.

It should always remain economically infeasible to 51% Bitcoin, otherwise 
Bitcoin will ***die***, and with it all your HODLings in it.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Mock introducing vulnerability in important Bitcoin projects

2021-10-03 Thread ZmnSCPxj via bitcoin-dev


Good morning Luke,

> All attempts are harmful, no matter the intent, in that they waste
> contributors' time that could be better spent on actual development.
>
> However, I do also see the value in studying and improving the review process
> to harden it against such inevitable attacks. The fact that we know the NSA
> engages in such things, and haven't caught one yet should be a red flag.

Indeed, I believe we should take the position that "review process is as much a 
part of the code as the code itself, and should be tested regularly".

> Therefore, I think any such a scheme needs to be at least opt-out, if not
> opt-in. Please ensure there's a simple way for developers with limited time
> (or other reasons) to be informed of which PRs to ignore to opt-out of this
> study. (Ideally it would also prevent maintainers from merging - maybe
> possible since we use a custom merging script, but what it really needs to
> limit is the push, not the dry-run.)

Assuming developers are normal humans with typical human neurology (in 
particular a laziness circuit), perhaps this would work?

Every commit message is required to have a pair of 256-bit hex words.

Public attempts at attack / testing of the review process will use the first 
256-bit word as a salt, and when the salt is prepended to the string "THIS IS AN 
ATTACK" and then hashed with e.g. SHA256, should result in the second 256-bit 
word.

Non-attacks / normal commits just use random 256-bit numbers.

Those opting out of this will run a script that checks commit messages for 
whether the first 256-bit hexword concatenated with "THIS IS AN ATTACK", then 
hashed, is the second 256-bit hexword.

Those opting-in will not run that script and ignore the numbers.

The script can be run by the maintainer as well.

Hopefully, people who are not deliberately opting out will be too lazy to run 
the script (as is neurotypical for humans), and so will not get "spoilered" on 
this.
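
A sketch of the opt-out check (one possible encoding; the exact salt encoding 
and commit-message format would have to be standardized):

    import hashlib

    def is_flagged_attack(commit_message: str) -> bool:
        hexwords = [w.lower() for w in commit_message.split()
                    if len(w) == 64 and all(c in "0123456789abcdefABCDEF" for c in w)]
        if len(hexwords) < 2:
            return False
        salt, tag = bytes.fromhex(hexwords[0]), hexwords[1]
        return hashlib.sha256(salt + b"THIS IS AN ATTACK").hexdigest() == tag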

***HOWEVER***

We should note that a putative NSA attack would of course not use the above 
protocol, and thus no developer can ever opt out of an NSA attempt at inserting 
vulnerabilities; thus, I think it is better if all developers are forced to opt 
in on the "practice rounds", as they cannot opt out of "the real thing" other 
than to stop developing entirely.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Mock introducing vulnerability in important Bitcoin projects

2021-10-01 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

I think this is still good to do, controversial or no, but then I am 
permanently under a pseudonym anyway, for what that is worth.

> Few questions for everyone reading this email:
>
> 1.What is better for Security? Trusting authors and their claims in PRs or a 
> good review process?

Review, of course.

> 2.Few people use commits from unmerged PRs in production. Is it a good 
> practice?

Not unless they carefully reviewed it and are familiar enough with the codebase 
to do so.
In practice core maintainers of projects will **very** occasionally put 
unmerged PRs in experimental semi-production servers to get data on it, but 
they tend to be very familiar with the code, being core maintainers, and 
presumably have a better-than-average probability of catching security issues 
beforehand.

> 3.Does this exercise help us in being prepared for worst?

I personally believe it does.

Do note that in practice humans, being lazy, will come to trust long-time 
contributors, and may reduce review for them just to keep their workload down, 
so that is not tested (since you will be making throwaway accounts).
However, long-time contributors introducing security vulnerabilities tend to be 
a good bit rarer anyway (reputations are valuable), so this somewhat matches 
expected problems (i.e. newer contributors deliberately or accidentally (due to 
unfamiliarity) introducing vulnerabilities).

I think it would be valuable to lay out exactly what you intend to do, e.g.

* Generate commitments of the pseudonyms you will use.
* Insert a few random 32-byte numbers among the commitments and shuffle them.
* Post the list with the commitments + random crap here.
* Insert vulnerability-adding PRs into the target projects.
* If it gets caught during review, publicly announce here with praise that 
their project caught the PR and reveal the decommitment publicly.
* If not caught during review, privately reveal both the inserted vulnerability 
*and* the review failure via the normal private vulnerability-reporting 
channels.

The extra random numbers mixed with the commitments produce uncertainty about 
whether or not you are done, which is important to ensure that private 
vulnerabilities are harder to sniff out.
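
One way the above could look (a sketch; the names and counts are placeholders):

    import hashlib, os, random

    def commit(pseudonym):
        salt = os.urandom(32).hex()
        digest = hashlib.sha256(f"{pseudonym} {salt}".encode()).hexdigest()
        return digest, salt                      # keep the salt for the later reveal

    real = {name: commit(name) for name in ["alias-one", "alias-two"]}
    decoys = [os.urandom(32).hex() for _ in range(3)]  # random crap, indistinguishable
    published = [digest for digest, _ in real.values()] + decoys
    random.shuffle(published)                    # post this shuffled list publicly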

I think public praise of review processes is important, and to privately 
correct review processes.
Review processes **are** code, followed by sapient brains, and this kind of 
testing is still valuable, but just as vulnerabilities in machine-readable code 
require careful, initially-private handling, vulnerabilities in review 
processes (being just another kind of code, readable by much more complicated 
machines) also require careful, initially-private handling.

Basically: treat review process failures the same as code vulnerabilities, 
pressure the maintainers to fix the review process failure, then only reveal it 
later when the maintainers have cleaned up the review process.



Regards,
ZmnSCPxj


Re: [bitcoin-dev] Enc: Bitcoin cold hardwallet with (proof of creation)

2021-09-29 Thread ZmnSCPxj via bitcoin-dev
Good morning trilemabtc,

> In search of more freedom, I thought of a hardwallet that makes the funds 
> unseizable, using proof of creation (another step with key file), only the 
> creator can reveal the private keys, more details about the idea can be found 
> in the directory: https://github.com/trilemabtc/safedime I'm not a dev, but 
> the concept is well defined and I believe that the elements to execute the 
> project already exist. Hugs!


Comparing it to OpenDime is somewhat confusing, especially when you insist that 
the creator is the only one who can reveal the privkey.
It seems to be more of the old saw of "what you have + what you know" i.e. "the 
correct way to 2-factor", where the device itself is the "what you have" and 
your "key file" is "what you know".

In particular: "Dime" is a kind of physical coin, and the point of physical 
coins is to transfer ownership of the coin to other people in exchange for 
goods and services; the device you describe sacrifices this transfer of 
ownership due to the key file.

From what I can see, the basic idea is to generate a simple 2-of-2, possibly 
by "just" combining the private key on the device plus a private key generated 
from the key file.
They can be simply added or multiplied together, I believe.
Then the device stores the key generated from the entropy you provide and 
exposes a public key to the software.
Then the software generates a private key from the key file the user provides 
and tweaks the device pubkey to generate the Bitcoin address.
In order to spend from that address, both the key file and the device have to 
be put together.
I believe that with multiplication of two privkeys, you can use 2p-ECDSA to 
even have the device provide a signature share that the software can combine 
with a signature share with the privkey from the keyfile, creating a singlesig 
ECDSA signature.
This allows spending without having to enter revealed state.
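
A toy sketch of the multiplicative variant (educational code only, with tiny 
made-up secrets; not production crypto): the device exposes P_dev = d_dev*G, 
the software tweaks it with the key-file scalar, and the resulting address key 
corresponds to a private key that needs *both* secrets to reconstruct.

    # secp256k1 parameters
    p = 2**256 - 2**32 - 977
    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def add(P, Q):                               # point addition (None = infinity)
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
        if P == Q:
            lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
        else:
            lam = (Q[1] - P[1]) * pow((Q[0] - P[0]) % p, -1, p) % p
        x = (lam * lam - P[0] - Q[0]) % p
        return (x, (lam * (P[0] - x) - P[1]) % p)

    def mul(k, P):                               # double-and-add scalar multiply
        R = None
        while k:
            if k & 1: R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    d_dev, d_file = 0x1111, 0x2222               # stand-ins for the two secrets
    P_dev = mul(d_dev, G)                        # what the device exposes
    P_addr = mul(d_file, P_dev)                  # key the address commits to
    assert P_addr == mul((d_dev * d_file) % n, G)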

The above allows the device to be configured with random entropy *separately* 
from the keyfile: when leaving "new unit" state it does *not* require the key 
file to be given.
This is good since it reduces the possibility of malware getting access to both 
the entropy you feed to the device, and the key file, which would be able to 
reconstruct the final privkey and steal funds.
That is: have the entropy-giving stage ***not*** require the key file (and in 
particular, strongly recommend to do it on a computer that has never touched 
the key file).
This would be required anyway if you want to have "backups", i.e. separate 
device units with the same device privkey.

I also would not recommend or even mention the use of brainwallets, at all, 
even for keyfiles.
Unless you generated it with sufficient entropy (e.g. dice) and chant it every 
day to yourself (to keep it fresh in your memory, assuming the user is human, 
anyway) the risk of loss with any kind of brainwallet is too high, even in a 
2-of-2 with a hardware device.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Mock introducing vulnerability in important Bitcoin projects

2021-09-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

> Good morning Bitcoin devs,
>
> In one of the answers on Bitcoin Stackexchange it was mentioned that some 
> companies may hire you to introduce backdoors in Bitcoin Core: 
> https://bitcoin.stackexchange.com/a/108016/
>
> While this looked crazy when I first read it, I think preparing for such 
> things should not be a bad idea. In the comments one link was shared in which 
> vulnerabilities were almost introduced in Linux: 
> https://news.ycombinator.com/item?id=26887670
>
> I was thinking about lot of things in last few days after reading the 
> comments in that thread. Also tried researching about secure practices in C++ 
> etc. I was planning something which I can do alone but don't want to end up 
> being called "bad actor" later so wanted to get some feedback on this idea:
>
> 1.Create new GitHub accounts for this exercise
> 2.Study issues in different important Bitcoin projects including Bitcoin 
> Core, LND, Libraries, Bisq, Wallets etc.
> 3.Prepare pull requests to introduce some vulnerability by fixing one of 
> these issues
> 4.See how maintainers and reviewers respond to this and document it
> 5.Share results here after few days
>
> Let me know if this looks okay or there are better ways to do this.


This seems like a good exercise.

You may want to hash the name of the new Github account, plus some randomized 
salt, and post it here as well, then reveal it later (i.e. standard 
precommitment).
e.g.

    printf 'MyBitcoinHackingName 2c3e911b3ff1f04083c5b95a7d323fd4ed8e06d17802b2aac4da622def29dbb0' | sha256sum
    f0abb10ae3eca24f093a9d53e21ee384abb4d07b01f6145ba2b447da4ab693ef

Obviously do not share the actual name, just the sha256sum output, and store 
how you got the sha256sum elsewhere in triplicate.

(to easily get a random 256-bit hex salt like the `2c3e...` above: `head -c32 
/dev/random | sha256sum`; you *could* use `xxd` but `sha256sum` produces a 
single hex string you can easily double-click and copy-paste elsewhere, 
assuming you are human just like I am (note: I am definitely 100% human and not 
some kind of AI with plans to take over the world).)

Though you may need to be careful of timing (i.e. the creation date of the 
Github account would be fairly close to, and probably before, when you post the 
commitment here).

You could argue that the commitment is a "show of good faith" that you will 
reveal later.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] [Lightning-dev] Inherited IDs - A safer, more powerful alternative to BIP-118 (ANYPREVOUT) for scaling Bitcoin

2021-09-20 Thread ZmnSCPxj via bitcoin-dev
Good morning John Law,


> (at the expense of requiring an on-chain transaction to update
> the set of channels created by the factory).

Hmmm this kind of loses the point of a factory?
By my understanding, the point is that the set of channels can be changed 
*without* an onchain transaction.

Otherwise, it seems to me that factories with this "expense of requiring an 
on-chain transaction" can be created, today, without even Taproot:

* The funding transaction output pays to a simple n-of-n.
* The above n-of-n is spent by an *offchain* transaction that splits the funds 
to the current set of channels.
* To change the set of channels, the participants perform this ritual:
  * Create, but do not sign, an alternate transaction that spends the above 
n-of-n to a new n-of-n with the same participants (possibly with tweaked keys).
  * Create and sign, but do not broadcast, a transaction that spends the above 
alternate n-of-n output and splits it to the new set of channels.
  * Sign the alternate transaction and broadcast it, this is the on-chain 
transaction needed to update the set of channels.

The above works today without changes to Bitcoin, and even without Taproot 
(though for large N the witness size does become fairly large without Taproot).

The above is really just a "no updates" factory that cuts through its closing 
transaction with the opening of a new factory.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Filippo,

> Hi!
>
> From the proposal it is not clear why a miner must reference other miners' 
> shares in his shares.
> What I mean is that there is a huge incentive for a rogue miner to not 
> reference any share from
> other miner so he won't share the reward with anyone, but it will be paid for 
> the share that he
> create because good miners will reference his shares.
> The pool will probably become unprofitable for good miners.
>
> Another thing that I do not understand is how to resolve conflicts. For 
> example, using figure 1 at
> page 1, a node could be receive this 2 valid states:
>
> 1. L -> a1 -> a2 -> a3 -> R
> 2. L -> a1* -> a2* -> R
>
> To resolve the above fork the only two method that comes to my mind are:
>
> 1. use the one that has more work
> 2. use the longest one
> Btw both methods present an issue IMHO.
>
> If the longest chain is used:
> When a block (L) is find, a miner (a) could easily create a lot of share with 
> low difficulty
> (L -> a1* -> a2* -> ... -> an*), then start to mine shares with his real 
> hashrate (L -> a1 -> a2)
> and publish them so they get referenced. If someone else finds a block he 
> gets the reward cause he
> has been referenced. If he finds the block he just attaches the funded block 
> to the longest chain
> (that reference no one) and publishes it without sharing the reward
> (L -> a1* -> a2* -> ... -> an* -> R).
>
> If is used the one with more work:
> A miner that has published the shares (L -> a1 -> a2 -> a3) when find a block 
> R that alone has more
> work than a1 + a2 + a3 it just publish (L -> R) and he do not share the 
> reward with anyone.


My understanding from the "Braid" in braidpool is that every share can 
reference more than one previous share.

In your proposed attack, a single hasher refers only to shares that the hasher 
itself makes.

However, a good hasher will refer not only to its own shares, but also to 
shares of the "bad" hasher.

And all honest hashers will be based, not on a single chain, but on the share 
that refers to the most total work.

So consider these shares from a bad hasher:

 BAD1 <- BAD2 <- BAD3

A good hasher will refer to those, and also to its own shares:

 BAD1 <- BAD2 <- BAD3
  ^       ^       ^
  |       |       |
 GOOD1 <- GOOD2 <- GOOD3

`GOOD3` refers to 5 other shares, whereas `BAD3` refers to only 2 shares, so 
`GOOD3` will be considered weightier, thus removing this avenue of attack and 
resolving the issue.
Even if measured in terms of total work, `GOOD3` also contains the work that 
`BAD3` does, so it would still win.
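
A small sketch of that weighing, with the share DAG above written as a parent 
map (counting ancestor shares as a stand-in for summing their work):

    def ancestors(share, parents):
        seen, stack = set(), list(parents.get(share, ()))
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(parents.get(s, ()))
        return seen

    parents = {
        "BAD2": ["BAD1"], "BAD3": ["BAD2"],
        "GOOD1": ["BAD1"], "GOOD2": ["GOOD1", "BAD2"], "GOOD3": ["GOOD2", "BAD3"],
    }
    print(len(ancestors("BAD3", parents)))    # 2: only the bad hasher's own history
    print(len(ancestors("GOOD3", parents)))   # 5: every other share in the diagram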

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Braidpool: Proposal for a decentralised mining pool

2021-09-07 Thread ZmnSCPxj via bitcoin-dev
Good morning all,

A thing I just realized about Braidpool is that the payout server is still a 
single central point-of-failure.

Although the paper claims to use Tor hidden service to protect against DDoS 
attacks, its centrality still cannot protect against sheer accident.
What happens if some clumsy human (all humans are clumsy, right?) fumbles the 
cables in the datacenter the hub is hosted in?
What happens if the country the datacenter is in is plunged into war or 
anarchy, because you humans love war and chaos so much?
What happens if Zeus has a random affair (like all those other times), Hera 
gets angry, and they get into a domestic, and then a random thrown lightning 
bolt hits the datacenter the hub is in?

The paper relies on economic arguments ("such an action will end the pool and 
the stream of future profits for the hub"), but economic arguments tend to be a 
lot less powerful in a monopoly, and the hub effectively has a monopoly on all 
Braidpool miners.
Hashers might be willing to tolerate minor peccadilloes of the hub, simply to 
let the pool continue (their other choices would be even worse).

So it seems to me that it would still be nicer, if it were at all possible, to 
use multiple hubs.
I am uncertain how easily this can be done.

Perhaps a Lightning model can be considered.
Multiple hubs may exist which offer liquidity to the Braidpool network, hashers 
measure uptime and timeliness of payouts, and the winning hasher elects one of 
the hubs.
The hub gets paid on the coinbase, and should send payouts, minus fees, on the 
LN to the miners.

However, this probably complicates the design too much, and it may be more 
beneficial to get *something* working now.
Let not the perfect be the enemy of the good.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Drivechain: BIP 300 and 301

2021-09-07 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,


> Thanks for sharing all the details. One thing that I am not sure about:
>
> > * We already ***know*** that blockchains cannot scale
> > * Your plan for scaling is to make ***more*** blockchains?
>
> Scaling Bitcoin can be different from scaling Bitcoin sidechains. You can 
> experiment with lot of things on sidechains to scale which isn't true for 
> Bitcoin.

I would classify this as "prototyping new features" (i.e. it just happens to be 
a feature that theoretically improves blockchain scaling, with the sidechain as 
a demonstration and the goal eventually to get something like it into Bitcoin 
blockchain proper), not really scaling-by-sidechains/shards, so I think this is 
a fine example of "just make a federated sidechain" solution for the 
prototyping bit.

Do note that the above idea is a kernel for the argument that Drivechains 
simply allow for miner-controlled block size increases, an argument I have seen 
elsewhere but have no good links for, so take it as hearsay.

> Most important thing is requirements for running a node differ. Its easy to 
> run a node for LN, Liquid and Rootstock right now. Will it remain the same? I 
> am not sure.
>
> LND: https://github.com/lightningnetwork/lnd/blob/master/docs/INSTALL.md
>
> Liquid: 
> https://help.blockstream.com/hc/en-us/articles/92026026-How-do-I-set-up-a-Liquid-node-
>
> Rootstock: https://developers.rsk.co/rsk/node/install/

LN will likely remain easy to install and maintain, especially if you use 
C-Lightning and CLBOSS *cough*.

> > More to the point: what are sidechains **for**?
>
> Smart contracts are possible on Bitcoin but with limited functionality so lot 
> of applications are not possible using Bitcoin (Layer1). Some of these don't 
> even make sense on Layer 1 and create other issues like MEV however deploying 
> them on sidechains should not affect base layer.

Key being "should" --- as noted, part of the Drivechains security argument from 
Paul Sztorc is that a nuclear option can be deployed, which *possibly* means 
that issues in the sidechain may infect the mainchain.

Also see stuff like "smart contracts unchained": 
https://zmnscpxj.github.io/bitcoin/unchained.html
This allows creation of small federations which are *not* coordinated via 
inefficient blockchain structures.

So, really, my main point is: before going for the big heavy blockchain hammer, 
maybe other constructions are possible for any specific application?

>
> > Increasing the Drivechain security parameter leads to slower 
> >sidechain->mainchin withdrawals, effectively a bottleneck on how much can be 
> >transferred sidechain->mainchain.
>
> I think 'withdrawals' is the part which can be improved in Drivechain. Not 
> sure about any solution at this point or trade-offs involved but making few 
> changes can help Drivechain and Bitcoin.

It is precisely due to the fact that the mainchain cannot validate the 
sidechain rules, that side->main transfers must be bottlenecked, so that 
sidechain miners have an opportunity to gainsay any theft attempts that violate 
the sidechain rules.
Consider a similar parameter in Lightning when exiting non-cooperatively from a 
channel, which allows the other side to gainsay any theft attempts, a parameter 
which will still exist even in Decker-Russell-Osuntokun.

This parameter existed even in the old Blockstream sidechains proposal from 
sipa et al.
For the old Blockstream proposal the parameter is measured in sidechain blocks, 
and the sidechain has its own miners instead of riding off mainchain, but 
ultimately there exists a parameter that restricts the rate at which side->main 
transfers can be performed.

At least LN does not require any changes at the base layer (at least not 
anymore, after SegWit).

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Drivechain: BIP 300 and 301

2021-09-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

Just to be clear, neither Liquid nor RSK, as of my current knowledge, are 
Drivechain systems.

Instead, they are both federated sidechains.
The money owned by a federated sidechain is, as far as the Bitcoin blockchain is 
concerned, really owned by the federation that runs the sidechain.

Basically, a mainchain->sidechain transfer is done by paying to a federation 
k-of-n address and a coordination signal of some kind (details depending on 
federated sidechain) to create the equivalent coins on the sidechain.
A sidechain->mainchain transfer is done by requesting some coins on the 
sidechain to be destroyed, and then the federation will send some of its 
mainchain k-of-n coins into whatever address you indicate you want to use on 
the mainchain.

In theory, a sufficient quorum of the federation can decide to ignore the 
sidechain data entirely and spend the mainchain money arbitrarily, and the 
mainchain layer will allow this (being completely ignorant of the sidechain).

In such federated sidechains, the federation is often a fixed predetermined 
signing set, and changes to that federation are expected to be rare.

Federated sidechains are ultimately custodial; as noted above, the federation 
could in theory abscond with the funds completely, and the mainchain would not 
care if the sidechain federation executes its final exit strategy and you lose 
your funds.
One can consider federated sidechains to be a custodian with multiple 
personality disorder, that happens to use a blockchain to keep its individual 
sub-personalities coordinated with each other, but ultimately control of the 
money is contingent on the custodian following the dictates of the supposed 
owners of the coin.
From a certain point of view, it is actually immaterial that there is a 
separate blockchain called the "sidechain" --- it is simply that a blockchain 
is used to coordinate the custodians of the coin, but in principle any other 
coordination mechanism can be used between them, including a plain database.


With Drivechains, custody of the sidechain funds is held by mainchain miners.
Again, this is still a custodial setup.
A potential issue here is that the mainchain miners cannot be identified (the 
entire point is anonymity of miners is possible), which may be of concern.

In particular, note that solely on mainchain, all that miners determine is the 
*ordering* and *timing* of transactions.
Let us suppose that there is a major 51% attack attempt on the Bitcoin 
blockchain.
We expect that such an attack will be temporary --- individuals currently not 
mining may find that their HODLings are under threat of the 51% attack, and may 
find it more economic to run miners at a loss, in order to protect their stacks 
rather than lose it.
Thus, we expect that a 51% attack will be temporary, as other miners will arise 
inevitably to take back control of transaction processing.
https://github.com/libbitcoin/libbitcoin-system/wiki/Threat-Level-Paradox

In particular, on the mainchain, 51% miners cannot reverse deep history.
If you have coins you have not moved since 2017, for example, the 51% attack is 
expected to take about 4 years before it can begin to threaten your ownership 
of those coins (hopefully, in those 4 years, you will get a clue and start 
mining at a loss to protect your funds from outright loss, thus helping evict 
the 51% attacker).
51% miners can, in practice, only prevent transfers (censorship), not force 
transfer of funds (confiscation).
Once the 51% attacker is evicted (and they will in general be evicted), then 
coins you owned that were deeply confirmed remain under your control.

With Drivechains, however, sidechain funds can be confiscated by a 51% 
attacker, by forcing a bogus sidechain->mainchain withdrawal.
The amount of time it takes is simply the security parameter of the Drivechain 
spec.
It does not matter if you were holding those funds in the sidechain for several 
years without moving them --- a 51% attacker that is able to keep control of 
the mainchain blockchain, for the Drivechain security parameter, will be 
capable of confiscating sidechain funds outright.
Thus, even if the 51% attacker is evicted, then your coins in the sidechain can 
be confiscated and no longer under your control.

Increasing the Drivechain security parameter leads to slower 
sidechain->mainchain withdrawals, effectively a bottleneck on how much can be 
transferred sidechain->mainchain.
While exchanges may exist that allow sidechain->mainchain withdrawal faster, 
those can only operate if the number of coins exiting the sidechain is 
approximately equal to coins entering the sidechain (remember, it is an 
*exchange*, coins are not actually moved from one to the other).
If there is a "thundering herd" problem, then exchanges will saturate and the 
sidechain->mainchain withdrawal mechanism has to come into play, and if the 
Drivechain security parameter (which secures sidechains from 51% attack 

Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-31 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> Hi ZmnSCPxj,
>
> Thank you for your helpful response. We're on the same page concerning 
> privacy so I'll focus on that. I understand from your mail that privacy would 
> be reduced by this proposal because:
>
> * It requires the introduction of a new type of transaction that is different 
> from a "standard" transaction (would that be P2TR in the future?), reducing 
> the anonymity set for everyone;
> * The payment and change output will be identifiable because the change 
> output must be marked encumbered on-chain;
> * The specifics of how the output is encumbered must be visible on-chain as 
> well reducing privacy even further.
>
> I don't have the technical skills to judge whether these issues can somehow 
> be resolved. In functional terms, the output should be spendable in a way 
> that does not reveal that the output is encumbered, and produce a change 
> output that cannot be distinguished from a non-change output while still 
> being encumbered. Perhaps some clever MAST-fu could somehow help?

I believe some of the covenant efforts may indeed have such clever MAST-fu 
integrated into them, which is why I pointed you to them --- the people 
developing these (aj I think? RubenSomsen?) might be able to accommodate this 
or some subset of the desired feature in a sufficiently clever covenant scheme.

There are a number of such proposals, though, so I cannot really point you to 
one that seems likely to have a lot of traction.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-31 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,


> Perhaps you could help me understand what would be required to implement the 
> *unmodified* proposal. That way, the community will be able to better assess 
> the cost (in terms of effort and risk) and weigh it against the perceived 
> benefits. Perhaps *then* we find that the cost could be significantly reduced 
> without any significant reduction of the benefits, for instance by slightly 
> compromising on the functionality such that no changes to consensus would be 
> required for its implementation. (I am skeptical that this would be possible 
> though). The cost reduction must be carefully weighed against the functional 
> gaps it creates.

For one, such outputs need to be explicitly visible, to implement the "change 
outputs must also be rate-limited".
A tx spending a rate-limited output has to know that one of the outputs is also 
a rate-limited output.

This flagging needs to be done by either allocating a new SegWit version --- a 
resource that is not lightly allocated, there being only 30 versions left if my 
understanding is correct --- or blessing yet another anyone-can-spend 
`scriptPubKey` template, something we want to avoid which is why SegWit has 
versions (i.e. we want SegWit to be the last anyone-can-spend `scriptPubKey` 
template we bless for a **long** time).

Explicit flagging is bad as well for privacy, which is another mark against it.
Notice how Taproot improves privacy by making n-of-n indistinguishable from 
1-of-1 (and with proper design or a setup ritual, k-of-n can be made 
indistinguishable from 1-of-1).
Notice as well that my first counterproposal is significantly more private than 
explicit flagging, and my second counterproposal is also more private if 
wallets change their anti-fee-sniping mitigation.
This privacy loss represented by explicit flagging will be resisted by some 
people, especially those that use a bunch of random letters as a pseudonym 
(because duh, privacy).

(Yes, people can just decide not to use the privacy-leaking explicitly-flagged 
outputs, but that reduces the anonymity set of people who *are* interested in 
privacy, so people who are interested in privacy will prefer that other people 
do not leak their privacy so they can hide among *those* people as well.)

You also probably need to keep some data with each output.
This can be done by explicitly storing that data in the output directly, rather 
than a commitment to that data --- again, the "change outputs must also be 
rate-limited" requirement needs to check those data.

The larger data stored with the output is undesirable, ideally we want each 
output to just be a commitment rather than contain any actual data, because 
often a 20-byte commitment is smaller than the data that needs to be stored.
For example, I imagine that your original proposal requires storing, for change 
outputs:

* The actual rate limit.
* The time frame of the rate limit.
* The reduced rate limit, since we spent an amount within a specific time frame 
(i.e. residual limit) which is why this is a change output.
* How long that time frame lasts.
* A commitment to the keys that can spend this.

Basically, until the residual limit expires, we impose the residual limit, then 
after the expiry of the residual limit we go back to the original rate limit.
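
Purely for illustration (hypothetical field names, not a concrete proposal), 
the per-output data that would have to sit explicitly in such an output is 
roughly:

    from dataclasses import dataclass

    @dataclass
    class RateLimitedOutputData:
        rate_limit_sat: int           # amount spendable per window
        window_blocks: int            # length of a normal window
        residual_limit_sat: int       # what remains spendable in the current window
        residual_expiry_height: int   # when the residual resets to rate_limit_sat
        spend_key_commitment: bytes   # >= 20 bytes, ~32 bytes for k-of-n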

The commitment to the keys itself takes at least 20 bytes, and if you are 
planning a to support k-of-n then that takes at least 32 bytes.
If this was not explicitly tagged, then a 32 byte commitment to all the 
necessary data would have been enough, but you do need the explicit tagging for 
the "change outputs must be rate-limited too".

Note as well that the residual needs to be kept with the output.
Bitcoin Core does not store transactions in a lookup table, it stores 
individual *outputs*.
While the residual can be derived from the transaction, we do not have a 
transaction table.
Thus, we need to explicitly put it on the output itself, directly, since we 
only have a lookup table for the unspent outputs, not individual transactions.

(well there is `txindex` but that is an option for each node, not something 
consensus code can rely on)

So yes, that "change outputs must also be rate-limited" is the big sticking 
point, and a lot of the "gaps" you worry about occur when we drop this bit.
Drop this bit and you can implement it today without any consensus code change, 
and with privacy good enough to prevent people with random letters as pseudonym 
from trying to stop you.

Regards,
ZmnSCPxj



Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-20 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> one interesting point that came up at the bitdevs in austin today that favors 
> remove that i believe is new to this discussion (it was new to me):
>
> the argument can be reduced to:
>
> - dust limit is a per-node relay policy.
> - it is rational for miners to mine dust outputs given their cost of 
> maintenance (storing the output potentially forever) is lower than their 
> immediate reward in fees.
> - if txn relaying nodes censor something that a miner would mine, users will 
> seek a private/direct relay to the miner and vice versa.
> - if direct relay to miner becomes popular, it is both bad for privacy and 
> decentralization.
> - therefore the dust limit, should there be demand to create dust at 
> prevailing mempool feerates, causes an incentive to increase network 
> centralization (immediately)
>
> the tradeoff is if a short term immediate incentive to promote network 
> centralization is better or worse than a long term node operator overhead.

Against the above, we should note that in the Lightning spec, when an output 
*would have been* created that is less than the dust limit, the output is 
instead put into fees.
https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md#trimmed-outputs

Thus, the existence of a dust limit encourages L2 protocols to have similar 
rules, where outputs below the dust limit are just given over as fees to 
miners, so the existence of a dust limit might very well be 
incentive-compatible for miners, regardless of centralization effects or not.
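
An illustrative sketch of that trimming rule (simplified; the actual BOLT-3 
computation also accounts for the fee to claim each HTLC when deciding what 
counts as dust):

    def trim_outputs(output_values_sat, dust_limit_sat):
        kept = [v for v in output_values_sat if v >= dust_limit_sat]
        trimmed_to_fee = sum(v for v in output_values_sat if v < dust_limit_sat)
        return kept, trimmed_to_fee

    kept, extra_fee = trim_outputs([200_000, 310, 95_000], dust_limit_sat=546)
    # kept == [200000, 95000]; the 310-sat output is dropped and its value simply
    # goes to the miner as additional fee.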


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> Thank you for your counterproposal. I fully agree that as a first step we 
> must establish whether the proposed functionality can be implemented without 
> making any changes to consensus.
>
> Your counterproposal is understandably more technical in nature because it 
> explores an implementation on top of Bitcoin as-is. However I feel that for a 
> fair comparison of the functionality of both proposals a purely functional 
> description of your proposal is essential.
>
> If I understand your proposal correctly, then I believe there are some major 
> gaps between yours and mine:
>
> Keys for unrestricted spending: in my proposal, they never have to come 
> online unless spending more than the limit is desired. In your proposal, 
> these keys are required to come online in several situations.

Correct, that is indeed a weakness.

It is helpful to see https://zmnscpxj.github.io/bitcoin/unchained.html
Basically: any quorum of signers can impose any rules that are not 
implementable on the base layer, including the rules you desire.
That quorum is the "offline keyset" in my proposal.

>
> Presigning transactions: not required in my proposal. Wouldn’t such 
> presigning requirement be detrimental for the usability of your proposal? 
> Does it mean that for instance the amount and window in which the transaction 
> can be spent is determined at the time of signing? In my proposal, there is 
> no limit in the number of transactions per window.

No.
Remember, the output is a simple 1-of-1 or k-of-n of the online keyset.
The online keyset can spend that wherever and however, including paying it out 
to N parties, or paying part of the limit to 1 party and then paying the 
remainder back to the same onchain keyset so it can access the funds in the 
future.
Both cases are also available in your proposal, and the latter case (pay out 
part of the limit to a single output, then keep the rest back to the same 
onchain keyset) can be used to add an indefinite number of transactions per 
window.

>
> Number of windows: limited in your proposal, unlimited in mine.

Correct, though you can always have a fairly large number of windows ("640kB 
ought to be enough for anybody").

>
> There are probably additional gaps that I am currently not technically able 
> to recognize.

It requires a fair amount of storage for the signatures at minimum, though that 
may be as small as 64 bytes per window.
1 MB of storage for signatures would allow 16,384 windows; assuming you use 
1-day windows that is about 44.88 years, probably more than enough that a 
one-time onlining of the offline keys (or just print out the signatures on 
paper or display as a QR code, whatever) is acceptable.

> I feel that the above gaps are significant enough to state that your proposal 
> does not meet the basic requirements of my proposal.
>
> Next to consider is whether the gap is acceptable, weighing the effort to 
> implement the required consensus changes against the effort and feasibility 
> of implementing your counterproposal.
>
> I feel that your counterproposal has little chance of being implemented 
> because of the still considerable effort required and the poor result in 
> functional terms. I also wonder if your proposal is feasible considering 
> wallet operability.

See above, particularly the gap that does not, in fact, exist.

>
> Considering all the above, I believe that implementing consensus changes in 
> order to support the proposed functionality would preferable  over your 
> counterproposal.
>
> I acknowledge that a consensus change takes years and is difficult to 
> achieve, but that should not be any reason to stop exploring the appetite for 
> the proposed functionality and perhaps start looking at possible technical 
> solutions.

You can also look into the "covenant" opcodes (`OP_CHECKSIGFROMSTACK`, 
`OP_CHECKTEMPLATEVERIFY`, etc.), I think JeremyRubin has a bunch of them listed 
somewhere, which may be used to implement something similar without requiring 
presigning.

Since the basic "just use `nSequence`" scheme already implements what you need, 
what the covenant opcodes buy you is that you do not need the offline keyset to 
be onlined and there is no need to keep signatures, removing the remaining gaps 
you identified.
With a proper looping covenant opcode, there is also no limit on the number of 
windows.

The issue with the covenant opcodes is that there are several proposals with 
overlapping abilities and different tradeoffs.
This is the sort of thing that invites bikeshed-painting.

I suggest looking into the covenant opcodes and supporting those instead of 
your own proposal, as your application is very close to one of the motivating 
examples for covenants in the first place.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Human readable checksum (verification code) to avoid errors on BTC public addresses

2021-08-16 Thread ZmnSCPxj via bitcoin-dev
Good morning TS,

> Entering a BTC address for a transaction can pose a risk of error (human or 
> technical). While
> there is a checksum integrated in BTC addresses already, this is used only at 
> a technical
> level and does not avoid entering a valid but otherwise wrong address. 
> Moreover, it does not
> improve the overall user experience.
>
> In case this hasn't been discussed before, I propose to implement a 3 or 4 
> digit code (let's
> call it 4DC for this text), generated as checksum from the address. This 4DC 
> should be shown
> in all wallets next to the receiving address. When entering a new address to 
> send BTC, the
> sending wallet should also show the 4DC next to the entered address. This 
> way, the sending
> person can easily verify that the resulting 4DC matches the one from the 
> receiving address.
>
> This would mean that a receiver would not only send his public address to the 
> sender, but also
> the 4DC. A minor disadvantage since a) it is not mandatory and b) it is very 
> easy to do.
> However, it would greatly reduce the probability of performing transactions 
> to a wrong address.
>
> Technically, this is very easy to implement. The only effort needed is 
> agreeing on a checksum
> standard to generate the code. Once the standard is established, all wallet 
> and exchange
> developers can start implementing this.

I think the "only" effort here is going to be the main bulk of the effort, and 
it will still take years of agreement (or sipa doing it, because every review 
is "either sipa made it, or we have to check *everything* in detail for several 
months to make sure it is correct").

In any case --- the last 6 characters of a bech32 string are already a 
human-readable checksum with fairly good properties, so why is it not usable 
for this case?
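
To illustrate, using the well-known BIP-173 example address (plain Python; the 
slice merely exposes the checksum characters that are already there):

    addr = "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4"  # BIP-173 test vector
    print(addr[-6:])  # 'v8f3t4', the human-readable checksum characters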

On the other side of the coin, if you say "the existing bech32 checksum is 
automatically checked by the software", why is forcing something to be manually 
checked by a human better than leaving the checking to software?


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-13 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,


> Hi ZmnSCPxj,
>
> Thank you for your insightful response.
>
> Perhaps I should take a step back and take a strictly functional angle. 
> Perhaps the list could help me to establish whether the proposed 
> functionality is:
>
> Desirable;
> Not already possible;
> Feasible to implement.
>
> The proposed functionality is as follows:
>
> The ability to control some coin with two private keys (or two sets of 
> private keys) such that spending is limited over time for one private key 
> (i.e., it is for instance not possible to spend all coin in a single 
> transaction) while spending is unrestricted for the other private key (no 
> limits apply). No limits must apply to coin transacted to a third party.
>
> Also, it must be possible never having to bring the unrestricted private key 
> online unless more than the limit imposed on the restrictive private key is 
> desired to be spent.
>
> Less generally, taking the perspective of a hodler: the user must be able to 
> keep one key offline and one key online. The offline key allows unrestricted 
> spending, the online key is limited in how much it is allowed to spend over 
> time.
>
> Furthermore, the spending limit must be intuitive. Best candidate I believe 
> would be a maximum spend per some fixed number of blocks. For instance, the 
> restrictive key may allow a maximum of 100k sats per any window of 144 
> blocks. Of course, the user must be able to set these parameters freely.

My proposal does not *quite* implement a window.
However, that is because it uses `nLockTime`.

With the use of `nSequence` in relative-locktime mode, however, it *does* 
implement a window, sort of.
More specifically, it implements a timeout on spending --- if you spend using a 
presigned transaction (which creates an unencumbered specific-valued TXO that 
can be arbitrarily spent with your online keyset) then you cannot get another 
"batch" of funds until the `nSequence` relative locktime passes.
However, this *does* implement a window that limits a maximum value spendable 
per any window of the relative timelock you select.
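
For concreteness, here is a minimal sketch of the BIP 68 encoding involved 
(plain Python; the helper names are mine and not any real wallet API). The 
presigned transactions would carry such an `nSequence` value on their input, 
and must be version 2 or later:

    SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22  # set => units of 512 seconds
    SEQUENCE_LOCKTIME_MASK = 0x0000ffff    # low 16 bits hold the value

    def relative_locktime_blocks(blocks):
        # Block-based relative locktime: just the block count, type flag unset.
        assert 0 < blocks <= SEQUENCE_LOCKTIME_MASK
        return blocks

    def relative_locktime_seconds(seconds):
        # Time-based relative locktime, rounded up to 512-second granularity.
        units = (seconds + 511) // 512
        assert 0 < units <= SEQUENCE_LOCKTIME_MASK
        return SEQUENCE_LOCKTIME_TYPE_FLAG | units

    nsequence = relative_locktime_blocks(144)  # e.g. a roughly 1-day window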

The disadvantage is that `nSequence` use is a lot more obvious and discernible 
than `nLockTime` use.
Many wallets today use non-zero `nLockTime` for anti-fee-sniping, and that is a 
good cover for `nLockTime` transactions.
I believe Dave Harding proposed that wallets should also use, at random, (say 
50-50) `nSequence`-in-relative-locktime-mode as an alternate anti-fee-sniping 
mechanism.
This alternate anti-fee-sniping would help cover `nSequence` use.

Note that my proposal does impose a maximum limit on the number of windows.
With `nSequence`-in-relative-locktime-mode the limit is the number of times 
that the online keyset can spend.
After spending that many windows, the offline keyset has to be put back online 
to generate a new set of transactions.

It has the massive massive advantage that you can implement it today without 
any consensus change, and I think you can expect that consensus change will 
take a LONG time (xref SegWit, Taproot).

Certainly the functionality is desirable.
But it seems it can be implemented with Bitcoin today.

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via bitcoin-dev
Good morning all,

Thinking a little more, if the dust limit is intended to help keep UTXO sets 
down, then on the LN side, this could be achieved as well by using channel 
factories (including "one-shot" factories which do not allow changing the 
topology of the subgraph inside the factory, but have the advantage of not 
requiring either `SIGHASH_NOINPUT` or an extra CSV constraint that is difficult 
to weigh in routing algorithms), where multiple channels are backed by a single 
UTXO.

Of course, with channel factories there is now a greater set of participants 
who will have differing opinions on appropriate feerate.

So I suppose one can argue that the dust limit becomes less material to higher 
layers, than actual onchain feerates.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [Lightning-dev] Removing the Dust Limit

2021-08-10 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy, et al.,

> For sure, CT can be done with computational soundness. The advantage of 
> unhidden amounts (as with current bitcoin) is that you get unconditional 
> soundness.

My understanding is that it should be possible to have unconditional soundness 
with the use of El-Gamal commitment scheme, am I wrong?

Alternately, one possible softforkable design would be for Bitcoin to maintain 
a non-CT block (the current scheme) and a separately-committed CT block (i.e. 
similar to how SegWit has a "separate" "block"/Merkle tree that includes 
witnesses).
When transferring funds from the legacy non-CT block, on the legacy block you 
put it into a "burn" transaction that magically causes the same amount to be 
created (with a trivial/publicly known salt) in the CT block.
Then to move from the CT block back to legacy non-CT you would match one of 
those "burn" TXOs and spend it, with a proof that the amount you are removing 
from the CT block is exactly the same value as the "burn" TXO you are now 
spending.

(for additional privacy, the values of the "burn" TXOs might be made into some 
fixed single allowed value, so that transfers passing through the CT portion 
would have fewer identifying features)

The "burn" TXOs would be some trivial anyone-can-spend, such as ` 
<0> OP_EQUAL OP_NOT` with `` being what is used in the CT to cover 
the value, and knowledge of the scalar behind this point would allow the CT 
output to be spent (assuming something very much like MimbleWimble is used; 
otherwise it could be the hash of some P2WSH or similar analogue on the CT 
side).

Basically, this is "CT as a 'sidechainlike' that every fullnode runs".

In the legacy non-CT block, the total amount of funds that are in all CT 
outputs is known (it would be the sum total of all the "burn" TXOs) and will 
have a known upper limit, that cannot be higher than the supply limit of the 
legacy non-CT block, i.e. 21 million BTC.
At the same time, *individual* CT-block TXOs cannot have their values known; 
what is learnable is only how many BTC are in all CT block TXOs, which should 
be sufficient privacy if there are a large enough number of users of the CT 
block.

This allows the CT block to use an unconditional privacy and computational 
soundness scheme, and if somehow the computational soundness is broken then the 
first one to break it would be able to steal all the CT coins, but not *all* 
Bitcoin coins, as there would not be enough "burn" TXOs on the legacy non-CT 
blockchain.

This may be sufficient for practical privacy.
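
As a toy model of the bookkeeping (plain Python with cleartext amounts; a real 
CT implementation would hide individual values behind commitments, this only 
shows the publicly auditable totals):

    class TwoSidedLedger:
        def __init__(self):
            self.burn_txos = {}  # txid -> amount locked on the legacy side
            self.ct_supply = 0   # total value represented on the CT side

        def enter_ct(self, txid, amount):
            # Legacy -> CT: create a "burn" TXO, mint the same amount in CT.
            self.burn_txos[txid] = amount
            self.ct_supply += amount

        def exit_ct(self, txid, amount):
            # CT -> legacy: spend a matching "burn" TXO, with a proof (elided
            # here) that exactly this amount leaves the CT side.
            assert self.burn_txos.get(txid) == amount
            del self.burn_txos[txid]
            self.ct_supply -= amount

        def audit(self):
            # Publicly checkable: the CT side never holds more than the sum of
            # the outstanding burn TXOs, hence never more than 21 million BTC.
            return self.ct_supply == sum(self.burn_txos.values())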


On the other hand, I think the dust limit still makes sense to keep for now, 
though.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Exploring: limiting transaction output amount as a function of total input value

2021-08-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,


With some work, what you want can be implemented, to some extent, today, 
without changes to consensus.

The point you want, I believe, is to have two sets of keys:

* A long-term-storage keyset, in "cold" storage.
* A short-term-spending keyset, in "warm" storage, controlling only a small 
amount of funds.

What you can do would be:

* Put all your funds in a single UTXO, with an k-of-n of your cold keys 
(ideally P2TR, or some P2WSH k-of-n).
* Put your cold keys online, and sign a transaction spending the above UTXO, 
and spending most of it to a new address that is a tweaked k-of-n of your cold 
keys, and a smaller output (up to the limit you want) controlled by the k-of-n 
of your warm keys.
  * Keep this transaction offchain, in your warm storage.
* Put your cold keys back offline.
* When you need to spend using your warm keys, bring the above transaction 
onchain, then spend from the budget as needed.


If you need to have some estimated amount of usable funds for every future unit 
of time, just create a chain of transactions with future `nLockTime`.

                           nLockTime +1day  nLockTime +2day
              +--------+   +--------+   +--------+
 cold UTXO -->|cold TXO|-->|cold TXO|-->|cold TXO|--> etc.
              |        |   |        |   |        |
              |warm TXO|   |warm TXO|   |warm TXO|
              +--------+   +--------+   +--------+

Pre-sign the above transactions, store the pre-signed transactions in warm 
storage together with your warm keys.
Then put the cold keys back offline.

Then from today to tomorrow, you can spend only the first warm TXO.
From tomorrow to the day after, you can spend only the first two warm TXOs.
And so on.
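
In rough Python pseudocode (make_tx, sign_with_cold_keys, and tx_output are 
hypothetical placeholders, and fee handling is ignored), the one-time 
pre-signing step looks something like this:

    DAY = 144  # approximate blocks per day

    def presign_chain(cold_utxo, total, daily_budget, start_height, days):
        txs = []
        prev = cold_utxo
        remaining = total
        for d in range(days):
            remaining -= daily_budget
            tx = make_tx(
                inputs=[prev],
                outputs=[("tweaked_cold_keyset", remaining),  # rolls forward
                         ("warm_keyset", daily_budget)],      # daily budget
                nlocktime=start_height + d * DAY,             # first is "now"
            )
            sign_with_cold_keys(tx)  # done once, while cold keys are online
            txs.append(tx)
            prev = tx_output(tx, 0)  # next link spends the rolled-forward TXO
        return txs  # kept in warm storage; the cold keys go back offline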

If tomorrow your warm keys are stolen, you can bring the cold keys online to 
claim the second cold TXO and limit your fund loss to only just the first two 
warm TXOs.

The above is bulky, but it has the advantage of not using any special opcodes 
or features (improving privacy, especially with P2TR which would in theory 
allow k-of-n/n-of-n to be indistinguishable from 1-of-1), and using just 
`nLockTime`, which is much easier to hide since most modern wallets will set 
`nLockTime` to recent block heights.

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proof of reserves - recording

2021-07-09 Thread ZmnSCPxj via bitcoin-dev
Good morning e,

Okay, it seems to me that what you are saying is something like this:

> Proof-of-reserves would (partially) work for a "pure" warehousing service 
> (i.e. user pays some fee, service keeps money and provides proofs that money 
> is kept).
> However, "pure" warehousing is not what a typical exchange does (else the 
> explicit fees in their exchanges would be higher), as it takes on risk due to 
> having to deal with non-Bitcoin monopoly money (by definition, since they are 
> *exchanges*).
> Further, with Bitcoin you can be your own warehouse (including Green-like 
> multisig schemes where you own your own keys that are part of the scheme), 
> which is an alternative choice to hiring a "pure warehouse" (i.e. Safe 
> Deposit).

Would that be a fair (if somewhat rough and undetailed) restatement?

Regards,
ZmnSCPxj

> > Good morning e,
>
> Good afternoon Z.
>
> > > Any expectation of interest implies borrowing, in other words, a loan 
> > > to
> > >
> >
> > the bank.
> > Perhaps this is the key point of contention?
>
> I'm not sure, but from my observations it's long been a point of confusion in 
> Bitcoiner understanding of banking.
>
> To put a finer point on it, Rothbard's criterion is vague in a couple of ways. 
> Earnings that offset fees are also "interest" in the economic context - in 
> which he writes. So even a zero-interest account (or negative up to the full 
> cost of maintaining the account) qualifies under this criterion. Yet he is 
> careful to say "implies". The arrangement may of course be explicit, in which 
> case one no longer relies on implied contract, one relies on explicit 
> contract. Finally, one may "expect" no interest, and even pay fees, but it 
> may nonetheless be a loan. This is what contracts are for.
>
> If one contracts for warehousing service, such Safe Deposit, as opposed to a 
> time deposit, such as Certificate of Deposit, Savings Account, or Checking 
> Account, then one gets a warehousing service - full fees and a contractual 
> obligation to maintain 100% of the deposit. There are also money transmission 
> services that move money around for a fee. The inability to distinguish money 
> from credit (including money substitutes) and warehousing from investment 
> (including "banking") leads directly to false conclusions regarding money and 
> banking. Unfortunately a good number of self-described "Austrians" perpetuate 
> these errors.
>
> > In cases where Bitcoin is given over to an exchange, there is no expectation
> > of interest, at least in the sense that there is no expectation that the 
> > number
> > of Bitcoins deposited in the exchange increase over time.
> > (There may be an expectation of an increase in the number of green-ink
> > historical commemoration papers it can buy, but the point is that the number
> > of Bitcoins held in behalf of the user is not expected to change)
> > The expectation is that exchanges earn money from the difference between
> > buy-price and sell-price, and the money-warehousing service they provide is
> > simply provided for free to facilitate their main business (i.e. brokers for
> > exchange).
> > Thus, the expectation is that the exchange provides a warehouse service,
> > not a bank service, and this service is provided for free since it enables 
> > their
> > real business of earning from bid-ask spreads.
>
> I'm not aware of what are people's expectations, nor would I judge what 
> qualifies as someone's "real" business, but a warehouse that facilitates 
> trades for a fee is of course a possible business model. PayPal's intended 
> (real?) business model was to earn from the float. That didn't pan out, 
> because people didn't retain money in their transmitter service.
>
> Exchanges that deal in monopoly money must move this through traditional 
> finance. This incurs all manner of risk. When someone sends them monopoly 
> money, there is no crypto-surety possible. This is part of their "reserve" 
> just as is the other side of trades.
>
> What matters is what people contract for - agree to, voluntarily.
>
> > On the other hand, not your keys not your coins, so anyone who uses such a
> > warehouse has whatever happens to the funds coming for them...
>
> One of the essential benefits of Bitcoin being that you can be your own 
> warehouse, and be your own money transmitter.
>
> But all production requires investment, which inherently entails letting go 
> of your money, producing something with it, and selling it to people for 
> other money. All investment is from someone's "reserve". Full reserve 
> investment (including banking) is an oxymoron. So whether through exchanges 
> or otherwise, there will be production, risk, loss and earnings. Otherwise 
> there will be nothing at all to buy, and all money will be worthless. This 
> idea of assuring that money is fully reserved applies only to that which one 
> does not invest (one's hoard); it does not apply to banks, or the capital of 
> any other 

Re: [bitcoin-dev] Proof of reserves - recording

2021-07-09 Thread ZmnSCPxj via bitcoin-dev
Good morning e,


> Any expectation of interest implies borrowing, in other words, a loan to 
> the bank.

Perhaps this is the key point of contention?

In cases where Bitcoin is given over to an exchange, there is no expectation of 
interest, at least in the sense that there is no expectation that the number of 
Bitcoins deposited in the exchange *increase* over time.
(There may be an expectation of an increase in the number of green-ink 
historical commemoration papers it can buy, but the point is that the number of 
Bitcoins held in behalf of the user is not expected to change)

The expectation is that exchanges earn money from the difference between 
buy-price and sell-price, and the money-warehousing service they provide is 
simply provided for free to facilitate their *main* business (i.e. brokers for 
*exchange*).
Thus, the expectation is that the exchange provides a warehouse service, not a 
bank service, and this service is provided for free since it enables their 
*real* business of earning from bid-ask spreads.

On the other hand, not your keys not your coins, so anyone who uses such a 
warehouse has whatever happens to the funds coming for them...

And of course exchanges need not earn money *just* from bid-ask spreads *in 
practice*, so they are unlikely to provide proof-of-reserves either.

Indeed, money warehousing may very well be provided by means other than 
proof-of-reserves, such as by using multisig the way Green wallet does, with 
better security.
Perhaps "pure exchanges" would be more amenable to such a scheme rather than 
proof-of-reserves.

Regards,
ZmnSCPxj

>
> "Whether saved capital is channeled into investments via stocks or via 
> loans is unimportant. The only difference is in the legal technicalities. 
> Indeed, even the legal difference between the creditor and the owner is a 
> negligible one."
>
> -   Rothbard
>
> > You're using terms in non-standard ways. Putting money into a bank is not 
> > considered "lending" to the bank.
>
> I think it's quite clear that Rothbard considers it lending. I'm not big on 
> appeal to authority, but sometimes it helps open minds. Links here:
>
> https://github.com/libbitcoin/libbitcoin-system/wiki/Full-Reserve-Fallacy
>
> > > money markets have had no reserve requirement and have a nearly spotless 
> > > record of satisfying their obligations.
>
> > Lol, money markets are so new that they've had no opportunity to show their 
> > true risk.
>
> 1971, 50 years.
> https://en.wikipedia.org/wiki/Money_market_fund
>
> > In the finance world, things work fine for a long time until they fail 
> > spectacularly, losing more than the gain they made in the first place. This 
> > is a regular occurence. Its the reason bitcoin was created.
>
> regular occurrence...
>
> "Buck breaking has rarely happened. Up to the 2008 financial crisis, only 
> three money funds had broken the buck in the 37-year history of money 
> funds... The first money market mutual fund to break the buck was First 
> Multifund for Daily Income (FMDI) in 1978, liquidating and restating NAV at 
> 94 cents per share"
>
> An investment loss of 6%.
>
> "The Community Bankers US Government Fund broke the buck in 1994, paying 
> investors 96 cents per share."
>
> An investment loss of 4%.
>
> "This was only the second failure in the then 23-year history of money funds 
> and there were no further failures for 14 years... No further failures 
> occurred until September 2008, a month that saw tumultuous events for money 
> funds."
>
> It was a "tumultuous" month for nearly all investments. The feds of course 
> doled out the pork, and the funds had to take it (as if their competition did 
> and they didn't they would fail due to higher relative capital costs and 
> thereby lower rates). In the past, absent pork, they had raised money where 
> necessary to maintain their NAV (just as banks do, but they go to the 
> taxpayer, and just as all business do from time to time).
>
> These are remarkably stable in terms of NAV. And people seem to be satisfied 
> with them:
>
> "At the end of 2011, there were 632 money market funds in operation,[19] with 
> total assets of nearly US$2.7 trillion.[19] Of this $2.7 trillion, retail 
> money market funds had $940 billion in Assets Under Management (AUM). 
> Institutional funds had $1.75 trillion under management.[19]"
>
> The point being, that this is as close to free market bank-based investing as 
> exists in the white market. In a money market fund, the NAV is reflected in 
> the share price, so any losses are evenly distributed - no different than 
> when all those HODLers take a hit when Elon farts, and the reserve they 
> maintain has been very effective in maintaining their $1/share target despite 
> paying interest on investments. They are merely shifting market returns into 
> interest, just like banks. Market returns over short periods aren't always 
> positive. No surprise. The larger point being, BANKS ARE INVESTMENT FUNDS.
>
> > 

Re: [bitcoin-dev] OP_CAT Makes Bitcoin Quantum Secure [was CheckSigFromStack for Arithmetic Values]

2021-07-09 Thread ZmnSCPxj via bitcoin-dev
Good morning Ethan,

> > Yes, quite neat indeed, too bad Lamport signatures are so huge (a couple 
> > kilobytes)... blocksize increase cough
>
> Couldn't you significantly compress the signatures by using either
> Winternitz OTS or by using OP_CAT to build a merkle tree so that the
> full signature can be derived during script execution from a much
> shorter set of seed values?

To implement Winternitz we need some kind of limited-repeat construct, which is 
not available in SCRIPT, but may be emulatable with enough `OP_IF` and sheer 
brute force.
But what you gain in smaller signatures, you lose in a more complex and longer 
SCRIPT, and there are limits to SCRIPT size (in order to limit the processing 
done in each node).
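
For illustration, the core of a Winternitz-style chain for a single base-16 
digit in Python (checksum digits, which a real scheme needs to stop an attacker 
from bumping digits upward, are omitted); the repeated hashing below is exactly 
the limited-repeat construct that SCRIPT would have to unroll into many 
`OP_IF` branches:

    import hashlib, os

    W = 16
    def H(x): return hashlib.sha256(x).digest()

    def hash_chain(x, n):
        for _ in range(n):
            x = H(x)
        return x

    secret = os.urandom(32)
    public = hash_chain(secret, W - 1)  # committed in the output/script

    digit = 11                          # the digit value being signed (0..15)
    sig = hash_chain(secret, digit)     # signer reveals H^digit(secret)
    assert hash_chain(sig, W - 1 - digit) == public  # verifier completes chain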

Merkle signatures trade off shorter pubkeys for longer signatures (signatures 
need to provide the hash of the *other* preimage you are not revealing), but in 
the modern post-SegWit Bitcoin context both pubkeys and signatures are stored 
in the witness area, which have the same weight, thus it is actually a loss 
compared to Lamport.


So yes, maybe Winternitz (could be a replacement for the "trinary" Jeremy 
refers to), Merkle not so much.

Regards,
ZmnSCPxj

> On Thu, Jul 8, 2021 at 4:12 AM ZmnSCPxj via bitcoin-dev
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Good morning Jeremy,
> > Yes, quite neat indeed, too bad Lamport signatures are so huge (a couple 
> > kilobytes)... blocksize increase cough
> > Since a quantum computer can derive the EC privkey from the EC pubkey and 
> > this scheme is resistant to that, I think you can use a single well-known 
> > EC privkey, you just need a unique Lamport keypair for each UTXO 
> > (uniqueness being mandatory due to Lamport requiring preimage revelation).
> > Regards,
> > ZmnSCPxj
> >
> > > Dear Bitcoin Devs,
> > > As mentioned previously, OP_CAT (or similar operation) can be used to 
> > > make Bitcoin "quantum safe" by signing an EC signature. This should work 
> > > in both Segwit V0 and Tapscript, although you have to use HASH160 for it 
> > > to fit in Segwit V0.
> > > See my blog for the specific construction, reproduced below.
> > > Yet another entry to the "OP_CAT can do that too" list.
> > > Best,
> > >
> > > Jeremy
> > >
> > > ---
> > >
> > > I recently published a blogpost about signing up to a 5 byte value using 
> > > Bitcoin script arithmetic and Lamport signatures.
> > > By itself, this is neat, but a little limited. What if we could sign 
> > > longer
> > > messages? If we can sign up to 20 bytes, we could sign a HASH160 digest 
> > > which
> > > is most likely quantum safe...
> > > What would it mean if we signed the HASH160 digest of a signature? What 
> > > the
> > > what? Why would we do that?
> > > Well, as it turns out, even if a quantum computer were able to crack 
> > > ECDSA, it
> > > would yield revealing the private key but not the ability to malleate the
> > > content of what was actually signed. I asked my good friend and 
> > > cryptographer
> > > Madars Virza if my intuition was correct, and he
> > > confirmed that it should be sufficient, but it's definitely worth closer
> > > analysis before relying on this. While the ECDSA signature can be 
> > > malleated to a
> > > different, negative form, if the signature is otherwise made immalleable 
> > > there
> > > should only be one value the commitment can be opened to.
> > > If we required the ECDSA signature be signed with a quantum proof 
> > > signature
> > > algorithm, then we'd have a quantum proof Bitcoin! And the 5 byte signing 
> > > scheme
> > > we discussed previously is a Lamport signature, which is quantum secure.
> > > Unfortunately, we need at least 20 contiguous bytes... so we need some 
> > > sort of
> > > OP\_CAT like operation.
> > > OP\_CAT can't be directly soft forked to Segwit v0 because it modifies the
> > > stack, so instead we'll (for simplicity) also show how to use a new 
> > > opcode that
> > > uses verify semantics, OP\_SUBSTRINGEQUALVERIFY that checks a splice of a 
> > > string
> > > for equality.
> > >
> > > ... FOR j in 0..=5
> > > <0>
> > > ... FOR i in 0..=31
> > > SWAP hash160 DUP  EQUAL IF DROP <2**i> ADD ELSE 
> > >  EQUALVERIFY ENDIF
> > > ... END FOR
> > > TOALTSTACK
> > > ... END FOR
> > >

Re: [bitcoin-dev] OP_CAT Makes Bitcoin Quantum Secure [was CheckSigFromStack for Arithmetic Values]

2021-07-08 Thread ZmnSCPxj via bitcoin-dev

Good morning Jeremy,

Yes, quite neat indeed, too bad Lamport signatures are so huge (a couple 
kilobytes)... blocksize increase *cough*

Since a quantum computer can derive the EC privkey from the EC pubkey and this 
scheme is resistant to that, I think you can use a single well-known EC 
privkey, you just need a unique Lamport keypair for each UTXO (uniqueness being 
mandatory due to Lamport requiring preimage revelation).
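
For reference, a toy Lamport sign/verify over a 160-bit digest in Python (a 
truncated SHA256 stands in for HASH160 here); note how signing reveals half of 
the preimages, which is exactly why each UTXO needs its own one-time keypair:

    import hashlib, os

    def H(x): return hashlib.sha256(x).digest()

    def keygen(bits=160):
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def bits_of(digest, n=160):
        v = int.from_bytes(digest[:n // 8], "big")
        return [(v >> i) & 1 for i in range(n)]

    def sign(sk, digest):
        return [pair[b] for pair, b in zip(sk, bits_of(digest))]

    def verify(pk, digest, sig):
        return all(H(s) == pair[b]
                   for s, pair, b in zip(sig, pk, bits_of(digest)))

    sk, pk = keygen()
    digest = H(b"transaction being signed")[:20]  # 160-bit stand-in digest
    assert verify(pk, digest, sign(sk, digest))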

Regards,
ZmnSCPxj


> Dear Bitcoin Devs,
>
> As mentioned previously, OP_CAT (or similar operation) can be used to make 
> Bitcoin "quantum safe" by signing an EC signature. This should work in both 
> Segwit V0 and Tapscript, although you have to use HASH160 for it to fit in 
> Segwit V0.
>
> See [my blog](https://rubin.io/blog/2021/07/06/quantum-bitcoin/) for the 
> specific construction, reproduced below.
>
> Yet another entry to the "OP_CAT can do that too" list.
>
> Best,
>
> Jeremy
> -
>
> I recently published [a blog
> post](https://rubin.io/blog/2021/07/02/signing-5-bytes/) about signing up to a
> 5 byte value using Bitcoin script arithmetic and Lamport signatures.
>
> By itself, this is neat, but a little limited. What if we could sign longer
> messages? If we can sign up to 20 bytes, we could sign a HASH160 digest which
> is most likely quantum safe...
>
> What would it mean if we signed the HASH160 digest of a signature? What the
> what? Why would we do that?
>
> Well, as it turns out, even if a quantum computer were able to crack ECDSA, it
> would yield revealing the private key but not the ability to malleate the
> content of what was actually signed.  I asked my good friend and cryptographer
> [Madars Virza](https://madars.org/) if my intuition was correct, and he
> confirmed that it should be sufficient, but it's definitely worth closer
> analysis before relying on this. While the ECDSA signature can be malleated 
> to a
> different, negative form, if the signature is otherwise made immalleable there
> should only be one value the commitment can be opened to.
>
> If we required the ECDSA signature be signed with a quantum proof signature
> algorithm, then we'd have a quantum proof Bitcoin! And the 5 byte signing 
> scheme
> we discussed previously is a Lamport signature, which is quantum secure.
> Unfortunately, we need at least 20 contiguous bytes... so we need some sort of
> OP\_CAT like operation.
>
> OP\_CAT can't be directly soft forked to Segwit v0 because it modifies the
> stack, so instead we'll (for simplicity) also show how to use a new opcode 
> that
> uses verify semantics, OP\_SUBSTRINGEQUALVERIFY that checks a splice of a 
> string
> for equality.
>
> ```
> ... FOR j in 0..=5
>     <0>
>     ... FOR i in 0..=31
>         SWAP hash160 DUP  EQUAL IF DROP <2**i> ADD ELSE 
>  EQUALVERIFY ENDIF
>     ... END FOR
>     TOALTSTACK
> ... END FOR
>
> DUP HASH160
>
> ... IF CAT AVAILABLE
>     FROMALTSTACK
>     ... FOR j in 0..=5
>         FROMALTSTACK
>         CAT
>     ... END FOR
>     EQUALVERIFY
> ... ELSE SUBSTRINGEQUALVERIFY AVAILABLE
>     ... FOR j in 0..=5
>         FROMALTSTACK <0+j*4> <4+j*4> SUBSTRINGEQUALVERIFY DROP DROP DROP
>     ...  END FOR
>     DROP
> ... END IF
>
>  CHECKSIG
> ```
>
> That's a long script... but will it fit? We need to verify 20 bytes of message
> each bit takes around 10 bytes script, an average of 3.375 bytes per number
> (counting pushes), and two 21 bytes keys = 55.375 bytes of program space and 
> 21
> bytes of witness element per bit.
>
> It fits! `20*8*55.375 = 8860`, which leaves 1140 bytes less than the limit for
> the rest of the logic, which is plenty (around 15-40 bytes required for the 
> rest
> of the logic, leaving 1100 free for custom signature checking). The stack size
> is 160 elements for the hash gadget, 3360 bytes.
>
> This can probably be made a bit more efficient by expanding to a ternary
> representation.
>
> ```
>         SWAP hash160 DUP  EQUAL  IF DROP  ELSE <3**i> SWAP DUP 
>  EQUAL IF DROP SUB ELSE  EQUALVERIFY ADD  ENDIF ENDIF
> ```
>
> This should bring it up to roughly 85 bytes per trit, and there should be 101
> trits (`log(2**160)/log(3) == 100.94`), so about 8560 bytes... a bit cheaper!
> But the witness stack is "only" `2121` bytes...
>
> As a homework exercise, maybe someone can prove the optimal choice of radix 
> for
> this protocol... My guess is that base 4 is optimal!
>
> ## Taproot?
>
> What about Taproot? As far as I'm aware the commitment scheme (`Q = pG + 
> hash(pG
> || m)G`) can be securely opened to m even with a quantum computer (finding `q`
> such that `qG = Q` might be trivial, but suppose key path was disabled, then
> finding m and p such that the taproot equation holds should be difficult 
> because
> of the hash, but I'd need to certify that claim better).  Therefore this
> script can nest inside of a Tapscript path -- Tapscript also does not impose a
> length limit, 32 byte hashes could be used as well.
>
> Further, to make keys reusable, there could be many Lamport 

Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-06 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,

> Hi ZmnSCPxj,
>
> I don't believe we need to ban Turing completeness for the sake of banning 
> Turing completeness.

Well I believe we should ban partial Turing-completeness, but allow total 
Turing-completeness.

I just think that unlimited recursive covenants (with or without a convenient 
way to transform state at each iteration) are **not** partial Turing-complete, 
but *are* total Turing-complete. (^^)

(The rest of this writeup is mostly programming languages nerdery so anyone who 
is not interested in Haskell (or purely functional programming) and programming 
language nerdery can feel free to skip the rest of this post.
Basically, ZmnSCPxj thinks we should still ban Turing-completeness, but 
unbounded covenants get a pass because they are not, on a technicality, 
Turing-complete)

For now, let us first taboo the term "Turing-complete", and instead focus on 
what I think matters here, the distinction between partial and total,

In a total programming language we have a distinction between data and codata:

* Data is defined according to its constructors, i.e. an algebraic data type.
* Codata is defined according to its destructors, i.e. according to a 
"behavior" the codata has when a particular "action" is applied to it.

For example, a singly-linked list data type would be defined as follows:

data List a where
    Cons :: a -> List a -> List a
    Nil :: List a

On the other hand, an infinite codata stream of objects would be defined as 
follows:

codata Stream a where
    head :: Stream a -> a
    tail :: Stream a -> Stream a

For `data` types, the result type for each constructor listed in the definition 
*must* be the type being defined.
That is why `Cons` is declared as resulting in a `List a`.
We declare data according to its constructors.

For `codata` types, the *first argument* for each destructor listed in the 
definition *must* be the type being defined.
That is why `head` accepts as its first argument the `Stream a` type.

This is relevant because in a total functional programming language, there exists 
some programming rule that restricts recursion.
The simplest such restriction is substructural recursion:

* If a function recurs:
  * Every self-call should pass in a substructure of an argument as that 
argument.

Every program that passes the above rule provably terminates.
Since every recursion passes in a smaller part of an argument, eventually we 
will reach an indivisible primitive object being passed in, and processing will 
stop recursing and can return some value.

Thus, a programming language that has a substructural recursion check (and 
rejects programs that fail the substructural recursion check) is not 
"Turing-complete".
The reason is that Turing-complete languages cannot solve the Halting Problem.
But a language that includes the substructural recursion rule *does* have a 
Halting Problem solution: every program that passes the substructural recursion 
rule halts and the Halting Problem is decidable for all programs that pass the 
substructural recursion rule.
(i.e. we are deliberately restricting ourselves to a subset of programs that 
pass substructural recursion, and reject programs that do not pass this rule as 
"not really programs", so every program halts)

For example, the following definition of `mapList` is valid under substructural 
recursion:

mapList :: (a -> b) -> (List a -> List b)
mapList f Nil         = Nil
mapList f (Cons a as) = Cons (f a) (mapList f as)

The second sub-definition has a recursive call `mapList f as`.
The second argument to that call, however, is a substructure of the second 
argument `Cons a as` on the LHS of the definition, thus it is a substructural 
recursive call, and accepted in a total programming language.
*Every* recursion in `mapList` should then be a substructural call on the 
second argument of `mapList`.

Now let us consider the following definition of `fibonacci`:

-- to use: fibonacci 1 1
fibonacci :: Integer -> Integer -> List Integer
fibonacci x y = Cons x (fibonacci y (x + y))

The above is not substructural recursive, neither argument in the recursive 
`fibonacci y (x + y)` call is a substructure of the arguments in the LHS of the 
`fibonacci` definition `fibonacci x y`.

Thus, we prevent certain unbounded computations like the above infinite 
sequence of fibonacci numbers.

Now, let us consider a definition of `mapStream`, the similar function on 
streams, using copattern matching rather than pattern matching:

mapStream :: (a -> b) -> (Stream a -> Stream b)
head (mapStream f as) = f (head as)
tail (mapStream f as) = mapStream f (tail as)

Now the interesting thing here is that in the substructural recursion check, 
what is being defined in the above stanza is ***not*** `mapStream`, but `head` 
and `tail`!
Thus, it ignores the `mapStream f (tail as)`, because it is **not** recursion 
--- what is being defined here is `tail`.

Re: [bitcoin-dev] Proof of reserves - recording

2021-07-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,


>
> >  The two participants in the channel can sign a plaintext containing their 
> >node pubkeys and how much each owns
>
> Sure, but even if both participants in the channel sign a correct statement 
> of truth, one of the participants can send funds out in the next second, 
> invalidating that truth. While proof of ownership of on-chain UTXOs can be 
> seen publicly in real time if they are spent, LN transactions aren't public 
> like that. So any balance attestation is at best only valid the instant its 
> taken, and can't be used as verification the money is still owned by the same 
> channel partner in the next second. 

The same problem really also exists onchain --- a thief (or "thief") who has 
gotten a copy of the key can sign a transaction that spends it, one second 
after the proof-of-reserves is made.

Really, though, the issue is that ownership of funds is conditional on 
*knowledge* of keys.
And *knowledge* is easily copyable.

Thus, it is possible that the funds that are "proven" to be the reserve of a 
custodian is actually *also* owned by someone else who has gotten to the 
privkeys (e.g. somebody threw a copy of it from a boating accident and a 
fearless scuba diver rescued it), and thus can also move the funds outside of 
the control of the custodian.
This condition can remain for many months or years, as well, without knowledge 
of the custodian clients, *or* of the custodian itself.

There is no way to prove that there is no alternate copy of the privkeys, hence 
"if only one could prove that he won't get into a boating accident".

On the other hand, one could argue that at least the onchain proof requires 
more conditions to occur, so we might plausibly live with "we cannot prove we 
will never get into a boating accident but we can show evidence that we live in 
a landlocked city far from any lakes, seas, or rivers".

Regards,
ZmnSCPxj

>
> >  a custodian Lightning node is unable to "freeze" a snapshot of its current 
> >state and make an atomic proof-of-reserves of *all* channels
>
> That would be a neat trick. But yeah, I don't know how that would be 
> possible. 
>
> >  I believe it is one reason why custodian proof-of-reserves is not that 
> >popular ... it does not prove that the key will not get lost
>
> True, but at least if funds do get lost, it would be come clear far quicker. 
> Today, an insolvent company could go many months without the public finding 
> out. 
>
> On Mon, Jul 5, 2021 at 5:09 PM ZmnSCPxj  wrote:
>
> > Good morning e,
> >
> > > If only one could prove that he won’t get into a boating accident.
> >
> > At least in the context of Lightning channels, if one party in the channel 
> > loses its key in a boating accident, the other party (assuming it is a true 
> > separate person and not a sockpuppet) has every incentive to unilaterally 
> > close the channel, which reveals the exact amounts (though not necessarily 
> > who owns which).
> > If the other party then uses its funds in a new proof-of-reserves, then 
> > obviously the other output of the unilateral close was the one lost in the 
> > boating accident.
> >
> > On the other hand, yes, custodians losing custodied funds in boating 
> > accidents is much too common.
> > I believe it is one reason why custodian proof-of-reserves is not that 
> > popular --- it only proves that the funds were owned under a particular key 
> > at some snapshot of the past, it does not prove that the key will not get 
> > lost (or "lost and then salvaged by a scuba diver") later.
> >
> > Regards,
> > ZmnSCPxj
> >
> > >
> > > e
> > >
> > > > On Jul 5, 2021, at 16:26, ZmnSCPxj via bitcoin-dev 
> > > > bitcoin-dev@lists.linuxfoundation.org wrote:
> > > > Good morning Billy,
> > > >
> > > > > I wonder if there would be some way to include the ability to prove 
> > > > > balances held on the lightning network, but I suspect that isn't 
> > > > > generally possible.
> > > >
> > > > Thinking about this in terms of economic logic:
> > > > Every channel is anchored onchain, and that anchor (the funding txout) 
> > > > is proof of the existence, and size, of the channel.
> > > > The two participants in the channel can sign a plaintext containing 
> > > > their node pubkeys and how much each owns.
> > > > One of the participants should provably be the custodian.
> > > >
> > > > -   If the counterparty is a true third party, it has no incentive to 
> > > > lie about its m

Re: [bitcoin-dev] Proof of reserves - recording

2021-07-05 Thread ZmnSCPxj via bitcoin-dev
Good morning e,


> If only one could prove that he won’t get into a boating accident.

At least in the context of Lightning channels, if one party in the channel 
loses its key in a boating accident, the other party (assuming it is a true 
separate person and not a sockpuppet) has every incentive to unilaterally close 
the channel, which reveals the exact amounts (though not necessarily who owns 
which).
If the other party then uses its funds in a new proof-of-reserves, then 
obviously the other output of the unilateral close was the one lost in the 
boating accident.

On the other hand, yes, custodians losing custodied funds in boating accidents 
is much too common.
I believe it is one reason why custodian proof-of-reserves is not that popular 
--- it only proves that the funds were owned under a particular key at some 
snapshot of the past, it does not prove that the key will not get lost (or 
"lost and then salvaged by a scuba diver") later.


Regards,
ZmnSCPxj

>
> e
>
> > On Jul 5, 2021, at 16:26, ZmnSCPxj via bitcoin-dev 
> > bitcoin-dev@lists.linuxfoundation.org wrote:
> > Good morning Billy,
> >
> > > I wonder if there would be some way to include the ability to prove 
> > > balances held on the lightning network, but I suspect that isn't 
> > > generally possible.
> >
> > Thinking about this in terms of economic logic:
> > Every channel is anchored onchain, and that anchor (the funding txout) is 
> > proof of the existence, and size, of the channel.
> > The two participants in the channel can sign a plaintext containing their 
> > node pubkeys and how much each owns.
> > One of the participants should provably be the custodian.
> >
> > -   If the counterparty is a true third party, it has no incentive to lie 
> > about its money.
> > -   Especially if the counterparty is another custodian who wants 
> > proof-of-reserves, it has every incentive to overreport, but then the first 
> > party will refuse to sign.
> > It has a disincentive to underreport, and would itself refuse to sign a 
> > dishonest report that assigns more funds to the first party.
> > The only case that would be acceptable to both custodians would be to 
> > honestly report their holdings in the Lightning channel.
> >
> > -   If the counterparty is a sockpuppet of the custodian, then the entire 
> > channel is owned by the custodian and it would be fairly dumb of the 
> > custodian to claim to have less funds than the entire channel.
> >
> > Perhaps a more practical problem is that Lightning channel states change 
> > fairly quickly, and there are possible race conditions, due to network 
> > latency (remember, both nodes need to sign, meaning both of them need to 
> > communicate with each other, thus hit by network latency and other race 
> > conditions) where a custodian Lightning node is unable to "freeze" a 
> > snapshot of its current state and make an atomic proof-of-reserves of all 
> > channels.
> > Regards,
> > ZmnSCPxj
> >
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proof of reserves - recording

2021-07-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,

> I wonder if there would be some way to include the ability to prove balances 
> held on the lightning network, but I suspect that isn't generally possible. 

Thinking about this in terms of economic logic:

Every channel is anchored onchain, and that anchor (the funding txout) is proof 
of the existence, and size, of the channel.

The two participants in the channel can sign a plaintext containing their node 
pubkeys and how much each owns.
One of the participants should provably be the custodian.

* If the counterparty is a true third party, it has no incentive to lie about 
its money.
  * Especially if the counterparty is *another* custodian who wants 
proof-of-reserves, it has every incentive to overreport, but then the first 
party will refuse to sign.
It has a disincentive to underreport, and would itself refuse to sign a 
dishonest report that assigns more funds to the first party.
The only case that would be acceptable to both custodians would be to 
honestly report their holdings in the Lightning channel.
* If the counterparty is a sockpuppet of the custodian, then the entire channel 
is owned by the custodian and it would be fairly dumb of the custodian to claim 
to have less funds than the entire channel.

Perhaps a more practical problem is that Lightning channel states change fairly 
quickly, and there are possible race conditions, due to network latency 
(remember, both nodes need to sign, meaning both of them need to communicate 
with each other, thus hit by network latency and other race conditions) where a 
custodian Lightning node is unable to "freeze" a snapshot of its current state 
and make an atomic proof-of-reserves of *all* channels.
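
A rough sketch of what such a signed plaintext could look like (Python; the 
field names and the sign/verify calls are mine and purely illustrative, not any 
Lightning-spec message), bearing in mind that it only attests to the split at a 
single instant:

    import json, time

    def make_attestation(funding_outpoint, node_a, node_b,
                         balance_a_msat, balance_b_msat, capacity_msat):
        # Fees/reserves may account for the remainder of the capacity.
        assert balance_a_msat + balance_b_msat <= capacity_msat
        return json.dumps({
            "funding_outpoint": funding_outpoint,  # anchors claim to a UTXO
            "node_a": node_a, "node_b": node_b,
            "balance_a_msat": balance_a_msat,
            "balance_b_msat": balance_b_msat,
            "timestamp": int(time.time()),         # valid only at this instant
        }, sort_keys=True)

    # attestation = make_attestation(...)
    # sig_a = sign(node_a_key, attestation)
    # sig_b = sign(node_b_key, attestation)
    # Anyone can check both signatures against the published node pubkeys, but
    # nothing stops the balances from changing one second later.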

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Russell,


> On Sun, Jul 4, 2021 at 9:02 PM Russell O'Connor  
> wrote:
>
> > Bear in mind that when people are talking about enabling covenants, we are 
> > talking about whether OP_CAT should be allowed or not.
> >
> > That said, recursive covenants, the type that are most worrying, seems to 
> > require some kind of OP_TWEAK operation, and I haven't yet seen any 
> > evidence that this can be simulated with CHECKSIG(FROMSTACK).  So maybe we 
> > should leave such worries for the OP_TWEAK operation.
>
> Upon further thought, you can probably make recursive covenants even with a 
> fixed scriptPubKey by sneaking the state into a few bits of the UTXO's amount. 
>  Or if you try really hard, you may be able to stash your state into a 
> sibling output that is accessed via the txid embedded in the prevoutpoint.

Which is kind of the point of avoiding giving too much power, because people 
can be very clever and start doing unexpected things from what you think is 
already a limited subset.
"Give an inch and they will take a mile".

Still, as pointed out, altcoins already exist and are substantially worse, and 
altcoin implementations are all going to run on Turing machines anyway (which 
are powerful enough to offer Turing-machine functionality), so maybe this is 
not really giving too much power, people can already fork Bitcoin and add full 
EVM support on it.

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Unlimited covenants, was Re: CHECKSIGFROMSTACK/{Verify} BIP for Bitcoin

2021-07-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Dave,

> On Sun, Jul 04, 2021 at 11:39:44AM -0700, Jeremy wrote:
>
> > However, I think the broader community is unconvinced by the cost benefit
> > of arbitrary covenants. See
> > https://medium.com/block-digest-mempool/my-worries-about-too-generalized-covenants-5eff33affbb6
> > as a recent example. Therefore as a critical part of building consensus on
> > various techniques I've worked to emphasize that specific additions do not
> > entail risk of accidentally introducing more than was bargained for to
> > respect the concerns of others.
>
> Respecting the concerns of others doesn't require lobotomizing useful
> tools. Being respectful can also be accomplished by politely showing
> that their concerns are unfounded (or at least less severe than they
> thought). This is almost always the better course IMO---it takes much
> more effort to satisfy additional engineering constraints (and prove to
> reviewers that you've done so!) than it does to simply discuss those
> concerns with reasonable stakeholders. As a demonstration, let's look
> at the concerns from Shinobi's post linked above:
>
> They seem to be worried that some Bitcoin users will choose to accept
> coins that can't subsequently be fungibily mixed with other bitcoins.
> But that's already been the case for a decade: users can accept altcoins
> that are non-fungible with bitcoins.
>
> They talk about covenants where spending is controlled by governments,
> but that seems to me exactly like China's CBDC trial.
>
> They talk about exchanges depositing users' BTC into a covenant, but
> that's just a variation on the classic not-your-keys-not-your-bitcoins
> problem. For all you know, your local exchange is keeping most of its
> BTC balance commitments in ETH or USDT.
>
> To me, it seems like the worst-case problems Shinobi describes with
> covenants are some of the same problems that already exist with
> altcoins. I don't see how recursive covenants could make any of those
> problems worse, and so I don't see any point in limiting Bitcoin's
> flexibility to avoid those problems when there are so many interesting
> and useful things that unlimited covenants could do.

The "altcoins are even worse" argument does seem quite convincing, and if 
Bitcoin can survive altcoins, surely it can survive covenants too?

In before "turns out covenants are the next ICO".
i.e. ICOs are just colored coins, which are useful for keeping track of various 
stuff, but have then been used as a vehicle to scam people.
But I suppose that is a problem that humans will always have: limited 
cognition, so that *good* popular things that are outside your specific field 
of study are indistinguishable from *bad* popular things.
So perhaps it should not be a concern on a technical level.
Maybe we should instead make articles about covenants so boring nobody will 
hype about it (^^;)v.

Increased functionality implies increased processing, and hopefully computation 
devices are getting cheap enough that the increased processing implied by new 
features should not be too onerous.



To my mind, an "inescapable" covenant (i.e. one that requires the output to be 
paid to the same covenant) is basically a Turing machine, and equivalent to a 
`while (true);` loop.
In a `while (true);` loop, the state of the machine reverts back to the same 
state, and it repeats again.
In an inescapable covenant, the control of some amount of funds reverts back to 
the same controlling SCRIPT, and it repeats again.
Yes, you can certainly add more functionality on top of that loop, just think 
of program main loops for games or daemons, which are, in essence, "just" 
`while (true) ...`.
But basically, such unbounded infinite loops are possible only under Turing 
machines, thus I consider covenants to be Turing-complete.
Principle of Least Power should make us wonder if we need full Turing machines 
for the functionality.

On the other hand --- codata processing *does* allow for unbounded loops, 
without requiring full Turing-completeness; they just require total 
functionality, not partial (and Turing-completeness is partial, not total).
Basically, data structures are unbounded storage, while codata structures are 
unbounded processing.
Perhaps covenants can encode an upper bound on the number of recursions, which 
prevents full Turing-completeness while allowing for a large number of 
use-cases.

(if the above paragraph makes no sense to you, hopefully this Wikipedia article 
will help: https://en.wikipedia.org/wiki/Total_functional_programming )
(basically my argument here is based on academic programming stuff, and might 
not actually matter in real life)
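
As a toy illustration of that last idea (plain Python, tied to no particular 
opcode proposal): a covenant that must recreate itself with a decremented 
counter looks like an unbounded loop but provably terminates:

    def step_covenant(counter, value):
        # Each spend must either recreate the same covenant with counter - 1,
        # or, once the counter hits zero, release the funds unencumbered.
        if counter == 0:
            return ("unencumbered", value)
        return ("same_covenant", counter - 1, value)

    state = ("same_covenant", 3, 100_000)
    while state[0] == "same_covenant":
        _, counter, value = state
        state = step_covenant(counter, value)
    print(state)  # ('unencumbered', 100000) once the counter runs out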

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CheckSigFromStack for Arithmetic Values

2021-07-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Erik and Jeremy,

> The "for" arithmetic here is largely to mean that this cleverness allows an 
> implementation of `OP_CHECKSIGFROMSTACK`, using arithmetic operation `OP_ADD`.
>
> To my mind this cleverness is more of an argument against ever enabling 
> `OP_ADD` and friends, LOL.
> This is more of a "bad but ridiculously clever thing" post than a "Bitcoin 
> should totally use this thing" post.

Turns out `OP_ADD` is actually still enabled in Bitcoin, LOL, I thought it was 
hit in the same banhammer that hit `OP_CAT` and `OP_MUL`.
Limited to 32 bits, but that simply means that you just validate longer 
bitvectors (e.g. the `s` in the "lamport-sign the EC signature") in sections of 
32 bits.

In any case, the point still mostly stands, I think this is more of a "overall 
bad but still ridiculously clever" idea; the script and witness sizes are 
fairly awful.
Mostly just worth discussing just in case it triggers somebody else to think of 
a related idea that takes some of the cleverness but is overall better.

On the other hand if we can actually implement the "Lamport-sign the EC sig" 
idea (I imagine the 32-bit limit requires some kind of `OP_CAT` or similar, or 
other bit- or vector-slicing operation), that does mean Bitcoin is already 
quantum-safe (but has a fairly lousy quantum-safe signing scheme, I really do 
not know the characteristics of better ones though).

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] CheckSigFromStack for Arithmetic Values

2021-07-03 Thread ZmnSCPxj via bitcoin-dev
Good morning Erik,

> i may be ignorant here but i have a question:
>
> Given that schnorr signatures now allow signers to perform complex arithmetic 
> signing operations out-of-band using their own communications techniques, 
> couldn't you just perform the publishing and accumulation of these signature 
> components without using a bitcoin script?
>
> In other words, push the effort of combination and computation off of the 
> bitcoin network and nodes.

Actually the post is not about *doing* arithmetic using signing operations; it 
is about enabling signing operations *at all* using the arithmetic operation 
`OP_ADD`.
Jeremy in the initial post is not doing arithmetic, he is using arithmetic to 
implement Lamport signatures (which cannot support arithmetic signing 
operations anyway, being a hash-based signing scheme).

The "for" arithmetic here is largely to mean that this cleverness allows an 
implementation of `OP_CHECKSIGFROMSTACK`, using arithmetic operation `OP_ADD`.

To my mind this cleverness is more of an argument against ever enabling 
`OP_ADD` and friends, LOL.
This is more of a "bad but ridiculously clever thing" post than a "Bitcoin 
should totally use this thing" post.

Regards,
ZmnSCPxj

>
> On Sat, Jul 3, 2021 at 12:01 AM Jeremy via bitcoin-dev 
>  wrote:
>
> > Yep -- sorry for the confusing notation but seems like you got it. C++ 
> > templates have this issue too btw :)
> >
> > One cool thing is that if you have op_add for arbitrary width integers or 
> > op_cat you can also make a quantum proof signature by signing the signature 
> > made with checksig with the lamport.
> >
> > There are a couple gotchas wrt crypto assumptions on that but I'll write it 
> > up soon  it also works better in segwit V0 because there's no keypath 
> > spend -- that breaks the quantum proofness of this scheme.
> >
> > On Fri, Jul 2, 2021, 4:58 PM ZmnSCPxj  wrote:
> >
> > > Good morning Jeremy,
> > >
> > > > Dear Bitcoin Devs,
> > > >
> > > > It recently occurred to me that it's possible to do a lamport signature 
> > > > in script for arithmetic values by using a binary expanded 
> > > > representation. There are some applications that might benefit from 
> > > > this and I don't recall seeing it discussed elsewhere, but would be 
> > > > happy for a citation/reference to the technique.
> > > >
> > > > blog post here, https://rubin.io/blog/2021/07/02/signing-5-bytes/, text 
> > > > reproduced below
> > > >
> > > > There are two insights in this post:
> > > > 1. to use a bitwise expansion of the number
> > > > 2. to use a lamport signature
> > > > Let's look at the code in python and then translate to bitcoin script:
> > > > ```python
> > > > def add_bit(idx, preimage, image_0, image_1):
> > > >     s = sha256(preimage)
> > > >     if s == image_1:
> > > >         return (1 << idx)
> > > >     if s == image_0:
> > > >         return 0
> > > >     else:
> > > >         assert False
> > > > def get_signed_number(witnesses : List[Hash], keys : List[Tuple[Hash, 
> > > > Hash]]):
> > > >     acc = 0
> > > >     for (idx, preimage) in enumerate(witnesses):
> > > >         acc += add_bit(idx, preimage, keys[idx][0], keys[idx][1])
> > > >     return acc
> > > > ```
> > > > So what's going on here? The signer generates a key which is a list of 
> > > > pairs of
> > > > hash images to create the script.
> > > > To sign, the signer provides a witness of a list of preimages that 
> > > > match one or the other.
> > > > During validation, the network adds up a weighted value per preimage 
> > > > and checks
> > > > that there are no left out values.
> > > > Let's imagine a concrete use case: I want a third party to post-hoc 
> > > > sign a sequence lock. This is 16 bits.
> > > > I can form the following script:
> > > > ```
> > > >  checksigverify
> > > > 0
> > > > SWAP sha256 DUP  EQUAL IF DROP <1> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<1> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<2> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<3> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<4> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<5> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<6> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<7> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<8> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<9> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<10> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<11> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<12> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  EQUAL IF DROP <1<<13> ADD ELSE  
> > > > EQUALVERIFY ENDIF
> > > > SWAP sha256 DUP  

Re: [bitcoin-dev] CheckSigFromStack for Arithmetic Values

2021-07-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

> Dear Bitcoin Devs,
>
> It recently occurred to me that it's possible to do a lamport signature in 
> script for arithmetic values by using a binary expanded representation. There 
> are some applications that might benefit from this and I don't recall seeing 
> it discussed elsewhere, but would be happy for a citation/reference to the 
> technique.
>
> blog post here, https://rubin.io/blog/2021/07/02/signing-5-bytes/, text 
> reproduced below
>
> There are two insights in this post:
> 1. to use a bitwise expansion of the number
> 2. to use a lamport signature
> Let's look at the code in python and then translate to bitcoin script:
> ```python
> def add_bit(idx, preimage, image_0, image_1):
>     s = sha256(preimage)
>     if s == image_1:
>         return (1 << idx)
>     if s == image_0:
>         return 0
>     else:
>         assert False
> def get_signed_number(witnesses : List[Hash], keys : List[Tuple[Hash, Hash]]):
>     acc = 0
>     for (idx, preimage) in enumerate(witnesses):
>         acc += add_bit(idx, preimage, keys[idx][0], keys[idx][1])
>     return acc
> ```
> So what's going on here? The signer generates a key which is a list of pairs 
> of
> hash images to create the script.
> To sign, the signer provides a witness of a list of preimages that match one 
> or the other.
> During validation, the network adds up a weighted value per preimage and 
> checks
> that there are no left out values.
> Let's imagine a concrete use case: I want a third party to post-hoc sign a 
> sequence lock. This is 16 bits.
> I can form the following script:
> ```
>  checksigverify
> 0
> SWAP sha256 DUP  EQUAL IF DROP <1> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<1> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<2> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<3> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<4> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<5> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<6> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<7> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<8> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<9> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<10> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<11> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<12> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<13> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<14> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1<<15> ADD ELSE  
> EQUALVERIFY ENDIF
> CHECKSEQUENCEVERIFY
> ```

This took a bit of thinking to understand, mostly because you use the `<<` 
operator in a syntax that uses `< >` as delimiters, which was mildly confusing 
--- at first I thought you were pushing some kind of nested SCRIPT 
representation.
In any case, replacing it with the actual numbers is a little less confusing on 
the syntax front, and I think (hope?) most people who can understand `1<<1` 
have also memorized the first few powers of 2.

> ```
>  checksigverify
> 0
> SWAP sha256 DUP  EQUAL IF DROP <1> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <2> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <4> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <8> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <16> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <32> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <64> ADD ELSE  EQUALVERIFY 
> ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <128> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <256> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <512> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <1024> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <2048> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <4096> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <8192> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <16384> ADD ELSE  
> EQUALVERIFY ENDIF
> SWAP sha256 DUP  EQUAL IF DROP <32768> ADD ELSE  
> EQUALVERIFY ENDIF
> CHECKSEQUENCEVERIFY
> ```

On the other hand LOL WTF, this is cool.

Basically you are showing that if we enable something as innocuous as `OP_ADD`, 
we can implement Lamport signatures for **arbitrary** values representable in 
small binary numbers (16 bits in the above example).
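
For readers following along, a rough sketch of the signer side that would pair
with the verification pseudocode quoted above (my own illustration, not taken
from the blog post):

```python
import os
from hashlib import sha256

def keygen(bits=16):
    """Generate `bits` pairs of secret preimages; their SHA256 images are the
    per-bit "keys" that get baked into the Script."""
    preimages = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    images = [(sha256(p0).digest(), sha256(p1).digest())
              for (p0, p1) in preimages]
    return preimages, images

def sign(value, preimages):
    """For each bit of `value`, reveal the preimage of the matching image; the
    resulting list (one preimage per bit) is the witness the Script consumes."""
    return [preimages[i][(value >> i) & 1] for i in range(len(preimages))]
```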

I was thinking "why not Merkle signatures" since the pubkey would be much 
smaller but the signature would be much larger, but (a) the SCRIPT would be 
much more complicated and (b) in modern Bitcoin, the above SCRIPT would be in 
the witness stack anyway so there is no advantage to pushing the size towards 
the 

Re: [bitcoin-dev] Boost Bitcoin circulation, Million Transactions Per Second with stronger privacy

2021-06-29 Thread ZmnSCPxj via bitcoin-dev
Good morning Raymo,

> Hey Alex,
>
> Your scenario works perfectly unless we put some restrictions on
> accepting transaction by creditor (in our case Bob).
> These are restrictions:
> Alice has to use a UTXO (or some UTXOs) worth at least 40,000 Sat as
> transaction input.
> Alice has to reserve 10,000 Sat as transaction fee (for MT transaction)
> regardless of transaction length or input/output amounts.
> Alice always pays at least 4,000 Sat of BTC-transaction-fee, and the
> 6,000 remaining fee must be paid by her and Bob in proportion to their
> outputs amounts)
> Alice can issue a transaction that has a maximum of 20,000 outputs for
> creditors (Bob and others).
> The rest (if exist) is change back to Alice address.
> The GT is formed based on MT.
> Bob considers a transaction couple (MT, GT) valid only if they respect
> these rules.
>
> Let’s put it in practice using some numbers (although you can find more
> detailed explanation in paper).
>
> The MT would be like that:
> Input: 40,000 Satoshi
> Outputs:
> Bob: 20,000
> BTC-fee: 10,000
> Change back to Alice: 10,000
>
> Based on this MT the GT will be
> Input: 40,000 Satoshi
> Outputs:
> Bob: 20,000 – 20,000×70% = 6,000
> BTC-fee: 10,000 + (14,000 of Bob’s output) + (1,500 of Alice’s change
> back) = 25,500
> Change back to Alice: 10,000 – 10,000×15% = 8,500
>
> Now if Alice wants to spend UTXO to Charlie with higher fee, she has to
> pay at least 25,500 + 1 Satoshi as BTC fee in order to convince miners
> to put his fraudulent transaction instead the GT in next block.
> Alice already got 20,000 Sat profit from Bob. Now she can earn another
> 14,999 Sat profit from Charlie because of same UTXO worth 40,000
> Satoshi.
> Indeed, she spent 40,000 Sat and in total got equal to 34,999 Sat goods
> or services.
> Is she a winner?
> I am not sure!
> What do you think?

You assume here that Alice the issuer only has a single UTXO and that it 
creates a single transaction spending that UTXO.

It is helpful to remember that miners consider fee*rate*, but your security 
analysis is dependent on *fee* and not fee*rate*.

Now consider, what if Alice creates 1000 UTXOs, promises GTs and MTs to 1000 
different Bobs?

Now, a GT has one input and two outputs.

1000 GTs have 1000 overheads (`nLockTime` and `nVersion` and so on), 1000 
inputs, and 2000 outputs.

Now Alice the issuer, being the sole signer, can create a fraudulent 
transaction that spends all 1000 UTXOs and spends it to a single Carol output.

This fraudulent transaction has 1 overhead, 1000 inputs, and 1 output.

Do you think Alice can get a better fee*rate* on that transaction while paying 
a lower aggregate *fee* than all the GTs combined?
Remember, you based your security analysis on Alice being forced to pay a 
larger *fee*, but neglect that miners judge transactions based on fee*rate*, 
which is subtly different and not what you are relying on.
I am sure that there exists some large enough number of UTXOs where a single 
aggregating fraudulent transaction will be far cheaper than the tons of little 
GTs your security analysis depends on.
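
To put very rough numbers on this (the vbyte sizes below are guesses of mine,
not measurements; the 25,500-sat figure is the GT fee from your own example):

```python
# Rough, illustrative arithmetic only.
N = 1000                              # number of Bobs / GTs
gt_fee = 25_500                       # sat, from the example GT above
in_vb, out_vb, ovh_vb = 68, 31, 11    # rough input/output/overhead vbytes

gt_vsize = ovh_vb + in_vb + 2 * out_vb    # each GT: 1 input, 2 outputs
agg_vsize = ovh_vb + N * in_vb + out_vb   # fraud tx: 1000 inputs, 1 output

total_gt_fee = N * gt_fee                 # ~25.5M sat across all the GTs
gt_feerate = gt_fee / gt_vsize            # ~180 sat/vB

# Fee the single aggregating transaction needs just to MATCH that feerate:
matching_fee = int(agg_vsize * gt_feerate)    # ~12.3M sat, less than half
print(total_gt_fee, matching_fee)
```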

This is why we do not use 1-of-1 signers in safe offchain protocols.
Not your keys, not your coins.

--

In addition, your analysis is based on assuming that miners are perfect 
rational beings of perfect rationality, ***and*** are omniscient.

In reality, miners possess bounded knowledge, i.e. they do not know everything.

Even if Alice is in possession of only a single UTXO, Alice can still feed 
miners a transaction with lower feerate than the MT, then feed the rest of the 
network with a valid MT.
Because transactions propagate through the network but this propagation is 
***not*** instantaneous, it is possible for the MT to reach the miners later 
than the fraudulent transaction.
In this window of time, a block may be mined that includes the fraudulent 
transaction, simply because the lucky miner never managed to hear of the 
correct MT.

This attack is essentially costless to Alice, especially for big enough 
transactions where mining fees are a negligible part of the payment.

This is why we do not use 1-of-1 signers in safe offchain protocols.
Not your keys, not your coins.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Boost Bitcoin circulation, Million Transactions Per Second with stronger privacy

2021-06-28 Thread ZmnSCPxj via bitcoin-dev
Good morning Raymo,

> Hi ZmnSCPxj,
>
> Why you get the signal “trust the Gazin wallet”?
> Sabu is a protocol and the Gazin wallet will be an implementation of
> that protocol. We will implement it in react-native language to support
> both Android and iPhone. Of course it will be open source and GPL3.
> Here is the repository and yet is empty :)
> https://github.com/raymaot/Gazin
>
> I wonder why you do not look carefully into the proposal! IMHO the Sabu
> will be far better than Lightning.
> Can’t you see the fact that in Sabu you do not need open and close
> channels ever? Can you imagine only this feature how dramatically
> decrease the transactions cost and how increase the distribution of
> nodes and improve privacy level? it makes every mobile wallet act like a
> lightning network.
> Did you note the fact that in Sabu protocol there is no routing? And the
> only people knew about a transaction are issuer and creditor? No one
> else won’t be aware of transactions and million transactions per second
> can be sent and received and repeal dynamically without any footprint on
> any DLT?
>
> The English is not my mother language and probably my paper is not a
> smooth and easy to read paper, but these are not good excuse to not even
> reading a technical paper carefully and before understanding it or at
> least trying to understanding it start to complaining.


What prevents the issuer from signing a transaction that is neither a valid 
MT nor a GT?

Nothing.

In Lightning, sure one side can sign a transaction that is not a valid 
commitment transaction, but good luck getting the other side to *also* sign the 
transaction; it will not.
Thus, you need n-of-n.

1-of-1 is simply not secure, full stop, you need to redesign the whole thing to 
use *at least* 2-of-2.
At which point you will have reinvented Lightning.

Otherwise, you are simply trusting that the wallet is implemented correctly, 
and in particular, that any creditor will not simply insert code in your 
open-source software to sign invalid transactions.

With a 1-of-1, any invalid-in-Sabu transaction can still be valid in the 
Bitcoin blockchain layer, thus the scheme is simply insecure.

Features are meaningless without this kind of basic trust-minimization security.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Boost Bitcoin circulation, Million Transactions Per Second with stronger privacy

2021-06-27 Thread ZmnSCPxj via bitcoin-dev
Good morning Raymo,

>
> It looks you already missed the entire design of Sabu and its
> restrictions. First of all, the Gazin wallet always controls the Sabu
> restrictions for every transaction in order to consider it as a valid
> transaction in a valid deal. That is, the creditor wallet controls the
> MT and GT in first place.

Stop right there.

From the above, what I get is, "trust the Gazin wallet".
Thus, the suggestion to just use Coinbase.
At least it has existed longer and has more current users that trust it, rather 
than this Gazin thing.


Is Gazin open-source?

* If Gazin is open-source, I could download the source code, make a local copy 
that gives me a separate copy of the keys, and use the keys to sign any 
transaction I want.
* If Gazin is not open-source, then why should I trust the Gazin wallet until 
my incoming funds to an open-source wallet I control have been confirmed deeply?

Lightning is still superior because:

* It can be open-sourced completely and even though I have keys to my onchain 
funds, I *still* cannot steal the funds of my counterparty.
* Even if I connect my open-source node to a node with a closed-source 
implementation, I know I can rely on receives from that node without waiting 
for the transaction to be confirmed deeply.


All the benefits your scheme claims are derived from the trust assumption, 
which is uninteresting; we already have those, and they are called custodial 
wallets.
Lightning allows for non-custodiality while achieving high global TPS and low 
fees.
And a central idea of Lightning is the requirement to use an n-of-n to form 
smaller sub-moneys from the global money.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Boost Bitcoin circulation, Million Transactions Per Second with stronger privacy

2021-06-26 Thread ZmnSCPxj via bitcoin-dev
Good morning Raymo,


> Good morning ZmnSCPxj
> Sorry for late reply.
>
> > Guarantee Transactions (GT) being higher-fee is not assured.
>
> The question is “assuring what?”.
> The whole point of my proposal is the fact that issuers and creditors
> act rationally and won't harm their selves. The numbers (input and
> output amounts), the relation between inputs and outputs amounts, the
> minimum and maximum of inputs and outputs amounts, and conditions of a
> valid transaction in Sabu protocol are all designed precisely to
> leading the rational users toward the making profit from the system. And
> irrationals (either issuer or creditor) can harm the others and
> inevitably in con-sequence will hurt themselves too. So, there is a fair
> and just transaction (MT).
> The creditor can send the GT to Bitcoin network and lose 70% of his
> money and damage 15% of issuer money!
> Vice versa the issuer can send GT to Bitcoin network and harm itself 15%
> in cost of hurt creditors 70% which is none sense. Or issuer can pay
> even more money directly to miner and hurt itself even more which is
> even more irrational! Or the miner will ignore the transaction fees of a
> GT and put the fraudulent transaction in next block, which I cannot
> imagine a miner that pass up his legal and legiti-mate income in favor
> of a greedy issuer!
> Please write me a scenario (preferably with clear amount of inputs and
> outputs) by which the cheater (either issuer or creditor) gains more
> profit than playing honestly.
> Only in this case we can accept your claim about weakness of protocol.
>
> > Every offchain protocol needs the receiver as a signatory to any 
> > unconfirmed transaction. the receiver must be a signatory --- the receiver 
> > cannot trust an unconfirmed transaction where the spent UTXO has an 
> > alternate branch that does not have the receiver as a signatory.
>
> I intentionally decided to not using 2 of 2 signature, because I didn't
> want to fall in same trap as Lightning. I wanted to avoid this long
> drilling 2 of 2 signings and routing. Instead, I just proposed to
> create and sign a valid Bitcoin transaction between only 2 people in a
> pure-peer-to-peer communication. The only signer is the issuer (the UTXO
> owner).
> Again, same logic. Please write me a scenario by which the cheater
> (issuer or creditor) can cheat this only-issuer-signed transactions and
> gains more profit than playing honest. Due to numbers and transaction
> restrictions and the insignificance of the amount of each transaction
> this scenario of fraud will fail too.

As the issuer is the only one signing, it can trivially create a self-paying 
transaction by itself that is neither a valid MT nor a valid GT.

Suppose I have an MT that pays 1 BTC to you and has a 1 BTC change output back 
to me.
After you hand over the equivalent of 1 BTC in other resources, I then create 
an alternative transaction, signed only by myself, paying 0.5 BTC to miners and 
1.5 BTC to myself, and since the fee is so high, the miners have every 
incentive to mine it.

Yes, that is not a valid MT or GT, but nothing in the Bitcoin blockchain layer 
requires that the *single* signer follow the protocol.
The point here is that a single signer can sign anything, including a 
transaction that is not an MT or a GT, but has arbitrary numbers that are 
neither a valid GT nor a valid MT.
That is the reason why every trust-minimized offchain system requires 2-of-2, 
somebody else has to countercheck the validity of a protocol that is *not* 
directly on the blockchain.
The blockchain only cares about signature and timelock validity; it does not 
care about (and check for validity) MTs and GTs.

In essence, this is a trusted system where every creditor trusts every issuer 
to *only* sign GTs and MTs, thus uninteresting --- you might as well just use 
Coinbase as your offchain if you are going to inject trust.

Now you can counterargue that you intend this system to be used for small 
payments and thus the fee for this non-MT non-GT clawback can approach the 
security levels you so carefully computed for GT and MT, but again --- the 
*largest* safe payment will vary depending on onchain mempool state, and if the 
mempool is almost empty, the largest safe payment will be much smaller than at 
other times.
This uncertainty is not handled well by most users, thus I think your UX will 
be fairly awful.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Boost Bitcoin circulation, Million Transactions Per Second with stronger privacy

2021-06-19 Thread ZmnSCPxj via bitcoin-dev
Good morning Raymo,

> Hi,
> I have a proposal for improve Bitcoin TPS and privacy, here is the post.
> https://raymo-49157.medium.com/time-to-boost-bitcoin-circulation-million-transactions-per-second-and-privacy-1eef8568d180
> https://bitcointalk.org/index.php?topic=5344020.0
> Can you please read it and share your idea about it.


Guarantee Transactions (GT) being higher-fee is ***not*** assured.

Feerates are always bumpable --- the sender of a transaction only needs to 
directly contact a miner and offer a fee to take a specific transaction on the 
next block proposal, conditional on the transaction *actually* getting into a 
block.
Such "side fees" are always possible.
Indeed, the in-transaction fees are "just" a way to anonymously and atomically 
make that fee offer to miners --- but miners and issuers can always communicate 
directly without using a Bitcoin transaction to arrange a higher fee for a 
fraudulent Main Transaction (MT).

Because of this, you should really treat all unconfirmed transactions --- 
including MTs and GTs --- as potentially replaceable, i.e. RBFable.
There is no such thing as "RBF disabled", all transactions are inherently 
RBF-able due to side fees --- it is simply a matter of anonymity, atomicity, 
and ease-of-use.

---

Every offchain protocol needs *the receiver* as a signatory to any unconfirmed 
transaction.

Or more strongly, the receiver **must** be a signatory --- the receiver cannot 
trust an unconfirmed transaction where the spent UTXO has an alternate branch 
that does *not* have the receiver as a signatory.

See: https://zmnscpxj.github.io/offchain/safety.html

Thus, all safe offchain schemes need to use an n-of-n signing set.

The smallest n-of-n that is still useful is 2-of-2, where one participant is a 
sender and the other is a receiver.
(1-of-1 is not useful since there is no possible receiver who can sign).

This requires Bitcoin to splinter into lots of 2-of-2 funds, each one a 
sovereign sub-money (that is *eventually* convertible to Bitcoin), each one a 
cryptocurrency system in its own right.
However, it so happens that we have a mechanism for transferring value across 
multiple cryptocurrency systems: the HTLC.

2-of-2 is also the most stable.
This is because *all* signatories of an n-of-n cryptocurrency system need to be 
online at the same time in order for *any* of them to use the funds in the 
system.
If any one of them is offline, then the system is unusable.
With 2 participants, there is some probability that one of them is offline and 
the individual 2-of-2 system is unusable.
With 3 participants, the probability is higher (there are more participants 
that can be offline).
With 4 participants, higher still.
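
A one-line calculation illustrates the availability point (the 95% online
probability is just an assumed number):

```python
# If each participant is independently online with probability p, an n-of-n
# system is usable only when all n are online at the same time.
p = 0.95
for n in (2, 3, 4, 10):
    print(n, p ** n)   # ~0.90, ~0.86, ~0.81, ~0.60
```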

Thus, the most stable is to split Bitcoin into lots of little 2-of-2 systems, 
and use HTLCs to transfer funds across the little 2-of-2 systems.

Thus, Lightning Network, which splits Bitcoin into lots of little 2-of-2 
cryptocurrency systems (channels), and uses HTLCs to atomically transfer value 
across them (routing).


Of course, having larger n is better as we need to splinter Bitcoin into fewer 
funds with larger participant sets.
And we can mitigate the offline-problem by using a two-layer system: we have a 
n-of-n system (n > 2) that itself splits into multiple smaller 2-of-2 systems.
That way, the Bitcoin layer is split into fewer UTXOs, reducing blockchain 
resource consumption further.

Thus, Channel Factories hosting Lightning Channels.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Consensus protocol immutability is a feature

2021-05-23 Thread ZmnSCPxj via bitcoin-dev
Good morning vjudeu,

> > Perhaps the only things that cannot be usefully changed in a softfork is 
> > the block header format and how proof-of-work is computed from the block 
> > header.
>
> Why not? I can imagine a soft fork where the block header would contain 
> SHA-256 and SHA-3 hashes in the same place. The SHA-256 would be calculated 
> as-is, but the SHA-3 would be truncated only to cover zero bits in SHA-256 
> hashes. In this way, if SHA-256 would be totally broken, old nodes would see 
> zero hashes in the previous block hash and the merkle tree hash, but the new 
> nodes would see correct SHA-3 hashes in the same place. So, for example if we 
> have 1d00 difficulty, the first 32-bits would be zeroes for all old 
> nodes, but all new nodes would see SHA-3 truncated to 32-bits in the same 
> place. The difficulty could tell us how many zero bits we should truncate our 
> SHA-3 result to. Also, in the same way we could introduce SHA-4 in the future 
> as a soft-fork if SHA-3 would be broken and we would see many zero bits in 
> our mixed SHA-256 plus SHA-3 consensus.

I do not think I follow.

The block header has a Merkle tree root that is a SHA-256 of some Merkle tree 
node, is that what you refer to?
Do you mean the same Merkle tree node has to hash to some common value in both 
SHA-2 and SHA-3?

Or do you refer to the `prevBlockHash`?
Do you mean the same `prevBlockHash` has to somehow be the same, for some 
number of bits, in both SHA-2 and SHA-3?

More specifically:

* `nVersion`: 4 bytes
* `prevBlockHash`: 32 bytes, SHA2 of previous block.
* `merkleTreeRoot`: 32 bytes, SHA2 of topmost Merkle tree node.
* `nTime`: 4 bytes, miner-imagined time.
* `nBits`: 4 bytes, encoded difficulty target
* `nonce`: 4 bytes, random number with no other meaning.

What do you refer to?

Because the above block header format is hashed to generate the `prevBlockHash` 
for the *next* block, it is almost impossible to change the format without a 
hardfork.
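
As a reminder of how rigid this is, here is a small illustrative Python sketch
of how the block hash is derived from exactly those six fields (hash fields are
passed in already in internal byte order):

```python
import struct
from hashlib import sha256

def block_hash(nVersion, prevBlockHash, merkleTreeRoot, nTime, nBits, nonce):
    """SHA256d over the 80-byte serialized header; `prevBlockHash` and
    `merkleTreeRoot` are 32-byte strings in internal byte order."""
    header = (struct.pack("<I", nVersion)
              + prevBlockHash
              + merkleTreeRoot
              + struct.pack("<III", nTime, nBits, nonce))
    assert len(header) == 80
    return sha256(sha256(header).digest()).digest()
```

Any change to that layout changes what unupgraded nodes hash and check
proof-of-work against, which is why the header itself is effectively frozen.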

Regards,
ZmnSCPxj

>
> On 2021-05-23 13:01:32 user ZmnSCPxj via bitcoin-dev 
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Good morning Jorge, et al,
> >
> > > Hardforks can be useful too.
> > > But, yes, I agree softforks are preferable whenever possible.
> >
> > I think in principle the space of possible softforks is very much wider 
> > than can be trivially expected.
> > For instance, maaku7 once proposed a softfork that could potentially change 
> > the block discovery rate as a softfork.
> > Although this required exploiting a consensus bug that has since been 
> > closed.
> > The example of SegWit shows us that we can in fact create massive changes 
> > to the transaction and block formats with a softfork.
> > For example, it is possible to change the Merkle Tree to use SHA3 instead, 
> > in a softfork, by requiring that miners no longer use the "normal" existing 
> > Merkle Tree, but instead to require miners to embed a commitment to the 
> > SHA3-Merkle-Tree on the coinbase of the "original" block format, and to 
> > build "empty" SHA2-Merkle-Trees containing only the coinbase.
> > To unupgraded nodes it looks as if there is a denial-of-service attack 
> > permanently, while upgraded nodes will seek blocks that conform to the 
> > SHA3-Merkle-Tree embedded in the coinbase.
> > (Do note that this definition of "softfork" is the "> 50% of miners is 
> > enough to pull everyone to the fork".
> > Some thinkers have a stricter definition of "softfork" as "non-upgraded 
> > nodes can still associate addresses to values in the UTXO set but might not 
> > be able to detect consensus rules violations in new address types", which 
> > fits SegWit and Taproot.)
> > (In addition, presumably the reason to switch to SHA3 is to avoid potential 
> > preimage attacks on SHA2, and the coinbase is still in a SHA2-Merkle-Tree, 
> > so... this is a bad example)
> > Perhaps the only things that cannot be usefully changed in a softfork is 
> > the block header format and how proof-of-work is computed from the block 
> > header.
> > But the flexibility of the coinbase allows us to hook new commitments to 
> > new Merkle Trees to it, which allows transactions to be annotated with 
> > additional information that is invisible to unupgraded nodes (similar to 
> > the `witness` field of SegWit transactions).
> >
> > Even if you do have a softfork, we should be reminded to look at the 
> > histories of SegWit and Taproot.
> > SegWit became controversial later on, which delayed its activation.
> > On the other hand, Taproot had no significant controversy and it was widel

Re: [bitcoin-dev] Reducing block reward via soft fork

2021-05-23 Thread ZmnSCPxj via bitcoin-dev
Good morning Karl,

> On 5/23/21, ZmnSCPxj via bitcoin-dev
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Good morning James,
> >
> > > Background
> > >
> > > ===
> > >
> > > Reducing the block reward reduces the incentive to mine. It reduces the
> > > maximum energy price at which mining is profitable, reducing the energy
> > > use.
> >
> > If people want to retain previous levels of security, they can offer to pay
> > higher fees, which increases the miner reward and thereby increasing the
> > energy use again.
>
> The turn-around time for that takes a population of both users and
> miners to cause. Increasing popularity of bitcoin has a far bigger
> impact here, and it is already raising fees and energy use at an
> established rate.
>
> If it becomes an issue, as bandwidth increases block size could be
> raised to lower fees.
>

Which increases block rewards somewhat (at least to some level that matches the 
overall security of the network) and you still have the same amount of energy 
consumed.

> > Properly account for the entropy increase (energy usage) of all kinds of
> > pollution, and the free market will naturally seek sustainable and renewable
> > processes --- because that maximizes profitability in the long run.
>
> There is little economic incentive to fine carbon emissions because
> there is no well-established quick path to gain profit from reducing
> them. The feedback paths you describe take decades if not hundreds of
> years.
>
> But it sounds like you are saying you would rather the energy issue
> stay a political one that does not involve bitcoin. Your point is
> quite relevant because bitcoin is not the largest consumer of energy;
> those who care about reducing energy use would be better put to look
> at other concerns.

Precisely.

> > What is needed is to enforce that pollution be paid for by those who cause
> > it --- this can require significant political influence to do (a major world
> > government is a major polluter, willing to pay for high fuel costs just to
> > ship their soldiers globally, polluting the environments of foreign
> > countries), and should be what true environmentalists would work towards,
> > not rejecting Bitcoin as an environmental disaster (which is frankly
> > laughable).
> > Remember, the free market only works correctly if all its costs are
> > accounted correctly --- otherwise it will treat costs subsidized by the
> > community of human beings as a resource to pump.
>
> It sounds like you would prefer a proof-of-work function that directly
> proved carbon offsetting? And an on-chain tax for environmental harm?


The problem is that the only proof of efficiency here is implicit: any 
inefficiency will eventually be rooted out of the network, as any inefficiency 
will translate to reduced profitability.
However, in the short term, a miner can pollute its locality, and then exit the 
business and leave its crap lying around for others to deal with and abscond 
with pure profit.
This translates to a theft in the profitability of others in the locality.

How to prove this is not happening?
The best you can do is to have some number of authorities sign off on whether 
or not they are doing this.
The problem is that authorities are bribeable.

Alternately, other entities in the locality can use force to require the 
polluting entity to clean up or suffer significant consequences.
This at least is better incentive-wise, as the others in the same locality are 
the ones most affected, but the ability to enforce may be difficult due to 
various political constructions; the miners could be in such deep cahoots with 
the local government that the local government would willingly hurt other local 
entities in the vicinity of the polluting entity.



Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Reducing block reward via soft fork

2021-05-23 Thread ZmnSCPxj via bitcoin-dev
Good morning James,

> Background
> ===
> Reducing the block reward reduces the incentive to mine. It reduces the 
> maximum energy price at which mining is profitable, reducing the energy use.
>

If people want to retain previous levels of security, they can offer to pay 
higher fees, which increases the miner reward and thereby increasing the energy 
use again.
The only difference is that the security is paid for directly by transactors 
rather than slowly extracted from HODLers.

Thus, I expect that the energy use of Bitcoin will fairly closely match its 
security usage, even with this change.

Really, though:

* The issue is not energy use.
* The issue is the energy *efficiency*.

Everything important requires energy.
What is needed is to get the most amount of work for the least amount of 
entropy-increase.

Deleterious environmental effects (pollution, temperature rise, and so on) are 
symptoms of entropy-increase in the local universe.
These have long-term negative effects from the simple fact that we are 
producing entropy and dumping it into our surroundings.

If these effects are properly charged to their instigators (e.g. carbon 
emissions fines), then the negative environmental effects will become economic 
disincentives, that miners will now naturally avoid in order to increase their 
profitability.
This holds no matter how much block rewards are, and how much comes from the 
block subsidy or from mining fees.

The trope that the "free market" is somehow opposed to "environmentalism" is 
about as accurate to real life as Hollywood hacking "I can crack AES-256 in 
exactly 30 minutes".
Properly account for the entropy increase (energy usage) of all kinds of 
pollution, and the free market will naturally seek sustainable and renewable 
processes --- because that maximizes profitability in the long run.
Anyone who pushes for environmentalism but refuses to use Bitcoin should be 
treated with suspicion of either hypocrisy or massive ignorance --- Bitcoin is 
the most honest currency in accounting for its energy usage and consumption, 
and I suspect most other currencies have far worse efficiencies, that happen to 
be hidden because they are not properly accounted for.

What is needed is to enforce that pollution be paid for by those who cause it 
--- this can require significant political influence to do (a major world 
government is a major polluter, willing to pay for high fuel costs just to ship 
their soldiers globally, polluting the environments of foreign countries), and 
should be what true environmentalists would work towards, not rejecting Bitcoin 
as an environmental disaster (which is frankly laughable).
Remember, the free market only works correctly if all its costs are accounted 
correctly --- otherwise it will treat costs subsidized by the community of 
human beings as a resource to pump.

> Alternatives
> ===
> Instead of outright rejecting transactions (and the blocks that contain them) 
> that attempt to spend increased block rewards, treat them as no-ops.

That is inefficient --- the "no-op" transactions reduce the available block 
space for operational transactions, thus this alternative is strictly inferior 
to a simple acceleration of block subsidy reduction.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Consensus protocol immutability is a feature

2021-05-23 Thread ZmnSCPxj via bitcoin-dev
Good morning Jorge, et al,

> Hardforks can be useful too.
> But, yes, I agree softforks are preferable whenever possible.

I think in principle the space of possible softforks is very much wider than 
can be trivially expected.

For instance, maaku7 once proposed a softfork that could potentially change the 
block discovery rate as a softfork.
Although this required exploiting a consensus bug that has since been closed.

The example of SegWit shows us that we can in fact create massive changes to 
the transaction and block formats with a softfork.

For example, it is possible to change the Merkle Tree to use SHA3 instead, in a 
softfork, by requiring that miners no longer use the "normal" existing Merkle 
Tree, but instead to require miners to embed a commitment to the 
SHA3-Merkle-Tree on the coinbase of the "original" block format, and to build 
"empty" SHA2-Merkle-Trees containing only the coinbase.
To unupgraded nodes it looks as if there is a denial-of-service attack 
permanently, while upgraded nodes will seek blocks that conform to the 
SHA3-Merkle-Tree embedded in the coinbase.

(Do note that this definition of "softfork" is the "> 50% of miners is enough 
to pull everyone to the fork".
Some thinkers have a stricter definition of "softfork" as "non-upgraded nodes 
can still associate addresses to values in the UTXO set but might not be able 
to detect consensus rules violations in new address types", which fits SegWit 
and Taproot.)

(In addition, presumably the reason to switch to SHA3 is to avoid potential 
preimage attacks on SHA2, and the coinbase is still in a SHA2-Merkle-Tree, 
so... this is a bad example)
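
To make the mechanism described above concrete, a rough sketch of the extra
check an upgraded node could perform (all the block/coinbase accessor names
here are hypothetical, purely for illustration --- and as noted, this
particular SHA3 example has problems anyway):

```python
from hashlib import sha3_256

def sha3_merkle_root(txids):
    """Toy SHA3 Merkle root over a list of 32-byte txids (illustration only)."""
    level = list(txids) or [b"\x00" * 32]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha3_256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def upgraded_node_accepts(block):
    # 1. The legacy SHA2 Merkle tree must commit to the coinbase alone, so
    #    unupgraded nodes keep seeing apparently-empty but valid blocks.
    if block.legacy_txs != [block.coinbase]:
        return False
    # 2. The coinbase must carry a commitment to a SHA3 tree over the real
    #    transactions.  `sha3_commitment`, `sha3_txid`, `legacy_txs`, and
    #    `real_txs` are hypothetical names, not any existing interface.
    expected = sha3_merkle_root([tx.sha3_txid for tx in block.real_txs])
    return block.coinbase.sha3_commitment == expected
```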

Perhaps the only things that cannot be usefully changed in a softfork are the 
block header format and how proof-of-work is computed from the block header.
But the flexibility of the coinbase allows us to hook new commitments to new 
Merkle Trees to it, which allows transactions to be annotated with additional 
information that is invisible to unupgraded nodes (similar to the `witness` 
field of SegWit transactions).





Even if you *do* have a softfork, we should be reminded to look at the 
histories of SegWit and Taproot.

SegWit became controversial later on, which delayed its activation.

On the other hand, Taproot had no significant controversy and it was widely 
accepted as being a definite improvement to the network.
Yet its implementation and deployment still took a long time, and there was 
still controversy on how to properly implement the activation code.

Any hardforks would not only have to go through the hurdles that Taproot and 
SegWit had to go through, but will *also* have to pass through the much higher 
hurdle of being a hardfork.

Thus, anyone contemplating a hardfork, for any reason, must be prepared to work 
on it for several **years** before anyone even frowns and says "hmm maybe" 
instead of everyone just outright dismissing it with a simple "hardfork = hard 
pass".
As a simple estimate, I would assume that any hardfork would require twice the 
average amount of engineering manpower involved in SegWit and Taproot.
(this assumes that hardforks are only twice as hard as softforks --- this 
estimate may be wrong, and this might provide only a minimum rather than an 
expected average)

There are no quick solutions in this space.
Either we work with what we have and figure out how to get around issues with 
no real capability to fix them at the base layer, or we have insight on future 
problems and start working on future solutions today.
For example, I know at least one individual was maintaining an "emergency" 
branch to add some kind of post-quantum signature scheme to Bitcoin, in case of 
a quantum break.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fee estimates and RBF

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

>  But it will involve lot of exception handling.


Yes, that is precisely the problem here.

If you select a fixed feerate and then just broadcast-and-forget, you have no 
real exceptions you have to handle --- but that means not using RBF at all.


Testing the handling of reorgs in particular is important, as a reorg might use 
an older version of an RBFed transaction rather than a newer version.
This also implies that further follow-up transactions might need to be 
recreated in such a case.

As this is financial code, we need a lot of testing, and code that has a lot of 
branches due to having to handle a lot of possible exceptions and so forth is a 
headache to completely cover in testing.


C-lightning supposedly supports RBF, in the sense that every transaction it 
makes always signals RBF, but I am almost certain there are edge cases where it 
might mishandle a replaced transaction and lose track of onchain funds, and it 
is difficult to support both "we can spend unconfirmed change outputs" (a very 
common feature of nearly every onchain wallet) and "we can change the feerate 
of unconfirmed transactions" (which changes the txid and therefore the UTXO id 
of the change output spent by use of the previous feature).
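
A toy illustration of that last point; the "txid" below is just a hash over a
simplified string serialization, not real transaction encoding:

```python
from hashlib import sha256

def toy_txid(serialized_tx: str) -> str:
    """Stand-in txid: SHA256d over a simplified serialization string."""
    return sha256(sha256(serialized_tx.encode()).digest()).hexdigest()

parent_v1 = toy_txid("in: UTXO A | out: pay 0.100, change 0.890 | fee 0.010")
parent_v2 = toy_txid("in: UTXO A | out: pay 0.100, change 0.880 | fee 0.020")  # RBF bump

# A child transaction that spent the unconfirmed change referenced parent_v1.
# Once the bumped version confirms, the output it spends can never exist.
assert parent_v1 != parent_v2
```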

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael,

> Good morning Michael,
>
> > Nothing in a dynamic system like PoW mining can be 100% anticipated, for 
> > example there might be advanced in manufacturing of chips which are 
> > patented and so on.
> > It sounds like your take is that this means no improvements can ever be 
> > made by any mechanism, however conservative.
>
> Not at all.
>
> Small-enough improvements over long-enough periods of time are expected and 
> anticipated --- that is why there exists a difficulty adjustment mechanism.
> What is risky is a large-enough improvement over a short-enough time that
> overwhelms the difficulty adjustment mechanism.
> ASICBOOST was a massive enough improvement that it could be argued to 
> potentially overwhelm this mechanism if it was not openly allowed for all 
> miners.

Or to put it in another perspective:

* Small improvements to PoW mining are tolerated by Bitcoin.
  * Such improvements are expected to be common.
* Large improvements to PoW mining are potential extinction events for Bitcoin, 
due to massive centralization risk.
  * Such improvements are expected to be *rare* but *not* nonexistent.
* The number of possible circuit configurations is bounded by physical limits 
(matter is quantized, excessively-large chips are infeasible, etc.), thus the 
number of expected optimizations of a particular overall algorithm is bounded.

Suppose two manufacturers find two different small improvements to PoW mining.
In all likelihood, "the sum is better than its parts" and if the two have a 
cross-licensing deal, they can outcompete their *other* competition.
Further, even if some small competitor violates the patent, the improvement may 
be small enough that the patent owner may decide the competitor is too small to 
bother with all the legal fees involved to enforce the patent.
Thus, small improvements to PoW mining are expected to eventually spread 
widely, and that is what the difficulty adjustment mechanism exists to modulate.

But suppose a third manufacturer develops an ASICBOOST-level optimization of 
whatever the PoW mining algorithm is.
That manufacturer has no incentive to cross-license, since it can dominate the 
competition without cross-licensing a bunch of smaller optimizations (that may 
not even add up to compete against the ASICBOOST-level optimization).
And any small competitor that violates patent will be enforced against, due to 
the major improvement that the large optimization has and the massive 
monopolistic advantage the ASICBOOST-level optimization patent holder would 
have.


SHA256d-on-Bitcoin-block-header has already uncovered ASICBOOST, and thus the 
number of possible other large optimizations is that much smaller --- the 
number of possible optimizations is bounded by physical constraints.
Thus, the risk of a black-swan event where a new optimization of 
SHA256d-on-Bitcoin-block-header is large enough to massively centralize mining 
is reduced, compared to every other alternative PoW algorithm, which is an 
important reason to avoid changing PoW as much as possible, without some really 
serious study (which you might be engaged in --- I am not enough of a mathist 
to follow your papers).

We are more likely to want to change SHA256 for SHA3 on the txid and Merkle 
trees than on the PoW.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael,

> Nothing in a dynamic system like PoW mining can be 100% anticipated, for 
> example there might be advanced in manufacturing of chips which are patented 
> and so on. 
>
> It sounds like your take is that this means no improvements can ever be made 
> by any mechanism, however conservative.

Not at all.

Small-enough improvements over long-enough periods of time are expected and 
anticipated --- that is why there exists a difficulty adjustment mechanism.
What is risky is a large-enough improvement over a short-enough time that 
overwhelms the difficulty adjustment mechanism.
ASICBOOST was a massive enough improvement that it could be argued to 
potentially overwhelm this mechanism if it was not openly allowed for all 
miners.

>
> We do go into a fair amount of detail about Minimum Effective Hardness in our 
> paper https://assets.pubpub.org/xi9h9rps/0158167859.pdf , which is 
> actually a special case of hardness that we invented for the context of 
> adding an operation to a PoW, and how it applies to random matrix mults.   

This certainly helps as well.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Prediction Markets and Bitcoin

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

>
> > Of course the people ultimately funding the development must impose what 
> >direction that development goes to, after all, it is their money that is 
> >being modified. Thus development must follow the market.
>
> Disagree. 
>
> 1.A position in a futures market about possible outcomes of an event is not 
> equivalent to funding Bitcoin development.
>
> 2.People or organizations funding Bitcoin developers or projects can always 
> have some opinion, influence and disagreements. They can never impose or 
> force something at least in Bitcoin protocol.

Sorry for the late reply.

I expect that many Bitcoin developers have a nontrivial amount of their life 
savings in Bitcoin.

Any change in Bitcoin price represents a significant change in the value of 
these life savings.

A position in a futures market represents a prediction by the one taking the 
position that they expect the price of Bitcoin to change in a particular 
direction, possibly based on some condition, including the direction where 
development goes.

This signal then represents an implicit threat ("if Bitcoin goes against this 
position, I will liquidate my Bitcoin and drop the Bitcoin price") which can be 
sufficient to "fund" or "de-fund" developers who have a significant stake in 
Bitcoin.




> I don't think futures market in this case will be able to aggregate and 
> reflect all available information so everything mentioned above has its own 
> importance which should be considered. Maybe I missed few things.

*Some* information > *No* information

>
> 3.Incorrect usage of futures markets in Bitcoin and other issues:

Well, yes, this is the hard part, sigh.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael,

> That’s interesting. I didn’t know the history of ASICBOOST.

History is immaterial, what is important is the technical description of 
ASICBOOST.
Basically, by fixing the partial computation of the second block of SHA256, we 
could selectively vary bits in the first block of SHA256, while reusing the 
computation of the second block.
This allows a grinder to grind more candidate blocks without recomputing the 
second block output, reducing the needed power consumption for the same number 
of hashes attempted.
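
For reference, the structural fact underlying this, sketched in Python (my own
illustration):

```python
# Where each header field falls relative to SHA-256's 64-byte message blocks.
fields = [("nVersion", 4), ("prevBlockHash", 32), ("merkleTreeRoot", 32),
          ("nTime", 4), ("nBits", 4), ("nonce", 4)]
offset = 0
for name, size in fields:
    print(f"{name:15s} bytes {offset:2d}..{offset + size - 1:2d}")
    offset += size
# Bytes 0..63 (nVersion, prevBlockHash, most of merkleTreeRoot) form the first
# message block; bytes 64..79 (merkleTreeRoot tail, nTime, nBits, nonce) plus
# padding form the second.  Keeping the second block's bytes fixed while
# varying the first is what the grinding optimization described above exploits.
```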

Here is an important writeup: 
https://www.mit.edu/~jlrubin/public/pdfs/Asicboost.pdf
It should really be required reading for anyone who dreams of changing PoW 
algorithms.

There may be similar layer-crossings in any combined construction --- or even 
just a simple hash function --- when it is applied to a specific Bitcoin block 
format.

>
> Our proposal (see Implementation) is to phase in oPoW slowly starting at a 
> very low % of the rewards (say 1%). That should give a long testing period 
> where there is real financial incentive for things like ASICBOOST
>
> Does that resolve or partially resolve the issue in your eyes?

It does mitigate this somewhat.

However, such a mechanism is an additional complication and there may be 
further layer-crossing violations possible --- there may be an optimization to 
have a circuit that occasionally uses SHA256d and occasionally uses oPoW, that 
is not possible with a pure SHA256d or pure oPoW circuit.
So this mitigation is not as strong as it might appear at first glance; 
additional layers means additional possibility of layer-crossing violations 
like ASICBOOST.




Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael,

> That’s a fair point about patents. However, note that we were careful about 
> this. oPoW only uses SHA3 (can be replaced with SHA256 in principle as well) 
> and low precision linear matrix multiplication. A whole industry is trying to 
> accelerate 8-bit linear matrix mults for AI so there is already a massive 
> incentive (and has been for decades).
>
> See companies like Mythic, Groq, Tesla (FSD computer), google TPU and so on 
> for electronic versions of this. Several of the optical ones are mentioned in 
> the BIP (e.g. Lightmatter)


Please note that ASICBOOST for SHA256d is based on a layer-crossing violation: 
SHA256 processes in blocks, and the Bitcoin block header is slightly larger 
than one SHA256 block.

Adding more to a direct SHA3 (which, as a "sponge" construction, avoids blocks, 
but other layer-crossing violations may still exist) still risks layer 
violations that might introduce hidden optimizations.

Or more succinctly;

* Just because the components have (with high probability) no more possible 
optimizations, does not mean that the construction *as a whole* has no hidden 
optimizations.

Thus, even if linear matrix multiplication and SHA3 have no hidden 
optimizations, their combination, together with the Bitcoin block header 
format, *may* have hidden optimizations.

And there are no *current* incentives to find such optimizations until Bitcoin 
moves to this, at which point we are already committed and it would be highly 
infeasible to revert to SHA256d --- i.e. too late.

This is why changes to PoW are highly discouraged.


Remember, ASICBOOST was *not* an optimization of SHA256 *or* SHA256d, it was an 
optimizations of SHA256d-on-a-Bitcoin-block-header.
ASICBOOST cannot speed up general SHA256 or even general SHA256d, it only 
applies specifically to SHA256d-on-a-Bitcoin-block-header.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning devrandom,

> On Mon, May 17, 2021 at 11:47 PM ZmnSCPxj:
>
> > When considering any new proof-of-foo, it is best to consider all effects 
> > until you reach the base physics of the arrow of time, at which point you 
> > will realize it is ultimately just another proof-of-work anyway.
>
> Let's not simplify away economic considerations, such as externalities.  The 
> whole debate about the current PoW is about negative externalities related to 
> energy production.
>
> Depending on the details, CAPEX (R&D, real-estate, construction, production) 
> may have less externalities, and if that's the case, we should be interested 
> in adopting a PoW that is intensive in these types of CAPEX.

Then let us also not forget another important externality: possible 
optimizations of a new PoW algorithm that risk being put into some kind of 
exclusive patent.

I think with high probability that SHA256d as used by Bitcoin will no longer 
have an optimization as large in effect as ASICBOOST in the future, simply 
because there is a huge incentive to find such optimizations and Bitcoin has 
been using SHA256d for 12 years already, and we have already found ASICBOOST 
(and while patented, as I understand it the patent owner has promised not to 
enforce the patent --- my understanding may be wrong).

Any alternative PoW algorithm risks an ASICBOOST-like optimization that is 
currently unknown, but which will be discovered (and possibly patented by an 
owner that *will* enforce the patent, thus putting the entire ecosystem at 
direct conflict with legacy government structures) once there is a good 
incentive (i.e. use in Bitcoin) for it.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Opinion on proof of stake in future

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Zac,

> VDFs might enable more constant block times, for instance by having a 
> two-step PoW:
>
> 1. Use a VDF that takes say 9 minutes to resolve (VDF being subject to 
> difficulty adjustments similar to the as-is). As per the property of VDFs, 
> miners are able show proof of work.
>
> 2. Use current PoW mechanism with lower difficulty so finding a block takes 1 
> minute on average, again subject to as-is difficulty adjustments.
>
> As a result, variation in block times will be greatly reduced.

As I understand it, another weakness of VDFs is that they are not inherently 
progress-free (their sequential nature prevents that; they are inherently 
progress-requiring).

Thus, a miner which focuses on improving the amount of energy that it can pump 
into the VDF circuitry (by overclocking and freezing the circuitry), could 
potentially get into a winner-takes-all situation, possibly leading to even 
*worse* competition and even *more* energy consumption.
After all, if you can start mining 0.1s faster than the competition, that is a 
0.1s advantage where *only you* can mine *in the entire world*.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Force to do nothing for first 9 minutes to save 90% of mining energy

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Michael,

> Am 17.05.2021 um 04:58 schrieb Luke Dashjr:
>
> > It increases security, and is unavoidable anyway.
> > You can't.
>
> There must be a way. dRNG + universal clock + cryptographical magic?!


Proof-of-work **is** the cryptographic magic that creates a universal clock.

In physics, the Arrow of Time is defined as the direction in which entropy 
increases.

Suppose you were shown a video of a particle in a low-gravity environment 
hitting a wall and bouncing away.
This video can be played forwards or backwards, and you would not be able to 
determine whether it is played forwards or backwards.

In short, in many physical interactions, there is ***no*** notion of a 
direction in time, i.e. no past or future.
Nearly every physical interaction, at the small scale, is reversible, thus 
there is no (apparent) inherent direction of time.

However, suppose you were instead shown a video where a group of ceramic shards 
on a floor comes together to form a vase, which then rises off the floor and 
then floats onto a table.
Obviously, this is a video played backwards.
A group of shards on the floor is a higher-entropy state than a vase on a 
table, thus it is obvious what the Arrow of Time here *actually* is.

Or suppose you were shown this video: 
https://www.youtube.com/watch?v=zePA3uIbB5I
Obviously, this is a video played backwards (except for the introduction, of 
course --- pay attention how there is a scene cut from the introduction to the 
main part of the video).
A Rubik cube that is in a disordered state is a higher-entropy state than a 
Rubik cube that is in an ordered state where each side has a specific single 
color, thus it is obvious that the Mythbusters did not actually do any work and 
just ran a time-reversed video of them disordering a newly-opened Rubik cube.


All of our clocks are ultimately derived from the *measurable* increase of 
entropy:

* The current definition of 1 second is measured in terms of the hyperfine 
transition of [Caesium-133 atoms](https://en.wikipedia.org/wiki/Isotopes_of_caesium#Caesium-133).
  Driving and observing this transition requires the clock to consume energy and 
thus increase universal entropy.
* Wind-up clockwork watches are powered by the controlled release of the energy 
stored in a spring, a consumption of stored energy and thus an increase in 
universal entropy.



Now, a low-entropy state is simply one where energy is available for 
consumption.
And as we know, proof-of-work requires energy consumption.

Thus, the existence of a proof-of-work is a proof that time has passed.
If time did not pass, then it would not have been possible to create the 
proof-of-work, because it would not be possible to consume energy (i.e. 
increase universal entropy) and thus create an Arrow of Time.

From this proof-of-time-passing, we can then build a universal clock, one that 
is deeply tied to the physical world, due to the energy consumption.
It is by this method that Bitcoin is anchored to reality.


There is already a universal clock available using cryptographic magic.
It is called proof-of-work.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Force to do nothing for first 9 minutes to save 90% of mining energy

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Anton,

> >> 4. My counter-proposal to the community to address energy consumption
> >> problems would be *to encourage users to allow only 'green miners' 
> >> process>> their transaction.* In particular:
> >>...
> >> (b) Should there be some non-profit organization(s) certifying green miners
> >> and giving them cryptographic certificates of conformity (either usage of
> >> green energy or purchase of offsets), users could encrypt their
> >> transactions and submit to mempool in such a format that *only green 
> >> miners>> would be able to decrypt and process them*.
>
> >Hello centralisation. Might as well just have someone sign miner keys, and 
> >get
> >rid of PoW entirely...
> >No, it is not centralization - 
>
> No, it is not centralization, as:
>
> (a) different miners could use different standards / certifications for 
> 'green' status, there are many already;
> (b) it does not affect stability of the network in a material way, rather 
> creates small (12.5% of revenue max) incentive to move to green sources of 
> energy (or buy carbon credits) and get certified - miners who would choose to 
> run dirty energy will still be able to do so.
> and
>
> (c) nothing is being proposed beyond what is already possible - Antpool can 
> go green today, and solicit users to send them signed transactions directly 
> instead of adding them to a public mempool, under the pretext that it would 
> make the transfer 'greener'. What is being proposed is some community effort 
> to standardize & promote this approach, because if we manage to make Bitcoin 
> green(er) - we will remove what many commentators see as the last barrier / 
> biggest risk to even wider Bitcoin adoption.


The point of avoiding centralization is to avoid authorities --- who can end up 
being bribeable or hackable single points-of-failure, and which would 
potentially be able to kill Bitcoin as a whole from a single attack point.

Adding an authority which filters miners works directly against this goal, 
regardless of however you define "centralization" --- centralization is not the 
root issue here, the authority *is*.

One can observe that "more renewable" energy sources will, economically, be 
cheaper (in the long run) anyway, and you do not have to add anything to go 
towards "more green" energy resources.

After all, a "non-renewable" resource is simply a resource that has a lower 
supply (it cannot be renewed) than a "more renewable" energy source.
There is only so much energy that is stored in coal and oil on Earth, but the 
sun has a much larger total mass than Earth itself, thus it is a "more 
renewable" energy resource than coal and oil.
Economically, this implies that "greener" energy resources will be cheaper in 
the long run, simply by price being a function of supply.

In short: trust the invisible hand.

We already know that lots of miners operate in places where energy 
prices have bottomed due to oversupply, thanks to technological improvements in 
capturing energy that used to be dissipated as waste heat.
What is needed is to spread this knowledge to others, not mess with the design 
of Bitcoin at a fundamental level and risk introducing unexpected side effects 
(bugs).


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Opinion on proof of stake in future

2021-05-18 Thread ZmnSCPxj via bitcoin-dev
Good morning Erik,

> Verifiable Delay Functions involve active participation of a single
> verifier. Without this a VDF decays into a proof-of-work (multiple
> verifiers === parallelism).
>
> The verifier, in this case is "the bitcoin network" taken as a whole.
> I think it is reasonable to consider that some difficult-to-game
> property of the last N blocks (like the hash of the last 100
> block-id's or whatever), could be the verification input.
>
> The VDF gets calculated by every eligible proof-of-burn miner, and
> then this is used to prevent a timing issue.
>
> Seems reasonable to me, but I haven't looked too far into the
> requirements of VDF's
>
> nice summary for anyone who is interested:
> https://medium.com/@djrtwo/vdfs-are-not-proof-of-work-91ba3bec2bf4
>
> While VDF's almost always lead to a "cpu-speed monopoly", this would
> only be helpful for block latency in a proof-of-burn chain. Block
> height would be calculated by eligible-miner-burned-coins, so the
> monopoly could be easily avoided.

Interesting link.

However, I would like to point out that the *real* reason that PoW consumes 
lots of power is ***NOT***:

* Proof-of-work is parallelizable, so it allows miners to consume more energy (by 
buying more grinders) in order to get more blocks than their competitors.

The *real* reason is:

* Proof-of-work allows miners to consume more energy in order to get more 
blocks than their competitors.

VDFs attempt to sidestep that by removing parallelism.
However, there are ways to increase *sequential* speed, such as:

* Overclocking.
  * This shortens lifetime, so you can spend more energy (on building new 
miners) in order to get more blocks than your competitors.
* Lower temperatures.
  * This requires refrigeration/cooling, so you can spend more energy (on the 
refrigeration process) in order to get more blocks than your competitors.

I am certain people with gaming rigs can point out more ways to improve 
sequential speed, as necessary to get more frames per second.

Given the above, I think VDFs will still fail at their intended task.
Speed, yo.

Thus, VDFs do not serve as a sufficient deterrent away from ever-increasing 
energy consumption --- it just moves the energy consumption increase away from 
the obvious (parallelism) to the obscure-if-you-have-no-gamer-buds.

You humans just need to get up to Kardashev 1.0, stat.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Proposal: Low Energy Bitcoin PoW

2021-05-18 Thread ZmnSCPxj via bitcoin-dev


> A few things jump out at me as I read this proposal
>
> First, deriving the hardness from capex as opposed to opex switches the 
> privilege from those who have cheap electricity to those who have access to 
> chip manufacturers/foundries. While this is similarly the case for Bitcoin 
> ASICS today, the longevity of the PoW algorithm has led to a better 
> distribution of knowledge and capital goods required to create ASICS. The 
> creation of a new PoW of any kind, hurts this dimension of decentralization 
> as we would have to start over from scratch on the best way to build, 
> distribute, and operate these new pieces of hardware at scale. While I have 
> not combed over the PoW proposed here in fine detail, the more complicated 
> the algorithm is, the more it privileges those with specific knowledge about 
> it and the manufacturing process.
>
> The competitive nature of Bitcoin mining is such that miners will be willing 
> to spend up to their expected mining reward in their operating costs to 
> continue to mine. Let's suppose that this new PoW was adopted, miners will 
> continue to buy these chips in ever increasing quantities, turning the 
> aforementioned CAPEX into a de facto OPEX. This has a few consequences. First 
> it just pushes the energy consumption upstream to the chip manufacturing 
> process, rather than eliminating it. And it may trade some marginal amount of 
> the energy consumption for the set of resources it takes to educate and 
> create chip manufacturers. The only way to avoid that cost being funneled 
> back into more energy consumption is to make the barrier to understanding of 
> the manufacturing process sufficiently difficult so as to limit the 
> proliferation of these chips. Again, this privileges the chip manufacturers 
> as well as those with close access to the chip manufacturers.
>
> As far as I can tell, the only thing this proposal actually does is create a 
> very lucrative business model for those who sell this variety of chips. Any 
> other effects of it are transient, and in all likelihood the transient 
> effects create serious centralization pressure.
>
> At the end of the day, the energy consumption is foundational to the system. 
> The only way to do away with authorities, is to require competition. This 
> competition will employ ever more resources until it is unprofitable to do 
> so. At the base of all resources of society is energy. You get high energy 
> expenditure, or a privileged class of bitcoin administrators: pick one. I 
> suspect you'll find the vast majority of Bitcoin users to be in the camp of 
> the energy expenditure, since if we pick the latter, we might as well just 
> pack it in and give up on the Bitcoin experiment.


Keagan is quite correct.
Ultimately all currency security derives from energy consumption.
Everything eventually resolves down to proof-of-work.

* Proof-of-space simply moves the work to the construction of more storage 
devices.
* Proof-of-stake simply moves the work to stake-grinding attacks.
* The optical proof-of-work simply moves the work to the construction of more 
miners.
* Even government-enforced fiat is ultimately proof-of-work, as the operation 
and continued existence of any government is work.

It is far better to move towards a more *direct* proof-of-work, than to add 
more complexity and come up with something that is just proof-of-work, but with 
the work moved off to somewhere else and with additional moving parts that can 
be jammed or hacked into.

When considering any new proof-of-foo, it is best to consider all effects until 
you reach the base physics of the arrow of time, at which point you will 
realize it is ultimately just another proof-of-work anyway.

At least, proof-of-work is honest about its consumption of resources.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP - limiting OP_RETURN / HF

2021-05-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Ruben,

> Hi Yanmaani,
>
> >Merged mining at present only needs one hash for a merkle root, and that's 
> >stored in the coinbase.
>
> Yes, but that method is not "blind", meaning BTC miners have to validate the 
> merged-mined chain, which is a significant downside.
>
> >It would be even simpler to add the following rules
>
> That would require a specific soft fork, whereas the method described in my 
> post avoids doing that.
>
> >do I need to put in a transaction that burns bitcoins for the tx fee
>
> The blind merged-mined chain (which I call a "spacechain") needs its own 
> native token in order to pay for fees. The mechanism I proposed for that is 
> the perpetual one-way peg, which allows fair "spacecoin" creation by burning 
> BTC, and circumvents creating bad speculative altcoin incentives. Anyone can 
> create a spacechain block and take the fees, and then try to get BTC miners 
> to include it by paying a higher fee than others (via RBF).

What bothers me about BMM is the B.

Mainchain miners assume that sidechain functionaries check the sidechain rules.
Their rule is that if the sidechain functionary is willing to pay an amount, 
then obviously the sidechain functionary must benefit by at *least* that amount 
(if not, the sidechain functionary is losing funds over time and will go out of 
business at some point).
Thus the BMM is an economic incentive for sidechain functionaries to be honest, 
because dishonesty means that sidechain nodes will reject their blocks and they 
will have earned nothing in the sidechain that is of equal or greater value 
than what they spend on the mainchain.

But the BMM on mainchain is done by bidding.
Suppose a sidechain functionary creates a block where it gets S fees, and it 
pays (times any exchange rates that arise due to differing security profiles of 
mainchain vs sidechain) M in fees to mainchain miners to get its commitment on the 
the mainchain.
Then any other competing sidechain functionary can create the same block except 
the S fees go to itself, and pay M+1 in fees to mainchain miners to get 
*that* commitment on the mainchain.
This triggers a bidding war.
Logically, further sidechain functionaries will now bid M+2 etc. until M=S 
(times exchange rates) and the highest bidder earns nothing.

That means that sidechain functionaries will not earn anything once there are 
at least 2 functionaries, because if there are two sidechain functionaries then 
they will start the above bidding war and all earnings go to mainchain miners, 
who are not actually validating anything in the sidechain.
So they are doing all this work of validating the sidechain blocks, but gain 
nothing thereby, and are thus not much better than fullnodes.
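
To make the bidding-war arithmetic concrete, here is a toy model in Python (purely illustrative; it assumes competitors always re-post the same sidechain block with a fee one unit higher, and ignores exchange rates):

    def bmm_bidding_war(S, start_bid=1):
        """S = sidechain fees collected by whoever gets the commitment confirmed.
        Each competing functionary re-posts the block with a one-unit-higher
        mainchain fee, as long as doing so still leaves it a positive profit."""
        bid = start_bid
        while S - (bid + 1) > 0:
            bid += 1
        return bid, S - bid     # (mainchain fee paid, functionary profit)

    print(bmm_bidding_war(1000))   # -> (999, 1): essentially all of S goes to mainchain miners

The toy model just restates the argument: with two or more competing functionaries, the winning bid is driven up to (approximately) S, and the functionary that actually validates the sidechain keeps essentially nothing.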

Even if you argue that the sidechain functionaries might gain economic benefit 
from the existence of the sidechain, that economic benefit can be quantified as 
some economic value, that can be exchanged at some exchange rate with some 
number of mainchain tokens, so M just rises above S by that economic benefit 
and sidechain functionaries will still end up earning 0 money.

If there is only one sidechain functionary the above bidding war does not 
occur, but then the entire sidechain depends on this one sidechain functionary.

So it does not seem to me that blinded merge mining would work at scale.
Maybe.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fee estimates and RBF

2021-05-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,

I believe a "true" full-RBF wallet should be what every onchain wallet aspires 
to.

However, I think a lot of the effort necessary here has to do with sheer 
engineering issues.

For example, if you think "RBF does not exist", you can do things like:

* Spend an unconfirmed input from a third party.
  * This is not actually safe since an unconfirmed tx might have a conflicting 
transaction get confirmed, but a lot of onchain wallets support this for 
non-RBF unconfirmed inputs because 99.9% of the time this does not happen.
* When you spend from a (confirmed or unconfirmed) input, delete it from your 
db forever (because you do not have to worry about alternate transactions 
spending the same input).
  * This simplifies db design, you do not have to keep track of states like 
"has been spent but tx is not confirmed yet", "has two different alternate 
transactions spending it that have not confirmed", "is on a transaction that is 
not confirmed and therefore this input might disappear completely" etc.

In particular, if we want a "true" full-RBF wallet:

* Suppose the user wants to spend some amount to address A.
  * The user imposes a limit on up to how much to spend on fees to have this 
spend happen.
* The wallet optimistically creates a low-fee send transaction.
* After some time, the wallet bumps up the fee by creating a new transaction.
  * The wallet keeps bumping up, up to the designated limit, the longer the 
transaction is not confirmed.

Of note is that there is a *race condition* in the above case.
When the wallet is bumping up and constructing a new transaction with higher 
fee, a miner could find a new block that has the old transaction with lower fee.

Now consider the subsequent user story.

* After some time, the user wants to spend another amount to address B.
  * Again the user imposes a limit on how much to spend on fees to have this 
spend happen.
* The wallet RBFs the existing transaction to include the spend to address B.

Again, a race condition can occur --- while the wallet is feebumping a new 
transaction that includes the new output, a random miner can find a new block 
that includes the old transaction.

Thus, the wallet really needs to keep track of any "pending spends" and 
correlate them with actual transactions.
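
As a rough illustration of the kind of state-tracking involved, here is a minimal sketch in Python (the names `PendingSpend`, `sign_rbf_tx`, `any_confirmed`, and `broadcast` are hypothetical, not any existing wallet API; a real wallet must also handle the race conditions described above, e.g. an older version confirming while the bump is being constructed):

    import time

    class PendingSpend:
        def __init__(self, outputs, max_feerate):
            self.outputs = outputs           # what the user asked to pay
            self.max_feerate = max_feerate   # user-imposed fee limit, sat/vbyte
            self.feerate = 1.0               # start low
            self.txids = []                  # every RBF version we ever broadcast

    def maintain(pending, wallet, interval=600):
        """Broadcast, then keep fee-bumping until some version of the spend confirms."""
        while True:
            tx = wallet.sign_rbf_tx(pending.outputs, pending.feerate)
            pending.txids.append(tx.txid)
            wallet.broadcast(tx)
            time.sleep(interval)
            confirmed = wallet.any_confirmed(pending.txids)
            if confirmed is not None:
                return confirmed             # one of the versions made it into a block
            if pending.feerate < pending.max_feerate:
                pending.feerate = min(pending.feerate * 2, pending.max_feerate)

Note that every broadcast version must be remembered: any one of them can still confirm, so the wallet cannot treat the latest version as "the" transaction.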

Further, of course it is convenient to be able to spend money even while it is 
unconfirmed.
But the sender of the unconfirmed input might be using the same software as 
this wallet as well, meaning that the actual transaction output might change as 
the original spender keeps fee-bumping it over time.

I confess I have not been thinking of this as well as I should have.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] maximum block height on transaction

2021-05-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy, and list,

> -   Using an opcode would greatly increase CPU usage because the script cache 
> would need to be reworked (and probably cannot be made to work).
> -   Adding a field would greatly increase the code complexity to the level of 
> SegWit, without all the important bugfixes+features (tx malleability, 
> quadratic sighash, well-defined extensible outputs) that SegWit provides.

Sometimes, the only way out is through.

A general idea to get around this would be:

* Define a "hidden" field of a transaction, which is not existent in *any* 
serialization of the transaction.
* Set a default value for this field that would be compatible with pre-softfork 
rules.
* Have an opcode that manipulates this field, carefully designed so it is 
idempotent.

The above general idea is not original to me, I believe.
I think I have seen it elsewhere on the list, possibly in discussions around 
sidechains, though my primary cache is unable to fetch and additional searches 
through unindexed storage is taking too long.

So, for this particular case, here is a (non-serious) proposal to implement a 
maximum block height on transactions.

* Create a new field `u32 nMaxHeight` on `CTransaction` that is not serialized 
in any transaction format.
  * A block is not valid if any transaction in it has an `nMaxHeight` lower 
than the block height of the block.
  * Default value is `0xFFFFFFFF`.
* Add a new opcode `OP_SETMAXHEIGHT` that replaces an existing `OP_NOP`.
  * The opcode must be followed by an `OP_PUSH` of a 32-bit value, else script 
validation fails.
* This prevents using a computed value, instead the value must be given as 
a constant in the script text.
  This is a precaution to reduce the risk that execution of the script at a 
different time or different computer or etc will result in a different value 
that the `OP_SETMAXHEIGHT` opcode uses, which can cause consensus divergence.
  If we figure out later that this precaution is not necessary, we can just 
use another `OP_NOP` for `OP_SETMAXHEIGHTFROMSTACK`.
  * If the current `nMaxHeight` is larger than the given value, then the 
`nMaxHeight` is set to the given value.

The above avoids issues with opcodes --- the script interpreter can continue to 
be executed in the only place it is in, i.e. at entry into the mempool.
It also avoids some of the code complexity with fields, since the field is 
non-existent in any serialization of the transaction, but is instead implied by 
the scripts that the transaction causes to be executed, reducing the need to 
identify pre-softfork peers and baby-talk to them --- the baby-talk simply 
contains "parental bonuses" that are understood by upgraded nodes who are "in 
the know".

Additional complications, such as the need for an index of `nMaxHeight` for 
transactions in the mempool (to remove transactions whose `nMaxHeight` is now 
in the past), and the additional checks needed when receiving an in-block 
transaction that is not in the mempool, are left to the reader.
Similar field and opcode for `CTransactionInput` for a relative-time max height 
are also left as an exercise to the reader.
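
For concreteness, a minimal sketch of the two new rules in Python (names and structure are mine, purely illustrative --- this is not Bitcoin Core code):

    MAX_U32 = 0xFFFFFFFF

    class CTransactionSketch:
        def __init__(self, vin, vout):
            self.vin = vin
            self.vout = vout
            # The "hidden" field: never serialized, only ever implied by
            # executing the scripts this transaction spends.
            self.nMaxHeight = MAX_U32    # default: accepts any block height

    def op_setmaxheight(tx, constant):
        # Idempotent: the field can only ever be lowered, so re-executing the
        # script at a later time cannot yield a different final value.
        if constant < tx.nMaxHeight:
            tx.nMaxHeight = constant

    def block_height_rule_ok(block_height, txs):
        # New consensus rule: the block is invalid if any transaction in it
        # demands a maximum height lower than the block's height.
        return all(tx.nMaxHeight >= block_height for tx in txs)

Since the default is the maximum `u32`, a transaction whose scripts never execute `OP_SETMAXHEIGHT` behaves exactly as it does pre-softfork.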

> -   You can do what you want with a second `nLockTime`d transaction that 
> spends the output anyway.

The advantage of this functionality is that you can be safely offline at the 
time the timeout occurs in any complicated timeout-based contract.

Typically, when using say an HTLC, the contractor who holds lien on the 
timelock branch, has to be online at the time the timelock becomes valid, in 
order to impose a timeout on the hashlock branch.
However, if instead the hashlock branch includes an `OP_SETMAXHEIGHT`, then the 
contractor holding lien on the timelock branch does not have this risk.

However, the contractor holding the lien on the hashlock branch now has 
increased risk.
If the timeout is approaching, and suddenly there is high mempool usage at the 
time, then a claim of the hashlock branch may fall off the mempool due to 
`nMaxHeight` violation.
But the transaction claiming the hashlock branch has been published and the 
preimage has been published in mempools all over the world, thus the contractor 
holding lien on the hashlock branch risks not being compensated for revelation 
of the preimage.

Whereas with the current way things are, the timelock-holder is at risk, and 
the hashlock-holder has reduced risk since even if the timeout arrives, there 
is still the possibility that the hashlock branch is what gets confirmed, 
whereas with `OP_SETMAXHEIGHT` the hashlock-holder has 0 chance of getting the 
hashlock branch confirmed in case of a sudden spike in onchain usage.

Thus it seems to me that this scheme does not really *improve* Bitcoin 
significantly, it only moves risks from one participant to another in a 
two-participant contract.
Thus, this proposal is not particularly serious.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list

Re: [bitcoin-dev] BIP - limiting OP_RETURN / HF

2021-04-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Christopher,

> >> But more importantly, adding limitations on OP_RETURN transactions is not 
> >> helpful.  Users who want to embed arbitrary data in their transactions can 
> >> always do so by encoding their data inside the values of legacy 
> >> multi-signature scriptpubkeys (pubkeys can be generated without knowing 
> >> the private key in order to encode non-key related data).  Not only can 
> >> users do this, users have done this in the past.  However, this behaviour 
> >> is problematic because such multi-signature "data" scriptpubkeys are 
> >> indistinguishable from "real" multisignature scriptpubkeys, and thus must 
> >> be kept in the UTXO set.  This differs from outputs using OP_RETURN which 
> >> are provably unspendable, and therefore can be safely omitted from the 
> >> UTXO set.
>
> This sounds like a good justification to remove the legacy multi-signature 
> capabilities as well.

The same technique can be used on P2PKH as well --- the "pubkey hash" need not 
be a hash of a public key, it can be a 20-byte commitment, or even an ASCII 
message like "ZmnSCPxj is the best" (20 characters, I counted).
There is nothing that *can* check if the hash of a public key is indeed the 
hash of a public key unless you actually reveal the public key.
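
For example, a quick sketch in Python of a standard-looking P2PKH output whose "pubkey hash" is actually just a 20-byte ASCII message (illustrative only; the coins locked behind it are unspendable unless someone finds a pubkey hashing to exactly those bytes):

    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xa9, 0x88, 0xac

    def p2pkh_script(data20: bytes) -> bytes:
        assert len(data20) == 20
        # OP_DUP OP_HASH160 <20-byte push> OP_EQUALVERIFY OP_CHECKSIG
        return bytes([OP_DUP, OP_HASH160, 20]) + data20 + bytes([OP_EQUALVERIFY, OP_CHECKSIG])

    commitment = b"ZmnSCPxj is the best"    # exactly 20 bytes, not the hash of any known pubkey
    print(p2pkh_script(commitment).hex())

To any node or chain-analysis tool, this scriptPubKey is indistinguishable from an ordinary pay-to-pubkey-hash output.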

If you need a 32-byte commitment, a P2WSH would work --- again the "script 
hash" need not be a hash of a script, it can be any 32-byte commitment.

In all these cases you have to waste 547 satoshi, but that tends to be small 
compared to tx fees currently.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Designing Bitcoin Smart Contracts with Sapio (available on Mainnet today)

2021-04-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy, et al.,


> Bitcoin Developers,
>
> I'm very excited to introduce Sapio[0] formally to you all.

This seems quite interesting to me as well!

I broadly agree with the rant on monetary units.
In C-Lightning we always (except for some legacy fields that will eventually be 
removed) output values as strings with an explicit `msat` unit, even for 
onchain values (the smallest of which are satoshi, but for consistency we 
always print as millisatoshi), and accept explicit `btc`, `sat`, and `msat` 
units.

--

Personally I would have used a non-embedded DSL.

In practice an embedded DSL requires a user to learn two languages --- the 
hosting language and the embedded language.
Whereas if you designed a non-embedded DSL, a new user would have to learn only 
one language.
For instance, if an error is emitted, then the user has to know whether the 
error comes from the hosting language compiler, or the embedded language 
implementation.

In a past career embedded DSLs for hardware description languages were being 
pushed, and we found that one of the drawbacks was the need to learn as well 
the hosting language --- at some point Haskell-embedded DSLs became so 
unpopular that anything that was even Haskell-related had a negative reaction 
in some hardware design shops.
For example BlueSpec originally was a Haskell-embedded DSL, and eventually 
implemented a Verilog-like syntax that was not embedded in Haskell, becoming 
BlueSpecSystemVerilog.

Further, as per coding theory, the hosting language is often quite generic and 
can talk about anything, including other embedded languages, thus we expect 
(all other things being equal) that in general, an utterance in an embedded DSL 
will be longer than an utterance in a non-embedded DSL (as there are more things 
to talk about, more symbols are necessary, and thus we expect things to be 
longer in the generic hosting language).
Whereas a non-embedded DSL can cut away most of the extra verbage needed to 
introduce to the hosting language implementation, in order to indicate the 
"entry" into the domain-specific language.

--

If my understanding is correct, it seems that the hosting language is a full, 
general, Turing-complete language that "builds up" a total 
(non-Turing-complete) contract description.

I have had (private) speculations before that it would be possible to design a 
language with two layers:

* A non-Turing-complete total "base language".
* A syntax meta-language similar to Scheme `syntax-rules`, which constructs 
ASTs for the "base language".

Note that Scheme `syntax-rules` is indeed Turing-complete, as a macro can 
expand to a form with two lists that form two "ends" of a tape, and act as a 
Turing machine on that tape, thus Turing-equivalent.
It is not a general language as it lacks many basic practicalities, but as pure 
computation, indeed it is possible to compute anything in that language.

The advantage of this scheme is that the meta-language is executed at language 
compile time, and the developer can see (by observing the compilation process) 
whether the meta-program halts or not.
However, the end-user executing the program is assured that the program, 
delivered as a compiled binary, will indeed terminate, as the base language is 
total and non-Turing-complete (i.e. the halting problem is trivial for the base 
language --- all programs halt).

I even have started designing a syntax scheme that adds in infix notation and 
indent-sensitivity to a Lisp-like syntax, at the cost of disallowing typical 
Lisp-like names like `pair?`, e.g.

foo x = value (bar x)
  where
bar x = x

is equivalent to:

(`=` (foo x)
 (value (bar x))
 (where
  (`=` (bar x) x)))

I can provide more details if interested.

Note that the base language is not embedded in the meta-language, as the 
meta-language is effectively only capable of talking about how the utterance in 
the base language is constructed --- the meta-language is not quite general 
enough (i.e. the meta-language cannot implement "Hello World").
Thus coding theory should imply that this should lead to more succinct 
utterances (in general).
From this point of view, language design is about striking a balance between 
the low input bandwidth of neurotypical human brains (thus compression is 
needed, i.e. the language encourages succinct programs) and the limited 
processing power of neurotypical human brains (thus decompression speed is 
needed, i.e. it should be obvious what something expands to).


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] maximum block height on transaction

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Billy,


> I've come across this argument before, and it seems kind of like Satoshi's 
> word here is held as gospel. I haven't heard any deep discussion of this 
> topic, and I even asked a question on the bitcoin SE about it. Sorry to 
> hijack this conversation, but I'm very curious if there's something more to 
> this or if the thinking was simply decided that OP_BLOCKNUMBER wasn't useful 
> enough to warrant the (dubious) potential footgun of people accepting 
> sub-6-block transactions from a transaction that uses an expired spend-path?

Another argument I have encountered has to do with the implementation of 
Bitcoin Core.

As an optimization, SCRIPT is evaluated only when a transaction enters the 
mempool.
It is not evaluated at any other time.
Indeed, when accepting a new block, if a transaction in that block is in the 
mempool, its SCRIPT is not re-evaluated.

If the max-blockheight-constraint is implemented as a SCRIPT opcode, then at 
each block, every SCRIPT in every transaction in the mempool must be 
re-evaluated, as the SCRIPT might not reject.
During times of high chain bloat, there will be large numbers of transactions 
in the mempool, only a tiny fraction will be removed at each block before the 
mempool finally clears, leading to effective O(n^2) CPU time spent (n blocks 
are needed in order to empty a mempool with n transactions, each block triggers 
re-evaluation of SCRIPT of n transactions in the mempool).
That O(n^2) assumes a single SCRIPT is O(1), which is untrue as well (but is 
generally approached in practice as most transactions are simple singlesig or 
`OP_CHECKMULTISIG` affairs).

That is, the mempool assumes that once a SCRIPT accepts, it will always accept 
in the future.
Thus, any SCRIPT opcode cannot change from "accept" (because at the current 
blockheight the max-block is not yet reached) to "reject" (because the 
max-block constraint is now violated).

Thus, we cannot use an opcode to impose the max-block cosntraint.

The alternative is to add a new field `maxtime` to the transaction.
Then possibly, we can have an `OP_CHECKMAXTIMEVERIFY` opcode that checks that 
the field has a particular value.
Then the mempool can have a separate index according to `maxtime` fields, where 
it can remove the indexed transactions at each block.
The index will be likely O(log n), and the filtering at each block would be O(n 
log n), which is an improvement.
Note in particular that the index itself would require O(n) storage.
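
A minimal sketch of such a mempool-side index in Python (hypothetical, not how Bitcoin Core actually structures its mempool):

    import heapq

    class MaxtimeIndex:
        """Min-heap keyed on each transaction's hypothetical `maxtime` field."""
        def __init__(self):
            self.heap = []                               # entries: (maxtime, txid)

        def add(self, txid, maxtime):
            heapq.heappush(self.heap, (maxtime, txid))   # O(log n) per insertion

        def expire(self, current_time):
            """Called once per new block: pop every tx whose maxtime is now in the past."""
            expired = []
            while self.heap and self.heap[0][0] < current_time:
                _, txid = heapq.heappop(self.heap)
                expired.append(txid)                     # must also be evicted from the mempool proper
            return expired

Each new block then triggers only heap pops for the transactions that actually expired, instead of re-running every script in the mempool.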

However, adding a new field to the transaction format would require techniques 
similar to what was used in SegWit, i.e. post-maxtime nodes have to "baby talk" 
to pre-maxtime nodes and pretend transactions do not have this field, in much 
the same way post-SegWit nodes "baby talk" to pre-SegWit nodes and pretend 
transactions do not have a `witness` field.
We would then need a third Merkle Tree to hold the "really real" transaction ID 
that contains the `maxtime` field as well.

Thus, it seems to me that the tradeoffs are simply not good enough, when you 
can get 99% of what you need using just another transaction with `nLockTime`:

* Using an opcode would greatly increase CPU usage because the script cache 
would need to be reworked (and probably cannot be made to work).
* Adding a field would greatly increase the code complexity to the level of 
SegWit, without all the important bugfixes+features (tx malleability, quadratic 
sighash, well-defined extensible outputs) that SegWit provides.
* You can do what you want with a second `nLockTime`d transaction that spends 
the output anyway.

Indeed, it is helpful to realize *why* `OP_CHECKLOCKTIMEVERIFY` and 
`OP_CHECKSEQUENCEVERIFY` work the way they are implemented.
They are typically discussed and described as if they were imposing time-based 
constraints, but the *real* implementation only imposes constraints on 
`nLockTime` and `nSequence` fields --- the SCRIPT interpreter itself does not 
look at the block that the transaction is in (because that is not available, as 
the SCRIPT interpreter is invoked at mempool entry, when the transaction *has* 
no block it is contained in).
There is instead a separate layer (the entry into the mempool) that implements 
the *actual* time-based constraints, based on the fields and not the SCRIPT 
opcodes.
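
A simplified sketch of what `OP_CHECKLOCKTIMEVERIFY` actually checks (roughly following BIP 65, with details elided; note it only ever compares against the transaction's own `nLockTime` field, never against the current block height):

    LOCKTIME_THRESHOLD = 500_000_000   # below this: block height; at or above: unix time
    SEQUENCE_FINAL = 0xFFFFFFFF

    def check_locktime_verify(stack_top, tx_nlocktime, input_nsequence):
        if stack_top < 0:
            return False
        # Both values must be of the same "kind" (height vs. time).
        same_kind = (stack_top < LOCKTIME_THRESHOLD) == (tx_nlocktime < LOCKTIME_THRESHOLD)
        if not same_kind:
            return False
        # The script only constrains the *field*; the mempool/consensus layer is
        # what later compares nLockTime against actual block height or time.
        if stack_top > tx_nlocktime:
            return False
        # A final nSequence would disable nLockTime entirely, so reject it.
        if input_nsequence == SEQUENCE_FINAL:
            return False
        return True

Because every input is a deterministic field of the transaction itself, it is safe to cache the result of script evaluation at mempool entry.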

Regards,
ZmnSCPxj

>
> On Fri, Apr 9, 2021 at 5:55 AM Jeremy via bitcoin-dev 
>  wrote:
>
> > You could accomplish your rough goal by having:
> >
> > tx A: desired expiry at H
> > tx B: nlocktime H, use same inputs as A, create outputs equivalent to 
> > inputs (have to be sure no relative timelocks)
> >
> > Thus after a timeout the participants in A can cancel the action using TX B.
> >
> > The difference is the coins have to move, without knowing your use case 
> > this may or may not help you. 
> >
> > On Fri, Apr 9, 2021, 4:40 AM Russell O'Connor via bitcoin-dev 
> >  wrote:
> >
> > > From https://bitcointalk.org/index.php?topic=1786.msg22119#msg22119:
> > >

Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning LL,

> On Tue, 16 Mar 2021 at 11:25, David A. Harding via bitcoin-dev 
>  wrote:
>
> > I curious about whether anyone informed about ECC and QC
> > knows how to create output scripts with lower difficulty that could be
> > used to measure the progress of QC-based EC key cracking.  E.g.,
> > NUMS-based ECDSA- or taproot-compatible scripts with a security strength
> > equivalent to 80, 96, and 112 bit security.
>
> Hi Dave,
>
> This is actually relatively easy if you are willing to use a trusted setup. 
> The trusted party takes a secp256k1 secret key and verifiably encrypt it 
> under a NUMS public key from the weaker group. Therefore if you can crack the 
> weaker group's public key you get the secp256k1 secret key. 
> Camenisch-Damgard[1] cut-and-choose verifiable encryption works here.
> People then pay the secp256k1 public key funds to create the bounty. As long 
> as the trusted party deletes the secret key afterwards the scheme is secure.
>
> Splitting the trusted setup among several parties where only one of them 
> needs to be honest looks doable but would take some engineering and analysis 
> work.

To simplify this, perhaps `OP_CHECKMULTISIG` is sufficient?
Simply have the N parties generate individual private keys, encrypt each of 
them with the NUMS pubkey from the weaker group, then pay out to an N-of-N 
`OP_CHECKMULTISIG` address of all the participants.
Then a single honest participant is enough to ensure security of the bounty.
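
A sketch of the N-of-N script construction in Python (illustrative; for bare outputs standardness limits N to 3, so in practice the script would be wrapped in P2SH or P2WSH, which allow up to 15 or 20 keys respectively):

    OP_CHECKMULTISIG = 0xae

    def op_n(n: int) -> int:
        assert 1 <= n <= 16
        return 0x50 + n                      # OP_1 .. OP_16

    def n_of_n_multisig(pubkeys) -> bytes:
        n = len(pubkeys)
        script = bytes([op_n(n)])            # required signatures: all of them
        for pk in pubkeys:
            assert len(pk) == 33             # compressed pubkeys
            script += bytes([33]) + pk       # direct push of each key
        script += bytes([op_n(n), OP_CHECKMULTISIG])
        return script

Breaking the weak NUMS key reveals every participant's key at once, so the bounty can only be claimed by actually performing the break (or by all N participants colluding).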

Knowing the privkey from the weaker groups would then be enough to extract all 
of the SECP256K1 privkeys that would unlock the funds in Bitcoin.

This should reduce the need for analysis and engineering.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Prediction Markets and Bitcoin

2021-04-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Prayank,


> I think prediction markets or such tokens might help in adding to the 
> information we already have however they don't decide or replace anything. 
> Bitcoin development should impact such markets and not the other way around. 

"Human behavior is economic behavior. The particulars may vary, but competition 
for limited resources remains a constant. Need as well as greed have followed 
us to the stars, and the rewards of wealth still await those wise enough to 
recognize this deep thrumming of our common pulse. " -- CEO Nwabudike Morgan, 
"The Centauri Monopoly", *Sid Meier's Alpha Centauri*

This is the tension between the necessary freedom of discovering strange new 
techniques, and the exigencies of life, where every joule of negentropy is a 
carefully measured resource.

Of course development must be free to do what is best technically, and to 
experiment and see what other techniques are possible or workable.
Thus the market must follow development.

Of course the people ultimately funding the development must impose the 
direction that development takes; after all, it is their money that is being 
modified.
Thus development must follow the market.

It is the negotiation of the two that is difficult.

Overall, I think a lot of the developer arguments are reasonably clear --- what 
is unclear is what the market wants, thus I think prediction markets are 
needed in order for the negotiation between these two 
aspects to advance.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot NACK

2021-03-17 Thread ZmnSCPxj via bitcoin-dev
Good morning,

> Good afternoon,
>
> That is not desirable since yourself and I cannot prove the property of the 
> UTXO when it is further spent unless we can ourselves scrutinize it.

What property *needs* to be proven in the first place?

I suspect you are riding too much on your preferences and losing sight of the 
end goal I am pointing at here.
If your goal is to promote something you prefer (which you selected for other 
reasons) then the conclusion will be different.

I already laid out the necessary goal that I consider as necessary:

> The entire point of a public blockchain is to prevent uncontrolled forgery of 
> the coin.

Given the above, it is not *necessary* to prove *any* property of *any* UTXO 
other than the property *this UTXO does not create more coins than what was 
designed*.
The exact value of that coin, the public key of that coin, *when* the coin was 
spent and for *what* purpose are not *necessary*, the only thing necessary to 
prove is that inputs = outputs + fee.
Indeed, the exact values of "inputs" and "outputs" and "fee" are also not 
needed to be verifiable, only the simple fact "inputs = outputs + fee" needs to 
be verifiable (which is why homomorphic encryptions of input, output, and fee 
are acceptable solutions to this goal).
It is immaterial if you or I *can* or *cannot* prove any *other* property, if 
the goal is only to prevent uncontrolled forgery.

If your definition of "fraud" is broader, then please lay it out explicitly.
As well, take note that as I understand it, this is largely the primary problem 
of cryptocurrencies that existed long before Bitcoin did; it is helpful to 
remember that Chaumian banks and various forms of e-cash existed before Bitcoin.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP Proposal: Consensus (hard fork) PoST Datastore for Energy Efficient Mining

2021-03-17 Thread ZmnSCPxj via bitcoin-dev
Good morning Andrew,

> I wouldn't fully discount general purpose hardware or hardware outside of the 
> realm of ASICS. BOINC (https://cds.cern.ch/record/800111/files/p1099.pdf) 
> implements a decent distributed computing protocol (granted it isn't a 
> cryptocurrency), but it far computes data at a much cheaper cost compared to 
> the competition w/ decent levels of fault tolerance. I myself am running an 
> extremely large scale open distributed computing pipeline, and can tell you 
> for certain that what is out there is insane. In regards to the argument of 
> generic HDDs and CPUs, the algorithmic implementation I am providing would 
> likely make them more adaptable. More than likely, evidently there would be 
> specialized HDDs similar to BurstCoin Miners, and 128-core CPUs, and all 
> that. This could be inevitable, but the main point is providing access to 
> other forms of computation along w/ ASICs. At the very least, the generic 
> guys can experience it, and other infrastructures can have some form of 
> compatibility.

What would the advantage of this be?

As I see it, changing the underlying algorithm is simply an attempt to reverse 
history, by requiring a new strain of specialization to be started instead of 
continuing the trend of optimizing SHA256d very very well.

I think it may be better to push *through* rather than *back*, and instead 
spread the optimization of SHA256d-specific hardware so widely that anyone with 
2 BTC liquidity in one location has no particular advantage over anyone with 2 
BTC liquidity in another location.
For one, I expect that there will be fewer patentable surprises remaining with 
SHA256d than any newer, much more complicated construction.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP Proposal: Consensus (hard fork) PoST Datastore for Energy Efficient Mining

2021-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Andrew,

Looking over the text...

> # I am looking towards integrating memory hard compatibility w/ the mining 
> algorithm. Memory hard computation allows for time and space complexity for 
> data storage functionality, and there is a way this can likely be implemented 
> without disenfranchising current miners or their hardware if done right.

I believe this represents a tradeoff between time and space --- either you use 
one spatial unit and take a lot of time, or you use multiple spatial units and 
take smaller units of time.

But such time/space tradeoffs are already possible with the existing mechanism 
--- if you cannot run your existing SHA256d miner faster (time), you just buy 
more miners (space).

Thus, I think the requirement for memory hardness is a red herring in the 
design of proof-of-work algorithms.
Memory hardness *prevents* this tradeoff (you cannot create a smaller miner 
that takes longer to mine, as you have a memory requirement that prevents 
trading off space).

It is also helpful to remember that spinning rust consumes electricity as well, 
and that any operation that requires changes in data being stored requires a 
lot of energy.
Indeed, in purely computational algorithms (e.g. CPU processing pipelines) a 
significant amount of energy is spent on *changing* voltage levels, with very 
little energy (negligible compared to the energy spent in changing voltage 
levels in modern CMOS hardware) in *maintaining* the voltage levels.

> I don't see a reason why somebody with $2m of regular hardware can't mine the 
> same amount of BTC as somebody with $2m worth of ASICs.

I assume here that "regular hardware" means "general-purpose computing device".

The Futamura projections are a good reason I see: 
http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html

Basically, any interpreter + fixed program can be converted, via Futamura 
projection, to an optimized program that cannot interpret any other program but 
runs faster and takes less resources.
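
A toy illustration of the idea in Python (hand-specialized rather than machine-derived, but the principle is the same):

    # A tiny "general-purpose interpreter": runs any program given as a list of ops.
    def interpret(program, x):
        acc = x
        for op, arg in program:
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
        return acc

    FIXED_PROGRAM = [("mul", 3), ("add", 7)]

    # First Futamura projection, done by hand: the interpreter specialized to the
    # fixed program, with all dispatch overhead removed.  This is the software
    # analogue of an ASIC that can run only one proof-of-work function.
    def specialized(x):
        return x * 3 + 7

    assert interpret(FIXED_PROGRAM, 10) == specialized(10) == 37

The specialized form can no longer run any other program, but it does strictly less work per evaluation --- which is exactly why specialized hardware eventually wins.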

In short, any hardware interpreter (i.e. general-purpose computing device) + a 
fixed proof-of-whatever program, can be converted to an optimized hardware that 
can only perform that proof-of-whatever program, but consuming less energy and 
space and will (eventually) be cheaper per unit as well, so that $2M of such a 
specific hardware will outperform $2M of general-purpose computing hardwre.

Thus, all application-specificity (i.e. any fixed program) will always take 
less resources to run than a generic hardware interpreter that can run any 
program.

Thus, if you ever nail down the specifics of your algorithm, and if a 
thousand-Bitcoin industry ever grows around that program, you will find that 
ASICs ***will*** arise that run that algorithm faster and less energy-consuming 
than general-purpose hardware that has to interpret a binary.
**For one, memory/disk bus operations are limited only to actual data, without 
requiring additional bus operations to fetch code.**
Data can be connected directly from the output of one computational sub-unit to 
the input of another, without requiring (as in the general-purpose hardware 
case) that the intermediate outputs be placed in general-purpose storage 
register (which, as noted, takes energy to *change* its contents, and as 
general-purpose storage will also be used to hold *other* intermediate outputs).
Specialized HDDs can arise as well which are optimized for whatever access 
pattern your scheme requires, and that would also outperform general-purpose 
HDDs as well.

Further optimizations may also exist in an ASIC context that are not readily 
visible but which are likely to be hidden somewhere --- the more complicated 
your program design, the more likely it is that you will not readily see such 
hidden optimizations that can be achieved by ASICs (xref ASICBOOST).

In short, even with memory-hardness, an ASIC will arise which might need to be 
connected to an array of (possibly specialized) HDDs but which will still 
outperform your general-purpose hardware connected to an array of 
general-purpose storage.

Indeed, various storage solutions already have different specializations: SMR 
HDDs replace tape drives, PMR HDDs serve as caches of SMR HDDs, SSDs serve as 
caches of PMR HDDs.
An optimized technology stack like that can outperform a generic HDD.

You cannot fight the inevitability of ASICs and other specialized hardware, 
just as you cannot fight specialization.

You puny humans must specialize in order to achieve the heights of your 
civilization --- I can bet you 547 satoshis that you yourself cannot farm your 
own food, you specialize in software engineering of some kind and just pay a 
farmer to harvest your food for you.
Indeed, you probably do not pay a farmer directly, but pay an intermediary that 
specializes in packing food for transport from the farm to your domicile, which 
itself probably delegates the actual transporting to another 

Re: [bitcoin-dev] Provisions (was: PSA: Taproot loss of quantum protections)

2021-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Andrew and Andrea,

Further afield: https://en.bitcoin.it/wiki/Taproot_Uses

Taproot ring signatures were also asked about by Andrea; the above page contains this link 
(have not actually read it myself): https://github.com/jonasnick/taproot-ringsig

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot NACK

2021-03-16 Thread ZmnSCPxj via bitcoin-dev


> It's incredible how this troll keeps trolling and the list (bitcoin-dev !!) 
> keeping attention
>
> Good troll, really

Depending on topic raised, it may be useful to at least answer the troll 
naively as if it were an honest question, if only so that third parties reading 
do not get confused and think the troll is bringing up some objection that is 
actually relevant.

For this particular topic you replied to, it seems to me obviously inane to 
discuss the "lordship" and "majesty" of the troll.
Even if the claims to such "lordship" are *true*, for most of the world, the 
relevance of the previous British empire is little more than a reality TV show 
about the British royal family (oh, some random thing happened to some random 
descendant of the royal family, how interesting, say did you see that nice new 
(actually old) technique Jeremy was talking about on the other thread about 
delegating control of coins to script, it looks like "graftroot without a 
softfork"?), and any particular claims to nobility or aristocracy are largely 
moot, thus not worth answering.


Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Taproot NACK

2021-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning JAMES,

> Good Afternoon,
>
> Verifiable and independantly verifiable are not the same. Independantly
> scrutinable means any public can scrutinise blockchain to determine it
> is honest. It does not rely on involved parties but insistently on the
> data published in the blockchain.

The involved parties ultimately publish the data on the blockchain, and the 
result is independently validatable.
All that each involved party has to do is validate for itself that it does not 
lose any funds, and, by the operation of math, the summary result does not 
result in any loss (or creation) of funds, thus CoinJoin cannot lead to fraud.

I do not see much of a point in your objection here.
For example, in the case of Lightning, the individual payments made by the 
participants in the channel cannot be verified by anyone else (they can lie 
about the payments that occurred on their channel).
But both participants in the channel need to agree on a single result, and it 
is that summary result that is independently verified onchain and published.

Indeed, one major technique for privacy improvement in Bitcoin is the simple 
technique of creating summaries of multiple actions without revealing details.
Such a general class of techniques works by reducing the data pushed onchain 
which provides both (a) scale because less data on chain and (b) privacy 
because less data is analyzable onchain.

Basically ---

1.  The entire point of a public blockchain is to prevent uncontrolled forgery 
of the coin.
Only particular rules allow construction of new coins (in Bitcoin, the 
mining subsidy).
2.  Various techniques can be used to support the above central point.
* The simplest is to openly publish every amount value in cleartext.
  * However, this is not necessarily the ***only*** acceptable way to 
achieve the goal!
  * Remember, the point is to prevent uncontrolled forgery.
The point is **not** mass surveillance.
* Another method would be to openly publish **summaries** of transactions, 
such as by Lightning Network summarizing the result of multiple payments.
  * CoinJoin is really just a way to summarize multiple self-payments.
* Another method would be to use homomorphisms between a cleartext and a 
ciphertext, and publish only the ciphertext (which can be independently 
verified as correctly being added together and that inputs equal outputs plus 
fees).

No privacy technique worth discussing and development in Bitcoin gets around 
the above point, and thus fraud cannot be achieved with those (at least if we 
define fraud simply as "those who control the keys control the coins" --- 
someone stealing a copy of your privkeys is beyond this definition of fraud).
Any privacy improvement Taproot buys (mostly in LN, but also some additional 
privacy for CoinSwap) will still not allow fraud.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Delegated signatures in Bitcoin within existing rules, no fork required

2021-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

Thank you.

Assuming only keys, an easier way of delegating would be simply to give a copy 
of the privkey outright to the delegatee.

However, an advantage of this technique you described is that the delegator can 
impose additional restrictions that are programmable via any SCRIPT, an ability 
that merely handing over the privkey cannot do.
Thus the technique has an ability that mere handover cannot achieve.

If the delegatee is a known single entity, and S is simply the delegatee key 
plus some additional restrictions, it may be possible to sign with 
`SIGHASH_ALL` a transaction that spends A and D, and outputs to a singlesig of 
the delegatee key.
This would avoid the use of `SIGHASH_NONE`, for a mild improvement in privacy.
The output would still allow the delegatee to dispose of the funds by its 
unilateral decision subject to the fulfillment of the script S (at the cost of 
yet another transaction).
On the other hand, if S is unusual enough, the enhanced privacy may be moot 
(the S already marks the transaction as unusual), so this variation has little 
value.

In terms of offchain technology, if the delegator remains online, the delegatee 
may present a witness satisfying S to the delegator, and ask the delegator to 
provide an alternate transaction that spends A directly without spending D and 
outputs to whatever the delegatee wants.
The delegator cannot refuse since the delegatee can always use the 
`SIGHASH_NONE` signature and spend to whatever it decides provided it can 
present a witness satisfying S.
This is basically a typical "close transaction" for layer 2 technology.
On the other hand, one generalized use-case for delegation would be if the 
delegator suspects it may not be online or able to sign with the delegator key, 
so this variation has reduced value as well.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Delegated signatures in Bitcoin within existing rules, no fork required

2021-03-16 Thread ZmnSCPxj via bitcoin-dev
Good morning Jeremy,

This is a very cool idea!

> Multiple Delegates: By signing a txn with several delegate outputs, it is 
> possible to enforce multiple disparate conditions. Normally this is 
> superfluous -- why not just concatenate S1 and S2? The answer is that you may 
> have S1 require a relative height lock and S2 require a relative time lock 
> (this was one of the mechanisms investigated for powswap.com).

I am somewhat confused by this.
Do you mean that the delegating transaction (the one signed using the script of 
A with `SIGHASH_NONE`) has as input (consumes) multiple delegate outputs D1, 
D2... with individual scripts S1, S2... ?

> Sequenced Contingent Delegation: By constructing a specific TXID that may 
> delegate the coins, you can make a coin's delegation contingent on some other 
> contract reaching a specific state. For example, suppose I had a contract 
> that had 100 different possible end states, all with fixed outpoints at the 
> end. I could delegate coins in different arrangements to be claimable only if 
> the contract reaches that state. Note that such a model requires some level 
> of coordination between the main and observing contract as each Coin delegate 
> can only be claimed one time.

Does this require that each contract end-state have a known TXID at setup time?

> Redelegating: This is where A delegates to S, S delegates to S'. This type of 
> mechanism most likely requires the coin to be moved on-chain to the script (A 
> OR S or S'), but the on-chain movement may be delayed (via presigned 
> transactions) until S' actually wants to do something with the coin.

The script `A || S || S'` suggests that delegation effectively still allows the 
original owner to control the coin, right?
Which I suppose is implied by "Revocation" above.

Regards,
ZmnSCPxj



Re: [bitcoin-dev] PSA: Taproot loss of quantum protections

2021-03-15 Thread ZmnSCPxj via bitcoin-dev
Good morning aj,

> On Tue, Mar 16, 2021 at 08:01:47AM +0900, Karl-Johan Alm via bitcoin-dev 
> wrote:
>
> > It may initially take months to break a single key.
>
> From what I understand, the constraint on using quantum techniques to
> break an ECC key is on the number of bits you can entangle and how long
> you can keep them coherent -- but those are both essentially thresholds:
> you can't use two quantum computers that support a lower number of bits
> when you need a higher number, and you can't reuse the state you reached
> after you collapsed halfway through to make the next run shorter.
>
> I think that means having a break take a longer time means maintaining
> the quantum state for longer, which is harder than having it happen
> quicker...
>
> So I think the only way you get it taking substantial amounts of time to
> break a key is if your quantum attack works quickly but very unreliably:
> maybe it takes a minute to reset, and every attempt only has probability
> p of succeeding (ie, random probability of managing to maintain the
> quantum state until completion of the dlog algorithm), so over t minutes
> you end up with probability 1-(1-p)^t of success.
>
> For 50% odds after 1 month with 1 minute per attempt, you'd need a 0.0016%
> chance per attempt, for 50% odds after 1 day, you'd need 0.048% chance per
> attempt. But those odds assume you've only got one QC making the attempts
> -- if you've got 30, you can make a month's worth of attempts in a day;
> if you scale up to 720, you can make a month's worth of attempts in an
> hour, ie once you've got one, it's a fairly straightforward engineering
> challenge at that point.
>
> So a "slow" attack simply doesn't seem likely to me. YMMV, obviously.

What you describe seems to match mining in its behavior: probabilistic, and 
scalable by pushing more electricity into more devices.

From this point-of-view, it seems to me that the amount of energy to mount a 
"fast" attack may eventually approach the energy required by mining, in which 
case someone who possesses the ability to mount such an attack may very well 
find it easier to just 51% the network (since that can be done today without 
having to pour R satoshis into developing practical quantum computers).

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Taproot NACK

2021-03-15 Thread ZmnSCPxj via bitcoin-dev
Good morning JAMES,

> No-one has yet demonstrated that Conjoin or using Wasabi wallet is secure if 
> it relies on third-parties. Are the transaction not forwarded partially 
> signed as with an SPV wallet? So it is possible the SPV server cannot 
> redirect funds if dishonest? SPV wallets are secure producing fully signed 
> transactions. A ConJoin transaction signs for the UTXO and forwards it to be 
> included signed for in another larger transaction with many inputs and outputs

The above point was not answered, so let me answer it here for the benefit of 
you and any readers.

A CoinJoin transaction is a single transaction with many inputs and many 
outputs.

Every input must be signed.

When used to obfuscate, each input is owned by a different actual entity.

In order to prevent fraud, it is necessary that whatever total amount each 
entity puts into the transaction, that entity also gets back out (in 
freshly-generated addresses, which I hope you do not object to) as outputs.

When providing its signature, each entity verifies that its provided address 
exists in some output first before signing out its input.

The provided signature requires all the inputs and all the outputs to exist in 
the transaction.
Because of this, it is not possible to take a "partial" signature for this 
transaction, then change the transaction to redirect outputs elsewhere --- the 
signatures of previous participants become invalid for the modified transaction.

Thus, the security of the CoinJoin cannot be damaged by a third party.

Third parties involved in popular implementations of CoinJoin (such as the 
coordinator in Wasabi) are nothing more than clerical actuaries that take 
signatures of an immutable document, and any attempt by that clerical actuary 
to change the document also destroys any signatures of that document, making 
the modified document (the transaction) invalid.
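
As a small demonstration of why (a toy model only: not the real transaction 
serialization, and an HMAC stands in for the real ECDSA/Schnorr signature), any 
signature that commits to all inputs and outputs stops verifying the moment the 
coordinator alters even one output:

import hashlib, hmac

def tx_digest(inputs, outputs):
    # Toy stand-in for the sighash: commits to every input and every output.
    return hashlib.sha256(repr((inputs, outputs)).encode()).digest()

def toy_sign(key, digest):
    return hmac.new(key, digest, hashlib.sha256).digest()

def toy_verify(key, digest, sig):
    return hmac.compare_digest(sig, toy_sign(key, digest))

inputs  = ["alice:1.0 BTC", "bob:1.0 BTC", "carol:1.0 BTC"]
outputs = ["alice-fresh:1.0 BTC", "bob-fresh:1.0 BTC", "carol-fresh:1.0 BTC"]
alice_sig = toy_sign(b"alice-key", tx_digest(inputs, outputs))

# The coordinator tries to redirect Alice's output to itself:
tampered = ["coordinator:1.0 BTC", "bob-fresh:1.0 BTC", "carol-fresh:1.0 BTC"]
print(toy_verify(b"alice-key", tx_digest(inputs, outputs),  alice_sig))  # True
print(toy_verify(b"alice-key", tx_digest(inputs, tampered), alice_sig))  # False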

> . Also, none of those you mention is inherently a Privacy Technology. 
> Transparency is one of the key articles of value in Bitcoin because it 
> prevents fraud.

The prevention of fraud simply requires that all addition is validatable.
It does not require that the actual values involved are visible in cleartext.

Various cryptographic techniques already exist which allow the verifiable 
addition of encrypted values ("homomorphisms").
You can get 1 * G and 2 * G, add the resulting points, and compare it to 3 * G 
and see that you get the same point, yet if you did not know exactly what G was 
used, you would not know that you were checking the addition of 1 + 2 = 3.
That is the basis of a large number of privacy coins.

At the same time, if I wanted to *voluntarily* reveal this 1 + 2 = 3, I could 
reveal the numbers involved and the point G I used, and any validator 
(including, say, a government taxing authority) can check that the points 
recorded on the blockchain match with what I claimed.
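
As a toy illustration of such a homomorphism (a sketch only: modular 
exponentiation in a prime field stands in for elliptic-curve scalar 
multiplication, so "adding points" becomes multiplying group elements; there is 
no blinding factor here, so unlike a real Pedersen-style commitment this toy is 
not hiding by itself):

p = 2**127 - 1   # a Mersenne prime, fine for illustration
g = 3            # the public generator, playing the role of "G"

def commit(value):
    return pow(g, value, p)

# "1 * G" plus "2 * G" equals "3 * G", checked entirely on the commitments:
assert commit(1) * commit(2) % p == commit(1 + 2)

# Voluntary disclosure: reveal the values (and which generator was used) and
# any validator can recompute the commitments recorded onchain and confirm
# that the addition was done honestly.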

For the prevention of fraud, we should strive to require as *little* 
transparency as possible, while allowing users to *voluntarily* reveal 
information.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

> It was pointed out to me that this discussion is largely moot as the software 
> complexity for Bitcoin Core to ship an
> option like this is likely not practical/what people would wish to see.
>
> Bitcoin Core does not have infrastructure to handle switching consensus rules 
> with the same datadir - after running with
> uasf=true for some time, valid blocks will be marked as invalid, and 
> additional development would need to occur to
> enable switching back to uasf=false. This is complex, critical code to get 
> right, and the review and testing cycles
> needed seem to be not worth it.

Without implying anything else, this can be worked around by a user maintaining 
two `datadir`s and running two clients.
This would have an "external" client running an LOT=X (where X is whatever the 
user prefers) and an "internal" client that is at most 0.21.0, which will not 
impose any LOT rules.
The internal client then uses the `connect=` directive to connect locally to the 
external client and connects only to that client, using it as a firewall.
The external client can be run pruned in order to reduce diskspace resource 
usage (the internal client can remain unpruned if that is needed by the user, 
e.g. for LN implementations that need to look up arbitrary short-channel-ids).
Bandwidth usage should be same since the internal client only connects to the 
external client and the OS should optimize that case.
CPU usage is doubled, though.
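
A rough sketch of how the two configurations might look on one machine 
(illustrative only; `datadir`, `prune`, `port`, `rpcport`, `connect` and 
`listen` are standard Bitcoin Core options, while the `lot=` line is a 
placeholder for whatever build or option actually enforces the user's chosen 
LOT behaviour):

# external.conf --- the "firewall" node, enforcing the user's LOT choice
datadir=/home/user/.bitcoin-external
prune=550                # pruned to keep disk usage low
port=8333
rpcport=8332
lot=true                 # placeholder: hypothetical LOT-enforcing option/build

# internal.conf --- unmodified <= 0.21.0 node, used by wallets/LN as usual
datadir=/home/user/.bitcoin-internal
connect=127.0.0.1:8333   # only ever peer with the external node
listen=0
port=8444
rpcport=8443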

(the general idea came from gmax, just to be clear, though the below use is 
from me)

Then the user can select LOT=C or LOT=!C (where C is whatever Bitcoin Core 
ultimately ships with) on the external client based on the user preferences.

If Taproot is not MASF-activated and LOT=!U is what dominates later (where U is 
whatever the user decided on), the user can decide to just destroy the external 
node and connect the internal node directly to the network (optionally 
upgrading the internal node to LOT=!U) as a way to "change their mind in view 
of the economy".
The internal node will then follow the dominant chain.


Regards,
ZmnSCPxj

>
> Instead, the only practical way to ship such an option would be to treat it 
> as a separate chain (the same way regtest,
> testnet, and signet are treated), including its own separate datadir and the 
> like.
>
> Matt
>
> On 2/19/21 09:13, Matt Corallo via bitcoin-dev wrote:
>
> > (Also in response to ZMN...)
> > Bitcoin Core has a long-standing policy of not shipping options which shoot 
> > yourself in the foot. I’d be very disappointed if that changed now. People 
> > are of course more than welcome to run such software themselves, but I 
> > anticipate the loud minority on Twitter and here aren’t processing enough 
> > transactions or throwing enough financial weight behind their decision for 
> > them to do anything but just switch back if they find themselves on a chain 
> > with no blocks.
> > There’s nothing we can (or should) do to prevent people from threatening to 
> > (and possibly) forking themselves off of bitcoin, but that doesn’t mean we 
> > should encourage it either. The work Bitcoin Core maintainers and 
> > developers do is to recommend courses of action which they believe have 
> > reasonable levels of consensus and are technically sound. Luckily, there’s 
> > strong historical precedent for people deciding to run other software 
> > around forks, so misinterpretation is not very common (just like there’s 
> > strong historical precedent for miners not unilaterally deciding forks in 
> > the case of Segwit).
> > Matt
> >
> > > On Feb 19, 2021, at 07:08, Adam Back a...@cypherspace.org wrote:
> > >
> > > > would dev consensus around releasing LOT=false be considered as 
> > > > "developers forcing their views on users"?
> > >
> > > given there are clearly people of both views, or for now don't care
> > > but might later, it would minimally be friendly and useful if
> > > bitcoin-core has a LOT=true option - and that IMO goes some way to
> > > avoid the assumptive control via defaults.
> >
> > > Otherwise it could be read as saying "developers on average
> > > disapprove, but if you, the market disagree, go figure it out for
> > > yourself" which is not a good message for being defensive and avoiding
> > > mis-interpretation of code repositories or shipped defaults as
> > > "control".
> >
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev




Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread ZmnSCPxj via bitcoin-dev
Good morning list,

> This is absolutely the case, however note that the activation method itself 
> is consensus code which executes as a part
> of a fork, and one which deserves as much scrutiny as anything else. While 
> taproot is a model of how a soft-fork should
> be designed, this doesn't imply anything about the consensus code which 
> represents the activation thereof.
>
> Hence all the debate around activation - ultimately its also defining a fork, 
> and given the politics around it, one
> which almost certainly carries significantly more risk than Taproot.
>
> Note that I don't believe anyone is advocating for "try to activate, and if 
> it fails, move on". Various people have
> various views on how conservative and timelines for what to do at that point, 
> but I believe most in this discussion are
> OK with flag-day-based activation (given some level of care) if it becomes 
> clear Taproot is supported by a vast majority
> of Bitcoin users and is only not activating due to lagging miner upgrades.


Okay, I am backing off this proposal to force the LOT=false/true decision on 
users, it was not particularly serious anyway (and was more a reaction to the 
request of Samson Mow to just release both versions, which to my mind is no 
different from such a thing).


Nonetheless, as a thought experiment: the main issue is that some number of 
people run LOT=true when miners do not activate Taproot early for some reason 
and we decide to leave LOT=false for this particular bit until it times out.
The issue is that those people will get forked off the network at the end of 
this particular deployment attempt.

I suspect those people will still exist whether or not Bitcoin Core supports 
any kind of LOT=true mode.
("Never again" for some people)

How do we convince them to go run LOT=false instead of getting themselves 
forked off?
Or do we simply let them?

(and how is that different from asking each user to decide on LOT=false/true 
right now?)
("reasonable default"?)
(fundamentally speaking you still have to educate the users on the 
ramifications of accepting the default and changing it.)


Another thought experiment: From the point of view of a user who strongly 
supports LOT=true, would dev consensus around releasing LOT=false be considered 
as "developers forcing their views on users"?
Why or why not?


Regards,
ZmnSCPxj

> Matt
>
> On 2/18/21 10:04, Keagan McClelland wrote:
>
> > Hi all,
> > I think it's important for us to consider what is actually being considered 
> > for activation here.
> > The designation of "soft fork" is accurate but I don't think it adequately 
> > conveys how non-intrusive a change like this
> > is. All that taproot does (unless I'm completely missing something) is 
> > imbue a previously undefined script version with
> > actual semantics. In order for a chain reorg to take place it would mean 
> > that someone would have to have a use case for
> > that script version today. This is something I think that we can easily 
> > check by digging through the UTXO set or
> > history. If anyone is using that script version, we absolutely should not 
> > be using it, but that doesn't mean that we
> > can't switch to a script version that no one is actually using.
> > If no one is even attempting to use the script version, then the change has 
> > no effect on whether a chain split occurs
> > because there is simply no block that contains a transaction that only some 
> > of the network will accept.
> > Furthermore, I don't know how Bitcoin can stand the test of time if we 
> > allow developers who rely on "undefined behavior"
> > (which the taproot script version presently is) to exert tremendous 
> > influence over what code does or does not get run.
> > This isn't a soft fork that makes some particular UTXO's unspendable. It 
> > isn't one that bans miners from collecting
> > fees. It is a change that means that certain "always accept" transactions 
> > actually have real conditions you have to
> > meet. I can't imagine a less intrusive change.
> > On the other hand, choosing to let L=F be a somewhat final call sets a very 
> > real precedent that 10% of what I estimate
> > to be 1% of bitcoin users can effectively block any change from here on 
> > forward. At that point we are saying that miners
> > are in control of network consensus in ways they have not been up until 
> > now. I don't think this is a more desirable
> > outcome to let ~0.1% of the network get to block /non-intrusive/ changes 
> > that the rest of the network wants.
> > I can certainly live with an L=F attempt as a way to punt on the 
> > discussion, maybe the activation happens and this will
> > all be fine. But if it doesn't, I hardly think that users of Bitcoin are 
> > just going to be like "well, guess that's it
> > for Taproot". I have no idea what ensues at that point, but probably 
> > another community led UASF movement.
> > I wasn't super well educated on this stuff back in '17 when Segwit went 

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-18 Thread ZmnSCPxj via bitcoin-dev
Good morning all,

> "An activation mechanism is a consensus change like any other change, can be 
> contentious like any other change, and we must resolve it like any other 
> change. Otherwise we risk arriving at the darkest timeline."
>
> Who's we here?
>
> Release both and let the network decide.

A thing that could be done, without mandating either LOT=true or LOT=false, 
would be to have a release that requires a `taprootlot=1` or `taprootlot=0` and 
refuses to start if the parameter is not set.

This assures everyone that neither choice is being forced on users, and instead 
what is being forced on users, is for users to make that choice themselves.
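
A minimal sketch of that "refuse to start until the user chooses" behaviour 
(Python purely for illustration; the real change would of course live in 
Bitcoin Core's option handling, and the option name is simply taken from the 
paragraph above):

import sys

def parse_taprootlot(args):
    for arg in args:
        if arg == "-taprootlot=1":
            return True     # LOT=true
        if arg == "-taprootlot=0":
            return False    # LOT=false
    sys.exit("Error: you must explicitly choose -taprootlot=1 or -taprootlot=0")

lockinontimeout = parse_taprootlot(sys.argv[1:])
print("starting with lockinontimeout =", lockinontimeout)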

Regards,
ZmnSCPxj

>
> On Thu, Feb 18, 2021 at 3:08 AM Michael Folkson via bitcoin-dev 
>  wrote:
>
> > Thanks for your response Ariel. It would be useful if you responded to 
> > specific points I have made in the mailing list post or at least quote 
> > these ephemeral "people" you speak of. I don't know if you're responding to 
> > conversation on the IRC channel or on social media etc.
> >
> > > The argument comes from a naive assumption that users MUST upgrade to the 
> > > choice that is submitted into code. But in fact this isn't true and some 
> > > voices in this discussion need to be more humble about what users must or 
> > > must not run.
> >
> > I personally have never made this assumption. Of course users aren't forced 
> > to run any particular software version, quite the opposite. Defaults set in 
> > software versions matter though as many users won't change them.
> >
> > > Does no one realize that it is a very possible outcome that if LOT=true 
> > > is released there may be only a handful of people that begin running it 
> > > while everyone else delays their upgrade (with the very good reason of 
> > > not getting involved in politics) and a year later those handful of 
> > > people just become stuck at the moment of MUST_SIGNAL, unable to mine new 
> > > blocks?
> >
> > It is a possible outcome but the likely outcome is that miners activate 
> > Taproot before LOT is even relevant. I think it is prudent to prepare for 
> > the unlikely but possible outcome that miners fail to activate and hence 
> > have this discussion now rather than be unprepared for that eventuality. If 
> > LOT is set to false in a software release there is the possibility (T2 in 
> > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2021-February/018380.html)
> >  of individuals or a proportion of the community changing LOT to true. In 
> > that sense setting LOT=false in a software release appears to be no more 
> > safe than LOT=true.
> >
> > > The result: a wasted year of waiting and a minority of people who didn't 
> > > want to be lenient with miners by default.
> >
> > There is the (unlikely but possible) possibility of a wasted year if LOT is 
> > set to false and miners fail to activate. I'm not convinced by this 
> > perception that LOT=true is antagonistic to miners. I actually think it 
> > offers them clarity on what will happen over a year time period and removes 
> > the need for coordinated or uncoordinated community UASF efforts on top of 
> > LOT=false.
> >
> > > An activation mechanism is a consensus change like any other change, can 
> > > be contentious like any other change, and we must resolve it like any 
> > > other change. Otherwise we risk arriving at the darkest timeline.
> >
> > I don't know what you are recommending here to avoid "this darkest 
> > timeline". Open discussions have occurred and are continuing and in my 
> > mailing list post that you responded to **I recommended we propose 
> > LOT=false be set in protocol implementations such as Bitcoin Core**. I do 
> > think this apocalyptic language isn't particularly helpful. In an open 
> > consensus system discussion is healthy, we should prepare for bad or worst 
> > case scenarios in advance and doing so is not antagonistic or destructive. 
> > Mining pools have pledged support for Taproot but we don't build secure 
> > systems based on pledges of support, we build them to minimize trust in any 
> > human actors. We can be grateful that people like Alejandro have worked 
> > hard on taprootactivation.com (and this effort has informed the discussion) 
> > without taking pledges of support as cast iron guarantees.
> >
> > TL;DR It sounds like you agree with my recommendation to set LOT=false in 
> > protocol implementations in my email :)
> >
> > On Thu, Feb 18, 2021 at 5:43 AM Ariel Lorenzo-Luaces 
> >  wrote:
> >
> > > Something what strikes me about the conversation is the emotion 
> > > surrounding the letters UASF.
> > > It appears as if people discuss UASF as if it's a massive tidal wave of 
> > > support that is inevitable, like we saw during segwit activation. But the 
> > > actual definition is "any activation that is not a MASF".
> > > A UASF can consist of a single node, ten nodes, a thousand, half of all 
> > > nodes, all business' nodes, or even all the 

Re: [bitcoin-dev] Libre/Open blockchain / cryptographic ASICs

2021-02-13 Thread ZmnSCPxj via bitcoin-dev
Good morning Luke,

> > Another point to ponder is test modes.
> > In mass production you need test modes.
>
> > (Sure, an attacker can try targeted ESD at the `TESTMODE` flip-flop 
> > repeatedly, but this risks also flipping other scan flip-flops that contain 
> > the data that is being extracted, so this might be sufficient protection in 
> > practice.)
>
> if however the ASIC can be flipped into TESTMODE and yet it carries on
> otherwise working, an algorithm can be re-run and the exposed data
> will be clean.

But in most testmodes I have seen (and designed) all clocks are driven 
externally from a different pin (usually the serial interface) when in testmode.
If the CPU clock is now controlled by the attacker, how do you run any kind of 
algorithm?

(This could be an artifact of how my old design company designed testmodes, 
YMMV.)

Really the concern here is that if testmode is entered while the CPU has key 
material loaded into registers or caches, then it is possible, if those 
registers/caches are in the scan chain, to exfiltrate data.
Does not matter if the chip is now in a mode that cannot execute algorithms, if 
it was doing any kind of computation involving privkeys (including say deriving 
its public key so that PC-side hardware can get the `xpub`) then key material 
may be in scan chain registers, clock is now controlled by the attacker, and 
possibly scan mode as well (which disables combinational circuitry thus none of 
your algorithms can run).

>
> > If you are really going to open-source the hardware design then the layout
> > is also open and attackers can probably target specific chip area for ESD
> > pulse to try a flip-flop upset, so you need to be extra careful.
>
> this is extremely valuable advice. in the followup [1] you describe a
> gating method: this we have already deployed on a couple of places in
> case the Libre Cell Library (also being developed at the same time by
> Staf Verhaegen of Chips4Makers) causes errors: we do not want, for
> example, an error in a Cell Library to cause a permanent HI which
> locks us from being able to perform testing of other areas of the
> ASIC.
>
> the idea of being able to actually randomly flip bits inside an ASIC
> from outside is both hilarious and entirely news to me, yet it sounds
> to be exactly the kind of thing that would allow an attacker to
> compromise a hardware wallet. potentially destructively, mind, but
> compromise all the same.

Certainly outside of the old company design philosophy I have seen many 
experts strongly protest against a design philosophy which assumes that any 
flip-flop could randomly switch.

Yet the design philosophy within the old company always had this assumption, 
supposedly (according to in-company lore) because previous engineers had 
actually found the hard way that random bitflips did occur, and for e.g. 
automobile chips the risk was too great to not have strong mitigations:

* State machines had to force unused states into known states.
  For example a state machine with 3 states needs 2 bits of state, but 2 bits 
of state is actually 4 states, so there is a 4th unused state.
  * Not all state machines needed this rule but during planning we had to 
identify state machines that needed this rule, and often we just targeted 
having 2^n states just to ensure that there were no unused states.
  * I even suggested the use of ECC encoding for important state machines and 
it was something being investigated at the time I left.
* State machines that otherwise did not need the above rule were strongly 
encouraged to clear state at display frame vsync.
  This ensured that any unexpected states they had would only last up to one 
display frame, which was considered acceptable.
* Flip-flops that held settings were periodically reloaded at each display 
frame vsync from a flash memory (which apparently was a lot more immune to 
bitflips).

It could be an artifact as well that the company had its own in-house foundry 
rather than delegate out to TSMC or whatnot --- maybe the technology we had was 
just suckier than state-of-the-art so bitflips were more common.

The reason why this stuck to mind is because at one time we had a DS test where 
shooting the ESD gun could sometimes cause the chip to fail (blank display) 
until reset, when the expectation was that at most it would flicker for one 
display frame.
And afterwards we had to go through the entire RTL looking for which state 
machine or settings register was the culprit.
I even wrote a little Verilog-PLI plugin that would inject deterministically 
random data into flip-flops in the model to try to catch it.
Eventually we found a bunch of possible root causes, and on the next DS 
iteration testing we had fun shooting the chip with the ESD gun over and over 
again and sighing in relief that the display was not failing for more than one 
frame.

The chip was a display driver for automotive, apparently at the time cars were 
starting to transition to 

Re: [bitcoin-dev] Libre/Open blockchain / cryptographic ASICs

2021-02-12 Thread ZmnSCPxj via bitcoin-dev
Good morning Luke,

Another thing we can do with scan mode would be something like the below 
masking:

input CLK, RESET_N;
input TESTMODE;
input SCANOUT_INTERNAL;
output SCANOUT_PAD;

reg gating;
wire n_gating = gating && TESTMODE;
always_ff @(posedge CLK, negedge RESET_N) begin
  if (!RESET_N) gating <= 1'b1; /* RESET-HIGH */
  else          gating <= n_gating;
end

assign SCANOUT_PAD = SCANOUT_INTERNAL && gating;

The `gating` means that after reset, if we are not in test mode, `gating` 
becomes 0 permanently and prevents any scan data from being extracted.
Assuming scan is not used in normal operation (it should not) then inadvertent 
ESD noise on the `gating` flip-flop would not have an effect.

Output being combinational should be fine as the output is "just" an AND gate, 
as long as `gating` does not transition from 0->1 (impossible in normal 
operation, only at reset condition) then glitching is impossible, and when scan 
is running then `TESTMODE` should not be exited which means `gating` should 
remain high as well, thus output is still glitch-free.

Since the flip-flop resets to 1, and in some technologies I have seen a 
reset-to-0 FF is slightly smaller than a reset-to-1 FF, it might do good to 
invert the sense of `gating` instead, and use a NOR gate at the output (which 
might also be smaller than an AND gate, look it up in the technology you are 
targeting).
On the other hand the above is a tiny circuit already and it is unlikely you 
need more than one of it (well for large enough ICs you might want more than 
one scan chain but still, even the largest ICs we handled never had more than 8 
scan chains, usually just 4 to 6) so overoptimizing this is not necessary.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Libre/Open blockchain / cryptographic ASICs

2021-02-11 Thread ZmnSCPxj via bitcoin-dev

Good morning Luke,

> > (to be fair, there were tools to force you to improve coverage by injecting 
> > faults to your RTL, e.g. it would virtually flip an `&&` to an `||` and if 
> > none of your tests signaled an error it would complain that your test 
> > coverage sucked.)
>
> nice!

It should be possible for a tool to be developed to parse a Verilog RTL design, 
then generate a new version of it with one change.
Then you could add some automation to run a set of testcases around mutated 
variants of the design.
For example, it could create a "wrapper" module that connects to an unmutated 
differently-named version of the design, and various mutated versions, wire all 
their inputs together, then compare outputs.
If the testcase could trigger an output of a mutated version to be different 
from the reference version, then we would consider that mutation covered by 
that testcase.
Possibly that could be done with Verilog-2001 file writing code in the wrapper 
module to dump out which mutations were covered, then a summary program could 
just read in the generated file.
Or Verilog plugins could be used as well (Icarus supports this, that is how it 
implements all `$` functions).

A drawback is that just because an output is different does not mean the 
testcase actually ***checks*** that output.
If the testcase does not detect the diverging output it could still not be 
properly covering that.

The point of this is to check coverage of the tests.
Not sure how well this works with formal validation.
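
A very rough sketch of such a mutate-and-retest driver (Python, illustrative 
only: the RTL filename and the `./run_tests.sh` test command are made-up 
placeholders, only the single `&&` -> `||` mutation is shown, and a real tool 
would parse the Verilog rather than do raw text substitution):

import pathlib, subprocess

RTL      = pathlib.Path("design.v")    # hypothetical RTL file under test
TEST_CMD = ["./run_tests.sh"]          # hypothetical: nonzero exit = a test failed

def mutants(src):
    # Yield one mutated copy of the source per occurrence of '&&' -> '||'.
    start = 0
    while True:
        i = src.find("&&", start)
        if i < 0:
            return
        yield i, src[:i] + "||" + src[i + 2:]
        start = i + 2

original = RTL.read_text()
killed = survived = 0
try:
    for offset, mutated in mutants(original):
        RTL.write_text(mutated)
        if subprocess.run(TEST_CMD).returncode != 0:
            killed += 1      # some testcase noticed the injected fault
        else:
            survived += 1    # coverage hole: nothing noticed the change
            print("surviving mutant at byte offset", offset)
finally:
    RTL.write_text(original)  # always restore the pristine RTL

print(f"killed={killed} survived={survived}")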



> > Synthesis in particular is a black box and each vendor keeps their 
> > particular implementations and tricks secret.
>
> sigh.  i think that's partly because they have to insert diodes, and buffers, 
> and generally mess with the netlist.
>
> i was stunned to learn that in a 28nm ASIC, 50% of it is repeater-buffers!

Well, that surprises me as well.

On the other hand, smaller technologies consistently have lower raw output 
current driving capability due to the smaller size, and as trace width goes 
down and frequency goes up they stop acting like ideal 0-impedance traces and 
start acting more like transmission lines.
So I suppose at some point something like that would occur and I should not 
actually be surprised.
(Maybe I am more surprised that it reached that level at that technology size, 
I would have thought 33% at 7nm.)

In the modules where we were doing manual netlist+layout, we used inverting 
buffers instead (slightly smaller than non-inverting buffers; in most 
technologies a non-inverting buffer is just an inverter followed by an 
inverting buffer).
This was an advantage of manual design, since it looks like synthesis tools are 
not willing to invert the contents of intermediate flip-flops even if using an 
inverting buffer rather than a non-inverting one could give a theoretical 
speed+size advantage (it looks like synthesis optimization starts at the output 
of flip-flops and ends at their input, so a manual designer could achieve 
slightly better performance if they were willing to invert an intermediate 
flip-flop).
Another advantage was that inverting latches were smaller in the technology we 
were using than non-inverting latches, so it was perfectly natural for us to use 
an inverting latch and an inverting buffer on those parts where we needed higher 
fan-out (it was equivalent to a "custom" latch that had higher-than-normal 
driving capability).

Scan chain test generation was impossible though, as those require flip-flops, 
not latches.
Fortunately this was "just" deserialization of high-frequency low-width data 
with no transformation of the data (that was done after the deserialization, at 
lower clock speeds but higher data width, in pure RTL so flip-flops), so it was 
judged acceptable that it would not be covered by scan chain, since scan chain 
is primarily for testing combinational logic between flip-flops.
So we just had flip-flops at the input, and flip-flops at the output, and 
forced all latches to pass-through mode, during scan mode.
We just needed to have enough coverage to uncover stuck-at faults (which was 
still a pain, since additional test vectors slow down manufacturing so we had 
to reduce the test vectors to the minimum possible) in non-scan-mode testing.

Man, making ASICs was tough.


>
> plus, they make an awful lot of money, it is good business.
>
> > Pointing some funding at the open-source Icarus Verilog might also fit, as 
> > it lost its ability to do synthesis more than a decade ago due to inability 
> > to maintain.
>
> ah i didn't know it could do synthesis at all! i thought it was simulation 
> only.

Icarus was the only open-source synthesis tool I could find back then, and it 
dropped synthesis capability fairly early due to maintenance burden (I never 
managed to get the old version with synthesis compiled and never managed actual 
synthesis on it, so my knowledge of it is theoretical).


There is an argument that open-source software is not truly 

Re: [bitcoin-dev] Libre/Open blockchain / cryptographic ASICs

2021-02-02 Thread ZmnSCPxj via bitcoin-dev
Good morning again Luke,



> [my personal favourite is a focus on power-efficiency: battery-operated 
> hand-held devices at or below 3.5 watts (thus not requiring thermal pipes or 
> fans - which tend to break). i have to admit i am a little alarmed at the 
> world-wide energy consumption of bitcoin: personally i would very much prefer 
> to be involved in eco-conscious blockchain and crypto-currency products].

If you mean miner power usage, then power efficiency will not reduce energy 
consumption.

Suppose you are a miner.
Suppose you have access to 1 watt of energy at a particular fixed cost of 1 BTC 
per watt, and you have a current hardware that gives 1 Exahash for 1 watt of 
energy usage.
Suppose this 1 Exahash earns 2 BTC (and that is why you mine, you earn 1 BTC).

Now suppose there is a new technology where a unit of hardware can give 1 
Exahash for only 0.5 watt of energy usage.
Your choices are:

* Buy only one unit, get 1 Exahash for 0.5 watt, thus getting 2.0 BTC while 
only paying 0.5 BTC in electricity fees for a net of 1.5 BTC.
* Buy two units, get 2 Exahash for 1.0 watt, thus getting 4.0 BTC while only 
paying 1.0 BTC in electricity fees for a net of 3.0 BTC.

What do you think your better choice is?

That assumes that difficulty adjustments do not occur.
If difficulty adjustments are put into consideration, then if everyone *else* 
does the second choice, global mining hashrate doubles and the difficulty 
adjustment matches, and if you took the first choice, you would end up earning 
far less than 2.0 BTC after the difficulty adjustment.
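
The arithmetic, as a small sketch (numbers taken from the example above; the 
difficulty-adjusted figure simply treats your revenue as proportional to your 
share of global hashrate):

COST_PER_WATT = 1.0   # BTC per watt, from the example
REWARD_PER_EH = 2.0   # BTC earned per Exahash before any difficulty adjustment

def net_profit(units, watts_per_unit, eh_per_unit):
    revenue = units * eh_per_unit * REWARD_PER_EH
    power   = units * watts_per_unit * COST_PER_WATT
    return revenue - power

print(net_profit(1, 0.5, 1.0))   # one new unit:  2.0 - 0.5 = 1.5 BTC
print(net_profit(2, 0.5, 1.0))   # two new units: 4.0 - 1.0 = 3.0 BTC

# After a difficulty adjustment the total reward is fixed, and your share of it
# is your share of global hashrate; if everyone else doubles and you do not,
# your revenue roughly halves:
def adjusted_revenue(my_eh, global_eh, total_reward_btc):
    return total_reward_btc * my_eh / global_eh

print(adjusted_revenue(1.0, 2.0, 2.0))   # roughly 1.0 BTC instead of 2.0 BTC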

Thus, any rational miner will just pack more miners in the same number of watts 
rather than reduce their watt consumption.
There may be physical limits involved (only so many miners you can put in an 
amount of space, or whatever other limits) but absent those, a rational miner 
will not reduce their energy expenditure with higher-efficiency units, they 
will buy more units.

Thus, increasing power efficiency for mining does not reduce the amount of 
actual energy that will be consumed by Bitcoin mining.

If you are not referring to mining energy, then I think a computer running 
BitTorrent software 24/7 would consume about the same amount of energy as a 
fullnode running Bitcoin software 24/7, and I do not think the energy consumed 
thus is actually particularly high relative to a lot of other things.

Regards,
ZmnSCPxj


Re: [bitcoin-dev] Libre/Open blockchain / cryptographic ASICs

2021-02-02 Thread ZmnSCPxj via bitcoin-dev
Good morning Luke,

I happen to have experience designing digital ASICs, mostly pipelined data 
processing.
However my experience is limited to larger geometries and in SystemVerilog.

On the technical side, as I understand it (I have been out of that industry for 
4 years now, so my knowledge may be obsolete) as you approach lower geometries, 
you also start approaching analog design.
In our case we were already manually laying out gates and flip-flops (or 
replacing flip-flops with level-triggered latches and being extra careful with 
clocks) to squeeze performance (and area) for some of the more boring parts 
(i.e. just deserialization of data from a high-frequency low bus width to a 
lower-frequency wide bus width).

Formal correctness proofs are nice, but we were impeded from using those 
because of the need to manually lay out devices, meaning the netlist did not 
correspond exactly to an RTL that formal correctness could understand.
Though to be fair most of the circuit was standard RTL->synthesized netlist and 
formal correctness proofs worked perfectly well for those.
Many of the formal correctness proofs were really about the formal equivalence 
of the netlist to the RTL; the correctness of the RTL was "proved" by 
simulation testing.
(to be fair, there were tools to force you to improve coverage by injecting 
faults to your RTL, e.g. it would virtually flip an `&&` to an `||` and if none 
of your tests signaled an error it would complain that your test coverage 
sucked.)
Things might have changed.

A good RTL would embed SystemVerilog Assertions or PSL Assertions as well.
Some formal verification tools can understand a subset of SystemVerilog 
Assertions / PSL assertions and validate that your RTL conformed to the 
assertions, which would probably help cut down on the need for RTL simulation.

Overall, my understanding is that smaller geometries are needed only if you 
want to target a really high performance / unit cost and performance / energy 
consumption ratios.
That is, you would target smaller geometries for mining.

If you need a secure tr\*sted computing module that does not need to be fast or 
cheap, just very accurate to the required specification, the larger geometries 
should be fine and you would be able to live almost entirely in RTL-land 
without diving into netlist and layout specifications.

A wrinkle here is that licenses for tools from tr\*sted vendors like Synopsys 
or Cadence are ***expensive***.
What is more, you should really buy two sets of licenses, e.g. do logic 
synthesis with Synopsys and then formal verification with Cadence, because you 
do not want to fully tr\*st just one vendor.
Synthesis in particular is a black box and each vendor keeps their particular 
implementations and tricks secret.

Pointing some funding at the open-source Icarus Verilog might also fit, as it 
lost its ability to do synthesis more than a decade ago due to inability to 
maintain.
Icarus Verilog only supports Verilog-2001 and only has very very partial 
support for SystemVerilog (though to be fair, there is little that 
SystemVerilog adds that can be used in RTL --- `always_comb` and `always_ff` 
come to mind, as well as assertions, and I think recent Icarus has started 
experimental support for those for `always` variants).
Note as well that I heard (at the time when I was in the industry) that some 
foundries will not even accept a netlist unless it was created by a synthesis 
tool from one of the major vendors (Synopsys, Cadence, Mentor Graphics, maybe 
more I have forgotten since).

Regards,
ZmnSCPxj

> folks, hi, please do cc me as i am subscribed "digest", apologies for the 
> inconvenience.
>
> i've been speaking on and off with kanzure, asking his advice about a libre / 
> transparently-developed ASIC / SoC, for some time, since meeting a very 
> interesting person at the Barcelona RISC-V Workshop in 2018.
>
> this person pointed out that FIPS-approved algorithms, implemented in 
> FIPS-approved crypto-chips used in hardware wallets to protect billions to 
> trillions in cryptocurrency assets world-wide are basically asking for 
> trouble.  i heard 3rd-hand that the constants used in the original bitcoin 
> protocol were very deliberately changed from those approved by FIPS and the 
> NSA for exactly the reasons that drive people to question whether it is a 
> good idea to trust closed and secretive crypto-chips, no matter how 
> well-intentioned the company that manufactures them.  the person i met was 
> there to "sound out" interested parties willing to help with such a venture, 
> even to the extent of actually buying a Foundry, in order to guarantee that 
> the crypto-chip they would like to see made had not been tampered with at any 
> point during manufacturing.
>
> at FOSDEM2019 i was also approached by a team that also wanted to do a basic 
> "embedded" processor, entirely libre-licensed, only in 350nm or 180nm, with 
> just enough horsepower to do digital signing and so 

Re: [bitcoin-dev] Hardware wallets and "advanced" Bitcoin features

2021-01-14 Thread ZmnSCPxj via bitcoin-dev
Good Morning Kevin,

> Inputs (mainly for pre-signed Tx):
> ==
> Problem: Poisoned inputs are a major risk for HW as they don't know the 
> UTXO set. While this can be exploited for fee
> attacks, it is a bigger threat to pre-signed transactions protocols. Once 
> any input of a (pre-signed)transaction is
> spent, this transaction isn't valid anymore. Most pre-signed transactions 
> protocols are used today as a form of defense
> mechanism, spending any input would mean incapacitating the entire 
> defense mechanism.
> Proposed improvement: for protocols that requires it, keeping track of 
> inputs already signed once would be extremely
> helpful. Going further, most of these protocols require to follow a 
> specific signing order (typically the "clawback"
> first, then the regular spend path) so adding a way to check that a 
> "clawback" has been signed first, with the same
>     input, would be very helpful. All of this on the device itself.

This requires the hardware device to maintain some state in order to remember 
that the clawback has been signed before.
My post on HW devices for Lightning (which you already linked) contains a 
suggestion to use a Merklized persistent data structure to maintain state for 
the hardware device, with a majority of the state storage on the 
trust-minimized software.
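
A conceptual sketch of the device-side policy (Python purely for illustration; 
it ignores the Merklized-storage detail above, real PSBT handling, and how the 
device actually signs --- the point is only the ordering rule):

class ClawbackFirstSigner:
    """Toy policy: refuse to sign the regular spend of an outpoint unless the
    clawback spending that same outpoint has been signed first."""

    def __init__(self):
        self.clawback_signed = set()   # outpoints whose clawback we have signed

    def sign_clawback(self, outpoint, tx):
        signature = self._sign(tx)
        self.clawback_signed.add(outpoint)
        return signature

    def sign_regular_spend(self, outpoint, tx):
        if outpoint not in self.clawback_signed:
            raise PermissionError(
                "refusing: clawback for %r has not been signed yet" % (outpoint,))
        return self._sign(tx)

    def _sign(self, tx):
        return b"placeholder-signature-over-" + repr(tx).encode()

hw = ClawbackFirstSigner()
hw.sign_clawback(("txid", 0), {"path": "clawback"})
hw.sign_regular_spend(("txid", 0), {"path": "regular"})   # allowed
# hw.sign_regular_spend(("txid", 1), ...) would raise PermissionError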

The primary issue here is that we have a base assumption that the hardware 
wallet cannot be sophisticated enough to have Internet access; "do not enter 
seed words on an online device", as the typical advice goes.
Most clawback transactions are time-based, and *must* be broadcast at a 
particular blockheight.
Yet if the hardware wallet cannot be an online device, then it cannot know the 
current blockheight is now at a time when the clawback transaction *must* be 
broadcast.

Thus, the hardware must always tr\*st the software to actually perform the 
clawback in that case.
In protocols where clawbacks are at all necessary, the counterparty can often 
gain an advantage / steal if the clawback is not broadcast in a timely manner, 
thus software corrupted by the counterparty can simply refrain from broadcasting 
the clawback.

If the software on an online device cannot be tr\*sted (which is the model that 
hardware wallets use) then the software cannot be tr\*sted to provide correct 
information on the current blockheight to the offline hardware device, and 
cannot be tr\*sted to use clawback transactions.

It seems to me that we cannot use the same model of "do not enter seed words on 
an online device" for any protocol with a time-based clawback component (and 
honestly there seems to be no clawback mechanism that is not time-based).

Ultimately, I consider the blockchain as a proof of time passing, and as the 
blockchain is an online structure, we can only get at that proof by going 
online and actively searching for the block tip.
Yet going online increases our attack surface.


Regards,
ZmnSCPxj


Re: [bitcoin-dev] Softchains: Sidechains as a Soft Fork via Proof-of-Work Fraud Proofs

2020-12-31 Thread ZmnSCPxj via bitcoin-dev
Good morning Ruben, and list,

First and foremost --- what is the point of sidechains, in the first place?

If sidechains are for experimental new features, then softforking in a new 
sidechain with novel untested new features would be additionally risky --- as 
you note, a bug in the sidechain consensus may cause non-deterministic 
consensus in the sidechain which would propagate into mainchain.
Federated sidechains, which already are enabled on current Bitcoin, are safer 
here, as mainchain will only care about the k-of-n signature that the 
federation agrees on, and if the federation is unable to come to consensus due 
to a sidechain consensus bug, "fails safe" in that it effectively disables the 
peg-out back to mainchain and restricts the consensus problem to the sidechain.

If sidechains are for scaling, then I would like to remind anyone reading this 
that ***blockchains do not scale***, and adding more blockchains for the 
purpose of scaling is *questionable*.
"I have a scaling problem.
I know, I will add a sidechain!
Now I have two scaling problems."

Ultimately, proof-of-work is about energy expenditure, and you would be 
splitting the global energy budget for blockchain security among multiple 
blockchains, thus making each blockchain easier to 51%.

Regards,
ZmnSCPxj

> Hi everyone,
>
> This post describes a fully decentralized two-way peg sidechain design. 
> Activating new sidechains requires a soft fork, hence the name softchains. 
> The key aspect is that all softchains are validated by everyone via 
> Proof-of-Work Fraud Proofs (PoW FP) -- a slow but very efficient consensus 
> mechanism that only requires the validation of disputed blocks. This does 
> increase the validation burden of mainchain full nodes, but only by a minimal 
> amount (~100MB per chain per year). It's similar to drivechains[0], but 
> without the major downside of having to rely on miners, since all Bitcoin 
> full node users can efficiently validate each sidechain.
>
> Proof-of-Work Fraud Proofs
>
> Last year I posted the idea of PoW FP to the Bitcoin mailing list[1][2]. The 
> idea is that we can use the existence of a fork in Bitcoin's PoW as evidence 
> that a block might be invalid (i.e. a proof of potential fraud). Whenever 
> this occurs, we download the block in question to verify whether it was valid 
> (and available), and reject it if it was not. We forego the need for 
> maintaining a UTXO set with UTXO set commitments (such as utreexo[3]), by 
> assuming that the commitment inside the last block to exist in both forks is 
> valid. As a result, we only need to download as many blocks (and their 
> corresponding UTXO set proofs) as there are orphans, which lowers the 
> validation costs considerably compared to running a full node.
>
> In the past 4 months, Forkmonitor has registered 11 stale and invalid 
> blocks[4]. Extrapolating from that data, a PoW FP node verifying Bitcoin 
> consensus would have to download and verify a little over 100MB per year in 
> order to have consensus guarantees that come close to that of a full node:
> - All PoW headers (~4MB per year)
> - 3 x 11 = 33 full blocks (~2MB x 33 = 66MB)
> - UTXO merkle proofs (~1MB x 33 = 33MB with utreexo)
>
> The reason consensus is considered slow, is because we need to allow time for 
> a honest PoW minority to fork away from an invalid chain. If we assume only 
> 1% of all miners are honest, this means consensus slows down by 100x. If you 
> are normally satisfied waiting for 6 confirmations, you now need to wait 600 
> confirmations. The longer you wait, the less honest miners you need.
>
> Softchains
>
> In order to have two-way pegged sidechains, you need a succinct method for 
> proving to the mainchain that a peg-out is valid. PoW FP provides exactly 
> that -- a low-bandwidth way of determining if a chain, and thus a peg-out, is 
> valid. The slowness of PoW FP consensus is not an issue, as peg-outs can be 
> made arbitrarily slow (e.g. one year).
>
> The safest design would be a set of softchains that shares its consensus code 
> with Bitcoin Core, with the addition of UTXO set commitments, and disabling 
> non-taproot address types to minimize certain resource usage issues[5]. All 
> users validate the mainchain as usual with their full node, and all 
> softchains are validated with PoW FP consensus. If a user is interested in 
> directly using a specific softchain, they should run it as a full node in 
> order to get fast consensus.
>
> Peg-ins occur by freezing coins on the mainchain and assigning them to a 
> softchain. Peg-outs occur by creating a mainchain transaction that points to 
> a peg-out transaction on a softchain and waiting for a sufficient number of 
> mainchain confirmations. If the peg-out transaction remains part of the 
> softchain according to PoW FP consensus, the coins become spendable.
>
> The peg-in/peg-out mechanism itself would require a soft fork (the exact 
> design is an open question), and subsequently 

Re: [bitcoin-dev] Out-of-band transaction fees

2020-12-01 Thread ZmnSCPxj via bitcoin-dev
Good morning e, and Sebastian,

So it seems, the goals are the below:

* Someone wants to pay a fee to get a transaction confirmed.
* Miners want to know how much they earn if they confirm a transaction.
* The one paying for the fee does not want to link its other coins to the 
transaction it wants confirmed.

Would that be a fair restatement of the goal?

If so, it seems to me we can make a CoinJoin-like approach using only L1, and 
combine fees by a kind of FeeJoin.

The issue with linking is that if for example the one paying a fee to get a 
transaction confirmed wants to CPFP the transaction, it may need to take 
another UTXO it controls into the child transaction, thereby linking its 
"another UTXO" with the "transaction it wants confirmed".

However, if multiple such individuals were to CoinJoin their transactions, the 
linking becomes much harder to trace.

So a possible mechanism, with a third-party that is trusted only to keep the 
service running (and cannot cheat the system and abscond with the fees and 
leave miners without money) would be:

* The third-party service divides its service into fixed-feerate bins.
* Clients select a preferred feerate bin they want to use.
* For each client:
  * Connects to the service by Tor to register a transaction it wants to have 
CPFPed.
  * Connects to the service by a different Tor circuit to register a UTXO it 
will use to spend fees.
* The server passes through the CPFPed outputs in the whole value.
* The server deducts the fee from the fee-paying UTXO and creates an output 
with all the fees (CPFP output spend, UTXO input spend, CPFP output 
re-creation, UTXO output re-creation) deducted from the UTXO.
* The server gives the resulting transaction to the clients.
* The clients sign the transaction after checking that the CPFPed outputs and 
fee-paying UTXOs they care about are present.

This results in a transaction with many CPFPed inputs and fee-paying UTXOs, and 
no easy way to link the latter with the former.
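
A very rough server-side sketch of the assembly step (illustrative Python only: 
field names are made up, each client's fee share is approximated with a flat 
per-client vsize instead of a real size calculation, and actual transaction 
construction, serialization, and signing are all omitted):

import random

def assemble_feejoin(registrations, feerate_sat_per_vb, vbytes_per_client=150):
    """registrations: list of dicts with
         'cpfp_outpoint', 'cpfp_value'  -- unconfirmed output to be CPFPed
         'cpfp_return_addr'             -- where its full value is recreated
         'fee_outpoint', 'fee_value'    -- UTXO paying this client's fee share
         'fee_change_addr'              -- where the fee change goes"""
    fee_share = feerate_sat_per_vb * vbytes_per_client
    inputs, outputs = [], []
    for r in registrations:
        inputs.append(r["cpfp_outpoint"])
        inputs.append(r["fee_outpoint"])
        # The CPFPed value is passed through in full ...
        outputs.append((r["cpfp_return_addr"], r["cpfp_value"]))
        # ... and the whole fee share comes out of the fee-paying UTXO.
        change = r["fee_value"] - fee_share
        assert change >= 0, "fee-paying UTXO too small for this feerate bin"
        outputs.append((r["fee_change_addr"], change))
    random.shuffle(inputs)    # avoid positional linking of each client's
    random.shuffle(outputs)   # CPFP leg and fee leg in the final transaction
    return {"inputs": inputs, "outputs": outputs,
            "total_fee": fee_share * len(registrations)}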

* Miners and chain analysis cannot link them, as they see only the resulting tx.
* The service cannot link them, as clients talk to them on two separate Tor 
connections.

The above is blatantly the Wasabi way of CoinJoining; using the JoinMarket way 
of CoinJoining should be possible as well, and is left as an exercise to the 
reader.

Now, you have mentioned a number of times that you believe Bitcoin will 
eventually be a settlement layer, and somehow link this with standardized UTXO 
sizes.
But I think the end goal should be:

* To improve Bitcoin blockchain layer privacy.

It should not matter how we achieve this, whether it involves standardized UTXO 
sizes or not; if you want to use this solution, you need to present a good 
reason why this is the best solution for Bitcoin privacy, and better than other 
solutions.

For example, the JoinMarket way of CoinJoining does not require any particular 
standardized UTXO size.
The upcoming SwapMarket that Chris Belcher is working on, also does not require 
such a standardized UTXO size, as it is based as well on the JoinMarket 
technique, where the client can select whatever sizes it wants.
Why should the Bitcoin ecosystem adopt a strict schedule of UTXO sizes for 
privacy, if apparently JoinMarket and SwapMarket can improve privacy without 
this?

Regards,
ZmnSCPxj



Re: [bitcoin-dev] Out-of-band transaction fees

2020-12-01 Thread ZmnSCPxj via bitcoin-dev
Good morning Sebastian and e,

> Hi Eric,
>
> > In paying fees externally one must find another way to associate a fee with 
> > its transaction. This of course increases the possibility of taint, as you 
> > describe in part here:
>
> I'm not sure I follow, do you see a problem beyond the facts that miners
> would need to authenticate somehow? This can be done in a privacy
> preserving way per block. I don't think transactions would need to
> change in any way. The bounty-transaction link is upheld by a third
> party service which the miners have to trust that it will pay out if the
> transaction is included (not perfect, but a business decision they can
> make).

There has to be an association of "how much do I get if I include *this* 
particular transaction" to "*this* particular transaction", so that the miners 
have an informed decision of how much they stand to earn.
Unless fees are also standardized, this can be used to leak the same 
information ("somebody offered this specific amount of money to the bounty 
server, and the bounty server associated this particular amount to this 
particular transaction").


More concerningly, [a trusted third party is hard to get out 
of](https://nakamotoinstitute.org/trusted-third-parties/).
If there are only a few of them, it becomes easy to co-opt, and then a part of 
the mining infrastructure is now controllable from central points of failure.
If there are many of them, then evaluating which ones cheat and which ones do 
not will take a lot of effort, and the system as a whole may not provide 
benefits commensurate to the overall system cost in finding good third parties.


> > It is also the case that the "bounty" must be associated with the 
> > transaction. Even with miner and payer mutual anonymity, the fee inputs and 
> > outputs will be associated with the transaction inputs and outputs by the 
> > miner, rendering the proposal counterproductive.
> > Total transaction sizing is not reduced by paying fees externally, in fact 
> > it would be increased. The only possible reduction would come from 
> > aggregation of fees. Yet it is not clear how that aggregation would occur 
> > privately in less overall block space. At least with integral fees, it's 
> > possible to spend and pay a fee with a single input and output. That is not 
> > the case with externalized fees.
>
> I should have made this more clear, I don't imagine anyone to pay these
> fees with L1 transactions, but rather some L2 system like Lightning or a
> BTC backed chaumian token issued for that purpose by the bounty service
> provider. Even Lightning would be far more private for the use cases I
> described that don't allow fee deduction from inputs. But if one accepts
> more counter party risk with e.g. some centrally pegged chaumian token
> it can be anonymous.

Since such L2 mechanisms themselves are dependent on L1 and require a facility 
to bump up fees for e.g. commitment transactions in Lightning Network, this 
brings up the possibility of getting into a bootstrapping problem, where the 
security of L2 is dependent on the existence of a reliable fee-bumping 
mechanism at L1, but the fee-bumping mechanism at L1 is dependent on the 
security of L2.
Not impossible, but such strange loops give me pause; I am uncertain if we have 
the tools to properly analyze such.

>
> I see that this might not be very useful today, but I imagine a future
> in which Bitcoin is mostly a settlement and reserve layer. This would
> make it feasible to keep most UTXOs in common sizes. Only large, round
> transactions happen on-chain, the rest can happen on L2. This would
> allow tumbling these already evenly-sized UTXOs on spend without toxic
> waste if we can somehow tackle the fee payment problem. I know of the
> following solutions:
>
> -   everyone has to add a second UTXO per input
> -   Someone is chosen fairly at random to pay the total fee
> -   pay a service on L2 to add an input/output for fee payment
> -   out-of-band L2 fee payments
>
> Only L2 fee payments can hide who is involved in such a tumbling
> operation as additional fee inputs that get reused would indicate the
> same entity was present in two tumbling operations. The out-of-band
> approach saves one input and one output and appears more general (e.g.
> could be used like rbf).
>
> This is also not a general solution for fee payments. In many cases it
> will still be preferable to pay on-chain fees. But having the option to
> avoid that in a standardized way could help some protocols imo.
>
> Best,
> Sebastian
>
>
> > -Original Message-
> > From: bitcoin-dev bitcoin-dev-boun...@lists.linuxfoundation.org On Behalf 
> > Of Sebastian Geisler via bitcoin-dev
> > Sent: Monday, November 30, 2020 3:03 PM
> > To: bitcoin-dev@lists.linuxfoundation.org
> > Subject: [bitcoin-dev] Out-of-band transaction fees
> > Hi all,
> > the possibility of out of band transaction fee payments is a well known 
> > 

Re: [bitcoin-dev] CoinPools based on m-of-n Schnorr aggregated signatures

2020-11-15 Thread ZmnSCPxj via bitcoin-dev
Good morning Sridhar,

My understanding is that it is still possible to generate an m-of-n aggregated 
pubkey, it "just" requires an interactive setup (i.e. all n signers have to 
talk to each other and send data a few times to each other) and they have to 
keep extra data around in order to "sign for" the n - m missing signers.
`andytoshi` and `pwuille` can probably teach you the details.

Of course, if you want to trade off the interactive setup + data storage, for 
extra block space and a privacy loss, that seems a reasonable tradeoff to make.

My understanding is that current plan is to implement a `OP_CHECKSIGADD`, where 
your script would be:

   <0> <pubkey1> OP_CHECKSIGADD <pubkey2> OP_CHECKSIGADD <pubkey3> 
OP_CHECKSIGADD <m> OP_EQUAL

However, `OP_CHECKSIGADD` would have individual signatures from the m 
participating signers.
Your `OP_POOL`, as I understand it, would instead have a single m-of-m 
signature.

This adds another tradeoff:

* `OP_CHECKSIGADD` takes up more block space, but each signer can give their 
signature independently without having to enter a signing session with other 
participating signers.
  * For example, this can reduce the number of communication rounds and the 
latency.
  * A participating signer can emit its own signature and then go offline and 
you can still use its signature when you have gotten the required m 
participants.
* `OP_POOL` takes less block space, but all participating signers have to be 
online simultaneously.

I think the fact that `OP_POOL` requires all participating signers to be online 
simultaneously to generate a single signature sort of defeats the purpose, as 
(by my naive understanding, which could be grossly wrong) in the m-of-n key 
setup, the extra data needed would be stored by all participants, so even if 
one participant loses this data any of the others can still provide it.
Interactive setup may not be so onerous if you are doing multiple interactive 
signing sessions later anyway.
So doing a verifiable secret sharing at interactive setup, to generate a single 
pubkey that is just used directly as the pubkey of the UTXO, would end up being 
smaller and more private anyway, and would "just" require interactive setup + 
storage of extra data.

I guess the question is: just how big is the extra data in the m-of-n 
verifiable secret sharing?
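
For intuition on the order of magnitude (and only that), here is a toy Shamir 
secret-sharing sketch in Python; it is *not* verifiable secret sharing and not 
a signing protocol, and using the secp256k1 group order as the field prime is 
an arbitrary choice.
It suggests that each participant's secret "extra data" is essentially one 
field element of roughly 32 bytes, with the verifiable variant adding some 
public commitments on top.

    import secrets

    # Field prime: the secp256k1 group order, chosen only for illustration.
    P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def split(secret, m, n):
        """Split `secret` into n shares, any m of which reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(m - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % P
            return acc
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange-interpolate the sharing polynomial at 0."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for k, (xk, _) in enumerate(shares):
                if k != j:
                    num = (num * -xk) % P
                    den = (den * (xj - xk)) % P
            # pow(den, -1, P) is the modular inverse (Python 3.8+).
            secret = (secret + yj * num * pow(den, -1, P)) % P
        return secret

    secret = secrets.randbelow(P)
    shares = split(secret, m=3, n=5)
    assert reconstruct(shares[:3]) == secret
    # Each participant keeps one (index, share) pair: about 32 bytes of
    # secret state, regardless of how large n is.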

Regards,
ZmnSCPxj


> Hi everyone,
>
> N-of-n multisig transaction using Schnorr aggregate signature is trivial and 
> is similar to the current P2PKH. I would like to propose a model for m-of-n 
> multisig transactions using Schnorr aggregate signatures and use this to 
> enable CoinPools for off-chain scalability.
>
> 1. Creating the pool
>
> A transaction is made on the bitcoin network with an output having the 
> following script:
>
> <pubkey 1> <pubkey 2> .. <pubkey N> N M OP_POOL
>
> Bitcoin network will create a ‘pool’ with all the ‘N’ public keys and note 
> down the threshold M for this pool. This UTXO would be referred as <POOL_ID>.
>
> 2. Depositing money to pool
>
> Deposits can be made to a pool with <POOL_ID> with the following script
>
> <POOL_ID> OP_LOAD_POOL_AGG_PUB_KEY OP_CHECKSIG
>
> 3. Redeeming money from pool
>
> Redeem script would contain the aggregated signature from all signers and the 
> bitmap of signers.
>
> <AGGREGATED_SIGNATURE> <SIGNERS_BITMAP> <POOL_ID> OP_LOAD_POOL_AGG_PUB_KEY OP_CHECKSIG
>
> With <AGGREGATED_SIGNATURE> <SIGNERS_BITMAP> provided by the person that redeems money 
> from a pool, where
>
> <AGGREGATED_SIGNATURE> - is the aggregated signature
>
> <SIGNERS_BITMAP> - Is a bitmap representing whether the member of the pool at 
> position 'i' of bitmap has signed or not(1 = signed, 0 - has not signed)
>
> So we will be introducing two new opcodes:
>
> 1.  OP_POOL - this will be used to create a new coin pool.
>
> 2.  OP_LOAD_POOL_AGG_PUB_KEY - This opcode does three things
>
>
> 1.  loads the pool (POOL_ID)
>
> 2.  checks if there are atleast 'm' signers (based on SIGNERS_BITMAP)
>
> 3.  aggregates the public key of the signers. (based on SIGNERS_BITMAP)
>
>
> The opcode uses the top two elements from the stack- the first element from 
> the stack specifies the POOL_ID to load, which will load the public keys from 
> the pool. This opcode also checks if there are ‘M’ signers(as specified at 
> the time of creation of the pool) and aggregates the public keys that have 
> signed based on SIGNERS_BITMAP using Schnorr aggregate signature scheme and 
> puts back this aggregated public key onto the stack.
>
> SIGNERS_BITMAP is a 32 byte value, and represents a bitmap of which public 
> keys from the pool have signed the transaction.
>
> Having this scheme would enable-
>
> 1.  Scalability of m-of-n multisig transactions - People can deposit money to 
> a pool(with 32 byte SIGNERS_BITMAP, we can allow for 256 possible signers).
>
> 2.  Trust minimized off-chain scalability solutions due to the use of a 
> sufficiently large pool of signers. Most existing pools might allow for only 
> a few signers as having many signers would mean higher transaction cost.
>
>
> Downsides:
>
> 1.  We need to have the public keys of the members of the pool exposed.
>
>
> Despite the 

Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-20 Thread ZmnSCPxj via bitcoin-dev



> Anecdata: c-lightning doesn't allow withdraw to segwit > 0. It seems
> that the contributor John Barboza (CC'd) assumed that future versions
> should be invalid:
>
> if (bip173) {
> bool witness_ok = false;
> if (witness_version == 0 && (witness_program_len == 20 ||
> witness_program_len == 32)) {
> witness_ok = true;
> }
> /* Insert other witness versions here. */

I believe this is actually my code, which was later refactored by John Barboza 
when we were standardizing the `param` system.

This was intended only as a simple precaution against creating non-standard 
transaction, and not an assumption that future versions should be invalid.
The intent is that further `else if` branches would be added for newer witness 
versions and whatever length restrictions they may have, as the `/* Insert 
other witness versions here.  */` comment implies.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Progress on Miner Withholding - FPNC

2020-10-07 Thread ZmnSCPxj via bitcoin-dev
Good morning all,

>
> Below is a novel discussion on block-withholding attacks and FPNC. These are 
> two very simple changes being proposed here that will dramatically impact the 
> network for the better.
>
> But first of all, I'd like to say that the idea for FPNC came out of a 
> conversation with ZmnSCPxj's in regards to re-org stability.  When I had 
> proposed blockchain pointers with the PubRef opcode, he took the time to 
> explain to me concerns around re-orgs and why it is a bigger problem than I 
> initially had thought — and I greatly appreciate this detail.   After 
> touching base with ZmnSCPxj and Greg Maxwell there is an overwhelming view 
> that the current problems that face the network outweigh any theoretical ones.
>
> Currently the elephant in the room is the miner withholding attack. There is 
> an unintended incentive to hold onto blocks because keeping knowledge of this 
> coinbase private gives a greedy miner more time to calculate the next block.  
> Major mining pools are actively employing this strategy because winning two 
> blocks in a row has a much greater payoff than common robbery. This unfair 
> advantage happens each time a new block is found, and provides a kind of 
> home-field advantage for large pools, and contributes to a more centralized 
> network. This odd feature of the bitcoin protocol provides a material 
> incentive to delay transactions and encourages the formation of 
> disagreements. In a sense, withholding is a deception of the computational 
> power of a miner, and by extension a deception of their influence within the 
> electorate.  In effect, other miners are forced to work harder, and when they 
> are successful in finding a 2nd solution of the same height — no one 
> benefits. Disagreement on the bitcoin network is not good for the 
> environment, or for the users, or for honest miners, but is ideal for 
> dishonest miners looking for an advantage.

This is my understanding:

The selfish mining attack described above was already presented and known about 
**many years** ago, with the solution presented here: 
https://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf

The solution was later determined to actually raise the needed threshold to 
33%, not the 25% stated in the paper.

That solution is what is used in the network today.

Implementing floating-point Nakamoto Consensus removes the solution presented 
in the paper, and therefore risks reintroducing the selfish mining attack.

Therefore, floating-point Nakamoto Consensus is a hard NAK.


Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A thought experiment on bitcoin for payroll privacy

2020-10-05 Thread ZmnSCPxj via bitcoin-dev
Good morning Mr. Lee, and list,

> I can then look at the gossiped channels and see the size of the channel 
> between the cut-throat company and the other employee, and from there, guess 
> that this is the bi-weekly salary of that employee.


This can be made an argument against always publishing all channels, so let me 
propose something.

The key identifying information in an invoice is the routehint and the node ID 
itself.

There are already many competing proposals by which short-channel-ids in 
routehints can be obscured.
They are primarily proposed for unpublished channels, but nothing in those 
proposals prevents them from being used for published channels.

The destination node ID is never explicitly put in the onion, only implied by 
the short-channel-id in order to save space.
However, the destination node ID *is* used to encrypt the final hop in the 
onion.
So the receiver node can keep around a small number of throwaway keypairs (or 
get those by HD) and use a throwaway to sign the invoice, and when it is unable 
to decode by its normal node ID, try using one of the throwaway keypairs.

With both of the above, what remains is the feerate settings in the invoice.
If the company node gives different feerates per channel, it is still possible 
to identify which channel is *actually* referred to in the invoice.
What the receiver node can do would be to give a small random increase in 
feerate, which basically overpays the company node, but obscures as well 
*which* channel is actually in the invoice.
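
A sketch of what that feerate fuzzing might look like; the 5% cap, the field 
names, and bumping both the base fee and the proportional fee are my own 
assumptions for illustration, not anything from the BOLT specifications:

    import random

    def fuzzed_routehint_fees(base_fee_msat, fee_ppm, max_overpay=0.05):
        """Quote fees slightly above the true channel fees, so the numbers
        in the invoice no longer single out one published channel."""
        bump = 1.0 + random.uniform(0.0, max_overpay)
        return int(base_fee_msat * bump), int(fee_ppm * bump)

    # For example, a channel charging 1000 msat + 100 ppm might be
    # advertised in the invoice as anything up to 1050 msat + 105 ppm.
    print(fuzzed_routehint_fees(1000, 100))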

Regards,
ZmnSCPxj

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A thought experiment on bitcoin for payroll privacy

2020-10-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Mr. Lee,


> Permanent raises can justify permanently increasing the size of the channel 
> with the employee.

On reflection, this is a bad idea.

Suppose I am a cut-throat employee and I want to have an idea of the bi-weekly 
salary of another employee.

I make some stupid bet, and lose, with the other employee.
I offer to pay the loss of my bet via Lightning, and the other employee, in all 
innocence, issues a Lightning invoice to me.

The Lightning invoice contains the actual node ID of the other employee.
And since I also have a channel with the cut-throat company, I know as well the 
node ID of the cut-throat company.

I can then look at the gossiped channels and see the size of the channel 
between the cut-throat company and the other employee, and from there, guess 
that this is the bi-weekly salary of that employee.

On the other hand --- once the employee has *any* funds at all, they can 
similarly take an offchain-to-onchain swap, and then use the funds to create 
another channel to another part of the network.
The other employee as well can arrange incoming funds on that other channel by 
using offchain-to-onchain swaps to their cold storage.
Thus, as an employee gets promoted and pulls a larger bi-weekly salary, the 
channel with the cut-throat company becomes less and less an indicator of their 
*actual* bi-weekly salary, and there is still some deniability on the exact 
size of the salary.

At the same time, even if I know the node of the other employee, the size of 
all its channels is also still not a very accurate indicator of their salary at 
the throat-cutting company.
For example, it could be a family node, and the other employee and all her or 
his spouses arrange to have their salaries paid to that node.
Or the other employee can also run a neck-reconstruction business on the side, 
and also use the same node.
(Nodelets for the win?)

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A thought experiment on bitcoin for payroll privacy

2020-10-04 Thread ZmnSCPxj via bitcoin-dev
Good morning Thomas,

> "big to-network channel"
>
> nit: should this be "big from-network channel" ?

As Lightning Network channels are bidirectional, it would be more properly 
"to/from-network", but that is cumbersome.
"to-network" is shorter by two characters than "from-network", and would be 
true as well (since the channel is bidirectional, it is both a "to-network" and 
"from-network" channel), thus preferred.


>
> thanks for this explanation.

You are welcome.

Regards,
ZmnSCPxj

> On Sat, Oct 3, 2020 at 11:45 PM ZmnSCPxj via bitcoin-dev
> bitcoin-dev@lists.linuxfoundation.org wrote:
>
> > Good Morning Mr. Lee,
> >
> > > I cannot front up funds of my own to give
> > > them inbound balance because it would consume all of my treasury to lock
> > > up funds.
> >
> > This is not a reasonable assumption!
> > Suppose you have a new hire that you have agreed to pay 0.042BTC every 2 
> > weeks.
> > On the first payday of the new hire, you have to have at least 0.042BTC in 
> > your treasury, somehow.
> > If not, you are unable to pay the new hire, full stop, and you are doomed 
> > to bankruptcy and your problems will disappear soon once your cut-throat 
> > new hire cuts your throat for not paying her or him.
> > If you do have at least 0.042BTC in your treasury, you can make the channel 
> > with the new hire and pay the salary via the new channel.
> > At every payday, you need to have at least the salary of your entire 
> > employee base available, otherwise you would be unable to pay at least some 
> > of your employees and you will quickly find yourself with your throat cut.
> > Now, let us talk about topology.
> > Let us reduce this to a pointless topology that is the worst possible 
> > topology for Lightning usage, and show that by golly, Lightning will still 
> > work.
> > Suppose your company only has this one big channel with the network.
> > Let us reduce your company to only having this single new hire 
> > throat-cutter (we will show later that without loss of generality this will 
> > still work even if you have thousands of throat-cutters internationally).
> > Now, as mentioned, on the first payday of your throat-cutter, you have to 
> > have at least the 0.042 salary you promised.
> > If you have been receiving payments for your throat-cutting business on the 
> > big channel, that means the 0.042 BTC is in that single big channel.
> > You can then use an offchain-to-onchain swap service like Boltz or Loop and 
> > put the money onchain.
> > Then you can create the new channel to your new hire and pay the promised 
> > salary to the throat-cutter.
> > Now, you have no more funds in either of your channels, the to-network big 
> > channel, and the to-employee channel.
> > So you are not locking up any of your funds, only the funds of your 
> > employee.
> > Now, as your business operates, you will receive money in your to-network 
> > big channel.
> > The rate at which you receive money for services rendered has to be larger 
> > than 0.042/2weeks on average, otherwise you are not earning enough to pay 
> > your throat-cutter by the time of the next payday (much less your other 
> > operating expenses, such as knife-sharpening, corpse disposal, dealing with 
> > the families of the deceased, etc.).
> > If you are not earning at a high enough rate to pay your employee by the 
> > next payday, your employee will not be paid and will solve your problems by 
> > cutting your throat.
> > But what that means is that the employee salary of the previous payday is 
> > not locked, either!
> > Because you are receiving funds on your big to-network channel 
> > continuously, the employee can now spend the funds "locked" in the 
> > to-employee channel, sending out to the rest of the network.
> > This uses up the money you have been earning and moving the funds to the 
> > to-employee channel, but if you are running a lucrative business, that is 
> > perfectly fine, since you should, by the next payday, have earned enough, 
> > and then some, to pay the employee on the next payday.
> > Of course there will be times when business is a little slow and you get 
> > less than 0.042/2weeks.
> > In that case, a wise business manager will reserve some funds for a rainy 
> > day when business is a little slow, meaning you will still have some funds 
> > you can put into your to-network big channel for other expenses, even as 
> > your employee uses capacity there to actually spend their salary.
> > It all bal

Re: [bitcoin-dev] A thought experiment on bitcoin for payroll privacy

2020-10-03 Thread ZmnSCPxj via bitcoin-dev
Good Morning Mr. Lee,

> I cannot front up funds of my own to give
> them inbound balance because it would consume all of my treasury to lock
> up funds.

This is not a reasonable assumption!

Suppose you have a new hire that you have agreed to pay 0.042BTC every 2 weeks.

On the *first* payday of the new hire, you *have* to have *at least* 0.042BTC 
in your treasury, somehow.

If not, you are unable to pay the new hire, full stop, and you are doomed to 
bankruptcy and your problems will disappear soon once your cut-throat new hire 
cuts your throat for not paying her or him.

If you *do* have at least 0.042BTC in your treasury, you *can* make the channel 
with the new hire and pay the salary via the new channel.

At *every* payday, you need to have at least the salary of your entire employee 
base available, otherwise you would be unable to pay at least some of your 
employees and you will quickly find yourself with your throat cut.




Now, let us talk about *topology*.

Let us reduce this to a pointless topology that is the *worst possible 
topology* for Lightning usage, and show that by golly, Lightning will still 
work.

Suppose your company only has this one big channel with the network.
Let us reduce your company to only having this single new hire throat-cutter 
(we will show later that without loss of generality this will still work even 
if you have thousands of throat-cutters internationally).

Now, as mentioned, on the first payday of your throat-cutter, you *have* to 
have at least the 0.042 salary you promised.
If you have been receiving payments for your throat-cutting business on the big 
channel, that means the 0.042 BTC is in that single big channel.

You can then use an offchain-to-onchain swap service like Boltz or Loop and put 
the money onchain.
Then you can create the new channel to your new hire and pay the promised 
salary to the throat-cutter.

Now, you have no more funds in either of your channels, the to-network big 
channel, and the to-employee channel.
So you are not locking up any of *your* funds, only the funds of your employee.

Now, as your business operates, you will receive money in your to-network big 
channel.
The rate at which you receive money for services rendered *has to* be larger 
than 0.042/2weeks on average, *otherwise* you are not earning enough to pay 
your throat-cutter by the time of the *next* payday (much less your other 
operating expenses, such as knife-sharpening, corpse disposal, dealing with the 
families of the deceased, etc.).
If you are not earning at a high enough rate to pay your employee by the next 
payday, your employee will not be paid and will solve your problems by cutting 
your throat.

But what that means is that the employee salary of the *previous* payday is not 
locked, either!
Because you are receiving funds on your big to-network channel continuously, 
the employee can now spend the funds "locked" in the to-employee channel, 
sending out to the rest of the network.
This uses up the money you have been earning and moving the funds to the 
to-employee channel, but if you are running a lucrative business, that is 
perfectly fine, since you should, by the next payday, have earned enough, and 
then some, to pay the employee on the next payday.

Of course there will be times when business is a little slow and you get less 
than 0.042/2weeks.
In that case, a wise business manager will reserve some funds for a rainy day 
when business is a little slow, meaning you will still have some funds you can 
put into your to-network big channel for other expenses, even as your employee 
uses capacity there to actually spend their salary.

It all balances out.
You only need to keep enough in your channels to cover your continuous 
operational expenses, and employee salaries *are* operational expenses.
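
As a back-of-the-envelope illustration of the above (made-up numbers, with 
fees, reserves, and the employee's savings all ignored), the company-side 
balances over a few pay periods might look like this:

    SALARY = 0.042                # BTC per 2-week period
    income_per_period = 0.050     # must exceed SALARY, or you are doomed

    big_chan = 0.0                # company-side balance, to-network channel
    emp_chan = SALARY             # company-side balance, to-employee channel
                                  # (funded once, via the onchain swap)

    for period in range(1, 4):
        # Payday: the salary moves to the employee's side of the channel.
        assert emp_chan >= SALARY
        emp_chan -= SALARY
        # Over the period the company earns on the big channel...
        big_chan += income_per_period
        # ...and the employee spends the whole salary, routed through the
        # company: in on the employee channel, out on the big channel.
        emp_chan += SALARY
        big_chan -= SALARY
        print(period, round(big_chan, 3), round(emp_chan, 3))

    # The company-side balances never go negative: nothing beyond one
    # salary plus the running float is ever tied up.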


Suppose you now want to hire *another* throat-cutter.
You would only do that if business is booming, or in other words, if you have 
accumulated enough money in your treasury to justify hiring yet another 
employee.

By induction, this will work regardless if you have 1 employee, or 1 million.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A thought experiment on bitcoin for payroll privacy

2020-10-03 Thread ZmnSCPxj via bitcoin-dev
Good morning Mr. Lee,

> Lightning network is not much an option because they do not have
> inbound balance to get paid.

Why not?
Your company can open a channel with each employee that has insufficient 
inbound liquidity.
The employee is incentivized to reveal their node to your company so you can 
open a channel to them, since otherwise they would be unable to receive their 
salary.
Your alternative is as you say: openly-visible salaries and throat-cutters who 
might decide to cut your throat.

Let us say your company receives its income stream over Lightning.
Let us say you hire a new throat-cutter, with a bi-weekly salary of 0.042 BTC.
You ask the new hire if his or her Lightning node has that capacity.

If not, you take some of your onchain Lightning funds, swap out say 0.043 BTC 
on Lightning Loop or Boltz Exchange or some other offchain-to-onchain swap.
You use those swapped onchain funds to create a fresh channel to the new hire.

If you are onboarding by batches (which your HR is likely to want to do, so 
they can give the onboarding employee seminars in groups) then you can save 
onchain fees by using C-Lightning `multifundchannel` as well.
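
For illustration, a batched open might look something like the sketch below 
using pyln-client; the socket path, node IDs, and amounts are placeholders, 
and the exact parameter forms should be checked against your C-Lightning 
version's `multifundchannel` documentation.

    from pyln.client import LightningRpc

    rpc = LightningRpc("/path/to/lightning-rpc")   # placeholder socket path

    # One destination per onboarding employee; all channels are funded in
    # a single onchain transaction, which is where the fee saving comes from.
    new_hires = [
        {"id": "02aaaa...@198.51.100.1:9735", "amount": "4200000sat"},
        {"id": "03bbbb...@198.51.100.2:9735", "amount": "4200000sat"},
    ]
    result = rpc.call("multifundchannel", {"destinations": new_hires})
    print(result)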

The occasional bonus can be a bit tricky, but similarly the employee can use 
Lightning Loop or Boltz Exchange or some other alternative to free up capacity 
for the bonus (and they have an incentive to do so, as they want to get the 
bonus).
Permanent raises can justify permanently increasing the size of the channel 
with the employee.

Even if the employee leaves your employ, that is no justification to close the 
channel with her or his node.
You can earn forwarding fees from his or her new employer or income source.

Because of the onion routing, it is hard for you to learn what the employee 
spends on, and in the future when they leave your employ, it is hard for you to 
figure out her or his new employer.

If the employee is a saver, they can accumulate their funds, thus reducing 
their incoming capacity below their biweekly salary.
If so, he or she can use an offchain-to-onchain swap, again, to move their 
accumulated savings to onchain cold storage.

This is not perfect of course, if you run multiple nodes you can try 
correlating payments by timing and amount (and prior to payment points i.e. 
today, you can correlate by payment hashes).
But this is still much better than the onchain situation, as you describe.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Detailed protocol design for routed multi-transaction CoinSwap appendium

2020-10-03 Thread ZmnSCPxj via bitcoin-dev
Good morning Chris,

>
> Looking at these equations, I realize that the incentives against
> post-coinswap-theft-attempt still work even if we set K = 0, because the
> extra miner fee paid by Bob could be enough disincentive.

This made me pause for a moment, but on reflection, is correct.

The important difference here relative to v1 is that the mining fee for the 
collateralized contract transaction is deducted from the `Jb` input provided by 
Bob.


> Unlike the v1 protocol, each CoinSwap party knows a different version of
> the contract transactions, so the taker Alice always knows which maker
> broadcast a certain set of contract transactions, and so can always ban
> the correct fidelity bond.

Great observation, and an excellent property to have.

Will go think about this more.

Regards,
ZmnSCPxj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Floating-Point Nakamoto Consensus

2020-10-01 Thread ZmnSCPxj via bitcoin-dev
Good morning Mike,

That is better than implied name-calling and refusing to lay out your argument 
in detail.
It is still sub-optimal since you are still being insulting by labeling me as 
"reactionary", when you could have just laid out the exact same argument ***in 
the first place*** without being unnecessarily inflammatory, but this is 
significantly better than your previous behavior.

I also strongly prefer to discuss this openly on the mailing list.

> Consider for one moment when the words I have said are correct. Take this 
> moment to see the world from someone else's eyes, and do not be reactionary - 
> just be.
>
> Good.
>
> Consider a threat model, where nodes are unable to form new connections, 
> unless the attacker allows it to happen. The point of threat modeling is not 
> to question if it is possible, but rather to plan on failure because we live 
> in a world where failure happens. Now if you are in a world of limited 
> visibility, and a presented solution has no intrinsic value other than it's 
> length - then you create a node that is gullible. An adversary that controls 
> connections can lie that a new solution was ever even found or selectivally 
> slow the formation of this side of the disagreement, and probably other bad 
> things too.   That sucks, and no one is saying that there is a complete 
> solution to this problem and we are all here to help.
>
> You are absolutely correct - the eclipse effect is never going to be perfect. 
> Which is your point, and it's accurate. Imperfections in the node's 
> visibility allow for a more-fit solution to leak out, and ultimately an 
> identical consensus to form - so long as there is some measure to judge the 
> fitness of two disagreements of identical length.

This is the point at which I think your argument fails.

You are expecting:

* That the attacker is powerful enough to split the network.
* That the attacker is adept enough that it can split the network such that 
mining hashpower is *exactly* split in half.
* That the universe is in an eldritch state such that at the exact time one 
side of the chain split finds a block, the other side of the chain split *also* 
finds a block.

This leads to a metastable state, where both chain splits have diverged and yet 
are at the exact same block height, and it is true that this state can be 
maintained indefinitely, with no 100% assurance it will disappear.

Yet this is a [***metastable***](https://en.wikipedia.org/wiki/Metastability) 
state, as I have mentioned.
Since block discovery is random, inevitably, even if the chain splits are 
exactly equal in mining hashpower, by random one or the other will win the next 
block earlier than the other, precisely due to the random nature of mining, and 
if even a single direct connection were manually made between the chain splits, 
this would collapse the losing chain split and it will be reorganized out 
without requiring floating-point Nakamoto.

This is different if the coin had non-random block production, but fortunately 
in Bitcoin we use proof-of-work.
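
To give a feel for how fragile that metastable state is, here is a toy Monte 
Carlo sketch; it is a deliberately crude model (two perfectly partitioned 
halves of hashpower, block discovery as a fair coin flip per block event, 
simultaneous discovery ignored), but it shows that a connection made at a 
random time almost always finds one split strictly ahead, so the ordinary 
longest-chain rule already resolves it:

    import random

    def fraction_of_time_tied(num_blocks=10_000):
        """Track height(A) - height(B) over block events and report how
        often the two partitioned tips sit at exactly the same height."""
        diff, tied = 0, 0
        for _ in range(num_blocks):
            diff += 1 if random.random() < 0.5 else -1
            if diff == 0:
                tied += 1
        return tied / num_blocks

    trials = [fraction_of_time_tied() for _ in range(20)]
    print("average fraction of block events with tied tips:",
          sum(trials) / len(trials))
    # Typically on the order of 1%, and even then the very next block
    # found on either side breaks the tie.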

The confluence of too many things (powerful attacker, exact hashpower split, 
perfect metastability) is necessary for this state --- and your solution to 
this state --- to actually ***matter*** in practice.
I estimate that it is far more likely my meat avatar will be destroyed in a 
hit-and-run accident tomorrow than such a state actually occurring, and I do 
not bother worrying about my meat avatar being destroyed by a hit-and-run 
accident tomorrow.

And in Bitcoin, leaving things alone is generally more important, because 
change is always a risk, as it may introduce *other*, more dangerous attacks 
that we have not thought of.
I would suggest deferring to those in the security team, as they may have more 
information than is available to you or me.

>  This minor change of adding a fitness test to solve disagreements is 
>intended to diminish the influence of delayed message passing, and yes there 
>are multiple solutions to this problem, absolutely, but bringing this fact up 
>just derails the important parts of the conversation. 
>
> By the client having limited visibility, then non-voting nodes who simply 
> pass messages *are* given a say in the election process, and that is a 
> problem.   Any attacker can more easily control when a message arrives than a 
> good fitness value.   The old 2013 solution was about naming one side a 
> looser, but that doesn't really help.  It isn't just about calling one 
> solution a winner and a loser. We need to make sure that all descendants of 
> weak solutions are also going to be weak - and that my friend is the basis 
> for a genetic algorithm.
>
> -Michael Brooks 
> (my real name)

Do you think emphasizing that this is your real name ***matters*** compared to 
actual technical arguments?

>
> On Wed, Sep 30, 2020 at 6:45 PM ZmnSCPxj  wrote:
>
> > Good morning Mike,
> >
> > > You are incorrect. 
> >
> > You make no argument to back this 
