Re: [bitcoin-dev] Drivechain -- Request for Discussion

2017-05-29 Thread Paul Sztorc via bitcoin-dev
Hi Peter,

Responses below.

On 5/28/2017 5:07 PM, Peter Todd wrote:
> On Mon, May 22, 2017 at 05:30:46PM +0200, Paul Sztorc wrote:
>> Surprisingly, this requirement (or, more precisely, this incentive) does
>> not affect miners relative to each other. The incentive to upgrade is only
>> for the purpose of preventing a "theft" -- defined as: an improper
>> withdrawal from a sidechain. It is not about miner revenues or the ability
>> to mine generally (or conduct BMM specifically). The costs of such a theft
>> (decrease in market price, decrease in future transaction fee levels) would
>> be shared collectively by all future miners. Therefore, it would have no
>> effect on miners relative to each other.
> 
> That's not at all true. If I'm a miner with a better capability than another
> miner to prevent that theft, I have reasons to induce it to happen to give me
> political cover for pushing that other miner off the network.

Miners can abstain from 'voting', which is politically neutral. Or, if
they wish, smaller miners could acquiesce to the coercion and just copy
the votes of the attacking 51% group. For users who are only running
Bitcoin Core, there is nothing bad about that.

As you say, a 51% group can arbitrarily start orphaning the blocks that
are mined by non-member rivals. This _may_ be a problem, or it may not,
but it is not exacerbated by drivechain.

So, what exactly is "not at all true"?


> 
> This is a very similar problem to what we had with zeroconf double-spending,
> where entities such as Coinbase tried to pay off miners to guarantee something
> that wasn't possible in a genuinely decentralized system: safe zeroconf
> transactions.

I don't see what you mean here. You can't stop Coinbase from donating
BTC to a subset of miners. That will always be possible, and it has
nothing to do with drivechain (as I see it).


> 
>> Moreover, miners have other recourse if they are unable to run the node.
>> They can adopt a policy of simply rejecting ("downvoting") any withdrawals
>> that they don't understand. This would pause the withdrawal process until
>> enough miners understand enough of what is going on to proceed with it.
> 
> Why are you forcing miners to run this code at all?

Could we not say the same thing about the code behind CLTV?

The nature of a contract is that people are happier to be bound by some
rules that they themselves construct (for example, a nuclear
non-proliferation treaty).

In this case, miners prefer sidechains to exist (as existence makes the
BTC they mine more valuable, and provides additional tx fee revenues),
and so they would like to run code which makes them possible.


> 
> Equally, you're opening up miners to huge political risks, as rejecting all
> withdrawals is preventing users from getting their money, which gives other
> miners a rationale for kicking those miners off of Bitcoin entirely.

As I explained above, miners can abstain from voting, which is
politically neutral, or else they can delegate their vote to an
aggressive miner. The "51% can orphan" concern could be raised even in
a world without drivechain. All that is required is for the miners to
be anonymous, or in private 'dark' pools (and thereby escape censure).

But there is a much bigger issue here, which is that our threat models
are different.

As you may know, my threat model [1] does not include miners "pushing
each other off". It only cares about the miner-experience, to the extent
that it impacts the user-experience.

Moreover, I reject [2] the premise that we can even measure "miner
centralization", or even that such a concept exists. If someone has a
definition of this concept, which is both measurable and useful, I would
be interested to read it.

( For what it's worth, Satoshi did not care about this, either. For
example: "If a greedy attacker is able to assemble more CPU power than
all the honest nodes, he...ought to find it more profitable to play by
the rules." which implies robustness to 51% owned by one entity. )

[1] http://www.truthcoin.info/blog/mining-threat-equilibrium/
[2] http://www.truthcoin.info/blog/mirage-miner-centralization/


> 
>> Finally, the point in dispute is a single, infrequent, true/false question.
>> So miners may resort to semi-trusted methods to supplement their decision.
>> In other words, they can just ask people they trust, if the withdrawal is
>> correct or not. It is up to users to decide if they are comfortable with
>> these risks, if/when they decide to deposit to a sidechain.
> 
> Why do you think this will be infrequent? Miners with a better ability to
> validate the drivechain have every reason to make these events more frequent.

It is part of the spec. These timing parameters must be agreed upon when
the sidechain is added, ie _before_ users deposit to the sidechain. Once
the sidechain is created, the timing is enforced by nodes, the same as
with any other protocol rules. Miner-validation-ability has no effect on
the frequency.
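The fixed-timing argument can be sketched in code. This is a hypothetical illustration of threshold voting over a node-enforced window, not the actual drivechain spec; the period length, threshold, and vote encoding are made-up numbers:

```python
# Hypothetical sketch of drivechain-style withdrawal voting: the parameters
# are fixed when the sidechain is added, so miner validation ability cannot
# change the schedule afterwards. Numbers are illustrative, not from the spec.

VOTING_PERIOD = 3 * 2016     # blocks a withdrawal must remain pending (illustrative)
APPROVAL_THRESHOLD = 0.50    # fraction of blocks that must upvote (illustrative)

def tally_withdrawal(votes):
    """votes: one entry per block, +1 (upvote), -1 (downvote), 0 (abstain).

    Returns 'approved' only after the full voting period has elapsed AND
    enough blocks upvoted; otherwise 'pending' or 'rejected'. Nodes enforce
    the period length, so it cannot be shortened by any subset of miners.
    """
    if len(votes) < VOTING_PERIOD:
        return "pending"
    window = votes[:VOTING_PERIOD]
    upvotes = sum(1 for v in window if v > 0)
    if upvotes / VOTING_PERIOD >= APPROVAL_THRESHOLD:
        return "approved"
    return "rejected"
```

Note that abstentions (0) count against the threshold, which matches the "downvoting what you don't understand" posture described above.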


> 
>> It 

Re: [bitcoin-dev] Compatibility-Oriented Omnibus Proposal

2017-05-29 Thread Oliver Petruzel via bitcoin-dev
>>if the community wishes to adopt (by unanimous consensus) a 2 MB block
size hardfork, this is probably the best way to do it right now... Legacy
Bitcoin transactions are given the witness discount, and a block size limit
of 2 MB is imposed.<<


The above decision may quickly become very controversial. I don't think it's
what most users had/have in mind when they discuss a "2MB+SegWit" solution.

With the current 1MB+SegWit, testing has shown us that normal usage results
in ~2 or 2.1MB blocks.

I think most users will expect a linear increase when Base Size is
increased to 2,000,000 bytes and Total Weight is increased to 8,000,000
weight units. With normal usage, the expected results would then be ~4 or
4.2MB blocks.
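The arithmetic behind these figures can be sketched as follows. Block weight is 3*base_size + total_size, capped by the weight limit; the witness fraction (~70% of serialized bytes) is an illustrative assumption chosen to match the observed ~2.1MB blocks, not a measured value:

```python
# Segwit weight arithmetic: weight = 3*base_size + total_size, capped at the
# weight limit (4,000,000 weight units under current rules). The 0.7 witness
# fraction below is an illustrative assumption, not measured data.

def max_block_bytes(weight_limit, witness_fraction):
    """Largest serialized block (bytes) fitting under weight_limit when
    witness_fraction of its bytes are witness data.
    base = total*(1 - wf), so weight = total*(3*(1 - wf) + 1)."""
    return weight_limit / (3 * (1 - witness_fraction) + 1)

print(round(max_block_bytes(4_000_000, 0.7) / 1e6, 2))  # current limit: prints 2.11
print(round(max_block_bytes(8_000_000, 0.7) / 1e6, 2))  # doubled limit: prints 4.21
```

Doubling both the base size and total weight doubles the expected serialized size linearly; a separate hard 2MB serialized-size cap would bind first and prevent exactly that increase, which is the question raised here.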

Am I missing something here, or does Luke's suggested 2MB cap completely
nullify that expected linear increase? If so, why? What's the logic behind
this decision?

I'd love to be armed with a good answer should my colleagues ask me the
same obvious question, so thank you ahead of time!

Respectfully,
Oliver Petruzel
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compatibility-Oriented Omnibus Proposal

2017-05-29 Thread Erik Aronesty via bitcoin-dev
I can't think of any resistance to this, but the code, on a tight timeline,
isn't going to be easy.   Is anyone volunteering for this?

On May 29, 2017 6:19 AM, "James Hilliard via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> For the reasons listed here
> (https://github.com/bitcoin/bips/blob/master/bip-0091.mediawiki#Motivation)
> you should have it so that the HF cannot lock in unless the existing
> BIP141 segwit deployment is activated.
>
> The biggest issue is that a safe HF is very unlikely to be able to be
> coded and tested within 6 months.
>
> On Sun, May 28, 2017 at 8:18 PM, CalvinRechner via bitcoin-dev
>  wrote:
> > This proposal is written under the assumption that the signatories to the
> > Consensus 2017 Scaling Agreement[1] are genuinely committed to the terms
> of
> > the agreement, and intend to enact the updates described therein. As
> such,
> > criticisms pertaining to the chosen deployment timeline or hard fork
> upgrade
> > path should be treated as out-of-scope during the initial discussion of
> this
> > proposal.
> >
> > Because it includes the activation of a hard fork for which community
> > consensus does not yet exist, this proposal is not likely to be merged
> into
> > Bitcoin Core in the immediate future, and must instead be maintained and
> > reviewed in a separate downstream repository. However, it is written with
> > the intent to remain cleanly compatible with future network updates and
> > changes, to allow for the option of a straightforward upstream merge if
> > community consensus for the proposal is successfully achieved in the
> > following months.
> >
> >
> > 
> > BIP: ?
> > Layer: Consensus
> > Title: Compatibility-oriented omnibus proposal
> > Author: Calvin Rechner 
> > Comments-Summary: No comments yet.
> > Comments-URI: ?
> > Status: Draft
> > Type: Standards Track
> > Created: 2017-05-28
> > License: PD
> > 
> >
> >
> > ===Abstract===
> >
> > This document describes a virtuous combination of James Hilliard’s
> “Reduced
> > signalling threshold activation of existing segwit deployment”[2],
> Shaolin
> > Fry’s “Mandatory activation of segwit deployment”[3], Sergio Demian
> Lerner’s
> > “Segwit2Mb”[4] proposal, Luke Dashjr’s “Post-segwit 2 MB block size
> > hardfork”[5], and hard fork safety mechanisms from Johnson Lau’s
> > “Spoonnet”[6][7] into a single omnibus proposal and patchset.
> >
> >
> > ===Motivation===
> >
> > The Consensus 2017 Scaling Agreement[1] stipulated the following
> > commitments:
> >
> > • Activate Segregated Witness at an 80% threshold, signaling at bit 4
> > • Activate a 2 MB hard fork within six months
> >
> > This proposal seeks to fulfill these criteria while retaining maximum
> > compatibility with existing deployment approaches, thereby minimizing the
> > risks of a destructive chain split. Additionally, subsequent indications
> of
> > implied criteria and expectations of the Agreement[8][9] are satisfied.
> >
> > The proposed hard fork incorporates a legacy witness discount and 2MB
> > blocksize limit along with the enactment of Spoonnet-derived
> protectionary
> > measures, to ensure the safest possible fork activation within the
> > constraints of the requirements outlined in the Scaling Agreement.
> >
> >
> > ===Rationale===
> >
> > To the extent possible, this represents an effort at a best-of-all-worlds
> > proposal, intended to provide a common foundation from which all
> > mutually-inclusive goals can be achieved while risks are minimized.
> >
> > The individual constituent proposals include the following respective
> > rationales:
> >
> > James Hilliard’s “Reduced signalling threshold activation of existing
> segwit
> > deployment”[2] explains:
> >
> >> The goal here is to minimize chain split risk and network disruption
> while
> >> maximizing backwards compatibility and still providing for rapid
> activation
> >> of segwit at the 80% threshold using bit 4.
> >
> > Shaolin Fry’s “Mandatory activation of segwit deployment”[3] is included
> to:
> >
> >> cause the existing "segwit" deployment to activate without needing to
> >> release a new deployment.
> >
> > Both of the aforementioned activation options (“fast-activation” and
> > “flag-day activation”) serve to prevent unnecessary delays in the network
> > upgrade process, addressing a common criticism of the Scaling Agreement
> and
> > providing an opportunity for cooperation and unity instead.
> >
> > Sergio Demian Lerner’s “Segwit2Mb”[4] proposal explains the reasoning
> behind
> > linking SegWit’s activation with that of a later hard fork block size
> > increase:
> >
> >> Segwit2Mb combines segwit as it is today in Bitcoin 0.14+ with a 2MB
> block
> >> size hard-fork activated ONLY if segwit activates (95% of miners
> signaling
> >> ... to re-unite the Bitcoin community and avoid a cryptocurrency split.
> >
> > Luke Dashjr’s “Post-segwit 2 MB block size hardfork”[5] suggestions 

Re: [bitcoin-dev] A Method for Computing Merkle Roots of Annotated Binary Trees

2017-05-29 Thread Peter Todd via bitcoin-dev
On Mon, May 29, 2017 at 10:55:37AM -0400, Russell O'Connor wrote:
> > This doesn't hold true in the case of pruned trees, as for the pruning to
> > be
> > useful, you don't know what produced the left merkleRoot, and thus you
> > can't
> > guarantee it is in fact a midstate of a genuine SHA256 hash.
> >
> 
> Thanks for the review Peter.  This does seem like a serious issue that I
> hadn't considered yet.  As far as I understand, we have no reason to think
> that the SHA-256 compression function will be secure with chosen initial
> values.

Well, it's an easy thing to forget, unless, like me, you've written a general
purpose library to work with pruned data. :) (actually, soon to be two of
them!)

I also ran into the midstate issue with my OpenTimestamps protocol, as I was
looking into whether or not it was safe to have a SHA256 midstate commitment
operation, and couldn't find clear evidence that it was.

> Some of this proposal can be salvaged, I think, by putting the hash of the
> tags into Sha256Compress's first argument:
> 
> merkleRoot : Tree BitString -> Word256
> merkleRoot (Leaf (t))                := sha256Compress(sha256(t), 0b0^512)
> merkleRoot (Unary (t, child))        := sha256Compress(sha256(t), merkleRoot(child) ⋅ 0b0^256)
> merkleRoot (Binary (t, left, right)) := sha256Compress(sha256(t), merkleRoot(left) ⋅ merkleRoot(right))
> 
> The Merkle--Damgård property will still hold under the same conditions
> about tags determining the type of node (Leaf, Unary, or Binary) while
> avoiding the need for SHA256's padding.  If you have a small number of
> tags, then you can pre-compute their hashes so that you only end up with
> one call to SHA256 compress per node. If you have tags with a large amount
> of data, you were going to be hashing them anyways.

Notice how what you're proposing here is almost the same thing as using SHA256
directly, modulo the fact that you skip the final block containing the message
length.

Similarly, you don't need to compute sha256(t) - you can just as easily compute
the midstate sha256Compress(IV, t), and cache that midstate if you can reuse
tags. Again, the only difference is the last block.
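Peter's observation can be sketched with the full hash (Python's hashlib does not expose the raw compression function). This illustrative variant domain-separates by a hashed tag, matching the structure above modulo SHA-256's final padding/length block; the node encoding and zero-padding choices here are mine, not the proposal's:

```python
# Sketch of the "just use SHA256 directly" variant: same shape as the
# compression-function construction above, differing only in SHA-256's
# final padding/length block. Tag hashes can be precomputed and cached.
import hashlib

def sha256(b):
    return hashlib.sha256(b).digest()

def merkle_root(node):
    """node: ('leaf', tag) | ('unary', tag, child) | ('binary', tag, l, r).
    The hashed tag leads each message, giving domain separation between
    node types (assuming tags determine the node type)."""
    kind = node[0]
    tag_hash = sha256(node[1])          # cacheable if tags repeat
    if kind == 'leaf':
        return sha256(tag_hash + b'\x00' * 64)
    if kind == 'unary':
        return sha256(tag_hash + merkle_root(node[2]) + b'\x00' * 32)
    if kind == 'binary':
        return sha256(tag_hash + merkle_root(node[2]) + merkle_root(node[3]))
    raise ValueError(kind)
```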

> Unfortunately we lose the ability to cleverly avoid collisions between the
> Sha256 and MerkleRoot functions by using safe tags.

I think a better question to ask is why you want that property in the first
place?

My earlier python-proofmarshal(1) library had a scheme of per-use-case tags,
but I eventually realised that depending on tags being unique is a footgun. For
example, it's easy to see how two different systems could end up using the same
tag due to designers forgetting to create new tags while copying and pasting
old code. Similarly, if two such systems have to be integrated, you'll end up
with tags getting reused for two different purposes.

Now, if you design a system where that doesn't matter, then by extension it'll
also be true that collisions between the sha256 and merkleroot functions don't
matter either. And that system will be more robust to design mistakes, as tags
only need to be unique "locally" to distinguish between different sub-types in
a sum type (enum).


FWIW what I've done with my newer (and as yet unpublished) rust-proofmarshal
work is commitments are only valid for a specific type. Secondly, I use
blake2b, whose compression function processes up to 128 bytes of message on
each invocation.  That's large enough for four 32 byte hashes, which is by
itself more than sufficient for a summed merkle tree with three 32 byte hashes
(left right and sum) and a per-node-type tag.

Blake2b's documentation doesn't make it clear whether it's resistant to
collisions if the adversary can control the salt or personalization strings,
so I don't
bother using them - the large block size by itself is enough to fit almost any
use-case into a single block, and it hashes blocks significantly faster than
SHA256. This also has the advantage that the actual primitive I'm using is 100%
standard blake2b, an aid to debugging and development.

1) https://github.com/proofchains/python-proofmarshal

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org




Re: [bitcoin-dev] A Method for Computing Merkle Roots of Annotated Binary Trees

2017-05-29 Thread Russell O'Connor via bitcoin-dev
On Sun, May 28, 2017 at 4:26 AM, Peter Todd  wrote:

> On Mon, May 22, 2017 at 03:05:49AM -0400, Russell O'Connor via bitcoin-dev
> wrote:
> > Not all of the inputs to the SHA256 compression function are created
> > equal.  Only the second argument, the chunk data, is applied to the SHA256
> > expander.  `merkleRoot` is designed to ensure that the first argument of
> > the SHA256 compression function is only fed some output of the SHA256
> > compression function.  In fact, we can prove that the output of the
> > `merkleRoot` function is always the midstate of some SHA256 hash.  To see
> > this, let us explicitly separate the `sha256` function into the padding
> > step, `sha256Pad`, and the recursive hashing step, `unpaddedSha256`.
>
> This doesn't hold true in the case of pruned trees, as for the pruning to be
> useful, you don't know what produced the left merkleRoot, and thus you can't
> guarantee it is in fact a midstate of a genuine SHA256 hash.
>

Thanks for the review Peter.  This does seem like a serious issue that I
hadn't considered yet.  As far as I understand, we have no reason to think
that the SHA-256 compression function will be secure with chosen initial
values.

Some of this proposal can be salvaged, I think, by putting the hash of the
tags into Sha256Compress's first argument:

merkleRoot : Tree BitString -> Word256
merkleRoot (Leaf (t))                := sha256Compress(sha256(t), 0b0^512)
merkleRoot (Unary (t, child))        := sha256Compress(sha256(t), merkleRoot(child) ⋅ 0b0^256)
merkleRoot (Binary (t, left, right)) := sha256Compress(sha256(t), merkleRoot(left) ⋅ merkleRoot(right))

The Merkle--Damgård property will still hold under the same conditions
about tags determining the type of node (Leaf, Unary, or Binary) while
avoiding the need for SHA256's padding.  If you have a small number of
tags, then you can pre-compute their hashes so that you only end up with
one call to SHA256 compress per node. If you have tags with a large amount
of data, you were going to be hashing them anyways.

Unfortunately we lose the ability to cleverly avoid collisions between the
Sha256 and MerkleRoot functions by using safe tags.


Re: [bitcoin-dev] Emergency Deployment of SegWit as a partial mitigation of CVE-2017-9230

2017-05-29 Thread Anthony Towns via bitcoin-dev
On Sat, May 27, 2017 at 01:07:58PM -0700, Eric Voskuil via bitcoin-dev wrote:
> Anthony,
> For the sake of argument:

(That seems like the cue to move any further responses to bitcoin-discuss)

> (1) What would the situation look like if there was no patent?

If there were no patent, and it were easy enough to implement it, then
everyone would use it. So blocking ASICBoost would decrease everyone's
hashrate by the same amount, and you'd just have a single retarget period
with everyone earning a little less, and then everyone would be back to
making the same profit.
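The retarget argument can be checked numerically: a uniform hashrate reduction leaves every miner's share of blocks, and hence post-retarget profit, unchanged. The hashrate numbers below are illustrative:

```python
# Numeric check: if a consensus rule cuts everyone's hashrate by the same
# factor, each miner's share of blocks found is unchanged, so after one
# retarget period profits return to where they were. Values are illustrative.

def shares(hashrates):
    total = sum(hashrates)
    return [h / total for h in hashrates]

before = shares([100, 50, 25])                       # ASICBoost allowed
after = shares([h * 0.8 for h in [100, 50, 25]])     # blocked: everyone -20%
assert all(abs(x - y) < 1e-12 for x, y in zip(before, after))
```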

But even without a patent, entry costs might be high (redesigning an
ASIC, making software that shuffles transactions so you can use the
ASIC's features) and how that works out seems hard to analyse...

> (2) Would the same essential formulation exist if there had been a
> patent on bitcoin mining ASICs in general?

Not really; for the formulation to apply you'd have to have some way
to block ASIC use via consensus rules, in a way that doesn't just block
ASICs completely, but just removes their advantage, ie makes them perform
comparably to GPUs/FPGAs or whatever everyone else is using.

Reportedly, ASICBoost is an option you can turn on or off on some mining
hardware, so this seems valid (I'm assuming not using the option either
increases your electricity use by ~20% due to activating extra circuitry,
or decreases your hashrate by ~20% and maybe also decreases your
electricity use by less than that by not activating some circuitry); but
"being an ASIC" isn't something you can turn off and on in that manner.

> (3) Would an unforeseen future patented mining optimization exhibit
> the same characteristics?

Maybe? It depends on whether the optimisation's use (or lack thereof)
can be detected (enforced) via consensus rules. If you've got a patent
on a 10nm process, and you build a bitcoin ASIC with it, there's no way
to stop you via consensus rules that I can think of.

> (4) Given that patent is a state grant of monopoly privilege, could a
> state licensing regime for miners, applied in the same scope as a
> patent, but absent any patent, have the same effect?

I don't think that scenario's any different from charging miners income
tax, is it? If you don't pay the licensing fee / income tax, you get put
out of business; if you do, you have less profit. There's no way to block
either via consensus mechanisms, at least in general...

I think it's the case that any optional technology with license fees can't
be made available to all miners on equal terms, though, provided there is
any way for it to be blocked via consensus mechanisms. If it were, the
choice would be:

 my percentage of the hashrate is h (0 < h < 1), and per-block mining
 revenue is r; if the licensing fee works out to a cost of X per block,
 then r*h > (r-X)*h provided X > 0, so all miners are better off if the
 technology is not allowed (because they all suffer equally in loss of
 hashrate, which is cancelled out in a retarget period; and they all
 benefit equally by not having to pay licensing fees).
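The inequality in this excerpt can be instantiated numerically; the block-reward and licensing-cost figures below are illustrative:

```python
# The license-fee inequality in code: banning the licensed technology leaves
# a miner with share h earning r*h (uniform hashrate losses cancel at the
# retarget), versus (r - X)*h if it is allowed and everyone pays the fee.

def profit_banned(r, h):
    return r * h

def profit_allowed(r, X, h):
    return (r - X) * h

r, X = 12.5, 0.5          # illustrative block reward and per-block license cost
for h in (0.05, 0.20, 0.40):
    assert profit_banned(r, h) > profit_allowed(r, X, h)   # holds for any X > 0
```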

Sadly, the solution to this argument is to use discriminatory terms,
either not offering the technology to everyone, or offering varying fees
for miners with different hashrates. Unless somehow it works to make it
more expensive for higher hashrate miners, this makes decentralisation
worse.

Cheers,
aj



Re: [bitcoin-dev] Compatibility-Oriented Omnibus Proposal

2017-05-29 Thread James Hilliard via bitcoin-dev
For the reasons listed here
(https://github.com/bitcoin/bips/blob/master/bip-0091.mediawiki#Motivation)
you should have it so that the HF cannot lock in unless the existing
BIP141 segwit deployment is activated.

The biggest issue is that a safe HF is very unlikely to be able to be
coded and tested within 6 months.
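The gating James describes can be sketched as a guard on the deployment state machine. This is an illustrative toy, not Bitcoin Core's versionbits implementation; state names loosely follow BIP9:

```python
# Toy deployment state machine: the hard fork cannot reach LOCKED_IN unless
# the existing BIP141 segwit deployment has already activated. Illustrative
# only; real versionbits logic tracks thresholds per retarget period.

def hf_next_state(hf_state, signalling_met, bip141_state):
    if hf_state == "STARTED" and signalling_met:
        # Gate: refuse HF lock-in until segwit (BIP141) is ACTIVE.
        return "LOCKED_IN" if bip141_state == "ACTIVE" else "STARTED"
    if hf_state == "LOCKED_IN":
        return "ACTIVE"
    return hf_state
```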

On Sun, May 28, 2017 at 8:18 PM, CalvinRechner via bitcoin-dev
 wrote:
> This proposal is written under the assumption that the signatories to the
> Consensus 2017 Scaling Agreement[1] are genuinely committed to the terms of
> the agreement, and intend to enact the updates described therein. As such,
> criticisms pertaining to the chosen deployment timeline or hard fork upgrade
> path should be treated as out-of-scope during the initial discussion of this
> proposal.
>
> Because it includes the activation of a hard fork for which community
> consensus does not yet exist, this proposal is not likely to be merged into
> Bitcoin Core in the immediate future, and must instead be maintained and
> reviewed in a separate downstream repository. However, it is written with
> the intent to remain cleanly compatible with future network updates and
> changes, to allow for the option of a straightforward upstream merge if
> community consensus for the proposal is successfully achieved in the
> following months.
>
>
> 
> BIP: ?
> Layer: Consensus
> Title: Compatibility-oriented omnibus proposal
> Author: Calvin Rechner 
> Comments-Summary: No comments yet.
> Comments-URI: ?
> Status: Draft
> Type: Standards Track
> Created: 2017-05-28
> License: PD
> 
>
>
> ===Abstract===
>
> This document describes a virtuous combination of James Hilliard’s “Reduced
> signalling threshold activation of existing segwit deployment”[2], Shaolin
> Fry’s “Mandatory activation of segwit deployment”[3], Sergio Demian Lerner’s
> “Segwit2Mb”[4] proposal, Luke Dashjr’s “Post-segwit 2 MB block size
> hardfork”[5], and hard fork safety mechanisms from Johnson Lau’s
> “Spoonnet”[6][7] into a single omnibus proposal and patchset.
>
>
> ===Motivation===
>
> The Consensus 2017 Scaling Agreement[1] stipulated the following
> commitments:
>
> • Activate Segregated Witness at an 80% threshold, signaling at bit 4
> • Activate a 2 MB hard fork within six months
>
> This proposal seeks to fulfill these criteria while retaining maximum
> compatibility with existing deployment approaches, thereby minimizing the
> risks of a destructive chain split. Additionally, subsequent indications of
> implied criteria and expectations of the Agreement[8][9] are satisfied.
>
> The proposed hard fork incorporates a legacy witness discount and 2MB
> blocksize limit along with the enactment of Spoonnet-derived protectionary
> measures, to ensure the safest possible fork activation within the
> constraints of the requirements outlined in the Scaling Agreement.
>
>
> ===Rationale===
>
> To the extent possible, this represents an effort at a best-of-all-worlds
> proposal, intended to provide a common foundation from which all
> mutually-inclusive goals can be achieved while risks are minimized.
>
> The individual constituent proposals include the following respective
> rationales:
>
> James Hilliard’s “Reduced signalling threshold activation of existing segwit
> deployment”[2] explains:
>
>> The goal here is to minimize chain split risk and network disruption while
>> maximizing backwards compatibility and still providing for rapid activation
>> of segwit at the 80% threshold using bit 4.
>
> Shaolin Fry’s “Mandatory activation of segwit deployment”[3] is included to:
>
>> cause the existing "segwit" deployment to activate without needing to
>> release a new deployment.
>
> Both of the aforementioned activation options (“fast-activation” and
> “flag-day activation”) serve to prevent unnecessary delays in the network
> upgrade process, addressing a common criticism of the Scaling Agreement and
> providing an opportunity for cooperation and unity instead.
>
> Sergio Demian Lerner’s “Segwit2Mb”[4] proposal explains the reasoning behind
> linking SegWit’s activation with that of a later hard fork block size
> increase:
>
>> Segwit2Mb combines segwit as it is today in Bitcoin 0.14+ with a 2MB block
>> size hard-fork activated ONLY if segwit activates (95% of miners signaling
>> ... to re-unite the Bitcoin community and avoid a cryptocurrency split.
>
> Luke Dashjr’s “Post-segwit 2 MB block size hardfork”[5] suggestions are
> included to reduce the marginal risks that such an increase in the block
> size might introduce:
>
>> if the community wishes to adopt (by unanimous consensus) a 2 MB block
>> size hardfork, this is probably the best way to do it right now... Legacy
>> Bitcoin transactions are given the witness discount, and a block size limit
>> of 2 MB is imposed.
>
> Johnson Lau’s anti-replay and network version updates[6][7] are included as
> general hard fork safety measures:
>
>> In a