Re: [bitcoin-dev] Rolling UTXO set hashes

2017-05-15 Thread Peter R via bitcoin-dev
Hi Pieter,

I wanted to say that I thought this write-up was excellent!  And efficiently 
hashing the UTXO set in this rolling fashion is a very exciting idea!! 

Peter R

> On May 15, 2017, at 1:01 PM, Pieter Wuille via bitcoin-dev 
>  wrote:
> 
> Hello all,
> 
> I would like to discuss a way of computing a UTXO set hash that is
> very efficient to update, but does not support any compact proofs of
> existence or non-existence.
> 
> Much has been written on the topic of various data structures and
> derived hashes for the UTXO/TXO set before (including Alan Reiner's
> trust-free lite nodes [1], Peter Todd's TXO MMR commitments [2] [3],
> or Bram Cohen's TXO bitfield [4]). They all provide interesting extra
> functionality or tradeoffs, but require invasive changes to the P2P
> protocol or how wallets work, or force nodes to maintain their
> database in a normative fashion. Instead, here I focus on an efficient
> hash that supports nothing but comparing two UTXO sets. However, it is
> not incompatible with any of those other approaches, so we can gain
> some of the advantages of a UTXO hash without adopting something that
> may be incompatible with future protocol enhancements.
> 
> 1. Incremental hashing
> 
> Computing a hash of the UTXO set is easy when it does not need
> efficient updates, and when we can assume a fixed serialization with a
> normative ordering for the data in it - just serialize the whole thing
> and hash it. As different software or releases may use different
> database models for the UTXO set, a solution that is order-independent
> would seem preferable.
> 
> This brings us to the problem of computing a hash of unordered data.
> Several approaches that accomplish this through incremental hashing
> were suggested in [5], including XHASH, AdHash, and MuHash. XHASH
> consists of first hashing all the set elements independently, and
> XORing all those hashes together. This is insecure, as Gaussian
> elimination can easily find a subset of random hashes that XOR to a
> given value. AdHash/MuHash are similar, except addition/multiplication
> modulo a large prime are used instead of XOR. Wagner [6] showed that
> attacking XHASH or AdHash is an instance of a generalized birthday
> problem (called the k-sum problem in his paper, with unrestricted k),
> and gives a O(2^(2*sqrt(n)-1)) algorithm to attack it (for n-bit
> hashes). As a result, AdHash with 256-bit hashes only has 31 bits of
> security.
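
To make the XHASH weakness concrete, here is a rough Python sketch (illustrative only, using SHA-256 as the per-element hash): Gaussian elimination over GF(2) finds a subset of a few hundred random element hashes that XORs to any chosen target digest.

    import hashlib, os

    def h(element: bytes) -> int:
        # Per-element hash for the toy XHASH (SHA-256 read as a 256-bit integer).
        return int.from_bytes(hashlib.sha256(element).digest(), "big")

    def find_xor_subset(elements, target):
        # Gaussian elimination over GF(2): return indices whose hashes XOR to `target`.
        pivots = {}  # highest set bit -> (reduced value, mask of contributing indices)
        for i, e in enumerate(elements):
            v, mask = h(e), 1 << i
            while v:
                top = v.bit_length() - 1
                if top not in pivots:
                    pivots[top] = (v, mask)
                    break
                pv, pmask = pivots[top]
                v, mask = v ^ pv, mask ^ pmask
        v, mask = target, 0
        while v:
            top = v.bit_length() - 1
            if top not in pivots:
                return None  # target outside the span (vanishingly unlikely here)
            pv, pmask = pivots[top]
            v, mask = v ^ pv, mask ^ pmask
        return [i for i in range(len(elements)) if (mask >> i) & 1]

    # ~300 random 256-bit hashes span the whole space almost surely, so any
    # chosen "set hash" can be forged as the XOR of some subset of them.
    elements = [os.urandom(32) for _ in range(300)]
    target = h(b"any UTXO set digest the attacker wants")
    subset = find_xor_subset(elements, target)
    check = 0
    for i in subset:
        check ^= h(elements[i])
    assert check == target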
> 
> Thankfully, [6] also shows that the k-sum problem cannot be
> efficiently solved in groups in which the discrete logarithm problem
> is hard, as an efficient k-sum solver can be used to compute discrete
> logarithms. As a result, MuHash modulo a sufficiently large safe prime
> is provably secure under the DL assumption. Common guidelines on
> security parameters [7] say that 3072-bit DL has about 128 bits of
> security. A final 256-bit hash can be applied to the 3072-bit result
> without loss of security to reduce the final size.
> 
> An alternative to multiplication modulo a prime is using an elliptic
> curve group. Due to the ECDLP assumption, which the security of
> Bitcoin signatures already relies on, this also results in security
> against k-sum solving. This approach is used in the Elliptic Curve
> Multiset Hash (ECMH) in [8]. For this to work, we must "hash onto a
> curve point" in a way that results in points without known discrete
> logarithm. The paper suggests using (controversial) binary elliptic
> curves to make that operation efficient. If we only consider
> secp256k1, one approach is just reading potential X coordinates from a
> PRNG until one is found that has a corresponding Y coordinate
> according to the curve equation. On average, 2 iterations are needed.
> A constant time algorithm to hash onto the curve exists as well [9],
> but it is only slightly faster and is much more complicated to
> implement.
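
A minimal sketch of that try-until-valid approach for secp256k1 (illustrative Python; the input serialization below is a placeholder):

    import hashlib

    # secp256k1: y^2 = x^3 + 7 over F_p, and p % 4 == 3 so square roots are one pow().
    P = 2**256 - 2**32 - 977

    def hash_to_curve_point(data: bytes):
        # Derive candidate X coordinates from the data until one lies on the curve;
        # about half of all X values work, so ~2 iterations are needed on average.
        counter = 0
        while True:
            x = int.from_bytes(
                hashlib.sha256(data + counter.to_bytes(4, "big")).digest(), "big") % P
            y_sq = (pow(x, 3, P) + 7) % P
            y = pow(y_sq, (P + 1) // 4, P)
            if y * y % P == y_sq:
                # A point with no known discrete logarithm w.r.t. the generator.
                return (x, y)
            counter += 1

    point = hash_to_curve_point(b"serialized UTXO goes here")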
> 
> AdHash-like constructions with a sufficiently large intermediate hash
> can be made secure against Wagner's algorithm, as suggested in [10].
> 4160-bit hashes would be needed for 128 bits of security. When
> repetition is allowed, [8] gives a stronger attack against AdHash,
> suggesting that as much as 400000 bits are needed. While repetition is
> not directly an issue for our use case, it would be nice if
> verification software would not be required to check for duplicated
> entries.
> 
> 2. Efficient addition and deletion
> 
> Interestingly, both ECMH and MuHash not only support adding set
> elements in any order but also deleting in any order. As a result, we
> can simply maintain a running sum for the UTXO set as a whole, and
> add/subtract when creating/spending an output in it. In the case of
> MuHash it is slightly more complicated, as computing an inverse is
> relatively expensive. This can be solved by representing the running
> value as a fraction, and multiplying created elements into the
> numerator and spent elements 
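
A minimal sketch of the running-fraction idea described above (toy Python; the demo prime and UTXO serialization are placeholders, much smaller and simpler than the 3072-bit construction discussed earlier):

    import hashlib

    DEMO_PRIME = 2**521 - 1   # stand-in for the 3072-bit safe prime discussed above

    def h(utxo: bytes) -> int:
        # Map a serialized UTXO to a nonzero group element (placeholder serialization).
        v = int.from_bytes(hashlib.sha256(utxo).digest() * 3, "big") % DEMO_PRIME
        return v or 1

    class RollingUTXOHash:
        def __init__(self):
            self.numerator = 1     # product of hashes of created outputs
            self.denominator = 1   # product of hashes of spent outputs

        def add(self, utxo: bytes):
            self.numerator = (self.numerator * h(utxo)) % DEMO_PRIME

        def remove(self, utxo: bytes):
            self.denominator = (self.denominator * h(utxo)) % DEMO_PRIME

        def digest(self) -> bytes:
            # One modular inverse, needed only when the final hash is requested.
            v = (self.numerator * pow(self.denominator, -1, DEMO_PRIME)) % DEMO_PRIME
            return hashlib.sha256(v.to_bytes(66, "big")).digest()

    # Order-independence: adding and spending in any order yields the same digest.
    a, b = RollingUTXOHash(), RollingUTXOHash()
    a.add(b"out1"); a.add(b"out2"); a.remove(b"out1")
    b.add(b"out2")
    assert a.digest() == b.digest()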

Re: [bitcoin-dev] Hard fork proposal from last week's meeting

2017-03-29 Thread Peter R via bitcoin-dev
I believe nearly everyone at Bitcoin Unlimited would be supportive of a UTXO 
check-pointing scheme.  I’d love to see this happen, as it would greatly reduce 
the time needed to get a new node up-and-running, for node operators who are 
comfortable trusting these commitments.  

I’m confident that we could work with the miners who we have good relationships 
with to start including the root hash of the (lagging) UTXO set in their 
coinbase transactions, in order to begin transforming this idea into reality.  
We could also issue regular transactions from “semi-trusted” addresses 
controlled by known people that include the same root hash in an OP_RETURN 
output, which would allow cross-checking against the miners’ UTXO commitments, 
as part of this initial “prototype” system.

This would "get the ball rolling" on UTXO commitments in a permissionless way 
(no one can stop us from doing this). If the results from this prototype 
commitment scheme were positive, then perhaps there would be support from the 
community and miners to enforce a new rule which requires the (lagging) root 
hashes be included in new blocks.  At that point, the UTXO commitment scheme is 
no longer a prototype but a trusted feature of the Bitcoin network.

On that topic, are there any existing proposals detailing a canonical ordering 
of the UTXO set and a scheme to calculate the root hash?

Best regards,
Peter


> On Mar 29, 2017, at 12:33 PM, Daniele Pinna via bitcoin-dev 
>  wrote:
> 
> What about periodically committing the entire UTXO set to a special 
> checkpoint block which becomes the new de facto Genesis block? 
> 
> Daniele 
> 
> --
> 
> Message: 5
> Date: Wed, 29 Mar 2017 16:41:29 +
> From: Andrew Johnson
> To: David Vorick
> Cc: Bitcoin Dev
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
> Content-Type: text/plain; charset="utf-8"
> 
> I believe that as we continue to add users to the system by scaling
> capacity, we will see more new nodes appear, but I'm at a bit of a loss
> as to how to empirically prove it.
> 
> I do see your point on increasing load on archival nodes, but the majority
> of that load is going to come from new nodes coming online, they're the
> only ones going after very old blocks.   I could see that as a potential
> attack vector, overwhelm the archival nodes by spinning up new nodes
> constantly, therefore making it difficult for a "real" new node to get up
> to speed in a reasonable amount of time.
> 
> Perhaps the answer there would be a way to pay an archival node a small
> amount of bitcoin in order to retrieve blocks older than a certain cutoff?
> Include an IP address for the node asking for the data as metadata in the
> transaction...  Archival nodes could set and publish their own policy, let
> the market decide what those older blocks are worth.  Would also help to
> incentivize running archival nodes, which we do need.  Of course, this isn't
> very user friendly.
> 
> We can take this to bitcoin-discuss, if we're getting too far off topic.
> 
> 
> On Wed, Mar 29, 2017 at 11:25 AM David Vorick  >
> wrote:
> 
> >
> > On Mar 29, 2017 12:20 PM, "Andrew Johnson"  > >
> > wrote:
> >
> > What's stopping these users from running a pruned node?  Not every node
> > needs to store a complete copy of the blockchain.
> >
> >
> > Pruned nodes are not the default configuration, if it was the default
> > configuration then I think you would see far more users running a pruned
> > node.
> >
> > But that would also substantially increase the burden on archive nodes.
> >
> >
> > Further discussion about disk space requirements should be taken to
> > another thread.
> >
> >
> > --
> Andrew Johnson
> 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Defending against empty or near empty blocks from malicious miner takeover?

2017-03-26 Thread Peter R via bitcoin-dev
re people in the community that the 
rationale started to make sense to me: the biggest concern people had was a 
chain split!

So I guess the “ethics” here depend on the lens through which one is looking. 
People who believe that an important outcome of the upgrade to larger blocks is 
to avoid a blockchain split may be more favourable to these ideas than people 
who want the upgrade to result in a split (or are OK with a split), as it 
sounds like you do (is this true that you’d rather split than accept blocks 
with more than 1,000,000 bytes of transaction information in them? Sorry if I 
misunderstood).  

But if one's intention is to split and not follow the majority hash power when 
blocks become larger, then why not change the proof-of-work?  This would 
certainly result in a peaceful splitting, as you said you desire.  

Best regards,
Peter R



> 
> On Sat, Mar 25, 2017 at 3:28 PM, Peter R via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org 
> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
> One of the purported benefits of a soft-forking change (a tightening of the 
> consensus rule set) is the reduced risk of a blockchain split compared to a 
> loosening of the consensus rule set.  The way this works is that miners who 
> fail to upgrade to the new tighter ruleset will have their non-compliant 
> blocks orphaned by the hash power majority.  This is a strong incentive to 
> upgrade and has historically worked well.  If a minority subset of the 
> network didn’t want to abide by the new restricted rule set, a reasonable 
> solution would be for them to change the proof-of-work and start a spin-off 
> from the existing Bitcoin ledger 
> (https://bitcointalk.org/index.php?topic=563972.0 
> <https://bitcointalk.org/index.php?topic=563972.0>).
> 
> In the case of the coming network upgrade to larger blocks, a primary concern 
> of both businesses such as Coinbase and Bitpay, and most miners, is the 
> possibility of a blockchain split and the associated confusion, replay risk, 
> etc.  By applying techniques that are known to be successful for soft-forking 
> changes, we can likewise benefit in a way that makes a split less likely as 
> we move towards larger blocks.  Two proposed techniques to reduce the chances 
> of a split are:
> 
> 1. That miners begin to orphan the blocks of non-upgraded miners once a 
> super-majority of the network hash power has upgraded. This would serve as an 
> expensive-to-ignore reminder to upgrade.
> 
> 2. That, in the case where a minority branch emerges (unlikely IMO), majority 
> miners would continually re-org that minority branch with empty blocks to 
> prevent transactions from confirming, thereby eliminating replay risk.
> 
> Just like after a soft forking change, a minority that does not want to abide 
> by the current ruleset enforced by the majority could change the 
> proof-of-work and start a spin-off from the existing Bitcoin ledger, as 
> suggested by Emin.
> 
> Best regards,
> Peter R
> 
> 
> > On Mar 25, 2017, at 9:12 AM, CANNON via bitcoin-dev 
> > <bitcoin-dev@lists.linuxfoundation.org 
> > <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
> >
> > On 03/24/2017 07:00 PM, Aymeric Vitte wrote:
> >> I don't know what "Time is running short I fear" stands for and when 50%
> >> is supposed to be reached
> >
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> >
> > On 03/24/2017 07:00 PM, Aymeric Vitte wrote: > I don't know what
> > "Time is running short I fear" stands for and when 50% > is supposed
> > to be reached
> >
> > According to current hashrate distribution tracking site coin.dance,
> > very likely within less than four weeks according to current hashrate
> > takeover rate.
> >
> > While a fork is very likely, that I don't really fear, because worst
> > case scenario is that bitcoin still survives and the invalid chain
> > becomes an alt.  My fear is the centralized mining power being used
> > to attack the valid chain with intentions on killing it. [1]
> >
> > Shouldn't this 50% attack they are threatening be a concern? If it
> > is a concern, what options are on the table. If it is not a concern
> > please enlighten me as to why.
> >
> >
> > [1] Source:
> > https://www.reddit.com/r/Bitcoin/comments/6172s3/peter_rizun_tells_miners_to_force_a_hard_fork_by/
> >  
> > <https://www.reddit.com/r/Bitcoin/comments/6172s3/peter_rizun_tells_miners_to_force_a_hard_fork_by/>
> >
> > Text:
> >
> > The attack quoted from his article:
> > https://medium.com/@peter_r/on-the-emerging-consensus-regarding-bitcoins-block-size-li

Re: [bitcoin-dev] Defending against empty or near empty blocks from malicious miner takeover?

2017-03-25 Thread Peter R via bitcoin-dev
One of the purported benefits of a soft-forking change (a tightening of the 
consensus rule set) is the reduced risk of a blockchain split compared to a 
loosening of the consensus rule set.  The way this works is that miners who 
fail to upgrade to the new tighter ruleset will have their non-compliant blocks 
orphaned by the hash power majority.  This is a strong incentive to upgrade and 
has historically worked well.  If a minority subset of the network didn’t want 
to abide by the new restricted rule set, a reasonable solution would be for 
them to change the proof-of-work and start a spin-off from the existing Bitcoin 
ledger (https://bitcointalk.org/index.php?topic=563972.0).

In the case of the coming network upgrade to larger blocks, a primary concern 
of both businesses such as Coinbase and Bitpay, and most miners, is the 
possibility of a blockchain split and the associated confusion, replay risk, 
etc.  By applying techniques that are known to be successful for soft-forking 
changes, we can likewise benefit in a way that makes a split less likely as we 
move towards larger blocks.  Two proposed techniques to reduce the chances of a 
split are:

1. That miners begin to orphan the blocks of non-upgraded miners once a 
super-majority of the network hash power has upgraded. This would serve as an 
expensive-to-ignore reminder to upgrade.

2. That, in the case where a minority branch emerges (unlikely IMO), majority 
miners would continually re-org that minority branch with empty blocks to 
prevent transactions from confirming, thereby eliminating replay risk.

Just like after a soft forking change, a minority that does not want to abide 
by the current ruleset enforced by the majority could change the proof-of-work 
and start a spin-off from the existing Bitcoin ledger, as suggested by Emin.  

Best regards,
Peter R


> On Mar 25, 2017, at 9:12 AM, CANNON via bitcoin-dev 
>  wrote:
> 
> On 03/24/2017 07:00 PM, Aymeric Vitte wrote:
>> I don't know what "Time is running short I fear" stands for and when 50%
>> is supposed to be reached
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
> 
> On 03/24/2017 07:00 PM, Aymeric Vitte wrote: > I don't know what
> "Time is running short I fear" stands for and when 50% > is supposed
> to be reached
> 
> According to current hashrate distribution tracking site coin.dance,
> very likely within less than four weeks according to current hashrate
> takeover rate.
> 
> While a fork is very likely, that I don't really fear, because worst
> case scenario is that bitcoin still survives and the invalid chain
> becomes an alt.  My fear is the centralized mining power being used
> to attack the valid chain with intentions on killing it. [1]
> 
> Shouldn't this 50% attack they are threatening be a concern? If it
> is a concern, what options are on the table. If it is not a concern
> please enlighten me as to why.
> 
> 
> [1] Source:
> https://www.reddit.com/r/Bitcoin/comments/6172s3/peter_rizun_tells_miners_to_force_a_hard_fork_by/
> 
> Text:
> 
> The attack quoted from his article:
> https://medium.com/@peter_r/on-the-emerging-consensus-regarding-bitcoins-block-size-limit-insights-from-my-visit-with-2348878a16d8
> 
>    [Level 2] Anti-split protection: Miners will orphan the
>    blocks of non-compliant miners prior to the first larger block
>    to serve as a reminder to upgrade. Simply due to the possibility
>    of having blocks orphaned, all miners would be motivated to
>    begin signalling for larger blocks once support definitively
>    passes 51%. If some miners hold out (e.g., they may not be
>    paying attention regarding the upgrade), then they will begin
>    to pay attention after losing approximately $15,000 of revenue
>    due to an orphaned block.
> 
>    [Level 3] Anti-split protection: In the scenario where Levels
>    1 and 2 protection fails to entice all non-compliant miners to
>    upgrade, a small-block minority chain may emerge. To address the
>    risk of coins being spent on this chain (replay risk), majority
>    miners will deploy hash power as needed to ensure the minority
>    chain includes only empty blocks after the forking point. This
>    can easily be accomplished if the majority miners maintain a
>    secret chain of empty blocks (built off their last empty
>    block), publishing only as much of this chain as necessary
>    to orphan any non-empty blocks produced on the minority chain.
> 
> 
> 
> 
> - --
> Cannon
> PGP Fingerprint: 2BB5 15CD 66E7 4E28 45DC 6494 A5A2 2879 3F06 E832 
> Email: can...@cannon-ciota.info
> 
> NOTICE: ALL EMAIL CORRESPONDENCE NOT SIGNED/ENCRYPTED WITH PGP SHOULD 
> BE CONSIDERED POTENTIALLY FORGED, AND NOT PRIVATE. 
> If this matters to you, use PGP.

Re: [bitcoin-dev] The Excessive-Block Gate: How a Bitcoin Unlimited Node Deals With Large Blocks

2016-11-26 Thread Peter R via bitcoin-dev
Great discussion, Sergio and Tom!

> I now think my reasoning and conclusions are based on a false premise: that 
> BU block size policies for miners can be heterogeneous.


Right, miners who set their block size limits (BSL) above OR below the 
"effective BSL" are disadvantaged.  Imagine that we plot the distribution (by 
hash power) for all miners' BSLs.  We might get a chart that looks like this:

http://imgur.com/a/tWNr6 

In this chart, the "effective BSL" is defined as the largest block size that no 
less than half the hash power will accept.  
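
As a rough sketch of that definition (illustrative Python; miners are given as (hash-power share, block size limit) pairs, and the helper name is made up):

    def effective_bsl(miners):
        # miners: list of (hash_share, block_size_limit); shares sum to 1.0.
        # Returns the largest block size that no less than half the hash power accepts.
        best = None
        for _, q in sorted(miners, key=lambda m: m[1]):
            accepting_share = sum(share for share, bsl in miners if bsl >= q)
            if accepting_share >= 0.5:
                best = q
        return best

    # Example: 60% of the hash power accepts up to 2 MB, 40% only up to 1 MB,
    # so the effective BSL is 2 MB.
    print(effective_bsl([(0.4, 1_000_000), (0.35, 2_000_000), (0.25, 2_000_000)]))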

If a block is mined with a size Q that is less than the "effective BSL," then 
all the hash power with BSLs between BSL_min and Q will be forked from the 
longest chain (until they update their software if they're running Core or 
until their acceptance depth is hit if they're running BU).  This wastes these 
miners' hash power.  

However, if a block is mined with a size Q that is greater than the effective 
BSL, then all the hash power with BSLs between Q and BSL_max will temporarily 
be mining on a "destined to be orphaned" chain.  This also wastes these miners' 
hash power.  

Therefore, it is in the best interest of miners to all set the same block size 
limit (and reliably signal in their coinbase TX what that limit is, as done by 
Bitcoin Unlimited miners).  

We have empirical evidence the miners in fact behave this way: 

(1) No major miner has ever set his block size limit to less than 1 MB (not 
even those such as Luke-Jr who think 1 MB is too big) because doing so would 
just waste money.  

(2) After switching to Bitcoin Unlimited, both ViaBTC and the Bitcoin.com pool 
temporarily set their BSLs to 2 MB and 16 MB, respectively (of course keeping 
their _generation limit_ at 1MB).  However, both miners quickly reduced these 
limits back to 1 MB when they realized how it was possible to lose money in an 
attack scenario.  (This actually surprised me because the only way they could 
lose money is if some _other_ miner wasted even more money by purposely mining 
a destined-to-be-orphaned block.)   

The follow-up article I'm working on is about the topics we're discussing now, 
particularly about how Bitcoin Unlimited's “node-scale” behavior facilitates 
the emergence of a fluid and organic block size limit at the network scale.  
Happy to continue with this current discussion, however.

Best regards
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] The Excessive-Block Gate: How a Bitcoin Unlimited Node Deals With Large Blocks

2016-11-22 Thread Peter R via bitcoin-dev
Dear all,

Bitcoin Unlimited’s market-based solution to the block-size limit is slowly 
winning support from node operators and miners.  With this increased attention, 
many people are asking for a better explanation of how Bitcoin Unlimited 
actually works.  The article linked below describes how Bitcoin Unlimited’s 
excessive-block logic works from the perspective of a single node. (I’m hoping 
to do a follow-up article that describes how this “node-scale” behavior 
facilitates the emergence of a fluid and organic block size limit at the 
network scale.)

https://medium.com/@peter_r/the-excessive-block-gate-how-a-bitcoin-unlimited-node-deals-with-large-blocks-22a4a5c322d4
 


Best regards,
Peter R
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Towards Massive On-Chain Scaling: Presenting Our Block Propagation Results With Xthin

2016-05-30 Thread Peter R via bitcoin-dev
Dear all,

For the past two months, Andrew Clifford, Andrew Stone, @sickpig, Peter 
Tschipper and I have been collecting empirical data regarding block propagation 
with Xthin — both across the normal P2P network and over the Great Firewall of 
China. We have six Bitcoin Unlimited (BU) nodes running, including one located 
in Shenzhen and another in Shanghai, and we have collected data on the 
transmission and reception for over nine thousand blocks.

Here is a link to Part 1 (Methodology) of our 5-part article series on the 
testing we performed for this exciting new block relay technology:

https://medium.com/@peter_r/towards-massive-on-chain-scaling-presenting-our-block-propagation-results-with-xthin-da54e55dc0e4
 


We thank Jihan Wu from AntPool for the block source within Mainland China and 
@cypherdoc for the funding.  

Best regards,
Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compact Block Relay BIP

2016-05-09 Thread Peter R via bitcoin-dev
[9 May 16 @ 6:40 PDT]

For those interested in the hash collision attack discussion, it turns out 
there is a faster way to scan your set to find the collision:  you’d keep a 
sorted list of the hashes for each TX you generate and then use binary search 
to check that list for a collision for each new TX you randomly generate. 
Performing these operations can probably be reduced to N lg N complexity, which 
is doable for N ~2^32.   In other words, I now agree that the attack is 
feasible.  
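
One way to realize that N lg N search, sketched in illustrative Python (the toy txid construction and the small demo parameters are stand-ins for a real 2^32-candidate run): sort all candidate prefixes once, then scan adjacent entries.

    import hashlib, os

    def txid_prefix(tx: bytes) -> int:
        # First 64 bits of a (toy) double-SHA256 txid; a real attacker would grind
        # a malleable output to vary `tx`.
        return int.from_bytes(hashlib.sha256(hashlib.sha256(tx).digest()).digest()[:8], "big")

    def find_prefix_collision(num_candidates: int, bits: int = 64):
        # Sort the truncated txids once and scan adjacent entries: ~N log N work
        # instead of comparing all N*(N-1)/2 pairs.
        prefixes = sorted((txid_prefix(os.urandom(64)) >> (64 - bits), i)
                          for i in range(num_candidates))
        for (p1, i1), (p2, i2) in zip(prefixes, prefixes[1:]):
            if p1 == p2:
                return i1, i2
        return None

    # Small-scale demo: ~2^10 candidates against a 20-bit prefix collide ~40% of the
    # time; the real attack is the same code with bits=64 and num_candidates ~ 2**32.
    print(find_prefix_collision(1 << 10, bits=20))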

Cheers,
Peter

hat tip to egs

> On May 9, 2016, at 4:37 PM, Peter R via bitcoin-dev 
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> Greg Maxwell wrote:
> 
>> What are you talking about? You seem profoundly confused here...
>> 
>> I obtain some txouts. I write a transaction spending them in malleable
>> form (e.g. sighash single and an op_return output).. then grind the
>> extra output to produce different hashes.  After doing this 2^32 times
>> I am likely to find two which share the same initial 8 bytes of txid.
> 
> [9 May 16 @ 4:30 PDT]
> 
> I’m trying to understand the collision attack that you're explaining to Tom 
> Zander.  
> 
> Mathematica is telling me that if I generated 2^32 random transactions, the 
> chance that the initial 64 bits of at least one pair of those transactions 
> are identical is about 40%.  So I am following you up to this point.  Indeed, there is a good 
> chance that a pair of transactions from a set of 2^32 will have a collision 
> in the first 64 bits.  
> 
> But how do you actually find that pair from within your large set?  The only 
> way I can think of is to check if the first 64-bits is equal for every 
> possible pair until I find it.  How many possible pairs are there?  
> 
> It is a standard result that there are 
> 
>m! / [n! (m-n)!] 
> 
> ways of picking n numbers from a set of m numbers, so there are
> 
>(2^32)! / [2! (2^32 - 2)!] ~ 2^63
> 
> possible pairs in a set of 2^32 transactions.  So wouldn’t you have to 
> perform approximately 2^63 comparisons in order to identify which pair of 
> transactions are the two that collide?
> 
> Perhaps I made an error or there is a faster way to scan your set to find the 
> collision.  Happy to be corrected…
> 
> Best regards,
> Peter
> 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compact Block Relay BIP

2016-05-09 Thread Peter R via bitcoin-dev
Greg Maxwell wrote:

> What are you talking about? You seem profoundly confused here...
> 
> I obtain some txouts. I write a transaction spending them in malleable
> form (e.g. sighash single and an op_return output).. then grind the
> extra output to produce different hashes.  After doing this 2^32 times
> I am likely to find two which share the same initial 8 bytes of txid.

[9 May 16 @ 4:30 PDT]

I’m trying to understand the collision attack that you're explaining to Tom 
Zander.  

Mathematica is telling me that if I generated 2^32 random transactions, the 
chance that the initial 64 bits of at least one pair of those transactions 
are identical is about 40%.  So I am following you up to this point.  Indeed, there is a good 
chance that a pair of transactions from a set of 2^32 will have a collision in 
the first 64 bits.  

But how do you actually find that pair from within your large set?  The only 
way I can think of is to check if the first 64-bits is equal for every possible 
pair until I find it.  How many possible pairs are there?  

It is a standard result that there are 

m! / [n! (m-n)!] 

ways of picking n numbers from a set of m numbers, so there are

(2^32)! / [2! (2^32 - 2)!] ~ 2^63

possible pairs in a set of 2^32 transactions.  So wouldn’t you have to perform 
approximately 2^63 comparisons in order to identify which pair of transactions 
are the two that collide?

Perhaps I made an error or there is a faster way to scan your set to find the 
collision.  Happy to be corrected…
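
For reference, both figures can be checked numerically in a few lines (illustrative sketch):

    import math

    n = 2**32
    pairs = math.comb(n, 2)                                  # distinct pairs, ~2^63
    p_collision = 1 - math.exp(-n * (n - 1) / (2 * 2**64))   # birthday bound, 64-bit prefix

    print(math.log2(pairs))   # ~63.0
    print(p_collision)        # ~0.39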

Best regards,
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compact Block Relay BIP

2016-05-09 Thread Peter R via bitcoin-dev
Hi Pieter,

> I tried to derive what length of short ids is actually necessary (some
> write-up is on
> https://gist.github.com/sipa/b2eb2e486156b5509ac711edd16153ed but it's
> incomplete).
> 
> For any reasonable numbers I can come up with (in a very wide range),
> the number of bits needed is very well approximated by:
> 
>  log2(#receiver_mempool_txn * #block_txn_not_in_receiver_mempool /
> acceptable_per_block_failure_rate)
> 
> For example, with 20000 mempool transactions, 2500 transactions in a
> block, 95% hitrate, and a chance of 1 in 10000 blocks to fail to
> reconstruct, needed_bits = log2(20000 * 2500 * (1 - 0.95) / 0.0001) =
> 34.54, or 5 byte txids would suffice.
> 
> Note that 1 in 10000 failures may sound like a lot, but this is for each
> individual connection, and since every transmission uses separately
> salted identifiers, occasional failures should not affect global
> propagation. Given that transmission failures due to timeouts, network
> connectivity, ... already occur much more frequently than once every few
> gigabytes (what 10000 blocks corresponds to), that's probably already
> more than enough.
> 
> In short: I believe 5 or 6 byte txids should be enough, but perhaps it
> makes sense to allow the sender to choose (so he can weigh trying
> multiple nonces against increasing the short txid length).
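
The 34.5-bit figure above can be reproduced directly (a small illustrative sketch; the argument names are just descriptive):

    import math

    def short_id_bits(mempool_txn, block_txn_not_in_mempool, per_block_failure_rate):
        # Bits of short id needed to keep the expected reconstruction-failure rate below target.
        return math.log2(mempool_txn * block_txn_not_in_mempool / per_block_failure_rate)

    # 20000-tx mempool, 2500-tx block at a 95% hit rate (125 misses), 1-in-10000 failure target:
    print(short_id_bits(20000, 2500 * (1 - 0.95), 0.0001))   # ~34.5 -> 5-byte short ids suffice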

[9 May 16 @ 11am PDT]  

We worked on this with respect to “Xthin" for Bitcoin Unlimited, and came to a 
similar conclusion.  

But we (I think it was theZerg) also noticed another trick: if the node 
receiving the thin blocks has a small number of collisions with transactions in 
its mempool (e.g., 1 or 2), then it can test each possible block against the 
Merkle root in the block header to determine the correct one.  Using this 
technique, it should be possible to further reduce the number of bytes used for 
the txids.  That being said, even thin blocks built from 64-bit short IDs 
represent a tremendous savings compared to standard block propagation.  So we 
(Bitcoin Unlimited) decided not to pursue this optimization any further at that 
time.
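
A rough sketch of that Merkle-root disambiguation idea (illustrative Python; txids are assumed to be 32-byte hashes in internal byte order, and the helper names are made up):

    import hashlib
    from itertools import product

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root(txids):
        # Bitcoin-style Merkle root: duplicate the last entry when a level has odd length.
        level = list(txids)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def resolve_short_id_collisions(slots, header_merkle_root):
        # slots: one list of candidate txids per block position (length 1 normally,
        # >1 where a short id matched several mempool transactions).
        for candidate in product(*slots):
            if merkle_root(candidate) == header_merkle_root:
                return list(candidate)
        return None   # reconstruction failed; fall back to requesting the full block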

***

It’s also interesting to ask what the information-theoretic minimum amount of 
information necessary for a node to re-construct a block is. The way I’m 
thinking about this currently[1] is that the node needs all of the transactions 
in the block that were not initially part of its mempool, plus enough 
information to select and ordered subset from that mempool that represents the 
block.  If m is the number of transactions in mempool and n is the number of 
transactions in the block, then the number of possible subsets (C') is given by 
the binomial coefficient:

  C' =  m! / [n! (m - n)!]

Since there are n! possible orderings for each subset, the total number of 
possible blocks (C) of size n from a mempool of size m is

  C = n! C’ = m! / (m-n)!

Assuming that all possible blocks are equally likely, the Shannon entropy (the 
information that must be communicated) is the base-2 logarithm of the number of 
possible blocks.  After making some approximations, this works out very close to

    minimum information ~= n * log2(m),

which for your case of 20,000 transactions in mempool (m = 20,000) and a 
2500-transaction block (n = 2500), yields

   minimum information = 2500 * log2(20,000) ~ 2500 * 15 bits.

In other words, a lower bound on the information required is about 2 bytes for 
every transaction in the block that the node is already aware 
of, as well as all the missing transactions in full. 
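
A quick numeric check of that bound (illustrative sketch; the exact term via log-gamma against the n * log2(m) approximation):

    import math

    def ordered_subset_entropy_bits(m, n):
        # log2( m! / (m-n)! ): bits to pick an ordered n-tx block from an m-tx mempool.
        return (math.lgamma(m + 1) - math.lgamma(m - n + 1)) / math.log(2)

    m, n = 20000, 2500
    exact = ordered_subset_entropy_bits(m, n)
    approx = n * math.log2(m)
    print(exact / n, approx / n)   # ~14.2 vs ~14.3 bits (about 2 bytes) per known transaction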

Of course, this assumes an unlimited number of round trips, and it is probably 
complicated by other factors that I haven’t considered (queue the “spherical 
cow” jokes :), but I thought it was interesting that a technique like Xthin or 
compact blocks is already pretty close to this limit.  

Cheers,
Peter 

[1] There are still some things that I can’t wrap my mind around that I’d love 
to discuss with another math geek :)


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [patch] Switching Bitcoin Core to sqlite db

2015-11-15 Thread Peter R via bitcoin-dev
Hello Jorge:
> > What rules does Bitcoin obey? 
> 
> > Thanks to the work of many people, part of the consensus rules are finally 
> encapsulated in the libbitcoinconsensus library. I'm currently writing a 
> document to complete the encapsulation of the specification of the consensus 
> rules.
> 
I applaud your work on the consensus library.  I think it an important step to 
encouraging multiple competing implementations of the protocol.
> > I am not convinced that Bitcoin even *has* a block size limit, let alone 
> > that it can enforce one against the invisible hand of the market.
> 
> You keep insisting that some consensus rules are not consensus rules while 
> others "are clearly a very different thing". What technical difference is 
> there between the rule that impedes me from creating transactions bigger than 
> X and the rules that prevent me frm creatin new coins (not as a miner, as a 
> regular user in a transaction with more coins in the outputs than in the 
> inputs)?
> 
What technical difference is there between a cat and a dog? They both have four 
legs and a furry coat. 

I think you’re using the term “technical difference” to mean something very 
specific.  Perhaps you could clarify exactly how you are defining that term 
because to me it is crystal clear that creating coins out of thin air is very 
different than accepting a block 1.1 MB in size and full of valid TXs.  There 
are many technical differences between the two. For example, technically the 
first allows coins to be created randomly while the second doesn’t.  
> What about property enforcement? If the invisible hand of the market is what 
> decides consensus rules instead of their (still incomple) specification (aka 
> libconsensus), then the market could decide to stop enforcing ownership.
> 
Correct.  Bitcoin is an experiment and could still fail (e.g., the network 
could allow people to move coins without valid signatures).  For Bitcoin to be 
viable, the network of miners and node operators being net-econo-rational 
actually is probably a core axiom that must be accepted.  
> You also keep assuming that somehow it is a universal law that users must 
> eventually converge under the most-work chain. People follow the most-work 
> VALID chain, but if they consciously decide to implement different rules 
> (different definitions of "valid block") then their chains can diverge, and 
> once they do they won't converge again (unless/until one group decides to 
> implement the rules of the other exactly again)
> 
It is a fact that two competing forks can persist for at least a short amount of 
time—we saw this a few years ago with the LevelDB bug and again this summer 
with the SPV mining incident.  In both cases, there was tremendous pressure to 
converge back to a single chain.

Could two chains persist indefinitely?  I don’t know.  No one knows.  My gut 
feeling is that since users would have coins on both sides of the fork, there 
would be a fork arbitrage event (a “forkbitrage”) where speculators would sell 
the coins on the side they predict to lose in exchange for additional coins on 
the side they expect to win.  This could actually be facilitated by exchanges 
once the fork event is credible and before the fork actually occurs, or even in 
a futures market somehow.  I suspect that the forkbitrage would create an 
unstable equilibrium where coins on one side quickly devalue.  Miners would 
then abandon that side in favour of the other, killing the fork because 
difficulty would be too high to find new blocks.  Anyways, I think even *this* 
would be highly unlikely.  I suspect nodes and miners would get inline with 
consensus as soon as the fork event was credible.  

Cryptocurrency is a new area of interdisciplinary research.  We are all 
learning together and no one knows how things will unfold.  

Best regards,
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [patch] Switching Bitcoin Core to sqlite db

2015-11-14 Thread Peter R via bitcoin-dev

> On Nov 14, 2015, at 6:10 PM, Gregory Maxwell  wrote:
> 
> On Sun, Nov 15, 2015 at 1:45 AM, Peter R  wrote:
>> My previous email described how Type 1 consensus failures can be safely 
>> dealt with.  These include many kinds of database exceptions (e.g., the 
>> LevelDB fork at block #225,430), or consensus mismatches regarding the max 
>> size of a block.
> 
> The size of a block is not a type 1 failure. There is a clear, known,
> unambiguous, and easily measured rule in the system that a block is
> not permitted to have a size over a threshold. A block that violates
> this threshold is invalid.  

You are looking at the problem from a “top down” governance perspective 
assuming you know what code is actually being run and what rules the market 
wants.  As Bitcoin evolves away from the current development monoculture, rules such 
as the max block size may no longer be perfectly clear.  However, we can prove 
that the following results will continue to hold:

1. A node with a block size limit greater than the hash-power weighted median 
will always follow the longest chain.

2. An excessive (e.g., greater than 1 MB) block will be accepted into the 
longest chain if it is smaller than the hash-power weighted median block size 
limit.

Already today, different nodes have different block size limits.  There are 
even some miners today that will attempt to build upon blocks larger than 1 MB 
if anyone dares to create one.  At some point, the majority of the hash power 
will support something larger than 1 MB.  When that first bigger block comes 
and gets buried in the longest chain, hold-out nodes (e.g., MP says he will 
never increase his limit) will have to make a choice: fight consensus or track 
consensus!  I know that I would want my node to give up on its block size limit 
and track consensus.  You may decide to make a different choice.  

You said that "a block that violates this [block size limit] threshold is 
invalid.”  I agree.  If the nodes and miners rejected the block as invalid then 
it would not persist as the longest chain. If the nodes and miners accepted the 
block and continued to build on top of it, then that chain would be Bitcoin 
(whether you personally agree of not).  

Bitcoin is ultimately a creature of the market, governed by the code people 
freely choose to run. Consensus is then an emergent property, objectively 
represented by the longest persistent chain.  Proof-of-work both enforces and 
defines the rules of the network.  


>  The only way I can see to classify that
> as a type 1 failure is to call the violation of any known system rule
> a type 1 failure.  "Spends a coin that was already spent" "provides a
> signature that doesn't verify with the pubkey" "creates bitcoins out
> of thin air" -- these are not logically different than "if size>x
> return false”.

I think you’re being intentionally obtuse here: accepting a block composed 
entirely of valid transactions that is 1.1 MB is entirely different than 
accepting a TX that creates ten thousand bitcoins out of thin air.  The 
market would love the former but abhor the later.  I believe you can recognize 
the difference.  


>> Type 2 consensus failures are more severe but also less likely (I’m not 
>> aware of a Type 2 consensus failure besides the 92 million bitcoin bug from 
>> August 2010).  If Core was to accept a rogue TX that created another 92 
>> million bitcoins, I think it would be a good thing if the other 
>> implementations forked away from it (we don’t want bug-for-bug compatibility 
>> here).
> 
> The only consensus consistency error we've ever had that I would have
> classified as potentially type 1 has been the BDB locks issue.

Thank you for conceding on that point. 

> Every other one, including the most recently fixed one (eliminated by
> BIP66) was a type 2, by your description. They are _much_ more common;
> because if the author understood the possible cases well enough to
> create an "I don't know" case, they could have gone one step further
> and fully defined it in a consistent way. The fact that an
> inconsistency was possible was due to a lack of complete knowledge.
> 
> Even in the BDB locks case, I don't know that the type 1 distinction
> completely applies, a lower layer piece of software threw an error
> that higher layer software didn't know was possible, and so it
> interpreted "I don't know" as "no"-- it's not that it chose to treat I
> don't know as no, its that it wasn't authored with the possibility of
> I don't know in mind, and that exceptions were used to handle general
> failures (which should have been treated as no). Once you step back to
> the boundary of the calling code (much less the whole application) the
> "I don't know" doesn't exist anymore; there.

Please don’t take my comments and observations as criticisms.  I think the Core 
Dev team has done excellent 

Re: [bitcoin-dev] [patch] Switching Bitcoin Core to sqlite db

2015-11-14 Thread Peter R via bitcoin-dev

> On Sunday, November 15, 2015 1:02:33 AM Peter R via bitcoin-dev wrote:
>> A group of us have been exploring this “meta-cognition” idea with Bitcoin
>> Unlimited.  For example, Bitcoin Unlimited can be (optionally) made to
>> automatically fork to the longest chain if it “gets stuck” and can neither
>> prove that a block is valid nor that the block is invalid.
> 
> This situation isn't something that can be ignored and simply moved past. If 
> you can't determine the validity of a block, you also cannot process its 
> results correctly. Taking for example the BDB/LevelDB issue, the result was 
> that BDB failed to accept further changes to the UTXO set. Unless the UTXO 
> set 
> could be updated correctly, there is no way to even attempt to validate the 
> next block or any new transactions.

Great point, Luke! 

Indeed, whether the program can or cannot continue after a Type 1 consensus 
mismatch depends on the specifics of the situation and exactly how the code was 
written.  But I agree: there are cases where the program *can’t* continue.  In 
those cases it would halt.  This would require manual intervention to fix but 
avoids the problem of potential double-spends during the fork event.  This 
would be preferable to knowingly causing a fork.  

Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] How to evaluate block size increase suggestions.

2015-11-14 Thread Peter R via bitcoin-dev
> It looks like some specific meta-level criteria would help more at this point 
> than new proposals all exploring a different variants of block size increase 
> schedules.

I agree.  In fact, I’ll go meta on your meta and suggest that we should first 
discuss how Bitcoin should be governed in the first place.  Should Bitcoin 
evolve from the “bottom up,” or from the “top down”?

If one’s answer is from the “top-down,” then the meta-level criteria can be 
endlessly debated, for they all involve some sort of tradeoff, they all require 
some sort of compromise.  The “top down” perspective holds that people might 
make poor choices if given the freedom to easily do so--it holds that the 
trade-offs must be balanced instead by experts.  

However, if one's answer is from the “bottom up,” then the meta-level criteria 
is very easy: we do what the people wants. We allow the people to weigh the 
tradeoffs and then we watch as consensus emerges through a decentralized 
process, objectively represented by the longest proof-of-work chain.  

Regarding the block size limit debate, at the end of the day it comes down to 
two things:

1.  How big of a block will my node accept today?

2.  What do I want my node to do if the longest chain includes a block larger 
than the limit I set?

If one concedes that Bitcoin should be governed from the “bottom up,” then it 
is already possible to empower each node operator to more easily express his 
free choice regarding the size of blocks he is willing to accept, while 
simultaneously ensuring that his node tracks consensus.

Best regards,
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [patch] Switching Bitcoin Core to sqlite db

2015-11-14 Thread Peter R via bitcoin-dev
Hi Greg,

Like you said, the issue with using more than one database technology is not 
that one node would prove that Block X is valid while the another node proves 
that Block X is NOT valid.  Instead, the problem is that one node might say 
“valid” while the other node says “I don’t know.”

It is often said that what caused the Level DB fork was that the old version 
determined that the triggering block was invalid while the new version 
determined the opposite.  This interpretation of the fork event has lead to the 
“bug-for-bug”-compatibility fear regarding multiple implementations of the 
protocol (or multiple database technologies).  In reality, this fear is largely 
unfounded.  

The old version ran out of LDB locks and couldn’t execute properly.  If the 
software was written with the philosophy that tracking consensus was more 
important than bug-for-bug compatibility, it would have returned an exception 
rather than “invalid.”  At that point, the software would have checked what 
decision the rest of the network came to about the block in question.   My node 
would have forked to the longest chain, automatically assuming that the 
questionable block was valid; your node may have made a different decision 
(it’s a question of ideology at that point).  

A group of us have been exploring this “meta-cognition” idea with Bitcoin 
Unlimited.  For example, Bitcoin Unlimited can be (optionally) made to 
automatically fork to the longest chain if it “gets stuck” and can neither 
prove that a block is valid nor that the block is invalid.  Similarly, Bitcoin 
Unlimited can be (optionally) made to automatically fork to a chain that 
contains a block marginally bigger than its block size limit—once that block is 
buried sufficiently in what has emerged as the longest persistent chain. 

Thinking from this perspective might go along way towards decentralizing 
development, the emergence of multiple competing implementations of the 
protocol, and true “bottom up” governance.  

Best regards,
Peter



> On Oct 29, 2015, at 9:28 PM, Gregory Maxwell  wrote:
> 
> On Fri, Oct 30, 2015 at 4:04 AM, Peter R  wrote:
>> Can you give a specific example of how nodes that used different database 
>> technologies might determine different answers to whether a given 
>> transaction is valid or invalid?  I’m not a database expert, but to me it 
>> would seem that if all the unspent outputs can be found in the database, and 
>> if the relevant information about each output can be retrieved without 
>> corruption, then that’s all that really matters as far as the database is 
>> concerned.
> 
> If you add to those set of assumptions the handling of write ordering
> is the same (e.g. multiple updates in an change end up with the same
> entry surviving) and read/write interleave returning the same results
> then it wouldn't.
> 
> But databases sometimes have errors which cause them to fail to return
> records, or to return stale data. And if those exist consistency must
> be maintained; and "fixing" the bug can cause a divergence in
> consensus state that could open users up to theft.
> 
> Case in point, prior to leveldb's use in Bitcoin Core it had a bug
> that, under rare conditions, could cause it to consistently return not
> found on records that were really there (I'm running from memory so I
> don't recall the specific cause).  Leveldb fixed this serious bug in a
> minor update.  But deploying a fix like this in an uncontrolled manner
> in the bitcoin network would potentially cause a fork in the consensus
> state; so any such fix would need to be rolled out in an orderly
> manner.
> 
>> I’d like a concrete example to help me understand why more than one 
>> implementation of something like the UTXO database would be unreasonable.
> 
> It's not unreasonable, but great care is required around the specifics.
> 
> Bitcoin consensus implements a mathematical function that defines the
> operation of the system and above all else all systems must agree (or
> else the state can diverge and permit double-spends);  if you could
> prove that a component behaves identically under all inputs to another
> function then it can be replaced without concern but this is something
> that cannot be done generally for all software, and proving
> equivalence even in special cases it is an open area of research.  The
> case where the software itself is identical or nearly so is much
> easier to gain confidence in the equivalence of a change through
> testing and review.
> 
> With that cost in mind one must then consider the other side of the
> equation-- utxo database is an opaque compressed representation,
> several of the posts here have been about desirability of blockchain
> analysis interfaces, and I agree they're sometimes desirable but
> access to the consensus utxo database is not helpful for that.
> Similarly, other things suggested are so phenomenally slow that it's
> unlikely that a node would 

Re: [bitcoin-dev] This thread is not about the soft/hard fork technical debate

2015-10-05 Thread Peter R via bitcoin-dev

> If we want decentralization (or even mere stability), we must impose a 
> counterbalancing rule such that each past commit makes one *less* likely to 
> get their next commit pulled. For example, a "one man one commit" policy.

Haha great stuff, NotMike!

Indeed, it’s not enough to keep the block size limit small so that every man 
can run his own node, we also must also implement your proposed “one man, one 
commit” policy!  Think of the decentralization if everyone Bitcoin user is also 
contributing code!/s

Peter



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] This thread is not about the soft/hard fork technical debate

2015-10-05 Thread Peter R via bitcoin-dev
> On Oct 5, 2015, at 2:08 PM, Tom Zander via bitcoin-dev 
>  wrote:
> On Monday 5. October 2015 20.56.34 Gregory Maxwell wrote:
>> (In this case, I don't even believe we have any regular
>> contributors that disagree).
> 
> Regular contributor?
> 
> Please explain how for a fork in the protocol should you only listen to 
> regular Bitcoin Core contributors?

Furthermore, Bitcoin is significantly more than a "software project": it sits 
at a unique intersection of computer science, economics, physics, law and more. 
While I agree that minor bug-fixes and code-maintenance-type issues should be 
dealt with quietly by developers, decisions regarding Bitcoin’s governance and 
its evolution should be shaped by an interdisciplinary group of stakeholders 
from across the community.  The hard- vs soft-fork debate is not just a code 
maintenance issue.  

Once again, let’s use the current gridlock in Core to rally the growth of new 
forkwise-compatible implementations of the protocol.  Gavin and Mike’s 
initiative with BIP101 and Bitcoin XT should be encouraged as one possible 
model for coming to consensus on hard-forking changes.  

Best regards,
Peter



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] This thread is not about the soft/hard fork technical debate

2015-10-05 Thread Peter R via bitcoin-dev

> On Oct 5, 2015, at 2:30 PM, Gregory Maxwell <gmaxw...@gmail.com> wrote:
> 
> On Mon, Oct 5, 2015 at 9:27 PM, Peter R via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Once again, let’s use the current gridlock
> 
> Allow me to state unequivocally-- since we've had problems with people
> stating non-factuals as fact without getting adequately clear
> correction--, there is no gridlock here and an effort to manufacture
> one for political reasons will not be successful.

I disagree.  There is gridlock in the Core Dev development process.  

Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] This thread is not about the soft/hard fork technical debate

2015-10-05 Thread Peter R via bitcoin-dev

> On Oct 5, 2015, at 6:37 PM, Tom Harding via bitcoin-dev 
>  wrote:
> 
> On 10/5/2015 1:56 PM, Gregory Maxwell via bitcoin-dev wrote:
>> In this case, I don't even believe we have any regular contributors
>> that disagree.
> 
> Since Gavin Andresen chose you to be one of 4 people who decides whose
> contributions are accepted to the Core project, shouldn't you recuse
> yourself from referencing "regular contributor" as some kind of bar to
> an opinion being worthy?
> 
> You don't want to be accused of squelching a person's opinions by
> nacking or sitting on commits, then turning around and branding those
> opinions as worthless because they are not from a "regular contributor."
> Do you?

Great point, Tom! 

In fact, you’ve just explained the dynamics that create “centralizing pressure” 
in regards to development:  If the weight of a person’s opinion is proportional 
to how many commits that person has made, and if the probability of getting a 
commit pulled is proportional to the weight of that person’s opinion, well…I’m 
pretty sure this results in a differential equation whose solution is 
ever-increasing centralized control of the code base.  

I believe we should work to deprecate the idea that Core is somehow the “core 
of Bitcoin," in favour of multiple competing implementations. XT and btcd are 
two working examples of this idea.  Let’s make it easier for the community to 
determine the evolution of Bitcoin by making it easier for the community to 
express their vote based on the code we choose to run.  

Best regards,
Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dev-list's stance on potentially altering the PoW algorithm

2015-10-02 Thread Peter R via bitcoin-dev
> On Oct 2, 2015, at 1:20 AM, Jorge Timón via bitcoin-dev 
>  wrote:
> On Oct 2, 2015 10:03 AM, "Daniele Pinna via bitcoin-dev" 
>  > wrote:
> > should an algorithm that guarantees protection from ASIC/FPGA optimization 
> > be found.
> This is demonstrably impossible: anything that can be done with software can 
> be done with hardware. This is computer science 101.  And specialized 
> hardware can always be more efficient, at least energy-wise.
> 
I encourage Alex and Dmitry to consider submitting their paper to Ledger, where 
it will be reviewed objectively and with an open mind.  The authors have 
motivated their work, framed it in its scholarly context, and made explicit the 
contributions their paper makes.  Their manuscript, "Asymmetric proof-of-work 
based on the Generalized Birthday problem," clearly represents a great deal of 
work by the authors and I commend them for their efforts.  

In the link Adam Back provided, Greg Maxwell mentioned that “it is far from 
clear that 'memory hardness' is actually a useful goal.”  I agree with this 
statement; however, regardless of whether memory hardness turns out to be a 
useful goal in regards to cryptocurrency or not, a paper analyzing memory-hard 
proof-of-work schemes is certainly useful in helping us to figure that out. 

Best regards,
Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Weak block thoughts...

2015-09-23 Thread Peter R via bitcoin-dev
Thanks for the reply, Gavin.  I agree on all points.  

Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Your Gmaxwell exchange

2015-08-30 Thread Peter R via bitcoin-dev
Hi Greg,

>> I agree that miners may change their level of centralization.  This neither
>> affects the model nor the results presented in the paper.
> 
> It has tremendous significance to the real-world impact of your results.

The paper makes no claims about miners changing their level of 
centralization.  Whether it is or is not significant, it is outside the scope 
of the paper and irrelevant to the result that a fee market exists without a 
block size limit (given the assumption that some information about the 
transactions included in a block is communicated between miners during block 
solution propagation).  

 If not for the other errors in your work,

Again, what you are calling errors are assumptions--in fact very reasonable 
assumptions that we know are valid for the network as a whole right now.  You 
are proposing that maybe they won't be valid in the future if the miners move 
to some hypothetical configuration that you worry may be possible.  I say prove 
it: show rigorously what the network would look like in a case where 
information about the transactions contained in a block is never communicated 
with the block solution.  How do you get the miners to agree (assuming such a 
configuration is even physically possible)?  Do you need all the miners to 
agree or only a portion of them?  How reasonable is it that the system moves 
into such a configuration?  I know you spoke to this in your last response, but 
I mean really prove it...with a mathematical model, with diagrams and charts, 
with equations, with your assumptions clearly stated so that they can be put to 
the test--not by waving your hands on the bitcoin-dev mailing list.

This is an exciting field of research and we should encourage people to 
continue to explore these questions.   

 this point would make the
 take away of your work not "a healthy transaction fee market exists
 without a block size limit" but rather "a decentralized bitcoin
 cannot exist" -- as, accepting the other errors as fact, your model
 shows that centralizing mining is always strictly more profitable at
 any level of fee demand; because your model equivalently shows that for
 any level of fee demand and gamma, miners could increase their income
 by centralizing further.

OK, I actually sort of agree with your very last sentence above.  My paper 
indeed suggests that the most profitable configuration is one where there is 
only one super pool. Is that likely to happen?  I personally don't think so 
because of the game theory involved.  But regardless of my opinion or your 
opinion on that matter, whether it is or isn't likely to happen is outside the 
scope of my paper.  It is irrelevant to the result that a fee market exists 
without a block size limit given the assumption that there is more than a 
single miner.

 I absolutely agree that simplifications are useful and essential, but
 it is critically important to call them out before someone mistakes
 theoretical work as useful motivation for policy in the non-simplified
 system.

I do call them out.  Furthermore, I've already agreed with you to further 
clarify these assumptions when I make the other corrections to the paper.  
Doing so will make the paper stronger.  

I'm not going to address the remainder of your email because I believe that we 
are now talking past each other.  I do appreciate the interest and attention 
that you've paid to my work on the Transaction Fee Market, and I also recognize 
the important contributions you've made such as coinjoin and HD wallets.  

Best regards,
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-30 Thread Peter R via bitcoin-dev
Hi Daniele,

I don't think there is any contention over the idea that miners that control a 
larger percentage of the hash rate, h / H, have a profitability advantage if 
you hold all the other variables of the miner's profit equation constant.  I 
think this is important: it is a centralizing factor similar to other economies 
of scale.  

However, that is outside the scope of the result that an individual miner's 
profit per block is always maximized at a finite block size Q* if Shannon 
Entropy about each transaction is communicated during the block solution 
announcement.  This result is important because it explains how a minimum fee 
density exists and it shows how miners cannot create enormous spam blocks for 
no cost, for example.  
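
To make that statement concrete, here is my shorthand for the kind of model 
involved (not the paper's exact equations): write the expected revenue of a 
block of size Q as

\[
\langle V(Q) \rangle \;\approx\; \bigl(R + F(Q)\bigr)\, e^{-\tau(Q)/T},
\qquad \tau(Q) \approx \frac{Q}{\gamma C},
\]

where R is the block reward, F(Q) the fees claimable in Q bytes (bounded by the 
fees available in mempool), \tau(Q) the propagation time, T the mean block 
interval, \gamma the coding gain and C the channel capacity.  Because F(Q) is 
bounded while the orphan factor decays, the maximum is attained at a finite Q*.  
Near Q = 0, a transaction of size s and fee f is only worth including if 
roughly f/s > R/(\gamma C T), which is where the minimum fee density comes 
from, and a zero-fee spam block of size Q costs its miner about 
R(1 - e^{-Q/(\gamma C T)}) in expected orphan losses.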

Best regards,
Peter


 2) Whether it's truly possible for a miner's marginal profit per unit of hash 
 to decrease with increasing hashrate in some parametric regime. This however 
 directly contradicts the assumption that an optimal hashrate exists beyond 
 which the revenue per unit of hash v' < v if h' > h. 
 Q.E.D. 
 
 This theorem in turn implies the following corollary:
 
 COROLLARY: The marginal profit curve is a monotonically increasing function of 
 miner hashrate.
 
 This simple theorem, suggested implicitly by Gmaxwell, disproves any and all 
 conclusions of my work. Most importantly, centralization pressures will 
 always be present. 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Unlimited Max Blocksize (reprise)

2015-08-30 Thread Peter R via bitcoin-dev
Hello Tom, Daniele --

Thank you Tom for pointing out the knapsack problem to all of us.  I will 
include a note about it when I make the other corrections to the Fee Market 
paper.

I agree with what Daniele said previously.  The other non-greedy solutions to 
the knapsack problem are most relevant when one is choosing from a smaller 
number of items where certain items have a size similar to the size of the 
knapsack itself.  For example, these other solutions would be useful if 
transactions were often 50 kB, 100 kB, 400 kB in size, and we were trying to 
find the optimal TX set to fit into a 1 MB block.  

However, since the average transaction size is ~500 bytes, even with a block 
size limit of 1 MB, we are looking at up to 2000 transactions.  The 
quantization effects become small.  Rather than the 22 triangles shown in Fig. 3 
(http://imgur.com/rWxZddg), there are hundreds or a few thousand, and we can 
reasonably approximate the discrete set of points as a continuous curve.  As 
Daniele pointed out, the greedy algorithm assumed in the paper is 
asymptotically optimal in such a case.
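
For reference, the greedy heuristic being discussed is just “sort by fee density 
and fill until the limit.”  A minimal sketch (the mempool data here is made up, 
and this is illustrative code, not consensus code):

def greedy_select(transactions, max_block_size=1_000_000):
    """transactions: list of (txid, size_bytes, fee_satoshis) tuples."""
    # Sort by fee density (satoshis per byte), highest first.
    by_density = sorted(transactions, key=lambda t: t[2] / t[1], reverse=True)
    block, used = [], 0
    for txid, size, fee in by_density:
        if used + size <= max_block_size:   # skip-and-continue variant
            block.append(txid)
            used += size
    return block, used

# Illustrative mempool only:
mempool = [("a", 500, 10_000), ("b", 400, 12_000), ("c", 100_000, 900_000)]
print(greedy_select(mempool))

With ~500-byte transactions and a 1 MB limit, the few bytes of slack this 
heuristic leaves unused are negligible, which is the sense in which it is 
asymptotically optimal.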

Best regards,
Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] On the Nature of Miner Advantages in Uncapped Block Size Fee Markets

2015-08-29 Thread Peter R via bitcoin-dev
 Of course this assumes the network does not change any as a result of
 such a system. But such a system provides strong incentives for the
 network to centralize in other ways (put all the mining nodes in one DC
 for all miners, etc).

If all the mining nodes are in one data center, and if all the nodes are 
programmed to build blocks in essentially the same way, then I would agree that 
the orphan cost would be negligible!  I will add this as an example of a 
network configuration where the results of my paper would be less relevant.  

Peter  


On 2015-08-29, at 7:35 PM, Matt Corallo lf-li...@mattcorallo.com wrote:

 Of course this assumes the network does not change any as a result of
 such a system. But such a system provides strong incentives for the
 network to centralize in other ways (put all the mining nodes in one DC
 for all miners, etc).
 
 Matt
 
 On 08/30/15 02:33, Matt Corallo via bitcoin-dev wrote:
 It is not a purely academic scenario that blocks contain effectively no
 information (that was not previously relayed). I'm not aware of any
 public code to do so, but I know several large miners who pre-relay the
 block(s) they are working on to other nodes of theirs around the globe.
 This means at announce-time you have only a few bytes to broadcast (way
 less than a packet, and effects of using smaller packets to relay things
 vs larger packets are very small, if anything). After you've broadcast
 to all of your nodes, hops to other mining nodes are probably only a
 handful of ms away with very low packet loss, so relay time is no longer
 connected to transaction inclusion at all (unless you're talking about
 multi-GB blocks). Of course, this is relay time for large miners who can
 invest time and money to build such systems. Small miners are completely
 screwed in such a system.
 
 Thus, the orphan risk for including a transaction is related to the
 validation time (which is only DB modify-utxo-set time, essentially,
 which maybe you can optimize much of that away, too, and only have to
 pass over mempool or so). Anyway, my point, really, is that though
 miners will have an incentive to not include transactions which will
 trigger validation by other nodes (ie things not already in their
 mempool), the incentive to not include transactions which have already
 been relayed around sufficiently is, while not theoretically zero, as
 near to zero in practice as you can get.
 
 Matt
 
 On 08/29/15 23:17, Peter R wrote:
 Hello Matt and Daniele,
 
 this seems to ignore the effects of transaction validation caches and *block
 compression protocols*.
  
 
 The effect of block compression protocols is included.  This is what I
 call the coding gain and use the Greek letter gamma to represent. 
 
 As long as the block solution announcements contain information (i.e.,
 Shannon Entropy) about the transactions included in a block, then the
 fee market will be healthy according to the definitions given in the
 linked paper (see below).  This is the case right now, this is the case
 with your relay network, and this would be the case using any
 implementation of IBLTs that I can imagine, so long as miners can still
 construct blocks according to their own volition.  The healthy fee
 market result follows from the Shannon-Hartley theorem; the SH-theorem
 describes the maximum rate at which information (Shannon Entropy) can be
 transmitted over a physical communication channel.   
 
 https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf
 
 I've exchanged emails with Greg Maxwell about (what IMO is) an academic
 scenario where the block solutions announcements contain *no information
 at all* about the transactions included in the blocks.  Although the fee
 market would not be healthy in such a scenario, it is my feeling that
 this also requires miners to relinquish their ability to construct
 blocks according to their own volition (i.e., the system would already
 be centralized).  I look forward to a white paper demonstrating otherwise!
 
 Best regards,
 Peter
 
 
 
 On 2015-08-29, at 2:07 PM, Matt Corallo via bitcoin-dev
 bitcoin-dev@lists.linuxfoundation.org
 mailto:bitcoin-dev@lists.linuxfoundation.org wrote:
 
 I believe it was pointed out previously in the discussion of the Peter R
 paper, but I'll repeat it here so that its visible - this seems to
 ignore the effects of transaction validation caches and block
 compression protocols. Many large miners already have their own network
 to relay blocks around the globe with only a few bytes on the wire at
 block-time, and there is also the bitcoinrelaynetwork.org
 http://bitcoinrelaynetwork.org network, which
 does the same for smaller miners, albeit with slightly less efficiency.
 Also, transaction validation time upon receiving a block can be rather
 easily made negligible (ie the only validation time you should have is
 the DB modify-utxo-set time). Thus, the increased orphan risk for
 including a transaction can be reduced to a very, very tiny 

Re: [bitcoin-dev] Your Gmaxwell exchange

2015-08-29 Thread Peter R via bitcoin-dev
Hi Greg,

 Unfortunately, your work, extensive as it was, made at least two
 non-disclosed or poorly-disclosed simplifying assumptions and a significant
 system-understanding error which, I believe, undermined it completely.
 
 In short these were:
 
 * You assume miners do not have the ability to change their level of
 centralization.
 
 -- In fact they do: not just in theory but in practice they have responded
 to orphaning this way in the past, and it is one of the major
 concerns in this space.

I agree that miners may change their level of centralization.  This neither 
affects the model nor the results presented in the paper.

 * You assume the supply of bitcoin is infinite (that subsidy never
 declines)
 
 -- It declines geometrically, and must if the 21m limit is to be upheld.
(Though I think this is not equally important as the other concerns)

No, I don't.  I assume the inflation rate is R/T, where R is a variable.  

The last paragraph of the conclusion speaks to the paradox of what happens when 
R → 0 as an area for future research. 


 * You argue, incorrectly, that amount of information which must be
 transmitted at the moment a block is discovered is proportional to the
 block's size.
 
 -- Instead the same information can be transmitted _in advance_, as
 has been previously proposed, and various techniques can make doing
 so arbitrarily efficient.

I assume, very reasonably, that the block solutions contain information about 
the transactions included in the block.  This is the case today, this is the 
case using the relay network, and this would be the case using any compression 
scheme I can personally imagine actually occurring in the future.  

(I've always agreed that if block solutions can be communicated to all of the 
hash power in such a way that they don't include any information about the 
transactions they contain, then the conditions for a healthy fee market would 
not be met [this would be gamma → infinity in my model]). 


 [I would encourage anyone who is interested to read the posted off-list
 discussion]

As would I.  

 I contacted you in private as a courtesy in the hope that it would be
 a more productive pathway to improving our collective understanding; as well
 as a courtesy to the readers of the list in consideration of traffic levels.

I appreciated the discussion we were having and I thought we had come to some 
kind of an understanding.  I acknowledged that when I made the other 
corrections to my paper that I would further clarify the assumptions (I agreed 
that the presentation could be improved).  

What was not courteous was that you forwarded the entire private email chain to 
other people without my permission.

 In one sense, this was a success: Our conversation concluded with you
 enumerating a series of corrective actions that you would take:
 
 
 1.  I will make it more clear that the results of the paper hinge on
 the assumption that block solutions are propagated across channels,
 and that the quantity of pure information communicated per solution
 is proportional to the amount of information contained within the block.
 
 2.  I will add a note [unless you ask me not to] something to the effect
 of "Greg Maxwell challenges the claim that the coding gain cannot
 be infinite…" followed by a summary of the scenario you described.
 I will reference personal communication.  I will also run the note
 by you before I make the next revision public.
 
 3.  I will point out that if the coding gain can be made spectacularly
 high, that the propagation impedance in my model will become very small,
 and that although a fee market may strictly exist in the asymptotic
 sense, such a fee market may not be relevant (the phenomena in the paper
 would be negligible compared to the dynamics from some other effect).
 
 4. [UNRELATED] I also plan to address Dave Hudson's objections in my
 next revision (the "you don't orphan your own block" point).
 
 Lastly, thank you for the note about what might happen when fees > rewards.
 I have indeed been thinking about this.  I believe it is
 outside the scope of the present paper, although I am doing some work
 on the topic.  (Perhaps I'll add a bit more discussion on this topic
 to the present paper to get the reader thinking in this direction).

I stand by all four of these points.  My paper wasn't perfectly presented.  
Making these clarifications will strengthen the manuscript by showing how strong 
the claim “a healthy transaction fee market exists without a block size limit” 
really is.  


 To the best of my knowledge, you've taken none of these corrective
 actions in the nearly month that has passed.  I certainly understand being
 backlogged, but you've also continued to make public comments about your
 work seemingly (to me) in contradiction with the above corrective actions.

My public comments have been factual.  I've even gone out of my way in several 
public threads to point out your objection that the coding gain 

Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Peter R via bitcoin-dev
 ...blocks are found at random intervals.
 
 Every once in a while the network will get lucky and we'll find six blocks in 
 ten minutes. If you are deciding what transaction fee to put on your 
 transaction, and you're willing to wait until that six-blocks-in-ten-minutes 
 once-a-week event, submit your transaction with a low fee.
 
 All the higher-fee transactions waiting to be confirmed will get confirmed in 
 the first five blocks and, if miners don't have any floor on the fee they'll 
 accept (they will, but lets pretend they won't) then your very-low-fee 
 transaction will get confirmed.
 
 In the limit, that logic becomes "wait an infinite amount of time, pay zero 
 fee."
 ...
 
 Gavin Andresen

Yes, I see this as correct as well.  If demand for space within a particular 
block is elevated (e.g., when the network has not found a block for 30 
minutes), the minimum fee density for inclusion will be greater than the 
minimum fee density when demand for space is low (e.g., when the network has 
found several blocks in quick succession, as Gavin pointed out).  Lower-fee 
transactions will just wait to be included during one of the lulls in demand 
that occur when a bunch of blocks are found quickly in a row.  
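
As a quick sanity check on Gavin's “six blocks in ten minutes, roughly once a 
week” figure, assume block discovery is a Poisson process with a 10-minute mean 
(this is back-of-the-envelope only, not something from the paper):

from math import exp, factorial

lam = 1.0                      # expected blocks per 10-minute window
p_six_or_more = 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(6))
windows_per_week = 7 * 24 * 6  # non-overlapping 10-minute windows in a week
print("P(>= 6 blocks in 10 min) = %.2e" % p_six_or_more)
print("expected such windows per week = %.2f" % (p_six_or_more * windows_per_week))

This gives a probability of about 6e-4 per window, or roughly 0.6 such windows 
per week, consistent with the once-a-week characterization.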

The feemarket.pdf paper ( 
https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf ) shows that this 
will always be the case so long as the block space supply curve (i.e., the cost 
in BTC/byte to supply additional space within a block [rho]) is a 
monotonically-increasing function of the block size (refer to Fig. 6 and Table 
1).  The curve will satisfy this condition provided the propagation time for 
block solutions grows faster than log Q where Q is the size of the block.  
Assuming that block solutions are propagated across physical channels, and that 
the quantity of pure information communicated per solution is proportional to 
the amount of information contained within the block, then the communication 
time will always grow asymptotically like O(Q) as per the Shannon-Hartley 
theorem, and the fee market will be healthy.  
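
Spelled out (my shorthand, not notation taken from the paper): the 
Shannon-Hartley capacity C = B log2(1 + S/N) caps how fast the block's 
information can cross the channel, so

\[
\tau(Q) \;\ge\; \frac{H(Q)}{\gamma C},
\]

which grows at least linearly in Q when the entropy H(Q) to be communicated 
scales with the block size.  The expected fee revenue of a block of size Q is 
then bounded by

\[
F(Q)\, e^{-\tau(Q)/T} \;\le\; \rho_{\max}\, Q\, e^{-\tau(Q)/T} \;\to\; 0
\quad \text{as } Q \to \infty,
\]

where \rho_{\max} is the highest fee density on offer in mempool, so arbitrarily 
large blocks are never profit-maximizing.  If instead \tau(Q) grew only like 
b \ln Q with b < T, the same expression would behave like Q^{1 - b/T} and grow 
without bound, which is why the “faster than log Q” condition above matters.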

Best regards,
Peter

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-05 Thread Peter R via bitcoin-dev
Thank you for the feedback, Benjamin.

 When you talk about a market, what are you referring to exactly?

I define what I mean by healthy, unhealthy, and non-existent markets in Section 
7, and I show a figure to illustrate the supply and demand curves in each of 
these three cases.  A healthy market is defined as one where a rational miner 
would be incentivized to produce a finite block.  An unhealthy market is one 
where a miner would be incentivized to produce an arbitrarily large block.  A 
non-existent market is one where a miner is better off publishing an empty 
block.  I show that, so long as block space is a normal economic commodity that 
obeys the Law of Demand, and the Shannon-Hartley theorem applies to the 
communication of the block solutions between miners, an unhealthy market 
is not possible.  

  A market means that demand and supply are matched continuously, and Bitcoin 
 has no such mechanism.

Take a look at my definitions for the mempool demand curve (Sec 4) and the 
block space supply curve (Sec 5).  I show that the miner's profit is a maximum 
at the point where the derivatives of these two curves intersect.  I think of 
this as when demand and supply are matched.
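
As a toy illustration of that intersection (made-up numbers and a deliberately 
simplified orphan model; this is not the paper's exact formulation):

import math, random

random.seed(0)
R = 25e8          # block subsidy in satoshis (25 BTC era)
gamma_C = 1e6     # assumed effective channel capacity after coding gain, bytes/s
T = 600.0         # mean block interval, seconds

# Fake mempool: (size in bytes, fee in satoshis), sorted by fee density.
mempool = [(random.randint(200, 1000), random.randint(1000, 50000))
           for _ in range(3000)]
mempool.sort(key=lambda t: t[1] / t[0], reverse=True)

best_q, best_profit, q, fees = 0, R, 0, 0   # empty block earns R with no orphan risk
for size, fee in mempool:
    q += size
    fees += fee
    # Expected revenue = (reward + fees) discounted by a simple orphan factor.
    profit = (R + fees) * math.exp(-q / (gamma_C * T))
    if profit > best_profit:
        best_q, best_profit = q, profit

print("profit-maximizing block size Q* ~ %d bytes" % best_q)

The maximum lands where the marginal fee density of the next transaction falls 
to roughly (R + fees)/(gamma_C * T), i.e. where the derivative of the mempool 
demand curve meets the marginal orphaning cost.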

 ...I don't think a fee market exists and that demand or supply are not easily 
 definable.

Do you not find the definitions presented in the paper for these curves useful?  
The mempool demand curve represents the empirical demand measurable from a 
miner’s mempool, while the block space supply curve represents the additional 
cost to create a block of size Q by accounting for orphaning risk.  

 Ideally supply of transaction capability would completely depend on demand, 
 and a price would exist such that demand can react to longterm or shorterm 
 supply constraints.

Supply and demand do react.  For example, if the cost to produce block space 
decreases (e.g., due to improvements in network interconnectivity) then a miner 
will be able to profitably include a greater number of transactions in his 
block.  

Furthermore, not only is there a minimum fee density below which no rational 
miner should include any transactions as Gavin observed, but the required fee 
density for inclusion also naturally increases if demand for space within a 
block is elevated.  A rational miner will not necessarily include all 
fee-paying transactions, as urgent higher-paying transactions bump lower-fee 
transactions out, thereby bidding up the minimum fee density exponentially with 
demand.

 In such a scenario there would be no scalability concerns, as scale would be 
 almost perfectly elastic.

Agreed.  

Best regards,
Peter

 
 On Tue, Aug 4, 2015 at 8:40 AM, Peter R via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 Dear Bitcoin-Dev Mailing list,
 
 I’d like to share a research paper I’ve recently completed titled “A 
 Transaction Fee Market Exists Without a Block Size Limit.”  In addition to 
 presenting some useful charts such as the cost to produce large spam blocks, 
 I think the paper convincingly demonstrates that, due to the orphaning cost, 
 a block size limit is not necessary to ensure a functioning fee market.  
 
 The paper does not argue that a block size limit is unnecessary in general, 
 and in fact brings up questions related to mining cartels and the size of the 
 UTXO set.   
 
 It can be downloaded in PDF format here:
 
 https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf
 
 Or viewed with a web-browser here:
 
 https://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit
 
 Abstract.  This paper shows how a rational Bitcoin miner should select 
 transactions from his node’s mempool, when creating a new block, in order to 
 maximize his profit in the absence of a block size limit. To show this, the 
 paper introduces the block space supply curve and the mempool demand curve.  
 The former describes the cost for a miner to supply block space by accounting 
 for orphaning risk.  The latter represents the fees offered by the 
 transactions in mempool, and is expressed versus the minimum block size 
 required to claim a given portion of the fees.  The paper explains how the 
 supply and demand curves from classical economics are related to the 
 derivatives of these two curves, and proves that producing the quantity of 
 block space indicated by their intersection point maximizes the miner’s 
 profit.  The paper then shows that an unhealthy fee market—where miners are 
 incentivized to produce arbitrarily large blocks—cannot exist since it 
 requires communicating information at an arbitrarily fast rate.  The paper 
 concludes by considering the conditions under which a rational miner would 
 produce big, small or empty blocks, and by estimating the cost of a spam 
 attack.  
 
 Best regards,
 Peter
 
 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman

Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-05 Thread Peter R via bitcoin-dev
Hi Dave,

Thank you for the feedback regarding my paper.  

 The paper is nicely done, but I'm concerned that there's a real problem with 
 equation 4. The orphan rate is not just a function of time; it's also a 
 function of the block maker's proportion of the network hash rate. 
 Fundamentally a block maker (pool or aggregation of pools) does not orphan 
 its own blocks.

With the benefit of hindsight, I think the paper would be stronger if it also 
analyzed how the model changes (or doesn't) if we assume zero propagation 
impedance for intra-miner communication, as you suggested (the "you don't 
orphan your own blocks" idea).  Note that the paper did briefly discuss 
miner-dependent propagation times in the second paragraph of page 9 and in note 
13.  

 In a degenerate case a 100% pool has no orphaned blocks.

Agreed.  In this case there's no information to communicate (since the miner 
has no peers) and so the Shannon-Hartley limit doesn't apply.  My model makes 
no attempt to explain this case.  

 Consider that a 1% miner must assume a greater risk from orphaning than, say, 
 a pool with 25%, or worse 40% of the hash rate.

I'd like to explore this in more detail.  Although a miner may not orphan his 
own block, by building on his own block he may now orphan two blocks in a row.  
At some point, his solution or solutions must be communicated to his peers.  
And if there's information about the transactions in his blocks to communicate, 
I think there's a cost associated with that.  It's an interesting problem and 
I'd like to continue working on it.  
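
As a rough sketch of the modification Dave is suggesting (illustrative numbers 
only; the adjusted orphan model below is his proposed correction as I understand 
it, not a result from my paper): a miner never orphans its own block, so only 
the remaining fraction (1 - h/H) of the hash rate can produce a competitor 
during the propagation delay tau:

import math

def orphan_prob(tau, hash_share, T=600.0):
    # Competing blocks come only from the other (1 - h/H) of the hash rate
    # during the tau seconds it takes this miner's block to propagate.
    return (1.0 - hash_share) * (1.0 - math.exp(-tau / T))

tau = 5.0  # assumed effective propagation delay, seconds
for share in (0.01, 0.25, 0.40):
    print("h/H = %4.0f%%:  P(orphan) ~ %.4f" % (100 * share, orphan_prob(tau, share)))

Under this adjustment a 40% pool faces roughly 60% of the orphan risk a 1% miner 
faces for the same block, which is exactly the kind of size advantage being 
discussed.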

 I suspect this may well change some of the conclusions as larger block makers 
 will definitely be able to create larger blocks than their smaller 
 counterparts.

It will be interesting to see.  I suspect that the main result that a healthy 
fee market exists will still hold (assuming of course that a single miner with 
more than 50% of the hash power isn't acting maliciously).  Whether miners with larger 
value of h/H have a profit advantage, I'm not sure (but that was outside the 
scope of the paper anyways).  

Best regards,
Peter



 On 3 Aug 2015, at 23:40, Peter R via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 Dear Bitcoin-Dev Mailing list,
 
 I’d like to share a research paper I’ve recently completed titled “A 
 Transaction Fee Market Exists Without a Block Size Limit.”  In addition to 
 presenting some useful charts such as the cost to produce large spam blocks, 
 I think the paper convincingly demonstrates that, due to the orphaning cost, 
 a block size limit is not necessary to ensure a functioning fee market.  
 
 The paper does not argue that a block size limit is unnecessary in general, 
 and in fact brings up questions related to mining cartels and the size of 
 the UTXO set.   
 
 It can be downloaded in PDF format here:
 
 https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf
 
 Or viewed with a web-browser here:
 
 https://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit
 
 Abstract.  This paper shows how a rational Bitcoin miner should select 
 transactions from his node’s mempool, when creating a new block, in order to 
 maximize his profit in the absence of a block size limit. To show this, the 
 paper introduces the block space supply curve and the mempool demand curve.  
 The former describes the cost for a miner to supply block space by 
 accounting for orphaning risk.  The latter represents the fees offered by 
 the transactions in mempool, and is expressed versus the minimum block size 
 required to claim a given portion of the fees.  The paper explains how the 
 supply and demand curves from classical economics are related to the 
 derivatives of these two curves, and proves that producing the quantity of 
 block space indicated by their intersection point maximizes the miner’s 
 profit.  The paper then shows that an unhealthy fee market—where miners are 
 incentivized to produce arbitrarily large blocks—cannot exist since it 
 requires communicating information at an arbitrarily fast rate.  The paper 
 concludes by considering the conditions under which a rational miner would 
 produce big, small or empty blocks, and by estimating the cost of a spam 
 attack.  
 
 Best regards,
 Peter
 ___
 bitcoin-dev mailing list
 bitcoin-dev@lists.linuxfoundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
 

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-04 Thread Peter R via bitcoin-dev
Dear Bitcoin-Dev Mailing list,

I’d like to share a research paper I’ve recently completed titled “A 
Transaction Fee Market Exists Without a Block Size Limit.”  In addition to 
presenting some useful charts such as the cost to produce large spam blocks, I 
think the paper convincingly demonstrates that, due to the orphaning cost, a 
block size limit is not necessary to ensure a functioning fee market.  

The paper does not argue that a block size limit is unnecessary in general, and 
in fact brings up questions related to mining cartels and the size of the UTXO 
set.   

It can be downloaded in PDF format here:

https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf

Or viewed with a web-browser here:

https://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit

Abstract.  This paper shows how a rational Bitcoin miner should select 
transactions from his node’s mempool, when creating a new block, in order to 
maximize his profit in the absence of a block size limit. To show this, the 
paper introduces the block space supply curve and the mempool demand curve.  
The former describes the cost for a miner to supply block space by accounting 
for orphaning risk.  The latter represents the fees offered by the transactions 
in mempool, and is expressed versus the minimum block size required to claim a 
given portion of the fees.  The paper explains how the supply and demand curves 
from classical economics are related to the derivatives of these two curves, 
and proves that producing the quantity of block space indicated by their 
intersection point maximizes the miner’s profit.  The paper then shows that an 
unhealthy fee market—where miners are incentivized to produce arbitrarily large 
blocks—cannot exist since it requires communicating information at an 
arbitrarily fast rate.  The paper concludes by considering the conditions under 
which a rational miner would produce big, small or empty blocks, and by 
estimating the cost of a spam attack.  

Best regards,
Peter
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev