Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-09 Thread Dave Hudson via bitcoin-dev

> On 9 Mar 2016, at 20:21, Bob McElrath  wrote:
> 
> Dave Hudson [d...@hashingit.com] wrote:
>> A damping-based design would seem like the obvious choice (I can think of a
>> few variations on a theme here, but most are found in the realms of control
>> theory somewhere).  The problem, though, is working out a timeframe over
>> which to run the derivative calculations.
> 
> From a measurement theory perspective this is straightforward.  Each block is
> a measurement, and error propagation can be performed to derive an error on
> the derivatives.

Sure, but I think there are 2 problems:

1) My guess is that errors over anything but a long period are probably too 
large to be very useful.

2) We don't have a strong notion of time that is part of the consensus.  Sure, 
blocks have timestamps but they're very loosely controlled (can't be more than 
2 hours ahead of what any validating node thinks the time might be).  
Difficulty can't be calculated based on anything that's not part of the 
consensus data.

> The statistical theory of Bitcoin's block timing is known as a Poisson Point
> Process: https://en.wikipedia.org/wiki/Poisson_point_process or temporal point
> process.  If you google those plus "estimation" you'll find a metric shit-ton
> of literature on how to handle this.

Strictly it's a non-homogeneous Poisson Process, but I'm pretty familiar with 
the concept (Google threw one of my own blog posts back at me: 
http://hashingit.com/analysis/27-hash-rate-headaches, but I actually prefer 
this one: http://hashingit.com/analysis/30-finding-2016-blocks because most 
people seem to find it easier to visualize).
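
To make the estimation problem concrete, here's a minimal sketch (Python,
illustrative only, not consensus code) of estimating the block finding rate
from the last N inter-block times; with a genuinely constant hash rate the
relative error only falls off as roughly 1/sqrt(N), so short windows are very
noisy:

import random

def estimate_rate(true_rate, n_blocks):
    # Draw n_blocks exponential inter-block times at a constant rate and
    # return the rate implied by their total duration.
    total = sum(random.expovariate(true_rate) for _ in range(n_blocks))
    return n_blocks / total

true_rate = 1.0 / 600.0          # one block per 600 seconds on average
for window in (24, 144, 2016):   # ~4 hours, ~1 day, ~2 weeks of blocks
    errors = sorted(abs(estimate_rate(true_rate, window) - true_rate) / true_rate
                    for _ in range(2000))
    print("%4d blocks: median error %.1f%%, 95th percentile %.1f%%"
          % (window, 100 * errors[len(errors) // 2],
             100 * errors[int(0.95 * len(errors))]))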

>> The problem is the measurement of the hashrate, which is pretty inaccurate at
>> best because even 2016 events isn't really enough (with a completely constant
>> hash rate running indefinitely we'd see difficulty swings of up to +/- 5%
>> even with the current algorithm).  In order to meaningfully react to a major
>> loss of hashing we'd still need to be considering a window of probably 2
>> weeks.
> 
> You don't want to assume it's constant in order to get a better measurement.
> The assumption is clearly false.  But, errors can be calculated, and
> retargeting can take errors into account, because no matter what we'll always
> be dealing with a finite sample.

Agreed, it's a thought experiment I ran in May 2014 
(http://hashingit.com/analysis/28-reach-for-the-ear-defenders).  I found that 
many people's intuition is that there would be little or no difficulty changes 
in such a scenario, but the intuition isn't reliable.  Given a static hash rate 
the NHPP behaviour introduces a surprisingly large amount of noise (often much 
larger than any signal over a period of even weeks).  Any measurement over a
period of even a few days carries so much noise that it's practically unusable.
I just realized that, unlike some of my other sims, this one didn't make it to
github; I'll fix that later this week.
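
In the meantime, a minimal sketch of that kind of simulation (Python; it
assumes a perfectly constant hash rate, the current 2016-block retarget rule,
and ignores the clamping) looks something like this:

import random

HASH_RATE = 1.0e18        # hashes per second, arbitrary but constant
TARGET_SPACING = 600.0    # seconds
RETARGET_BLOCKS = 2016

# Start with the difficulty (expected hashes per block) that exactly matches
# the hash rate.
difficulty = HASH_RATE * TARGET_SPACING

swings = []
for period in range(100):
    # Expected block time at this difficulty; with a constant hash rate the
    # inter-block times are still exponentially distributed around it.
    expected = difficulty / HASH_RATE
    elapsed = sum(random.expovariate(1.0 / expected)
                  for _ in range(RETARGET_BLOCKS))
    # Current retarget rule: scale difficulty by target vs. actual duration.
    adjustment = (RETARGET_BLOCKS * TARGET_SPACING) / elapsed
    difficulty *= adjustment
    swings.append(100.0 * (adjustment - 1.0))

print("largest upward swing:   %+.2f%%" % max(swings))
print("largest downward swing: %+.2f%%" % min(swings))

Run it a few times and you'll typically see individual retargets of 5% or more
in both directions, with the hash rate never having moved at all.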


Cheers,
Dave


Re: [bitcoin-dev] Hardfork to fix difficulty drop algorithm

2016-03-02 Thread Dave Hudson via bitcoin-dev
I think the biggest question here would be how the difficulty retargeting would
be changed.  Without seeing the algorithm proposal it's difficult to assess the
impact it would have, but my intuition is that this is likely to be
problematic.

Probabilistically the network sees surprisingly frequent swings of +/-20% in
the daily block finding rate, while the statistical noise over a 2016 block
period can be more than +/-5%.  Any change would still require a fairly
significant period of time before there would be a reasonable level of
confidence that the hash rate really had fallen, as opposed to just seeing
statistical noise
(http://hashingit.com/analysis/29-lies-damned-lies-and-bitcoin-difficulties and 
http://hashingit.com/analysis/28-reach-for-the-ear-defenders).
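
Those figures are essentially just Poisson counting noise; a quick
back-of-envelope check (Python, illustrative only):

import math

blocks_per_day = 144
blocks_per_retarget = 2016

# For a Poisson count N the standard deviation is sqrt(N), so the relative
# 1-sigma noise on the observed block finding rate is 1/sqrt(N).
daily_sigma = 1.0 / math.sqrt(blocks_per_day)          # ~8.3%
retarget_sigma = 1.0 / math.sqrt(blocks_per_retarget)  # ~2.2%

print("daily rate, 2-sigma noise:      +/- %.0f%%" % (200 * daily_sigma))
print("2016-block rate, 2-sigma noise: +/- %.1f%%" % (200 * retarget_sigma))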

How long would be required to deem that the hash rate had dramatically fallen?  
Would such a change be a one-time event or would it be ever-present?

If we were to say that if the hash rate dropped 50% in one day (which could, of
course, be a 30% real drop and 20% variance) and the difficulty was retargeted
50% lower, then that would have to be matched with a similarly rapid retarget
if the hash rate were to increase by a similar amount.  Failing to do this both
ways would introduce an economic incentive for large miners to suppress the
difficulty and gain dramatically larger numbers of block rewards.  The current
fixed block count per difficulty change prevents this because the daily losses
while suppressing hashing outweigh the potential gains when it's re-added.
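
To make the incentive concrete, here's a rough back-of-envelope sketch
(Python).  It assumes a hypothetical asymmetric rule that halves the
difficulty after one slow day while upward retargets still wait for 2016
blocks, and a cartel with 50% of the hash rate that withholds for that one
day; the specific numbers are purely illustrative:

baseline_blocks_per_day = 144.0
cartel_share = 0.5

# Day of suppression: the cartel mines nothing, everyone else carries on.
blocks_forgone = cartel_share * baseline_blocks_per_day             # 72 blocks

# The hypothetical fast retarget halves the difficulty, the cartel re-adds its
# hash power, and the whole network now finds blocks twice as fast.
boosted_blocks_per_day = baseline_blocks_per_day / 0.5              # 288/day
extra_per_day = cartel_share * (boosted_blocks_per_day - baseline_blocks_per_day)

# The slow upward retarget still needs 2016 blocks before it corrects.
days_at_low_difficulty = 2016.0 / boosted_blocks_per_day            # ~7 days
blocks_gained = extra_per_day * days_at_low_difficulty

print("blocks forgone while suppressing:        %.0f" % blocks_forgone)
print("extra blocks earned before the retarget: %.0f" % blocks_gained)

Under the current symmetric rule the difficulty wouldn't move at all after one
slow day, so the forgone blocks are a pure loss; that's exactly why the
incentive doesn't exist today.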


Cheers,
Dave


> On 2 Mar 2016, at 14:56, Luke Dashjr via bitcoin-dev 
>  wrote:
> 
> We are coming up on the subsidy halving this July, and there have been some
> concerns raised that a non-trivial number of miners could potentially drop off
> the network. This would result in a significantly longer block interval, which
> also means a higher per-block transaction volume, which could cause the block
> size limit to legitimately be hit much sooner than expected. Furthermore, due
> to difficulty adjustment being measured exclusively in blocks, the time until
> it adjusts to compensate would be prolonged.
> 
> For example, if 50% of miners dropped off the network, blocks would be every 
> 20 minutes on average and contain double the transactions they presently do. 
> Even double would be approximately 850-900k, which potentially bumps up 
> against the hard limit when empty blocks are taken into consideration. This 
> situation would continue for a full month if no changes are made. If more 
> miners drop off the network, most of this becomes linearly worse, but due to 
> hitting the block size limit, the backlog would grow indefinitely until the 
> adjustment occurs.
> 
> To alleviate this risk, it seems reasonable to propose a hardfork to the
> difficulty adjustment algorithm so it can adapt quicker to such a significant
> drop in mining rate. BtcDrak tells me he has well-tested code for this in his
> altcoin, which has seen some roller-coaster hashrates, so it may even be
> possible to have such a proposal ready in time to be deployed alongside SegWit
> to take effect in time for the upcoming subsidy halving. If this slips, I
> think it may be reasonable to push for at least code-readiness before July,
> and possibly roll it into any other hardfork proposed before or around that
> time.
> 
> I am unaware of any reason this would be controversial, so if anyone has a 
> problem with such a change, please speak up sooner rather than later. Other 
> ideas or concerns are of course welcome as well.
> 
> Thanks,
> 
> Luke



Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Dave Hudson via bitcoin-dev

> On 7 Aug 2015, at 16:17, Ryan Butler via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> A Raspberry Pi 2 node on a reasonable Internet connection with a reasonable
> hard drive can run a node with 8 or 20MB blocks easily.
 
I'm curious as I've not seen any data on this subject. How fast can an RP2 do
the necessary cryptographic calculations to validate blocks of various sizes?

While everyone tends to talk in terms of 10 minutes per block that is, of 
course, only a typical time and doesn't account for situations in which 2 or 
more blocks are found in quick succession (which, of course, happens on a daily 
basis). At what point does, say, an RP2 node fail to be able to validate a 
second or third block because it's still not finished processing the first?
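
For a sense of how often that happens: with exponentially distributed block
intervals, the chance of the next block landing within a minute of the last is
about 9.5%, i.e. a dozen or so times a day (a trivial sanity check in Python):

import math

# Probability the next block arrives within 60 seconds of the previous one,
# given exponential inter-block times with a 600 second mean.
p_quick = 1.0 - math.exp(-60.0 / 600.0)
print("P(gap < 60s) = %.1f%%, roughly %.0f times per day"
      % (100 * p_quick, 144 * p_quick))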

If someone were playing games with the system and mining transactions without
first broadcasting them to the network, how long would that take?  This would
in essence define the ability to DoS lower-performance nodes (ignoring all of
the other usual considerations such as bandwidth, etc.).
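
For anyone who wants to put rough numbers on this, a crude benchmark along the
following lines would be a starting point (Python, using the pure-Python ecdsa
package purely as a stand-in; a real node's optimized C verification is far
faster, and the "2 signature checks per kB" scaling is just an assumption, so
treat the output as an upper bound on the shape of the problem rather than a
measurement):

import time
import ecdsa

# Generate one keypair and one signature, then measure verification throughput.
sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
vk = sk.get_verifying_key()
message = b"benchmark input"
signature = sk.sign(message)

iterations = 200
start = time.time()
for _ in range(iterations):
    vk.verify(signature, message)
per_sig = (time.time() - start) / iterations

# Crude scaling assumption: ~2 signature checks per kB of block data.
for block_kb in (1000, 8000, 20000):
    sigs = 2 * block_kb
    print("%5d kB block: ~%d sig checks, ~%.1f s at this rate"
          % (block_kb, sigs, sigs * per_sig))

Running something like that on an actual RP2 (and against libsecp256k1 rather
than pure Python) would answer the question properly.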



Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-05 Thread Dave Hudson via bitcoin-dev
 
> On 5 Aug 2015, at 15:15, Peter R pete...@gmx.com wrote:
> 
> Hi Dave,
> 
> Thank you for the feedback regarding my paper.
> 
>> The paper is nicely done, but I'm concerned that there's a real problem with
>> equation 4. The orphan rate is not just a function of time; it's also a
>> function of the block maker's proportion of the network hash rate.
>> Fundamentally a block maker (pool or aggregation of pools) does not orphan
>> its own blocks.
> 
> With the benefit of hindsight, I think the paper would be stronger if it also
> analyzed how the model changes (or doesn't) if we assume zero propagation
> impedance for intra-miner communication, as you suggested (the "you don't
> orphan your own blocks" idea).  Note that the paper did briefly discuss
> miner-dependent propagation times in the second paragraph of page 9 and in
> note 13.

I think this would be really interesting. It's an area that seems to be lacking 
research.

While I've not had time to model it, I did have a quick discussion with the
author of the Organ-of-Corti blog a few months ago and he seemed to think that
the Poisson process model isn't quite accurate here.  Intuitively this makes
sense: until a block has fully propagated, only some fraction of the hashing
network is working on the same problem, so we actually see slightly fewer very
quick blocks than we might expect.

I do suspect that if we were to model this more accurately we might be able to 
infer the typical propagation characteristics by measuring the deviation from 
the expected distribution.
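
A toy Monte Carlo sketch of that effect (Python; the propagation model here is
entirely made up for illustration, with the effective hash rate on the new tip
ramping up linearly over a fixed propagation time):

import random

TARGET = 600.0       # mean block interval in seconds at full hash rate
PROP_TIME = 15.0     # assumed time for a new block to reach all miners
SAMPLES = 100000

def interval(prop_time):
    # Time to the next block when the effective hash rate ramps from 0 to
    # full over prop_time seconds (thinning of a Poisson process).
    t = 0.0
    while True:
        t += random.expovariate(1.0 / TARGET)
        fraction = min(1.0, t / prop_time) if prop_time > 0 else 1.0
        if random.random() < fraction:
            return t

fast_with = sum(1 for _ in range(SAMPLES) if interval(PROP_TIME) < 30.0)
fast_without = sum(1 for _ in range(SAMPLES) if interval(0.0) < 30.0)
print("blocks found within 30s, no propagation delay:  %d" % fast_without)
print("blocks found within 30s, 15s propagation delay: %d" % fast_with)

The second number comes out noticeably smaller than the first, which is the
"fewer very quick blocks than expected" signature; fitting the size of that
deficit against real block timestamps is the part that might let us infer
propagation characteristics.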

>> Consider that a 1% miner must assume a greater risk from orphaning than,
>> say, a pool with 25%, or worse 40% of the hash rate.
> 
> I'd like to explore this in more detail.  Although a miner may not orphan his
> own block, by building on his own block he may now orphan two blocks in a
> row.  At some point, his solution or solutions must be communicated to his
> peers.  And if there's information about the transactions in his blocks to
> communicate, I think there's a cost associated with that.  It's an
> interesting problem and I'd like to continue working on it.

Agreed - I think this would be really interesting!

>> I suspect this may well change some of the conclusions as larger block
>> makers will definitely be able to create larger blocks than their smaller
>> counterparts.
> 
> It will be interesting to see.  I suspect that the main result that a
> healthy fee market exists will still hold (assuming of course that a single
> miner with 50% of the hash power isn't acting maliciously).  Whether miners
> with larger value of h/H have a profit advantage, I'm not sure (but that was
> outside the scope of the paper anyways).

I really look forward to seeing the revised version. Seeing the differences 
will also help assess how much impact there is from simplified models.


Regards,
Dave




Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-04 Thread Dave Hudson via bitcoin-dev

> On 4 Aug 2015, at 14:30, Gavin Andresen gavinandre...@gmail.com wrote:
> 
> On Tue, Aug 4, 2015 at 2:41 PM, Dave Hudson via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> Fundamentally a block maker (pool or aggregation of pools) does not orphan
>> its own blocks.
> 
> Unless the block maker has an infinitely fast connection to its hashpower OR
> its hashpower is not parallelized at all, that's not strictly true -- it
> WILL orphan its own blocks because two hashing units will find solutions in
> the time it takes to communicate that solution to the block maker and to the
> rest of the hashing units.
> 
> That's getting into "how many miners can dance on the head of a pin"
> territory, though. I don't think we know whether the communication advantages
> of putting lots of hashing power physically close together will outweigh the
> extra cooling costs of doing that (or maybe some other tradeoff I haven't
> thought of). That would be a fine topic for another paper.

Yes, but the block maker won't publish the second block it finds for the same
set of transactions.  It won't orphan its own block.  In fact, even if it does,
it still doesn't matter because the block maker still gets the block reward
irrespective of which of the two solutions is published.

It's not about which hash wins; the issue is who gets paid as a result.



Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-04 Thread Dave Hudson via bitcoin-dev
The paper is nicely done, but I'm concerned that there's a real problem with 
equation 4. The orphan rate is not just a function of time; it's also a 
function of the block maker's proportion of the network hash rate. 
Fundamentally a block maker (pool or aggregation of pools) does not orphan its 
own blocks. In a degenerate case a 100% pool has no orphaned blocks. Consider 
that a 1% miner must assume a greater risk from orphaning than, say, a pool 
with 25%, or worse 40% of the hash rate.

I suspect this may well change some of the conclusions as larger block makers 
will definitely be able to create larger blocks than their smaller counterparts.
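
As a very rough sketch of the intuition (Python; this is not the paper's model,
just the "you don't orphan your own blocks" adjustment applied to a simple
exponential orphaning term with an assumed propagation time, and it ignores
which of two competing blocks eventually wins):

import math

TARGET = 600.0      # mean block interval in seconds
PROP_TIME = 15.0    # assumed time for a block to reach the rest of the network

def orphan_risk(hash_share):
    # Probability that someone else (the remaining 1 - hash_share of the
    # network) finds a competing block while ours is still propagating.
    # A 100% pool never orphans its own blocks.
    competing_rate = (1.0 - hash_share) / TARGET
    return 1.0 - math.exp(-competing_rate * PROP_TIME)

for share in (0.01, 0.25, 0.40, 1.00):
    print("%3.0f%% of hash rate -> orphan risk %.2f%%"
          % (100 * share, 100 * orphan_risk(share)))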


Cheers,
Dave


> On 3 Aug 2015, at 23:40, Peter R via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> Dear Bitcoin-Dev Mailing list,
> 
> I’d like to share a research paper I’ve recently completed titled “A
> Transaction Fee Market Exists Without a Block Size Limit.”  In addition to
> presenting some useful charts such as the cost to produce large spam blocks,
> I think the paper convincingly demonstrates that, due to the orphaning cost,
> a block size limit is not necessary to ensure a functioning fee market.
> 
> The paper does not argue that a block size limit is unnecessary in general,
> and in fact brings up questions related to mining cartels and the size of the
> UTXO set.
> 
> It can be downloaded in PDF format here:
> 
> https://dl.dropboxusercontent.com/u/43331625/feemarket.pdf
> 
> Or viewed with a web-browser here:
> 
> https://www.scribd.com/doc/273443462/A-Transaction-Fee-Market-Exists-Without-a-Block-Size-Limit
> 
> Abstract.  This paper shows how a rational Bitcoin miner should select
> transactions from his node’s mempool, when creating a new block, in order to
> maximize his profit in the absence of a block size limit. To show this, the
> paper introduces the block space supply curve and the mempool demand curve.
> The former describes the cost for a miner to supply block space by accounting
> for orphaning risk.  The latter represents the fees offered by the
> transactions in mempool, and is expressed versus the minimum block size
> required to claim a given portion of the fees.  The paper explains how the
> supply and demand curves from classical economics are related to the
> derivatives of these two curves, and proves that producing the quantity of
> block space indicated by their intersection point maximizes the miner’s
> profit.  The paper then shows that an unhealthy fee market—where miners are
> incentivized to produce arbitrarily large blocks—cannot exist since it
> requires communicating information at an arbitrarily fast rate.  The paper
> concludes by considering the conditions under which a rational miner would
> produce big, small or empty blocks, and by estimating the cost of a spam
> attack.
> 
> Best regards,
> Peter



Re: [bitcoin-dev] Răspuns: Personal opinion on the fee market from a worried local trader

2015-07-30 Thread Dave Hudson via bitcoin-dev

> On 30 Jul 2015, at 06:14, Tom Harding via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> 
> Another empirical fact also needs explaining.  Why have average fees *as
> measured in BTC* risen during the times of highest public interest in
> bitcoin?  This happened without block size pressure, and it is not an
> exchange rate effect -- these are raw BTC fees:
> 
> https://blockchain.info/charts/transaction-fees?timespan=all&daysAverageString=7

I've not published any new figures for about 8 months (will try to do that this
weekend), but what that chart doesn't show is what's actually happening to fees
per transaction.  Here's a chart that does:
http://hashingit.com/analysis/35-the-future-of-bitcoin-transaction-fees

The data is also taken from blockchain.info so it's apples-for-apples.  It
shows that, far from going up, fees spent 3 years dropping.  I just ran a new
chart and the decline in fees continued until about 8 weeks ago, when the
stress tests first occurred.  Even so, they're still below the level from the
end of 2013.  By comparison the total transaction volume is up about 2.4x to
2.5x (don't have the exact number).

> ... more evidence that conclusively refutes the conjecture that a
> production quota is necessary for a functioning fee market.  A
> production quota merely pushes up fees.  We have a functioning market,
> and so far, it shows that wider bitcoin usage is even more effective
> than a quota at pushing up fees.

I think it's equally easy to argue (from the same data) that wider adoption has 
actually caused wallet users to become much more effective at fee selection. 
Miners (as expected, assuming that they hadn't formed a cartel) have continued 
to accept whatever fees are available, no matter how small. Only where there 
has been an element of scarcity have we actually seen miners do anything but 
take whatever is offered.

Clearly history is not an accurate indicator of what might happen in the
future, but it seems difficult to argue that any sort of fee market has emerged
to date (other than as a result of scarcity during the stress tests).
