Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Ferdinando M. Ametrano
On Tue, Oct 28, 2014 at 10:43 PM, Gregory Maxwell 
wrote:

> > As of now the cost per block is probably already about 100 USD, probably
> > in the 50-150 USD range.
>
> This is wildly at odds with reality. I don't mean to insult, but
> please understand that every post you make here consumes the time of
> dozens (or, hopefully, hundreds) of people. Every minute you spend
> refining your post has a potential return of many minutes for the rest
> of the users of the list.
>
> At current difficulty, with an SP30 (one of the leaders in power
> efficiency) the marginal break-even is ~1144.8852 * $/kWh == $/BTC.
>
> At $0.10/kwh each block has an expected cost right now, discounting
> all one time hardware costs, close to $3000.
>

Yes, you're right: I meant about 100 USD per BTC, i.e. about $2500 per block.
Because of my mistake I'll shut up and go back to researching the archive on
this issue.

Thank you for the kind summary of the many good reasons why the halving is a
non-issue. Very much appreciated, especially considering how precious your
time is.


Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Ferdinando M. Ametrano
On Tue, Oct 28, 2014 at 11:00 PM, Thomas Zander 
wrote:

> you didn't read the
> archives where these ideas have been brought forward and discussed, and a
> consensus was reached. (It wasn't so basic after all.)
> The fact that people don't want to repeat the discussion just for your
> sake is not the same as people not listening to those arguments.


I didn't start the thread and so hadn't researched the archive. Until two
posts ago there was no reference to the issue having been discussed before. A
link would have been much kinder than harsh dismissal. I will research and
read.


Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Thomas Zander
On Tuesday 28. October 2014 22.44.50 Ferdinando M. Ametrano wrote:
> It amazes me that basic economic considerations seem completely lost here,
> especially when it comes to mining.

Please don't confuse people dismissing your thoughts with dismissing the
basic economic considerations. The fact of the matter is that you didn't read
the archives where these ideas have been brought forward and discussed, and a
consensus was reached. (It wasn't so basic after all.)

The fact that people don't want to repeat the discussion just for your sake is 
not the same as people not listening to those arguments.





Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Christophe Biocca
> The fact that it is known in advance is no counter argument to me.

But it does change miner behaviour in pretty significant ways.

Unlike difficulty forecasting, which seems near impossible to do
accurately, miners can plan to purchase less hardware as they approach
the revenue drop. You can do some basic cost/benefit calculation and
see that *if* margins are already low as the halving approaches, then
rational miners would cease purchasing any new hardware that wouldn't
be profitable past that point, unless they expect it to pay for itself
by then.

The lower the margins are, the longer in advance they would alter
their buying behaviour. You'd see an increased focus on cost-effective
hashpower (and older units would not be replaced as they break).
Either a significant supply of cost-effective hardware shows up
(because it's the only thing that would sell in the last months), or
difficulty would stall long before the halving happens. Either way,
the predictability of the halving can reduce the hashpower on the day.
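
To make that cost/benefit calculation concrete, here is a toy sketch in
Python (every number is made up for illustration, not a real hardware or
revenue figure):

    # Toy model of the purchase decision described above.
    def purchase_is_rational(hw_cost, revenue_per_day, power_cost_per_day,
                             days_until_halving, useful_life_days):
        profit = 0.0
        for day in range(useful_life_days):
            revenue = revenue_per_day if day < days_until_halving \
                      else revenue_per_day / 2   # subsidy halves
            profit += revenue - power_cost_per_day
        return profit > hw_cost

    # Thin margins ($40/day revenue vs $30/day power): after the halving
    # the unit loses money, so it can never pay back its $3000 price.
    print(purchase_is_rational(3000, 40, 30, 90, 365))   # False

With fat margins the same unit is an easy buy, which is exactly why the
purchasing slowdown only shows up when margins are already low.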

On Tue, Oct 28, 2014 at 5:34 PM, Neil  wrote:
> Economically a reward halving is almost the same as a halving in price (as fees
> take up more of the pie, less so).
>
> Coincidentally the price halved from early July to mid-October, and
> we've not even seen difficulty fall yet.
>
> I don't think there's much to see here.
>
> Neil



Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Ferdinando M. Ametrano
On Tue, Oct 28, 2014 at 10:34 PM, Neil  wrote:

> Economically a reward halving is almost the same as a halving in price (as fees
> take up more of the pie, less so).
>
> Coincidentally the price halved from early July to mid-October, and
> we've not even seen difficulty fall yet.
>
because mining profits are currently many times operational costs. This
might change because of competition; in that case a halving in price would
become problematic.

It amazes me that basic economic considerations seem completely lost here,
especially when it comes to mining. We should have learned the lesson of
how a small error in the incentive structure has led from "one CPU, one
vote" to an oligopoly which might easily become a monopoly in the near
future.


Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Gregory Maxwell
On Tue, Oct 28, 2014 at 9:19 PM, Jérémie Dubois-Lacoste
 wrote:
> The fact that a topic has been brought up many times over a long period
> does not mean it is not relevant.

I am not saying that it is "not relevant"; I'm saying the discussion
is pointless:

No new information has arrived since the very first times this was
discussed, except that the first halving passed without incident.
If people were not sufficiently convinced that this was a serious
concern before there was concrete evidence (however small) that it was
okay, then the discussion is not likely to turn out differently the
50th or 100th time it is repeated...
except, perhaps, by wearing out all the most experienced and
knowledgeable among us as we tire of rehashing the same
discussions over and over again.

On Tue, Oct 28, 2014 at 9:23 PM, Ferdinando M. Ametrano
 wrote:
[snip]
> As of now the cost per block is probably already about 100 USD, probably in
> the 50-150 USD range.

This is wildly at odds with reality. I don't mean to insult, but
please understand that every post you make here consumes the time of
dozens (or, hopefully, hundreds) of people. Every minute you spend
refining your post has a potential return of many minutes for the rest
of the users of the list.

At current difficulty, with an SP30 (one of the leaders in power
efficiency) the marginal break-even is ~1144.8852 * $/kWh == $/BTC.

At $0.10/kwh each block has an expected cost right now, discounting
all one time hardware costs, close to $3000.
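
Working those figures out in Python (just re-deriving the numbers above;
the ~1144.8852 constant is taken as given, already folding in the SP30's
efficiency and the current difficulty):

    break_even = 1144.8852   # $/BTC per $/kWh, from the figure above
    electricity = 0.10       # $/kWh
    subsidy = 25             # BTC per block at the current reward

    usd_per_btc = break_even * electricity   # ~114.49 $/BTC to break even
    print(usd_per_btc * subsidy)             # ~2862, i.e. "close to $3000"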



Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Neil
Economically a reward halving is almost the same as a halving in price (as fees
take up more of the pie, less so).

Coincidentally the price halved from early July to mid-October, and
we've not even seen difficulty fall yet.

I don't think there's much to see here.

Neil


Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Ferdinando M. Ametrano
> > In November 2008 bitcoin was a much younger ecosystem,
>
> Or very old, indeed, if you are using unsigned arithmetic. [...]
>
:-) I meant 2012, of course, but loved your wit


> > and the halving happened during a quite stable positive price trend
>
> Hardly,
>
>
> http://bitcoincharts.com/charts/mtgoxUSD#rg60zczsg2012-10-01zeg2012-12-01ztgSzm1g10zm2g25zv


indeed!
http://bitcoincharts.com/charts/mtgoxUSD#rg60zczsg2012-08-01zeg2013-02-01ztgSzm1g10zm2g25zv


> There is a lot more complexity to the system than the subsidy schedule.
>
who said the contrary?

> This thread is, in my opinion, a waste of time.
>
it might be; I have some free time right now...

> many people have performed planning around the current
> behaviour. The current behaviour has also not shown itself to be
> problematic (and we've actually experienced its largest effect already
> without incident), and there are arguable benefits like encouraging
> investment in mining infrastructure.
>

I would love a proper rebuttal of the basic economic argument. If increased
competition pushes mining revenues below 200% of operational costs, then
the halving will suddenly switch off many no-longer-profitable mining
resources. As of now the cost per block is probably already about 100 USD,
probably in the 50-150 USD range.
Dismissed mining resources might even become cheaply available to
malevolent agents considering a 51% attack. Moreover, the timing would be
perfect for the bursting of any existing cloud-hashing Ponzi scheme.
From a strict economic point of view, allowing the halving jump is looking
for trouble. To each his own.


Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Jérémie Dubois-Lacoste
Answering today's concerns with yesterday's facts is dangerous,
especially with bitcoin over a 4-year period. I personally consider all
arguments of the form "we went through it once and nothing special
happened, so no disturbance worth discussing is to be expected" baseless.
Also, starting a topic with mentions of "death" does not lead to any
useful discussion.

@Topic starters: don't oversell your topic with that kind of
vocabulary hype. "Death by halving", seriously?
@Everybody else: don't focus on the chosen vocabulary, and don't use it to
discard what might be a relevant discussion topic.

The fact that a topic has been brought up many times over a long period
does not mean it is not relevant. It only means it is a recurring
concern. I have read no convincing argument against a significant
disturbance of the mining market to come. The fact that it is known in
advance is no counter-argument to me.
Environmental conditions will have changed so much that the next halving
occurrence might have nothing to do with the previous one, and it
should be perfectly OK to discuss it instead of sweeping the whole
thing under the carpet.

What is most important to the discussion, to me: the main difference
between the last halving and the one to come is the relative weight of
ideology vs. rationality in miners' motivations, effectively putting us
closer to the original bitcoin premise (fully rational miners).
Miners were close to being 100% individuals at the last halving; they are
now largely for-profit companies. This isn't a change we can overlook with
pure maths or with previous experience.


Jeremie DL





2014-10-28 21:36 GMT+01:00 Gregory Maxwell :
> On Tue, Oct 28, 2014 at 8:17 PM, Ferdinando M. Ametrano
>  wrote:
>>
>> On Oct 25, 2014 9:19 PM, "Gavin Andresen"  wrote:
>> > We had a halving, and it was a non-event.
>> > Is there some reason to believe next time will be different?
>>
>> In November 2008 bitcoin was a much younger ecosystem,
>
> Or very old, indeed, if you are using unsigned arithmetic. [...]
>
>> and the halving happened during a quite stable positive price trend
>
> Hardly,
>
> http://bitcoincharts.com/charts/mtgoxUSD#rg60zczsg2012-10-01zeg2012-12-01ztgSzm1g10zm2g25zv
>
>> Moreover, halving is not strictly necessary to respect the spirit of 
>> Nakamoto's monetary rule
>
> It isn't, but many people have performed planning around the current
> behaviour. The current behaviour has also not shown itself to be
> problematic (and we've actually experienced its largest effect already
> without incident), and there are arguable benefits like encouraging
> investment in mining infrastructure.
>
> This thread is, in my opinion, a waste of time.  It's yet again
> another perennial bikeshedding proposal brought up many times since at
> least 2011, suggesting random changes for
> non-existing(/not-yet-existing) issues.
>
> There is a lot more complexity to the system than the subsidy schedule.
>

2014-10-28 21:57 GMT+01:00 Alex Mizrahi :
>
>>
>> This thread is, in my opinion, a waste of time.  It's yet again
>> another perennial bikeshedding proposal brought up many times since at
>> least 2011, suggesting random changes for
>> non-existing(/not-yet-existing) issues.
>>
>> There is a lot more complexity to the system than the subsidy schedule.
>
>
> Well, the main question is what makes Bitcoin secure.
> It is secured by proofs of work which are produced by miners.
> Miners have economic incentives to play by the rules; in simple terms, that
> is more profitable than performing attacks.
>
> So the question is, why and when does it work? It would be nice to know the
> boundaries, no?
>
>



Re: [Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Alex Mizrahi
> This thread is, in my opinion, a waste of time.  It's yet again
> another perennial bikeshedding proposal brought up many times since at
> least 2011, suggesting random changes for
> non-existing(/not-yet-existing) issues.
>
> There is a lot more complexity to the system than the subsidy schedule.
>

Well, the main question is what makes Bitcoin secure.
It is secured by proofs of work which are produced by miners.
Miners have economic incentives to play by the rules; in simple terms, that
is more profitable than performing attacks.

So the question is, why and when does it work? It would be nice to know the
boundaries, no?
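
As a cartoon of where that boundary sits (every number below is
hypothetical; this is only a sketch of the inequality, not a security
model):

    # Honest mining beats attacking while expected honest revenue over
    # the horizon exceeds the attack's expected payoff.
    def rational_to_mine_honestly(honest_btc_per_day, attack_payoff_btc,
                                  attack_success_prob, horizon_days):
        return (honest_btc_per_day * horizon_days
                > attack_payoff_btc * attack_success_prob)

    print(rational_to_mine_honestly(3.6, 1000, 0.3, 365))   # True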


[Bitcoin-development] Fwd: death by halving

2014-10-28 Thread Gregory Maxwell
On Tue, Oct 28, 2014 at 8:17 PM, Ferdinando M. Ametrano
 wrote:
>
> On Oct 25, 2014 9:19 PM, "Gavin Andresen"  wrote:
> > We had a halving, and it was a non-event.
> > Is there some reason to believe next time will be different?
>
> In November 2008 bitcoin was a much younger ecosystem,

Or very old, indeed, if you are using unsigned arithmetic. [...]

> and the halving happened during a quite stable positive price trend

Hardly,

http://bitcoincharts.com/charts/mtgoxUSD#rg60zczsg2012-10-01zeg2012-12-01ztgSzm1g10zm2g25zv

> Moreover, halving is not strictly necessary to respect the spirit of 
> Nakamoto's monetary rule

It isn't, but many people have performed planning around the current
behaviour. The current behaviour has also not shown itself to be
problematic (and we've actually experienced its largest effect already
without incident), and there are arguable benefits like encouraging
investment in mining infrastructure.

This thread is, in my opinion, a waste of time.  It's yet again
another perennial bikeshedding proposal brought up many times since at
least 2011, suggesting random changes for
non-existing(/not-yet-existing) issues.

There is a lot more complexity to the system than the subsidy schedule.



Re: [Bitcoin-development] death by halving

2014-10-28 Thread Ferdinando M. Ametrano
On Oct 25, 2014 9:19 PM, "Gavin Andresen"  wrote:
> We had a halving, and it was a non-event.
> Is there some reason to believe next time will be different?

In November 2008 bitcoin was a much younger ecosystem, with less liquidity
and trading, smaller market cap, and the halving happened during a quite
stable positive price trend.

In the coming months competition might easily drive down mining margins, and
the reward halving might generate unexpected disruption in mining
operations.

Moreover, halving is not strictly necessary to respect the spirit of
Nakamoto's monetary rule and its 21M limit. At the beginning of the 3rd
reward era (block 420,000, in 2017) a new reward function could become
effective: R(b) = k*2^(-h*b/210000), where b is the block number and R(b) is
the reward. The parameters h and k can be calibrated so that R(419999) = 25
and sum_b{R} = 21M.


If the increased issuance speed in the third era is considered
problematic, then each era could have its own R_e(b) = k_e*2^(-h_e*b/210000),
fitted to the amount of coins to be issued in that era according to the
current supply rule, e.g. fitting k_e and h_e to R_e(419999) = 25 and
sum_{b in era} R_e(b) = 2,625,000.
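
As a numerical sanity check, here is a sketch in Python of the per-era
calibration, assuming the figures above (a 210,000-block era, the anchor
R(419999) = 25, and the 2,625,000 BTC era target); h is found by bisection:

    ERA = 210_000            # blocks per reward era
    TARGET = 2_625_000.0     # BTC to issue in the 3rd era

    def era_sum(h):
        # Sum of R_e(b) over blocks 420,000..629,999, anchored so that
        # R_e(419999) = 25; per-block decay ratio r = 2^(-h/ERA).
        r = 2 ** (-h / ERA)
        return 25.0 * r * (1 - r ** ERA) / (1 - r)   # geometric series

    lo, hi = 0.1, 10.0       # era_sum() is decreasing in h
    for _ in range(60):
        mid = (lo + hi) / 2
        if era_sum(mid) > TARGET:
            lo = mid         # decaying too slowly: raise h
        else:
            hi = mid
    h = (lo + hi) / 2
    k = 25.0 * 2 ** (h * 419_999 / ERA)   # from R_e(b) = k*2^(-h*b/ERA)
    print(h, k)              # roughly h ~ 2.3, k ~ 600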

Would such a BIP have any chance of being considered? Am I missing something?

Nando


Re: [Bitcoin-development] DS Deprecation Window

2014-10-28 Thread Tom Harding
On 10/27/2014 7:36 PM, Gregory Maxwell wrote:
> Consider a malicious miner can concurrently flood all other miners
> with orthogonal double spends (which he doesn't mine himself). These
> other miners will all be spending some amount of their time mining on
> these transactions before realizing others consider them
> double-spends.

If I understand correctly, the simplest example of this attack is three 
transactions spending the same coin, distributed to two miners like this:

            Miner A    Miner B
Mempool     tx1a       tx1b
Relayed     tx2        tx2

Since relay has to be limited, Miner B doesn't know about tx1a until it 
is included in Miner A's block, so he delays that block (unless it 
appears very quickly).

To create this situation, the attacker has to transmit all three
transactions very quickly, or mempools will be too synchronized. The
attacker tries to make it so that everyone else has a tx1a conflict that
Miner A does not have, and ditto for each individual victim, with different
transactions (this seems very difficult).

The proposal shows that there is always a tiny risk in including tx1 when a
double-spend is known, and I agree that this attack can add something to
that risk.  Miner A can neutralize his risk by excluding any tx1 known
to be double-spent, but as Thomas Zander wrote, that is an undesirable
outcome.

However, Miner A has additional information - he knows how soon he 
received tx2 after receiving tx1a.

The attack has little chance of working if any of the malicious 
transactions are sent even, say, 10 seconds apart from each other. 
Dropping the labels for transmit-order numbering, if the 1->2 transmit 
gap is large, mempools will agree on 1.  If 1->2 gap is small, but the 
gap to 3 is large, mempools will agree on the 1-2 pair, but possibly 
have the order reversed.  Either way, mempools won't disagree on the 
existence of 1 unless the 1->3 gap is small.

So, I think it will be possible to quantify and limit the risk of
including tx1a to an arbitrarily low level, based on the local
measurement of the time gap to tx2, and an effective threshold won't be
very high.  It does highlight, yet again, that the shorter the time frame,
the greater the risk.
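
As a sketch of that kind of local rule (the threshold and the function are
purely illustrative, not a worked-out proposal):

    GAP_THRESHOLD = 10.0   # seconds; "even, say, 10 seconds apart"

    def include_despite_double_spend(t_tx1_seen, t_tx2_seen):
        # A wide 1->2 arrival gap means mempools almost certainly agree
        # on tx1, so including it carries little risk.
        return (t_tx2_seen - t_tx1_seen) >= GAP_THRESHOLD

    print(include_despite_double_spend(0.0, 42.0))  # True: wide gap
    print(include_despite_double_spend(0.0, 1.5))   # False: tight gap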




Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
RE: 90%: I think it's fine to use 90% for anything other than 1
confirmation, but if you look at the real-world data test I did, or the raw
data from this new code, you'll see that even the highest-fee-rate
transactions only get confirmed at about a 90% rate in 1 block.  So if
you use that as your cut-off you will sometimes get no answer, sometimes
a very high fee rate, and sometimes a reasonable fee rate; it just
depends, because the data is too noisy.  I think that's just because there is
no good answer to that question.  There is no fee you can put on your
transaction to guarantee a greater-than-90% chance of getting confirmed in
one block.  I think 85% might be safe?

RE: tunable as command-line/bitcoin.conf: sounds good!

OK, sorry to have all this conversation on the dev list; maybe I'll turn
this into an actual PR if we want to comment on the code?
I just wanted to see whether it even made sense to make a PR for this or
whether this isn't the way we want to go about it.




On Tue, Oct 28, 2014 at 10:58 AM, Gavin Andresen 
wrote:

> On Tue, Oct 28, 2014 at 10:30 AM, Alex Morcos  wrote:
>>
>> Do you think it would make sense to make that 90% number an argument to
>> the RPC call?  For instance there could be a default (I would use 80%) but then
>> you could specify if you required a different certainty.  It wouldn't
>> require any code changes and might make it easier for people to build more
>> complicated logic on top of it.
>>
>
> RE: 80% versus 90% :  I think a default of 80% will get us a lot of "the
> fee estimation logic is broken, I want my transactions to confirm quick and
> a lot of them aren't confirming for 2 or 3 blocks."
>
> RE: RPC argument:  I'm reluctant to give too many 'knobs' for the RPC
> interface. I think the default percentage makes sense as a
> command-line/bitcoin.conf option; I can imagine services that want to save
> on fees running with -estimatefeethreshold=0.5  (or
> -estimatefeethreshold=0.95 if as-fast-as-possible confirmations are
> needed). Setting both the number of confirmations and the estimation
> threshold on a transaction-by-transaction basis seems like overkill to me.
>
> --
> --
> Gavin Andresen
>


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Gavin Andresen
On Tue, Oct 28, 2014 at 10:30 AM, Alex Morcos  wrote:
>
> Do you think it would make sense to make that 90% number an argument to
> the RPC call?  For instance there could be a default (I would use 80%) but then
> you could specify if you required a different certainty.  It wouldn't
> require any code changes and might make it easier for people to build more
> complicated logic on top of it.
>

RE: 80% versus 90% :  I think a default of 80% will get us a lot of "the
fee estimation logic is broken, I want my transactions to confirm quick and
a lot of them aren't confirming for 2 or 3 blocks."

RE: RPC argument:  I'm reluctant to give too many 'knobs' for the RPC
interface. I think the default percentage makes sense as a
command-line/bitcoin.conf option; I can imagine services that want to save
on fees running with -estimatefeethreshold=0.5  (or
-estimatefeethreshold=0.95 if as-fast-as-possible confirmations are
needed). Setting both the number of confirmations and the estimation
threshold on a transaction-by-transaction basis seems like overkill to me.

--
Gavin Andresen


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
Sorry, perhaps I misinterpreted that question.  The estimates will be
dominated by the prevailing transaction rates initially, so the estimates
you get for something like "what is the least I can pay and still be 90%
sure I get confirmed in 20 blocks" won't be insane, but they will still be
way too conservative.  I'm not sure what you meant by reasonable.  You
won't get the "correct" answer of something significantly less than 40k
sat/kB for quite some time.  Given that the half-life of the decay is 2.5
days, figure within a couple of days.  And in fact even in the steady state,
the new code will still return a much higher rate than the existing code,
say 10k sat/kB instead of 1k sat/kB, but that's just a result of the
sorting the existing code does and the fact that no one places transactions
with that small a fee.   To correctly give such low answers, the new code
will require that those super-low-feerate transactions occur
frequently enough, but the bar for enough data points in a feerate bucket is
pretty low: an average of 1 tx per block.  The bar can be made lower at the
expense of a bit of noisiness in the answers; for instance, for priorities I
had to make the bar significantly lower because there are so many fewer
transactions confirmed because of priorities.  I'm certainly open to tuning
some of these variables.
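
For illustration, the per-bucket bookkeeping amounts to something like
this (a sketch, not the actual patch; the names are mine):

    DECAY = 0.998   # per block; half-life of roughly 347 blocks

    class FeeBucket:
        def __init__(self):
            self.tx_count = 0.0    # decayed count of data points
            self.ok_count = 0.0    # decayed count confirmed within target

        def new_block(self):       # called once per connected block
            self.tx_count *= DECAY
            self.ok_count *= DECAY

        def record(self, confirmed_within_target):
            self.tx_count += 1
            if confirmed_within_target:
                self.ok_count += 1

        def success_rate(self):
            return self.ok_count / self.tx_count if self.tx_count else None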





On Tue, Oct 28, 2014 at 10:30 AM, Alex Morcos  wrote:

> Oh in just a couple of blocks, it'll give you a somewhat reasonable
> estimate for asking about every confirmation count other than 1, but it
> could take several hours for it to have enough data points to give you a
> good estimate for getting confirmed in one block (because the prevalent
> feerate is not always confirmed in 1 block >80% of the time).  Essentially
> what it does is just combine buckets until it has enough data points, so
> after the first block it might be treating all of the txs as belonging to
> the same feerate bucket, but since the answer it returns is the "median"*
> fee rate for that bucket, it's a reasonable answer right off the get-go.
>
> Do you think it would make sense to make that 90% number an argument to
> the RPC call?  For instance there could be a default (I would use 80%) but then
> you could specify if you required a different certainty.  It wouldn't
> require any code changes and might make it easier for people to build more
> complicated logic on top of it.
>
> *It can't actually track the median, but it identifies which of the
> smaller actual buckets the median would have fallen into and returns the
> average feerate for that median bucket.
>
>
>
>
>
> On Tue, Oct 28, 2014 at 9:59 AM, Gavin Andresen 
> wrote:
>
>> I think Alex's approach is better; I don't think we can know how much
>> better until we have a functioning fee market.
>>
>> We don't have a functioning fee market now, because fees are hard-coded.
>> So we get "pay the hard-coded fee and you'll get confirmed in one or two or
>> three blocks, depending on which miners mine the next three blocks and what
>> time of day it is."
>>
>> git HEAD code says you need a fee of 10, satoshis/kb to be pretty
>> sure you'll get confirmed in the next block. That looks about right with
>> Alex's real-world data (if we take "90% chance" as 'pretty sure you'll get
>> confirmed'):
>>
>> Fee rate: 10   Avg blocks to confirm: 1.09
>> % confirmed within N blocks -- 1: 0.901   2: 1.0   3: 1.0
>>
>> My only concern with Alex's code is that it takes much longer to get
>> 'primed' -- Alex, if I started with no data about fees, how long would it
>> take to be able to get enough data for a reasonable estimate of "what is
>> the least I can pay and still be 90% sure I get confirmed in 20 blocks" ?
>> Hours? Days? Weeks?
>>
>> --
>> --
>> Gavin Andresen
>>
>
>


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
Oh in just a couple of blocks, it'll give you a somewhat reasonable
estimate for asking about every confirmation count other than 1, but it
could take several hours for it to have enough data points to give you a
good estimate for getting confirmed in one block (because the prevalent
feerate is not always confirmed in 1 block >80% of the time).  Essentially
what it does is just combine buckets until it has enough data points, so
after the first block it might be treating all of the txs as belonging to
the same feerate bucket, but since the answer it returns is the "median"*
fee rate for that bucket, it's a reasonable answer right off the get-go.

Do you think it would make sense to make that 90% number an argument to the RPC
call?  For instance there could be a default (I would use 80%) but then you
could specify if you required a different certainty.  It wouldn't require
any code changes and might make it easier for people to build more
complicated logic on top of it.

*It can't actually track the median, but it identifies which of the smaller
actual buckets the median would have fallen into and returns the average
feerate for that median bucket.
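
A rough sketch of that footnote's trick (illustrative, not the patch
itself): each combined group keeps its sub-buckets, and the answer is the
average feerate of the sub-bucket holding the median data point.

    def median_bucket_feerate(group):
        # group: list of (avg_feerate, decayed_tx_count) sub-buckets,
        # cheapest first, combined because each is too small on its own.
        total = sum(count for _, count in group)
        seen = 0.0
        for avg_feerate, count in group:
            seen += count
            if seen >= total / 2:
                return avg_feerate
        return group[-1][0]

    group = [(1_000, 2.0), (5_000, 3.0), (40_000, 18.0)]
    print(median_bucket_feerate(group))   # 40000: the median point is there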





On Tue, Oct 28, 2014 at 9:59 AM, Gavin Andresen 
wrote:

> I think Alex's approach is better; I don't think we can know how much
> better until we have a functioning fee market.
>
> We don't have a functioning fee market now, because fees are hard-coded.
> So we get "pay the hard-coded fee and you'll get confirmed in one or two or
> three blocks, depending on which miners mine the next three blocks and what
> time of day it is."
>
> git HEAD code says you need a fee of 10, satoshis/kb to be pretty sure
> you'll get confirmed in the next block. That looks about right with Alex's
> real-world data (if we take "90% chance" as 'pretty sure you'll get
> confirmed'):
>
> Fee rate: 10   Avg blocks to confirm: 1.09
> % confirmed within N blocks -- 1: 0.901   2: 1.0   3: 1.0
>
> My only concern with Alex's code is that it takes much longer to get
> 'primed' -- Alex, if I started with no data about fees, how long would it
> take to be able to get enough data for a reasonable estimate of "what is
> the least I can pay and still be 90% sure I get confirmed in 20 blocks" ?
> Hours? Days? Weeks?
>
> --
> --
> Gavin Andresen
>


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Gavin Andresen
I think Alex's approach is better; I don't think we can know how much
better until we have a functioning fee market.

We don't have a functioning fee market now, because fees are hard-coded. So
we get "pay the hard-coded fee and you'll get confirmed in one or two or
three blocks, depending on which miners mine the next three blocks and what
time of day it is."

git HEAD code says you need a fee of 10, satoshis/kb to be pretty sure
you'll get confirmed in the next block. That looks about right with Alex's
real-world data (if we take "90% chance" as 'pretty sure you'll get
confirmed'):

Fee rate: 10   Avg blocks to confirm: 1.09
% confirmed within N blocks -- 1: 0.901   2: 1.0   3: 1.0

My only concern with Alex's code is that it takes much longer to get
'primed' -- Alex, if I started with no data about fees, how long would it
take to be able to get enough data for a reasonable estimate of "what is
the least I can pay and still be 90% sure I get confirmed in 20 blocks" ?
Hours? Days? Weeks?

--
Gavin Andresen


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
Yeah, so to explain points 1 and 2 a bit more:

1)  It's about what question you are trying to answer.  The existing code
tries to answer the question: what is the median fee of a transaction
that gets confirmed in Y blocks?  It turns out that is not a very good
proxy for the question we really want answered, which is: what fee is
necessary such that we are likely to be confirmed within Y blocks?
What happens is that there are so many transactions at the 40k satoshis/kB
feerate that they turn out to be the dominant data points among transactions
confirmed after 2 blocks, 3 blocks, etc. and not only 1 block.

So, for example: a hypothetical sample of 20 txs might find 2 of your 1k
sat/kB txs and 18 of the 40k sat/kB txs.  Perhaps 15 of the 40k txs are
confirmed in 1 block and the other 3 in 2 blocks, and 1 of the 1k txs in 1
block and the other in 2 blocks.  So if you analyze the data by
confirmation time, you find that 15/16 1-conf txs are 40k and 3/4 2-conf
txs are 40k, so the median feerate is 40k for both 1 and 2 confirmations.
Instead, the correct thing to do is analyze the data by feerate.  Doing
that, we find that 15/18 (83%) of 40k txs are confirmed in 1 block and 1/2
(50%) of 1k txs are.  But 100% of both are confirmed within two blocks.  This
leads you to say: you need a 40k feerate if you want to get confirmed in 1
block, but 1k is sufficient if you want to be confirmed in 2 blocks.
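
The same sample run through both analyses, in a few lines of Python (the
numbers are exactly the hypothetical ones above):

    from statistics import median

    # (feerate in sat/kB, blocks to confirm) for the 20-tx sample
    sample = ([(40_000, 1)] * 15 + [(40_000, 2)] * 3
              + [(1_000, 1), (1_000, 2)])

    # Binned by confirmation time: both medians come out 40k.
    for y in (1, 2):
        print(y, median(f for f, blocks in sample if blocks == y))

    # Binned by feerate: 83% of 40k txs confirm in 1 block, 50% of 1k
    # txs do, and both hit 100% within 2 blocks.
    for rate in (40_000, 1_000):
        blocks = [b for f, b in sample if f == rate]
        print(rate, sum(b <= 1 for b in blocks) / len(blocks))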

Put another way, let's imagine you wanted to know how tall you have to be
to no longer fit in the coach seats on an airplane.   If you looked at the
median height of all people in coach and all people in first class, you
would see that they were about the same, and you would get a confusing
answer.  Instead you have to bin by height, and look at the percentage of
people of each height that fly first class vs coach, and I'd guess that by
the time you got up to, say, 6'8" you were finding greater than 50% of the
people flying first class.

2) The code also presupposes that higher-fee-rate transactions must be
confirmed quicker.  And so in addition to binning all transactions by
confirmation time, the code then sorts all of the transactions and re-bins
them such that the highest-fee transactions are all in the 1-confirmation
bin and the lowest-fee transactions are all in the 25-confirmation bin.  If
we'd been trying to predict whether the first 2 bytes of the transaction hash
influenced our confirmation time, we would have started with a random
distribution of hashes in each confirmation bin, but then after doing the
sorting, we'd of course have found that the "median hash" of the
1-confirmation transactions was higher, because we sorted it to make that
the case.

In the airplane example this would have been equivalent to taking the
median height of the 20 tallest people on the plane (assuming first class
is 20 seats) and saying that was the height for first class and the median
height of the remaining people for coach.  This will appear to give a
slightly better answer than the first approach, but is still wrong.



There are still a lot of additional improvements that can be made to fee
estimation.  One problem my proposed code has is that there really just
aren't enough data points for low-feerate transactions to give meaningful
answers about how likely those are to be confirmed, so its answers are still
a bit conservative.  This will improve, though, as the actual distribution of
transactions spreads out.    The other major needed improvement is to not
just describe what has happened in the past, but to
actually make a prediction about what is going to happen in the future.
For instance, looking at the feerates of unconfirmed transactions currently
in the mempool could tell you that if you want to be confirmed immediately
you'll need to be high enough in that priority queue.
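
That last idea could start as simply as this (purely illustrative, not
part of the proposed code): sort the current mempool by feerate and find
the cheapest rate that still fits in the next block's remaining space.

    def feerate_to_outbid_mempool(mempool, block_space_kb):
        # mempool: list of (feerate_sat_per_kb, size_kb) entries
        used, cutoff = 0.0, 0
        for feerate, size in sorted(mempool, reverse=True):
            if used + size > block_space_kb:
                break
            used += size
            cutoff = feerate
        return cutoff

    mempool = [(40_000, 0.5), (10_000, 0.3), (1_000, 0.7), (25_000, 0.4)]
    print(feerate_to_outbid_mempool(mempool, 1.0))   # 25000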








On Tue, Oct 28, 2014 at 5:55 AM, Mike Hearn  wrote:

> Could you explain a little further why you think the current approach is
> statistically incorrect? There's no doubt that the existing estimates the
> system produces are garbage, but that's because it assumes players in the
> fee market are rational and they are not.
>
> Fwiw bitcoinj 0.12.1 applies the January fee drop and will attach a fee of
> only 1000 satoshis per kB by default. I also have a program that measures
> confirmation time for a given fee level (with fresh coins so there's no
> priority) and it aligns with your findings: most txns confirm within a
> couple of blocks.
>
> Ultimately there isn't any easy method to stop people throwing money away.
> Bitcoinj will probably continue to use hard coded fee values for now to try
> and contribute to market sanity in the hope it makes smartfees smarter.
> On 27 Oct 2014 19:34, "Alex Morcos"  wrote:
>
>> I've been playing around with the code for estimating fees and found a
>> few issues with the existing code.   I think this will address several
>> observations that the estimates returned by the existing code appear to be
>> too high.

Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Mike Hearn
Could you explain a little further why you think the current approach is
statistically incorrect? There's no doubt that the existing estimates the
system produces are garbage, but that's because it assumes players in the
fee market are rational and they are not.

Fwiw bitcoinj 0.12.1 applies the January fee drop and will attach a fee of
only 1000 satoshis per kB by default. I also have a program that measures
confirmation time for a given fee level (with fresh coins so there's no
priority) and it aligns with your findings: most txns confirm within a
couple of blocks.

Ultimately there isn't any easy method to stop people throwing money away.
Bitcoinj will probably continue to use hard-coded fee values for now to try
and contribute to market sanity in the hope it makes smartfees smarter.
On 27 Oct 2014 19:34, "Alex Morcos"  wrote:

> I've been playing around with the code for estimating fees and found a few
> issues with the existing code.   I think this will address several
> observations that the estimates returned by the existing code appear to be
> too high.  For instance see @cozz in Issue 4866
> .
>
> Here's what I found:
>
> 1) We're trying to answer the question of what fee X you need in order to
> be confirmed within Y blocks.   The existing code tries to do that by
> calculating the median fee for each possible Y instead of gathering
> statistics for each possible X.  That approach is statistically incorrect.
> In fact since certain X's appear so frequently, they tend to dominate the
> statistics at all possible Y's (a fee rate of about 40k satoshis/kB).
>
> 2) The existing code then sorts all of the data points in all of the
> buckets together by fee rate and then reassigns buckets before calculating
> the medians for each confirmation bucket.  The sorting forces a
> relationship where there might not be one.  Imagine some other variable,
> such as first 2 bytes of the transaction hash.  If we sorted these and then
> used them to give estimates, we'd see a clear but false relationship where
> transactions with low starting bytes in their hashes took longer to confirm.
>
> 3) Transactions which don't have all their inputs available (because they
> depend on other transactions in the mempool) aren't excluded from the
> calculations.  This skews the results.
>
> I rewrote the code to follow a different approach.  I divided all possible
> fee rates up into fee rate buckets (I spaced these logarithmically).  For
> each transaction that was confirmed, I updated the appropriate fee rate
> bucket with how many blocks it took to confirm that transaction.
>
> The hardest part of doing this fee estimation is to decide what the
> question really is that we're trying to answer.  I took the approach that
> if you are asking what fee rate I need to be confirmed within Y blocks,
> then what you would like to know is the lowest fee rate such that a
> relatively high percentage of transactions of that fee rate are confirmed
> within Y blocks. Since even the highest fee transactions are confirmed
> within the first block only 90-93% of the time, I decided to use 80% as my
> cutoff.  So now to answer "estimatefee Y", I scan through all of the fee
> buckets from the most expensive down until I find the last bucket with >80%
> of the transactions confirmed within Y blocks.
>
> Unfortunately we still have the problem of not having enough data points
> for non-typical fee rates, and so it requires gathering a lot of data to
> give reasonable answers. To keep all of these data points in a circular
> buffer and then sort them for every analysis (or after every new block) is
> expensive.  So instead I adopted the approach of keeping an exponentially
> decaying moving average for each bucket.  I used a decay of .998 which
> represents a half life of 374 blocks or about 2.5 days.  Also if a bucket
> doesn't have very many transactions, I combine it with the next bucket.
>
> Here is a link to the code.  I
> can create an actual pull request if there is consensus that it makes sense
> to do so.
>
> I've attached a graph comparing the estimates produced for 1-3
> confirmations by the new code and the old code.  I did apply the patch to
> fix issue 3 above to the old code first.  The new code is in green and the
> fixed code is in purple.  The Y axis is a log scale of feerate in satoshis
> per KB and the X axis is chain height.  The new code produces the same
> estimates for 2 and 3 confirmations (the answers are effectively quantized
> by bucket).
>
> I've also completely reworked smartfees.py.  It turns out that many,
> many more transactions must be put through in order to have statistically
> significant results, so the test is quite slow to run (about 3 mins on my
> machine).
>
> I've also been running a real world test, sending transactions of various
> fee rates and seeing how long they took to get confirmed.  After almost 200
> tx's at