Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-29 Thread Peter Todd

I don't have time to look at the details of your statistical methods
unfortunately due to some deadlines, but a quick comment:

You should think about the malleability of your estimates to attackers.
For instance, the current fee estimation code has a serious issue where
it'll happily estimate ludicrously high fees based on very little data.
There is an 'insane fees' failsafe, but it's IIRC set to allow transactions
with fees of up to 100mBTC/tx, roughly $50 at current exchange rates. It's
relatively easy to get a wallet into a condition where this happens, as the
estimates are considered valid even when based on very little data - a
simple sybil attack suffices (e.g. the recently published paper(1) on Tor
sybil attacks comes to mind as one example of many ways to do this).
Obviously this could empty someone's wallet pretty quickly; an exchange
that makes a few dozen transactions an hour could easily lose tens of
thousands of dollars to this exploit. Someone correct me if I'm wrong, but
last I checked this exploit was still unfixed in git HEAD.

A user-configurable failsafe limit is a pretty obvious solution here,
albeit a crude one; it'd be interesting to see if a plausible security
argument could be made for something more sophisticated, like taking
into account coin-age of observed transactions that estimates are based
on.
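
For concreteness, a minimal sketch of that crude failsafe - the name and
the ceiling value here are hypothetical, not an existing option:

    from decimal import Decimal

    # Hypothetical user-configured ceiling, e.g. read from bitcoin.conf;
    # 0.01 BTC/kB is purely illustrative, not a recommended value.
    MAX_ESTIMATED_FEE_PER_KB = Decimal("0.01")

    def clamped_fee_estimate(raw_estimate_btc_per_kb):
        """Never let a (possibly attacker-inflated) estimate exceed the
        user's configured ceiling."""
        return min(raw_estimate_btc_per_kb, MAX_ESTIMATED_FEE_PER_KB)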

1) Bitcoin over Tor isn't a good idea,
   http://arxiv.org/abs/1410.6079

-- 
'peter'[:-1]@petertodd.org
098d3c9095b47ff1fd692fef5ac6731340802c7c63d38bb0




Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Mike Hearn
Could you explain a little further why you think the current approach is
statistically incorrect? There's no doubt that the existing estimates the
system produces are garbage, but that's because it assumes players in the
fee market are rational and they are not.

Fwiw bitcoinj 0.12.1 applies the January fee drop and will attach a fee of
only 1000 satoshis per kB by default. I also have a program that measures
confirmation time for a given fee level (with fresh coins so there's no
priority), and it aligns with your findings: most txns confirm within a
couple of blocks.

Ultimately there isn't any easy way to stop people throwing money away.
Bitcoinj will probably continue to use hard-coded fee values for now, to try
and contribute to market sanity in the hope it makes smartfees smarter.

Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Gavin Andresen
I think Alex's approach is better; I don't think we can know how much
better until we have a functioning fee market.

We don't have a functioning fee market now, because fees are hard-coded. So
we get "pay the hard-coded fee and you'll get confirmed in one or two or
three blocks", depending on which miners mine the next three blocks and what
time of day it is.

git HEAD code says you need a fee of 100,000 satoshis/kB to be pretty sure
you'll get confirmed in the next block. That looks about right with Alex's
real-world data (if we take a 90% chance as 'pretty sure you'll get
confirmed'):

Fee rate 100,000   Avg blocks to confirm 1.09   % confirmed within 1: 0.901
2: 1.0   3: 1.0

My only concern with Alex's code is that it takes much longer to get
'primed' -- Alex, if I started with no data about fees, how long would it
take to gather enough data for a reasonable estimate of 'what is the least I
can pay and still be 90% sure I get confirmed in 20 blocks'? Hours? Days?
Weeks?

--
Gavin Andresen


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
Oh, in just a couple of blocks it'll give you a somewhat reasonable estimate
for every confirmation count other than 1, but it could take several hours
for it to have enough data points to give you a good estimate for getting
confirmed in one block (because the prevalent feerate is not always
confirmed in 1 block 80% of the time). Essentially what it does is combine
buckets until it has enough data points, so after the first block it might
be treating all of the txs as belonging to the same feerate bucket, but
since the answer it returns is the median* fee rate for that bucket, it's a
reasonable answer right from the get-go.

Do you think it would make sense to make that 90% number an argument to the
RPC call? For instance there could be a default (I would use 80%), but then
you could specify if you required a different certainty. It wouldn't require
any code changes and might make it easier for people to build more
complicated logic on top of it.

*It can't actually track the median, but it identifies which of the smaller
actual buckets the median would have fallen into and returns the average
feerate for that median bucket.
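
For illustration, a minimal sketch of that combine-then-median-bucket logic
as described above (the data layout is my own simplification, not the
actual patch):

    def median_bucket_feerate(buckets, min_datapoints):
        """buckets: list of (avg_feerate, tx_count) for adjacent small
        feerate buckets.  Combine adjacent buckets until there are enough
        data points, then return the average feerate of the small bucket
        the median data point falls into."""
        combined, total = [], 0
        for avg_feerate, count in buckets:
            combined.append((avg_feerate, count))
            total += count
            if total >= min_datapoints:
                break
        seen = 0
        for avg_feerate, count in combined:
            seen += count
            if seen * 2 >= total:   # reached the median data point
                return avg_feerate
        return None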







Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
Sorry, perhaps I misinterpreted that question.  The estimates will be
dominated by the prevailing transaction rates initially, so the estimates
you get for something like 'what is the least I can pay and still be 90%
sure I get confirmed in 20 blocks' won't be insane, but they will still be
way too conservative.  I'm not sure what you meant by 'reasonable'.  You
won't get the correct answer of something significantly less than 40k
sat/kB for quite some time; given that the half-life of the decay is 2.5
days, figure within a couple of days.

And in fact even in the steady state, the new code will still return a much
higher rate than the existing code, say 10k sat/kB instead of 1k sat/kB,
but that's just a result of the sorting the existing code does and the fact
that no one places transactions with that small a fee.  To correctly give
such low answers, the new code requires that those super-low-feerate
transactions occur frequently enough, but the bar for enough data points in
a feerate bucket is pretty low: an average of 1 tx per block.  The bar can
be made lower at the expense of a bit of noisiness in the answers; for
instance, for priorities I had to make the bar significantly lower because
there are so many fewer transactions confirmed on the basis of priority.
I'm certainly open to tuning some of these variables.
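
As a sketch of what that bar looks like with decayed counts (my own
illustration - a steady rate of r txs/block converges to r/(1-decay)
decayed data points, so the threshold has to saturate at that limit):

    DECAY = 0.998
    SUFFICIENT_TXS_PER_BLOCK = 1.0   # the "average of 1 tx per block" bar

    def bucket_has_enough_data(decayed_tx_count, blocks_tracked):
        # Geometric series: steady input of r per block tends to
        # r / (1 - DECAY), so never demand more decayed data points
        # than that steady-state limit.
        target = SUFFICIENT_TXS_PER_BLOCK * min(blocks_tracked,
                                                1 / (1 - DECAY))
        return decayed_tx_count >= target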







Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Gavin Andresen
RE: 80% versus 90%: I think a default of 80% will get us a lot of "the fee
estimation logic is broken, I want my transactions to confirm quickly and a
lot of them aren't confirming for 2 or 3 blocks" complaints.

RE: RPC argument:  I'm reluctant to give too many 'knobs' for the RPC
interface. I think the default percentage makes sense as a
command-line/bitcoin.conf option; I can imagine services that want to save
on fees running with -estimatefeethreshold=0.5  (or
-estimatefeethreshold=0.95 if as-fast-as-possible confirmations are
needed). Setting both the number of confirmations and the estimation
threshold on a transaction-by-transaction basis seems like overkill to me.
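
As a toy sketch of the shape of that knob (the option is only proposed
here, it doesn't exist yet, and the real thing would live in bitcoind's
own option parsing rather than Python):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("-estimatefeethreshold", type=float, default=0.80,
                        help="required probability of confirming within "
                             "the target number of blocks; lower it "
                             "(e.g. 0.5) to save on fees, raise it "
                             "(e.g. 0.95) for as-fast-as-possible "
                             "confirmation")
    args = parser.parse_args()
    assert 0.0 < args.estimatefeethreshold <= 1.0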

--
Gavin Andresen


Re: [Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-28 Thread Alex Morcos
RE: 90%: I think it's fine to use 90% for anything other than 1
confirmation, but if you look at the real-world data test I did, or the raw
data from this new code, you'll see that even the highest-fee-rate
transactions only get confirmed at about a 90% rate in 1 block. So if you
use that as your cutoff, you will sometimes get no answer, sometimes a very
high fee rate, and sometimes a reasonable fee rate; it just depends, because
the data is too noisy. I think that's just because there is no good answer
to that question: there is no fee you can put on your transaction to
guarantee a greater-than-90% chance of getting confirmed in one block. I
think 85% might be safe?

RE: tunable as command-line/bitcoin.conf: sounds good!

OK, sorry to have all this conversation on the dev list; maybe I'll turn
this into an actual PR if we want to comment on the code? I just wanted to
see if it even made sense to make a PR for this, or whether this isn't the
way we wanted to go about it.






[Bitcoin-development] Reworking the policy estimation code (fee estimates)

2014-10-27 Thread Alex Morcos
I've been playing around with the code for estimating fees and found a few
issues with the existing code.   I think this will address several
observations that the estimates returned by the existing code appear to be
too high.  For instance see @cozz in Issue 4866
https://github.com/bitcoin/bitcoin/issues/4866.

Here's what I found:

1) We're trying to answer the question of what fee rate X you need in order
to be confirmed within Y blocks.  The existing code tries to do that by
calculating the median fee for each possible Y instead of gathering
statistics for each possible X.  That approach is statistically incorrect.
In fact, since certain X's appear so frequently, they tend to dominate the
statistics at all possible Y's (a fee rate of about 40k satoshis/kB).

2) The existing code then sorts all of the data points in all of the
buckets together by fee rate, and then reassigns buckets before calculating
the medians for each confirmation bucket.  The sorting forces a
relationship where there might not be one.  Imagine some other variable,
such as the first 2 bytes of the transaction hash.  If we sorted these and
then used them to give estimates, we'd see a clear but false relationship
where transactions with low starting bytes in their hashes took longer to
confirm.

3) Transactions which don't have all their inputs available (because they
depend on other transactions in the mempool) aren't excluded from the
calculations.  This skews the results.
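
A sketch of the filter the fix applies (hypothetical types; not the patch's
actual code):

    from collections import namedtuple

    TxIn = namedtuple("TxIn", "prevout_hash prevout_n")
    Tx = namedtuple("Tx", "txid inputs feerate")

    def should_track_for_estimation(tx, mempool_txids):
        """Only feed a transaction into the fee statistics if none of its
        inputs spend another mempool transaction: a child's confirmation
        time reflects its parents' fees, not its own feerate."""
        return all(i.prevout_hash not in mempool_txids for i in tx.inputs)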

I rewrote the code to follow a different approach.  I divided all possible
fee rates up into fee rate buckets (I spaced these logarithmically).  For
each transaction that was confirmed, I updated the appropriate fee rate
bucket with how many blocks it took to confirm that transaction.
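
A minimal sketch of that bookkeeping (the bounds, the 10% spacing, and the
25-block cap are illustrative assumptions, not necessarily the patch's
values):

    import bisect

    # Logarithmically spaced feerate bucket lower bounds, in satoshis/kB.
    SPACING = 1.1
    bucket_bounds = []
    rate = 1000.0
    while rate <= 1e7:
        bucket_bounds.append(rate)
        rate *= SPACING

    # confirm_counts[b][y]: how many txs in feerate bucket b confirmed in
    # y+1 blocks (decayed counts in the real scheme; see below).
    MAX_CONFIRMS = 25
    confirm_counts = [[0.0] * MAX_CONFIRMS for _ in bucket_bounds]

    def record_confirmed_tx(feerate_sat_per_kb, blocks_to_confirm):
        b = bisect.bisect_right(bucket_bounds, feerate_sat_per_kb) - 1
        y = min(blocks_to_confirm, MAX_CONFIRMS) - 1
        if b >= 0 and y >= 0:
            confirm_counts[b][y] += 1.0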

The hardest part of doing this fee estimation is deciding what question
we're really trying to answer.  I took the approach that if you are asking
'what fee rate do I need to be confirmed within Y blocks', then what you
would like to know is the lowest fee rate such that a relatively high
percentage of transactions at that fee rate are confirmed within Y blocks.
Since even the highest-fee transactions are confirmed within the first
block only 90-93% of the time, I decided to use 80% as my cutoff.  So now,
to answer estimatefee Y, I scan through all of the fee buckets, from the
most expensive down, until I find the last bucket in which 80% of the
transactions are confirmed within Y blocks.
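
A sketch of that scan (the per-bucket pass rates are precomputed inputs
here; the real code would derive them from its counters):

    def estimate_feerate(buckets, success_threshold=0.80):
        """buckets maps a bucket's average feerate to (confirmed_within_Y,
        total_txs) for the target Y.  Scan from the most expensive bucket
        downward; return the cheapest feerate that still meets the bar."""
        best = None
        for feerate in sorted(buckets, reverse=True):
            confirmed, total = buckets[feerate]
            if total == 0:
                continue
            if confirmed / total >= success_threshold:
                best = feerate   # keep descending while the bar is met
            else:
                break            # first miss ends the scan
        return best

For example, estimate_feerate({50000: (90, 100), 20000: (85, 100),
5000: (40, 100)}) returns 20000: the cheapest bucket still confirming at
least 80% of its transactions within the target.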

Unfortunately we still have the problem of not having enough data points
for non-typical fee rates, so it requires gathering a lot of data to give
reasonable answers.  Keeping all of these data points in a circular buffer
and then sorting them for every analysis (or after every new block) would
be expensive.  So instead I adopted the approach of keeping an
exponentially decaying moving average for each bucket.  I used a decay of
0.998, which represents a half-life of 374 blocks, or about 2.5 days.
Also, if a bucket doesn't have very many transactions, I combine it with
the next bucket.
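
And a sketch of the per-block decay update, on the confirm_counts layout
from the sketch above:

    DECAY = 0.998   # per block; roughly 2.5 days' worth of half-life

    def apply_block(confirm_counts, new_datapoints):
        """On each new block, decay every bucket's running totals, then
        add the data points from transactions that confirmed in this
        block (new_datapoints has the same [bucket][blocks-1] layout)."""
        for b, row in enumerate(confirm_counts):
            for y in range(len(row)):
                row[y] = row[y] * DECAY + new_datapoints[b][y]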

Here is a link https://github.com/morcos/bitcoin/pull/3 to the code.  I
can create an actual pull request if there is consensus that it makes sense
to do so.

I've attached a graph comparing the estimates produced for 1-3
confirmations by the new code and the old code.  I did apply the patch to
fix issue 3 above to the old code first.  The new code is in green and the
fixed code is in purple.  The Y axis is a log scale of feerate in satoshis
per KB and the X axis is chain height.  The new code produces the same
estimates for 2 and 3 confirmations (the answers are effectively quantized
by bucket).

I've also completely reworked smartfees.py.  It turns out that many, many
more transactions must be put through in order to get statistically
significant results, so the test is quite slow to run (about 3 minutes on
my machine).

I've also been running a real world test, sending transactions of various
fee rates and seeing how long they took to get confirmed.  After almost 200
tx's at each fee rate, here are the results so far:

Fee rate (sat/kB)   Avg blocks to confirm   % confirmed within 1 / 2 / 3 blocks
  1,100             2.30                    0.528 / 0.751 / 0.870
  2,500             2.22                    0.528 / 0.766 / 0.880
  5,000             1.93                    0.528 / 0.782 / 0.891
 10,000             1.67                    0.569 / 0.844 / 0.943
 20,000             1.33                    0.715 / 0.963 / 0.989
 30,000             1.27                    0.751 / 0.974 / 1.0
 40,000             1.25                    0.792 / 0.953 / 0.994
 60,000             1.12                    0.875 / 1.0   / 1.0
100,000             1.09                    0.901 / 1.0   / 1.0
300,000             1.12                    0.886 / 0.989 / 1.0


Alex