Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Gavin Andresen
On Thu, May 28, 2015 at 1:34 PM, Mike Hearn m...@plan99.net wrote:

 As noted, many miners just accept the defaults. With your proposed change
 their target would effectively *drop* from 1mb to 800kb today, which
 seems crazy. That's the exact opposite of what is needed right now.


 I am very skeptical about this idea.


By the time a hard fork can happen, I expect average block size will be
above 500K.

Would you support a rule that was "larger of 1MB or 2x average size"? That
is strictly better than the situation we're in today.
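
A quick sketch of that rule (illustrative Python, not Bitcoin Core code;
the function name and the 2016-block averaging window are my assumptions):

def max_block_size(recent_sizes, floor=1_000_000, multiplier=2):
    """Hard limit: the larger of the floor or multiplier * recent average."""
    avg = sum(recent_sizes) / len(recent_sizes)
    return max(floor, int(multiplier * avg))

print(max_block_size([500_000] * 2016))  # -> 1000000; never below today's cap
print(max_block_size([600_000] * 2016))  # -> 1200000; grows once avg passes 500K

The floor is what makes it "strictly better": the limit can only rise
relative to today's 1MB, never fall below it.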

-- 
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Pieter Wuille
 "until we have size-independent new block propagation"

I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.

There are several significant improvements likely possible to various
aspects of block propagation, but I don't believe you can make any part
completely size-independent. Perhaps the remaining size-dependent terms in
the total propagation time vanish compared to the link latencies for 1 MB
blocks, but there will be some block size for which that is no longer the
case, and we need to know where that point is.
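
To make that crossover concrete, here is a back-of-envelope model
(entirely my own illustration; the bandwidth and per-byte validation
figures are made-up assumptions, not measurements from this thread):

def propagation_time(size_bytes, latency_s=0.5, bandwidth_bps=100e6,
                     validate_s_per_byte=50e-9):
    transfer = size_bytes * 8 / bandwidth_bps    # grows linearly with size
    validate = size_bytes * validate_s_per_byte  # grows linearly with size
    return latency_s + transfer + validate

for size in (1e6, 20e6, 100e6):
    print("%5.0f MB: %.2f s" % (size / 1e6, propagation_time(size)))
# 1 MB:   0.63 s (the fixed link latency dominates)
# 20 MB:  3.10 s (the size-dependent terms now dominate)
# 100 MB: 13.50 s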

* You can't assume that every transaction is pre-relayed and pre-validated.
This can happen due to non-uniform relay policies (different codebases, and
future things like size-limited mempools), double spend attempts, and
transactions generated before a block had time to propagate. You've
previously argued for a policy of not including too-recent transactions,
but that requires a bound on network diameter, and if these late
transactions are profitable, it has exactly the same problem as making
larger blocks non-proportionally more economic for larger pool groups
(if propagation time is size-dependent).
  * This results in extra bandwidth usage for efficient relay protocols,
and if discrepancy estimation mispredicts the size of IBLT or error
correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block
relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be
cached in advance.

* Block validation with 100% known and pre-validated transactions is not
constant time, due to updates that need to be made to the UTXO set (and
future ideas like UTXO commitments would make this effect an order of
magnitude worse).

* More efficient relay protocols also have higher CPU cost for
encoding/decoding.

Again, none of this is a reason why the block size can't increase. If
availability of hardware with higher bandwidth, faster disk/ram access
times, and faster CPUs increases, we should be able to have larger blocks
with the same propagation profile as smaller blocks with earlier technology.

But we should know how technology scales with larger blocks, and I don't
believe we do, apart from microbenchmarks in laboratory conditions.

-- 
Pieter

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Gavin Andresen
On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock b...@mattwhitlock.name
wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.


A lot of people like this idea, or something like it. It is nice and
simple, which is really important for consensus-critical code.

With this rule in place, I believe there would be more fee pressure
(miners would be creating smaller blocks) today. I created a couple of
histograms of block sizes to infer what policy miners are ACTUALLY
following today with respect to block size:

Last 1,000 blocks:
  http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.
This graph might be misleading, because transaction volume or fees might
not be high enough over the last few days to fill blocks to whatever limit
miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of
transactions waiting to be confirmed:
   http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of
transactions waiting to be confirmed because miners who choose to create
small blocks got lucky and found more blocks than normal.  In fact, it
looks like that is what happened: more smaller-than-normal blocks were
found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max
block size 600K; miners would definitely be squeezing out transactions /
putting pressure to increase transaction fees. Even a 2x rule (implying
800K max blocks) would, today, be squeezing out transactions / putting
pressure to increase fees.

Using a median size instead of an average means the size can increase or
decrease more quickly. For example, imagine the rule is median of last
2016 blocks and 49% of miners are producing 0-size blocks and 51% are
producing max-size blocks. The median is max-size, so the 51% have total
control over making blocks bigger.  Swap the roles, and the median is
min-size.
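
That 49%/51% example is easy to check numerically (a quick sketch, with
illustrative sizes):

from statistics import median, mean

MAX_SIZE = 1_000_000
blocks = [0] * 988 + [MAX_SIZE] * 1028   # ~49% empty, ~51% max-size

print(median(blocks))  # 1000000 -> the 51% fully control a median rule
print(mean(blocks))    # 509920.6... -> an average reflects both camps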

Because of that, I think using an average is better.

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Peter Todd
On Thu, May 28, 2015 at 01:19:44PM -0400, Gavin Andresen wrote:
 As for whether there should be fee pressure now or not: I have no
 opinion, besides "we should make block propagation faster so there is no
 technical reason for miners to produce tiny blocks." I don't think us
 developers should be deciding things like whether or not fees are too high,
 too low, etc.

Note that the majority of hashing power is using Matt Corallo's block
relay network, something I confirmed the other day through my mining
contacts. Interestingly, the miners that aren't using it include some of
the largest pools; I haven't yet gotten an answer as to what their
rationale for not using it was exactly.

Importantly, this does mean that block propagation is probably fairly
close to optimal already, modulo major changes to the consensus
protocol; IBLT won't improve the situation much, if any.

It's also notable that we're already having issues with miners turning
validation off as a way to lower their latency; I've myself been asked
about the possibility of creating an SPV miner that, to shave off time
while new blocks are propagating, skips validation and builds directly
off of block headers corresponding to blocks with unknown contents.

-- 
'peter'[:-1]@petertodd.org
0327487b689490b73f9d336b3008f82114fd3ada336bcac0




Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction

2015-05-28 Thread Pieter Wuille
On May 28, 2015 10:42 AM, Raystonn . rayst...@hotmail.com wrote:

 I agree that developers should avoid imposing economic policy.  It is
dangerous for Bitcoin and the core developers themselves to become such a
central point of attack for those wishing to disrupt Bitcoin.

I could not agree more that developers should not be in charge of the
network rules.

Which is why - in my opinion - hard forks cannot be controversial things. A
controversial change to the software, forced to be adopted by the public
because the only alternative is a permanent chain fork, is a use of power
that developers (or anyone) should not have, and an incredibly dangerous
precedent for other changes that only a subset of participants would want.

The block size is also not just an economic policy. It is the compromise
the _network_ chooses to make between utility and various forms of
centralization pressure, and we should treat it as a compromise, not as
some limit that must simply give way to scaling demands.

I personally think the block size should increase, by the way, but only
under a policy of doing so after technological growth has been shown to
be sufficient to support it without increased risk.

-- 
Pieter


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Mike Hearn

 Twenty is scary.


To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.

The max block size is really a limit set by hardware capability, which is
something that's difficult to measure in software. I think I preferred your
original formula that guesstimated based on previous trends to one that
just tries to follow some average.

As noted, many miners just accept the defaults. With your proposed change
their target would effectively *drop* from 1mb to 800kb today, which seems
crazy. That's the exact opposite of what is needed right now.

I am very skeptical about this idea.


 I don't think us developers should be deciding things like whether or not
 fees are too high, too low, etc.

Miners can already attempt to apply fee pressure by just not mining
transactions that they feel don't pay enough. Some sort of auto-cartel that
attempts to restrict supply based on everyone looking at everyone else
feels overly complex and prone to strange situations: it looks a lot like
some kind of Mexican standoff to me.

Additionally, the justification for the block size limit was preventing
DoS by someone mining troll blocks. It was never meant to be about fee
pressure. Resource management inside Bitcoin Core is certainly something
to be handled by developers.


Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction

2015-05-28 Thread Gavin Andresen
On Thu, May 28, 2015 at 1:59 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 I personally think the block size should increase, by the way, but only
 under a policy of doing so after technological growth has been shown to
 be sufficient to support it without increased risk.

Can you be more specific about this? What risks are you worried about?

I've tried to cover all that I've heard about in my blog posts about why
I think the risks of 20MB blocks are outweighed by the benefits; am I
missing something?
  (blog posts are linked from
http://gavinandresen.ninja/time-to-roll-out-bigger-blocks )

There is the "a sudden jump to a 20MB max might have unforeseen
consequences" risk that I don't address, but a dynamic increase would fix
that.

-- 
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Gavin Andresen
Can we hold off on bike-shedding the particular choice of parameters until
people have a chance to weigh in on whether or not there is SOME set of
dynamic parameters they would support right now?


-- 
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction

2015-05-28 Thread Raystonn .
I agree that developers should avoid imposing economic policy.  It is dangerous 
for Bitcoin and the core developers themselves to become such a central point 
of attack for those wishing to disrupt Bitcoin.  My opinion is these things are 
better left to a decentralized free market anyhow.


From: Gavin Andresen 
Sent: Thursday, May 28, 2015 10:19 AM
To: Mike Hearn 
Cc: Bitcoin Dev 
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB 
stepfunction

On Thu, May 28, 2015 at 1:05 PM, Mike Hearn m...@plan99.net wrote:

Isn't that a step backwards, then? I see no reason for fee pressure to 
exist at the moment. All it's doing is turning away users for no purpose: 
mining isn't supported by fees, and the tiny fees we use right now seem to be 
good enough to stop penny flooding.


  Why not set the max size to be 20x the average size? Why 2x, given you just
pointed out that'd result in blocks shrinking rather than growing?

Twenty is scary.

And two is a very neutral number: if 50% of hashpower want the max size to grow
as fast as possible and 50% are dead-set opposed to any increase in max size,
then half produce blocks 2 times as big, half produce empty blocks, and the max
size doesn't change. If it were 20, then a small minority of miners could force
a max size increase.  (If it is less than 2, then a minority of miners can
force the block size down.)
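
That neutrality is a simple fixed-point property (a sketch, assuming a
"max = 2 x average" style rule like the ones discussed upthread):

def next_max(cur_max, multiplier=2.0):
    avg = (cur_max + 0) / 2        # half mine max-size blocks, half empty
    return multiplier * avg

print(next_max(1_000_000))         # -> 1000000.0: the limit doesn't move
# With a multiplier of 20, an average of only cur_max/20 sustains the
# limit, so a small minority mining big blocks can ratchet it upward.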


As for whether there should be fee pressure now or not: I have no opinion,
besides "we should make block propagation faster so there is no technical
reason for miners to produce tiny blocks." I don't think us developers should
be deciding things like whether or not fees are too high, too low, etc.

--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Steven Pine
My understanding, which is very likely wrong in one way or another, is
that transaction size and block size are two slightly different things,
but perhaps the difference is so negligible that block size is a fine
stand-in for total transaction throughput.

Potentially doubling the block size every day is frankly imprudent. The
exponential increases in difficulty, which were often closer to 10% or 20%
every 2016 blocks, were and are plenty fast; potentially changing the block
size by 2x daily is the mentality I would expect from a startup with the
"move fast, break things" motto.

Infrastructure takes time. Not everyone wants to run a node on a virtual
Amazon instance; provisioning additional hard drive space and bandwidth
can't happen overnight, and trying to plan when the block size from one
week to the next is a total mystery would be extremely difficult.

Anyone who has spent time examining the mining difficulty increases and
their trajectory knows future planning is very, very hard; allowing block
size to double daily would make it impossible.

Perhaps a middle way would be a 300% increase every 2016 blocks; that
would scale to 20 MB within a month or two.

The problem is exponential increases seem slow until they seem fast. If the
network begins to grow and the block size hits 20 MB, then the next day
40 MB, then 80 MB... small nodes could get swamped within a week or less.

As for your point about Christmas: Bitcoin is a global network; Christmas,
while widely celebrated, isn't the only holiday, and planning around
American buying habits seems short-sighted and no different from developers
trying to choose what the right fee pressure is.

On May 28, 2015 1:22 PM, Gavin Andresen gavinandre...@gmail.com wrote:

 On Thu, May 28, 2015 at 12:30 PM, Steven Pine steven.p...@gmail.com
wrote:

 I would support a dynamic block size increase as outlined. I have a few
questions though.

 Is scaling by average block size the best and easiest method, why not
scale by transactions confirmed instead? Anyone can write and relay a
transaction, and those are what we want to scale for, why not measure it
directly?


 What do you mean? Transactions aren't confirmed until they're in a
block...


 I would prefer changes every 2016 blocks, it is a well known change and
a reasonable time period for planning on changes. Two weeks is plenty fast,
especially at a 50% rate increase, in a few months the block size could be
dramatically larger.


 What type of planning do you imagine is necessary?

 And have you looked at transaction volumes for credit-card payment
networks around Christmas?


 Daily changes to size seem confusing, especially considering that max
block size will be dipping up and down. Also, if something breaks, trying to
fix it in a day seems problematic. The hard-fork database size difference
error comes to mind. Finally, daily 50% increases could quickly crowd out
smaller nodes if changes happen too quickly to adapt to.

 The bottleneck is transaction volume; blocks won't get bigger unless
there are fee-paying transactions around to fill them. What scenario are you
imagining where transaction volume increases by 50% a day for a sustained
period of time?

 --
 Gavin Andresen


Re: [Bitcoin-development] Version bits proposal

2015-05-28 Thread Christian Decker
Agreed, there is no need to misuse the version field as well. There is more
than enough variability you can get by rolling the merkle tree, including
and excluding transactions, and by varying the scriptSig of the coinbase
transaction, which also influences the merkle root.
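
For illustration, a self-contained sketch (hashlib only; the fake txids
and "scriptSig" bytes are placeholders, not real transaction
serialization) showing that varying a coinbase extranonce changes the
merkle root without touching the version field:

import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:               # Bitcoin duplicates the last
            layer.append(layer[-1])      # hash on odd-length layers
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

other_txids = [dsha256(bytes([i])) for i in range(3)]
for extranonce in range(3):
    coinbase_txid = dsha256(b"coinbase-scriptSig-" + bytes([extranonce]))
    print(extranonce, merkle_root([coinbase_txid] + other_txids).hex()[:16])
# A different root each time, giving ample search space.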

I have a fundamental dislike of retroactively changing semantics, and the
version field should be used just for that: a version. I don't even
particularly like flagging support for a fork in the version field, but
since I have no better solution, count me as supporting Sipa's proposal. We
definitely need a more comfortable way of rolling out new features.

Regards,
Chris

On Thu, May 28, 2015 at 3:08 AM Patrick Strateman 
patrick.strate...@gmail.com wrote:

 There is absolutely no reason to do this.

 Any reasonable micro-controller can build merkle tree roots
 significantly faster than is necessary.

 1 Th/s walks the nonce range once every 4.3ms.

 The largest valid merkle trees are 14 nodes high.

 That translates to 28 SHA256 ops per 4.3ms or 6511 SHA256 ops/second.

 For reference an RPi 1 model B does 2451050 SHA256 ops/second.
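
(Redoing that arithmetic as a sketch, with the figures from the message
above; the RPi benchmark number is quoted from it as-is:)

nonce_range = 2**32                 # nonces per merkle root
hashrate = 1e12                     # 1 Th/s
t = nonce_range / hashrate          # ~4.3 ms to walk the nonce range
tree_height = 14                    # merkle branch recomputed per walk
sha_ops = 2 * tree_height           # double-SHA256 per level -> 28 ops
print("%.1f ms, %.0f SHA256 ops/s" % (t * 1e3, sha_ops / t))
# -> 4.3 ms, ~6500 SHA256 ops/s needed, versus ~2451050 ops/s on an
#    RPi 1 Model B: several hundred times more capacity than required.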

 On 05/27/2015 03:52 PM, Sergio Lerner wrote:
  I like the idea but I think we should leave at least 16 bits of the
  version fixed as an extra-nonce.
  If we don't then miners may use them as a nonce anyway, and mess with
  the soft-fork voting system.
  My original proposal was this:
 https://github.com/bitcoin/bitcoin/pull/5102
 
  Best regards
 
 
 


Re: [Bitcoin-development] Long-term mining incentives

2015-05-28 Thread Mike Hearn

 The prior (and seemingly this) assurance contract proposals pay the
 miners who mine a chain supportive of your interests and miners who
 mine against your interests identically.


The same is true today: via inflation I pay for blocks regardless of
whether they contain or double-spend my transactions. So I don't see
why it'd be different in the future.


 There is already a mechanism built into Bitcoin for paying for
 security which doesn't have this problem, and which mitigates the
 common action problem of people just sitting around waiting for other
 people to pay for security: transaction fees.


The article states quite clearly that assurance contracts are proposed only
if letting people set transaction fees themselves doesn't work. There are
some reasonably good arguments that it probably won't work, but I don't
assign very high weight to game-theoretic arguments these days, so it
wouldn't surprise me if Satoshi's original plan worked out OK too.

Of course, by the time this matters I plan to be sipping a pina colada on
my private retirement beach :) It's a problem the next generation can
tackle, as far as I am concerned.


 Considering the near-failure in just keeping development funded, I'm not
 sure where the belief that this model will be workable comes from


Patience :)

Right now it's a lot easier to get development money from VC funds and rich
benefactors than raising it directly from the community, so unsurprisingly
that's what most people do.

Despite that, the Hourglass design document project now has sufficient
pre-pledges that it should be possible to crowdfund it successfully once I
get around to actually doing the work. And BitSquare was able to raise
nearly half of their target despite an incredibly aggressive deadline and
the fact that they hadn't shipped a usable prototype. I think as people get
better at crafting their contracts and people get more experience with
funding work this way, we'll see it get more common.

But yes. Paying for things via assurance contracts is a long term and very
experimental plan, for sure.


 one time cost. I note that many existing crowdfunding platforms
 (including your own) do not do ongoing costs with this kind of binary
 contract.


Lighthouse wasn't written to do hashing assurance contracts, so no, it
doesn't have such a feature. Perhaps in version 2.


Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-28 Thread Tier Nolan
Can you update it so that it only applies to transactions with version
number 3 and higher? Changing the meaning of a field is exactly what the
version numbers are for.

You could even decode version 3 transactions like that.

Version 3 transactions have a sequence number of 0xFFFFFFFF, and the
sequence number field is re-purposed for relative lock time.

This means that legacy transactions that have already been signed but have
a locktime in the future will still be able to enter the blockchain
(without having to wait significantly longer than expected).
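
A minimal sketch of that decoding (hypothetical helper, not actual
Bitcoin Core code; "version 3" follows this message, though it is
corrected to version 2 downthread):

SEQUENCE_FINAL = 0xFFFFFFFF
NEW_VERSION = 3   # really "the next transaction version number"

def decode_input(tx_version, wire_field):
    """Return (effective_sequence, relative_locktime) for one input."""
    if tx_version >= NEW_VERSION:
        # Field re-purposed: sequence reads as final, and the wire
        # value becomes the relative lock time.
        return SEQUENCE_FINAL, wire_field
    return wire_field, None        # legacy semantics untouched

print(decode_input(1, 0xFFFFFFFE))  # (4294967294, None)
print(decode_input(3, 144))         # (4294967295, 144)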

On Thu, May 28, 2015 at 10:56 AM, Mark Friedenbach m...@friedenbach.org
wrote:

 I have no problem with modifying the proposal to have the most significant
 bit signal use of the nSequence field as a relative lock-time. That leaves
 a full 31 bits for experimentation when relative lock-time is not in use. I
 have adjusted the code appropriately:

 https://github.com/maaku/bitcoin/tree/sequencenumbers

 On Wed, May 27, 2015 at 10:39 AM, Mike Hearn m...@plan99.net wrote:

 Mike, this proposal was purposefully constructed to maintain as well as
 possible the semantics of Satoshi's original construction. Higher sequence
 numbers -- chronologically later transactions -- are able to hit the chain
 earlier, and therefore it can be reasonably argued will be selected by
 miners before the later transactions mature. Did I fail in some way to
 capture that original intent?


 Right, but the original protocol allowed for e.g. millions of revisions
 of the transaction, hence for high frequency trading (that's actually how
 Satoshi originally explained it to me - as a way to do HFT - back then the
 channel concept didn't exist).

 As you point out, with a careful construction of channels you should only
 need to bump the sequence number when the channel reverses direction. If
 your app only needs to do that rarely, it's a fine approach. And your
 proposal does sound better than sequence numbers being useless like at the
 moment. I'm just wondering if we can get back to the original somehow, or at
 least leave a path open to it, as it seems to be a superset of all other
 proposals, features-wise.






Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-28 Thread Peter Todd
On Thu, May 28, 2015 at 11:30:18AM +0100, Tier Nolan wrote:
 Can you update it so that it only applies to transactions with version
 number 3 and higher? Changing the meaning of a field is exactly what the
 version numbers are for.
 
 You could even decode version 3 transactions like that.
 
 Version 3 transactions have a sequence number of 0xFFFFFFFF, and the
 sequence number field is re-purposed for relative lock time.
 
 This means that legacy transactions that have already been signed but have
 a locktime in the future will still be able to enter the blockchain
 (without having to wait significantly longer than expected).

For that matter, we probably don't want to treat this as a *version*
change, but rather a *feature* flag. For instance, nSequence is
potentially useful for co-ordinating multiple signatures to ensure they
can only be used in certain combinations, a use-case not necessarily
compatible with this idea of a relative lock. Similarly, it's potentially
useful for dealing with malleability.

nSequence is currently the *only* thing in a CTxIn that the signature
signs which can be freely changed; I won't be surprised if we find other
uses for it.

Of course, all of the above is assuming this proposal is useful; that's
not clear to me yet and won't be without fleshed out examples.
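
To make the contrast concrete, a sketch of the two gating styles (my
illustration; the flag constant matches Mark's adjusted proposal of using
the most significant bit, but the helper names are hypothetical):

RELATIVE_LOCK_FLAG = 1 << 31   # MSB signals relative lock-time use

def relative_lock_by_version(tx_version):
    return tx_version >= 2     # version-gated: every input switches at once

def relative_lock_by_flag(n_sequence):
    return bool(n_sequence & RELATIVE_LOCK_FLAG)  # per-input opt-in

# With the flag clear, the remaining 31 bits of nSequence stay free for
# other uses (signature coordination, malleability handling, ...).
print(relative_lock_by_flag(0x80000090))  # True
print(relative_lock_by_flag(0x00000090))  # False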

-- 
'peter'[:-1]@petertodd.org
08464a6a19387029fa99edace15996d06a6343a8345d6167




Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-05-28 Thread Tier Nolan
On Thu, May 28, 2015 at 3:59 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 Why 3? Do we have a version 2?

I meant whatever the next version is, so you are right, it's version 2.

 As for doing it in serialization, that would alter the txid making it a
 hard fork change.

The change is backwards compatible (since there are no restrictions on
sequence numbers). This makes it a soft fork.

That doesn't change the fact that you are changing what a field in the
transaction represents.

You could say that the sequence number is no longer encoded in the
serialization; it is assumed to be 0xFFFFFFFF for all version 2+
transactions, and the relative locktime is a whole new field that is the
same size (and position).

I think keeping some of the bytes for other uses is a good idea.  The
entire top 2 bytes could be ignored when working out relative locktime
verify.  That leaves them fully free to be set to anything.

It could be that if the MSB of the bottom 2 bytes is set, then that
activates the rule and the top 2 bytes are ignored.

Are there any use-cases which need an RLTV of more than 8191 blocks of
delay (that can't be covered by the absolute version)?
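
One concrete reading of that layout (a sketch; the exact masks are my
assumptions, chosen to match the 8191-block figure above):

RLTV_FLAG = 1 << 15    # MSB of the bottom two bytes activates the rule
RLTV_MASK = 0x1FFF     # 13 low bits -> delays up to 8191 blocks

def parse_sequence(n_sequence):
    low16 = n_sequence & 0xFFFF          # top 2 bytes ignored entirely
    if low16 & RLTV_FLAG:
        return ("relative-locktime", low16 & RLTV_MASK)
    return ("free-for-other-uses", low16)

print(parse_sequence(0xDEAD8090))  # ('relative-locktime', 144)
print(parse_sequence(0x00000090))  # ('free-for-other-uses', 144)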