I've been following the discussion of the block size limit and IMO it
is clear that any constant block size limit is, as many have said
before, just kicking the can down the road.
My problem with the dynamic lower limit solution based on past blocks
is that it doesn't account for usage spikes.
On Fri, May 29, 2015 at 12:26 PM, Mike Hearn m...@plan99.net wrote:
IMO it's not even clear there needs to be a size limit at all. Currently
the 32mb message cap imposes one anyway
If the plan is a fix once and for all, then that should be changed too. It
could be set so that it is at least
By the time a hard fork can happen, I expect average block size will be
above 500K.
Yes, possibly.
Would you support a rule that was larger of 1MB or 2x average size ?
That is strictly better than the situation we're in today.
It is, but only by a trivial amount - hitting the limit is
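The rule under discussion, as a minimal Python sketch (the 2016-block
averaging window is an assumption, not something specified in the thread):

MB = 1_000_000

def max_block_size(recent_sizes):
    # The limit is the larger of a 1 MB floor or twice the recent average.
    average = sum(recent_sizes) / len(recent_sizes)
    return max(1 * MB, 2 * average)

# With 500 kB average blocks the two terms meet at the 1 MB floor:
print(max_block_size([500_000] * 2016))  # 1000000.0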
If the plan is a fix once and for all, then that should be changed too.
It could be set so that it is at least some multiple of the max block size
allowed.
Well, but RAM is not infinite :-) Effectively what these caps are doing is
setting the minimum hardware requirements for running a
What do other people think?
If we can't come to an agreement soon, then I'll ask for help
reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
big increase now that grows over time so we may never have to go through
all this rancor and debate again.
I'll then ask for help
Are you really that pigheaded that you are going to try and blow up the
entire system just to get your way? A bunch of ignorant redditors do not
make consensus, mercifully.
On 2015-05-29 12:39, Gavin Andresen wrote:
What do other people think?
If we can't come to an agreement soon, then
How is this being pigheaded? In my opinion, this is leadership. If
*something* isn't implemented soon, the network is going to have some real
problems, right at the
time when adoption is starting to accelerate. I've been seeing nothing but
navel-gazing and circlejerks on this issue for weeks
On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
wrote:
But if there is still no consensus among developers but the bigger blocks
now movement is successful, I'll ask for help getting big miners to do the
same, and use the soft-fork block version voting mechanism to
What about trying the dynamic scaling method within the 20MB range + 1 year
with a 40% increase of that cap? Until a way to dynamically scale is
found, the cap will only continue to be an issue. With 20 MB + 40% YoY,
we're either imposing an arbitrary cap later, or achieving less than great
DoS
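For concreteness, the growth arithmetic in that proposal (a 20 MB starting
cap compounding at 40% per year; the exact compounding schedule is an
assumption):

def cap_mb(years_elapsed, base_mb=20, yearly_growth=0.40):
    # Compound the 40% yearly increase on the 20 MB starting cap.
    return base_mb * (1 + yearly_growth) ** years_elapsed

for year in range(6):
    print(year, round(cap_mb(year), 1))
# 0 20.0, 1 28.0, 2 39.2, 3 54.9, 4 76.8, 5 107.6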
miners would definitely be squeezing out transactions / putting pressure
to increase transaction fees
I'd just like to re-iterate that transactions getting squeezed out
(failure after a lengthy period of uncertainty) is a radical change from
the current behavior of the network. There are plenty
On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen gavinandre...@gmail.com
wrote:
What do other people think?
If we can't come to an agreement soon, then I'll ask for help
reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
big increase now that grows over time so we may
On Fri, May 29, 2015 at 10:09 AM, Tier Nolan tier.no...@gmail.com wrote:
How do you intend to measure exchange/merchant acceptance?
Public statements saying we're running software that is ready for bigger
blocks.
And looking at the version (aka user-agent) strings of publicly reachable
nodes
The measure is miner consensus. How do you intend to measure
exchange/merchant acceptance?
Asking them.
In fact, we already have. I have been talking to well known people and CEOs
in the Bitcoin community for some time now. *All* of them support bigger
blocks, this includes:
- Every
On Fri, May 29, 2015 at 3:09 PM, Tier Nolan tier.no...@gmail.com wrote:
On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
wrote:
But if there is still no consensus among developers but the bigger
blocks now movement is successful, I'll ask for help getting big miners
And looking at the version (aka user-agent) strings of publicly reachable
nodes on the network.
(e.g. see the count at https://getaddr.bitnodes.io/nodes/ )
Yeah, though FYI Luke informed me last week that I somehow managed to take
out the change to the user-agent string in Bitcoin XT,
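For anyone who wants to do this measurement themselves, a small sketch:
bitnodes.io crawls the whole network, but a node operator can at least
tally the user-agent (subver) strings of their own peers via bitcoin-cli:

import json, subprocess
from collections import Counter

# "subver" in getpeerinfo output is each peer's advertised user-agent.
peers = json.loads(subprocess.check_output(["bitcoin-cli", "getpeerinfo"]))
counts = Counter(p.get("subver", "?") for p in peers)
for agent, n in counts.most_common():
    print(f"{n:4d}  {agent}")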
On Thu, May 28, 2015 at 1:34 PM, Mike Hearn m...@plan99.net wrote:
As noted, many miners just accept the defaults. With your proposed change
their target would effectively *drop* from 1mb to 800kb today, which
seems crazy. That's the exact opposite of what is needed right now.
I am very
until we have size-independent new block propagation
I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.
There are several significant
On Thu, May 28, 2015 at 01:19:44PM -0400, Gavin Andresen wrote:
As for whether there should be fee pressure now or not: I have no
opinion, besides we should make block propagation faster so there is no
technical reason for miners to produce tiny blocks. I don't think us
developers should be
Twenty is scary.
To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.
The max block size is really a limit set by hardware capability, which
Can we hold off on bike-shedding the particular choice of parameters until
people have a chance to weigh in on whether or not there is SOME set of
dynamic parameters they would support right now?
--
Gavin Andresen
My understanding, which is very likely wrong in one way or another, is
that transaction size and block size are two slightly different things, but
perhaps the difference is so negligible that block size is a fine stand-in
for total transaction throughput.
Potentially doubling the block size every day is frankly
On Mon, May 18, 2015 at 2:42 AM, Rusty Russell ru...@rustcorp.com.au
wrote:
OK. Be nice if these were cleaned up, but I guess it's a sunk cost.
Yeah.
On the plus side, as people spend their money, old UTXOs would be used up
and then they would be included in the cost function. It is only
Tier Nolan tier.no...@gmail.com writes:
On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
wrote:
3) ... or maybe not, if any consumed UTXO was generated before the soft
fork (reducing Tier's perverse incentive).
The incentive problem can be fixed by excluding UTXOs from
On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
wrote:
Some tweaks:
1) Nomenclature: call tx_size tx_cost and real_size tx_bytes?
Fair enough.
2) If we have a reasonable hard *byte* limit, I don't think that we need
the MAX(). In fact, it's probably OK to go
Tier Nolan tier.no...@gmail.com writes:
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
An example would
be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
3*utxo_consumed_size).
This could be implemented as a soft fork too.
* 1MB hard size limit
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
An example would
be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
3*utxo_consumed_size).
This could be implemented as a soft fork too.
* 1MB hard size limit
* 900kB soft limit
S = block size
U =
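A sketch of the quoted cost function in Python (treating the *_size terms
as byte counts, which is my reading, and using the example weights as
given):

def tx_cost(real_size, utxo_created_size, utxo_consumed_size):
    # real_size >> 1 floors the cost at half the raw bytes, so consuming
    # UTXOs can discount a transaction but never below that floor.
    return max(real_size >> 1,
               real_size + 4 * utxo_created_size - 3 * utxo_consumed_size)

# A 250-byte tx that creates 68 bytes of new UTXO data and consumes
# 102 bytes of old UTXO data:
print(tx_cost(250, 68, 102))  # max(125, 250 + 272 - 306) = 216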
On Sun, May 10, 2015 at 9:21 PM, Gavin Andresen gavinandre...@gmail.com wrote:
I think any algorithm that ties difficulty to block size is just a
complicated way of dictating minimum fees.
Thats not the long term effect or the motivation-- what you're seeing
is that the subsidy gets in
On 11/05/2015 00:31, Mark Friedenbach wrote:
I'm on my phone today so I'm somewhat constrained in my reply, but the key
takeaway is that the proposal is a mechanism for miners to trade subsidy
for the increased fees of a larger block. Necessarily it only makes sense
to do so when the
On 08/05/2015 22:33, Mark Friedenbach wrote:
* For each block, the miner is allowed to select a different difficulty
(nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
and this miner-selected difficulty is used for the proof of work check. In
addition to adjusting
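A sketch of just the range check described here (how the chosen difficulty
would couple to block size and to the next difficulty adjustment is in the
elided part of the proposal, so it isn't shown):

def pow_ok(block_hash, chosen_difficulty, expected_difficulty, max_target):
    # The miner's self-selected difficulty must sit in the +/-25% band.
    if not (0.75 * expected_difficulty
            <= chosen_difficulty
            <= 1.25 * expected_difficulty):
        return False
    # Higher chosen difficulty means a lower (harder) hash target.
    return block_hash <= max_target / chosen_difficulty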
Let me make sure I understand this proposal:
On Fri, May 8, 2015 at 11:36 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
(*) I believe my currently favored formulation of general dynamic control
idea is that each miner expresses in their coinbase a preferred size
between some minimum (e.g.
I'm on my phone today so I'm somewhat constrained in my reply, but the key
takeaway is that the proposal is a mechanism for miners to trade subsidy
for the increased fees of a larger block. Necessarily it only makes sense
to do so when the marginal fee per KB exceeds the subsidy fee per KB.
How much will that cost me?
The network is hashing at 310 petahash/sec right now.
Takes 600 seconds to find a block, so 186,000 PH per block
186,000 * 0.00038 = 70 extra PH
If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC
(reward plus fees), that 70 PH costs:
(25.13 / 186,000) * 70 = about 0.0095 BTC.
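The same arithmetic, spelled out (all inputs are the numbers quoted in the
post):

hashrate_ph = 310.0                           # network hashrate, PH/s
work_per_block = hashrate_ph * 600            # 186,000 PH per block
extra_fraction = 0.00038                      # the difficulty bump at issue
extra_work = work_per_block * extra_fraction  # ~70.7 PH of extra work
block_value = 25.13                           # BTC, reward plus fees
cost = block_value * extra_fraction           # ~0.0095 BTC per block
print(round(extra_work, 1), round(cost, 5))   # 70.7 0.00955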
On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
Another related point which has been tendered before but seems to have
been ignored is that changing how the size limit is computed can help
better align incentives and thus reduce risk. E.g. a major cost to the
network is the UTXO impact of
Micropayment channels are not pie in the sky proposals. They work today on
Bitcoin as it is deployed without any changes. People just need to start
using them.
On May 10, 2015 11:03, Owen Gunden ogun...@phauna.org wrote:
On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
Another related point
RE: fixing sigop counting, and building in UTXO cost: great idea! One of
the problems with this debate is it is easy for great ideas to get lost in all
the noise.
RE: a hard upper limit, with a dynamic limit under it:
I like that idea. Can we drill down on the hard upper limit?
There are lots of
On Sat, May 9, 2015 at 12:58 PM, Gavin Andresen gavinandre...@gmail.com
wrote:
RE: fixing sigop counting, and building in UTXO cost: great idea! One of
the problems with this debate is it is easy for great ideas to get lost in all
the noise.
If the UTXO set cost is built in, UTXO database
On Sat, May 09, 2015 at 01:36:56AM +0300, Joel Joonatan Kaartinen wrote:
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those for any miner as long as they get paid a little for it. Especially
when it's
There are certainly arguments to be made for and against all of these
proposals.
The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped.
Matt: I think proposals #1 and #3 are a lot better than #2, and #1 is my
favorite.
I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious votes and those same
people's behavior.
Relying on an
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.
Adaptive schedules, i.e. those where the block size limit depends not only
on block height but on other parameters as well, are surely attractive in
the sense that the system can adapt to actual use, but they also open the
possibility of manipulation.
E.g. one of the mining companies might try to
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock b...@mattwhitlock.name wrote:
- Perhaps the hard block size limit should be a function of the actual block
sizes over some
trailing sampling period. For example, take the median block size among the
most recent
2016 blocks and multiply it by
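A sketch of that trailing-median rule (the quoted text breaks off before
naming a multiplier, so it is left as a parameter):

from statistics import median

def next_hard_limit(recent_sizes, multiplier):
    # recent_sizes: byte sizes of the most recent 2016 blocks.
    return multiplier * median(recent_sizes)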
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential both for scaling and for keeping up a
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions
I like the bitcoin days destroyed idea.
I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.
It is easy to make a proposal, it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential both for scaling and for keeping up a constant
fee
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those for any miner as long as they get paid a little for it. Especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?
My argument is that the less risky, more conservative default fallback
behavior should be either
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential both for scaling and for keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and
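A sketch of the quantity being proposed as the input. Bitcoin days
destroyed for a transaction is the sum over its spent coins of value times
age; how that number would be tuned into an actual byte limit is not
specified in the post, so the scaling constant below is a placeholder:

SECONDS_PER_DAY = 86_400

def days_destroyed(spent_coins, now):
    # spent_coins: iterable of (value_in_btc, created_timestamp) pairs.
    return sum(value * (now - created) / SECONDS_PER_DAY
               for value, created in spent_coins)

def block_size_allowance(total_bdd, bytes_per_bdd):
    # bytes_per_bdd is the unspecified tuning constant.
    return total_bdd * bytes_per_bdd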
On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach m...@friedenbach.org wrote:
These rules create an incentive environment where raising the block size has
a real cost associated with it: a more difficult hashcash target for the
same subsidy reward. For rational miners that cost must be
It seems to me all this would do is encourage 0-transaction blocks, crippling
the network. Individual blocks don't have a maximum block size; they have an
actual block size. Rational miners would pick blocks to minimize difficulty,
lowering the effective maximum block size as defined by the
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
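A sketch of that fee ladder; sign_tx_with_fee is a stand-in for whatever
the wallet uses to build and sign a transaction at a given fee:

def fee_ladder(fee_min, fee_max, steps=5):
    # Evenly spaced fee levels across the wallet's expected range.
    step = (fee_max - fee_min) / (steps - 1)
    return [fee_min + i * step for i in range(steps)]

def prepare_replacements(sign_tx_with_fee, fee_min, fee_max, steps=5):
    # All copies spend the same inputs, so each later (higher-fee) copy
    # can replace the ones broadcast before it under replace-by-fee.
    return [sign_tx_with_fee(fee) for fee in fee_ladder(fee_min, fee_max, steps)]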