On Thu, 7 May 2015, Gregory Maxwell wrote:
Date: Thu, 7 May 2015 00:37:54 +
From: Gregory Maxwell gmaxw...@gmail.com
To: Matt Corallo bitcoin-l...@bluematt.me
Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development]
Why can't we have a dynamic block size limit that changes with difficulty, such
that the block size cannot exceed 2x the mean size of the prior difficulty
period?
I recently subscribed to this list so my apologies if this has been addressed
already.
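The rule being asked about can be sketched roughly in a few lines (the 2x multiplier and the per-period mean are from the question above; the function name and everything else here are illustrative, not an actual implementation):

```python
# Rough sketch: the hard limit for the next difficulty period is 2x the
# mean block size of the period that just ended.
def next_size_limit(prior_period_sizes):
    """prior_period_sizes: byte sizes of the ~2016 blocks just retargeted."""
    mean_size = sum(prior_period_sizes) / len(prior_period_sizes)
    return 2 * mean_size
```

Under this rule the cap can at most double per period, so growth is bounded by what miners actually produced in the previous period.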
Between all the flames on this list, several ideas were raised that did not get
much attention. I hereby resubmit these ideas for consideration and discussion.
- Perhaps the hard block size limit should be a function of the actual block
sizes over some trailing sampling period. For example,
* Though there are many proposals floating around which could
significantly decrease block propagation latency, none of them are
implemented today.
With a 20mb cap, miners still have the option of the soft limit.
I would actually be quite surprised if there were no point along the road
That reminds me - I need to integrate the patch that automatically sweeps
anyone-can-pay transactions for a miner.
On Thu, May 7, 2015 at 7:32 PM, Tier Nolan tier.no...@gmail.com wrote:
One of the suggestions to avoid the problem of fees going to zero is
assurance contracts. This lets users
Interesting.
1. How do you know who was first? If one node can figure out where
more transactions happen, it can gain an advantage by being closer to
their source. Mining would not be fair.
2. A merchant wants to cause block number 1 million to effectively
have a minting fee of 50BTC. - why should he do
Alan argues that 7 tps is a couple orders of magnitude too low
By the way, just to clear this up - the real limit at the moment is more
like 3 tps, not 7.
The 7 transactions/second figure comes from calculations I did years ago,
in 2011. I did them a few months before the sendmany command was
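The arithmetic behind both figures is straightforward (the 1 MB cap and ~600-second block interval are the well-known constants; the average transaction sizes used below are rough assumptions, not figures from the thread):

```python
# Throughput implied by a 1 MB block roughly every 600 seconds.
MAX_BLOCK_BYTES = 1_000_000
BLOCK_INTERVAL_S = 600

def tps(avg_tx_bytes):
    return MAX_BLOCK_BYTES / avg_tx_bytes / BLOCK_INTERVAL_S

# ~7 tps assumes very small transactions (~240 bytes); with the larger
# transactions typical in practice (~550 bytes) it is closer to 3 tps.
```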
There are certainly arguments to be made for and against all of these
proposals.
The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I
Looks like a neat solution, Tier.
Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
favorite.
I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious votes and those same
people's behavior.
Relying on an
Just to clarify the process.
Pledgers create transactions using the following template and broadcast
them. The p2p protocol could be modified to allow this, or it could be a
separate system.
*Input: 0.01 BTC*
*Signed with SIGHASH_ANYONE_CAN_PAY*
*Output 50BTC*
*Paid to: 1 million
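The mechanics of combining such pledges can be modelled roughly (plain Python, not real Bitcoin code; the 50 BTC target is from the template above, the rest is illustrative). Because each pledge input is signed with SIGHASH_ANYONE_CAN_PAY, it commits only to the fixed output, so anyone can merge pledges from many parties until the inputs cover the output:

```python
# Illustrative model of assembling ANYONE_CAN_PAY pledges into one
# transaction: accumulate pledge inputs until they fund the fixed output.
def try_assemble(pledges_btc, target_btc=50.0):
    total, used = 0.0, []
    for p in pledges_btc:
        total += p
        used.append(p)
        if total >= target_btc:
            return used  # these inputs suffice; the contract is funded
    return None  # not enough pledged yet
```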
Hello,
At DevCore London, Gavin mentioned the idea that we could get rid of
sending full blocks. Instead, newly minted blocks would only be
distributed as block headers plus all hashes of the transactions
included in the block. The assumption would be that nodes have already
the majority of
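The scheme Gavin mentioned can be sketched as follows (function and variable names are illustrative, not Bitcoin Core APIs): the receiver rebuilds the block from its mempool and only fetches the transactions it has not yet seen.

```python
# Sketch of header-plus-txids relay: reconstruct the block locally,
# requesting only the transactions missing from the mempool.
def reconstruct_block(header, txids, mempool):
    missing = [t for t in txids if t not in mempool]
    if missing:
        return None, missing  # caller must fetch these from the peer
    return (header, [mempool[t] for t in txids]), []
```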
This isn't about everyone's coffee. This is about an absolute minimum
amount of participation by people who wish to use the network. If our
goal is really for bitcoin to really be a global, open transaction
network that makes money fluid, then 7tps is already a failure. If even
5% of the
So, there are several ideas about how to reduce the size of blocks being
sent on the network:
* Matt Corallo's relay network, which internally works by remembering the
last 5000 (i believe?) transactions sent by the peer, and allowing the peer
to backreference those rather than retransmit them
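That backreference mechanism can be modelled roughly (illustrative Python; the real relay network protocol differs in detail, and the 5000 figure is only what the thread recalls):

```python
from collections import OrderedDict

# Sketch: remember the last N transactions exchanged with a peer so a block
# can reference a known transaction by slot instead of retransmitting it.
class RelayCache:
    def __init__(self, capacity=5000):
        self.capacity = capacity
        self.slots = OrderedDict()  # txid -> slot number
        self.next_slot = 0

    def remember(self, txid):
        self.slots[txid] = self.next_slot
        self.next_slot += 1
        if len(self.slots) > self.capacity:
            self.slots.popitem(last=False)  # evict the oldest txid

    def encode(self, txid, raw_tx):
        # backreference if remembered, otherwise send the full transaction
        if txid in self.slots:
            return ("ref", self.slots[txid])
        return ("tx", raw_tx)
```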
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.
On 05/08/2015 01:13 AM, Tom Harding wrote:
On 5/7/2015 7:09 PM, Jeff Garzik wrote:
G proposed 20MB blocks, AFAIK - 140 tps
A proposed 100MB blocks - 700 tps
For ref,
Paypal is around 115 tps
VISA is around 2000 tps (perhaps 4000 tps peak)
For reference, I'm not proposing 100 MB blocks
Adaptive schedules, i.e. those where the block size limit depends not only on
block height but on other parameters as well, are surely attractive in the
sense that the system can adapt to actual use, but they also open up the
possibility of manipulation.
E.g. one of mining companies might try to
On Fri, May 8, 2015 at 2:59 PM, Alan Reiner etothe...@gmail.com wrote:
This isn't about everyone's coffee. This is about an absolute minimum
amount of participation by people who wish to use the network. If our
goal is really for bitcoin to really be a global, open transaction network
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock b...@mattwhitlock.name wrote:
- Perhaps the hard block size limit should be a function of the actual block
sizes over some
trailing sampling period. For example, take the median block size among the
most recent
2016 blocks and multiply it by
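Matt's rule can be sketched in a line or two (the multiplier is cut off in the quoted text, so the 2 below is purely an assumed placeholder):

```python
from statistics import median

# Sketch: cap = multiplier x (median size of the most recent 2016 blocks).
# The multiplier value is an assumption; the proposal leaves it open here.
def median_based_limit(recent_block_sizes, multiplier=2):
    return multiplier * median(recent_block_sizes)
```

Using the median rather than the mean makes the cap robust to a handful of deliberately outsized blocks in the sampling window.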
On Fri, May 08, 2015 at 12:03:04PM +0200, Mike Hearn wrote:
* Though there are many proposals floating around which could
significantly decrease block propagation latency, none of them are
implemented today.
With a 20mb cap, miners still have the option of the soft limit.
The
On Fri, May 08, 2015 at 06:00:37AM -0400, Jeff Garzik wrote:
That reminds me - I need to integrate the patch that automatically sweeps
anyone-can-pay transactions for a miner.
You mean anyone-can-spend?
I've got code that does this actually:
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a
On Fri, May 8, 2015 at 5:37 PM, Peter Todd p...@petertodd.org wrote:
The soft-limit is there so miners themselves produce smaller blocks; the
soft-limit does not prevent other miners from producing larger blocks.
I wonder if having a miner flag would be good for the network.
Clients for general
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions
Actually I believe that side chains and off-main-chain transactions will be
a critical part for the overall scalability of the network. I was actually
trying to make the point that (insert some huge block size here) will be
needed to even accommodate the reduced traffic.
I believe that it is
Replace by fee is what I was referencing. End-users interpret the old
transaction as expired. Hence the nomenclature. An alternative is a new
feature that operates in the reverse of time lock, expiring a transaction after
a specific time. But time is a bit unreliable in the blockchain
Hello,
I was reading some of the thread but can't say I read the entire thing.
I think that it is realistic to consider a block size of 20MB for any block
txn to occur. This is an enormous amount of data (relatively, for a network)
in which the average rate of 10tps over 10 minutes would allow
Transactions don't expire. But if the wallet is online, it can periodically
choose to release an already created transaction with a higher fee. This
requires replace-by-fee to be sufficiently deployed, however.
On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:
I have a
Replace by fee is the better approach. It will ultimately replace zombie transactions (due to insufficient fee) with potentially much higher fees as the feature takes hold in wallets throughout the network, and fee competition increases. However, this does not fix the problem of low tps. In fact,
I like the bitcoin days destroyed idea.
I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.
It is easy to make a proposal, it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee
Fail, Damian. Not even a half-good attempt.
-Raystonn
On 8 May 2015 3:15 pm, Damian Gomez dgomez1...@gmail.com wrote:
On Fri, May 8, 2015 at 3:12 PM, Damian Gomez dgomez1092@gmail.com wrote:
let me continue my conversation: as the development of these transactions would be indicated as a ByteArray
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo
Well zombie txns aside, I expect this to be resolved w/ a client side
implementation using a Merkle-Winternitz OTS in order to prevent the loss
of fee structure through the implementation of this security hash that
will allow for a one-way transaction to continue, according to the TESLA
On Fri, May 8, 2015 at 3:12 PM, Damian Gomez dgomez1...@gmail.com wrote:
let me continue my conversation:
as the development of these transactions would be indicated
as a ByteArray of
On Fri, May 8, 2015 at 3:11 PM, Damian Gomez dgomez1...@gmail.com wrote:
Well zombie txns aside, I expect this to be resolved w/ a client side
implementation using a
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those for any miner as long as the get paid a little for it. Especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?
My argument is that the less risky, more conservative default fallback
behavior should be either
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and
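A rough model of Joel's bitcoin-days-destroyed idea (the base size and the bytes-per-BDD rate below are invented tuning constants for illustration, not part of the proposal):

```python
# Bitcoin days destroyed: coin value times the age of the spent outputs.
def bitcoin_days_destroyed(inputs):
    # inputs: list of (value_btc, age_days) pairs for the outputs a tx spends
    return sum(value * age for value, age in inputs)

# Sketch: a block's size allowance grows with the BDD of its transactions,
# so old, long-held coins buy more block space than freshly spammed ones.
def allowed_block_size(txs, base=1_000_000, bytes_per_bdd=10):
    bdd = sum(bitcoin_days_destroyed(tx) for tx in txs)
    return base + int(bdd * bytes_per_bdd)
```

The intended effect, as described in the thread, is that spam (young coins rapidly re-spent) destroys few bitcoin days and so cannot inflate the limit.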
On Fri, May 08, 2015 at 08:47:52PM +0100, Tier Nolan wrote:
On Fri, May 8, 2015 at 5:37 PM, Peter Todd p...@petertodd.org wrote:
The soft-limit is there so miners themselves produce smaller blocks; the
soft-limit does not prevent other miners from producing larger blocks.
I wonder if
On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach m...@friedenbach.org wrote:
These rules create an incentive environment where raising the block size has
a real cost associated with it: a more difficult hashcash target for the
same subsidy reward. For rational miners that cost must be
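One way to read the incentive Mark describes, as a sketch (the inverse-proportional cost function is an assumption made here for concreteness; the quoted text says only that the hashcash target gets harder for the same subsidy):

```python
# Sketch: claiming a larger block shrinks the target (i.e. raises the
# effective difficulty) in proportion to the excess over a base size.
def effective_target(base_target, block_size, base_size=1_000_000):
    if block_size <= base_size:
        return base_target
    return base_target * base_size // block_size
```

A rational miner then includes extra transactions only when their fees outweigh the expected subsidy lost to the harder target.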
On Wednesday 6. May 2015 21.49.52 Peter Todd wrote:
I'm not sure if you've seen this, but a good paper on this topic was
published recently: The Economics of Bitcoin Transaction Fees
The obvious flaw in this paper is that it talks about a block size in today's
(trivial) data-flow economy and
...of the following:
the DH_GENERATION would in effect calculate the responses for a total
overage of the public component, by adding a ternary option in the actual
DH key (which I have attached to see if you can understand my logic)
For Java Practice this will be translated:
public
It seems to me all this would do is encourage 0-transaction blocks, crippling
the network. Individual blocks don't have a maximum block size, they have an
actual block size. Rational miners would pick blocks to minimize difficulty,
lowering the effective maximum block size as defined by the
On Sat, May 9, 2015 at 12:00 AM, Damian Gomez dgomez1...@gmail.com wrote:
...of the following:
the DH_GENERATION would in effect calculate the responses for a total
overage of the public component, by adding a ternary option in the actual
DH key (which I have attached to see if you can
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
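The fee ladder described here can be sketched as follows (an illustrative helper, not any wallet's actual API): sign one copy of the transaction per rung, broadcast the cheapest, and release higher-fee replacements over time.

```python
# Sketch: evenly spaced fees across the range the wallet presents,
# lowest first; each later entry replaces the earlier one via RBF.
def fee_ladder(low, high, steps):
    step = (high - low) / (steps - 1)
    return [round(low + i * step, 8) for i in range(steps)]
```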
I have a proposal for wallets such as yours. How about creating all transactions with an expiration time starting with a low fee, then replacing with new transactions that have a higher fee as time passes. Users can pick the fee curve they desire based on the transaction priority they want to
The problems with that are larger than time being unreliable. It is no
longer reorg-safe as transactions can expire in the course of a reorg and
any transaction built on the now expired transaction is invalidated.
On Fri, May 8, 2015 at 1:51 PM, Raystonn rayst...@hotmail.com wrote:
Replace by
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
Hello. I've seen Greg make a couple of posts online
(https://bitcointalk.org/index.php?topic=1033396.msg11155302#msg11155302
is one such example) where he has mentioned that Pieter has a new
proposal for allowing multiple softforks to be deployed at