Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-13 Thread odinn



On 06/02/2015 04:03 AM, Mike Hearn wrote:
(...)
 
 If you really believe that decentralisation is, itself, the end,
 then why not go use an ASIC resistant alt coin with no SPV or web
 wallets which resembles Bitcoin at the end of 2009? That'd be a
 whole lot more decentralised than what you have now.
 
 The *percentage* of the community that mines is totally
 irrelevant, it's the absolute number of (independent) people that
 matters.
 
 
 So usage does matter, then? You'd rather have a coin that has
 power concentrated in a far smaller elite, proportionally, but has
 overall more usage? If there are say, 5000 full nodes today, and in
 ten years there are 6000, and they all run in vast datacenters and
 are owned by large companies, you'll feel like Bitcoin is more
 decentralised than ever?   (n.b. I do not think this situation will
 ever happen, it's just an example).
 

Something you said about power being concentrated made me think I should
post this here:

https://twitter.com/adam3us/status/608920099609817088

-- 
http://abis.io ~
a protocol concept to enable decentralization
and expansion of a giving economy, and a new social good
https://keybase.io/odinn



Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-02 Thread Eric Voskuil
On 06/01/2015 08:55 AM, Mike Hearn wrote:
 Decentralization is the core of Bitcoin's security model and thus
that's what gives Bitcoin its value.
 No. Usage is what gives Bitcoin value.

Nonsense.

Visa, Dollar, Euro, Yuan, Peso have usage.

The value in Bitcoin exists *despite* its far lesser usage.

Yes, the price is a function of demand, but demand is a function of
utility. Despite orders of magnitude less usage than state currencies,
Bitcoin has utility. This premium *only* exists due to its lack of
centralized control. I would not work full time, or at all, on Bitcoin
if it were not for decentralization; nor would I hold any of it. I doubt
anyone would show an interest in Bitcoin if it were not decentralized. If
it became centralized, even you would be forced to find something else to
do, because Bitcoin usage would drop to zero.

 It's kind of maddening that I have to point this out. Decentralisation
is a means to an end.

No, it was/is the primary objective. Paypal had already been done. If
anything is maddening it's that you of all people can't see this. When
people talk about the core innovation of Bitcoin, it's a conversation
about Byzantine Generals, not wicked growth hacking.

 in April 2009 and it was perfectly decentralised [...] every wallet
was a full node and every computer was capable of mining. So if you
believe what you just wrote [...] Bitcoin's value has gone down every
day since

An obvious non sequitur. By way of example, if 10 of 10 participants are
capable of mining it is not more decentralized than if 1,000 in 100,000
are doing so. 1,000 *people* in control vs. 10 is two orders of
magnitude more decentralized. The *percentage* of the community that
mines is totally irrelevant, it's the absolute number of (independent)
people that matters.

I'm not making a statement on block size, just trying to help ensure
that ill-considered ideas, like this inversion of the core value
proposition, stay on the margins.

e








Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-02 Thread Mike Hearn

  1,000 *people* in control vs. 10 is two orders of

magnitude more decentralized.


Yet Bitcoin has got worse by all these metrics: there was a time before
mining pools when there were ~thousands of people mining with their local
CPUs and GPUs. Now the full nodes that matter for block selection number
in the dozens, and all the other miners just follow their blocks blindly.

If you really believe that decentralisation is, itself, the end, then why
not go use an ASIC resistant alt coin with no SPV or web wallets which
resembles Bitcoin at the end of 2009? That'd be a whole lot more
decentralised than what you have now.

The *percentage* of the community that mines is totally irrelevant, it's
 the absolute number of (independent) people that matters.


So usage does matter, then? You'd rather have a coin that has power
concentrated in a far smaller elite, proportionally, but has overall more
usage? If there are say, 5000 full nodes today, and in ten years there are
6000, and they all run in vast datacenters and are owned by large
companies, you'll feel like Bitcoin is more decentralised than ever?
(n.b. I do not think this situation will ever happen, it's just an example).

That's not the vibe I'm getting from most people on this list. What I'm
seeing is complaints about how in the good old days back when Core was the
only wallet and ASICs hadn't been made,  there were lots of nodes and lots
of people mining solo and since then it's all been downhill and woe is us
... and let's throw on the brakes in case it gets worse.

Not for the first time, these discussions remind me very strongly of the
old desktop Linux/free software debates.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-02 Thread Eric Voskuil
On 06/02/2015 04:03 AM, Mike Hearn wrote:

  1,000 *people* in control vs. 10 is two orders of

 magnitude more decentralized. 


 Yet Bitcoin has got worse by all these metrics: there was a time
 before mining pools when there were ~thousands of people mining with
 their local CPUs and GPUs. Now the number of full nodes that matter
 for block selection number in the dozens, and all the other miners
 just follow their blocks blindly.

A mining pool is not a person, a full node is not a miner, and
cooperation is not control.

http://bravenewcoin.com/news/number-of-bitcoin-miners-far-higher-than-popular-estimates/

The entire Bitcoin ecosystem cooperates, that is what consensus means.
Establishing proof of that cooperation is the purpose of Bitcoin.

Decentralization is about keeping control out of the hands of the state
(any entity that would substitute violence for consensus). Nobody has
the power to compel the cooperation of individual miners in a pool. When
state power is applied to a pool operator the miners (people) retain
their vote.

e





Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-06-01 Thread Ricardo Filipe
I've been following the discussion of the block size limit and IMO it
is clear that any constant block size limit is, as many have said
before, just kicking the can down the road.
My problem with the dynamic lower limit solution based on past blocks
is that it doesn't account for usage spikes. I would like to propose
another dynamic lower limit scheme:
Let the block size limit be a function of the number of current
transactions in the mempool. This way, bitcoin usage regulates the
block size limit.
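
As a rough sketch of the shape of such a rule (the constants and the assumed
average transaction size below are made up purely for illustration, not part
of the proposal):

    # Rough sketch only: a block size limit driven by current mempool depth.
    # MIN_LIMIT, AVG_TX_BYTES and SCALE are illustrative placeholders.
    MIN_LIMIT = 1000000     # never drop below the current 1 MB
    AVG_TX_BYTES = 500      # assumed average transaction size
    SCALE = 2               # allow up to 2x the pending backlog per block

    def max_block_size(mempool_tx_count):
        # The limit grows with the number of transactions waiting to confirm,
        # so usage spikes automatically make room for themselves.
        pending_bytes = mempool_tx_count * AVG_TX_BYTES
        return max(MIN_LIMIT, SCALE * pending_bytes)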

I'm sorry I don't have the knowledge of the code base or the time to run
simulations of this kind of approach, but nevertheless I would like to
leave it here for discussion or to foster other ideas.

cheers



Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-01 Thread Mike Hearn

 It's surprising to see a core dev going to the public to defend a proposal
 most other core devs disagree on, and then lobbying the Bitcoin ecosystem.


I agree that it is a waste of time. Many agree. The Bitcoin ecosystem
doesn't really need lobbying - my experience from talking to businesses and
wallet developers months ago is that they virtually all see raising capacity
as a no-brainer ... and some of them see this debate as despair-inducing
insanity.

What's happened here is that a small number of people have come to believe
they have veto power over changes to Bitcoin, and they have also become
*wildly* out of step with what the wider community wants. That cannot last.
So, short of some sudden change of heart that lets us kick the can down the
road a bit longer, a fork is inevitable.

Just be glad it's Gavin driving this and not me ... or a faceless coalition
of startups.


 Decentralization is the core of Bitcoin's security model and thus that's
 what gives Bitcoin its value.


No. Usage is what gives Bitcoin value.

It's kind of maddening that I have to point this out. Decentralisation is a
means to an end. I first used Bitcoin in April 2009 and it was perfectly
decentralised back then: every wallet was a full node and every computer
was capable of mining.

So if you believe what you just wrote, I guess Bitcoin's value has gone
down every day since.

On the other hand, if you believe the markets, Bitcoin's value has gone up.

Apparently the question of what gives Bitcoin its value is a bit more
complicated than that.




 : to incentivize layer 2 and off-chain solutions to scale Bitcoin: there are
 promising designs/solutions out there (LN, ChainDB, the OtherCoin protocol,
 ...), but most don't get much attention, because there is right now no need
 for them. And, I am sure new solutions will be invented.


I have seen this notion a few times. I would like to dispose of it right
now.

I am one of the wallet developers you would be trying to incentivise by
letting Bitcoin break, and I say: get real. Developers are not some
bottomless fountain of work that will spit out whatever you like for free
if you twist their arms badly enough.

The problems that incentivised the creation of Bitcoin existed for decades
before Bitcoin was actually invented. Incentives are not enough. Someone
has to actually do the work, too. All proposals on the table would:

   - Involve enormous amounts of effort from many different people
   - Be technically risky (read: we don't know if they would even work)
   - Not be Bitcoin

The last point is important: people who got interested in Bitcoin and
decided to devote their time to it might not feel the same way about some
network of payment hubs or whatever today's fashion is. Faced with their
work being broken by armchair developers on some mailing list, they might
just say screw it and walk away completely.

After all, as the arguments for these systems are not particularly logical,
they might slave over hot keyboards for a year to support the Lightning
Network or whatever and then discover that it's no longer the fashionable
thing ... and that suddenly an even more convoluted design is being
incentivised.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step

2015-06-01 Thread Gavin Andresen
RE: going to the public:

I started pushing privately for SOMETHING, ANYTHING to be done, or at the
very least for there to be some coherent plan besides "wait and see," back
in February.

As for it being unhealthy for me to write the code that I think should be
written and asking people to run it:

Ok. What would you suggest I do? I believe scaling up is the number one
priority right now. I think core devs SHOULD be taking time to solve it,
because I think the uncertainty of how it will be solved (or if it will be
solved) is bad for Bitcoin.

I think working on things like fixing transaction malleability is great...
but the reason to work on that is to enable smart contracts and all sorts
of other interesting new uses of the blockchain. But if we're stuck with
1MB blocks then there won't be room for all of those interesting new uses
on the blockchain.

Others disagree, and have the advantage of the status quo: if nothing is done,
they get what they want.

Based on some comments I've seen, I think there is also concern that "my
own personal network/computer connection might not be able to handle more
transaction volume." That is NOT a good reason to limit scalability, but I
think it is clouding the judgement of many of the core contributors who
started contributing as a spare-time hobby from their homes (where maybe
they have crappy DSL connections).


RE: decentralization:

I think this is a red-herring. I'll quote something I said on reddit
yesterday:

I don't believe a 20MB max size will increase centralization to any
significant degree.

See
http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

and http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners

And I think we will have a lot LESS centralization of payments via services
like Coinbase (or hubs in some future StrawPay/Lightning network) if the
bitcoin network can directly handle more payment volume.

The centralization trade-offs seem very clear to me, and I think the "big
blocks mean more centralization" arguments are either just wrong, or are
exaggerated, or ignore the tradeoff with payment centralization (which I
think is a lot more important for privacy and censorship resistance).


RE: incentives for off-chain solutions:

I'll quote myself again from
http://gavinandresen.ninja/it-must-be-done-but-is-not-a-panacea :

The “layer 2” services that are being built on top of the blockchain are
absolutely necessary to get nearly instant real-time payments,
micropayments and high volume machine-to-machine payments, to pick just
three examples. The ten-minute settlement time of blocks on the network is
not fast enough for those problems, and it will be the ten minute block
interval that drives development of those off-chain innovations more than
the total number of transactions supported.

On Mon, Jun 1, 2015 at 8:45 AM, Jérôme Legoupil jjlegou...@gmail.com
wrote:

 If during the 1MB bumpy period something goes wrong, consensus among the
 community would be reached easily if necessary.


That is the problem: this will be a "frog in boiling water" problem. I
believe there will be no sudden crisis-- instead, transactions will just
get increasingly unreliable and expensive, driving more and more people
away from Bitcoin towards... I don't know what. Some less expensive, more
reliable, probably more-centralized solution.

The Gavin 20MB proposal is compromising Bitcoin's long-term security in an
 irreversible way, in exchange for a better short-term user experience.


If by "long-term security" you mean "will transaction fees be high enough to
pay for enough hashing power to secure the network if there are bigger
blocks," I've written about that:
http://gavinandresen.ninja/block-size-and-miner-fees-again


If you mean something else, then please be specific.

-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 12:26 PM, Mike Hearn m...@plan99.net wrote:

 IMO it's not even clear there needs to be a size limit at all. Currently
 the 32mb message cap imposes one anyway


If the plan is a fix once and for all, then that should be changed too.  It
could be set so that it is at least some multiple of the max block size
allowed.

Alternatively, the merkle block message already incorporates the required
functionality.

Send
- headers message (with 1 header)
- merkleblock messages (max 1MB per message)

The transactions for each merkleblock could be sent directly before each
merkleblock, as is currently the case.

That system can send a block of any size.  It would require a change to the
processing of any merkleblocks received.
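
Roughly, in Python-style pseudocode (the message tuples are stand-ins for the
real P2P serialization; only the chunking idea matters here):

    MAX_MSG_BYTES = 1000000   # the existing per-message cap

    def plan_large_block_relay(header, txs):
        # txs: list of (txid, raw_bytes) in block order. Returns the ordered
        # messages to send: one headers message, then tx + merkleblock chunks.
        messages = [("headers", [header])]

        def flush(chunk):
            for _txid, raw in chunk:
                messages.append(("tx", raw))              # transactions first
            # then a merkleblock proving just these txids against the header
            messages.append(("merkleblock", header, [txid for txid, _ in chunk]))

        chunk, size = [], 0
        for txid, raw in txs:
            if chunk and size + len(raw) > MAX_MSG_BYTES:
                flush(chunk)
                chunk, size = [], 0
            chunk.append((txid, raw))
            size += len(raw)
        if chunk:
            flush(chunk)
        return messages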


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Mike Hearn

 By the time a hard fork can happen, I expect average block size will be
 above 500K.


Yes, possibly.


 Would you support a rule that was "larger of 1MB or 2x average size"?
 That is strictly better than the situation we're in today.


It is, but only by a trivial amount - hitting the limit is still very
likely. I don't want to see this issue come up over and over again. Ideally
never. We shouldn't be artificially throttling organic growth of the
network, especially not by accident.

IMO it's not even clear there needs to be a size limit at all. Currently
the 32mb message cap imposes one anyway, but miners can always just
discourage blocks over some particular size if they want to.

But I can get behind a 20mb limit (or 20mb+N) as it represents a reasonable
compromise: the limit still exists, it's far below VISA capacity etc, but
it should also free up enough space that everyone can get back to what we
*should* be focusing on, which is user growth!


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Mike Hearn

 If the plan is a fix once and for all, then that should be changed too.
 It could be set so that it is at least some multiple of the max block size
 allowed.


Well, but RAM is not infinite :-) Effectively what these caps are doing is
setting the minimum hardware requirements for running a Bitcoin node.

That's OK by me - I don't think we are actually going to exhaust the
hardware abilities of any reasonable computer any time soon, but still,
having the software recognise the finite nature of a computing machine
doesn't seem unwise.


 That system can send a block of any size.  It would require a change to
 the processing of any merkleblocks received.


Not any size because, again, the remote node must buffer things up and
have the transaction data actually in memory in order to digest it. But a
much larger size, yes.

However, that's a bigger change.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Gavin Andresen
What do other people think?


If we can't come to an agreement soon, then I'll ask for help
reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
big increase now that grows over time so we may never have to go through
all this rancor and debate again.

I'll then ask for help lobbying the merchant services and exchanges and
hosted wallet companies and other bitcoind-using-infrastructure companies
(and anybody who agrees with me that we need bigger blocks sooner rather
than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they
are running it. We'll be able to see uptake on the network by monitoring
client versions.

Perhaps by the time that happens there will be consensus that bigger blocks
are needed sooner rather than later; if so, great! The early deployment will
just serve as early testing, and all of the software already deployed will be
ready for bigger blocks.

But if there is still no consensus among developers but the "bigger blocks
now" movement is successful, I'll ask for help getting big miners to do the
same, and use the soft-fork block version voting mechanism to (hopefully)
get a majority and then a super-majority willing to produce bigger blocks.
The purpose of that process is to prove to any doubters that they'd better
start supporting bigger blocks or they'll be left behind, and to give them
a chance to upgrade before that happens.
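
For illustration only, the sort of tally that voting mechanism implies (the
750/950-of-1000 thresholds are the ones earlier block-version soft forks
used, and the version value is a placeholder, not a real assignment):

    WINDOW = 1000
    MAJORITY = 750          # point at which the movement is clearly winning
    SUPERMAJORITY = 950     # point at which holdouts risk being left behind
    BIG_BLOCK_VERSION = 4   # placeholder version signalling bigger blocks

    def voting_status(last_block_versions):
        votes = sum(1 for v in last_block_versions[-WINDOW:]
                    if v >= BIG_BLOCK_VERSION)
        if votes >= SUPERMAJORITY:
            return "supermajority"
        if votes >= MAJORITY:
            return "majority"
        return "waiting"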


Because if we can't come to consensus here, the ultimate authority for
determining consensus is what code the majority of merchants and exchanges
and miners are running.


-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread insecurity
Are you really that pig-headed that you are going to try and blow up the
entire system just to get your way? A bunch of ignorant redditors do not
make consensus, mercifully.


On 2015-05-29 12:39, Gavin Andresen wrote:
 What do other people think?
 
 If we can't come to an agreement soon, then I'll ask for help
 reviewing/submitting patches to Mike's Bitcoin-Xt project that
 implement a big increase now that grows over time so we may never have
 to go through all this rancor and debate again.
 
 I'll then ask for help lobbying the merchant services and exchanges
 and hosted wallet companies and other bitcoind-using-infrastructure
 companies (and anybody who agrees with me that we need bigger blocks
 sooner rather than later) to run Bitcoin-Xt instead of Bitcoin Core,
 and state that they are running it. We'll be able to see uptake on the
 network by monitoring client versions.
 
 Perhaps by the time that happens there will be consensus bigger blocks
 are needed sooner rather than later; if so, great! The early
 deployment will just serve as early testing, and all of the software
 already deployed will be ready for bigger blocks.
 
 But if there is still no consensus among developers but the bigger
 blocks now movement is successful, I'll ask for help getting big
 miners to do the same, and use the soft-fork block version voting
 mechanism to (hopefully) get a majority and then a super-majority
 willing to produce bigger blocks. The purpose of that process is to
 prove to any doubters that they'd better start supporting bigger
 blocks or they'll be left behind, and to give them a chance to upgrade
 before that happens.
 
 Because if we can't come to consensus here, the ultimate authority for
 determining consensus is what code the majority of merchants and
 exchanges and miners are running.
 
 --
 
 --
 Gavin Andresen
 



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Braun Brelin
How is this being pig-headed? In my opinion, this is leadership. If
*something* isn't implemented soon, the network is going to have some real
problems, right at the time when adoption is starting to accelerate. I've
been seeing nothing but navel-gazing and circlejerks on this issue for weeks
now. Gavin or Mike or someone at some point needs to step up and say
"follow me."

Braun Brelin


On Fri, May 29, 2015 at 5:00 PM, insecurity@national.shitposting.agency
wrote:

 Are you really that pig headed that you are going to try and blow up the
 entire system just to get your way? A bunch of ignorant redditors do not
 make consensus, mercifully.


 On 2015-05-29 12:39, Gavin Andresen wrote:
  What do other people think?
 
  If we can't come to an agreement soon, then I'll ask for help
  reviewing/submitting patches to Mike's Bitcoin-Xt project that
  implement a big increase now that grows over time so we may never have
  to go through all this rancor and debate again.
 
  I'll then ask for help lobbying the merchant services and exchanges
  and hosted wallet companies and other bitcoind-using-infrastructure
  companies (and anybody who agrees with me that we need bigger blocks
  sooner rather than later) to run Bitcoin-Xt instead of Bitcoin Core,
  and state that they are running it. We'll be able to see uptake on the
  network by monitoring client versions.
 
  Perhaps by the time that happens there will be consensus bigger blocks
  are needed sooner rather than later; if so, great! The early
  deployment will just serve as early testing, and all of the software
  already deployed will be ready for bigger blocks.
 
  But if there is still no consensus among developers but the bigger
  blocks now movement is successful, I'll ask for help getting big
  miners to do the same, and use the soft-fork block version voting
  mechanism to (hopefully) get a majority and then a super-majority
  willing to produce bigger blocks. The purpose of that process is to
  prove to any doubters that they'd better start supporting bigger
  blocks or they'll be left behind, and to give them a chance to upgrade
  before that happens.
 
  Because if we can't come to consensus here, the ultimate authority for
  determining consensus is what code the majority of merchants and
  exchanges and miners are running.
 
  --
 
  --
  Gavin Andresen
 
 



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
wrote:

 But if there is still no consensus among developers but the bigger blocks
 now movement is successful, I'll ask for help getting big miners to do the
 same, and use the soft-fork block version voting mechanism to (hopefully)
 get a majority and then a super-majority willing to produce bigger blocks.
 The purpose of that process is to prove to any doubters that they'd better
 start supporting bigger blocks or they'll be left behind, and to give them
 a chance to upgrade before that happens.


How do you define that the movement is successful?

For


 Because if we can't come to consensus here, the ultimate authority for
 determining consensus is what code the majority of merchants and exchanges
 and miners are running.


The measure is miner consensus.  How do you intend to measure
exchange/merchant acceptance?


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Admin Istrator
What about trying the dynamic scaling method within the bounds of the 20MB
cap growing 40% per year?  Until a way to dynamically scale is found, the
cap will only continue to be an issue.  With 20 MB + 40% year-over-year,
we're either imposing an arbitrary cap later, or always achieving
less-than-great DoS protection.  Why not set that policy as a maximum for 2
years as a protection against the possibility of dynamic scaling abuse, and
see what happens with a dynamic method in the meantime?  The policy of
Max(1MB, (average size over previous 144 blocks) * 2), calculated at each
block, seems pretty reasonable.
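
That policy is simple enough to state in a few lines (sketch only; sizes in
bytes, previous_block_sizes assumed non-empty):

    HARD_FLOOR = 1000000   # 1 MB
    WINDOW = 144           # roughly one day of blocks

    def max_block_size(previous_block_sizes):
        # Max(1MB, 2 * average size of the previous 144 blocks),
        # recomputed at every block.
        recent = previous_block_sizes[-WINDOW:]
        return max(HARD_FLOOR, 2 * sum(recent) / len(recent))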

As an outsider, the real 'median' here seems to be 'keeping the cap as
small as possible while allowing for larger blocks still'. We know
miners will want to keep space in their blocks relatively scarce, but we
also know that doesn't exclude the more powerful miners from
including superfluous transactions to increase their effective share of the
network.  I have the luck of not having been drained by this topic over the
past three years, so it looks to me as if its two poles of 'block size must
increase' and 'block size must not increase' are forcing the clear route to
establishing the 'right' block size off the table.

--Andrew Len
(sorry if anybody received this twice, it was sent from the wrong email the
first time around).

On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen gavinandre...@gmail.com
wrote:

 What do other people think?


 If we can't come to an agreement soon, then I'll ask for help
 reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
 big increase now that grows over time so we may never have to go through
 all this rancor and debate again.

 I'll then ask for help lobbying the merchant services and exchanges and
 hosted wallet companies and other bitcoind-using-infrastructure companies
 (and anybody who agrees with me that we need bigger blocks sooner rather
 than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they
 are running it. We'll be able to see uptake on the network by monitoring
 client versions.

 Perhaps by the time that happens there will be consensus bigger blocks are
 needed sooner rather than later; if so, great! The early deployment will
  just serve as early testing, and all of the software already deployed will
  be ready for bigger blocks.

 But if there is still no consensus among developers but the bigger blocks
 now movement is successful, I'll ask for help getting big miners to do the
 same, and use the soft-fork block version voting mechanism to (hopefully)
 get a majority and then a super-majority willing to produce bigger blocks.
 The purpose of that process is to prove to any doubters that they'd better
 start supporting bigger blocks or they'll be left behind, and to give them
 a chance to upgrade before that happens.


 Because if we can't come to consensus here, the ultimate authority for
 determining consensus is what code the majority of merchants and exchanges
 and miners are running.


 --
 --
 Gavin Andresen






Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Aaron Voisine
 miners would definitely be squeezing out transactions / putting pressure
to increase transaction fees

I'd just like to re-iterate that transactions getting squeezed out
(failure after a lengthy period of uncertainty) is a radical change from
the current behavior of the network. There are plenty of avenues to create
fee pressure without resorting to such a drastic change in how the network
works today.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Thu, May 28, 2015 at 8:53 AM, Gavin Andresen gavinandre...@gmail.com
wrote:

 On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock b...@mattwhitlock.name
 wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.


 A lot of people like this idea, or something like it. It is nice and
 simple, which is really important for consensus-critical code.

 With this rule in place, I believe there would be more fee pressure
 (miners would be creating smaller blocks) today. I created a couple of
 histograms of block sizes to infer what policy miners are ACTUALLY
 following today with respect to block size:

 Last 1,000 blocks:
   http://bitcoincore.org/~gavin/sizes_last1000.html

 Notice a big spike at 750K -- the default size for Bitcoin Core.
 This graph might be misleading, because transaction volume or fees might
 not be high enough over the last few days to fill blocks to whatever limit
 miners are willing to mine.

 So I graphed a time when (according to statoshi.info) there WERE a lot of
 transactions waiting to be confirmed:
http://bitcoincore.org/~gavin/sizes_357511.html

 That might also be misleading, because it is possible there were a lot of
 transactions waiting to be confirmed because miners who choose to create
 small blocks got lucky and found more blocks than normal.  In fact, it
 looks like that is what happened: more smaller-than-normal blocks were
 found, and the memory pool backed up.

 So: what if we had a dynamic maximum size limit based on recent history?

 The average block size is about 400K, so a 1.5x rule would make the max
 block size 600K; miners would definitely be squeezing out transactions /
 putting pressure to increase transaction fees. Even a 2x rule (implying
 800K max blocks) would, today, be squeezing out transactions / putting
 pressure to increase fees.

 Using a median size instead of an average means the size can increase or
 decrease more quickly. For example, imagine the rule is median of last
 2016 blocks and 49% of miners are producing 0-size blocks and 51% are
 producing max-size blocks. The median is max-size, so the 51% have total
 control over making blocks bigger.  Swap the roles, and the median is
 min-size.

 Because of that, I think using an average is better-- it means the max
 size will change (up or down) more slowly.

 I also think 2016 blocks is too long, because transaction volumes change
 quicker than that. An average over 144 blocks (last 24 hours) would be
 better able to handle increased transaction volume around major holidays,
 and would also be able to react more quickly if an economically irrational
 attacker attempted to flood the network with fee-paying transactions.

 So my straw-man proposal would be:  max size 2x average size over last 144
 blocks, calculated at every block.

 There are a couple of other changes I'd pair with that consensus change:

 + Make the default mining policy for Bitcoin Core neutral-- have its
 target block size be the average size, so miners that don't care will go
 along with the people who do care.

 + Use something like Greg's formula for size instead of bytes-on-the-wire,
 to discourage bloating the UTXO set.


 -

 When I've proposed (privately, to the other core committers) some dynamic
 algorithm, the objection has been "but that gives miners complete control
 over the max block size."

 I think that worry is unjustified right now-- certainly, until we have
 size-independent new block propagation there is an incentive for miners to
 keep their blocks small, and we see miners creating small blocks even when
 there are fee-paying transactions waiting to be confirmed.

 I don't even think it will be a problem if/when we do have
 size-independent new block propagation, because I think the combination of
 the random timing of block-finding plus a dynamic limit as described above
 will create a healthy system.

 If I'm wrong, then it seems to me the miners will have a very strong
 incentive to, collectively, impose whatever rules are necessary (maybe a
 soft-fork to put a hard cap on block size) to 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Bryan Cheng
On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen gavinandre...@gmail.com
wrote:

 What do other people think?


 If we can't come to an agreement soon, then I'll ask for help
 reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a
 big increase now that grows over time so we may never have to go through
 all this rancor and debate again.

 I'll then ask for help lobbying the merchant services and exchanges and
 hosted wallet companies and other bitcoind-using-infrastructure companies
 (and anybody who agrees with me that we need bigger blocks sooner rather
 than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they
 are running it. We'll be able to see uptake on the network by monitoring
 client versions.



While I think we'd all prefer Core to make changes like this, the current
environment may make that impossible. If this change happens in XT, we will
support the necessary changes in our own implementation. The block size
limit is a problem _today_, and I'd rather we solve today's problems with
today's understanding rather than let speculation about future unknowns
stop our ability to respond to known issues.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Gavin Andresen
On Fri, May 29, 2015 at 10:09 AM, Tier Nolan tier.no...@gmail.com wrote:

  How do you intend to measure exchange/merchant acceptance?


Public statements saying "we're running software that is ready for bigger
blocks."

And looking at the version (aka user-agent) strings of publicly reachable
nodes on the network.
(e.g. see the count at  https://getaddr.bitnodes.io/nodes/ )

-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Mike Hearn

 The measure is miner consensus.  How do you intend to measure
 exchange/merchant acceptance?


Asking them.

In fact, we already have. I have been talking to well known people and CEOs
in the Bitcoin community for some time now. *All* of them support bigger
blocks, this includes:

   - Every wallet developer I have asked (other than Bitcoin Core)
   - So far, every payment processor and every exchange company

I know Gavin has also been talking to people about this.

There's a feeling on this list that there's no consensus, or that Gavin and
myself are on the wrong side of it. I'd put it differently - there's very
strong consensus out in the wider community and this list is something of
an aberration.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Tier Nolan
On Fri, May 29, 2015 at 3:09 PM, Tier Nolan tier.no...@gmail.com wrote:



 On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen gavinandre...@gmail.com
 wrote:

 But if there is still no consensus among developers but the bigger
 blocks now movement is successful, I'll ask for help getting big miners to
 do the same, and use the soft-fork block version voting mechanism to
 (hopefully) get a majority and then a super-majority willing to produce
 bigger blocks. The purpose of that process is to prove to any doubters that
 they'd better start supporting bigger blocks or they'll be left behind, and
 to give them a chance to upgrade before that happens.


 How do you define that the movement is successful?


Sorry again, I keep auto-sending from gmail when trying to delete.

In theory, using the nuclear option, the block size can be increased via
soft fork.

Version 4 blocks would contain the hash of a valid extended block in
the coinbase:

<block height> <32-byte extended hash>

To send coins to the auxiliary block, you send them to some template.

OP_P2SH_EXTENDED <scriptPubKey hash> OP_TRUE

This transaction can be spent by anyone (under the current rules).  The
soft fork would lock the transaction output unless it transferred money
from the extended block.

To unlock the transaction output, you need to include the txid of
transaction(s) in the extended block and signature(s) in the scriptSig.

The transaction output can be spent in the extended block using P2SH
against the scriptPubKey hash.

This means that people can choose to move their money to the extended
block.  It might have lower security than leaving it in the root chain.

The extended chain could use the updated script language too.

This is obviously more complex than just increasing the size, but it
could be a fallback option if no consensus is reached.  It has the
advantage of giving people a choice.  They can move their money to the
extended chain or not, as they wish.
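
A rough sketch of the two pieces described above (OP_P2SH_EXTENDED is a
hypothetical opcode and its byte value below is a placeholder, not a real
assignment):

    OP_TRUE = bytes([0x51])
    OP_P2SH_EXTENDED = bytes([0xba])   # placeholder encoding for the new opcode

    def extended_funding_script(script_hash):
        # Old rules: anyone-can-spend, because the script ends in OP_TRUE.
        # New (soft-forked) rules: only spendable by proving a matching
        # transfer out of the extended block against this P2SH hash.
        return OP_P2SH_EXTENDED + script_hash + OP_TRUE

    def coinbase_commitment(block_height, extended_block_hash):
        # Version 4 blocks carry "<block height> <32-byte extended hash>"
        # in the coinbase, as described above.
        assert len(extended_block_hash) == 32
        return block_height.to_bytes(4, "little") + extended_block_hash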


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-29 Thread Mike Hearn

 And looking at the version (aka user-agent) strings of publicly reachable
 nodes on the network.
 (e.g. see the count at  https://getaddr.bitnodes.io/nodes/ )


Yeah, though FYI: Luke informed me last week that I somehow managed to take
out the change to the user-agent string in Bitcoin XT; presumably I made a
mistake during a rebase of the rebranding change. So the actual number of
XT nodes is a bit higher than counting user-agent strings would suggest.

I have sort of neglected XT lately. If we go ahead with this then I'll fix
things like that.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Gavin Andresen
On Thu, May 28, 2015 at 1:34 PM, Mike Hearn m...@plan99.net wrote:

 As noted, many miners just accept the defaults. With your proposed change
 their target would effectively *drop* from 1mb to 800kb today, which
 seems crazy. That's the exact opposite of what is needed right now.


 I am very skeptical about this idea.


By the time a hard fork can happen, I expect average block size will be
above 500K.

Would you support a rule that was "larger of 1MB or 2x average size"? That
is strictly better than the situation we're in today.

-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Pieter Wuille
 until we have size-independent new block propagation

I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.

There are several significant improvements likely possible to various
aspects of block propagation, but I don't believe you can make any part
completely size-independent. Perhaps the remaining aspects contribute terms
to the total time that vanish compared to the link latencies for 1 MB
blocks, but there will be some block sizes for which this is no longer the
case, and we need to know where that point is.

* You can't assume that every transaction is pre-relayed and pre-validated.
This can happen due to non-uniform relay policies (different codebases, and
future things like size-limited mempools), double-spend attempts, and
transactions generated before a block had time to propagate. You've
previously argued for a policy of not including too-recent transactions,
but that requires a bound on network diameter, and if these late
transactions are profitable, it has exactly the same problem as making
larger blocks non-proportionally more economic for larger pools/groups (if
propagation time is size-dependent).
  * This results in extra bandwidth usage for efficient relay protocols,
and if discrepancy estimation mispredicts the size of IBLT or error
correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block
relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be
cached in advance.

* Block validation with 100% known and pre-validated transactions is not
constant time, due to updates that need to be made to the UTXO set (and
future ideas like UTXO commitments would make this effect an order of
magnitude worse).

* More efficient relay protocols also have higher CPU cost for
encoding/decoding.

Again, none of this is a reason why the block size can't increase. If
availability of hardware with higher bandwidth, faster disk/ram access
times, and faster CPUs increases, we should be able to have larger blocks
with the same propagation profile as smaller blocks with earlier technology.

But we should know how technology scales with larger blocks, and I don't
believe we do, apart from microbenchmarks in laboratory conditions.

-- 
Pieter
 On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock b...@mattwhitlock.name
wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.


A lot of people like this idea, or something like it. It is nice and
simple, which is really important for consensus-critical code.

With this rule in place, I believe there would be more fee pressure
(miners would be creating smaller blocks) today. I created a couple of
histograms of block sizes to infer what policy miners are ACTUALLY
following today with respect to block size:

Last 1,000 blocks:
  http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.
This graph might be misleading, because transaction volume or fees might
not be high enough over the last few days to fill blocks to whatever limit
miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of
transactions waiting to be confirmed:
   http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of
transactions waiting to be confirmed because miners who choose to create
small blocks got lucky and found more blocks than normal.  In fact, it
looks like that is what happened: more smaller-than-normal blocks were
found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max
block size 600K; miners would definitely be squeezing out transactions /
putting pressure to increase transaction fees. Even a 2x rule (implying
800K max blocks) would, today, be squeezing out transactions / putting
pressure to increase fees.

Using a median size instead of an average means the size can increase or
decrease more quickly. For example, imagine the rule is median of last
2016 blocks and 49% of miners are producing 0-size blocks and 51% are
producing max-size blocks. The median is max-size, so the 51% have total
control over making blocks bigger.  Swap the roles, and the median is
min-size.

Because of that, I think using an average is 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Peter Todd
On Thu, May 28, 2015 at 01:19:44PM -0400, Gavin Andresen wrote:
 As for whether there should be fee pressure now or not: I have no
 opinion, besides we should make block propagation faster so there is no
 technical reason for miners to produce tiny blocks. I don't think us
 developers should be deciding things like whether or not fees are too high,
 too low, .

Note that the majority of hashing power is using Matt Corallo's block
relay network, something I confirmed the other day through my mining
contacts. Interestingly, the miners that aren't using it include some of
the largest pools; I haven't yet gotten an answer as to what their
rationale for not using it was exactly.

Importantly, this does mean that block propagation is probably fairly
close to optimal already, modulo major changes to the consensus
protocol; IBLT won't improve the situation much, if any.

It's also notable that we're already having issues with miners turning
validation off as a way to lower their latency; I've been asked myself
about the possibility of creating an SPV miner that skips validation
while new blocks are propagating to shave off time and builds directly
off of block headers corresponding to blocks with unknown contents.

-- 
'peter'[:-1]@petertodd.org
0327487b689490b73f9d336b3008f82114fd3ada336bcac0




Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Mike Hearn

 Twenty is scary.


To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.

The max block size is really a limit set by hardware capability, which is
something that's difficult to measure in software. I think I preferred your
original formula that guesstimated based on previous trends to one that
just tries to follow some average.

As noted, many miners just accept the defaults. With your proposed change
their target would effectively *drop* from 1mb to 800kb today, which seems
crazy. That's the exact opposite of what is needed right now.

I am very skeptical about this idea.


 I don't think us developers should be deciding things like whether or not
 fees are too high, too low,


Miners can already attempt to apply fee pressure by just not mining
transactions that they feel don't pay enough. Some sort of auto-cartel that
attempts to restrict supply based on everyone looking at everyone else
feels overly complex and prone to strange situations: it looks a lot like
some kind of Mexican standoff to me.

Additionally, the justification for the block size limit was DoS by someone
mining troll blocks. It was never meant to be about fee pressure.
Resource management inside Bitcoin Core is certainly something to be
handled by developers.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Gavin Andresen
Can we hold off on bike-shedding the particular choice of parameters until
people have a chance to weigh in on whether or not there is SOME set of
dynamic parameters they would support right now?


-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Steven Pine
My understanding, which is very likely wrong in one way or another, is that
transaction size and block size are two slightly different things, but
perhaps the difference is so negligible that block size is a fine stand-in
for total transaction throughput.

Potentially doubling the block size every day is frankly imprudent. The
compounding increases in difficulty, which were often closer to 10% or 20%
every 2016 blocks, were and are plenty fast; potentially changing the block
size twice daily is the mentality I would expect from a startup with the
"move fast and break things" motto.

Infrastructure takes time: not everyone wants to run a node on a virtual
Amazon instance, provisioning additional hard drive space and bandwidth
can't happen overnight, and trying to plan when the block size from one week
to the next is a total mystery would be extremely difficult.

Anyone who has spent time examining the mining difficulty increases and
their trajectory knows future planning is very, very hard; allowing the
block size to double daily would make it impossible.

Perhaps a middle way would be a 300% increase every 2016 blocks; that would
scale to 20 MB within a month or two.

The problem is that compounding increases seem slow until they seem fast. If
the network begins to grow and the block size hits 20 MB, then the next day
40, then 80... small nodes could get swamped within a week or less.

As for your point about Christmas: Bitcoin is a global network. Christmas,
while widely celebrated, isn't the only holiday, and planning around
American buying habits seems short-sighted and no different from developers
trying to choose what the right fee pressure is.

On May 28, 2015 1:22 PM, Gavin Andresen gavinandre...@gmail.com wrote:

 On Thu, May 28, 2015 at 12:30 PM, Steven Pine steven.p...@gmail.com
wrote:

 I would support a dynamic block size increase as outlined. I have a few
questions though.

 Is scaling by average block size the best and easiest method, why not
scale by transactions confirmed instead? Anyone can write and relay a
transaction, and those are what we want to scale for, why not measure it
directly?


 What do you mean? Transactions aren't confirmed until they're in a
block...


 I would prefer changes every 2016 blocks, it is a well known change and
a reasonable time period for planning on changes. Two weeks is plenty fast,
especially at a 50% rate increase, in a few months the block size could be
dramatically larger.


 What type of planning do you imagine is necessary?

 And have you looked at transaction volumes for credit-card payment
networks around Christmas?


 Daily change to size seems confusing especially considering that max
block size will be dipping up and down. Also if something breaks trying to
fix it in a day seems problematic. The hard fork database size difference
error comes to mind. Finally daily 50% increases could quickly crowd out
smaller nodes if changes happen too quickly to adapt for.

 The bottleneck is transaction volume; blocks won't get bigger unless
there are fee-paying transactions around to pay them. What scenario are you
imagining where transaction volume increases by 50% a day for a sustained
period of time?

 --
 --
 Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-19 Thread Tier Nolan
On Mon, May 18, 2015 at 2:42 AM, Rusty Russell ru...@rustcorp.com.au
wrote:

 OK.  Be nice if these were cleaned up, but I guess it's a sunk cost.


Yeah.

On the plus side, as people spend their money, old UTXOs would be used up
and then they would be included in the cost function.  It is only people
who are storing their money long term that wouldn't.

They are unlikely to have consumed their UTXOs anyway, unless miners
started paying for UTXOs.

We could make it a range.

UTXOs from blocks below 355,000 and above 375,000 are included.  That can
create incentive problems for the next similar change; I think a future
threshold is better.


  He said utxo_created_size not utxo_created so I assumed scriptlen?


Maybe I mis-read.


 But you made that number up?  The soft cap and hard byte limit are
 different beasts, so there's no need for soft cost cap < hard byte
 limit.


I was thinking about it being a soft-fork.

If it was combined with the 20MB limit change, then it can be anything.

I made a suggestion somewhere (here or on the forums, not sure) that
transactions should be allowed to store bytes.

For example, a new opcode could be added: <byte_count> OP_LOCK_BYTES.

This makes the transaction seem byte_count larger.  However, when
spending the UTXO, that transaction counts as byte_count smaller, even
against the hard-cap.

This would be useful for channels.  If channels were 100-1000X the
blockchain volume and someone caused lots of channels to close, there
mightn't be enough space for all the close channel transactions.  Some
people might be able to get their refund transactions included in the
blockchain because the timeout expires.

If transactions could store enough space to be spent, then a mass channel
close would cause some very large blocks, but then they would have to be
followed by lots of tiny blocks.

The block limit would be an average not fixed per block.  There would be 3
limits

Absolute hard limit (max bytes no matter what): 100MB
Hard limit (max bytes after stored bytes offset): 30MB
Soft limit (max bytes equivalents): 10MB

Blocks larger than ~32MB require a new network protocol, which makes the
hard fork even harder.  The protocol change could just be "messages can now be
150MB max", though, so maybe not so complex.
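
To make the accounting concrete, a minimal sketch of how such a three-limit
rule could be checked, assuming per-block totals for bytes locked via the
proposed OP_LOCK_BYTES opcode and bytes released when such outputs are spent
(all names and values are illustrative, not part of any concrete spec):

    # Sketch of the three-limit rule (illustrative only).
    ABSOLUTE_HARD_LIMIT = 100_000_000   # max bytes no matter what
    HARD_LIMIT          =  30_000_000   # max bytes after the stored-byte offset
    SOFT_LIMIT          =  10_000_000   # max byte-equivalents

    def block_ok(raw_bytes, bytes_locked, bytes_released, byte_equivalents):
        if raw_bytes > ABSOLUTE_HARD_LIMIT:
            return False
        # Spending previously "stored" bytes offsets the block against the
        # hard limit, so a mass channel close can temporarily exceed 30MB.
        if raw_bytes + bytes_locked - bytes_released > HARD_LIMIT:
            return False
        return byte_equivalents <= SOFT_LIMIT

    # A 40MB block that releases 15MB of stored bytes passes the hard limit
    # (40 + 0 - 15 = 25MB) as long as it stays under the 100MB absolute cap.
    print(block_ok(40_000_000, 0, 15_000_000, 9_000_000))   # True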



  This requires that transactions include scriptPubKey information when
  broadcasting them.

 Brilliant!  I completely missed that possibility...


I have written a BIP about it.  It is still in the draft stage.  I had a
look into writing up the code for the protocol change.

https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx.mediawiki
https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx-fork.mediawiki


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-18 Thread Rusty Russell
Tier Nolan tier.no...@gmail.com writes:
 On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
 wrote:
 3) ... or maybe not, if any consumed UTXO was generated before the soft
fork (reducing Tier's perverse incentive).

 The incentive problem can be fixed by excluding UTXOs from blocks before a
 certain count.

 UTXOs in blocks before 375000 don't count.

OK.  Be nice if these were cleaned up, but I guess it's a sunk cost.

 4) How do we measure UTXO size?  There are some constant-ish things in
there (eg. txid as key, height, outnum, amount).  Maybe just add 32
to scriptlen?


 They can be stored as a fixed digest.  That can be any size, depending on
 security requirements.

 Gmaxwell's cost proposal is 3-4 bytes per UTXO change.  It isn't
 4*UTXO.size - 3*UTXO.size

He said utxo_created_size not utxo_created so I assumed scriptlen?

 It is only a small nudge.  With only 10% of the block space to play with it
 can't be massive.

But you made that number up?  The soft cap and hard byte limit are
different beasts, so there's no need for soft cost cap  hard byte
limit.

 This requires that transactions include scriptPubKey information when
 broadcasting them.

Brilliant!  I completely missed that possibility...

 5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
correctly, unlike now).

 This last one implies that the initial cost limit would be 2M, but in
 practice probably somewhere in the middle.

   tx_cost = 50*num-CHECKSIG
 + tx_bytes
 + 4*utxo_created_size
 - 3*utxo_consumed_size

  A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
  size of 252 bytes.

 Now cost == 352.

 That is too large a cost for a 10% block change.  It could be included in
 the block size hard fork though.

I don't think so.  Again, you're mixing units.

 I think having one combined cost for
 transactions is good.  It means far fewer spread-out transaction checks.
 The code for the cost formula would be in one place.

Agreed!  Unfortunately there'll always be 2, because we really do want a
hard byte limit: it's total tx bytes which brings most concerns about
centralization.  But ideally it'll be so rarely hit that it can be ~
ignored (and certainly not optimized for).

Cheers,
Rusty.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-16 Thread Tier Nolan
On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
wrote:

 Some tweaks:

 1) Nomenclature: call tx_size tx_cost and real_size tx_bytes?


Fair enough.


 2) If we have a reasonable hard *byte* limit, I don't think that we need
the MAX().  In fact, it's probably OK to go negative.


I agree, we want people to compress the UTXO space and a transaction with
100 inputs and one output is great.

It may have a privacy problem though.



 3) ... or maybe not, if any consumed UTXO was generated before the soft
fork (reducing Tier's perverse incentive).


The incentive problem can be fixed by excluding UTXOs from blocks before a
certain count.

UTXOs in blocks before 375000 don't count.



 4) How do we measure UTXO size?  There are some constant-ish things in
there (eg. txid as key, height, outnum, amount).  Maybe just add 32
to scriptlen?


They can be stored as a fixed digest.  That can be any size, depending on
security requirements.

Gmaxwell's cost proposal is 3-4 bytes per UTXO change.  It isn't
4*UTXO.size - 3*UTXO.size

It is only a small nudge.  With only 10% of the block space to play with it
can't be massive.

This requires that transactions include scriptPubKey information when
broadcasting them.



 5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
correctly, unlike now).

 This last one implies that the initial cost limit would be 2M, but in
 practice probably somewhere in the middle.

   tx_cost = 50*num-CHECKSIG
 + tx_bytes
 + 4*utxo_created_size
 - 3*utxo_consumed_size

  A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
  size of 252 bytes.

 Now cost == 352.


That is too large a cost for a 10% block change.  It could be included in
the block size hard fork though.  I think having one combined cost for
transactions is good.  It means far fewer spread-out transaction checks.
The code for the cost formula would be in one place.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-15 Thread Rusty Russell
Tier Nolan tier.no...@gmail.com writes:
 On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 An example would
 be tx_size = MAX( real_size >> 1,  real_size + 4*utxo_created_size -
 3*utxo_consumed_size).


 This could be implemented as a soft fork too.

 * 1MB hard size limit
 * 900kB soft limit

I like this too.

Some tweaks:

1) Nomenclature: call tx_size tx_cost and real_size tx_bytes?

2) If we have a reasonable hard *byte* limit, I don't think that we need
   the MAX().  In fact, it's probably OK to go negative.

3) ... or maybe not, if any consumed UTXO was generated before the soft
   fork (reducing Tier's perverse incentive).

4) How do we measure UTXO size?  There are some constant-ish things in
   there (eg. txid as key, height, outnum, amount).  Maybe just add 32
   to scriptlen?

5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
   1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
   correctly, unlike now).   

This last one implies that the initial cost limit would be 2M, but in
practice probably somewhere in the middle.

  tx_cost = 50*num-CHECKSIG
+ tx_bytes
+ 4*utxo_created_size
- 3*utxo_consumed_size

 A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
 size of 252 bytes.

Now cost == 352.
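
As a quick sanity check of that arithmetic, a minimal sketch (treating
utxo_created_size and utxo_consumed_size as simple output/input counts,
which sidesteps the measurement question in point 4):

    def tx_cost(num_checksig, tx_bytes, utxo_created, utxo_consumed):
        # tx_cost = 50*num-CHECKSIG + tx_bytes
        #           + 4*utxo_created_size - 3*utxo_consumed_size
        return 50 * num_checksig + tx_bytes + 4 * utxo_created - 3 * utxo_consumed

    # 250-byte transaction, 2 inputs (2 CHECKSIGs, 2 UTXOs consumed), 2 outputs:
    # adjusted size 250 + 4*2 - 3*2 = 252, cost 50*2 + 252 = 352.
    print(tx_cost(2, 250, 2, 2))   # 352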

Cheers,
Rusty.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-13 Thread Tier Nolan
On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 An example would
 be tx_size = MAX( real_size >> 1,  real_size + 4*utxo_created_size -
 3*utxo_consumed_size).


This could be implemented as a soft fork too.

* 1MB hard size limit
* 900kB soft limit

S = block size
U = UTXO_adjusted_size = S + 4 * outputs - 3 * inputs

A block is valid if S < 1MB and U < 1MB

A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
size of 252 bytes.
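
A minimal sketch of that block-level check, again treating outputs and
inputs as simple counts rather than sizes:

    MAX_BYTES = 1_000_000   # 1MB

    def block_valid(block_bytes, total_outputs, total_inputs):
        s = block_bytes                                  # S
        u = s + 4 * total_outputs - 3 * total_inputs     # U
        return s < MAX_BYTES and u < MAX_BYTES

    # The 250-byte, 2-in/2-out transaction contributes
    # 250 + 4*2 - 3*2 = 252 adjusted bytes to U.
    print(250 + 4 * 2 - 3 * 2)   # 252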

The memory pool could be sorted by fee per adjusted_size.

 Coin selection could be adjusted so it tries to have at least 2 inputs
when creating transactions, unless the input is worth more than a threshold
(say 0.001 BTC).

This is a pretty weak incentive, especially if the block size is
increased.  Maybe it will cause a nudge


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Gregory Maxwell
On Sun, May 10, 2015 at 9:21 PM, Gavin Andresen gavinandre...@gmail.com wrote:
 a while I think any algorithm that ties difficulty to block size is just a
 complicated way of dictating minimum fees.

Thats not the long term effect or the motivation-- what you're seeing
is that the subsidy gets in the way here.  Consider how the procedure
behaves with subsidy being negligible compared to fees.   What it
accomplishes in that case is that it incentivizes increasing the size
until the marginal value to miners of the transaction-data being
left out is not enormously smaller than the value of the data in the
block on average.  Value in quotes because it's blind to the fees
the transaction claims.

With a large subsidy, the marginal value of the first byte in the
block is HUGE; and so that pushes up the average-- and creates the
base fee effect that you're looking at.  It's not that anyone is
picking a fee there, it's that someone picked the subsidy there.  :)
As the subsidy goes down the only thing fees are relative to is fees.

An earlier version of the proposal took subsidy out of the picture
completely by increasing it linearly with the increased difficulty;
but that creates additional complexity both to implement and to
explain to people (e.g. that the setup doesn't change the supply of
coins); ... I suppose without it that starting disadvantage parameter
(the offset that reduces the size if you're indifferent) needs to be
much smaller, unfortunately.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Thomas Voegtlin


On 11/05/2015 00:31, Mark Friedenbach wrote:
 I'm on my phone today so I'm somewhat constrained in my reply, but the key
 takeaway is that the proposal is a mechanism for miners to trade subsidy
 for the increased fees of a larger block. Necessarily it only makes sense
 to do so when the marginal fee per KB exceeds the subsidy fee per KB. It
 correspondingly makes sense to use a smaller block size if fees are less
 than subsidy, but note that fees are not uniform and as the block shrinks
 the marginal fee rate goes up..
 

Oh I see, you expect the sign of the dE/dx to change depending on
whether fees exceed the subsidy. This is possible, but instead of the
linear identity, you have to increase the block size twice as fast as
the difficulty. In that case we would get (using the notations of my
previous email):

D' = D(1+x)
F' = F(1+2x)

and thus:

E' - E = x/(1+x)P(F-S)

The presence of the (F-S) factor means that the sign reversal occurs
when fees exceed subsidy.
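
For completeness, a quick sympy check of that identity (using P = K/D as
before):

    from sympy import symbols, simplify

    x, S, F, K, D = symbols('x S F K D', positive=True)
    P = K / D
    E     = P * (S + F)                                   # at difficulty D
    E_new = K / (D * (1 + x)) * (S + F * (1 + 2 * x))     # D'=D(1+x), F'=F(1+2x)

    print(simplify(E_new - E - x / (1 + x) * P * (F - S)))   # 0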


 Limits on both the relative and absolute amount a miner can trade subsidy
 for block size prevent incentive edge cases as well as prevent a sharp
 shock to the current fee-poor economy (by disallowing adjustment below 1MB).
 
 Also the identity transform was used only for didactic purposes. I fully
 expect there to be other, more interesting functions to use.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Thomas Voegtlin
On 08/05/2015 22:33, Mark Friedenbach wrote:

   * For each block, the miner is allowed to select a different difficulty
 (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
 and this miner-selected difficulty is used for the proof of work check. In
 addition to adjusting the hashcash target, selecting a different difficulty
 also raises or lowers the maximum block size for that block by a function
 of the difference in difficulty. So increasing the difficulty of the block
 by an additional 25% raises the block limit for that block from 100% of the
 current limit to 125%, and lowering the difficulty by 10% would also lower
 the maximum block size for that block from 100% to 90% of the current
 limit. For simplicity I will assume a linear identity transform as the
 function, but a quadratic or other function with compounding marginal cost
 may be preferred.
 

Sorry but I fail to see how a linear identity transform between block
size and difficulty would work.

The miner's reward for finding a block is the sum of subsidy and fees:

 R = S + F

The probability that the miner will find a block over a time interval is
inversely proportional to the difficulty D:

 P = K / D

where K is a constant that depends on the miner's hashrate. The expected
reward of the miner is:

 E = P * R

Consider that the miner chooses a new difficulty:

 D' = D(1 + x).

With a linear identity transform between block size and difficulty, the
miner will be allowed to collect fees from a block of size: S'=S(1+x)

In the best case, collected fees will be proportional to block size:

 F' = F(1+x)

Thus we get:

 E' = P' * R' = K/(D(1+x)) * (S + F(1+x))

 E' = E - x/(1+x) * S * K / D

So with this linear identity transform, increasing block size never
increases the miners gain. As long as the subsidy exists, the best
strategy for miners is to reduce block size (i.e. to choose x < 0).
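
A quick sympy check of that conclusion, for anyone who wants to verify the
algebra:

    from sympy import symbols, simplify

    x, S, F, K, D = symbols('x S F K D', positive=True)
    E     = (K / D) * (S + F)                          # E  = P * R
    E_new = (K / (D * (1 + x))) * (S + F * (1 + x))    # D'=D(1+x), F'=F(1+x)

    # E' - E simplifies to -x/(1+x) * S * K / D, negative whenever x > 0:
    print(simplify(E_new - E + x / (1 + x) * S * K / D))   # 0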



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Gavin Andresen
Let me make sure I understand this proposal:

On Fri, May 8, 2015 at 11:36 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 (*) I believe my currently favored formulation of general dynamic control
 idea is that each miner expresses in their coinbase a preferred size
 between some minimum (e.g. 500k) and the miner's effective-maximum;
 the actual block size can be up to the effective maximum even if the
 preference is lower (you're not forced to make a lower block because you
 stated you wished the limit were lower).  There is a computed maximum
 which is the 33-rd percentile of the last 2016 coinbase preferences
 minus computed_max/52 (rounding up to 1) bytes-- or 500k if that's
 larger. The effective maximum is X bytes more, where X is on the range
 [0, computed_maximum] e.g. the miner can double the size of their
 block at most. If X > 0, then the miners must also reach a target
 F(x/computed_maximum) times the bits-difficulty; with F(x) = x^2+1  ---
 so the maximum penalty is 2, with a quadratic shape;  for a given mempool
 there will be some value that maximizes expected income.  (obviously all
 implemented with precise fixed point arithmetic).   The percentile is
 intended to give the preferences of the 33% least preferring miners a
 veto on increases (unless a majority chooses to soft-fork them out). The
 minus-comp_max/52 provides an incentive to slowly shrink the maximum
 if it's too large-- x/52 would halve the size in one year if miners
 were doing the lowest difficulty mining. The parameters 500k/33rd,
 -computed_max/52 bytes, and f(x)  I have less strong opinions about;
 and would love to hear reasoned arguments for particular parameters.


I'm going to try to figure out how much transaction fee a transaction would
have to pay to bribe a miner to include it. Greg, please let me know if
I've misinterpreted the proposed algorithm. And everybody, please let me
know if I'm making a bone-headed mistake in how I'm computing anything:

Lets say miners are expressing a desire for 600,000 byte blocks in their
coinbases.

computed_max = 600,000 - 600,000/52 = 588,462 bytes.
  -- this is about 23 average-size (500-byte) transactions less than
600,000.
effective_max = 1,176,923

Lets say I want to maintain status quo at 600,000 bytes; how much penalty
do I have?
((600,000-588,462)/588,462)^2 + 1 = 1.00038

How much will that cost me?
The network is hashing at 310PetaHash/sec right now.
Takes 600 seconds to find a block, so 186,000PH per block
186,000 * 0.00038 = 70 extra PH

If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC
(reward plus fees), that 70 PH costs:
(25.13 BTC/block / 186,000 PH/block) * 70 PH = 0.00945 BTC
or at $240 / BTC:  $2.27

... so average transaction fee will have to be about ten cents ($2.27
spread across 23 average-sized transactions) for miners to decide to stay
at 600K blocks. If they fill up 588,462 bytes and don't have some
ten-cent-fee transactions left, they should express a desire to create a
588,462-byte-block and mine with no penalty.
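
A small script that reproduces those numbers; the last step comes out a
shade higher than above because I rounded the penalty to 0.00038 there:

    pref = 600_000                        # miners' stated preference, bytes
    computed_max = pref - pref / 52       # ~588,462 bytes
    effective_max = 2 * computed_max      # ~1,176,923 bytes

    x = (pref - computed_max) / computed_max
    penalty = x ** 2 + 1                  # F(x) = x^2 + 1  ->  ~1.000384

    hashes_per_block = 310e15 * 600       # 310 PH/s * 600 s = 186,000 PH
    extra_hashes = hashes_per_block * (penalty - 1)          # ~71 PH
    cost_btc = 25.13 / hashes_per_block * extra_hashes
    print(round(cost_btc, 5), round(cost_btc * 240, 2))      # 0.00966 2.32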

Is that too much?  Not enough?  Average transaction fees today are about 3
cents per transaction.
I created a spreadsheet playing with the parameters:

https://docs.google.com/spreadsheets/d/1zYZfb44Uns8ai0KnoQ-LixDwdhqO5iTI3ZRcihQXlgk/edit?usp=sharing

We could tweak the constants or function to get a transaction fee we
think is reasonable... but we really shouldn't be deciding whether
transaction fees are too high, too low, or just right, and after thinking
about this for a while I think any algorithm that ties difficulty to block
size is just a complicated way of dictating minimum fees.

As for some other dynamic algorithm: OK with me. How do we get consensus on
what the best algorithm is? I'm ok with any don't grow too quickly, give
some reasonable-percentage-minority of miners the ability to block further
increases.

Also relevant here:
The curious task of economics is to demonstrate to men how little they
really know about what they imagine they can design. - Friedrich August
von Hayek

-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Mark Friedenbach
I'm on my phone today so I'm somewhat constrained in my reply, but the key
takeaway is that the proposal is a mechanism for miners to trade subsidy
for the increased fees of a larger block. Necessarily it only makes sense
to do so when the marginal fee per KB exceeds the subsidy fee per KB. It
correspondingly makes sense to use a smaller block size if fees are less
than subsidy, but note that fees are not uniform and as the block shrinks
the marginal fee rate goes up..

Limits on both the relative and absolute amount a miner can trade subsidy
for block size prevent incentive edge cases as well as prevent a sharp
shock to the current fee-poor economy (by disallowing adjustment below 1MB).

Also the identity transform was used only for didactic purposes. I fully
expect there to be other, more interesting functions to use.
On May 10, 2015 3:03 PM, Thomas Voegtlin thom...@electrum.org wrote:

 On 08/05/2015 22:33, Mark Friedenbach wrote:

* For each block, the miner is allowed to select a different difficulty
  (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
  and this miner-selected difficulty is used for the proof of work check.
 In
  addition to adjusting the hashcash target, selecting a different
 difficulty
  also raises or lowers the maximum block size for that block by a function
  of the difference in difficulty. So increasing the difficulty of the
 block
  by an additional 25% raises the block limit for that block from 100% of
 the
  current limit to 125%, and lowering the difficulty by 10% would also
 lower
  the maximum block size for that block from 100% to 90% of the current
  limit. For simplicity I will assume a linear identity transform as the
  function, but a quadratic or other function with compounding marginal
 cost
  may be preferred.
 

 Sorry but I fail to see how a linear identity transform between block
 size and difficulty would work.

 The miner's reward for finding a block is the sum of subsidy and fees:

  R = S + F

 The probability that the miner will find a block over a time interval is
 inversely proportional to the difficulty D:

  P = K / D

 where K is a constant that depends on the miner's hashrate. The expected
 reward of the miner is:

  E = P * R

 Consider that the miner chooses a new difficulty:

  D' = D(1 + x).

 With a linear identity transform between block size and difficulty, the
 miner will be allowed to collect fees from a block of size: S'=S(1+x)

 In the best case, collected fees will be proportional to block size:

  F' = F(1+x)

 Thus we get:

  E' = P' * R' = K/(D(1+x)) * (S + F(1+x))

  E' = E - x/(1+x) * S * K / D

 So with this linear identity transform, increasing block size never
 increases the miners gain. As long as the subsidy exists, the best
 strategy for miners is to reduce block size (i.e. to choose x < 0).





Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Rob Golding
 How much will that cost me?
 The network is hashing at 310PetaHash/sec right now.
 Takes 600 seconds to find a block, so 186,000PH per block
 186,000 * 0.00038 = 70 extra PH
 
 If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC
 (reward plus fees), that 70 PH costs:
 (25.13 BTC/block / 186,000 PH/block) * 70 PH = 0.00945 BTC
 or at $240 / BTC:  $2.27
 
 ... so average transaction fee will have to be about ten cents ($2.27
 spread across 23 average-sized transactions) for miners to decide to
 stay at 600K blocks

Surely that's an *extra* $2.27 as you've already included .13BTC 
($31.20) in fees in the calculation ?

Rob



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Owen Gunden
On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
 Another related point which has been tendered before but seems to have
 been ignored is that changing how the size limit is computed can help
 better align incentives and thus reduce risk.  E.g. a major cost to the
 network is the UTXO impact of transactions, but since the limit is blind
 to UTXO impact a miner would gain less income if substantially factoring
 UTXO impact into its fee calculations; and without fee impact users have
 little reason to optimize their UTXO behavior.

Along the lines of aligning incentives with a diversity of costs to a 
variety of network participants, I am curious about reactions to Justus' 
general approach:

http://bitcoinism.liberty.me/2015/02/09/economic-fallacies-and-the-block-size-limit-part-2-price-discovery/

I realize it relies on pie-in-the-sky ideas like micropayment channels, 
but I wonder if it's a worthy long-term ideal direction for this stuff.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-10 Thread Mark Friedenbach
Micropayment channels are not pie in the sky proposals. They work today on
Bitcoin as it is deployed without any changes. People just need to start
using them.
On May 10, 2015 11:03, Owen Gunden ogun...@phauna.org wrote:

 On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
  Another related point which has been tendered before but seems to have
  been ignored is that changing how the size limit is computed can help
  better align incentives and thus reduce risk.  E.g. a major cost to the
  network is the UTXO impact of transactions, but since the limit is blind
  to UTXO impact a miner would gain less income if substantially factoring
  UTXO impact into its fee calculations; and without fee impact users have
  little reason to optimize their UTXO behavior.

 Along the lines of aligning incentives with a diversity of costs to a
 variety of network participants, I am curious about reactions to Justus'
 general approach:


 http://bitcoinism.liberty.me/2015/02/09/economic-fallacies-and-the-block-size-limit-part-2-price-discovery/

 I realize it relies on pie-in-the-sky ideas like micropayment channels,
 but I wonder if it's a worthy long-term ideal direction for this stuff.





Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-09 Thread Gavin Andresen
RE: fixing sigop counting, and building in UTXO cost: great idea! One of
the problems with this debate is it is easy for great ideas get lost in all
the noise.

RE: a hard upper limit, with a dynamic limit under it:

I like that idea. Can we drill down on the hard upper limit?

There are lots of people who want a very high upper limit, right now (all
the big Bitcoin companies, and anybody who thinks as-rapid-as-possible
growth now is the best path to long-term success). This is the it is OK if
you have to run full nodes in a data center camp.

There are also lots of people who want an upper limit low enough that they
can continue to run Bitcoin on the hardware and Internet connection that
they have (or are concerned about centralization, so want to make sure
OTHER people can continue to run).

Is there an upper limit we can choose to make both sets of people mostly
happy? I've proposed must be inexpensive enough that a 'hobbyist' can
afford to run a full node ...

Is the limit chosen once, now, via hard-fork, or should we expect multiple
hard-forks to change it when necessary ?

The economics change every time the block reward halves, which make me
think that might be a good time to adjust the hard upper limit. If we have
a hard upper limit and a lower dynamic limit, perhaps adjusting the hard
upper limit (up or down) to account for the block reward halving, based on
the dynamic limit



RE: the lower dynamic limit algorithm:  I REALLY like that idea.

-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-09 Thread Tier Nolan
On Sat, May 9, 2015 at 12:58 PM, Gavin Andresen gavinandre...@gmail.com
wrote:

 RE: fixing sigop counting, and building in UTXO cost: great idea! One of
 the problems with this debate is it is easy for great ideas get lost in all
 the noise.


If the UTXO set cost is built in, UTXO database entries suddenly are worth
something, in addition to the bitcoin held in that entry.

A user's client might display how many they own.  When sending money to a
merchant, the user might demand the merchant indicate a slot to pay to.

The user could send an ANYONE_CAN_PAY partial transaction.  The transaction
would guarantee that the user has at least as many UTXOs as before.

Discussing the possibility of doing this creates an incentive to bloat the
UTXO set right now, since UTXOs would be valuable in the future.

The objective would be to make them valuable enough to encourage
conservation, but not so valuable that the UTXO contains more value than
the bitcoins in the output.

Gmaxwell's suggested tx_size = MAX( real_size >> 1,  real_size +
4*utxo_created_size - 3*utxo_consumed_size) for a 250 byte transaction
with 1 input and 2 outputs has very little effect.

real_size + 4 * (2) - 3 * 1 = 255

That gives a 2% size penalty for adding an extra UTXO.  I doubt that is
enough to change behavior.

The UTXO set growth could be limited directly.  A block would be invalid if
it increases the number of UTXO entries above the charted path.
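
A minimal sketch of that direct limit, assuming a hypothetical "charted
path" expressed as a function of block height (the linear schedule below is
purely illustrative):

    def max_utxo_count(height, base=20_000_000, per_block=50):
        # Hypothetical charted path; a straight line purely for illustration.
        return base + per_block * height

    def block_allowed(height, utxos_before, created, consumed):
        return utxos_before + created - consumed <= max_utxo_count(height)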

RE: a hard upper limit, with a dynamic limit under it:


If the block is greater than 32MB, then it means an update to how blocks
are broadcast, so that could be a reasonable hard upper limit (or maybe
31MB, or just the 20MB already suggested).


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-09 Thread Peter Todd
On Sat, May 09, 2015 at 01:36:56AM +0300, Joel Joonatan Kaartinen wrote:
 such a contract is a possibility, but why would big owners give an
 exclusive right to such pools? It seems to me it'd make sense to offer
 those to any miner as long as they get paid a little for it. Especially
 when it's as simple as offering an incomplete transaction with the
 appropriate SIGHASH flags.

Like many things, the fact that they need to negotiate the right at all
is a *huge* barrier to smaller mining operations, as well as being an
attractive point of control for regulators.

 a part of the reason I like this idea is because it will allow stakeholders
 a degree of influence on how large the fees are. At least from the surface,
 it looks like incentives are pretty well matched. They have an incentive to
 not let the fees drop too low so the network continues to be usable and
 they also have an incentive to not raise them too high because it'll push
 users into using other systems. Also, there'll be competition between
 stakeholders, which should keep the fees reasonable.

If you want to allow stakeholders influence you should look into John Dillon's
proof-of-stake blocksize voting scheme:

http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg02323.html

-- 
'peter'[:-1]@petertodd.org
0e7980aab9c096c46e7f34c43a661c5cb2ea71525ebb8af7




Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mike Hearn
There are certainly arguments to be made for and against all of these
proposals.

The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I suppose
to make the process of getting consensus easier. It is the simplest thing
that can possibly work.

I would like to see the process of chain forking becoming less traumatic. I
remember Gavin, Jeff and I once considered (on stage at a conference??)
that maybe there should be a scheduled fork every year, so people know when
to expect them.

If everything goes well, I see no reason why 20mb would be the limit
forever.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Clément Elbaz
Matt: I think proposals #1 and #3 are a lot better than #2, and #1 is my
favorite.

I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious vote and those same
people's behavior.

Relying on an  intentional vote made consciously by miners by choosing a
configuration value can lead to twisted results if their actual behavior
doesn't correlate with their vote (eg, they all vote for a small block size
because it is the default configuration of their software, and then they
fill it completely all the time and everything crashes).

The second problem with proposal #2 is that if Gavin and Mike are right,
there is simply no time to gather a meaningful amount of votes over the
coinbases, after the fork but before the Bitcoin scalability crash.

I like proposal #1 because the vote is made using already available data.
Also there is no possible mismatch between behavior and vote. As a miner
you vote by choosing to create a big (or small) block, and your actions
reflect your vote. It is simple and straightforward.

My feeling on proposal #3 is that it mixes apples and oranges a little bit,
but I may not be seeing all the implications.

On Fri, 8 May 2015 at 09:21, Matt Whitlock b...@mattwhitlock.name wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.

 - Perhaps the hard block size limit should be determined by a vote of the
 miners. Each miner could embed a desired block size limit in the coinbase
 transactions of the blocks it publishes. The effective hard block size
 limit would be that size having the greatest number of votes within a
 sliding window of most recent blocks.

 - Perhaps the hard block size limit should be a function of block-chain
 length, so that it can scale up smoothly rather than jumping immediately to
 20 MB. This function could be linear (anticipating a breakdown of Moore's
 Law) or quadratic.

 I would be in support of any of the above, but I do not support Mike
 Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
 road without actually solving the problem, and it does so in a
 controversial (step function) way.
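
A minimal sketch of the first proposal quoted above (trailing median of the
last 2016 blocks times 1.5); the 1 MB floor is an assumption, not part of
the quoted text:

    import statistics

    def next_hard_limit(recent_sizes, multiplier=1.5, floor=1_000_000):
        # Median block size over the trailing 2016 blocks, times 1.5,
        # never below the current 1MB (the floor is an assumption).
        return max(floor, int(multiplier * statistics.median(recent_sizes[-2016:])))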





Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Steven Pine
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Alex Mizrahi
Adaptive schedules, i.e. those where block size limit depends not only on
block height, but on other parameters as well, are surely attractive in the
sense that the system can adapt to the actual use, but they also open a
possibility of a manipulation.

E.g. one of the mining companies might try to bankrupt the others by making
mining non-profitable. To do that, they will accept transactions with
ridiculously low fees (e.g. 1 satoshi per transaction). Of course, they will
suffer losses themselves, but they might be able to survive that if they
have access to financial resources. (E.g. companies backed by banks and such
will have an advantage.)
Once competitors close down their mining operations, they can drive fees
upwards.

So if you don't want to open room for manipulation (which is very hard to
analyze), it is better to have a block size hard limit which depends only
on block height.
On top of that there might be a soft limit which is enforced by the
majority of miners.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Bryan Bishop
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock b...@mattwhitlock.name wrote:
 - Perhaps the hard block size limit should be a function of the actual block 
 sizes over some
 trailing sampling period. For example, take the median block size among the 
 most recent
 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually 
 and organically,
 rather than having human beings guessing at what is an appropriate limit.

Block contents can be grinded much faster than hashgrinding and
mining. There is a significant run-away effect there, and it also
works in the gradual sense as a miner probabilistically mines large
blocks that get averaged into that 2016 median block size computation.
At least this proposal would be a slower way of pushing out miners and
network participants that can't handle 100 GB blocks immediately.  As
the size of the blocks is increased, low-end hardware participants
have to fall off the network because they no longer meet the minimum
performance requirements. Adjustment might become severely mismatched
with general economic trends in data storage device development or
availability or even current-market-saturation of said storage
devices. With the assistance of transaction stuffing or grinding, that
2016 block median metric can be gamed to increase faster than other
participants can keep up with or, perhaps worse, in a way that was
unintended by developers yet known to be a failure mode. These are
just some issues to keep in mind and consider.

- Bryan
http://heybryan.org/
1 512 203 0507



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Peter Todd
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
 Matt,
 
 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

The problem with gating block creation on Bitcoin days destroyed is
there's a strong potential of giving big mining pools a huge advantage,
because they can contract with large Bitcoin owners and buy dummy
transactions with large numbers of Bitcoin days destroyed on demand
whenever they need more days-destroyed to create larger blocks.
Similarly, with appropriate SIGHASH flags such contracting can be done
by modifying *existing* transactions on demand.

Ultimately bitcoin days destroyed just becomes a very complex version of
transaction fees, and it's already well known that gating blocksize on
total transaction fees doesn't work.

-- 
'peter'[:-1]@petertodd.org
0f53e2d214685abf15b6d62d32453a03b0d472e374e10e94




Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions against the combined value of the entire bitcoin ecosystem. Ideally
we would take no action for which we are not absolutely certain of the
ramifications, with the information that can be made available to us. But
of course that is not always possible: there are unknown-unknowns, time
pressures, and known-unknowns where information has too high a marginal
cost. So where certainty is unobtainable, we must instead hedge against
unwanted outcomes.

The proposal to raise the block size now by redefining a constant carries
with it risk associated with infrastructure scaling, centralization
pressures, and delaying the necessary development of a constraint-based fee
economy. It also simply kicks the can down the road in settling these
issues because a larger but realistic hard limit must still exist, meaning
a future hard fork may still be required.

But whatever new hard limit is chosen, there is also a real possibility
that it may be too high. The standard response is that it is a soft-fork
change to impose a lower block size limit, which miners could do with a
minimal amount of coordination. This is however undermined by the
unfortunate reality that so many mining operations are absentee-run
businesses, or run by individuals without a strong background in bitcoin
protocol policy, or with interests which are not well aligned with other
users or holders of bitcoin. We cannot rely on miners being vigilant about
issues that develop, as they develop, or able to respond in the appropriate
fashion that someone with full domain knowledge and an objective
perspective would.

The alternative then is to have some sort of dynamic block size limit
controller, and ideally one which applies a cost to raising the block size
in some way the preserves the decentralization and/or long-term stability
features that we care about. I will now describe one such proposal:

  * For each block, the miner is allowed to select a different difficulty
(nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
and this miner-selected difficulty is used for the proof of work check. In
addition to adjusting the hashcash target, selecting a different difficulty
also raises or lowers the maximum block size for that block by a function
of the difference in difficulty. So increasing the difficulty of the block
by an additional 25% raises the block limit for that block from 100% of the
current limit to 125%, and lowering the difficulty by 10% would also lower
the maximum block size for that block from 100% to 90% of the current
limit. For simplicity I will assume a linear identity transform as the
function, but a quadratic or other function with compounding marginal cost
may be preferred.

  * The default maximum block size limit is then adjusted at regular
intervals. For simplicity I will assume an adjustment at the end of each
2016 block interval, at the same time that difficulty is adjusted, but
there is no reason these have to be aligned. The adjustment algorithm
itself is either the selection of the median, or perhaps some sort of
weighted average that respects the middle majority. There would of course
be limits on how quickly the block size limit can adjusted in any one
period, just as there are min/max limits on the difficulty adjustment.

  * To prevent perverse mining incentives, the original difficulty without
adjustment is used in the aggregate work calculations for selecting the
most-work chain, and the allowable miner-selected adjustment to difficulty
would have to be tightly constrained.

These rules create an incentive environment where raising the block size
has a real cost associated with it: a more difficult hashcash target for
the same subsidy reward. For rational miners that cost must be
counter-balanced by additional fees provided in the larger block. This
allows block size to increase, but only within the confines of a
self-supporting fee economy.
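
A minimal sketch of the per-block rule under the linear identity transform
(the +/- 25% bound is as described above; the function and parameter names
are purely illustrative):

    MAX_ADJUSTMENT = 0.25   # miner may move difficulty +/- 25%

    def per_block_params(expected_difficulty, base_size_limit, factor):
        # factor is the miner-selected multiplier, e.g. 1.25 or 0.90.
        if not (1 - MAX_ADJUSTMENT <= factor <= 1 + MAX_ADJUSTMENT):
            raise ValueError("difficulty adjustment out of range")
        # Linear identity transform: +25% difficulty buys +25% block size,
        # -10% difficulty shrinks the allowance to 90% of the current limit.
        return expected_difficulty * factor, base_size_limit * factor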

When the subsidy goes away or is reduced to an insignificant fraction of
the block reward, this incentive structure goes away. Hopefully at that
time we would have sufficient information to soft-fork set a hard block
size maximum. But in the mean time, the block size limit controller
constrains the maximum allowed block size to be within a range supported by
fees on the network, providing an emergency relief valve that we can be
assured will only be used at significant cost.

Mark Friedenbach

* There have over time been various discussions on the bitcointalk forums
about dynamically adjusting block size limits. The true origin of the idea
is unclear at this time (citations would be appreciated!) but a form of it
was implemented in Bytecoin / Monero using subsidy burning to 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Gavin Andresen
I like the bitcoin days destroyed idea.

I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.

It is easy to make a proposal, it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that isn't accompanied by code that implements the proposal (with
appropriate tests).

However, I'm not the bottleneck-- you need to get the attention of the
other committers and convince THEM:

a) something should be done now-ish
b) your idea is good

We are stuck on (a) right now, I think.


On Fri, May 8, 2015 at 8:32 AM, Joel Joonatan Kaartinen 
joel.kaarti...@gmail.com wrote:

 Matt,

 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

 - Joel



-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Matt Whitlock
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

I saw it. I apologize for not including it in my list. I should have, for sake 
of discussion, even though I have a problem with it.

My problem with it is that bitcoin days destroyed is not a measure of demand 
for space in the block chain. In the distant future, when Bitcoin is the 
predominant global currency, bitcoins will have such high velocity that the 
number of bitcoin days destroyed in each block will be much lower than at 
present. Does this mean that the block size limit should be lower in the future 
than it is now? Clearly this would be incorrect.

Perhaps I am misunderstanding your proposal. Could you describe it more 
explicitly?



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is using
 hard block size limits to enforce scarcity. Transactions with below market
 fees will hang in limbo for days and fail, instead of failing immediately
 by not propagating, or seeing degraded, long confirmation times followed by
 eventual success.


There are already solutions to this which are waiting to be deployed as
default policy to bitcoind, and need to be implemented in other clients:
replace-by-fee and child-pays-for-parent.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
Such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those to any miner as long as they get paid a little for it. Especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH flags.

A part of the reason I like this idea is that it will allow stakeholders
a degree of influence on how large the fees are. At least on the surface,
it looks like incentives are pretty well matched. They have an incentive to
not let the fees drop too low so the network continues to be usable and
they also have an incentive to not raise them too high because it'll push
users into using other systems. Also, there'll be competition between
stakeholders, which should keep the fees reasonable.

I think this would at least be preferable to the let the miner decide
model.

- Joel

On Fri, May 8, 2015 at 7:51 PM, Peter Todd p...@petertodd.org wrote:

 On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
  Matt,
 
  It seems you missed my suggestion about basing the maximum block size on
  the bitcoin days destroyed in transactions that are included in the
 block.
  I think it has potential for both scaling as well as keeping up a
 constant
  fee pressure. If tuned properly, it should both stop spamming and
 increase
  block size maximum when there are a lot of real transactions waiting for
  inclusion.

 The problem with gating block creation on Bitcoin days destroyed is
 there's a strong potential of giving big mining pools a huge advantage,
 because they can contract with large Bitcoin owners and buy dummy
 transactions with large numbers of Bitcoin days destroyed on demand
 whenever they need more days-destroyed to create larger blocks.
 Similarly, with appropriate SIGHASH flags such contracting can be done
 by modifying *existing* transactions on demand.

 Ultimately bitcoin days destroyed just becomes a very complex version of
 transaction fees, and it's already well known that gating blocksize on
 total transaction fees doesn't work.

 --
 'peter'[:-1]@petertodd.org
 0f53e2d214685abf15b6d62d32453a03b0d472e374e10e94



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Aaron Voisine
This is a clever way to tie block size to fees.

I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or seeing degraded, long confirmation times followed by
eventual success.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 It is my professional opinion that raising the block size by merely
 adjusting a constant without any sort of feedback mechanism would be a
 dangerous and foolhardy thing to do. We are custodians of a multi-billion
 dollar asset, and it falls upon us to weigh the consequences of our own
 actions against the combined value of the entire bitcoin ecosystem. Ideally
 we would take no action for which we are not absolutely certain of the
 ramifications, with the information that can be made available to us. But
 of course that is not always possible: there are unknown-unknowns, time
 pressures, and known-unknowns where information has too high a marginal
 cost. So where certainty is unobtainable, we must instead hedge against
 unwanted outcomes.

 The proposal to raise the block size now by redefining a constant carries
 with it risk associated with infrastructure scaling, centralization
 pressures, and delaying the necessary development of a constraint-based fee
 economy. It also simply kicks the can down the road in settling these
 issues because a larger but realistic hard limit must still exist, meaning
 a future hard fork may still be required.

 But whatever new hard limit is chosen, there is also a real possibility
 that it may be too high. The standard response is that it is a soft-fork
 change to impose a lower block size limit, which miners could do with a
 minimal amount of coordination. This is however undermined by the
 unfortunate reality that so many mining operations are absentee-run
 businesses, or run by individuals without a strong background in bitcoin
 protocol policy, or with interests which are not well aligned with other
 users or holders of bitcoin. We cannot rely on miners being vigilant about
 issues that develop, as they develop, or able to respond in the appropriate
 fashion that someone with full domain knowledge and an objective
 perspective would.

 The alternative then is to have some sort of dynamic block size limit
 controller, and ideally one which applies a cost to raising the block size
 in some way that preserves the decentralization and/or long-term stability
 features that we care about. I will now describe one such proposal:

   * For each block, the miner is allowed to select a different difficulty
 (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
 and this miner-selected difficulty is used for the proof of work check. In
 addition to adjusting the hashcash target, selecting a different difficulty
 also raises or lowers the maximum block size for that block by a function
 of the difference in difficulty. So increasing the difficulty of the block
 by an additional 25% raises the block limit for that block from 100% of the
 current limit to 125%, and lowering the difficulty by 10% would also lower
 the maximum block size for that block from 100% to 90% of the current
 limit. For simplicity I will assume a linear identity transform as the
 function, but a quadratic or other function with compounding marginal cost
 may be preferred.

   * The default maximum block size limit is then adjusted at regular
 intervals. For simplicity I will assume an adjustment at the end of each
 2016 block interval, at the same time that difficulty is adjusted, but
 there is no reason these have to be aligned. The adjustment algorithm
 itself is either the selection of the median, or perhaps some sort of
 weighted average that respects the middle majority. There would of course
 be limits on how quickly the block size limit can be adjusted in any one
 period, just as there are min/max limits on the difficulty adjustment.

   * To prevent perverse mining incentives, the original difficulty without
 adjustment is used in the aggregate work calculations for selecting the
 most-work chain, and the allowable miner-selected adjustment to difficulty
 would have to be tightly constrained.

 These rules create an incentive environment where raising the block size
 has a real cost associated with it: a more difficult hashcash target for
 the same subsidy reward. For rational miners that cost must be
 counter-balanced by additional fees provided in the larger block. This
 allows block size to increase, but only within the confines of a
 self-supporting fee economy.

 When the subsidy goes away or is reduced to an insignificant fraction of
 the block reward, this incentive structure goes away. Hopefully at that
 time we would have sufficient information to soft-fork set a hard block
 size maximum. But in the mean time, the block size limit controller
 constrains the maximum allowed block size to be within a range supported by
 fees on the network, providing an emergency relief valve that we can be
 assured will only be used at significant cost.
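
A minimal sketch of the per-block rule in the first bullet above, using the
linear identity transform the proposal assumes for simplicity (the +/- 25%
band is from the text; names and clamping details are assumptions):

    # Sketch: a miner who raises difficulty by 25% earns a 125% size limit for
    # that block; lowering difficulty by 10% shrinks the limit to 90%.

    MAX_ADJUST = 0.25  # permitted miner-selected deviation from expected difficulty

    def allowed_block_size(current_limit, expected_difficulty, chosen_difficulty):
        """Maximum size (bytes) for a block mined at chosen_difficulty."""
        ratio = chosen_difficulty / expected_difficulty
        if not (1.0 - MAX_ADJUST) <= ratio <= (1.0 + MAX_ADJUST):
            raise ValueError("difficulty outside the permitted +/- 25% band")
        # Linear identity transform; a quadratic form would compound the
        # marginal cost of each additional byte instead.
        return int(current_limit * ratio)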

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Aaron Voisine
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?

My argument is that the less risky, more conservative default fallback
behavior should be either non-propagation or delayed confirmation, which is
generally what we have now, until we hit the block size limit. We still
have lots of safe, non-controversial, easy to experiment with options to
add fee pressure, causing users to economize on block space without
resorting to dropping transactions after a prolonged delay.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is
 using hard block size limits to enforce scarcity. Transactions with below
 market fees will hang in limbo for days and fail, instead of failing
 immediately by not propagating, or seeing degraded, long confirmation times
 followed by eventual success.


 There are already solutions to this which are waiting to be deployed as
 default policy to bitcoind, and need to be implemented in other clients:
 replace-by-fee and child-pays-for-parent.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
Matt,

It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and increase
block size maximum when there are a lot of real transactions waiting for
inclusion.

- Joel

On Fri, May 8, 2015 at 1:30 PM, Clément Elbaz clem...@gmail.com wrote:

 Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
 favorite.

 I see two problems with proposal #2.
 The first problem with proposal #2 is that, as we see in democracies,
 there is often a mismatch between people's conscious vote and those same
 people's behavior.

 Relying on a vote made consciously by miners through a configuration value
 can lead to twisted results if their actual behavior doesn't correlate with
 their vote (e.g. they all vote for a small block size because it is the
 default configuration of their software, and then they fill it completely
 all the time and everything crashes).

 The second problem with proposal #2 is that if Gavin and Mike are right,
 there is simply no time to gather a meaningful amount of votes over the
 coinbases, after the fork but before the Bitcoin scalability crash.

 I like proposal #1 because the vote is made using already available
 data. Also there is no possible mismatch between behavior and vote. As a
 miner you vote by choosing to create a big (or small) block, and your
 actions reflect your vote. It is simple and straightforward.

 My feeling on proposal #3 is that it mixes apples and oranges a little,
 but I may not be seeing all the implications.


 Le ven. 8 mai 2015 à 09:21, Matt Whitlock b...@mattwhitlock.name a
 écrit :

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.

 - Perhaps the hard block size limit should be determined by a vote of the
 miners. Each miner could embed a desired block size limit in the coinbase
 transactions of the blocks it publishes. The effective hard block size
 limit would be that size having the greatest number of votes within a
 sliding window of most recent blocks.

 - Perhaps the hard block size limit should be a function of block-chain
 length, so that it can scale up smoothly rather than jumping immediately to
 20 MB. This function could be linear (anticipating a breakdown of Moore's
 Law) or quadratic.

 I would be in support of any of the above, but I do not support Mike
 Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
 road without actually solving the problem, and it does so in a
 controversial (step function) way.
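
For illustration, a minimal sketch of the first idea above (trailing median
times 1.5; the 2016-block window and the multiplier are from the text, the
rest is assumed):

    import statistics

    WINDOW = 2016      # trailing sample, in blocks
    MULTIPLIER = 1.5   # headroom above the median

    def next_hard_limit(recent_block_sizes):
        """recent_block_sizes: sizes in bytes of at least the last WINDOW blocks."""
        window = recent_block_sizes[-WINDOW:]
        return int(statistics.median(window) * MULTIPLIER)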









Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Gregory Maxwell
On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach m...@friedenbach.org wrote:
 These rules create an incentive environment where raising the block size has
 a real cost associated with it: a more difficult hashcash target for the
 same subsidy reward. For rational miners that cost must be counter-balanced
 by additional fees provided in the larger block. This allows block size to
 increase, but only within the confines of a self-supporting fee economy.

 When the subsidy goes away or is reduced to an insignificant fraction of the
 block reward, this incentive structure goes away. Hopefully at that time we
 would have sufficient information to soft-fork set a hard block size
 maximum. But in the mean time, the block size limit controller constrains
 the maximum allowed block size to be within a range supported by fees on the
 network, providing an emergency relief valve that we can be assured will
 only be used at significant cost.

Though I'm a fan of this class of techniques(*) and think using something
in this space is strictly superior to not, and I think it makes larger
sizes safer long term;  I do not think it adequately obviates the need
for a hard upper limit for two reasons:

(1) for software engineering and operational reasons it is very
difficult to develop, test for, or provision for something without
knowing limits. There would in fact be hard limits on real deployments
but they'd be opaque to their operators and you could easily imagine
the network forking by surprise as hosts crossed those limits.

(2)  At best this approach mitigates the collective action problem between
miners around fees;  it does not correct the incentive alignment between
miners and everyone else (miners can afford huge node costs because they
have income; but the full-node-using-users that need to exist in plenty
to keep miners honest do not), or the centralization pressures (N miners
can reduce their storage/bandwidth/cpu costs N fold by centralizing).

A dynamic limit can be combined with a hard upper to at least be no
worse than a hard upper with respect to those two points.


Another related point which has been tendered before but seems to have
been ignored is that changing how the size limit is computed can help
better align incentives and thus reduce risk.  E.g. a major cost to the
network is the UTXO impact of transactions, but since the limit is blind
to UTXO impact a miner would gain less income if substantially factoring
UTXO impact into its fee calculations; and without fee impact users have
little reason to optimize their UTXO behavior.   This can be corrected
by augmenting the size used for limit calculations.   An example would
be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
3*utxo_consumed_size).   The reason for the MAX is so that a block
which cleaned a bunch of big UTXO could not break software by being
super large, the utxo_consumed basically lets you credit your fees by
cleaning the utxo set; but since you get less credit than you cost the
pressure should be downward, but not hugely so. The 1/2, 4, and 3 I regard
as parameters on which I don't have very strong opinions; they could be
set based on observations in the network today (e.g. adjusted so that a
normal cleaning transaction can hit the minimum size).  One way to think
about this is that it makes it so that every output you create prepays
the transaction fees needed to spend it by shifting space from the
current block to a future block. The fact that the prepayment is not
perfectly efficient reduces the incentive for miners to create lots of
extra outputs when they have room left in their block in order to store
space to use later [an issue that is potentially less of a concern with a
dynamic size limit].  With the right parameters there would never be such
a thing as a dust output (one which costs more to spend than it is worth).
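
A minimal sketch of the augmented size above, with the 1/2, 4, 3 parameters
exactly as given (only the function and argument names are assumed):

    def augmented_tx_size(real_size, utxo_created_size, utxo_consumed_size):
        """Bytes counted against the block limit for one transaction.

        Created outputs are charged at 4x their serialized size and consumed
        outputs are credited at 3x, so every output created prepays
        (imperfectly) the space needed to spend it later. The floor of half
        the real size keeps a UTXO-cleaning transaction from counting as
        nearly free, which would otherwise let a block's real size blow up.
        """
        return max(real_size >> 1,
                   real_size + 4 * utxo_created_size - 3 * utxo_consumed_size)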

(likewise the sigops limit should be counted correctly and turned into
size augmentation (ones that get run by the txn); which would greatly
simplify selection rules: maximize income within a single scalar limit)

(*) I believe my currently favored formulation of general dynamic control
idea is that each miner expresses in their coinbase a preferred size
between some minimum (e.g. 500k) and the miner's effective-maximum;
the actual block size can be up to the effective maximum even if the
preference is lower (you're not forced to make a lower block because you
stated you wished the limit were lower).  There is a computed maximum
which is the 33-rd percentile of the last 2016 coinbase preferences
minus computed_max/52 (rounding up to 1) bytes -- or 500k if that's
larger. The effective maximum is X bytes more, where X on the range
[0, computed_maximum] e.g. the miner can double the size of their
block at most. If X > 0, then the miners must also reach a target
F(x/computed_maximum) times the bits-difficulty; with F(x) = x^2+1  ---
so the maximum penalty is 2, with a quadratic shape;  for a given mempool
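
A minimal sketch of that formulation (the 33rd percentile, the
computed_max/52 decrement, the 500k floor, and F(x) = x^2 + 1 are from the
description above; the rounding and the quantity the /52 decrement applies to
are assumptions):

    import math

    FLOOR = 500_000  # bytes

    def computed_maximum(coinbase_prefs):
        """coinbase_prefs: preferred sizes from the last 2016 coinbases."""
        prefs = sorted(coinbase_prefs)
        p33 = prefs[len(prefs) // 3]             # ~33rd percentile (rounding assumed)
        decrement = max(1, math.ceil(p33 / 52))  # "/52, rounding up to 1"; base assumed
        return max(FLOOR, p33 - decrement)

    def difficulty_multiplier(extra_bytes, comp_max):
        """Penalty for an effective maximum that is extra_bytes above computed_maximum."""
        x = extra_bytes / comp_max               # X constrained to [0, computed_maximum]
        return x * x + 1.0                       # F(x) = x^2 + 1, so the penalty tops out at 2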

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Raystonn
It seems to me all this would do is encourage 0-transaction blocks, crippling 
the network.  Individual blocks don't have a maximum block size; they have an 
actual block size.  Rational miners would pick block sizes that minimize 
difficulty, lowering the effective maximum block size as defined by the optimal 
size for rational miners.  This would be a tragedy of the commons.

In addition to that, average block confirmation time, and hence the rate of 
inflation of the bitcoin currency, would now be subject to manipulation.  This 
undermines a core value of Bitcoin.

 On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach m...@friedenbach.org wrote:

   * For each block, the miner is allowed to select a different difficulty 
 (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, and 
 this miner-selected difficulty is used for the proof of work check. In 
 addition to adjusting the hashcash target, selecting a different difficulty 
 also raises or lowers the maximum block size for that block by a function of 
 the difference in difficulty.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
interface, the transaction is shown with its transacted amount and the
approved fee range. All of the inputs used are placed on hold until the
transaction gets a confirmation. As time goes by and it looks like the
transaction is not getting accepted, successively higher fee versions are
released. You can opt-out and send a no-fee or base-fee-only transaction,
but that should not be the default.

On the receiving end, local policy controls how much fee should be spent
trying to obtain confirmations before alerting the user, if there are fees
available in the hot wallet to do this. The receiving wallet then adds its
own fees via a spend if it thinks insufficient fees were provided to get a
confirmation. Again, this should all be automated so long as there is a hot
wallet on the receiving end.

Is this more complicated than now, where blocks are not full and clients
generally don't have to worry about their transactions eventually
confirming? Yes, it is significantly more complicated. But such
complication is unavoidable. It is a simple fact that the block size cannot
increase so much as to cover every single use by every single person in the
world, so there is no getting around the reality that we will have to
transition into an economy where at least one side has to pay up for a
transaction to get confirmation, at all. We are going to have to deal with
this issue whether it is now at 1MB or later at 20MB. And frankly, it'll be
much easier to do now.
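
A minimal sketch of the fee-laddering behavior described in the first
paragraph (the number of steps, the waiting period, and the callback names
are illustrative assumptions; signing and broadcasting are left as
caller-supplied stubs):

    import time

    def fee_ladder(low_fee, high_fee, steps):
        """Evenly spaced fee levels across the approved range, lowest first."""
        step = (high_fee - low_fee) / max(1, steps - 1)
        return [int(low_fee + i * step) for i in range(steps)]

    def send_with_rbf(sign_tx_with_fee, broadcast, is_confirmed,
                      low_fee, high_fee, steps=5, wait_secs=1800):
        """Pre-sign replacements across the fee range and release them as needed."""
        versions = [sign_tx_with_fee(fee) for fee in fee_ladder(low_fee, high_fee, steps)]
        for tx in versions:                  # lowest fee first
            broadcast(tx)
            deadline = time.time() + wait_secs
            while time.time() < deadline:
                if is_confirmed(tx):
                    return tx
                time.sleep(30)
        return None                          # still unconfirmed at the top of the range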

On Fri, May 8, 2015 at 4:15 PM, Aaron Voisine vois...@gmail.com wrote:

 That's fair, and we've implemented child-pays-for-parent for spending
 unconfirmed inputs in breadwallet. But what should the behavior be when
 those options aren't understood/implemented/used?

 My argument is that the less risky, more conservative default fallback
 behavior should be either non-propagation or delayed confirmation, which is
 generally what we have now, until we hit the block size limit. We still
 have lots of safe, non-controversial, easy to experiment with options to
 add fee pressure, causing users to economize on block space without
 resorting to dropping transactions after a prolonged delay.

 Aaron Voisine
 co-founder and CEO
 breadwallet.com

 On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach m...@friedenbach.org
 wrote:

 On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is
 using hard block size limits to enforce scarcity. Transactions with below
 market fees will hang in limbo for days and fail, instead of failing
 immediately by not propagating, or seeing degraded, long confirmation times
 followed by eventual success.


 There are already solutions to this which are waiting to be deployed as
 default policy to bitcoind, and need to be implemented in other clients:
 replace-by-fee and child-pays-for-parent.


