Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Arkady
On Thu, 7 May 2015, Gregory Maxwell wrote:

 Date: Thu, 7 May 2015 00:37:54 +
 From: Gregory Maxwell gmaxw...@gmail.com
 To: Matt Corallo bitcoin-l...@bluematt.me
 Cc: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Subject: Re: [Bitcoin-development] Block Size Increase
 
 Thanks Matt; I was actually really confused by this sudden push with
 not a word here or on Github--so much so that I responded on Reddit to
 people pointing to commits in Gavin's personal repository saying they
 were reading too much into it.

I saw this. I was also pointing this out to the people who were asking me.
A commit to a personal repository does not at first seem more than
experimental. sipa commits weird/neat things to private branches all the
time, after all.

 to share behavior. In the case of mining, we're trying to optimize the
 social good of POW security. (But the analogy applies in other ways too:

About the only argument IMO in favour of block size increases is to assume
that making more room in a block will make it attractive for more people to
use at some point in the future: increasing transaction velocity,
increasing economy size, increasing value overall.

 increases to the chain size are largely an externality; miners enjoy the
 benefits, everyone else takes the costs--either in reduced security or
 higher node operating costs.)

Who else but miners and pool operators will run full nodes when full nodes
are being shut down because they are too large and unwieldy to maintain? It
is already the case that casual users refuse to run full nodes. This fact
is indisputable. The only question remaining is, "Do we care?" Arguments
against users who feel that the dataset is too large to run a full node,
full-time, start from a premise that these users are a static and
irrelevant fraction. Is this even true? Do we care? I do. I will shortly
only be able to run half the nodes I currently do, thanks to the growth of
the blockchain at its current rate.

 One potential argument is that maybe miners would be _regulated_ to
 behave correctly. But this would require undermining the openness of the
 system--where anyone can mine anonymously--in order to enforce behavior,
 and that same enforcement mechanism would leave a political lever to
 impose additional rules that violate the extra properties of the system.

I would refuse to mine under such a regulated regime; moreover, I would
enjoy forking away from this, and, I suspect, the only miners who remain
would be those whose ultimate motivations do not coincide with the users'.
That is, the set of miners who are users, and the set of users who are
miners, would be wholly non-intersecting.

 So far the mining ecosystem has become incredibly centralized over time.

This is unfortunate but true.

 of the regular contributors to Bitcoin Core do. Many participants
 have never mined or only did back in 2010/2011... we've basically
 ignored the mining ecosystem, and this has had devastating effects,
 causing a latent undermining of the security model: hacking a dozen or
 so computers--operated under totally unknown and probably not strong
 security policies--could compromise the network at least at the tip...

The explicit form of the block dictated by the reference client and
agreed to by the people who were sold on bitcoin near the beginning
(myself included) was explicitly the notion that the rules were static;
that the nature of transaction foundations and the subsidies would not be
altered. Here we have a hardfork being contemplated which is not only
controversial, but does not even address some of the highest-utility and
most-requested features in people's hardfork wishlists.

The fact that mining has effectively been centralized directly implies
that destabilizing changes, such as the fork of the blockchain that some
well-heeled (and thus at least theoretically capable) people have
explicitly begun planning, will have an unknown, and completely
unforeseen, combined effect.

We can pretend that, "If merchants and miners and exchanges go along, then
who else matters?", but the reality is that the value in bitcoin exists
because *people* use it for real transactions: not miners, whose profits
are parasitically and fractionally based on the quality and strength of
the bitcoin economy as a whole; not exchanges, who lubricate transactions
in service to the economy; not even today's merchants, whose primary means
of accepting bitcoin seems to be to convert them instantly to fiat and not
participate meaningfully in the economy at all; not enriched felons; but
actual users themselves.

 Rightfully we should be regarding this as an emergency, and probably
 should have been since 2011.

There are two ways to look at it, assuming that the blocksize change
increases bitcoin's value to people after all: mining centralization will
be corrected; or, mining centralization will not be corrected.

I would argue that rapidly increasing 

[Bitcoin-development] Suggestion: Dynamic block size that updates like difficulty

2015-05-08 Thread Michael Naber
Why can't we have a dynamic block size limit that changes with difficulty, 
such that the block size cannot exceed 2x the mean size of the prior 
difficulty period?

I recently subscribed to this list so my apologies if this has been addressed 
already.
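As a sketch of the proposed rule (a hypothetical helper with made-up numbers, not code from any client):

```python
from statistics import mean

def next_size_limit(prior_period_sizes, multiplier=2.0):
    """Hypothetical retarget rule: cap the coming period's block size
    at `multiplier` times the mean block size over the prior
    difficulty period (2016 blocks)."""
    return multiplier * mean(prior_period_sizes)

# e.g. if the prior 2016 blocks averaged 400,000 bytes, the next
# period's cap would be 800,000 bytes.
```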


--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Matt Whitlock
Between all the flames on this list, several ideas were raised that did not get 
much attention. I hereby resubmit these ideas for consideration and discussion.

- Perhaps the hard block size limit should be a function of the actual block 
sizes over some trailing sampling period. For example, take the median block 
size among the most recent 2016 blocks and multiply it by 1.5. This allows 
Bitcoin to scale up gradually and organically, rather than having human beings 
guessing at what is an appropriate limit.

- Perhaps the hard block size limit should be determined by a vote of the 
miners. Each miner could embed a desired block size limit in the coinbase 
transactions of the blocks it publishes. The effective hard block size limit 
would be that size having the greatest number of votes within a sliding window 
of most recent blocks.

- Perhaps the hard block size limit should be a function of block-chain length, 
so that it can scale up smoothly rather than jumping immediately to 20 MB. This 
function could be linear (anticipating a breakdown of Moore's Law) or quadratic.
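The coinbase-vote variant (the second proposal) could be tallied roughly like this; the vote values and window size below are illustrative assumptions, since the actual encoding of a vote in a coinbase is not specified:

```python
from collections import Counter

def effective_limit(coinbase_votes):
    """Each element is the byte limit a miner embedded in the coinbase
    of one block in the sliding window; the effective hard limit is
    the most-voted value."""
    (size, _votes), = Counter(coinbase_votes).most_common(1)
    return size

# Window of 5 recent blocks: three miners voted 2 MB, two voted 1 MB,
# so the effective limit would be 2,000,000 bytes.
votes = [2_000_000, 1_000_000, 2_000_000, 1_000_000, 2_000_000]
```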

I would be in support of any of the above, but I do not support Mike Hearn's 
proposed jump to 20 MB. Hearn's proposal kicks the can down the road without 
actually solving the problem, and it does so in a controversial (step function) 
way.



Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Mike Hearn

  * Though there are many proposals floating around which could
 significantly decrease block propagation latency, none of them are
 implemented today.


With a 20mb cap, miners still have the option of the soft limit.

I would actually be quite surprised if there were no point along the road
from 1mb to 20mb where miners felt a need to throttle their block sizes
artificially, for the exact reason you point out: propagation delays.

But we don't *need* to have fancy protocol upgrades implemented right now.
All we need is to demolish one bottleneck (the hard cap) so we can then
move on and demolish the next one (whatever that is, probably faster
propagation). Scaling is a series of walls we punch through as we encounter
them. One down, onto the next. We don't have to tackle them all
simultaneously.

FWIW, I don't think the GFW just triggers packet loss these days. It's
blocked port 8333 entirely.

 * I'd very much like to see someone working on better scaling
 technology ... I know StrawPay is working on development,


So this request is already satisfied, isn't it? As you point out, expecting
more at this stage in development is unreasonable; there's nothing for
anyone to experiment with or commit to.

They have code here, by the way:

   https://github.com/strawpay

You can find their fork of MultiBit HD, their implementation library, etc.
They've contributed patches and improvements to the payment channels code
we wrote.


  * I'd like to see some better conclusions to the discussion around
 long-term incentives within the system.


What are your thoughts on using assurance contracts to fund network
security?

I don't *know* if hashing assurance contracts (HACs) will work. But I don't
know they won't work either. And right now I'm pretty sure that plain old
fee pressure won't work. Demand doesn't outstrip supply forever - people
find substitutes.


Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Jeff Garzik
That reminds me - I need to integrate the patch that automatically sweeps
anyone-can-pay transactions for a miner.


On Thu, May 7, 2015 at 7:32 PM, Tier Nolan tier.no...@gmail.com wrote:

 One of the suggestions to avoid the problem of fees going to zero is
 assurance contracts.  This lets users (perhaps large merchants or
 exchanges) pay to support the network.  If insufficient people pay for the
 contract, then it fails.

 Mike Hearn suggests one way of achieving it, but it doesn't actually
 create an assurance contract.  Miners can exploit the system to convert the
 pledges into donations.

 https://bitcointalk.org/index.php?topic=157141.msg1821770#msg1821770

 Consider a situation in the future where the minting fee has dropped to
 almost zero.  A merchant wants to cause block number 1 million to
 effectively have a minting fee of 50BTC.

 He creates a transaction with one input (0.1BTC) and one output (50BTC)
 and signs it using SIGHASH_ANYONE_CAN_PAY.  The output pays to OP_TRUE.
 This means that anyone can spend it.  The miner who includes the
 transaction will send it to an address he controls (or pay to fee).  The
 transaction has a locktime of 1 million, so that it cannot be included
 before that point.

 This transaction cannot be included in a block, since the inputs are lower
 than the outputs.  The SIGHASH_ANYONE_CAN_PAY flag means that others can
 pledge additional funds: they add more inputs, with the same sighash, to
 add more money.

 There would need to be some kind of notice board system for these
 pledges, but if enough people pledge, then a valid transaction can be
 created.  It is in miners' interests to maintain such a notice board.

 The problem is that it counts as a pure donation.  Even if only 10BTC has
 been pledged, a miner can just add 40BTC of his own money and finish the
 transaction.  He nets the 10BTC of the pledges if he wins the block.  If he
 loses, nobody sees his 40BTC transaction.  The only risk is if his block is
 orphaned and somehow the miner who mines the winning block gets his 40BTC
 transaction into his block.

 The assurance contract was supposed to mean "If the effective minting fee
 for block 1 million is 50 BTC, then I will pay 0.1BTC."  By adding his
 40BTC to the transaction, the miner converts it to a pure donation.

 The key point is that *other* miners don't get a 50BTC reward if they find
 the block, so it doesn't push up the total hashing power committed to the
 blockchain the way a 50BTC minting fee would.  That is the whole point of
 the assurance contract.

 OP_CHECKLOCKTIMEVERIFY could be used to solve the problem.

 Instead of paying to OP_TRUE, the transaction should pay 50 BTC to "1
 million OP_CHECKLOCKTIMEVERIFY OP_TRUE" and 0.01BTC to "OP_TRUE".

 This means that the transaction could be included into a block well in
 advance of the 1 million block point.  Once block 1 million arrives, any
 miner would be able to spend the 50 BTC.  The 0.01BTC is the fee for the
 block the transaction is included in.

 If the contract hasn't been included in a block well in advance, pledgers
 would be recommended to spend their pledged inputs.
 It can be used to pledge to many blocks at once.  The transaction could
 pay out to lots of 50BTC outputs, with the locktime increasing for each
 output.

 For high value transactions, it isn't just the POW of the next block that
 matters but all the blocks that are built on top of it.

 A pledger might want to say "I will pay 1BTC if the next 100 blocks all
 have at least an effective minting fee of 50BTC."






-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/


Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Benjamin
Interesting.

1. How do you know who was first? If one node can figure out where
more transactions happen, he can gain an advantage by being closer to
them. Mining would not be fair.

2. "A merchant wants to cause block number 1 million to effectively
have a minting fee of 50BTC." - why should he do that? That's the
entire tragedy-of-the-commons problem, no?

On Fri, May 8, 2015 at 11:49 AM, Mike Hearn m...@plan99.net wrote:
 Looks like a neat solution, Tier.





Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Mike Hearn

 Alan argues that 7 tps is a couple orders of magnitude too low


By the way, just to clear this up - the real limit at the moment is more
like 3 tps, not 7.

The 7 transactions/second figure comes from calculations I did years ago,
in 2011. I did them a few months before the sendmany command was
released, so back then almost all transactions were small. After sendmany,
and as people developed custom wallets, etc., the average transaction size
went up.
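The arithmetic behind both figures is straightforward; the average transaction sizes below are ballpark assumptions for illustration, not measured values:

```python
def max_tps(block_size_bytes, avg_tx_bytes, block_interval_s=600):
    """Throughput ceiling: transactions that fit in one block,
    divided by the average block interval (10 minutes)."""
    return block_size_bytes / avg_tx_bytes / block_interval_s

# 1 MB blocks with ~250-byte transactions: about 6.7 tps.
# 1 MB blocks with ~500-byte transactions: about 3.3 tps.
```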


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mike Hearn
There are certainly arguments to be made for and against all of these
proposals.

The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I suppose
to make the process of getting consensus easier. It is the simplest thing
that can possibly work.

I would like to see the process of chain forking becoming less traumatic. I
remember Gavin, Jeff and I once considered (on stage at a conference??)
that maybe there should be a scheduled fork every year, so people know when
to expect them.

If everything goes well, I see no reason why 20mb would be the limit
forever.


Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Mike Hearn
Looks like a neat solution, Tier.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Clément Elbaz
Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
favorite.

I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious vote and those same
people's behavior.

Relying on an intentional vote made consciously by miners choosing a
configuration value can lead to twisted results if their actual behavior
doesn't correlate with their vote (e.g., they all vote for a small block size
because it is the default configuration of their software, and then they
fill it completely all the time and everything crashes).

The second problem with proposal #2 is that if Gavin and Mike are right,
there is simply no time to gather a meaningful amount of votes over the
coinbases, after the fork but before the Bitcoin scalability crash.

I like proposal #1 because the vote is made using already available data.
Also there is no possible mismatch between behavior and vote. As a miner
you vote by choosing to create a big (or small) block, and your actions
reflect your vote. It is simple and straightforward.

My feeling on proposal #3 is that it mixes apples and oranges a bit,
but I may not be seeing all the implications.

On Fri, May 8, 2015 at 09:21, Matt Whitlock b...@mattwhitlock.name wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.

 - Perhaps the hard block size limit should be determined by a vote of the
 miners. Each miner could embed a desired block size limit in the coinbase
 transactions of the blocks it publishes. The effective hard block size
 limit would be that size having the greatest number of votes within a
 sliding window of most recent blocks.

 - Perhaps the hard block size limit should be a function of block-chain
 length, so that it can scale up smoothly rather than jumping immediately to
 20 MB. This function could be linear (anticipating a breakdown of Moore's
 Law) or quadratic.

 I would be in support of any of the above, but I do not support Mike
 Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
 road without actually solving the problem, and it does so in a
 controversial (step function) way.





Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Tier Nolan
Just to clarify the process.

Pledgers create transactions using the following template and broadcast
them.  The p2p protocol could be modified to allow this, or it could be a
separate system.


Input: 0.01 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Output: 50 BTC
  Paid to: 1 million OP_CHECKLOCKTIMEVERIFY OP_TRUE

Output: 0.01 BTC
  Paid to: OP_TRUE

This transaction is invalid, since the inputs don't pay for the outputs.
The advantage of the SIGHASH_ANYONE_CAN_PAY flag is that other people
can add additional inputs without making the signature invalid.  Normally,
any change to the transaction would make a signature invalid.

Eventually, enough other users have added pledges and a valid transaction
can be broadcast.


Input: 0.01 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Input: 1.2 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Input: 5 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

etc.

Input: 1.3 BTC
  Signed with SIGHASH_ANYONE_CAN_PAY

Output: 50 BTC
  Paid to: 1 million OP_CHECKLOCKTIMEVERIFY OP_TRUE

Output: 0.01 BTC
  Paid to: OP_TRUE

This transaction can be submitted to the main network.  Once it is included
into the blockchain, it is locked in.

In this example, it might be included in block 999,500.  The 0.01BTC output
(and any excess over 50BTC) can be collected by the block 999,500 miner.

The OP_CHECKLOCKTIMEVERIFY opcode means that the 50BTC output cannot be
spent until block 1 million.  Once block 1 million arrives, the output is
completely unprotected.  This means that the miner who mines block 1
million can simply take it, by including his own transaction that sends it
to an address he controls.  It would be irrational to include somebody
else's transaction which spent it.

If by block 999,900, the transaction hasn't been completed (due to not
enough pledgers), the pledgers can spend the coin(s) that they were going
to use for their pledge.  This invalidates those inputs and effectively
withdraws from the pledge.
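The completion condition can be sketched in a few lines (a simplification on my part: values as plain numbers, no script or signature checking, fees ignored beyond the explicit outputs):

```python
def pledge_is_complete(input_values, output_values):
    """A transaction is only valid once the summed inputs cover the
    outputs.  With SIGHASH_ANYONE_CAN_PAY each pledger signs only their
    own input, so new inputs can be added without invalidating the
    signatures already collected."""
    return sum(input_values) >= sum(output_values)

# The lone 0.01 BTC pledge cannot complete the 50 BTC contract:
print(pledge_is_complete([0.01], [50, 0.01]))                     # False
# Once enough pledges accumulate, the transaction can be broadcast:
print(pledge_is_complete([0.01, 1.2, 5, 45, 0.01], [50, 0.01]))   # True
```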

On Fri, May 8, 2015 at 11:01 AM, Benjamin benjamin.l.cor...@gmail.com
wrote:

 2. "A merchant wants to cause block number 1 million to effectively
 have a minting fee of 50BTC." - why should he do that? That's the
 entire tragedy of the commons problem, no?


No, the pledger is saying that he will only pay 0.01BTC if the miner gets a
reward of 50BTC.

Imagine a group of 1000 people who want to make a donation of 50BTC to
something.  They all say that they will donate 0.05BTC, but only if
everyone else donates.

It still isn't perfect.  Everyone has an incentive to wait until the last
minute to pledge.


[Bitcoin-development] Removing transaction data from blocks

2015-05-08 Thread Arne Brutschy
Hello,

At DevCore London, Gavin mentioned the idea that we could get rid of 
sending full blocks. Instead, newly minted blocks would only be 
distributed as block headers plus all hashes of the transactions 
included in the block. The assumption would be that nodes already have 
the majority of these transactions in their mempool.

The advantages are clear: it's more efficient, as we would send 
transactions only once over the network, and it's fast as the resulting 
blocks would be small. Moreover, we would get rid of the blocksize limit 
for a long time.

Unfortunately, I am too ignorant of bitcoin core's internals to judge 
the changes required to make this happen. (I guess we'd require a new 
block format and a way to bulk-request missing transactions.)

However, I'm curious to hear what others with a better grasp of bitcoin 
core's internals have to say about it.

Regards,
Arne
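A minimal sketch of the reconciliation step Arne describes, with hypothetical data shapes (the real P2P messages and bulk-request mechanism would differ):

```python
def missing_txids(block_txids, mempool_txids):
    """Given a block announced as header + txid list, return the txids
    a node still has to bulk-request before it can reconstruct the
    full block locally."""
    return [t for t in block_txids if t not in mempool_txids]

# A node that already holds tx "a" and "b" requests only "c".
print(missing_txids(["a", "b", "c"], {"a", "b"}))  # ['c']
```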



Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Alan Reiner

This isn't about everyone's coffee.  This is about an absolute minimum
amount of participation by people who wish to use the network.   If our
goal is really for bitcoin to be a global, open transaction
network that makes money fluid, then 7tps is already a failure.  If even
5% of the world (350M people) were using the network for 1 tx per month
(perhaps to open payment channels, or shift money between side chains),
we'd be above 100 tps.  And that doesn't include all the
non-individuals (organizations) that want to use it.

The goals of "a global transaction network" and "everyone must be able
to run a full node with their $200 Dell laptop" are not compatible.  We
need to accept that a global transaction system cannot be
fully/constantly audited by everyone and their mother.  The important
feature of the network is that it is open and anyone *can* get the
history and verify it.  But not everyone is required to.  Trying to
promote a system where the history can be forever handled by a low-end
PC is already falling out of reach, even with our minuscule 7 tps.
Clinging to that goal needlessly limits the capability for the network
to scale to be a useful global payments system.



On 05/07/2015 03:54 PM, Jeff Garzik wrote:
 On Thu, May 7, 2015 at 3:31 PM, Alan Reiner etothe...@gmail.com wrote:

 (2) Leveraging fee pressure at 1MB to solve the problem is
 actually really a bad idea.  It's really bad while Bitcoin is
 still growing, and relying on fee pressure at 1 MB severely
 impacts attractiveness and adoption potential of Bitcoin (due to
 high fees and unreliability).  But more importantly, it ignores
 the fact that 7 tps is pathetic for a global transaction
 system.  It is a couple orders of magnitude too low for any
 meaningful commercial activity to occur.  If we continue with a
 cap of 7 tps forever, Bitcoin *will* fail.  Or at best, it will
 fail to be useful for the vast majority of the world (which
 probably leads to failure).  We shouldn't be talking about fee
 pressure until we hit 700 tps, which is probably still too low. 

  [...]

 1) Agree that 7 tps is too low

 2) Where do you want to go?  Should bitcoin scale up to handle all the
 world's coffees? 

 This is hugely unrealistic.  700 tps is 100MB blocks, 14.4 GB/day --
 just for a single feed.  If you include relaying to multiple nodes,
 plus serving 500 million SPV clients en grosse, who has the capacity
 to run such a node?  By the time we get to fee pressure, in your
 scenario, our network node count is tiny and highly centralized.

 3) In RE fee pressure -- Do you see the moral hazard to a
 software-run system?  It is an intentional, human decision to flood
 the market with supply, thereby altering the economics, forcing fees
 to remain low in the hopes of achieving adoption.  I'm pro-bitcoin and
 obviously want to see bitcoin adoption - but I don't want to sacrifice
 every decentralized principle and become a central banker in order to
 get there.
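Jeff's bandwidth figure checks out, assuming one block every 10 minutes (144 blocks/day) and counting only a single feed:

```python
def daily_feed_gb(block_size_mb, blocks_per_day=144):
    """Data volume of a single block feed per day, in GB
    (1000 MB = 1 GB for round numbers)."""
    return block_size_mb * blocks_per_day / 1000

# 100 MB blocks: 14.4 GB/day, before any relaying to peers or
# serving of SPV clients.
print(daily_feed_gb(100))  # 14.4
```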




Re: [Bitcoin-development] Removing transaction data from blocks

2015-05-08 Thread Pieter Wuille
So, there are several ideas about how to reduce the size of blocks being
sent on the network:
* Matt Corallo's relay network, which internally works by remembering the
last 5000 (I believe?) transactions sent by the peer, and allowing the peer
to backreference those rather than retransmit them inside block data. This
exists and works today.
* Gavin Andresen's IBLT based set reconciliation for blocks based on what a
peer expects the new block to contain.
* Greg Maxwell's network block coding, which is based on erasure coding,
and also supports sharding (everyone sends some block data to everyone,
rather than fetching from one peer).

However, the primary purpose is not to reduce bandwidth (though that is a
nice side advantage). The purpose is reducing propagation delay. Larger
propagation delays across the network (relative to the inter-block period)
result in higher forking rates. If the forking rate gets very high, the
network may fail to converge entirely, but even long before that point, the
higher the forking rate is, the higher the advantage of larger (and better
connected) pools over smaller ones. This is why, in my opinion,
guaranteeing fast propagation is one of the most essential responsibilities
of full nodes in avoiding centralization pressure.

Also, none of this would let us get rid of the block size limit at all. All
transactions still have to be transferred and processed, and due to
inherent latencies of communication across the globe, the higher the
transaction rate is, the higher the number of transactions in blocks will
be that peers have not yet heard about. You can institute a policy to not
include too recent transactions in blocks, but again, this favors larger
miners over smaller ones.

Also, if the end goal is propagation delay, just minimizing the amount of
data transferred is not enough. You also need to make sure the
communication mechanism does not add huge processing overheads or adds
unnecessary roundtrips. In fact, this is the key difference between the 3
techniques listed above, and several people are working on refining and
optimizing these mechanisms to make them practically usable.
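As a toy illustration of the backreference scheme (the 5000-transaction window is the figure quoted above; the class and encoding are invented for illustration, not the relay network's actual wire format):

```python
from collections import OrderedDict

class RelayCodec:
    """Toy sketch: remember the last N transactions sent to a peer and
    encode blocks as backreferences into that window where possible."""

    def __init__(self, window=5000):
        self.window = window
        self.next_id = 0
        self.sent = OrderedDict()  # txid -> sequence number

    def remember(self, txid):
        if txid not in self.sent:
            self.sent[txid] = self.next_id
            self.next_id += 1
            if len(self.sent) > self.window:
                self.sent.popitem(last=False)  # forget the oldest

    def encode_block(self, txids, full_tx):
        out = []
        for txid in txids:
            if txid in self.sent:
                out.append(("ref", self.sent[txid]))  # small backreference
            else:
                out.append(("tx", full_tx[txid]))     # send in full
        return out
```

Only transactions the peer has not already seen are retransmitted, which is why the bandwidth saving is a side effect: the real win is that a mostly-known block encodes to almost nothing and propagates fast.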
On May 8, 2015 7:23 AM, Arne Brutschy abruts...@xylon.de wrote:

 Hello,

 At DevCore London, Gavin mentioned the idea that we could get rid of
 sending full blocks. Instead, newly minted blocks would only be
 distributed as block headers plus all hashes of the transactions
 included in the block. The assumption would be that nodes already have
 the majority of these transactions in their mempool.

 The advantages are clear: it's more efficient, as we would send
 transactions only once over the network, and it's fast as the resulting
 blocks would be small. Moreover, we would get rid of the blocksize limit
 for a long time.

 Unfortunately, I am too ignorant of bitcoin core's internals to judge
 the changes required to make this happen. (I guess we'd require a new
 block format and a way to bulk-request missing transactions.)

 However, I'm curious to hear what others with a better grasp of bitcoin
 core's internals have to say about it.

 Regards,
 Arne





Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Steven Pine
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.


Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Alan Reiner
On 05/08/2015 01:13 AM, Tom Harding wrote:
 On 5/7/2015 7:09 PM, Jeff Garzik wrote:
 G proposed 20MB blocks, AFAIK - 140 tps
 A proposed 100MB blocks - 700 tps
 For ref,
 Paypal is around 115 tps
 VISA is around 2000 tps (perhaps 4000 tps peak)


For reference, I'm not proposing 100 MB blocks right now.  I was
simply suggesting that if Bitcoin is to *ultimately* achieve the goal of
being a globally useful payment rail, 7 tps is embarrassingly small,
even with off-chain transactions.  It should be a no-brainer that block
size has to go up.

My goal was to bring some long-term perspective into the discussion.  I
don't know if 100 MB blocks will *actually* be necessary for Bitcoin in
20 years, but it's feasible that it will be.  It's an open, global
payments system.  Therefore, we shouldn't be arguing about whether 1 MB
blocks are sufficient--they very clearly are not.  And admitting this as a
valid point is also an admission that not everyone in the world will be
able to run a full node in 20 years.

I don't think there's a solution that can accommodate all future
scenarios, nor that we can even find a solution right now that avoids
more hard forks in the future.   But the goal of "everyone should be
able to download and verify the world's global transactions on a
smartphone" is a non-starter and should not drive decisions.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Alex Mizrahi
Adaptive schedules, i.e. those where block size limit depends not only on
block height, but on other parameters as well, are surely attractive in the
sense that the system can adapt to the actual use, but they also open a
possibility of a manipulation.

E.g. one of mining companies might try to bankrupt other companies by
making mining non-profitable. To do that they will accept transactions with
ridiculously low fees (e.g. 1 satoshi per transaction). Of course, they
will suffer losses themselves, but they might be able to survive that
if they have access to financial resources (e.g. companies backed by banks
and such will have an advantage).
Once competitors close down their mining operations, they can drive fees
upwards.

So if you don't want to open room for manipulation (which is very hard to
analyze), it is better to have a block size hard limit which depends only
on block height.
On top of that there might be a soft limit which is enforced by the
majority of miners.
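A limit that depends only on block height, as suggested, is trivially predictable by every participant. A sketch with purely illustrative numbers (the doubling interval and cap are not a concrete proposal from the thread):

```python
def max_block_size(height):
    # A limit that is a pure function of block height: start at 1 MB
    # and double every 210,000 blocks, capped at 32 MB.  All numbers
    # here are illustrative, not a concrete proposal from the thread.
    base = 1_000_000
    doublings = min(height // 210_000, 5)
    return base << doublings
```

Because the schedule depends on nothing miners control, there is nothing for a well-funded pool to manipulate.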


Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Andrew
On Fri, May 8, 2015 at 2:59 PM, Alan Reiner etothe...@gmail.com wrote:


 This isn't about everyone's coffee.  This is about an absolute minimum
 amount of participation by people who wish to use the network.   If our
 goal is really for bitcoin to be a global, open transaction network
 that makes money fluid, then 7 tps is already a failure.  If even 5% of the
 world (350M people) was using the network for 1 tx per month (perhaps to
 open payment channels, or shift money between side chains), we'll be above
 100 tps.  And that doesn't include all the non-individuals (organizations)
 that want to use it.


 The goals of "a global transaction network" and "everyone must be able to
 run a full node with their $200 Dell laptop" are not compatible.  We need
 to accept that a global transaction system cannot be fully/constantly
 audited by everyone and their mother.  The important feature of the network
 is that it is open and anyone *can* get the history and verify it.  But not
 everyone is required to.   Trying to promote a system where the history
 can be forever handled by a low-end PC is already falling out of reach,
 even with our minuscule 7 tps.  Clinging to that goal needlessly limits the
 capability for the network to scale to be a useful global payments system


These are good points and got me thinking (but I think you're wrong). If we
really want each of the 10 billion people soon using bitcoin once per
month, that will require 500MB blocks. That's about 2 TB per month. And if
you relay it to 4 peers, it's 10 TB per month. Which I suppose is doable
for a home desktop, so you can just run a pruned full node with all
transactions from the past month. But how do you sync all those
transactions if you've never done this before or it's been a while since
you did? I think it currently takes at least 3 hours to fully sync 30 GB of
transactions. So 2 TB will take 8 days, then you take a bit more time to
sync the days that passed while you were syncing. So that's doable, but at
a certain point, like 10 TB per month (still only 5 transactions per month
per person), you will need 41 days to sync that month, so you will never
catch up. So I think in order to keep the very important property of anyone
being able to start clean and verify the thing, then we need to think of
bitcoin as a system that does transactions for a large number of users at
once in one transaction, and not a system on which each person makes a
~monthly transaction. We need to therefore rely on sidechains,
treechains, lightning channels, etc...

I'm not a bitcoin wizard and this is just my second post on this mailing
list, so I may be missing something. So please someone, correct me if I'm
wrong.
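The sync arithmetic above works out as follows (taking the quoted 3 hours per 30 GB as a fixed processing rate):

```python
# The sync arithmetic above, taking the quoted rate of 3 hours per
# 30 GB of transactions as fixed.
HOURS_PER_GB = 3 / 30

def days_to_sync(tb):
    return tb * 1000 * HOURS_PER_GB / 24

print(days_to_sync(2))   # ~8.3 days for one month of 500MB blocks
print(days_to_sync(10))  # ~41.7 days: a month's data takes over a month
```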




 On 05/07/2015 03:54 PM, Jeff Garzik wrote:

  On Thu, May 7, 2015 at 3:31 PM, Alan Reiner etothe...@gmail.com wrote:


  (2) Leveraging fee pressure at 1MB to solve the problem is actually
 really a bad idea.  It's really bad while Bitcoin is still growing, and
 relying on fee pressure at 1 MB severely impacts attractiveness and
 adoption potential of Bitcoin (due to high fees and unreliability).  But
 more importantly, it ignores the fact that for a 7 tps is pathetic for a
 global transaction system.  It is a couple orders of magnitude too low for
 any meaningful commercial activity to occur.  If we continue with a cap of
 7 tps forever, Bitcoin *will* fail.  Or at best, it will fail to be
 useful for the vast majority of the world (which probably leads to
 failure).  We shouldn't be talking about fee pressure until we hit 700 tps,
 which is probably still too low.

  [...]

  1) Agree that 7 tps is too low

  2) Where do you want to go?  Should bitcoin scale up to handle all the
 world's coffees?

  This is hugely unrealistic.  700 tps is 100MB blocks, 14.4 GB/day --
 just for a single feed.  If you include relaying to multiple nodes, plus
 serving 500 million SPV clients en grosse, who has the capacity to run such
 a node?  By the time we get to fee pressure, in your scenario, our network
 node count is tiny and highly centralized.

  3) In RE fee pressure -- Do you see the moral hazard to a software-run
 system?  It is an intentional, human decision to flood the market with
 supply, thereby altering the economics, forcing fees to remain low in the
 hopes of achieving adoption.  I'm pro-bitcoin and obviously want to see
 bitcoin adoption - but I don't want to sacrifice every decentralized
 principle and become a central banker in order to get there.





Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Jeff Garzik
On Fri, May 8, 2015 at 10:59 AM, Alan Reiner etothe...@gmail.com wrote:


 This isn't about everyone's coffee.  This is about an absolute minimum
 amount of participation by people who wish to use the network.   If our
 goal is really for bitcoin to be a global, open transaction network
 that makes money fluid, then 7 tps is already a failure.  If even 5% of the
 world (350M people) was using the network for 1 tx per month (perhaps to
 open payment channels, or shift money between side chains), we'll be above
 100 tps.  And that doesn't include all the non-individuals (organizations)
 that want to use it.

 The goals of "a global transaction network" and "everyone must be able to
 run a full node with their $200 Dell laptop" are not compatible.  We need
 to accept that a global transaction system cannot be fully/constantly
 audited by everyone and their mother.  The important feature of the network
 is that it is open and anyone *can* get the history and verify it.  But not
 everyone is required to.   Trying to promote a system where the history can
 be forever handled by a low-end PC is already falling out of reach, even
 with our miniscule 7 tps.  Clinging to that goal needlessly limits the
 capability for the network to scale to be a useful global payments system


To repeat, the very first point in my email reply was: "Agree that 7 tps is
too low."  Never was it said that bit

Therefore a reply arguing against the low end is nonsense, and the relevant
question remains on the table.

How high do you want to go - and can Layer 1 bitcoin really scale to get
there?

It is highly disappointing to see people endorse "moar bitcoin volume!"
with zero thinking behind that besides "adoption!"  Need to actually
project what bitcoin looks like at the desired levels, what network
resources are required to get to those levels -- including traffic to serve
those SPV clients via P2P -- and then work backwards from that to see who
can support it, and then work backwards to discern a maximum tps.

-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.  https://bitpay.com/


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Bryan Bishop
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock b...@mattwhitlock.name wrote:
 - Perhaps the hard block size limit should be a function of the actual block 
 sizes over some
 trailing sampling period. For example, take the median block size among the 
 most recent
 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually 
 and organically,
 rather than having human beings guessing at what is an appropriate limit.
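The quoted rule is simple enough to state directly (a literal sketch of the suggestion; `statistics.median` averages the two middle values for an even-length window):

```python
import statistics

def next_hard_limit(recent_block_sizes):
    # Median block size over the trailing 2016 blocks, times 1.5,
    # exactly as stated in the quoted proposal.
    assert len(recent_block_sizes) == 2016
    return statistics.median(recent_block_sizes) * 1.5
```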

Block contents can be grinded much faster than hashgrinding and
mining. There is a significant run-away effect there, and it also
works in the gradual sense as a miner probabilistically mines large
blocks that get averaged into that 2016 median block size computation.
At least this proposal would be a slower way of pushing out miners and
network participants that can't handle 100 GB blocks immediately.  As
the size of the blocks are increased, low-end hardware participants
have to fall off the network because they no longer meet the minimum
performance requirements. Adjustment might become severely mismatched
with general economic trends in data storage device development or
availability or even current-market-saturation of said storage
devices. With the assistance of transaction stuffing or grinding, that
2016 block median metric can be gamed to increase faster than other
participants can keep up with or, perhaps worse, in a way that was
unintended by developers yet known to be a failure mode. These are
just some issues to keep in mind and consider.

- Bryan
http://heybryan.org/
1 512 203 0507



Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Peter Todd
On Fri, May 08, 2015 at 12:03:04PM +0200, Mike Hearn wrote:
 
   * Though there are many proposals floating around which could
  significantly decrease block propagation latency, none of them are
  implemented today.
 
 
 With a 20mb cap, miners still have the option of the soft limit.

The soft-limit is there so that miners themselves produce smaller blocks; the
soft-limit does not prevent other miners from producing larger blocks.

As we're talking about ways that other miners can use 20MB blocks to
harm the competition, talking about the soft-limit is irrelevant.
Similarly, as security engineers we must plan for the worst case; as
we've seen before by your campaigns to raise the soft-limit(1) even at a
time when the vast majority of transaction volume was from one user
(SatoshiDice) soft-limits are an extremely weak form of control.

For the purposes of discussing blocksize increase requirements we can
stop talking about the soft-limit.

1) https://bitcointalk.org/index.php?topic=149668.0

-- 
'peter'[:-1]@petertodd.org
09344ba165781ee352f93d657c8b098c8e518e6011753e59




Re: [Bitcoin-development] Assurance contracts to fund the network with OP_CHECKLOCKTIMEVERIFY

2015-05-08 Thread Peter Todd
On Fri, May 08, 2015 at 06:00:37AM -0400, Jeff Garzik wrote:
 That reminds me - I need to integrate the patch that automatically sweeps
 anyone-can-pay transactions for a miner.

You mean anyone-can-spend?

I've got code that does this actually:

https://github.com/petertodd/replace-by-fee-tools/blob/master/spend-brainwallets-to-fees.py

Needs to have a feature where it replaces the txout set with simply
OP_RETURN-to-fees if the inputs don't sign the outputs though.
(SIGHASH_NONE for instance)

-- 
'peter'[:-1]@petertodd.org
0ee99382ac6bc043120085973b7b0378811c1acd8e3cdd9c




Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Peter Todd
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
 Matt,
 
 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

The problem with gating block creation on Bitcoin days destroyed is
there's a strong potential of giving big mining pools a huge advantage,
because they can contract with large Bitcoin owners and buy dummy
transactions with large numbers of Bitcoin days destroyed on demand
whenever they need more days-destroyed to create larger blocks.
Similarly, with appropriate SIGHASH flags such contracting can be done
by modifying *existing* transactions on demand.

Ultimately bitcoin days destroyed just becomes a very complex version of
transaction fees, and it's already well known that gating blocksize on
total transaction fees doesn't work.
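Bitcoin days destroyed is cheap to compute, and, as noted above, just as cheap to manufacture on demand by circulating large old balances (a rough sketch, treating one block as ~1/144 of a day):

```python
def bitcoin_days_destroyed(inputs, current_height):
    # inputs: (value_in_btc, height_when_created) pairs for the coins
    # being spent; one block is taken as roughly 1/144 of a day.
    return sum(value * (current_height - height) / 144.0
               for value, height in inputs)
```

A single large, long-dormant output spent into a dummy transaction dominates the metric, which is exactly the contracting opportunity described above.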

-- 
'peter'[:-1]@petertodd.org
0f53e2d214685abf15b6d62d32453a03b0d472e374e10e94




Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Tier Nolan
On Fri, May 8, 2015 at 5:37 PM, Peter Todd p...@petertodd.org wrote:

 The soft-limit is there so that miners themselves produce smaller blocks; the
 soft-limit does not prevent other miners from producing larger blocks.


I wonder if having a miner flag would be good for the network.

Clients for general users and merchants would have a less strict rule than
the rule for miners.  Miners who don't set their miner flag might get
orphaned off the chain.

For example, the limits could be setup as follows.

Clients: 20MB
Miners: 4MB

When in miner mode, the client would reject blocks larger than 4MB and
wouldn't build on them.  The reference client might even track both the
miner and the non-miner chain tips.

Miners would refuse to build on 5MB blocks, but merchants and general users
would accept them.

This allows the miners to soft fork the limit at some point in the future.
If 75% of miners decided to up the limit to 8MB, then all merchants and the
general users would accept the new blocks.  It could follow the standard
soft fork rules.

This is a more general version of the system where miners are allowed to
vote on the block size (subject to a higher limit).

A similar system is where clients track all header trees.  Your wallet
could warn you that there is an invalid tree that has more than 75% of the
hashing power and you might want to upgrade.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions against the combined value of the entire bitcoin ecosystem. Ideally
we would take no action for which we are not absolutely certain of the
ramifications, with the information that can be made available to us. But
of course that is not always possible: there are unknown-unknowns, time
pressures, and known-unknowns where information has too high a marginal
cost. So where certainty is unobtainable, we must instead hedge against
unwanted outcomes.

The proposal to raise the block size now by redefining a constant carries
with it risk associated with infrastructure scaling, centralization
pressures, and delaying the necessary development of a constraint-based fee
economy. It also simply kicks the can down the road in settling these
issues because a larger but realistic hard limit must still exist, meaning
a future hard fork may still be required.

But whatever new hard limit is chosen, there is also a real possibility
that it may be too high. The standard response is that it is a soft-fork
change to impose a lower block size limit, which miners could do with a
minimal amount of coordination. This is however undermined by the
unfortunate reality that so many mining operations are absentee-run
businesses, or run by individuals without a strong background in bitcoin
protocol policy, or with interests which are not well aligned with other
users or holders of bitcoin. We cannot rely on miners being vigilant about
issues that develop, as they develop, or able to respond in the appropriate
fashion that someone with full domain knowledge and an objective
perspective would.

The alternative then is to have some sort of dynamic block size limit
controller, and ideally one which applies a cost to raising the block size
in some way that preserves the decentralization and/or long-term stability
features that we care about. I will now describe one such proposal:

  * For each block, the miner is allowed to select a different difficulty
(nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
and this miner-selected difficulty is used for the proof of work check. In
addition to adjusting the hashcash target, selecting a different difficulty
also raises or lowers the maximum block size for that block by a function
of the difference in difficulty. So increasing the difficulty of the block
by an additional 25% raises the block limit for that block from 100% of the
current limit to 125%, and lowering the difficulty by 10% would also lower
the maximum block size for that block from 100% to 90% of the current
limit. For simplicity I will assume a linear identity transform as the
function, but a quadratic or other function with compounding marginal cost
may be preferred.

  * The default maximum block size limit is then adjusted at regular
intervals. For simplicity I will assume an adjustment at the end of each
2016 block interval, at the same time that difficulty is adjusted, but
there is no reason these have to be aligned. The adjustment algorithm
itself is either the selection of the median, or perhaps some sort of
weighted average that respects the middle majority. There would of course
be limits on how quickly the block size limit can be adjusted in any one
period, just as there are min/max limits on the difficulty adjustment.

  * To prevent perverse mining incentives, the original difficulty without
adjustment is used in the aggregate work calculations for selecting the
most-work chain, and the allowable miner-selected adjustment to difficulty
would have to be tightly constrained.

These rules create an incentive environment where raising the block size
has a real cost associated with it: a more difficult hashcash target for
the same subsidy reward. For rational miners that cost must be
counter-balanced by additional fees provided in the larger block. This
allows block size to increase, but only within the confines of a
self-supporting fee economy.

When the subsidy goes away or is reduced to an insignificant fraction of
the block reward, this incentive structure goes away. Hopefully at that
time we would have sufficient information to soft-fork set a hard block
size maximum. But in the mean time, the block size limit controller
constrains the maximum allowed block size to be within a range supported by
fees on the network, providing an emergency relief valve that we can be
assured will only be used at significant cost.
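As a minimal sketch of the mechanism (the linear transform and the +/- 25% bounds are the example figures given above; the median retarget is one of the adjustment options named, with the per-period rate limits omitted):

```python
import statistics

def block_limit_for(base_limit, difficulty_multiplier):
    # Miner-selected difficulty within +/- 25% of the expected value;
    # the same multiplier scales this block's size limit (linear form).
    assert 0.75 <= difficulty_multiplier <= 1.25
    return int(round(base_limit * difficulty_multiplier))

def retarget_limit(base_limit, chosen_multipliers):
    # At each 2016-block interval, move the default limit toward the
    # median of the multipliers miners actually chose.
    return int(round(base_limit * statistics.median(chosen_multipliers)))
```

A quadratic transform would simply replace the multiplication in `block_limit_for` with a compounding cost curve, as the proposal suggests may be preferable.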

Mark Friedenbach

* There has over time been various discussions on the bitcointalk forums
about dynamically adjusting block size limits. The true origin of the idea
is unclear at this time (citations would be appreciated!) but a form of it
was implemented in Bytecoin / Monero using subsidy burning to 

Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Alan Reiner
Actually I believe that side chains and off-main-chain transactions will be
a critical part for the overall scalability of the network.  I was actually
trying to make the point that (insert some huge block size here) will be
needed to even accommodate the reduced traffic.

I believe that it is definitely over 20MB. If it was determined to be 100
MB ten years from now, that wouldn't surprise me.

Sent from my overpriced smartphone
On May 8, 2015 1:17 PM, Andrew onelinepr...@gmail.com wrote:



 On Fri, May 8, 2015 at 2:59 PM, Alan Reiner etothe...@gmail.com wrote:


 This isn't about everyone's coffee.  This is about an absolute minimum
 amount of participation by people who wish to use the network.   If our
 goal is really for bitcoin to be a global, open transaction network
 that makes money fluid, then 7 tps is already a failure.  If even 5% of the
 world (350M people) was using the network for 1 tx per month (perhaps to
 open payment channels, or shift money between side chains), we'll be above
 100 tps.  And that doesn't include all the non-individuals (organizations)
 that want to use it.


 The goals of "a global transaction network" and "everyone must be able to
 run a full node with their $200 Dell laptop" are not compatible.  We need
 to accept that a global transaction system cannot be fully/constantly
 audited by everyone and their mother.  The important feature of the network
 is that it is open and anyone *can* get the history and verify it.  But not
 everyone is required to.   Trying to promote a system where the history
 can be forever handled by a low-end PC is already falling out of reach,
 even with our minuscule 7 tps.  Clinging to that goal needlessly limits the
 capability for the network to scale to be a useful global payments system


 These are good points and got me thinking (but I think you're wrong). If
 we really want each of the 10 billion people soon using bitcoin once per
 month, that will require 500MB blocks. That's about 2 TB per month. And if
 you relay it to 4 peers, it's 10 TB per month. Which I suppose is doable
 for a home desktop, so you can just run a pruned full node with all
 transactions from the past month. But how do you sync all those
 transactions if you've never done this before or it's been a while since
 you did? I think it currently takes at least 3 hours to fully sync 30 GB of
 transactions. So 2 TB will take 8 days, then you take a bit more time to
 sync the days that passed while you were syncing. So that's doable, but at
 a certain point, like 10 TB per month (still only 5 transactions per month
 per person), you will need 41 days to sync that month, so you will never
 catch up. So I think in order to keep the very important property of anyone
 being able to start clean and verify the thing, we need to think of
 bitcoin as a system that does transactions for a large number of users at
 once in one transaction, and not a system on which each person makes a
 ~monthly transaction. We therefore need to rely on sidechains,
 treechains, lightning channels, etc.
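
The sync-time argument above can be checked with a short sketch. The assumptions are labeled in the comments; the 250-byte average transaction size is mine, not the poster's, which is why the totals come out slightly above the quoted 2 TB:

```python
# Back-of-envelope check of the sync-time argument above.
# Assumptions: 250-byte average transaction, 30-day month, and the quoted
# sync rate of 30 GB in 3 hours (i.e. 10 GB/hour).
TX_BYTES = 250

def monthly_volume_tb(users, tx_per_month, relay_peers=0):
    """Terabytes of transaction data per month, counting each relay hop."""
    total_bytes = users * tx_per_month * TX_BYTES * (1 + relay_peers)
    return total_bytes / 1e12

def days_to_sync(volume_tb, gb_per_hour=10):
    return volume_tb * 1000 / gb_per_hour / 24

base = monthly_volume_tb(10_000_000_000, 1)        # ~2.5 TB/month
print(round(base, 1), round(days_to_sync(base), 1))
heavy = monthly_volume_tb(10_000_000_000, 1, relay_peers=4)
# with relay overhead, syncing one month of data takes longer than a month:
print(round(heavy, 1), round(days_to_sync(heavy), 1))
```

The conclusion matches the post: once relay overhead is included, a fresh node can never catch up.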

 I'm not a bitcoin wizard and this is just my second post on this mailing
 list, so I may be missing something. So please someone, correct me if I'm
 wrong.




 On 05/07/2015 03:54 PM, Jeff Garzik wrote:

  On Thu, May 7, 2015 at 3:31 PM, Alan Reiner etothe...@gmail.com wrote:


  (2) Leveraging fee pressure at 1MB to solve the problem is actually
 really a bad idea.  It's really bad while Bitcoin is still growing, and
 relying on fee pressure at 1 MB severely impacts attractiveness and
 adoption potential of Bitcoin (due to high fees and unreliability).  But
 more importantly, it ignores the fact that 7 tps is pathetic for a
 global transaction system.  It is a couple orders of magnitude too low for
 any meaningful commercial activity to occur.  If we continue with a cap of
 7 tps forever, Bitcoin *will* fail.  Or at best, it will fail to be
 useful for the vast majority of the world (which probably leads to
 failure).  We shouldn't be talking about fee pressure until we hit 700 tps,
 which is probably still too low.

  [...]

  1) Agree that 7 tps is too low

  2) Where do you want to go?  Should bitcoin scale up to handle all the
 world's coffees?

  This is hugely unrealistic.  700 tps is 100MB blocks, 14.4 GB/day --
 just for a single feed.  If you include relaying to multiple nodes, plus
 serving 500 million SPV clients en masse, who has the capacity to run such
 a node?  By the time we get to fee pressure, in your scenario, our network
 node count is tiny and highly centralized.
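
Jeff's 700 tps figures can be reproduced with the same kind of sketch (the 250-byte average transaction size is an assumption of mine):

```python
# Bandwidth implied by 700 tps, assuming 250-byte transactions,
# 10-minute blocks (600 s), and 144 blocks per day.
TX_BYTES = 250
tps = 700
block_bytes = tps * 600 * TX_BYTES        # bytes per block
daily_gb = block_bytes * 144 / 1e9        # GB/day for a single feed
print(block_bytes // 1_000_000, round(daily_gb, 1))  # ~105 MB blocks, ~15 GB/day
```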

  3) In RE fee pressure -- Do you see the moral hazard to a software-run
 system?  It is an intentional, human decision to flood the market with
 supply, thereby altering the economics, forcing fees to remain low in the
 hopes of achieving adoption.  I'm pro-bitcoin and obviously want to see
 bitcoin adoption - but I don't want to sacrifice every decentralized
 principle and become a central 

Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Raystonn
Replace by fee is what I was referencing.  End-users interpret the old 
transaction as expired.  Hence the nomenclature.  An alternative is a new 
feature that operates in the reverse of time lock, expiring a transaction after 
 a specific time.  But time is a bit unreliable in the blockchain.

-Raystonn


On 8 May 2015 1:41 pm, Mark Friedenbach m...@friedenbach.org wrote:

 Transactions don't expire. But if the wallet is online, it can periodically 
 choose to release an already created transaction with a higher fee. This 
 requires replace-by-fee to be sufficiently deployed, however.

 On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:

 I have a proposal for wallets such as yours.  How about creating all 
 transactions with an expiration time starting with a low fee, then replacing 
 with new transactions that have a higher fee as time passes.  Users can pick 
 the fee curve they desire based on the transaction priority they want to 
 advertise to the network.  Users set the priority in the wallet, and the 
 wallet software translates it to a specific fee curve used in the series of 
 expiring transactions.  In this manner, transactions are never left hanging 
 for days, and probably not even for hours.
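
A minimal sketch of the fee-curve idea described above. The function name, priority levels, and growth rates are illustrative assumptions, not any real wallet's API:

```python
# Hypothetical wallet-side fee schedule: the wallet periodically broadcasts
# a replacement transaction carrying the next fee in the list until one
# confirms.  Growth rates per priority level are made up for illustration.
def fee_schedule(base_fee_sat, priority, steps=6):
    """Return the fee (in satoshis) to attach at each replacement step."""
    growth = {"low": 1.2, "normal": 1.5, "high": 2.0}[priority]
    return [int(base_fee_sat * growth ** i) for i in range(steps)]

print(fee_schedule(1000, "normal"))
# each successive replace-by-fee transaction carries the next fee in the list
```

As Mark notes above, this only works once replace-by-fee is sufficiently deployed on the network.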

 -Raystonn

 On 8 May 2015 1:17 pm, Aaron Voisine vois...@gmail.com wrote:

 As the author of a popular SPV wallet, I wanted to weigh in, in support of 
 Gavin's 20MB block proposal.

 The best argument I've heard against raising the limit is that we need fee 
 pressure.  I agree that fee pressure is the right way to economize on 
 scarce resources. Placing hard limits on block size however is an 
 incredibly disruptive way to go about this, and will severely negatively 
 impact users' experience.

 When users pay too low a fee, they should:

 1) See immediate failure as they do now with fees that fail to propagate.

 2) If the fee is lower than it should be but not terminal, they should see
 degraded performance, long delays in confirmation, but eventual success.
 This will encourage them to pay higher fees in the future.

 The worst of all worlds would be to have transactions propagate, hang in 
 limbo for days, and then fail. This is the most important scenario to 
 avoid. Increasing the 1MB block size limit, I think, is the simplest way to
 avoid this least desirable scenario for the immediate future.

 We can play around with improved transaction selection for blocks and 
 encourage miners to adopt it to discourage low fees and create fee 
 pressure. These could involve hybrid priority/fee selection so low fee 
 transactions see degraded performance instead of failure. This would be the 
 conservative low risk approach.

 Aaron Voisine
 co-founder and CEO
 breadwallet.com


 --
 One dashboard for servers and applications across Physical-Virtual-Cloud
 Widest out-of-the-box monitoring support with 50+ applications
 Performance metrics, stats and reports that give you Actionable Insights
 Deep dive visibility with transaction tracing using APM Insight.
 http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
 ___
 Bitcoin-development mailing list
 Bitcoin-development@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development




Re: [Bitcoin-development] Block Size Increase (Raystonn)

2015-05-08 Thread Damian Gomez
Hello,

I was reading some of the thread but can't say I read the entire thing.

I think that it is realistic to consider a block size of 20MB for any block
txn to occur. This is an enormous amount of data (relatively, for a network)
in which the average rate of 10 tps over 10 minutes would allow for
feasible transformation of data at this current point in time.

Though I do not see what extra hash information would be stored in the
overall ecosystem as we begin to describe what the scripts that are
attached to the blockchain would carry.

I'd therefore think that for the remainder of this year it is possible
to have a block chain within 200-300 bytes that is more characteristic of
some feasible attempts at attaching nuanced data in order to keep the
blockchain prolific, but have these identifiers be integral OP_SIGs of the
entire block. The reasoning behind this has to do with encryption
standards that can be added to a chain, such as DH algorithm keys, that
would allow for a higher integrity level within the system as it is.
Currently the protocol only controls for the amount of transactions
through its TxnOut script and the public key coming from the location of
the proof-of-work. From this I think that a rate higher than the current
standard of 92 bytes allows GPUs (i.e. CUDA) to perform their standard
operations of 1216 flops in order to mechanize a new personal identity
within the chain that also attaches an encrypted instance of a further
categorical variable that we can prescribe to it.

I think with the current BIP 70 protocol for transactions there is an area
of vulnerability for man-in-the-middle attacks upon a request of bitcoin to
any merchant as is. It would contradict the security of the bitcoin if it
was intercepted and not allowed to reach the payment network, or if the
hash was reversed in order to change the value it had. Therefore the
current best-fit block size today is between 200-300 bytes (depending on
how excited we get).



Thanks for letting me join the conversation.
I welcome any challenges and will reply with more research as I figure
out what problems are revealed in my current formation of thoughts (sorry
for the errors, but I am just trying to move forward; the delete key
literally prevents it).


_Damian


Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Mark Friedenbach
Transactions don't expire. But if the wallet is online, it can periodically
choose to release an already created transaction with a higher fee. This
requires replace-by-fee to be sufficiently deployed, however.

On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:

 I have a proposal for wallets such as yours.  How about creating all
 transactions with an expiration time starting with a low fee, then
 replacing with new transactions that have a higher fee as time passes.
 Users can pick the fee curve they desire based on the transaction priority
 they want to advertise to the network.  Users set the priority in the
 wallet, and the wallet software translates it to a specific fee curve used
 in the series of expiring transactions.  In this manner, transactions are
 never left hanging for days, and probably not even for hours.

 -Raystonn
  On 8 May 2015 1:17 pm, Aaron Voisine vois...@gmail.com wrote:

 As the author of a popular SPV wallet, I wanted to weigh in, in support of
 Gavin's 20MB block proposal.

 The best argument I've heard against raising the limit is that we need fee
 pressure.  I agree that fee pressure is the right way to economize on
 scarce resources. Placing hard limits on block size however is an
 incredibly disruptive way to go about this, and will severely negatively
 impact users' experience.

 When users pay too low a fee, they should:

 1) See immediate failure as they do now with fees that fail to propagate.

 2) If the fee is lower than it should be but not terminal, they should see
 degraded performance, long delays in confirmation, but eventual success.
 This will encourage them to pay higher fees in the future.

 The worst of all worlds would be to have transactions propagate, hang in
 limbo for days, and then fail. This is the most important scenario to
 avoid. Increasing the 1MB block size limit, I think, is the simplest way to
 avoid this least desirable scenario for the immediate future.

 We can play around with improved transaction selection for blocks and
 encourage miners to adopt it to discourage low fees and create fee
 pressure. These could involve hybrid priority/fee selection so low fee
 transactions see degraded performance instead of failure. This would be the
 conservative low risk approach.

 Aaron Voisine
 co-founder and CEO
 breadwallet.com







Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Raystonn
Replace by fee is the better approach. It will ultimately replace zombie transactions (due to insufficient fee) with potentially much higher fees as the feature takes hold in wallets throughout the network, and fee competition increases. However, this does not fix the problem of low tps. In fact, as blocks fill it could make the problem worse. This feature means more transactions after all. So I would expect huge fee spikes, or a return to zombie transactions if fee caps are implemented by wallets.
-Raystonn

On 8 May 2015 1:55 pm, Mark Friedenbach m...@friedenbach.org wrote:

 The problems with that are larger than time being unreliable. It is no
 longer reorg-safe, as transactions can expire in the course of a reorg and
 any transaction built on the now-expired transaction is invalidated.

 On Fri, May 8, 2015 at 1:51 PM, Raystonn rayst...@hotmail.com wrote:

  Replace by fee is what I was referencing.  End-users interpret the old
  transaction as expired.  Hence the nomenclature.  An alternative is a new
  feature that operates in the reverse of time lock, expiring a transaction
  after a specific time.  But time is a bit unreliable in the blockchain.


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Gavin Andresen
I like the bitcoin days destroyed idea.

I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.

It is easy to make a proposal; it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that isn't accompanied by code that implements the proposal (with
appropriate tests).

However, I'm not the bottleneck-- you need to get the attention of the
other committers and convince THEM:

a) something should be done now-ish
b) your idea is good

We are stuck on (a) right now, I think.


On Fri, May 8, 2015 at 8:32 AM, Joel Joonatan Kaartinen 
joel.kaarti...@gmail.com wrote:

 Matt,

 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

 - Joel
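
For reference, bitcoin days destroyed for a block is the sum over spent inputs of value times coin age. A minimal sketch of the metric Joel proposes tying the limit to (the data structures here are hypothetical, not Bitcoin Core's):

```python
# "Bitcoin days destroyed" by a block: each spent input contributes its
# value multiplied by how long (in days) those coins sat unspent.
def coin_days_destroyed(inputs, block_time):
    """inputs: list of (value_btc, time_created) pairs spent in the block."""
    DAY = 86400  # seconds per day
    return sum(v * (block_time - t) / DAY for v, t in inputs)

# a block spending 10 BTC held 30 days and 2 BTC held 1 day:
now = 1_431_000_000
spent = [(10.0, now - 30 * 86400), (2.0, now - 1 * 86400)]
print(coin_days_destroyed(spent, now))  # 302.0
```

Spam recirculating the same coins destroys few coin-days, which is why the metric resists the spam Joel mentions.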



-- 
--
Gavin Andresen


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Matt Whitlock
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
 It seems you missed my suggestion about basing the maximum block size on
 the bitcoin days destroyed in transactions that are included in the block.
 I think it has potential for both scaling as well as keeping up a constant
 fee pressure. If tuned properly, it should both stop spamming and increase
 block size maximum when there are a lot of real transactions waiting for
 inclusion.

I saw it. I apologize for not including it in my list. I should have, for the
sake of discussion, even though I have a problem with it.

My problem with it is that bitcoin days destroyed is not a measure of demand 
for space in the block chain. In the distant future, when Bitcoin is the 
predominant global currency, bitcoins will have such high velocity that the 
number of bitcoin days destroyed in each block will be much lower than at 
present. Does this mean that the block size limit should be lower in the future 
than it is now? Clearly this would be incorrect.

Perhaps I am misunderstanding your proposal. Could you describe it more 
explicitly?



Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Raystonn
Fail, Damian. Not even a half-good attempt.
-Raystonn

On 8 May 2015 3:15 pm, Damian Gomez dgomez1...@gmail.com wrote:

 On Fri, May 8, 2015 at 3:12 PM, Damian Gomez dgomez1092@gmail.com wrote:

  let me continue my conversation: as the development of these transactions
  would be indicated as a ByteArray of

 On Fri, May 8, 2015 at 3:11 PM, Damian Gomez dgomez1092@gmail.com wrote:

  Well, zombie txns aside, I expect this to be resolved with a client-side
  implementation using a Merkle-Winternitz OTS in order to prevent the loss
  of fee structure through the implementation of a security hash that will
  allow for a one-way transaction to continue, according to the TESLA
  protocol.  We can then tally what is needed to compute the number of bits
  designated for the completion of the client-side signature if discussing
  the construction of a DH key (instead of the BIP X509 protocol).

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is using
 hard block size limits to enforce scarcity. Transactions with below market
 fees will hang in limbo for days and fail, instead of failing immediately
 by not propagating, or seeing degraded, long confirmation times followed by
 eventual success.


There are already solutions to this which are waiting to be deployed as
default policy to bitcoind, and need to be implemented in other clients:
replace-by-fee and child-pays-for-parent.


Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Damian Gomez
Well, zombie txns aside, I expect this to be resolved with a client-side
implementation using a Merkle-Winternitz OTS in order to prevent the loss
of fee structure through the implementation of a security hash that will
allow for a one-way transaction to continue, according to the TESLA
protocol.

We can then tally what is needed to compute the number of bits designated
for the completion of the client-side signature if discussing the
construction of a DH key (instead of the BIP X509 protocol).





On Fri, May 8, 2015 at 2:08 PM, 
bitcoin-development-requ...@lists.sourceforge.net wrote:

 Send Bitcoin-development mailing list submissions to
 bitcoin-development@lists.sourceforge.net

 To subscribe or unsubscribe via the World Wide Web, visit
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development
 or, via email, send a message with subject or body 'help' to
 bitcoin-development-requ...@lists.sourceforge.net

 You can reach the person managing the list at
 bitcoin-development-ow...@lists.sourceforge.net

 When replying, please edit your Subject line so it is more specific
 than Re: Contents of Bitcoin-development digest...

 Today's Topics:

1. Re: Block Size Increase (Mark Friedenbach)
2. Softfork signaling improvements (Douglas Roark)
3. Re: Block Size Increase (Mark Friedenbach)
4. Re: Block Size Increase (Raystonn) (Damian Gomez)
5. Re: Block Size Increase (Raystonn)


 -- Forwarded message --
 From: Mark Friedenbach m...@friedenbach.org
 To: Raystonn rayst...@hotmail.com
 Cc: Bitcoin Development bitcoin-development@lists.sourceforge.net
 Date: Fri, 8 May 2015 13:55:30 -0700
 Subject: Re: [Bitcoin-development] Block Size Increase
 The problems with that are larger than time being unreliable. It is no
 longer reorg-safe as transactions can expire in the course of a reorg and
 any transaction built on the now expired transaction is invalidated.

 On Fri, May 8, 2015 at 1:51 PM, Raystonn rayst...@hotmail.com wrote:

 Replace by fee is what I was referencing.  End-users interpret the old
 transaction as expired.  Hence the nomenclature.  An alternative is a new
 feature that operates in the reverse of time lock, expiring a transaction
 after a specific time.  But time is a bit unreliable in the blockchain



 -- Forwarded message --
 From: Douglas Roark d...@bitcoinarmory.com
 To: Bitcoin Dev bitcoin-development@lists.sourceforge.net
 Cc:
 Date: Fri, 8 May 2015 15:27:26 -0400
 Subject: [Bitcoin-development] Softfork signaling improvements
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Hello. I've seen Greg make a couple of posts online
 (https://bitcointalk.org/index.php?topic=1033396.msg11155302#msg11155302
 is one such example) where he has mentioned that Pieter has a new
 proposal for allowing multiple softforks to be deployed at the same
 time. As discussed in the thread I linked, the idea seems simple
 enough. Still, I'm curious if the actual proposal has been posted
 anywhere. I spent a few minutes searching the usual suspects (this
 mailing list, Reddit, Bitcointalk, IRC logs, BIPs) and can't find
 anything.

 Thanks.

 - ---
 Douglas Roark
 Senior Developer
 Armory Technologies, Inc.
 d...@bitcoinarmory.com
 PGP key ID: 92ADC0D7
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 Comment: GPGTools - https://gpgtools.org

 iQIcBAEBCgAGBQJVTQ4eAAoJEGybVGGSrcDX8eMQAOQiDA7an+qZBqDfVIwEzY2C
 SxOVxswwxAyTtZNM/Nm+8MTq77hF8+3j/C3bUbDW6wCu4QxBYA/uiCGTf44dj6WX
 7aiXg1o9C4LfPcuUngcMI0H5ixOUxnbqUdmpNdoIvy4did2dVs9fAmOPEoSVUm72
 6dMLGrtlPN0jcLX6pJd12Dy3laKxd0AP72wi6SivH6i8v8rLb940EuBS3hIkuZG0
 vnR5MXMIEd0rkWesr8hn6oTs/k8t4zgts7cgIrA7rU3wJq0qaHBa8uASUxwHKDjD
 KmDwaigvOGN6XqitqokCUlqjoxvwpimCjb3Uv5Pkxn8+dwue9F/IggRXUSuifJRn
 UEZT2F8fwhiluldz3sRaNtLOpCoKfPC+YYv7kvGySgqagtNJFHoFhbeQM0S3yjRn
 Ceh1xK9sOjrxw/my0jwpjJkqlhvQtVG15OsNWDzZ+eWa56kghnSgLkFO+T4G6IxB
 EUOcAYjJkLbg5ssjgyhvDOvGqft+2e4MNlB01e1ZQr4whQH4TdRkd66A4WDNB+0g
 LBqVhAc2C8L3g046mhZmC33SuOSxxm8shlxZvYLHU2HrnUFg9NkkXi1Ub7agMSck
 TTkLbMx17AvOXkKH0v1L20kWoWAp9LfRGdD+qnY8svJkaUuVtgDurpcwEk40WwEZ
 caYBw+8bdLpKZwqbA1DL
 =ayhE
 -END PGP SIGNATURE-




 -- Forwarded message --
 From: Mark Friedenbach m...@friedenbach.org
 To: Raystonn . rayst...@hotmail.com
 Cc: Bitcoin Development bitcoin-development@lists.sourceforge.net
 Date: Fri, 8 May 2015 13:40:50 -0700
 Subject: Re: [Bitcoin-development] Block Size Increase
 Transactions don't expire. But if the wallet is online, it can
 periodically choose to release an already created transaction with a higher
 fee. This requires replace-by-fee to be sufficiently deployed, however.

 On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:

 I have a proposal for wallets such as yours.  How about creating all
 transactions with an expiration time starting with a low fee, then
 replacing with new transactions that have a higher fee as time passes.
 Users can 

Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Damian Gomez
On Fri, May 8, 2015 at 3:12 PM, Damian Gomez dgomez1...@gmail.com wrote:

 let me continue my conversation:

 as the development of these transactions would be indicated

 as a ByteArray of


 On Fri, May 8, 2015 at 3:11 PM, Damian Gomez dgomez1...@gmail.com wrote:


  Well, zombie txns aside, I expect this to be resolved with a client-side
  implementation using a Merkle-Winternitz OTS in order to prevent the loss
  of fee structure through the implementation of a security hash that will
  allow for a one-way transaction to continue, according to the TESLA
  protocol.

  We can then tally what is needed to compute the number of bits designated
  for the completion of the client-side signature if discussing the
  construction of a DH key (instead of the BIP X509 protocol).






Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Damian Gomez
let me continue my conversation:

as the development of these transactions would be indicated

as a ByteArray of


On Fri, May 8, 2015 at 3:11 PM, Damian Gomez dgomez1...@gmail.com wrote:


 Well, zombie txns aside, I expect this to be resolved with a client-side
 implementation using a Merkle-Winternitz OTS in order to prevent the loss
 of fee structure through the implementation of this security hash, which
 will allow for a one-way transaction to continue, according to the TESLA
 protocol.

 We can then tally what is needed to compute the number of bits designated
 for the completion of the client-side signature when discussing the
 construction of a DH key (instead of the BIP X.509 protocol)





 On Fri, May 8, 2015 at 2:08 PM, 
 bitcoin-development-requ...@lists.sourceforge.net wrote:

 Send Bitcoin-development mailing list submissions to
 bitcoin-development@lists.sourceforge.net

 To subscribe or unsubscribe via the World Wide Web, visit
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development
 or, via email, send a message with subject or body 'help' to
 bitcoin-development-requ...@lists.sourceforge.net


 On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:

 I have a proposal for 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those to any miner as long as they get paid a little for it, especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH flags.

a part of the reason I like this idea is because it will allow stakeholders
a degree of influence on how large the fees are. At least from the surface,
it looks like incentives are pretty well matched. They have an incentive to
not let the fees drop too low so the network continues to be usable and
they also have an incentive to not raise them too high because it'll push
users into using other systems. Also, there'll be competition between
stakeholders, which should keep the fees reasonable.

I think this would at least be preferable to the "let the miner decide"
model.

- Joel

On Fri, May 8, 2015 at 7:51 PM, Peter Todd p...@petertodd.org wrote:

 On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
  Matt,
 
  It seems you missed my suggestion about basing the maximum block size on
  the bitcoin days destroyed in transactions that are included in the
 block.
  I think it has potential for both scaling as well as keeping up a
 constant
  fee pressure. If tuned properly, it should both stop spamming and
 increase
  block size maximum when there are a lot of real transactions waiting for
  inclusion.

 The problem with gating block creation on Bitcoin days destroyed is
 there's a strong potential of giving big mining pools a huge advantage,
 because they can contract with large Bitcoin owners and buy dummy
 transactions with large numbers of Bitcoin days destroyed on demand
 whenever they need more days-destroyed to create larger blocks.
 Similarly, with appropriate SIGHASH flags such contracting can be done
 by modifying *existing* transactions on demand.

 Ultimately bitcoin days destroyed just becomes a very complex version of
 transaction fees, and it's already well known that gating blocksize on
 total transaction fees doesn't work.
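
 The quantity being gated on here can be made concrete with a small
 sketch (names and the transaction representation are illustrative, not
 from any concrete proposal):

```python
# Sketch: "bitcoin days destroyed" (BDD) for a block, the quantity the
# discussed rule would gate the maximum block size on.

COIN = 100_000_000       # satoshis per BTC
BLOCKS_PER_DAY = 144     # ~10-minute blocks

def days_destroyed(tx, height):
    """Sum over inputs of (BTC value) * (age of the coin in days)."""
    total = 0.0
    for value_sat, created_height in tx["inputs"]:
        age_days = max(0, height - created_height) / BLOCKS_PER_DAY
        total += (value_sat / COIN) * age_days
    return total

def block_days_destroyed(block, height):
    return sum(days_destroyed(tx, height) for tx in block["txs"])

# One large, old coin dominates the total -- which is why a pool
# contracting with a big holder (or amending an existing transaction via
# SIGHASH flags) can buy days-destroyed on demand, as argued above.
block = {"txs": [
    {"inputs": [(50 * COIN, 0)]},       # 50 BTC dormant since block 0
    {"inputs": [(1 * COIN, 99_000)]},   # 1 BTC, about a week old
]}
print(block_days_destroyed(block, 100_000))
```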

 --
 'peter'[:-1]@petertodd.org
 0f53e2d214685abf15b6d62d32453a03b0d472e374e10e94

--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Aaron Voisine
This is a clever way to tie block size to fees.

I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or seeing degraded, long confirmation times followed by
eventual success.


Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 It is my professional opinion that raising the block size by merely
 adjusting a constant without any sort of feedback mechanism would be a
 dangerous and foolhardy thing to do. We are custodians of a multi-billion
 dollar asset, and it falls upon us to weigh the consequences of our own
 actions against the combined value of the entire bitcoin ecosystem. Ideally
 we would take no action for which we are not absolutely certain of the
 ramifications, with the information that can be made available to us. But
 of course that is not always possible: there are unknown-unknowns, time
 pressures, and known-unknowns where information has too high a marginal
 cost. So where certainty is unobtainable, we must instead hedge against
 unwanted outcomes.

 The proposal to raise the block size now by redefining a constant carries
 with it risk associated with infrastructure scaling, centralization
 pressures, and delaying the necessary development of a constraint-based fee
 economy. It also simply kicks the can down the road in settling these
 issues because a larger but realistic hard limit must still exist, meaning
 a future hard fork may still be required.

 But whatever new hard limit is chosen, there is also a real possibility
 that it may be too high. The standard response is that it is a soft-fork
 change to impose a lower block size limit, which miners could do with a
 minimal amount of coordination. This is however undermined by the
 unfortunate reality that so many mining operations are absentee-run
 businesses, or run by individuals without a strong background in bitcoin
 protocol policy, or with interests which are not well aligned with other
 users or holders of bitcoin. We cannot rely on miners being vigilant about
 issues that develop, as they develop, or able to respond in the appropriate
 fashion that someone with full domain knowledge and an objective
 perspective would.

 The alternative then is to have some sort of dynamic block size limit
 controller, and ideally one which applies a cost to raising the block size
 in some way that preserves the decentralization and/or long-term stability
 features that we care about. I will now describe one such proposal:

   * For each block, the miner is allowed to select a different difficulty
 (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
 and this miner-selected difficulty is used for the proof of work check. In
 addition to adjusting the hashcash target, selecting a different difficulty
 also raises or lowers the maximum block size for that block by a function
 of the difference in difficulty. So increasing the difficulty of the block
 by an additional 25% raises the block limit for that block from 100% of the
 current limit to 125%, and lowering the difficulty by 10% would also lower
 the maximum block size for that block from 100% to 90% of the current
 limit. For simplicity I will assume a linear identity transform as the
 function, but a quadratic or other function with compounding marginal cost
 may be preferred.

   * The default maximum block size limit is then adjusted at regular
 intervals. For simplicity I will assume an adjustment at the end of each
 2016 block interval, at the same time that difficulty is adjusted, but
 there is no reason these have to be aligned. The adjustment algorithm
 itself is either the selection of the median, or perhaps some sort of
 weighted average that respects the middle majority. There would of course
 be limits on how quickly the block size limit can be adjusted in any one
 period, just as there are min/max limits on the difficulty adjustment.

   * To prevent perverse mining incentives, the original difficulty without
 adjustment is used in the aggregate work calculations for selecting the
 most-work chain, and the allowable miner-selected adjustment to difficulty
 would have to be tightly constrained.

 These rules create an incentive environment where raising the block size
 has a real cost associated with it: a more difficult hashcash target for
 the same subsidy reward. For rational miners that cost must be
 counter-balanced by additional fees provided in the larger block. This
 allows block size to increase, but only within the confines of a
 self-supporting fee economy.
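
 The linear variant of the first rule can be sketched as follows (a
 sketch of the rule as described, with illustrative names; the quadratic
 variant would replace the identity function):

```python
# Sketch: a miner who selects a difficulty within +/-25% of the expected
# difficulty gets a proportionally scaled block size limit for that block.

MAX_ADJUST = 0.25  # miner may move difficulty by at most +/-25%

def block_size_limit(current_limit, expected_difficulty, chosen_difficulty):
    ratio = chosen_difficulty / expected_difficulty
    if not (1 - MAX_ADJUST) <= ratio <= (1 + MAX_ADJUST):
        raise ValueError("difficulty outside allowed band")
    # linear identity transform: +25% difficulty -> 125% of the limit
    return int(current_limit * ratio)

limit = 1_000_000  # current 1 MB limit
print(block_size_limit(limit, 1000, 1250))  # 25% harder -> 1,250,000 bytes
print(block_size_limit(limit, 1000, 900))   # 10% easier ->   900,000 bytes
```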

 When the subsidy goes away or is reduced to an insignificant fraction of
 the block reward, this incentive structure goes away. Hopefully at that
 time we would have sufficient information to soft-fork set a hard block
 size 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Aaron Voisine
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?

My argument is that the less risky, more conservative default fallback
behavior should be either non-propagation or delayed confirmation, which is
generally what we have now, until we hit the block size limit. We still
have lots of safe, non-controversial, easy to experiment with options to
add fee pressure, causing users to economize on block space without
resorting to dropping transactions after a prolonged delay.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach m...@friedenbach.org
wrote:

 On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is
 using hard block size limits to enforce scarcity. Transactions with below
 market fees will hang in limbo for days and fail, instead of failing
 immediately by not propagating, or seeing degraded, long confirmation times
 followed by eventual success.


 There are already solutions to this which are waiting to be deployed as
 default policy to bitcoind, and need to be implemented in other clients:
 replace-by-fee and child-pays-for-parent.
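
 Child-pays-for-parent works by judging a child together with its
 unconfirmed parent; a minimal sketch of that package fee-rate
 calculation (illustrative values, in satoshis and bytes):

```python
# Sketch: child-pays-for-parent evaluates the fee rate of the
# (parent, child) package, so a stuck low-fee parent can be rescued by a
# high-fee child spending one of its outputs.

def fee_rate(fee, size):
    return fee / size

def package_fee_rate(parent, child):
    fee = parent["fee"] + child["fee"]
    size = parent["size"] + child["size"]
    return fee / size

parent = {"fee": 1_000, "size": 500}   # 2 sat/B: below market, stuck
child = {"fee": 9_000, "size": 500}    # 18 sat/B child
print(fee_rate(parent["fee"], parent["size"]))  # 2.0
print(package_fee_rate(parent, child))          # 10.0: now worth mining
```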

--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Joel Joonatan Kaartinen
Matt,

It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and increase
block size maximum when there are a lot of real transactions waiting for
inclusion.

- Joel

On Fri, May 8, 2015 at 1:30 PM, Clément Elbaz clem...@gmail.com wrote:

 Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
 favorite.

 I see two problems with proposal #2.
 The first problem with proposal #2 is that, as we see in democracies,
 there is often a mismatch between people's conscious vote and those same
 people's behavior.

 Relying on an intentional vote made consciously by miners, by choosing a
 configuration value, can lead to twisted results if their actual behavior
 doesn't correlate with their vote (e.g., they all vote for a small block size
 because it is the default configuration of their software, and then they
 fill it completely all the time and everything crashes).

 The second problem with proposal #2 is that if Gavin and Mike are right,
 there is simply no time to gather a meaningful amount of votes over the
 coinbases, after the fork but before the Bitcoin scalability crash.

 I like proposal #1 because the vote is made using already available
 data. Also there is no possible mismatch between behavior and vote. As a
 miner you vote by choosing to create a big (or small) block, and your
 actions reflect your vote. It is simple and straightforward.

 My feeling on proposal #3 is that it is a little bit mixing apples and
 oranges, but I may not be seeing all the implications.


 Le ven. 8 mai 2015 à 09:21, Matt Whitlock b...@mattwhitlock.name a
 écrit :

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.

 - Perhaps the hard block size limit should be determined by a vote of the
 miners. Each miner could embed a desired block size limit in the coinbase
 transactions of the blocks it publishes. The effective hard block size
 limit would be that size having the greatest number of votes within a
 sliding window of most recent blocks.

 - Perhaps the hard block size limit should be a function of block-chain
 length, so that it can scale up smoothly rather than jumping immediately to
 20 MB. This function could be linear (anticipating a breakdown of Moore's
 Law) or quadratic.
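
 The first idea can be sketched as follows (illustrative names;
 consensus code would need exact integer rules):

```python
# Sketch: hard limit = 1.5 * median block size over the last 2016 blocks,
# so the ceiling tracks actual organic usage.

from statistics import median

def trailing_limit(recent_sizes, window=2016, factor=1.5):
    window_sizes = recent_sizes[-window:]
    return int(factor * median(window_sizes))

sizes = [300_000] * 1000 + [800_000] * 1016  # mostly-full recent blocks
print(trailing_limit(sizes))                 # 1,200,000 with these sizes
```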

 I would be in support of any of the above, but I do not support Mike
 Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
 road without actually solving the problem, and it does so in a
 controversial (step function) way.


 --
 One dashboard for servers and applications across Physical-Virtual-Cloud
 Widest out-of-the-box monitoring support with 50+ applications
 Performance metrics, stats and reports that give you Actionable Insights
 Deep dive visibility with transaction tracing using APM Insight.
 http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
 ___
 Bitcoin-development mailing list
 Bitcoin-development@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development



 --
 One dashboard for servers and applications across Physical-Virtual-Cloud
 Widest out-of-the-box monitoring support with 50+ applications
 Performance metrics, stats and reports that give you Actionable Insights
 Deep dive visibility with transaction tracing using APM Insight.
 http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
 ___
 Bitcoin-development mailing list
 Bitcoin-development@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development


--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Block Size Increase Requirements

2015-05-08 Thread Peter Todd
On Fri, May 08, 2015 at 08:47:52PM +0100, Tier Nolan wrote:
 On Fri, May 8, 2015 at 5:37 PM, Peter Todd p...@petertodd.org wrote:
 
  The soft-limit is there so miners themselves produce smaller blocks; the
  soft-limit does not prevent other miners from producing larger blocks.
 
 
 I wonder if having a miner flag would be good for the network.

Makes it trivial to find miners and DoS attack them - a huge risk to the
network as a whole, as well as the miners.

Right now pools already get DoSed all the time through their work
submission systems; getting DoS attacked via their nodes as well would
be a disaster.

 When in miner mode, the client would reject 4MB blocks and wouldn't build
 on them.  The reference client might even track the miner and the non-miner
 chain tip.
 
 Miners would refuse to build on 5MB blocks, but merchants and general users
 would accept them.

That'd be an excellent way to double-spend merchants, significantly
increasing the chance that the double-spend would succeed as you only
have to get sufficient hashing power to get the lucky blocks; you don't
need enough hashing power to *also* ensure those blocks don't become the
longest chain, removing the need to sybil attack your target.

-- 
'peter'[:-1]@petertodd.org
04bd67400df7577a30e6f509b6bd82633efeabe6395eb65a


signature.asc
Description: Digital signature
--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Gregory Maxwell
On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach m...@friedenbach.org wrote:
 These rules create an incentive environment where raising the block size has
 a real cost associated with it: a more difficult hashcash target for the
 same subsidy reward. For rational miners that cost must be counter-balanced
 by additional fees provided in the larger block. This allows block size to
 increase, but only within the confines of a self-supporting fee economy.

 When the subsidy goes away or is reduced to an insignificant fraction of the
 block reward, this incentive structure goes away. Hopefully at that time we
 would have sufficient information to soft-fork set a hard block size
 maximum. But in the mean time, the block size limit controller constrains
 the maximum allowed block size to be within a range supported by fees on the
 network, providing an emergency relief valve that we can be assured will
 only be used at significant cost.

Though I'm a fan of this class of techniques(*) and think using something
in this space is strictly superior to not, and I think it makes larger
sizes safer long term;  I do not think it adequately obviates the need
for a hard upper limit for two reasons:

(1) for software engineering and operational reasons it is very
difficult to develop, test for, or provision for something without
knowing limits. There would in fact be hard limits on real deployments
but they'd be opaque to their operators and you could easily imagine
the network forking by surprise as hosts crossed those limits.

(2)  At best this approach mitigates the collective action problem between
miners around fees;  it does not correct the incentive alignment between
miners and everyone else (miners can afford huge node costs because they
have income; but the full-node-using-users that need to exist in plenty
to keep miners honest do not), or the centralization pressures (N miners
can reduce their storage/bandwidth/cpu costs N fold by centralizing).

A dynamic limit can be combined with a hard upper to at least be no
worse than a hard upper with respect to those two points.


Another related point which has been tendered before but seems to have
been ignored is that changing how the size limit is computed can help
better align incentives and thus reduce risk.  E.g. a major cost to the
network is the UTXO impact of transactions, but since the limit is blind
to UTXO impact a miner would gain less income if substantially factoring
UTXO impact into its fee calculations; and without fee impact users have
little reason to optimize their UTXO behavior.   This can be corrected
by augmenting the size used for limit calculations.   An example would
be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
3*utxo_consumed_size).   The reason for the MAX is so that a block
which cleaned a bunch of big UTXO could not break software by being
super large, the utxo_consumed basically lets you credit your fees by
cleaning the utxo set; but since you get less credit than you cost the
pressure should be downward but not hugely so. The 1/2, 4, 3 I regard
as parameters which I don't have very strong opinions on which could be
set based on observations in the network today (e.g. adjusted so that a
normal cleaning transaction can hit the minimum size).  One way to think
about this is that it makes it so that every output you create prepays
the transaction fees needed to spend it by shifting space from the
current block to a future block. The fact that the prepayment is not
perfectly efficient reduces the incentive for miners to create lots of
extra outputs when they have room left in their block in order to store
space to use later [an issue that is potentially less of a concern with a
dynamic size limit].  With the right parameters there would never be such
a thing as a dust output (one which costs more to spend than it's worth).
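
The augmented-size rule can be sketched under the stated parameters (the
1/2 floor as a right shift, weights 4 and 3; names are illustrative):

```python
# Sketch: the size counted against the block limit is
#   MAX(real_size >> 1, real_size + 4*utxo_created - 3*utxo_consumed)
# so creating outputs prepays (imperfectly) the space needed to spend
# them, and UTXO-cleaning transactions earn a bounded credit.

def limit_size(real_size, utxo_created_size, utxo_consumed_size,
               floor_shift=1, create_w=4, consume_w=3):
    adjusted = (real_size + create_w * utxo_created_size
                - consume_w * utxo_consumed_size)
    return max(real_size >> floor_shift, adjusted)

# A sweep of many old outputs into one is credited, but never counts as
# less than half its real size:
print(limit_size(1000, utxo_created_size=40, utxo_consumed_size=400))   # 500
# An output-heavy transaction pays extra:
print(limit_size(1000, utxo_created_size=400, utxo_consumed_size=40))   # 2480
```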

(likewise the sigops limit should be counted correctly and turned into
size augmentation (ones that get run by the txn); which would greatly
simplify selection rules: maximize income within a single scalar limit)

(*) I believe my currently favored formulation of general dynamic control
idea is that each miner expresses in their coinbase a preferred size
between some minimum (e.g. 500k) and the miner's effective-maximum;
the actual block size can be up to the effective maximum even if the
preference is lower (you're not forced to make a lower block because you
stated you wished the limit were lower).  There is a computed maximum
which is the 33-rd percentile of the last 2016 coinbase preferences
minus computed_max/52 (rounding up to 1) bytes-- or 500k if that's
larger. The effective maximum is X bytes more, where X is on the range
[0, computed_maximum], e.g. the miner can double the size of their
block at most. If X > 0, then the miners must also reach a target
F(X/computed_maximum) times the bits-difficulty, with F(x) = x^2+1 ---
so the maximum penalty is 2, with a quadratic shape;  for a given mempool
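
The computed maximum and penalty in this footnote can be sketched as
follows (the self-referential computed_max/52 term is approximated as
p33/52, and percentile selection is naive; names are illustrative):

```python
# Sketch: dynamic limit from miner-stated coinbase preferences, with a
# quadratic difficulty penalty for exceeding the computed maximum.

MIN_SIZE = 500_000

def computed_max(preferences, window=2016, min_size=MIN_SIZE):
    recent = sorted(preferences[-window:])
    p33 = recent[len(recent) * 33 // 100]  # 33rd-percentile preference
    decay = max(1, p33 // 52)              # approximate downward drift
    return max(min_size, p33 - decay)

def required_difficulty_multiplier(extra_bytes, computed):
    # Miner may exceed computed_max by X in [0, computed_max], paying
    # F(x) = x^2 + 1 times the bits-difficulty (max penalty 2).
    x = extra_bytes / computed
    assert 0 <= x <= 1
    return x * x + 1

prefs = [600_000] * 700 + [1_000_000] * 1316  # last 2016 stated preferences
cm = computed_max(prefs)
print(cm)                                     # tracks the 33rd percentile
print(required_difficulty_multiplier(cm, cm)) # doubling the block: 2x work
```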

Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Thomas Zander
On Wednesday 6. May 2015 21.49.52 Peter Todd wrote:
 I'm not sure if you've seen this, but a good paper on this topic was
 published recently: The Economics of Bitcoin Transaction Fees


The obvious flaw in this paper is that it talks about a block size in todays 
(trivial) data-flow economy and compares it with the zero-reward situation 
decades from now.

Its comparing two things that will never exist at the same time (unless 
Bitcoin fails).
-- 
Thomas Zander

--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Damian Gomez
...of the following:

 the DH_GENERATION would in effect calculate the responses for a total
overage of the public component, by adding a ternary option in the actual
DH key (which I have attached to see if you can understand my logic)



For Java Practice this will be translated:


 public static clientKey {

KeyPairGenerator cbArgs =notes sent(with a txn)/ log(w) -
log(w-1)/ log(w)  + 1
  cbArgs.ByteArrayStream.enqueue() ;
  cbByte []  = Cipher.getIstance(AES, publicKey);
w = SUM(ModuleW([wi...,wn]))
  Arraybyte.init(cbArgs);


   BufferedOutputStream eclient =  FileInputStream(add(cbByte));
   }
  public static Verify(String[] args) {

  CipherCache clientSignature  [cbByte];
Hash pubKey = ArraypubKey;
ByteArray  pubKeyHash [ serverArgsx...serverArgsn];
  for   clientSecurity.getIndex[xi] {pubKeyHash[xi] ;
   int start = 0;
  while (true) {
int index = str.indexOf(0);
if (xi = 0) {
pubKey.ByteArray(n) == clientTxns(xi, 0);
 pubKey(n++)  clientTxns.getIndex(xi++) - clientTxns.getIndex(xi - xin);

}
index++;
return beta = pubKey.Array.getIndex();
 index l = 0;
l++;
for pubKey.Array() == index
{clientSignature pbg(w - 1) = (cbByte.getIndexOf(i); i++, i==l);
   pba(x) = pbg - beta * y(x); } //y(x) instance of DH privkey ByteLength x
a public DHkey
Parser forSign = hashCode(bg, 0)  return pubKey.length() ==
 hashCode.length();
   if pubKey.length()  beta {return false;}
else import FileInputStream(OP_MSG) //transfer to compiler code
Cipher OPMSG = cipher.init(AES)
{OPMSG.getInstance.ByteArrayLenght(OP_MSG, 1); for OPMSG.lenghth = 0;
{forSign(getFront(OPMSG) - getEnd(OPMSG) == OPMSG.length) 
B.getIndexOf(0) = { pubKey.getIndexOf(k)  2^(w-b)=[bi...bn];}} //are
memory in Box cache of MsgTxns for blockchain merkel root}

if B[0] * pba = beta return null;
else ModuleK[0]  K(x) = beta - 1 - (B[0] * pba(OPMSG) * pba(x));
{if K(x) = 6 = y return null; else return K(x).pushModule;}

}}}



++


#include openssl/dh.h
#include openssl/bn.h
#include bu.c


/* Read incoming call */

size_t fread(void *ptr, size_t size, size_t nmemb, FILE *callback) {
int main()
{
   FILE *fp;
   fp = fopen("bu.c", "eclient.c");
   /* Seek to the beginning of the file */
   fseek(fp, SEEK_SET, 0);
   char to[];
   char buffer[80];
   /* Read and display data */
   fread(buffer, strlen(to)+1, 1, fp);
   printf("%s\n", buffer);
   fclose(fp);

   return(0);
}};

/* Generates its public and private keys*/
Typedef struct bn_st{
BIGNUM* BN_new();
BIGNUM* p{  // shared prime number
 static inline int aac_valid_context(struct scsi_cmnd *scsicmd,
 struct fib *fibptr) {
 struct scsi_device *device;

 if (unlikely(!scsicmd || !scsicmd->scsi_done)) {
 dprintk((KERN_WARNING "aac_valid_context: scsi command
corrupt\n"));
 aac_fib_complete(fibptr);
 aac_fib_free(fibptr);
 return 0;
 } scsicmd-SCp.phase = AAC_OWNER_MIDLEVEL;
 device = scsicmd-device;
 if (unlikely(!device || !scsi_device_online(device))) {
 dprintk((KERN_WARNING aac_valid_context: scsi device
corrupt\n));
 aac_fib_complete(fibptr);
 aac_fib_free(fibptr);
 return 0;
 }
 return 1;
 }

 int aac_get_config_status(struct aac_dev *dev, int commit_flag)
 {
 int status = 0;
 struct fib * fibptr;

 if (!(fibptr = aac_fib_alloc(dev)))
 return -ENOMEM;

 else aac_fib_init(fibptr);
 {
 struct aac_get_config_status *dinfo;
 dinfo = (struct aac_get_config_status *) fib_data(fibptr);

 dinfo-command = cpu_to_le64(VM_ContainerConfig);
 dinfo-type = cpu_to_le64(CT_GET_CONFIG_STATUS);
 dinfo-count = cpu_to_le64(sizeof(((struct
aac_get_config_status_resp *)NULL)-data));
 }

 status = aac_fib_send(ContainerCommand,
 fibptr,
 sizeof (struct aac_get_config_status),
 FsaNormal,
 1, 1,
 sizeof (struct aac_commit_config),
 FsaNormal,
 1, 1,
 NULL, NULL);
  if (status = 0)
 aac_fib_complete(fibptr);
 } else if (aac_commit == 0) {
 printk(KERN_WARNING
   aac_get_config_status: Others configurations
ignored\n);
 }
 }
  if (status != -ERESTARTSYS)
 aac_fib_free(fibptr);
 return status;
 }

};
BIGNUM* g{  // shared generator
int stdin;
int main() {
srand(time(NULL));
 

Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Raystonn
It seems to me all this would do is encourage 0-transaction blocks, crippling 
the network.  Individual blocks don't have a maximum block size; they have an 
actual block size.  Rational miners would pick block sizes that minimize 
difficulty, lowering the effective maximum block size to whatever size is 
optimal for rational miners.  This would be a tragedy of the commons.

In addition to that, average block confirmation time, and hence the rate of 
inflation of the bitcoin currency, would now be subject to manipulation.  This 
undermines a core value of Bitcoin.

 On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach m...@friedenbach.org wrote:

   * For each block, the miner is allowed to select a different difficulty 
 (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, and 
 this miner-selected difficulty is used for the proof of work check. In 
 addition to adjusting the hashcash target, selecting a different difficulty 
 also raises or lowers the maximum block size for that block by a function of 
 the difference in difficulty.
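The quoted rule can be sketched as follows. The post does not specify the exact function relating difficulty to block size, so this sketch assumes a simple linear mapping; EXPECTED_DIFFICULTY and BASE_MAX_BLOCK_SIZE are illustrative constants, not values from the proposal.

```python
# Sketch of the miner-selected-difficulty idea (assumed linear mapping;
# the constants below are illustrative, not part of the proposal).

EXPECTED_DIFFICULTY = 1_000_000.0   # hypothetical network-expected difficulty
BASE_MAX_BLOCK_SIZE = 1_000_000     # bytes (the current 1MB limit)

def adjusted_max_block_size(chosen_difficulty: float) -> int:
    """Raise or lower the block-size cap in proportion to how far the
    miner's chosen difficulty deviates from the expected difficulty,
    clamped to the +/- 25% window described in the proposal."""
    ratio = chosen_difficulty / EXPECTED_DIFFICULTY
    if not 0.75 <= ratio <= 1.25:
        raise ValueError("difficulty outside the allowed +/- 25% window")
    return int(BASE_MAX_BLOCK_SIZE * ratio)

# A miner accepting 10% extra work may claim a 10% larger block:
print(adjusted_max_block_size(1_100_000.0))  # 1100000
```

Under this reading, a miner buys extra block space by doing extra proof of work, which is what ties block size to fee income in the proposal.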

--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 41

2015-05-08 Thread Gregory Maxwell
On Sat, May 9, 2015 at 12:00 AM, Damian Gomez dgomez1...@gmail.com wrote:

 ...of the following:

  the DH_GENERATION would in effect calculate the responses for a total
 overage of the public component, by adding a ternary option in the actual
 DH key (which I have attached to see if you can understand my logic)
[snip code]

Intriguing, and certainly a change from the normal pace around here.

 where w represents the weight of the total number of semantic
 constraints that an individual has expressed through the emotive packets that
 I am working on (implementation is difficult).  I think this is the
 appropriate route to implementing a greater block size that will be used in
 preventing interception of bundled information and replace value.  Client-side
 implementation will cut down transaction fees for the additional 264-bit
 implementation and greatly reduce the need for ewallet providers to do so.

In these posts I am reminded of and sense some qualitative
similarities with a 2012 proposal by Mr. NASDAQEnema of Bitcointalk
with respect to multigenerational token architectures. In particular,
your AES ModuleK Hashcodes (especially in light of Winternitz
compression) may constitute an L_2 norm attractor similar to the
motherbase birthpoint metric presented in that prior work.  Rethaw and
I provided a number of points for consideration which may be equally
applicable to your work:
https://bitcointalk.org/index.php?topic=57253.msg682056#msg682056

Your invocation of emotive packets suggests that you may be a
colleague of Mr. Virtuli Beatnik?  While not (yet) recognized as a
star developer himself, his eloquent language and his mastery of skb
crypto-calculus and differential-kernel number-ontologies, demonstrated
in his latest publication ( https://archive.org/details/EtherealVerses
), make me think that he'd be an ideal collaborator for your work in
this area.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-08 Thread Mark Friedenbach
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
interface, the transaction is shown with its transacted amount and the
approved fee range. All of the inputs used are placed on hold until the
transaction gets a confirmation. As time goes by and it looks like the
transaction is not getting accepted, successively higher fee versions are
released. You can opt-out and send a no-fee or base-fee-only transaction,
but that should not be the default.

On the receiving end, local policy controls how much fee should be spent
trying to obtain confirmations before alerting the user, if there are fees
available in the hot wallet to do this. The receiving wallet then adds its
own fees via a spend if it thinks insufficient fees were provided to get a
confirmation. Again, this should all be automated so long as there is a hot
wallet on the receiving end.

Is this more complicated than now, where blocks are not full and clients
generally don't have to worry about their transactions eventually
confirming? Yes, it is significantly more complicated. But such
complication is unavoidable. It is a simple fact that the block size cannot
increase so much as to cover every single use by every single person in the
world, so there is no getting around the reality that we will have to
transition into an economy where at least one side has to pay up for a
transaction to get confirmation, at all. We are going to have to deal with
this issue whether it is now at 1MB or later at 20MB. And frankly, it'll be
much easier to do now.
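The broadcast schedule described at the top of this message can be sketched as follows. The fee bounds, step count, and geometric spacing are illustrative assumptions; the post does not specify how a wallet should space the pre-signed fee levels.

```python
# Illustrative fee-escalation schedule for replace-by-fee (assumed
# geometric spacing; the fee bounds and step count are made-up values).

def fee_schedule(low_fee: int, high_fee: int, steps: int) -> list[int]:
    """Return 'steps' fee levels from low_fee to high_fee (in satoshis).
    The wallet signs one version of the transaction per level, broadcasts
    the lowest-fee version first, and releases successively higher-fee
    versions as time passes without a confirmation."""
    if steps < 2:
        return [high_fee]
    ratio = (high_fee / low_fee) ** (1 / (steps - 1))
    return [round(low_fee * ratio ** i) for i in range(steps)]

print(fee_schedule(1000, 8000, 4))  # [1000, 2000, 4000, 8000]
```

All inputs stay on hold until one version confirms, so the user sees a single transaction with an approved fee range rather than four competing payments.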

On Fri, May 8, 2015 at 4:15 PM, Aaron Voisine vois...@gmail.com wrote:

 That's fair, and we've implemented child-pays-for-parent for spending
 unconfirmed inputs in breadwallet. But what should the behavior be when
 those options aren't understood/implemented/used?

 My argument is that the less risky, more conservative default fallback
 behavior should be either non-propagation or delayed confirmation, which is
 generally what we have now, until we hit the block size limit. We still
 have lots of safe, non-controversial, easy to experiment with options to
 add fee pressure, causing users to economize on block space without
 resorting to dropping transactions after a prolonged delay.

 Aaron Voisine
 co-founder and CEO
 breadwallet.com

 On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach m...@friedenbach.org
 wrote:

 On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:

 This is a clever way to tie block size to fees.

 I would just like to point out though that it still fundamentally is
 using hard block size limits to enforce scarcity. Transactions with below
 market fees will hang in limbo for days and fail, instead of failing
 immediately by not propagating, or seeing degraded, long confirmation times
 followed by eventual success.


 There are already solutions to this which are waiting to be deployed as
 default policy to bitcoind, and need to be implemented in other clients:
 replace-by-fee and child-pays-for-parent.





Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Raystonn .
I have a proposal for wallets such as yours. How about creating all transactions with an expiration time, starting with a low fee, then replacing them with new transactions that have a higher fee as time passes. Users can pick the fee curve they desire based on the transaction priority they want to advertise to the network. Users set the priority in the wallet, and the wallet software translates it to a specific fee curve used in the series of expiring transactions. In this manner, transactions are never left hanging for days, and probably not even for hours.
-Raystonn
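A minimal sketch of the priority-to-fee-curve translation described above; the curve parameters and priority names are illustrative assumptions, since the post does not define concrete curves.

```python
# Sketch of a priority-selected fee curve (all parameter values are
# hypothetical; the post does not specify concrete curves).

CURVES = {
    "low":    {"start_fee": 1000, "step": 500,  "interval_min": 60},
    "normal": {"start_fee": 2000, "step": 1000, "interval_min": 30},
    "urgent": {"start_fee": 5000, "step": 5000, "interval_min": 10},
}

def fee_at(priority: str, minutes_elapsed: int) -> int:
    """Fee (in satoshis) of the replacement transaction that should be
    live 'minutes_elapsed' after the first broadcast: the fee steps up
    once per interval until the transaction confirms or expires."""
    c = CURVES[priority]
    return c["start_fee"] + c["step"] * (minutes_elapsed // c["interval_min"])

print(fee_at("normal", 90))  # 5000
```

Each step would be realized as a replacement transaction carrying an expiration, so no version lingers once a higher-fee successor is broadcast.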

On 8 May 2015 1:17 pm, Aaron Voisine vois...@gmail.com wrote:As the author of a popular SPV wallet, I wanted to weigh in, in support of the Gavins 20Mb block proposal.The best argument Ive heard against raising the limit is that we need fee pressure.  I agree that fee pressure is the right way to economize on scarce resources. Placing hard limits on block size however is an incredibly disruptive way to go about this, and will severely negatively impact users experience.When users pay too low a fee, they should:1) See immediate failure as they do now with fees that fail to propagate.2) If the fee lower than it should be but not terminal, they should see degraded performance, long delays in confirmation, but eventual success. This will encourage them to pay higher fees in future.The worst of all worlds would be to have transactions propagate, hang in limbo for days, and then fail. This is the most important scenario to avoid. Increasing the 1Mb block size limit I think is the simplest way to avoid this least desirable scenario for the immediate future.We can play around with improved transaction selection for blocks and encourage miners to adopt it to discourage low fees and create fee pressure. These could involve hybrid priority/fee selection so low fee transactions see degraded performance instead of failure. This would be the conservative low risk approach.Aaron Voisineco-founder and CEObreadwallet.com


Re: [Bitcoin-development] Block Size Increase

2015-05-08 Thread Mark Friedenbach
The problems with that are larger than time being unreliable. It is no
longer reorg-safe, as transactions can expire in the course of a reorg, and
any transaction built on the now-expired transaction is invalidated.

On Fri, May 8, 2015 at 1:51 PM, Raystonn rayst...@hotmail.com wrote:

 Replace by fee is what I was referencing.  End-users interpret the old
 transaction as expired.  Hence the nomenclature.  An alternative is a new
 feature that operates in the reverse of time lock, expiring a transaction
 after a specific time.  But time is a bit unreliable in the blockchain.



[Bitcoin-development] Softfork signaling improvements

2015-05-08 Thread Douglas Roark
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hello. I've seen Greg make a couple of posts online
(https://bitcointalk.org/index.php?topic=1033396.msg11155302#msg11155302
is one such example) where he has mentioned that Pieter has a new
proposal for allowing multiple softforks to be deployed at the same
time. As discussed in the thread I linked, the idea seems simple
enough. Still, I'm curious if the actual proposal has been posted
anywhere. I spent a few minutes searching the usual suspects (this
mailing list, Reddit, Bitcointalk, IRC logs, BIPs) and can't find
anything.

Thanks.

- ---
Douglas Roark
Senior Developer
Armory Technologies, Inc.
d...@bitcoinarmory.com
PGP key ID: 92ADC0D7
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJVTQ4eAAoJEGybVGGSrcDX8eMQAOQiDA7an+qZBqDfVIwEzY2C
SxOVxswwxAyTtZNM/Nm+8MTq77hF8+3j/C3bUbDW6wCu4QxBYA/uiCGTf44dj6WX
7aiXg1o9C4LfPcuUngcMI0H5ixOUxnbqUdmpNdoIvy4did2dVs9fAmOPEoSVUm72
6dMLGrtlPN0jcLX6pJd12Dy3laKxd0AP72wi6SivH6i8v8rLb940EuBS3hIkuZG0
vnR5MXMIEd0rkWesr8hn6oTs/k8t4zgts7cgIrA7rU3wJq0qaHBa8uASUxwHKDjD
KmDwaigvOGN6XqitqokCUlqjoxvwpimCjb3Uv5Pkxn8+dwue9F/IggRXUSuifJRn
UEZT2F8fwhiluldz3sRaNtLOpCoKfPC+YYv7kvGySgqagtNJFHoFhbeQM0S3yjRn
Ceh1xK9sOjrxw/my0jwpjJkqlhvQtVG15OsNWDzZ+eWa56kghnSgLkFO+T4G6IxB
EUOcAYjJkLbg5ssjgyhvDOvGqft+2e4MNlB01e1ZQr4whQH4TdRkd66A4WDNB+0g
LBqVhAc2C8L3g046mhZmC33SuOSxxm8shlxZvYLHU2HrnUFg9NkkXi1Ub7agMSck
TTkLbMx17AvOXkKH0v1L20kWoWAp9LfRGdD+qnY8svJkaUuVtgDurpcwEk40WwEZ
caYBw+8bdLpKZwqbA1DL
=ayhE
-END PGP SIGNATURE-
