Re: [Bitcoin-development] A way to create a fee market even without a block size limit (2013)

2015-05-11 Thread Sergio Lerner
On 10/05/2015 06:07 p.m., Gregory Maxwell wrote:
 On Sun, May 10, 2015 at 8:45 PM, Sergio Lerner
 sergioler...@certimix.com wrote:
 Can the system be gamed?
 Users can pay fees or a portion of fees out of band to miner(s); this
 is undetectable to the network.
Then this is exactly what is needed. Let me explain.

I know of 5 methods for a user to pay fees to a miner. I will explain
each method and why none of them prevents the fee market from being
created:

1) By transaction fees

This is the standard method. It would be limited by the CoVar
algorithm, and would create the fee market if it were the only way to
pay fees.

2) By creating multiple transactions, each adding an output that pays
the fees to a different miner (at a known miner address). The user does
not pre-negotiate anything with miners.

This requires the transaction to have an additional output, and
requires sending a different transaction to each miner through the p2p
network, each having an output paying a known address of that miner.
But the network does not propagate double-spends, so these transactions
would need to be sent directly to the top miners, and to all of them at
the same time. The IP addresses of the top miners are not generally
public, and their nodes may not accept new incoming connections. Also,
having an additional output makes the transactions larger, so they will
score lower by whatever metric the miner uses to choose transactions.
Last, miners must be programmed to automatically interpret payments to
their addresses as fees. The resulting protocol is expensive and very
difficult to run reliably: any delay would make one miner receive the
transaction from another miner first and reject the double-spend being
sent directly to it, increasing the average confirmation time.
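
To make the mechanics concrete, here is a hypothetical sketch of the
user's side (toy dict-based transactions; no real wallet or library API
is implied):

    # Method 2 sketch: one conflicting variant per miner, all spending
    # the same coins, each adding a fee output to that miner's address.

    def make_variants(base_inputs, payment_output, miner_addrs, fee_btc):
        variants = []
        for addr in miner_addrs:
            tx = {
                "vin": list(base_inputs),                   # same coins every time
                "vout": [payment_output, (addr, fee_btc)],  # per-miner fee output
            }
            variants.append(tx)
        # All variants are mutually double-spending, so the p2p network
        # will relay only one of them; each must be delivered directly
        # to "its" miner, all at the same moment.
        return variants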

3) By adding an anyone-can-spend output for fees, so the miner can
spend that output in the same block. The user does not pre-negotiate
anything with miners.

We could hard-fork to disallow spending outputs created in the same
block. That would be a drawback, unless we also reduce the block rate,
which is my proposal. However, spending in the same block also requires
storing an additional input in the block, which consumes at least 40
more bytes, and the transaction containing that input cannot be relayed
to the network in advance. So a block that uses this method to collect
fees from many transactions will propagate more slowly, and the miner
may end up losing money. The anyone-can-spend output takes
approximately 10 bytes. So if transmitting 10+40=50 extra bytes costs
more than the fees earned, miners have no incentive to game the system.
It has been studied that each kilobyte costs an additional 80 ms of
delay until a majority of the network knows about the block
("Information propagation in the Bitcoin network"). So 50 bytes cost
about 3.9 ms of propagation time, which with a 25 BTC subsidy is
roughly equivalent to 0.2 mBTC. Currently this is more than what
transactions pay in fees (about 0.1 mBTC), so this should not be a
problem for at least 5 years. And again, we could simply prevent
spending outputs in the same block they are created.
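
As a back-of-the-envelope check of those figures, under an assumption
of mine (not spelled out above) that a block delayed by t seconds loses
roughly t / block-interval of its expected reward to orphan risk:

    EXTRA_BYTES = 10 + 40        # anyone-can-spend output + collecting input
    DELAY_MS_PER_KB = 80         # "Information propagation in the Bitcoin network"
    BLOCK_INTERVAL_S = 600       # 10-minute blocks
    SUBSIDY_BTC = 25

    delay_s = EXTRA_BYTES / 1000 * DELAY_MS_PER_KB / 1000
    expected_loss_btc = delay_s / BLOCK_INTERVAL_S * SUBSIDY_BTC
    print(f"{delay_s * 1000:.1f} ms delay, "
          f"~{expected_loss_btc * 1000:.2f} mBTC expected loss")
    # -> 4.0 ms delay, ~0.17 mBTC expected loss (the ~0.2 mBTC above)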

4) Using a transaction with a single input containing exactly the
desired output amount plus fees, signing that input with SIGHASH_SINGLE
| SIGHASH_ANYONECANPAY, and adding a single output with the desired
amount. The miner can then join many of these transactions and finally
add an output to collect all the fees together, without using standard
transaction fees.

This is unreliable and cannot be systematically repeated without
creating a pre-transaction just to prepare a single input with exactly
the amount plus fees. The pre-transaction would itself need to pay
fees, so the problem is not avoided, just moved around.
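
For illustration, a minimal sketch of the miner's side, using toy types
rather than any real Bitcoin library; the one property it relies on is
that a SIGHASH_SINGLE | SIGHASH_ANYONECANPAY signature commits only to
its own input and the output at the same index:

    from dataclasses import dataclass, field

    @dataclass
    class Pair:
        txin: str    # input signed with SIGHASH_SINGLE | SIGHASH_ANYONECANPAY
        txout: str   # the single output that signature commits to

    @dataclass
    class Tx:
        vin: list = field(default_factory=list)
        vout: list = field(default_factory=list)

    def merge(pairs, miner_addr, excess_btc):
        tx = Tx()
        for p in pairs:            # keep indexes aligned: input i signs output i
            tx.vin.append(p.txin)
            tx.vout.append(p.txout)
        # Outputs beyond the signed indexes are unconstrained by those
        # signatures, so the miner collects the aggregate excess here.
        tx.vout.append(f"{excess_btc} BTC -> {miner_addr}")
        return tx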

5) By negotiating out of band with the miner beforehand. Anything could
be agreed between the user and the miner.

This actually creates a parallel out-of-band market for fees, which is
exactly what we want. If a user-to-miner pre-negotiation takes place,
then the miner can establish whatever price policy he wants to compete
and stay in business, since block data propagation costs money. So
there will be two fee markets, the out-of-band market and the in-band
market, and both should converge.

My conclusion is that fee markets will be created, and that the
alternative fee-paying methods (without a pre-negotiation) are neither
reliable nor cost-saving. The full proposal would be to use the CoVar
method, reduce the block rate to 1 minute, and disallow spending
outputs in the same block they are created.

Best regards,
 Sergio.




Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Thy Shizzle
Yes This!

So many people seem hung up on growing the block size! If gaining a higher tps
throughput is the main aim, I think this proposition to speed up block
creation has merit!

Yes, it will still lead to faster blockchain growth, due to 1 MB per ~1 minute
instead of per ~10 minutes, but the change to the protocol is minor: you are
only adding a different difficulty rate starting from height blah; no new
features or anything are being added, so there seems to me much less of a
security risk! Also, the impact of a hard fork should be minimal, because
there is nothing but absolute incentive for miners to mine at the new, easier
difficulty!

I feel this deserves a great deal of consideration, as opposed to blowing out
the block size through miners voting etc.

From: Sergio Lerner <sergioler...@certimix.com>
Sent: 11/05/2015 5:05 PM
To: bitcoin-development@lists.sourceforge.net
Subject: [Bitcoin-development] Reducing the block rate instead of increasing
the maximum block size


[Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Sergio Lerner
In this e-mail I'll do my best to argue that if you accept that
increasing transactions/second is a good direction to go, then
increasing the maximum block size is not the best way to do it. I argue
that the right direction is to decrease the block rate to 1 minute,
while keeping the block size limit at 1 megabyte (or increasing it from
a lower value such as 100 KByte and then using a step function).
I'm backing up my claims with many hours of research simulating the
Bitcoin network under different conditions [1].  I'll try to convince
you by responding to each of the arguments I've heard against it.

Arguments against reducing the block interval

1. It will encourage centralization, because participants of mining
pools will lose more money because of excessive initial block template
latency, which leads to higher stale shares

When a new block is solved, that information needs to propagate
throughout the Bitcoin network up to the mining pool operator nodes;
then a new block header candidate is created, and this header must be
propagated to all the mining pool users, either by a push or a pull
model. Generally the mining server pushes new work units to the
individual miners. If done the other way around, the server would need
to handle a high load of continuous work requests that would be
difficult to distinguish from a DDoS attack. So if the server pushes
new block header candidates to clients, then the problem boils down to
increasing the bandwidth of the servers to achieve a tenfold increase
in work distribution, or distributing the servers geographically to
achieve lower latency. Propagating blocks does not require additional
CPU resources, so mining pool administrators would need to moderately
increase their investment in server infrastructure to achieve lower
latency and higher bandwidth, but I guess the investment would be low.

2. It will increase the probability of a block-chain split

The convergence of the network relies on the diminishing probability of
two honest miners creating simultaneous competing blocks chains. To
increase the competition chain, competing blocks must be generated in
almost simultaneously (in the same time window approximately bounded by
the network average block propagation delay). The probability of a block
competition decreases exponentially with the number of blocks. In fact,
the probability of a sustained competition on ten 1-minute blocks is one
million times lower than the probability of a competition of one
10-minute block. So even if the competition probability of six 1-minute
blocks is higher than of six ten-minute blocks, this does not imply
reducing the block rate increases this chance, but on the contrary, 
reduces it.
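
To illustrate with a toy model (my own framing, not taken from the
simulations): suppose the chance that a block finds a same-height
competitor is roughly delay/interval, and that sustaining a competition
for k blocks requires that coincidence k times in a row:

    DELAY_S = 4.0   # assumed average block propagation delay

    def p_compete(interval_s, delay_s=DELAY_S):
        return delay_s / interval_s

    p_one_10min = p_compete(600)        # one 10-minute block
    p_ten_1min = p_compete(60) ** 10    # ten 1-minute blocks, sustained

    print(f"one 10-minute block:  {p_one_10min:.1e}")
    print(f"ten 1-minute blocks:  {p_ten_1min:.1e}")
    # The sustained 1-minute competition comes out many orders of
    # magnitude less likely, which is the sense in which more, faster
    # blocks converge rather than diverge.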

3. It will reduce the security of the network

The security of the network is based on three facts:
A- The miners are incentivized to extend the best chain
B- The probability of a reversal based on a long block competition
decreases as more confirmation blocks are appended.
C- Renting or buying hardware to perform a 51% attack is costly.

A still holds. B holds for the same number of confirmation blocks, so 6
confirmation blocks in a 10-minute block chain are approximately
equivalent to 6 confirmation blocks in a 1-minute block chain.
Only C changes, as renting the hashing power for 6 minutes is ten times
less expensive than renting it for 1 hour. However, there is no shop
where one can find 51% of the hashing power to rent right now, nor
probably will there ever be if Bitcoin succeeds. Last, you can still
have a 1-hour confirmation (60 1-minute blocks) if you wish for
high-valued payments, so the security decreases only if participants
wish to decrease it.

4. Reducing the block propagation time in the average case is good, but
what happens in the worst case?

Most methods proposed to reduce the block propagation delay do it only
in the average case. Any kind of block compression relies on both
parties sharing some previous information. In the worst case it's true
that a miner can create and try to broadcast a block that takes too
much time to verify or too much bandwidth to transmit. This is already
true on the Bitcoin network. Nevertheless, miners have no such
incentive, since they would be shooting themselves in the foot. Peter
Todd has argued that the best strategy for miners is actually to reach
51% of the network, but not more; in other words, to exclude the
slowest 49 percent. But this strategy of creating bloated blocks is too
risky in practice, and surely doomed to fail, as network conditions
change dynamically. Also, it would be perceived as an attack on the
network, and the miner (if it is a public mining pool) would probably
be blacklisted.

5. Thousands of SPV wallets running on mobile devices would need to be
upgraded (thanks Mike).

That depends on the current upgrade rate for SPV wallets like Bitcoin
Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
develop the source code for the change now and apply the change in

Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread insecurity
 So if the server pushes new block
 header candidates to clients, then the problem boils down to increasing
 bandwidth of the servers to achieve a tenfold increase in work
 distribution.

Most Stratum pools already push multiple header updates every block
period; bandwidth is really inconsequential, it's the latency that
kills. At present you are looking at up to 15 seconds between the first
and last pools pushing headers to their clients for the latest block.
That is sort of inconsequential with a 10-minute block time, but it
cuts into a 1-minute one very heavily.
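
Quantifying that, as simple share-of-interval arithmetic:

    PUSH_SPREAD_S = 15   # first-to-last pool header push spread, from above

    for interval_s in (600, 60):
        print(f"{interval_s // 60}-minute blocks: "
              f"up to {PUSH_SPREAD_S / interval_s:.0%} of the interval lost")
    # 10-minute blocks: up to 2% of the interval lost
    # 1-minute blocks: up to 25% of the interval lost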

Some pools already don't do their own validation of blocks, but simply
mirror other pools; pushing them to be even more latency-focused will
just make this an epidemic of invalidity rather than a solution.


 There are several proof-of-work cryptocurrencies in existence
 that have block intervals lower than 10 minutes and they work just fine.
 First there was Bitcoin with a 10-minute interval, then LiteCoin
 with a 2.5-minute interval, then DogeCoin with 1 minute, and then
 QuarkCoin with just 30 seconds.

You can't really use these as examples of things going "just fine".
None of these networks sees anything approaching the Bitcoin
transaction volume, and none has even remotely the same network size.
Some Bitcoin forks use floats in consensus-critical code and work "just
fine", for the moment. We can't justify poor decisions with "but the
altcoins are doing it".

Is there even a single study of the stale rates within these networks?



Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Peter Todd
On Mon, May 11, 2015 at 04:03:29AM -0300, Sergio Lerner wrote:
 Arguments against reducing the block interval
 
 1. It will encourage centralization, because participants of mining
 pools will lose more money because of excessive initial block template
 latency, which leads to higher stale shares

 When a new block is solved, that information needs to propagate
 throughout the Bitcoin network up to the mining pool operator nodes;
 then a new block header candidate is created, and this header must be
 propagated to all the mining pool users, either by a push or a pull
 model. Generally the mining server pushes new work units to the
 individual miners. If done the other way around, the server would need
 to handle a high load of continuous work requests that would be
 difficult to distinguish from a DDoS attack. So if the server pushes
 new block header candidates to clients, then the problem boils down to
 increasing the bandwidth of the servers to achieve a tenfold increase
 in work distribution, or distributing the servers geographically to
 achieve lower latency. Propagating blocks does not require additional
 CPU resources, so mining pool administrators would need to moderately
 increase their investment in server infrastructure to achieve lower
 latency and higher bandwidth, but I guess the investment would be low.

It's *way* easier to buy more bandwidth than it is to get lower latency.

After all, getting to the other side of the planet via fiber takes at
*minimum* 100ms simply due to the speed of light; routing overheads
approximately double or triple that for all but highly specialized and
very, very expensive networking services. Bandwidth simply can't fix
the speed of light.
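
Making that floor explicit (assumed constants of mine: light in fibre
travels at roughly 2/3 c, and the far side of the planet is about
20,000 km away along the surface):

    C_FIBER_KM_PER_S = 200_000      # ~2/3 of the speed of light in vacuum
    HALF_CIRCUMFERENCE_KM = 20_000

    one_way_ms = HALF_CIRCUMFERENCE_KM / C_FIBER_KM_PER_S * 1000
    print(f"one-way fibre minimum: {one_way_ms:.0f} ms")   # -> 100 ms
    # Routing overhead roughly doubles or triples this, and multi-hop
    # connectivity multiplies it again by 2-5x.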

It's also not at all realistic or desirable to assume connectivity in a
single hop, so you can again multiply that base latency by 2-5 times.

And on top of *that* you have to take into account latency from hasher
to mining pool - time that the hashing power isn't working on the new
block because their work unit hasn't been updated matters just as much
as the time to get that block to the pool in the first place. Being
forced to reduce that latency is very damaging to the ecosystem, as
you're making it more profitable to keep hashing power centralized.

In any case, even with 10 minute blocks pools already pay a lot of
attention to latency... Why make that problem 10x worse?

 2. It will increase the probability of a block-chain split

 The convergence of the network relies on the diminishing probability of
 two honest miners creating simultaneous competing block chains. To
 extend a competing chain, competing blocks must be generated almost
 simultaneously (within a time window approximately bounded by the
 network's average block propagation delay). The probability of a block
 competition decreases exponentially with the number of blocks. In fact,
 the probability of a sustained competition over ten 1-minute blocks is
 one million times lower than the probability of a competition over one
 10-minute block. So even if the competition probability of six 1-minute
 blocks is higher than that of six 10-minute blocks, this does not imply
 that reducing the block interval increases this chance; on the
 contrary, it reduces it.

Can you explain your reasoning here in detail?

 4. Reducing the block propagation time in the average case is good, but
 what happens in the worst case?

 Most methods proposed to reduce the block propagation delay do it only
 in the average case. Any kind of block compression relies on both
 parties sharing some previous information. In the worst case it's true
 that a miner can create and try to broadcast a block that takes too
 much time to verify or too much bandwidth to transmit. This is already
 true on the Bitcoin network. Nevertheless, miners have no such
 incentive, since they would be shooting themselves in the foot. Peter
 Todd has argued that the best strategy for miners is actually to reach
 51% of the network, but not more. In other words, to exclude the
 slowest 49

Actually the correct figure is less than ~30%:

http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html

 percent. But this strategy of creating bloated blocks is too risky in
 practice, and surely doomed to fail, as network conditions change
 dynamically.

They dynamically change? Source?

Remember that the strategy still gives you a benefit if you simply
target, say, 75% rather than the minimum threshold.

 Also, it would be perceived as an attack on the network, and the
 miner (if it is a public mining pool) would probably be blacklisted.

How do you see that blacklisting actually being done?

Equally, it's easy to portray such mining as being for the good of
Bitcoin - "we're just making transactions cheap! Tough luck if your
shitty pool can't keep up." This is quite unlike selfish mining.

 7. There has been insufficient testing and/or insufficient research into
 technical/economic implications of reducing

Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread insecurity
On 2015-05-11 10:34, Peter Todd wrote:
 How do you see that blacklisting actually being done?

Same way ghash.io was banned from the network when it used Finney
attacks against BetCoin Dice.

As Andreas Antonopoulos says, if any of the miners do anything bad, we
just ban them from mining. Any sort of attack like this only lasts 10
minutes as a result. Stop worrying so much.

https://youtu.be/ncPyMUfNyVM?t=20s





Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Dave Hudson
I proposed the same thing last year (there's a video of the presentation I
gave floating around somewhere). My intuition was that this would require
slowly reducing the inter-block time, probably by step reductions at
particular block heights.

Having had almost a year to think about it some more, there are a few subtleties:

1) I think it could discourage centralisation if the nominal 2-week period per
difficulty retarget is retained. If we reached 4032 blocks per retarget at a
5-minute block time then there would be 2x as many blocks at any given
difficulty, which increases the odds of a smaller pool finding a block and
thus getting a reward. Block rewards would have to drop in proportion to the
reduced interval to keep the total schedule of 21M coins on track, but the
reduction in variance is a win for smaller miners.

2) There are limits to how far the block time can be reduced. The speed of
light is the ultimate limiting factor here, and we would want to avoid
excessive orphan rates.

3) There would be some amount of confusion about numbers of confirmations. I
actually think that confirmation numbers are a really misleading idea anyway
and it would be safer to think in terms of "minutes of security". A zero-conf
transaction has zero minutes, while right now 1, 2, 3 and 6 confirmations
would be ten minutes, twenty minutes, thirty minutes and sixty minutes
respectively. If our block time were 5 minutes then 8 confirmations would be
forty minutes of security; if the block time were 2.5 minutes then 8
confirmations would be twenty minutes of security. The minutes-of-security
measure indicates the mean number of minutes for which the entire network's
hash rate would be required to undo a transaction (see the sketch after this
list).

4) Reducing the inter-block time reduces the variance in reaching that sixty
minutes of security. The variance around finding 6 blocks with a 10-minute
interval is much wider than the variance around finding 12 blocks with a
5-minute interval.
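
Two quick sanity checks on the numbers above, as a sketch ("minutes of
security" is this post's own term; the helper names are mine):

    def blocks_per_retarget(block_time_min):
        # the nominal 2-week retarget period of point 1, kept fixed
        return int(14 * 24 * 60 / block_time_min)

    def minutes_of_security(confirmations, block_time_min):
        # mean minutes of the entire network's hash rate needed to undo
        # a transaction (point 3)
        return confirmations * block_time_min

    assert blocks_per_retarget(10) == 2016   # today's schedule
    assert blocks_per_retarget(5) == 4032    # the figure in point 1
    assert minutes_of_security(6, 10) == 60  # six confirmations today
    assert minutes_of_security(8, 5) == 40   # the examples in point 3
    assert minutes_of_security(8, 2.5) == 20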




Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Dave Hudson

 On 11 May 2015, at 12:10, insecurity@national.shitposting.agency wrote:
 
 On 2015-05-11 10:34, Peter Todd wrote:
 How do you see that blacklisting actually being done?
 
  Same way ghash.io was banned from the network when it used Finney attacks
  against BetCoin Dice.
 
 As Andreas Antonopoulos says, if any of the miners do anything bad, we
 just ban them from mining. Any sort of attack like this only lasts 10
 minutes as a result. Stop worrying so much.

This doesn't work because a large-scale miner can trivially make themselves
look like a very large number of much smaller-scale miners. Their ability to
minimize variance comes from the cumulative totals they control, so 10 pools
of 1% of the network cumulatively have the same variance as 1 pool with 10% of
the network (see the check below). It's also very easy for miners to relay
blocks via different addresses, and the cost is minimal. The biggest cost
would be in DDoS prevention, and a miner that split their pool into lots of
small fragments would actually give themselves the ability to do quite a lot
of DDoS mitigation anyway. If no-one is doing this right now it's simply
because they've not had the right incentives to make it worthwhile; if the
incentives make it worthwhile then this is pretty trivial to do.
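
A quick check of the variance claim, modelling block finds as Poisson
(a standard assumption, mine rather than Dave's): the sum of ten
independent Poisson(lambda/10) processes is a Poisson(lambda) process,
so ten 1% pools under one owner behave exactly like one 10% pool:

    NETWORK_BLOCKS_PER_DAY = 144
    lam = 0.10 * NETWORK_BLOCKS_PER_DAY   # expected daily blocks, 10% share

    var_one_10pct_pool = lam              # Poisson: variance equals mean
    var_ten_1pct_pools = 10 * (lam / 10)  # variances of independent finds add

    assert var_one_10pct_pool == var_ten_1pct_pools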

This is one area where anonymity on the part of transaction validators and
block makers makes it pretty much impossible to maintain any sort of
sanctions against antisocial behaviour.


Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Christian Decker
The propagation speed gain from having smaller blocks is linear in the size
reduction, down to a small size, after which the delay of the first byte
prevails [1]. However, the blockchain fork rate increases superlinearly,
giving an overall worse tradeoff. A high blockchain fork rate is a symptom
of inefficient use of the network's mining resources, and may give an
advantage to an attacker that is more efficient at communicating internally.

I'd strongly advise against increasing the block generation rate in Bitcoin;
it'd be a very controversial proposal and would not solve anything.

[1]
http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf




Re: [Bitcoin-development] Bitcoin core 0.11 planning

2015-05-11 Thread Wladimir
A reminder - the feature freeze and string freeze are coming up this Friday the 15th.

Let me know if your pull request is ready to be merged before then.

Wladimir

On Tue, Apr 28, 2015 at 7:44 AM, Wladimir J. van der Laan
laa...@gmail.com wrote:
 Hello all,

 The release window for 0.11 is nearing, I'd propose the following schedule:

 2015-05-01  Soft translation string freeze
 Open Transifex translations for 0.11
 Finalize and close translation for 0.9

 2015-05-15  Feature freeze, string freeze

 2015-06-01  Split off 0.11 branch
 Tag and release 0.11.0rc1
 Start merging for 0.12 on master branch

 2015-07-01  Release 0.11.0 final (aim)

 In contrast to former releases, which were protracted for months, let's try
 to be more strict about the dates. Of course it is always possible for
 last-minute critical issues to interfere with the planning. The release will
 not be held up for features, though, and anything that does not make it into
 0.11 will be postponed to the next release, scheduled for the end of the year.

 Wladimir



[Bitcoin-development] Fwd: Bitcoin core 0.11 planning

2015-05-11 Thread Wladimir
On Tue, Apr 28, 2015 at 11:01 AM, Pieter Wuille pieter.wui...@gmail.com wrote:
 As softforks almost certainly require backports to older releases and other
 software anyway, I don't think they should necessarily be bound to Bitcoin
 Core major releases. If they don't require large code changes, we can easily
 do them in minor releases too.

Agree here - there is no need to time consensus changes with a major
release, as they need to be ported back to older releases anyhow.
(I don't really classify them as software features, but as properties of
the underlying system that we need to adapt to.)

Wladimir



[Bitcoin-development] Long-term mining incentives

2015-05-11 Thread Thomas Voegtlin
The discussion on block size increase has brought some attention to the
other elephant in the room: Long-term mining incentives.

Bitcoin derives its current market value from the assumption that a
stable, steady-state regime will be reached in the future, where miners
have an incentive to keep mining to protect the network. Such a steady
state regime does not exist today, because miners get most of their
reward from the block subsidy, which will progressively be removed.

Thus, today's 3 billion USD question is the following: Will a steady
state regime be reached in the future? Can such a regime exist? What are
the necessary conditions for its existence?

Satoshi's paper suggests that this may be achieved through miner fees.
Quite a few people seem to take this for granted, and are working to
make it happen (developing CPFP and replace-by-fee). This explains part
of the opposition to raising the block size limit; some people would
like to see some fee pressure building up first, in order to get closer
to a regime where miners are incentivised by transaction fees instead of
the block subsidy. Indeed, the emergence of a working fee market would be
extremely reassuring for the long-term viability of Bitcoin. So, the
thinking goes, by raising the block size limit, we would be postponing a
crucial reality check. We would be buying time, at the expense of
Bitcoin's decentralization.

OTOH, proponents of a block size increase have a very good point: if the
block size is not raised soon, Bitcoin is going to enter a new, unknown
and potentially harmful regime. In the current regime, almost all
transactions get confirmed quickly, and fee pressure does not exist. Mike
Hearn suggested that, when blocks reach full capacity and users start to
experience confirmation delays and confirmation uncertainty, users will
simply go away and stop using Bitcoin. To me, that outcome sounds very
plausible indeed. Thus, proponents of the block size increase are
conservative; they are trying to preserve the current regime, which is
known to work, instead of letting the network enter uncharted territory.

My problem is that this seems to lack a vision. If the maximal block
size is increased only to buy time, or because some people think that 7
tps is not enough to compete with VISA, then I guess it would be
healthier to try to develop off-chain infrastructure first, such as the
Lightning network.

OTOH, I also fail to see evidence that a limited block capacity will
lead to a functional fee market, able to sustain a steady state. A
functional market requires well-informed participants who make rational
choices and accept the outcomes of their choices. That is not the case
today, and to believe that it will magically happen because blocks start
to reach full capacity sounds a lot like wishful thinking.

So here is my question, to both proponents and opponents of a block size
increase: What steady-state regime do you envision for Bitcoin, and what
is your plan to get there? More specifically, what will the steady-state
regime look like? Will users experience fee pressure and delays, or will
it look more like a scaled-up version of what we enjoy today? Should fee
pressure be increased jointly with subsidy decrease, or as soon as
possible, or never? What incentives will exist for miners once the
subsidy is gone? Will miners have an incentive to permanently fork off
the last block and capture its fees? Do you expect Bitcoin to work
because miners are altruistic/selfish/honest/caring?

A clear vision would be welcome.



Re: [Bitcoin-development] Long-term mining incentives

2015-05-11 Thread insecurity
On 2015-05-11 16:28, Thomas Voegtlin wrote:
 My problem is that this seems to lack a vision. If the maximal block
 size is increased only to buy time, or because some people think that 7
 tps is not enough to compete with VISA, then I guess it would be
 healthier to try to develop off-chain infrastructure first, such as the
 Lightning network.

If your end goal is to "compete with VISA" you might as well just give up
and go home right now. There are lots of terrible proposals where people
try to demonstrate that so many hundred thousand transactions a second
are possible if we just make the block size 500 GB. In the real world
with physical limits, you literally cannot verify more than a few
thousand ECDSA signatures a second on a CPU core. The tradeoff taken
in Bitcoin is that the signatures are pretty small, but they are also
slow to verify at any sort of scale. There's no way competing with a
centralised entity using on-chain transactions is even a sane goal.
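
For a rough sense of the ceiling that implies (both figures below are
assumptions of mine, not measurements from this post):

    SIGS_PER_SEC_PER_CORE = 5_000   # "a few thousand ECDSA signatures a second"
    SIGS_PER_TX = 2                 # typical inputs on a simple transaction

    tps_ceiling = SIGS_PER_SEC_PER_CORE / SIGS_PER_TX
    print(f"~{tps_ceiling:,.0f} tps per core, before doing anything else")
    # -> ~2,500 tps per core: nowhere near "hundred thousand transactions
    #    a second" without massive parallelism or different cryptography.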



Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 62

2015-05-11 Thread Damian Gomez
Hello,

I want to build on a conversation that I had with Peter (T?) regarding the
increase in block size in Bitcoin. The proposal, building on its current
structure, would be to prepend to the hash chain itself what would be the
first DER-decoded script, in order to verify integrity (trust) within a set
of transactions and of the originator themselves.

It is my belief that the process would begin with a new encryption tool using
a variant of the Winternitz OTS, for its existential unforgeability, as an
added signature on every wallet transaction, in order to provide a consensus
system that takes into account a personal level of integrity behind the
intention of a transaction. This signature would then be hashed, so that
there is an intermediate proxy state that verifies and evaluates the trust
function for the receiving transactions. This evaluation loop would itself be
a state in which the mining power, and the rewards derived from it, would
reflect an increased level of integrity as provided by the "brainers" of the
system, who are the signers of the transaction's authenticity; additionally,
it would program extranonces of x bits {72} in order to have a doubly valid
signature that the rest of the nodes would accept, so as to have a valid
address from which to continuously receive transactions.

There is a level of difficulty in obtaining brainers; fees would only apply
insofar as they are able to create authentic transactions based on the voting
power of the rest of the receiving nodes. The greater the number of faults
within the system from a brainer, the more his computational power would be
restricted, so as to provide a reward feedback system. This singularity in a
Byzantine consensus is only achieved if the route of an appropriate
transformation occurs, one that is invariant to the participants of the
system. Thus, being able to derive initial vector transformations from a
person's online identity is the responsibility that we have, to ensure and
calculate a Lagrangian method that utilizes a set of convolutional neural
network functions [backpropagation, fuzzy logic] and a transformation
function, taking the vectors of transformations in a Karhunen-Loève algorithm
and using the convergence of a baryon wave function, in order to proceed with
a baseline reading of the current level of integrity in the state today,
which is an instance of actionable acceleration within a system.

This is something that I am trying to continue to parse out. There are still
heavy questions to be answered (the most important being the consent of the
people to measure their own levels of integrity through mined information).
There must always be the option to disconnect from a transactional system
where payments occur, in order to allow a level of solace and peace within
individuals -- without repercussions -- and a separate system that supports
the offline realm as well. (This is a design problem.)

Ultimately, quite literally, such a transaction system could exist to provide
detailed analysis that promotes integrity as the basis for sharing
information. The fee structure would be eliminated, due to the level of
integrity and processing power required to have messages, transactions, and
reviews of unfiduciary-responsible organizations be merited as highly true
(0.9 in fuzzy logic), in order to promote well-being in the state. That is
its own reward: the strength of having more processing speed.


FYI: thank you to Peter, who nudged my thinking and interest (again) in this
area.

This is something I am attempting to design in order to program it, though I
am not an expert and my technology stack is limited to Java and C (and my
issues with them). I provided a class the other day that was pseudocode for
the beginning of the consensus. Now I want to know if I am missing any of
the technical paradigms that might make this illogical. I know that with the
advent of 7-petabyte computers one could easily store 2.5 petabytes of human
information for just an instance of integrity, not to mention other
emotions.



*Also, might someone be able to provide a bit of information on the Bitcoin
Core project?*

Thank you again. Damian.


Re: [Bitcoin-development] Bitcoin-development Digest, Vol 48, Issue 63

2015-05-11 Thread Damian Gomez
By the way, how awful that I didn't cite my sources. Please excuse me; this
is definitely not my intention. Sometimes I get too caught up in my own
excitement.

1) Martin, J.-P., Alvisi, L. Fast Byzantine Consensus. *IEEE Transactions on
Dependable and Secure Computing*, 2006, 3(3). doi: ?  See Jean-Philippe
Martin and Lorenzo Alvisi.

2) https://eprint.iacr.org/2011/191.pdf - One-Time Winternitz Signatures.



Re: [Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

2015-05-11 Thread Luke Dashjr
On Monday, May 11, 2015 7:03:29 AM Sergio Lerner wrote:
 1. It will encourage centralization, because participants of mining
 pools will lose more money because of excessive initial block template
 latency, which leads to higher stale shares

 When a new block is solved, that information needs to propagate
 throughout the Bitcoin network up to the mining pool operator nodes;
 then a new block header candidate is created, and this header must be
 propagated to all the mining pool users, either by a push or a pull
 model. Generally the mining server pushes new work units to the
 individual miners. If done the other way around, the server would need
 to handle a high load of continuous work requests that would be
 difficult to distinguish from a DDoS attack. So if the server pushes
 new block header candidates to clients, then the problem boils down to
 increasing the bandwidth of the servers to achieve a tenfold increase
 in work distribution, or distributing the servers geographically to
 achieve lower latency. Propagating blocks does not require additional
 CPU resources, so mining pool administrators would need to moderately
 increase their investment in server infrastructure to achieve lower
 latency and higher bandwidth, but I guess the investment would be
 low.

1. Latency is what matters here, not bandwidth so much. And latency reduction
is either expensive or impossible.
2. Mining pools are mostly run at a loss (with the exception of only the most
centralised pools), and have nothing to invest in increasing infrastructure.

 3. It will reduce the security of the network

 The security of the network is based on three facts:
 A- The miners are incentivized to extend the best chain
 B- The probability of a reversal based on a long block competition
 decreases as more confirmation blocks are appended.
 C- Renting or buying hardware to perform a 51% attack is costly.

 A still holds. B holds for the same number of confirmation blocks, so 6
 confirmation blocks in a 10-minute block chain are approximately
 equivalent to 6 confirmation blocks in a 1-minute block chain.
 Only C changes, as renting the hashing power for 6 minutes is ten times
 less expensive than renting it for 1 hour. However, there is no shop
 where one can find 51% of the hashing power to rent right now, nor
 probably will there ever be if Bitcoin succeeds. Last, you can still
 have a 1-hour confirmation (60 1-minute blocks) if you wish for
 high-valued payments, so the security decreases only if participants
 wish to decrease it.

You're overlooking at least:
1. The real network has to suffer wasted work as a result of the stale blocks, 
while an attacker does not. If 20% of blocks are stale, the attacker only 
needs 40% of the legitimate hashrate to achieve 50%-in-practice.
2. Since blocks are individually weaker, it becomes cheaper to DoS nodes with 
invalid blocks. (not sure if this is a real concern, but it ought to be 
considered and addressed)

 4. Reducing the block propagation time in the average case is good, but
 what happens in the worst case?

 Most methods proposed to reduce the block propagation delay do it only
 in the average case. Any kind of block compression relies on both
 parties sharing some previous information. In the worst case it's true
 that a miner can create and try to broadcast a block that takes too
 much time to verify or too much bandwidth to transmit. This is already
 true on the Bitcoin network. Nevertheless, miners have no such
 incentive, since they would be shooting themselves in the foot. Peter
 Todd has argued that the best strategy for miners is actually to reach
 51% of the network, but not more; in other words, to exclude the
 slowest 49 percent. But this strategy of creating bloated blocks is too
 risky in practice, and surely doomed to fail, as network conditions
 change dynamically. Also, it would be perceived as an attack on the
 network, and the miner (if it is a public mining pool) would probably
 be blacklisted.

One can probably overcome changing network conditions merely by trying to 
reach 75% and exclude the slowest 25%. Also, there is no way to identify or 
blacklist miners.

 5. Thousands of SPV wallets running on mobile devices would need to be
 upgraded (thanks Mike).

 That depends on the current upgrade rate for SPV wallets like Bitcoin
 Wallet and BreadWallet. Suppose that the upgrade rate is 80%/year: we
 develop the source code for the change now and apply the change in Q2
 2016; then most of the nodes will already be upgraded by the time the
 hardfork takes place. Also, a public notice telling people to upgrade,
 in web pages, bitcointalk, SPV wallet warnings, coindesk, one year in
 advance, will give SPV wallet users plenty of time to upgrade.

I agree this shouldn't be a real concern. SPV wallets are also more likely
(and less risky, globally) to be auto-updated.

 6. If there are 10x more blocks, then there are 10x more block headers,
 and that increases the amount of bandwidth SPV wallets 

Re: [Bitcoin-development] Long-term mining incentives

2015-05-11 Thread Gavin Andresen
I think long-term the chain will not be secured purely by proof-of-work. I
think when the Bitcoin network was tiny, running solely on people's home
computers, proof-of-work was the right way to secure the chain, and the only
fair way to both secure the chain and distribute the coins.

See https://gist.github.com/gavinandresen/630d4a6c24ac6144482a  for some
half-baked thoughts along those lines. I don't think proof-of-work is the
last word in distributed consensus (I also don't think any alternatives are
anywhere near ready to deploy, but they might be in ten years).

I also think it is premature to worry about what will happen in twenty or
thirty years when the block subsidy is insignificant. A lot will happen in
the next twenty years. I could spin a vision of what will secure the chain
in twenty years, but I'd put a low probability on that vision actually
turning out to be correct.

That is why I keep saying Bitcoin is an experiment. But I also believe that
the incentives are correct, and there are a lot of very motivated, smart,
hard-working people who will make it work. When you're talking about trying
to predict what will happen decades from now, I think that is the best you
can (honestly) do.

--
Gavin Andresen