Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Adam Back
I think he's talking about something more like extension blocks, which
are actually even soft-forkable (and keep the 21m coins, obviously)

See
https://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg07937.html

and Tier Nolan's technical details:
https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07927.html

Discussion and claimed properties:

https://www.reddit.com/r/Bitcoin/comments/39kqzs/how_about_a_softfork_optin_blocksize_increase/

Adam

On 15 June 2015 at 19:53, Raystonn .  wrote:
>> The solution is to hard-fork and merge-mine. This way, both can live, and
>> mining power is not divided.
>
> No, this would essentially be blessing an increase to 42M bitcoins, half on
> each chain.  You could expect a severe market price correction if this were
> to happen.
>
> From: Rebroad (sourceforge)
> Sent: Monday, June 15, 2015 4:16 AM
> Cc: Bitcoin Dev
> Subject: Re: [Bitcoin-development] comments on BIP 100
>
> My understanding of this debate is that there are some people who want to
> keep Bitcoin at 1MB block limit, and there are some who want to increase it.
>
> I for one am curious to see how 1MB limited bitcoin evolves, and I believe
> we can all have a chance to see this AND hard-fork bitcoin to remove the
> block size restriction.
>
> To remove the 1MB limit requires a hard fork. This is not disputed. It's
> what we do with the original (1MB-limit) bitcoin that concerns me (and
> others, I am sure).
>
> What I would like to see is both being allowed to live. Harry Potter and
> Voldemort! (Except neither is evil!)
>
> The solution is to hard-fork and merge-mine. This way, both can live, and
> mining power is not divided.
>
> Dogecoin recently hardforked and this hardfork also involved switching to
> merge-mining, so it's been done successfully.
>
> So, simply, bitcoin as it is doesn't need to actually fork; instead, at
> a certain block size, a forked bitcoin with the blocksize limit removed will
> start to be merge-mined alongside bitcoin. Miners can be ready for this.
> Wallets can be ready for this - in fact, most wallets and merchants will
> probably want to default to the bigger-blocksize fork, since it caters
> for more transactions per block.
>
> We still don't know how removing the block limit will pan out and what other
> problems with scalability will arise in the future, so by preserving the
> original bitcoin, we keep diversity, and people won't feel their investments
> in bitcoin are being unnecessarily put at risk (as their investments will
> stay in both the new and the old bitcoin).
>
> The bitcoin core developers can implement a patch like the one recently used
> for dogecoin, to allow the chain to fork at a set point, at which point
> bitcoins will be split into the new and the old. Branding will be an
> important issue here I think, so that there is as little confusion as
> possible. I think this is a small price to pay in return for not killing the
> original bitcoin (even if it's true that Satoshi did intend to remove the
> 1MB limit at some point).
>
> If I'm missing something obvious please let me know.
>
> On Mon, Jun 15, 2015 at 1:50 PM, Mike Hearn  wrote:
>>>
>>> The fact that using a centralized service is easier isn't a good reason
>>> IMHO. It disregards the long-term, and introduces systemic risk.
>>
>>
>> Well sure, that's easy for you to say, but you have a salary :) Other
>> developers may find the incremental benefits of decentralisation low vs
>> adding additional features, for instance, and who is to say they are wrong?
>>
>>>
>>> But in cases where using a decentralized approach doesn't *add* anything,
>>> I cannot reasonably promote it, and that's why I was against getutxos in the
>>> P2P protocol.
>>
>>
>> It does add something though! It means, amongst other things, I can switch
>> off all my servers, walk away for good, discard this Mike Hearn pseudonym I
>> invented for Bitcoin and the app will still work :) Surely that is an
>> important part of being decentralised?
>>
>> It also means that as the underlying protocol evolves over time, getutxos
>> can evolve alongside it. P2P protocol gets encrypted/authenticated? Great,
>> one more additional bit of security. If one day miners commit to UTXO sets,
>> great, one more additional bit of security. When we start including input
>> values in the signature hash, great ... one more step up in security.
>>
>> Anyway, I didn't really want to reopen this debate. I just point out that
>> third party services are quite happy to provide whatever developers need to
>> build great apps, even if doing so fails to meet some kind of perfect
>> cryptographic ideal. And that's why they're winning devs.

Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Raystonn .
> The solution is to hard-fork and merge-mine. This way, both can live, and 
> mining power is not divided.

No, this would essentially be blessing an increase to 42M bitcoins, half on 
each chain.  You could expect a severe market price correction if this were to 
happen.

From: Rebroad (sourceforge) 
Sent: Monday, June 15, 2015 4:16 AM
Cc: Bitcoin Dev 
Subject: Re: [Bitcoin-development] comments on BIP 100

My understanding of this debate is that there are some people who want to keep 
Bitcoin at 1MB block limit, and there are some who want to increase it. 

I for one am curious to see how 1MB limited bitcoin evolves, and I believe we 
can all have a chance to see this AND hard-fork bitcoin to remove the block 
size restriction.

To remove the 1MB limit requires a hard fork. This is not disputed. It's what 
we do with the original (1MB-limit) bitcoin that concerns me (and others, I am 
sure).

What I would like to see is both being allowed to live. Harry Potter and 
Voldemort! (Except neither is evil!)

The solution is to hard-fork and merge-mine. This way, both can live, and 
mining power is not divided.

Dogecoin recently hardforked and this hardfork also involved switching to 
merge-mining, so it's been done successfully.

So, simply, bitcoin as it is doesn't need to actually fork; instead, at a 
certain block size, a forked bitcoin with the blocksize limit removed will start 
to be merge-mined alongside bitcoin. Miners can be ready for this. Wallets can 
be ready for this - in fact, most wallets and merchants will probably want to 
default to the bigger-blocksize fork, since it caters for more transactions per 
block.

We still don't know how removing the block limit will pan out and what other 
problems with scalability will arise in the future, so by preserving the 
original bitcoin, we keep diversity, and people won't feel their investments in 
bitcoin are being unnecessarily put at risk (as their investments will stay in 
both the new and the old bitcoin).

The bitcoin core developers can implement a patch like the one recently used 
for dogecoin, to allow the chain to fork at a set point, at which point 
bitcoins will be split into the new and the old. Branding will be an important 
issue here I think, so that there is as little confusion as possible. I think 
this is a small price to pay in return for not killing the original bitcoin 
(even if it's true that Satoshi did intend to remove the 1MB limit at some 
point).

If I'm missing something obvious please let me know.

On Mon, Jun 15, 2015 at 1:50 PM, Mike Hearn  wrote:

The fact that using a centralized service is easier isn't a good reason 
IMHO. It disregards the long-term, and introduces systemic risk.


  Well sure, that's easy for you to say, but you have a salary :) Other 
developers may find the incremental benefits of decentralisation low vs adding 
additional features, for instance, and who is to say they are wrong?

But in cases where using a decentralized approach doesn't *add* anything, I 
cannot reasonably promote it, and that's why I was against getutxos in the P2P 
protocol.


  It does add something though! It means, amongst other things, I can switch off 
all my servers, walk away for good, discard this Mike Hearn pseudonym I 
invented for Bitcoin and the app will still work :) Surely that is an important 
part of being decentralised?

  It also means that as the underlying protocol evolves over time, getutxos can 
evolve alongside it. P2P protocol gets encrypted/authenticated? Great, one 
more additional bit of security. If one day miners commit to UTXO sets, great, 
one more additional bit of security. When we start including input values in 
the signature hash, great ... one more step up in security.

  Anyway, I didn't really want to reopen this debate. I just point out that 
third party services are quite happy to provide whatever developers need to 
build great apps, even if doing so fails to meet some kind of perfect 
cryptographic ideal. And that's why they're winning devs.

  Now back to your regularly scheduled block size debates ... 


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Rebroad (sourceforge)
My understanding of this debate is that there are some people who want to
keep Bitcoin at 1MB block limit, and there are some who want to increase it.

I for one am curious to see how 1MB limited bitcoin evolves, and I believe
we can all have a chance to see this AND hard-fork bitcoin to remove the
block size restriction.

To remove the 1MB limit requires a hard fork. This is not disputed. It's
what we do with the original (1MB-limit) bitcoin that concerns me (and
others, I am sure).

What I would like to see is both being allowed to live. Harry Potter and
Voldemort! (Except neither is evil!)

The solution is to hard-fork and merge-mine. This way, both can live, and
mining power is not divided.

Dogecoin recently hardforked and this hardfork also involved switching to
merge-mining, so it's been done successfully.

So, simply, bitcoin as it is doesn't need to actually fork; instead, at
a certain block size, a forked bitcoin with the blocksize limit removed will
start to be merge-mined alongside bitcoin. Miners can be ready for this.
Wallets can be ready for this - in fact, most wallets and merchants will
probably want to default to the bigger-blocksize fork, since it caters for
more transactions per block.

We still don't know how removing the block limit will pan out and what
other problems with scalability will arise in the future, so by preserving
the original bitcoin, we keep diversity, and people won't feel their
investments in bitcoin are being unnecessarily put at risk (as their
investments will stay in both the new and the old bitcoin).

The bitcoin core developers can implement a patch like the one recently
used for dogecoin, to allow the chain to fork at a set point, at which
point bitcoins will be split into the new and the old. Branding will
be an important issue here I think, so that there is as little confusion as
possible. I think this is a small price to pay in return for not killing
the original bitcoin (even if it's true that Satoshi did intend to remove
the 1MB limit at some point).

If I'm missing something obvious please let me know.

On Mon, Jun 15, 2015 at 1:50 PM, Mike Hearn  wrote:

> The fact that using a centralized service is easier isn't a good reason
>> IMHO. It disregards the long-term, and introduces systemic risk.
>>
>
> Well sure, that's easy for you to say, but you have a salary :) Other
> developers may find the incremental benefits of decentralisation low vs
> adding additional features, for instance, and who is to say they are wrong?
>
>
>> But in cases where using a decentralized approach doesn't *add* anything,
>> I cannot reasonably promote it, and that's why I was against getutxos in
>> the P2P protocol.
>>
>
> It does add something though! It means, amongst other things, I can switch
> off all my servers, walk away for good, discard this Mike Hearn pseudonym I
> invented for Bitcoin and the app will still work :) Surely that is an
> important part of being decentralised?
>
> It also means that as the underlying protocol evolves over time, getutxos
> can evolve alongside it. P2P protocol gets encrypted/authenticated? Great,
> one more additional bit of security. If one day miners commit to UTXO sets,
> great, one more additional bit of security. When we start including input
> values in the signature hash, great ... one more step up in security.
>
> Anyway, I didn't really want to reopen this debate. I just point out that
> third party services are quite happy to provide whatever developers need to
> build great apps, even if doing so fails to meet some kind of perfect
> cryptographic ideal. And that's why they're winning devs.
>
> Now back to your regularly scheduled block size debates ...
>
>


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Mike Hearn
>
> The fact that using a centralized service is easier isn't a good reason
> IMHO. It disregards the long-term, and introduces systemic risk.
>

Well sure, that's easy for you to say, but you have a salary :) Other
developers may find the incremental benefits of decentralisation low vs
adding additional features, for instance, and who is to say they are wrong?


> But in cases where using a decentralized approach doesn't *add* anything,
> I cannot reasonably promote it, and that's why I was against getutxos in
> the P2P protocol.
>

It does add something though! It means, amongst other things, I can switch
off all my servers, walk away for good, discard this Mike Hearn pseudonym I
invented for Bitcoin and the app will still work :) Surely that is an
important part of being decentralised?

It also means that as the underlying protocol evolves over time, getutxos
can evolve alongside it. P2P protocol gets encrypted/authenticated? Great,
one more additional bit of security. If one day miners commit to UTXO sets,
great, one more additional bit of security. When we start including input
values in the signature hash, great ... one more step up in security.

Anyway, I didn't really want to reopen this debate. I just point out that
third party services are quite happy to provide whatever developers need to
build great apps, even if doing so fails to meet some kind of perfect
cryptographic ideal. And that's why they're winning devs.

Now back to your regularly scheduled block size debates ...


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Pieter Wuille
On Mon, Jun 15, 2015 at 12:36 PM, Mike Hearn  wrote:

>
> Since you keep bringing this up, I'll try to clarify this once again.
>>
>
> I understand the arguments against it. And I think you are agreeing with
> me - Adam is bemoaning the way developers outsource stuff to third party
> services, and suggesting it is relevant to the block size debate. And we
> are saying, no, it's happening because it's easier than doing things in a
> decentralised way.
>

The fact that using a centralized service is easier isn't a good reason
IMHO. It disregards the long-term, and introduces systemic risk.

But in cases where using a decentralized approach doesn't *add* anything, I
cannot reasonably promote it, and that's why I was against getutxos in the
P2P protocol.

-- 
Pieter


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Mike Hearn
>
> Since you keep bringing this up, I'll try to clarify this once again.
>

I understand the arguments against it. And I think you are agreeing with me
- Adam is bemoaning the way developers outsource stuff to third party
services, and suggesting it is relevant to the block size debate. And we
are saying, no, it's happening because it's easier than doing things in a
decentralised way.


> If you can't do that, and are just aiming for removing central points of
> failure, run a bunch of servers yourself, and let others run their own.
> That sounds remarkably close to what you actually did, actually...
>

Right. There's a deeper issue here. The sort of 'trustless' querying of the
UTXO set that was demanded from me is impossible even with commitments,
because the answer can change the moment you receive it. All it takes is a
new block or new transaction to be broadcast a split second after you
download and use the data, and suddenly what you did is incorrect no matter
how many proofs you verified!

The only way to know this has happened is to be a full node and download
all transactions yourself ... and even then, you are trusting your peers to
actually relay you all transactions and not a subset. So in the end you can
never achieve perfection, only get closer to it.

But that situation is *fine* for many use cases, like showing someone a
snapshot of their crowdfund in a user interface. We just accept that what
we see on the screen may lag behind reality. It happens all the time with
all kinds of non-Bitcoin apps. We accept that there may be cases where the
answer we get is wrong. The software can nevertheless still be useful.

So Lighthouse compromises. It queries multiple peers and cross-references
their answers. If their answers don't match it shows an error on the screen
and won't show the user any status for their crowdfund at all. This error
has never occurred. Maybe one day it will. So the app gets more
decentralisation, more robustness, and accepts that the user interface
might one day show a wrong answer if the P2P network starts lying (or your
internet connection is hacked, but the right fix for that is P2P
encryption).
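
For concreteness, the cross-check amounts to something like this (a rough
Python sketch; Peer.get_utxos() is a hypothetical interface used for
illustration, not the actual bitcoinj/Lighthouse code):

    # Query several peers and only accept an answer they all agree on.
    def crosscheck_utxos(peers, outpoints, quorum=4):
        answers = []
        for peer in peers[:quorum]:
            try:
                answers.append(peer.get_utxos(outpoints))  # hypothetical call
            except ConnectionError:
                continue  # skip peers that don't answer
        if len(answers) < 2:
            raise RuntimeError("not enough peers answered to cross-check")
        if any(a != answers[0] for a in answers[1:]):
            # Peers disagree: show an error instead of a possibly wrong total.
            raise RuntimeError("peers disagree about the UTXO set")
        return answers[0]  # may still lag reality by a block or a broadcast

A single lying peer makes the UI refuse to show a total rather than show a
wrong one, which is exactly the balance-of-risks trade-off described above.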

Unfortunately this sort of balance-of-risks approach is considered a
non-starter in Bitcoin Core. So why would developers even try? The message
sent was clear:  even if you have an approach you think will work, don't
bother.

Much easier to just outsource to a trusted service indeed.


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Pieter Wuille
On Mon, Jun 15, 2015 at 11:27 AM, Mike Hearn  wrote:

> I persevered for several months to add a very small "API" I needed for my
> app to Bitcoin Core, and it was in the end a waste of time. There are no
> actionable items left for the getutxo patch, regardless, I had to fork
> Bitcoin to get it out there. It would have been *much* easier to just
> say, fuck it, I'll use blockchain.info and in fact some in this community
> told me to do exactly that. But, the approach I chose has been working fine
> for months now.
>

Since you keep bringing this up, I'll try to clarify this once again.

Since your patch was to enable querying spentness of particular outputs,
which is fundamentally unprovable data in Bitcoin as is (even your proposed
protocol that verifies scripts with amounts under sighash only proves
correctness of the txout data, not its spentness), I indeed don't see why
you would want anything else than querying such a service. I wish it were
different, but the choice is between querying a central service, or
trusting something a random peer on the internet tells you. At least with
the central service you can use an authenticated protocol with known keys
to detect MITM, and have someone to point to when they lie.
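
Concretely, "known keys" can be as simple as pinning the service's TLS
certificate before trusting any answer (a rough sketch; the host, endpoint
and fingerprint below are placeholders, not a real service or API):

    # Refuse to talk to the service unless it presents the certificate whose
    # fingerprint the operator published, so a MITM with a different key is
    # detected before any data is trusted.
    import hashlib
    import json
    import ssl
    import urllib.request

    HOST = "utxo.example.com"                       # placeholder service
    PINNED_SHA256 = "<fingerprint published by the operator>"

    def query_spentness(txid, vout):
        pem = ssl.get_server_certificate((HOST, 443))
        der = ssl.PEM_cert_to_DER_cert(pem)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            raise RuntimeError("certificate mismatch - possible MITM")
        url = "https://%s/utxo/%s/%d" % (HOST, txid, vout)  # placeholder endpoint
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read())          # e.g. {"spent": false}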

Not decentralized you say? Absolutely. But why do we want decentralization
in the first place? To remove central points of failure, and to reduce
trust. Bitcoin gets away with decentralization because it can validate (to a
greater or lesser extent) the data it received from its identityless peers. If
you can't do that, and are just aiming for removing central points of
failure, run a bunch of servers yourself, and let others run their own.
That sounds remarkably close to what you actually did, actually...

Do you want actually trustless querying of spentness in the future? Help
working on one of the several TXO commitment ideas to get them implemented.

-- 
Pieter


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Eric Lombrozo
> OK. O() notation normally refers to computational complexity, but ... I
> still don't get it - the vast majority of users don't run relaying nodes
> that take part in gossiping. They run web or SPV wallets. And the nodes
> that do take part don't connect to every other node.

It's a little scary, IMO, that the fact that the majority of nodes don't
relay, and only perform the most rudimentary level of validation if any, is
considered an acceptable feature of the protocol.

- Eric Lombrozo


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Mike Hearn
>
> That was probably insufficiently specific, let me rephrase: I am
> referring to the trend that much of the industry is built on web2.0
> technology using bitcoin via a library in a web scripting language


OK, good to hear that. I'm not happy about the use of web technologies in
wallets/services either, but the causes of that trend are nothing to do
with block chain sizes. It's more because there's a generation of
developers who see no alternatives.

With projects like Lighthouse, I'm trying to show people that they can
blend the good bits of the web with the good bits of more traditional
client side development, at a cost they can afford.

Unfortunately, as you know, one of the reasons that developers turn to
outsourced services is that those services actually like developers and
give them the features they need. Whereas any attempt to add protocol
features for app/wallet developers to Bitcoin Core becomes controversial
due to some perceived or real lack of perfection.

I persevered for several months to add a very small "API" I needed for my
app to Bitcoin Core, and it was in the end a waste of time. There are no
actionable items left for the getutxo patch, regardless, I had to fork
Bitcoin to get it out there. It would have been *much* easier to just say,
fuck it, I'll use blockchain.info and in fact some in this community told
me to do exactly that. But, the approach I chose has been working fine for
months now.

Compare this experience to companies like chain.com, blockcypher etc - when
developers say jump, they say "how high?"

So it's unreasonable for the Bitcoin Core developer group to constantly
call developers building apps idiots or "non-technical" (as I see so often
in this block size debate), and then complain that people don't write apps
in their preferred way! Just accept that decentralised app dev is already
hard, and the way Core is run makes it much harder still.


> As I said, I don't think we can expect Bitcoin to scale with no further
> algorithmic improvements.


A big part of the debate around this change is showing that this statement
is wrong. "Scaling" is not some kind of binary yes/no thing. It's a
continuous effort. You write a system that scales a certain amount, and
then if you find you need more capacity, you scale it again. Maybe that
 involves rewriting the existing code or maybe it just means improving what
you've got.

Or maybe (painful truth coming up) your product is not that compelling, or
times change and your users leave, and you discover you never actually need
to scale to the giddy heights originally envisioned.

A big part of the reason modern web dev is so messed up is that lots of
developers started thinking every app they built needed to be "web scale"
from day one. SQL databases? Pah. Doesn't scale. Think big. We gotta use a
NoSQL sharded key/value store from the start! Otherwise we're just showing
lack of confidence in our own product.

Then when they used up all their budget solving consistency bugs a
relational database would have avoided, they notice their competitors
sailing past them on a not-fully-scalable but certainly-scalable-enough
architecture that let them focus on features and making users happy.




> I am referring to global bandwidth O(n^2) with n=users


OK. O() notation normally refers to computational complexity, but ... I
still don't get it - the vast majority of users don't run relaying nodes
that take part in gossiping. They run web or SPV wallets. And the nodes
that do take part don't connect to every other node.




> There can be a case for some increase to create some breathing room to
> work on scaling and decentralising tech, I just mean to say that if we
> do it in isolation, we're not focussing on the big picture.


Alright - let's agree that we disagree on a few areas, like the relative
desirability of alternative non-blockchain designs - but we do seem to
agree that there is a case for an increase in the block size limit. That
seems like progress.

As you agree with that, what sort of schedule and time are you thinking of?
(well, by "you" I really mean blockstream because it's taking forever to
try and negotiate with every single person individually).


Re: [Bitcoin-development] comments on BIP 100

2015-06-15 Thread Mike Hearn
> StrawPay hasn't published any details of their work publicly; if they
> wanted credit on the mailing list they should have done that.
>

There's a brief discussion here:


https://www.reddit.com/r/Bitcoin/comments/2r3ri7/strawpay_cheap_and_secure_micropayments/

But yes, they are developing it before publishing more details that may be
subject to change post-implementation experience anyway.


> I'm genuinely looking forward to a concrete fork proposal. Any ETA on
> when the blocksize increase code will go in Bitcoin XT?
>

Great!  I am waiting for Gavin to finish writing the patches. Once he has a
patch and there's been some time for review, I guess it will go in,
assuming no other issues.


Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Peter Todd
On Mon, Jun 15, 2015 at 12:23:44AM +0200, Mike Hearn wrote:
> >
> > One thing that is concerning is that few in industry seem inclined to
> > take any development initiatives or even integrate a library.
> 
> 
> Um, you mean except all the people who have built more scalable wallets
> over the past few years, which is the only reason anyone can even use
> Bitcoin from their phone?
>
> Or maybe you mean initiatives like Lightning 
> except StrawPay already developed something similar to the Lightning
> network (complete with a working GUI wallet) and were ignored by
> Blockstream as you prefer to write your own from scratch?
>
> Sure, people in the industry take development initiatives. That doesn't
> mean their work will be recognised on this mailing list.

StrawPay hasn't published any details of their work publicly; if they
wanted credit on the mailing list they should have done that.

I couldn't even find any screenshots of that GUI wallet when I learned
what they were doing; I went to the trouble of reaching out to them
recently because I have multiple clients with a need for their
technology. I'm sure we all would have appreciated and welcomed them
taking the time to let us know what they were doing; it would have saved
me personally a lot of time; their lack of recognition on this mailing
list is both unfortunate, and a product of their actions alone.

In any case, StrawPay and Lightning are complementary projects: StrawPay
has limited functionality in exchange for faster deployment; Lightning
has significantly more functionality in exchange for a longer deployment
schedule. Both projects can and should be developed in parallel.
Equally, note efforts like my own CHECKLOCKTIMEVERIFY, which will be
part of StrawPay in due time.

> > But it will be helpful if we expose companies to the back-pressure
> > actually implied by an O(n^2) scaling protocol and don't just immediately
> > increase the block-size to levels that are dangerous for decentralisation
> > security
> 
> 
> Bitcoin does not have n-squared scaling. I really don't get where this idea
> comes from. Computational complexity for the entire network is O(nm) where
> n=transactions and m=fully validating nodes. There is no fixed
> relationship between those two variables.

Note for instance how we're discussing what standards we need in the
CryptoCurrency Security Standard to require compliant companies to run
full nodes for transaction verification; failure to run
a full node will be considered non-compliant in much the same way that
failure to secure your private keys is non-compliance. Pedantically, if
you assume a diverse, decentralized ecosystem, these security standards
by themselves do create fixed linear relationships between those
variables, giving O(n^2) scaling.

https://github.com/CryptoConsortium/CCSS/issues/15
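
To see the distinction concretely, a toy model (numbers purely illustrative,
not drawn from the CCSS or any measurement):

    # Total validation work across the network is roughly
    #   (transactions) x (fully validating nodes).
    # If node count is independent of usage, work grows as O(n); if policy ties
    # node count to transaction/company count (m proportional to n), O(n^2).
    def total_work(n_tx, n_nodes):
        return n_tx * n_nodes

    for n_tx in (10_000, 100_000, 1_000_000):
        m_fixed = 6_000            # node count unrelated to usage
        m_linked = n_tx // 100     # e.g. one compliant full node per 100 tx
        print(n_tx, total_work(n_tx, m_fixed), total_work(n_tx, m_linked))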

> "Exposing the companies to back-pressure" sounds quite nice and gentle. Let
> me rephrase it to be equivalent but perhaps more direct: you mean "breaking
> the current software ecosystem to force people into a new, fictional system
> that bears little resemblance to the Bitcoin we use today, whether they
> want that or not".

Equally, not running full nodes bears little resemblance to the Bitcoin
we use today. Either way, something must change for the number of
Bitcoin users to grow.

> As nothing that has been proposed so far (Lightning, merge mined chains,
> extension blocks etc) has much chance of actual deployment any time soon,
> that leaves raising the block size limit as the only possible path left.
> Which is why there will soon be a fork that does it.

I'm genuinely looking forward to a concrete fork proposal. Any ETA on
when the blocksize increase code will go in Bitcoin XT?

-- 
'peter'[:-1]@petertodd.org
127ab1d576dc851f374424f1269c4700ccaba2c42d97e778




Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Eric Lombrozo

> On Jun 14, 2015, at 9:11 PM, Peter Todd  wrote:
> 
> On Sun, Jun 14, 2015 at 05:53:05PM -0700, Eric Lombrozo wrote:
>> I think the whole complexity talk is missing the bigger issue.
>> 
>> Sure, per block validation scales linearly (or quasilinearly…there’s an 
>> O(log n) term in there somewhere but it’s probably dominated by linear 
>> factors at current levels…asymptotic limits don’t always apply very well to 
>> finite systems). And there’s an O(n^2) bandwidth issue.
>> 
>> The real issue, though, is validation cost. The n in O(n) here does not 
>> represent block size - it represents the size of the entire block chain for 
>> every new validator that must be synchronized! It means we have no way to 
>> construct short proofs (or at least arguments that are computationally 
>> *hard* to forge) without requiring the validator to maintain the complete 
>> system state. And currently, there is no mechanism for directly compensating 
>> validators.
> 
> ...and can there be? The goal of validation after all is finding if a
> mistake has been made, and current production cryptography doesn't have
> any way to prove you have done that honestly. You need "moon math" like
> recursive SNARKS to do that, and it's unknown when they'll be available
> for production usage.
> 

While things like zero-knowledge and homomorphic encryption would be awesome, 
they are not really needed to achieve the objective of an efficient proof that 
is hard to forge with at least a decently thought out security model (i.e. we 
can make information withholding far more difficult)…and we can dramatically 
improve search times and local storage requirements by doing some of the things 
that you’ve actually proposed, Peter, like shifting the responsibility of 
maintaining and constructing proofs over to transaction senders and committing 
proof hashes to the global state. At least the incentives would be far better 
aligned in such a scenario.

How do we deal with things like the discovery of an invalid proof a couple 
weeks after it’s already been committed? This is a tricky issue I’ve been 
giving a lot of thought to recently - but we’ll deal with this topic in a 
separate thread. :)

> When we say "compensating validators", if we're being honest with
> outselves what we really mean is the much more boring task of
> compensating servers who are giving us blockchain data. That has nothing
> to do with validation.

If we were to shift responsibility of constructing proofs over to transaction 
senders, today's “validators” would indeed become nothing more than compensated 
servers. Clients would be able to query for proofs and verify them for 
themselves efficiently.

> A useful task would be to make an SPV archival node implementation that
> did no validation at all, while distributing the blockchain data linked
> to the longest chain. Such an implementation can and should serve SPV
> clients, as this is what their actual security model usually is given
> the lack of authentication of the identity of the server they're
> connecting to. Actually implementing this would be a simple matter of
> patching Bitcoin Core to turn off block validation.
> 
>> A full validator that goes offline even for a short period of time takes a 
>> while to fully catch up to the rest of the network - and starting up a new 
>> validator from scratch will continue to be painful…even for those of us 
>> who’ve turned this into routine by now, let alone new nontechnical users.
> 
> Concretely, 20MB blocks lead to 20GB/week of blocks. On my 1MB/second
> down internet, turning on my node after a week away would take five
> hours; starting up a new node after two years of 20MB blocks would take
> 23 days - likely longer in practice.
> 
> There's serious unsolved and undiscussed devops and development issues
> with this. For instance, after changes to the validation code, it's
> routine to resync/reindex Bitcoin Core to ensure starting up a new node
> actually works. Even now we haven't really come to grips with what
> consistent 1MB blocks look like from this point of view after a few 
> years of usage, let alone another order of magnitude longer sync times.
> 

It’s a disaster. Even with 1MB blocks this is already the principal 
centralization pressure on Bitcoin.

>> Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t 
>> fully thought out at the time of the Bitcoin white paper. Besides lacking a 
>> good validation security model, practical implementations of it weaken 
>> privacy and complicate client implementations substantially…and the worst 
>> part, it still doesn’t scale all that well. The validator still has to query 
>> every single block (even if filtered) back to the first transaction (which 
>> cannot be determined without doing a blockchain scan anyway).
> 
> Note how with 20MB blocks it would take up to 1TB of IO per year-synced
> for a bloom-filter-using wallet to sync the blockchain. We already have
> a bloom IO DoS attack issue - what are the consequences of making that
> issue 20x worse? Nobody has analysed it yet.

Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Peter Todd
On Sun, Jun 14, 2015 at 05:53:05PM -0700, Eric Lombrozo wrote:
> I think the whole complexity talk is missing the bigger issue.
> 
> Sure, per block validation scales linearly (or quasilinearly…there’s an O(log 
> n) term in there somewhere but it’s probably dominated by linear factors at 
> current levels…asymptotic limits don’t always apply very well to finite 
> systems). And there’s an O(n^2) bandwidth issue.
> 
> The real issue, though, is validation cost. The n in O(n) here does not 
> represent block size - it represents the size of the entire block chain for 
> every new validator that must be synchronized! It means we have no way to 
> construct short proofs (or at least arguments that are computationally *hard* 
> to forge) without requiring the validator to maintain the complete system 
> state. And currently, there is no mechanism for directly compensating 
> validators.

...and can there be? The goal of validation after all is finding if a
mistake has been made, and current production cryptography doesn't have
any way to prove you have done that honestly. You need "moon math" like
recursive SNARKS to do that, and it's unknown when they'll be available
for production usage.

When we say "compensating validators", if we're being honest with
ourselves, what we really mean is the much more boring task of
compensating servers who are giving us blockchain data. That has nothing
to do with validation.

A useful task would be to make an SPV archival node implementation that
did no validation at all, while distributing the blockchain data linked
to the longest chain. Such an implementation can and should serve SPV
clients, as this is what their actual security model usually is given
the lack of authentication of the identity of the server they're
connecting to. Actually implementing this would be a simple matter of
patching Bitcoin Core to turn off block validation.

> A full validator that goes offline even for a short period of time takes a 
> while to fully catch up to the rest of the network - and starting up a new 
> validator from scratch will continue to be painful…even for those of us 
> who’ve turned this into routine by now, let alone new nontechnical users.

Concretely, 20MB blocks lead to 20GB/week of blocks. On my 1MB/second
down internet, turning on my node after a week away would take five
hours; starting up a new node after two years of 20MB blocks would take
23 days - likely longer in practice.
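
The arithmetic behind those figures, for reference (1 MB/s download, block and
protocol overhead ignored):

    block_mb = 20
    blocks_per_week = 6 * 24 * 7              # ~1008 blocks at 10 min/block
    weekly_mb = block_mb * blocks_per_week    # ~20,160 MB, i.e. ~20 GB/week

    hours_to_catch_up_one_week = weekly_mb / 3600          # ~5.6 hours
    days_to_sync_two_years = weekly_mb * 104 / 3600 / 24   # ~24 days
    print(hours_to_catch_up_one_week, days_to_sync_two_years)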

There's serious unsolved and undiscussed devops and development issues
with this. For instance, after changes to the validation code, it's
routine to resync/reindex Bitcoin Core to ensure starting up a new node
actually works. Even now we haven't really come to grips with what
consistent 1MB blocks look like from this point of view after a few
years of usage, let alone another order of magnitude longer sync times.

> Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t 
> fully thought out at the time of the Bitcoin white paper. Besides lacking a 
> good validation security model, practical implementations of it weaken 
> privacy and complicate client implementations substantially…and the worst 
> part, it still doesn’t scale all that well. The validator still has to query 
> every single block (even if filtered) back to the first transaction (which 
> cannot be determined without doing a blockchain scan anyway).

Note how with 20MB blocks it would take up to 1TB of IO per year-synced
for a bloom-filter-using wallet to sync the blockchain. We already have
a bloom IO DoS attack issue - what are the consequences of making that
issue 20x worse? Nobody has analysed it yet.
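
The IO figure works out as follows (worst case: the serving node reads every
block for the period being rescanned):

    block_mb = 20
    blocks_per_year = 6 * 24 * 365              # ~52,560 blocks
    io_tb_per_year = block_mb * blocks_per_year / 1e6
    print(io_tb_per_year)                       # ~1.05 TB per year synced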

-- 
'peter'[:-1]@petertodd.org
127ab1d576dc851f374424f1269c4700ccaba2c42d97e778




Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Jeff Garzik
Adding - in re pay-to-FOO - these schemes are inherently short term, such
that it is near-impossible for the market to plan for what happens in 12+
months.

On Sun, Jun 14, 2015 at 10:28 PM, Jeff Garzik  wrote:

> On Sun, Jun 14, 2015 at 5:23 PM, Adam Back  wrote:
>
>> Hi
>>
>> I made these comments elsewhere, but I think really we should be
>> having these kind of conversations here rather than scattered around.
>>
>> These are about Jeff Garzik's outline draft BIP 100 I guess this is
>> the latest draft:  (One good thing about getting off SF would be
>> finally JGarzik's emails actually not getting blocked!).
>>
>> http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
>>
>> may have changed since the original [1]
>>
>> Over the original proposal:
>>
>> 1. there should be a hard cap, not indefinitely growing.
>>
>>
> In the latest draft there is an explicit 32MB ceiling now.
>
> Users will need to opt into growth beyond 32MB via a 2nd hard fork.
>
>
>
>> 2. there should be  a growth limiter (no more than X%/year)
>>
>>
> As a general principle, this is an area of market disagreement, and should
> not be our call.  Encoding this into software veers into personal opinion
> about what economic policy should be.
>
> That said  -- BIP 100, as a compromise, includes a growth limiter.  Abrupt
> change (1MB -> 32MB!) is awful on markets.  Good policies include a
> measured pace of transition from policy A to policy B.  It gives the
> community time to assess system effectiveness - while also allowing free
> market input.
>
> In the long run I hope the cap is removed (see below), and the intention
> is to -slowly- and -transparently- move from the tightly controlled limit
> to something the free market and users are choosing.
>
>
>
>
>> 3. I think the miners should not be given a vote that has no costs to
>> cast, because their interests are not necessarily aligned with users
>> or businesses.
>>
>> I think Greg Maxwell's difficulty adjust [2] is better here for that
>> reason.  It puts quadratic cost via higher difficulty for miners to
>> vote to increase block-size, which miners can profitably do if there
>> are transactions with fees available to justify it. There is also the
>> growth limiter as part of Greg's proposal. [3]
>>
>>
> "paying with difficulty" has severe negative elements that will likely
> cause it never to be used:
> - complex and difficult for miners to reason
> - fails the opportunity cost test - the dollar cost of losing the block race
> versus the value gained by increasing block size
> - inherently unpredictable in the short term - user experience is that
> it's possibly difficult to see a gain in utility versus the revenue you are
> giving up
> - REQUIRES informal miner collusion - probably less transparent than BIP
> 100 - in order to solve the who-goes-first problem.
> - net result: tough sell
>
> Paying bitcoins to future miners makes a lot more sense.  Initially I was
> a fan of pay-with-diff, but freezing bitcoins (CLTV) or timelock'd
> anyone-can-spend has much more clear incentives, if you want to go down
> that road.
>
> Problems with pay-to-increase-block-size:
> - how much to pay?  You are inherently setting your growth policy on top
> of bitcoin by choosing a price here.
> - another who-goes-first problem
>
> Anyway, there is a natural equilibrium block size that the free market and
> user choice will seek.
>
> Related:  There is a lot of naive "miner = max income = max block size"
> reasoning going on, with regards to fees.  This is defining the bounds of
> an economically scarce resource.  There are many reasons why a miner will
> today, in the real world, limit their block size. WRT fee income, if block
> size is too large, fee competition in the overall market is low-to-zero and
> fee income rapidly collapses.  Then factor in price and demand elasticity
> on top of that.
>
> Quite frankly, there seems to be a natural block size equilibrium ceiling,
> and I worry about miners squeezing the market by maximizing their fee
> income through constrained block sizes and competition at the low end.
> This is of course already possible today - miners may openly or covertly
> collude to keep the block size low.
>
>
>> I think bitcoin will have to involve layering models that uplift
>> security to higher layers, but preserve security assurances, and
>> smart-contracts even, with protocols that improve the algorithmic
>> complexity beyond O(n^2) in users, like lightning, and there are
>> multiple other candidates with useful tradeoffs for various use-cases.
>>
>> One thing that is concerning is that few in industry seem inclined to
>> take any development initiatives or even integrate a library.  I
>> suppose eventually that problem would self-correct as new startups
>> would make a more scalable wallet and services that are layer2 aware
>> and eat the lunch of the laggards.  But it will be helpful if we
>> expose companies to the back-pressure actually impl

Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Jeff Garzik
On Sun, Jun 14, 2015 at 5:23 PM, Adam Back  wrote:

> Hi
>
> I made these comments elsewhere, but I think really we should be
> having these kind of conversations here rather than scattered around.
>
> These are about Jeff Garzik's outline draft BIP 100 I guess this is
> the latest draft:  (One good thing about getting off SF would be
> finally JGarzik's emails actually not getting blocked!).
>
> http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
>
> may have changed since the original [1]
>
> Over the original proposal:
>
> 1. there should be a hard cap, not indefinitely growing.
>
>
In the latest draft there is an explicit 32MB ceiling now.

Users will need to opt into growth beyond 32MB via a 2nd hard fork.



> 2. there should be  a growth limiter (no more than X%/year)
>
>
As a general principle, this is an area of market disagreement, and should
not be our call.  Encoding this into software veers into personal opinion
about what economic policy should be.

That said  -- BIP 100, as a compromise, includes a growth limiter.  Abrupt
change (1MB -> 32MB!) is awful on markets.  Good policies include a
measured pace of transition from policy A to policy B.  It gives the
community time to assess system effectiveness - while also allowing free
market input.

In the long run I hope the cap is removed (see below), and the intention is
to -slowly- and -transparently- move from the tightly controlled limit to
something the free market and users are choosing.




> 3. I think the miners should not be given a vote that has no costs to
> cast, because their interests are not necessarily aligned with users
> or businesses.
>
> I think Greg Maxwell's difficulty adjust [2] is better here for that
> reason.  It puts quadratic cost via higher difficulty for miners to
> vote to increase block-size, which miners can profitably do if there
> are transactions with fees available to justify it. There is also the
> growth limiter as part of Greg's proposal. [3]
>
>
"paying with difficulty" has severe negative elements that will likely
cause it never to be used:
- complex and difficult for miners to reason
- fails the opportunity cost test - the dollar cost of losing the block race
versus the value gained by increasing block size
- inherently unpredictable in the short term - user experience is that it's
possibly difficult to see a gain in utility versus the revenue you are
giving up
- REQUIRES informal miner collusion - probably less transparent than BIP
100 - in order to solve the who-goes-first problem.
- net result: tough sell

Paying bitcoins to future miners makes a lot more sense.  Initially I was a
fan of pay-with-diff, but freezing bitcoins (CLTV) or timelock'd
anyone-can-spend has much more clear incentives, if you want to go down
that road.
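
To illustrate the second option, a timelock'd anyone-can-spend output would
look roughly like this (opcodes written symbolically; a sketch, not tied to
any particular Bitcoin library):

    # "Timelock'd anyone-can-spend" scriptPubKey using BIP 65's
    # OP_CHECKLOCKTIMEVERIFY: the output cannot be spent before lock_height,
    # after which any miner can claim it - an implicit payment to future miners.
    def timelocked_anyone_can_spend(lock_height):
        return [
            lock_height,                # pushed as a script number
            "OP_CHECKLOCKTIMEVERIFY",   # fail unless spending tx nLockTime >= lock_height
            "OP_DROP",                  # discard the height
            "OP_TRUE",                  # then anyone can spend
        ]

    print(timelocked_anyone_can_spend(420_000))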

Problems with pay-to-increase-block-size:
- how much to pay?  You are inherently setting your growth policy on top of
bitcoin by choosing a price here.
- another who-goes-first problem

Anyway, there is a natural equilibrium block size that the free market and
user choice will seek.

Related:  There is a lot of naive "miner = max income = max block size"
reasoning going on, with regards to fees.  This is defining the bounds of
an economically scarce resource.  There are many reasons why a miner will
today, in the real world, limit their block size. WRT fee income, if block
size is too large, fee competition in the overall market is low-to-zero and
fee income rapidly collapses.  Then factor in price and demand elasticity
on top of that.

Quite frankly, there seems to be a natural block size equilibrium ceiling,
and I worry about miners squeezing the market by maximizing their fee
income through constrained block sizes and competition at the low end.
This is of course already possible today - miners may openly or covertly
collude to keep the block size low.

> I think bitcoin will have to involve layering models that uplift
> security to higher layers, but preserve security assurances, and
> smart-contracts even, with protocols that improve the algorithmic
> complexity beyond O(n^2) in users, like lightning, and there are
> multiple other candidates with useful tradeoffs for various use-cases.
>
> One thing that is concerning is that few in industry seem inclined to
> take any development initiatives or even integrate a library.  I
> suppose eventually that problem would self-correct as new startups
> would make a more scalable wallet and services that are layer2 aware
> and eat the lunch of the laggards.  But it will be helpful if we
> expose companies to the back-pressure actually implied by an O(n^2)
> scaling protocol and don't just immediately increase the block-size to
> levels that are dangerous for decentralisation security, as an
> interventionist subsidy to save them having to do basic integration
> work.  Otherwise I think whichever any kind of kick the can some 2-5
> years down the road we consider, we risk the whole saga repeating in a
> few years

Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Eric Lombrozo

> On Jun 14, 2015, at 5:53 PM, Eric Lombrozo  wrote:
> 
> I think the whole complexity talk is missing the bigger issue.
> 
> Sure, per block validation scales linearly (or quasilinearly…there’s an O(log 
> n) term in there somewhere but it’s probably dominated by linear factors at 
> current levels…asymptotic limits don’t always apply very well to finite 
> systems). And there’s an O(n^2) bandwidth issue.

For accuracy’s sake, I meant to say O(n log n).

> 
> The real issue, though, is validation cost. The n in O(n) here does not 
> represent block size - it represents the size of the entire block chain for 
> every new validator that must be synchronized! It means we have no way to 
> construct short proofs (or at least arguments that are computationally *hard* 
> to forge) without requiring the validator to maintain the complete system 
> state. And currently, there is no mechanism for directly compensating 
> validators.
> 
> A full validator that goes offline even for a short period of time takes a 
> while to fully catch up to the rest of the network - and starting up a new 
> validator from scratch will continue to be painful…even for those of us 
> who’ve turned this into routine by now, let alone new nontechnical users.
> 
> Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t 
> fully thought out at the time of the Bitcoin white paper. Besides lacking a 
> good validation security model, practical implementations of it weaken 
> privacy and complicate client implementations substantially…and the worst 
> part, it still doesn’t scale all that well. The validator still has to query 
> every single block (even if filtered) back to the first transaction (which 
> cannot be determined without doing a blockchain scan anyway).
> 
> So yes, we will most certainly need algorithmic improvements!
> 
> - Eric Lombrozo
> 
> 
>> On Jun 14, 2015, at 4:58 PM, Adam Back  wrote:
>> 
>> Hi Mike
>> 
>> On 15 June 2015 at 00:23, Mike Hearn  wrote:
 One thing that is concerning is that few in industry seem inclined to
 take any development initiatives or even integrate a library.
>>> 
>>> Um, you mean except all the people who have built more scalable wallets over
>>> the past few years, which is the only reason anyone can even use Bitcoin
>>> from their phone?
>> 
>> No slight intended obviously to people who do write actual client code.
>> 
>> That was probably insufficiently specific, let me rephrase: I am
>> referring to the trend that much of the industry is built on web2.0
>> technology using bitcoin via a library in a web scripting language,
>> often with consensus bugs, and even outsourcing and not even running
>> their own full node, so that the service itself offered to their users
>> isn't even SPV secure to the operator.  As well as being heavily based
>> on a third-party custody model that is the root cause of the repeated
>> wallet breaches.  Some of these companies have a noted tendency not to
>> upgrade or fix code.
>> 
>> So I mean this not to call out specific companies, but in the sense
>> that if we're technologists we should be trying to move the technology
>> forward, not just changing parameters which run into an O(n^2) scaling
>> wall or break decentralisation security.  And we shouldn't take the
>> above state of affairs as an immutable reality.  It cannot persist
>> if bitcoin is to reach maturity in either scale or security.
>> 
>>> I still think you guys don't recognise what you are actually asking for here
>>> - scrapping virtually the entire existing investment in software, wallets
>>> and tools.
>> 
>> As I said, I don't think we can expect Bitcoin to scale with no further
>> algorithmic improvements.  Algorithmic improvements take code.  There
>> is reasonable scope to build in an incrementally deployable way,
>> there's plenty of time for people to code, test and opt in to things,
>> the sky is not falling.  Companies do care about scaling, and can
>> invest in the integration and coding implied to improve their products'
>> scalability; they have an economic incentive to do it and there is no
>> scalable and safe way to do it without this work.
>> 
>>> Computational complexity for the entire network is O(nm) where
>>> n=transactions and m=fully validating nodes. There is no fixed relationship
>>> between those two variables.
>> 
>> I am referring to global bandwidth O(n^2) with n=users, or O(n) per-user
>> bandwidth cost to the system; while O(nm) is accurate, nodes are an
>> internal system concept.  Anyway, suffice to say the network does not
>> scale O(1) in per-user cost.
>> 
>>> "Exposing the companies to back-pressure" sounds quite nice and gentle. Let
>>> me rephrase it to be equivalent but perhaps more direct: you mean "breaking
>>> the current software ecosystem to force people into a new, fictional system
>>> that bears little resemblance to the Bitcoin we use today, whether they want
>>> that or not".
>>> 
>>> As nothing that has been proposed so far (Lightn

Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Eric Lombrozo
I think the whole complexity talk is missing the bigger issue.

Sure, per block validation scales linearly (or quasilinearly…there’s an O(log 
n) term in there somewhere but it’s probably dominated by linear factors at 
current levels…asymptotic limits don’t always apply very well to finite 
systems). And there’s an O(n^2) bandwidth issue.

The real issue, though, is validation cost. The n in O(n) here does not 
represent block size - it represents the size of the entire block chain for 
every new validator that must be synchronized! It means we have no way to 
construct short proofs (or at least arguments that are computationally *hard* 
to forge) without requiring the validator to maintain the complete system 
state. And currently, there is no mechanism for directly compensating 
validators.

A full validator that goes offline even for a short period of time takes a 
while to fully catch up to the rest of the network - and starting up a new 
validator from scratch will continue to be painful…even for those of us who’ve 
turned this into routine by now, let alone new nontechnical users.

Satoshi’s SPV is not a real solution - it’s a mere suggestion that wasn’t fully 
thought out at the time of the Bitcoin white paper. Besides lacking a good 
validation security model, practical implementations of it weaken privacy and 
complicate client implementations substantially…and the worst part is that it 
still doesn’t scale all that well. The validator still has to query every single 
block (even if filtered) back to the wallet’s first transaction (which cannot be 
determined without doing a blockchain scan anyway).
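
To make the filtered-scan point concrete, a small sketch (Python; the chain
height and per-block reply size are assumptions, while the 80-byte header is
Bitcoin's actual header size):

  # Why bloom-filtered SPV still touches every block: the client cannot
  # know which blocks are relevant without asking about each one.
  n_blocks       = 360_000    # assumed chain height
  header_bytes   = 80         # Bitcoin block header size
  filtered_reply = 300        # assumed avg merkleblock + matched tx bytes

  headers_only = n_blocks * header_bytes
  with_filter  = n_blocks * (header_bytes + filtered_reply)
  print("headers alone:      ~%.1f MB" % (headers_only / 1e6))
  print("filtered full scan: ~%.1f MB" % (with_filter / 1e6))

  # Either way the work is linear in chain length, and the filter leaks
  # which transactions the wallet cares about to the node serving it.

So the client's work remains linear in chain length, and the privacy cost
comes on top of that.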

So yes, we will most certainly need algorithmic improvements!

- Eric Lombrozo


> On Jun 14, 2015, at 4:58 PM, Adam Back  wrote:
> 
> Hi Mike
> 
> On 15 June 2015 at 00:23, Mike Hearn  wrote:
>>> One thing that is concerning is that few in industry seem inclined to
>>> take any development initiatives or even integrate a library.
>> 
>> Um, you mean except all the people who have built more scalable wallets over
>> the past few years, which is the only reason anyone can even use Bitcoin
>> from their phone?
> 
> No slight intended obviously to people who do write actual client code.
> 
> That was probably insufficiently specific; let me rephrase: I am
> referring to the trend that much of the industry is built on web2.0
> technology, using bitcoin via a library in a web scripting language,
> often with consensus bugs, and even outsourcing node operation rather
> than running their own full node, so that the service offered to users
> isn't even SPV secure with respect to the operator.  It is also heavily
> based on a third-party custody model, which is the root cause of the
> repeated wallet breaches.  Some of these companies have a noted
> tendency not to upgrade or fix code.
> 
> So I mean this not to call out specific companies, but to say that if
> we're technologists we should be trying to move the technology forward,
> not just changing parameters in a way that runs into an O(n^2) scaling
> wall or breaks decentralisation security.  And we shouldn't take the
> above state of affairs as an immutable reality.  It cannot persist if
> Bitcoin is to reach maturity in either scale or security.
> 
>> I still think you guys don't recognise what you are actually asking for here
>> - scrapping virtually the entire existing investment in software, wallets
>> and tools.
> 
> As I said, I don't think we can expect Bitcoin to scale with no further
> algorithmic improvements.  Algorithmic improvements take code.  There is
> reasonable scope to build in an incrementally deployable way, and there's
> plenty of time for people to code, test and opt in to things; the sky is
> not falling.  Companies do care about scaling and can invest in the
> integration and coding needed to improve their products' scalability;
> they have an economic incentive to do it, and there is no scalable and
> safe way to do it without this work.
> 
>> Computational complexity for the entire network is O(nm) where
>> n=transactions and m=fully validating nodes. There is no fixed relationship
>> between those two variables.
> 
> I am referring to global bandwidth of O(n^2) with n=users, or O(n)
> per-user bandwidth cost to the system; while O(nm) is accurate, the
> number of nodes is an internal system concept.  Anyway, suffice it to
> say the network does not scale at O(1) per-user cost.
> 
>> "Exposing the companies to back-pressure" sounds quite nice and gentle. Let
>> me rephrase it to be equivalent but perhaps more direct: you mean "breaking
>> the current software ecosystem to force people into a new, fictional system
>> that bears little resemblance to the Bitcoin we use today, whether they want
>> that or not".
>> 
>> As nothing that has been proposed so far (Lightning, merge mined chains,
>> extension blocks etc) has much chance of actual deployment any time soon,
>> that leaves raising the block size limit as the only possible path left.
> 
> A hard-fork takes a long period of time to deploy 

Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Adam Back
Hi Mike

On 15 June 2015 at 00:23, Mike Hearn  wrote:
>> One thing that is concerning is that few in industry seem inclined to
>> take any development initiatives or even integrate a library.
>
> Um, you mean except all the people who have built more scalable wallets over
> the past few years, which is the only reason anyone can even use Bitcoin
> from their phone?

No slight intended obviously to people who do write actual client code.

That was probably insufficiently specific; let me rephrase: I am
referring to the trend that much of the industry is built on web2.0
technology, using bitcoin via a library in a web scripting language,
often with consensus bugs, and even outsourcing node operation rather
than running their own full node, so that the service offered to users
isn't even SPV secure with respect to the operator.  It is also heavily
based on a third-party custody model, which is the root cause of the
repeated wallet breaches.  Some of these companies have a noted
tendency not to upgrade or fix code.

So I mean this not to call out specific companies, but to say that if
we're technologists we should be trying to move the technology forward,
not just changing parameters in a way that runs into an O(n^2) scaling
wall or breaks decentralisation security.  And we shouldn't take the
above state of affairs as an immutable reality.  It cannot persist if
Bitcoin is to reach maturity in either scale or security.

> I still think you guys don't recognise what you are actually asking for here
> - scrapping virtually the entire existing investment in software, wallets
> and tools.

As I said, I don't think we can expect Bitcoin to scale with no further
algorithmic improvements.  Algorithmic improvements take code.  There is
reasonable scope to build in an incrementally deployable way, and there's
plenty of time for people to code, test and opt in to things; the sky is
not falling.  Companies do care about scaling and can invest in the
integration and coding needed to improve their products' scalability;
they have an economic incentive to do it, and there is no scalable and
safe way to do it without this work.

> Computational complexity for the entire network is O(nm) where
> n=transactions and m=fully validating nodes. There is no fixed relationship
> between those two variables.

I am referring to global bandwidth of O(n^2) with n=users, or O(n)
per-user bandwidth cost to the system; while O(nm) is accurate, the
number of nodes is an internal system concept.  Anyway, suffice it to
say the network does not scale at O(1) per-user cost.
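
To illustrate the two framings side by side, a toy model (Python; all the
parameters are assumptions chosen only for illustration):

  # Total relay traffic when every transaction reaches every node.
  def total_relay_cost(users, tx_per_user, nodes):
      transactions = users * tx_per_user
      return transactions * nodes      # the O(n*m) framing: n=tx, m=nodes

  # If ordinary users are expected to validate, the node count grows with
  # the user count, and the same formula is quadratic in users:
  for users in (1_000, 10_000, 100_000):
      print(users, total_relay_cost(users, tx_per_user=10, nodes=users))

  # If the node count is instead held fixed, cost is linear in users, but
  # that is precisely the concentration of validation being argued about.

Whether one calls it O(nm) or O(n^2) depends on whether the number of
validating nodes is allowed to shrink relative to the user base.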

> "Exposing the companies to back-pressure" sounds quite nice and gentle. Let
> me rephrase it to be equivalent but perhaps more direct: you mean "breaking
> the current software ecosystem to force people into a new, fictional system
> that bears little resemblance to the Bitcoin we use today, whether they want
> that or not".
>
> As nothing that has been proposed so far (Lightning, merge mined chains,
> extension blocks etc) has much chance of actual deployment any time soon,
> that leaves raising the block size limit as the only possible path left.

A hard-fork takes a long period of time to deploy due to the
non-upgrade risk; people are working on things in the meantime.
There can be a case for some increase to create some breathing room to
work on scaling and decentralising tech; I just mean to say that if we
do it in isolation, we're not focussing on the big picture.

Adam



Re: [Bitcoin-development] comments on BIP 100

2015-06-14 Thread Mike Hearn
>
> One thing that is concerning is that few in industry seem inclined to
> take any development initiatives or even integrate a library.


Um, you mean except all the people who have built more scalable wallets
over the past few years, which is the only reason anyone can even use
Bitcoin from their phone? Or maybe you mean initiatives like Lightning,
except StrawPay already developed something similar to the Lightning
network (complete with a working GUI wallet) and were ignored by
Blockstream, as you prefer to write your own from scratch?

Sure, people in the industry take development initiatives. That doesn't
mean their work will be recognised on this mailing list.


> I suppose eventually that problem would self-correct as new startups would
> make a more scalable wallet and services that are layer2 aware and eat the
> lunch of the laggards.


"The laggards" being *everyone* who has already invested in building
Bitcoin software so far. Not a great way to frame things. Many of those
"laggards" have written orders of magnitude more code than you or Gregory
or Jeff, for instance.

I still think you guys don't recognise what you are actually asking for
here - scrapping virtually the entire existing investment in software,
wallets and tools.


> But it will be helpful if we expose companies to the back-pressure
> actually implied by an O(n^2) scaling protocol and don't just immediately
> increase the block-size to levels that are dangerous for decentralisation
> security


Bitcoin does not have n-squared scaling. I really don't get where this idea
comes from. Computational complexity for the entire network is O(nm) where
n=transactions and m=fully validating nodes. There is no fixed
relationship between those two variables.

"Exposing the companies to back-pressure" sounds quite nice and gentle. Let
me rephrase it to be equivalent but perhaps more direct: you mean "breaking
the current software ecosystem to force people into a new, fictional system
that bears little resemblance to the Bitcoin we use today, whether they
want that or not".

As nothing that has been proposed so far (Lightning, merge mined chains,
extension blocks etc) has much chance of actual deployment any time soon,
that leaves raising the block size limit as the only possible path left.
Which is why there will soon be a fork that does it.


[Bitcoin-development] comments on BIP 100

2015-06-14 Thread Adam Back
Hi

I made these comments elsewhere, but I think really we should be
having these kinds of conversations here rather than scattered around.

These are about Jeff Garzik's outline draft of BIP 100.  I guess this is
the latest draft (one good thing about getting off SF would be JGarzik's
emails finally not getting blocked!):

http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf

It may have changed since the original [1].

On the original proposal:

1. there should be a hard cap, not an indefinitely growing one.

2. there should be a growth limiter (no more than X%/year; see the small
sketch after this list)

3. I think the miners should not be given a vote that has no cost to
cast, because their interests are not necessarily aligned with those of
users or businesses.
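
A small compounding sketch of what a growth limiter (point 2) means in
practice (Python; the starting size and the X=20%/year figure are
assumptions, not a proposal):

  # Hard cap under an assumed X%/year growth limiter.
  cap_mb, annual_growth = 1.0, 0.20
  for year in range(0, 11, 2):
      print(year, round(cap_mb * (1 + annual_growth) ** year, 2), "MB")
  # year 0 -> 1.0 MB, year 10 -> ~6.19 MB: bounded, predictable growth
  # rather than a one-off jump or an uncapped schedule.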

I think Greg Maxwell's difficulty-adjustment proposal [2] is better here
for that reason.  It imposes a quadratic cost, via higher difficulty, on
miners voting to increase the block size, which miners can profitably pay
if there are transactions with fees available to justify it.  There is
also a growth limiter as part of Greg's proposal [3].
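
A toy model of that idea (Python; the exact function and parameters are in
[2][3], this only shows the shape of a quadratic work penalty for exceeding
the target size):

  # Extra proof-of-work a miner must accept to publish an oversize block.
  def required_work(base_difficulty, block_size, target_size):
      if block_size <= target_size:
          return base_difficulty
      overshoot = (block_size - target_size) / target_size
      return base_difficulty * (1 + overshoot ** 2)   # assumed quadratic

  for size in (1.0, 1.25, 1.5, 2.0):   # block size in multiples of target
      print(size, required_work(1.0, size, 1.0))

  # A miner only pays the extra work if the marginal fees in the larger
  # block cover it, so the "vote" to grow blocks carries a real cost.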

I think bitcoin will have to involve layering models that move
transactions up to higher layers while preserving security assurances,
and even smart contracts, using protocols that improve the algorithmic
complexity beyond O(n^2) in users, like Lightning; and there are
multiple other candidates with useful tradeoffs for various use-cases.
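
As a rough illustration of what such layering buys (Python; the user and
payment counts are arbitrary assumptions):

  # On-chain footprint: every payment on-chain vs. payment channels where
  # only the open and the close (settlement) transactions hit the chain.
  users, payments_per_user = 10_000, 200

  onchain_direct   = users * payments_per_user   # every payment on-chain
  onchain_channels = users * 2                   # open + close per user

  print("direct:   %d on-chain txs" % onchain_direct)
  print("channels: %d on-chain txs" % onchain_channels)

  # The exact numbers don't matter; the point is that on-chain load stops
  # scaling with payment count and scales with channel churn instead.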

One thing that is concerning is that few in industry seem inclined to
take any development initiatives or even integrate a library.  I
suppose eventually that problem would self-correct as new startups
would make a more scalable wallet and services that are layer2 aware
and eat the lunch of the laggards.  But it will be helpful if we
expose companies to the back-pressure actually implied by an O(n^2)
scaling protocol and don't just immediately increase the block-size to
levels that are dangerous for decentralisation security, as an
interventionist subsidy to save them from having to do basic integration
work.  Otherwise, whichever kick-the-can of some 2-5 years down the road
we consider, I think we risk the whole saga repeating in a few years,
when no algorithmic progress has been made and even more protocol
inertia has set in.

Adam

[1] original proposal comments on reddit
https://www.reddit.com/r/Bitcoin/comments/39kzyt/draft_bip_100_soft_fork_block_size_increase/

[2] flexcap proposal by Greg Maxwell; see post by Mark Friedenbach
https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07599.html

[3] growth limiter for the flexcap proposal by Greg Maxwell
https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07620.html
