Re: [bitcoin-dev] Mailing List Moderation Now Active.

2015-10-23 Thread Mike Hearn via bitcoin-dev
>
> - Posts must concern the near-term development of the bitcoin core
>   code or bitcoin protocol.
>

Are block size discussions considered acceptable, then?


Re: [bitcoin-dev] Memory leaks?

2015-10-14 Thread Mike Hearn via bitcoin-dev
Leaks are not the only explanation possible. Caches and fragmentation can
also give this sort of effect. Unfortunately the tools to debug this aren't
great. You could try a build with tcmalloc and use it to investigate heap
stats.
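
For example, something along these lines (a rough sketch only, assuming a
bitcoind built and linked against gperftools' tcmalloc; DumpHeapStats is a
made-up helper for illustration, not existing Core code):

  #include <gperftools/heap-profiler.h>
  #include <gperftools/malloc_extension.h>
  #include <cstdio>

  // Illustrative only: dump tcmalloc's view of the heap so growth can be
  // compared over time, e.g. by calling this periodically from a debug hook.
  void DumpHeapStats(const char* reason)
  {
      static char stats[1 << 16];
      MallocExtension::instance()->GetStats(stats, sizeof(stats));
      fprintf(stderr, "=== tcmalloc stats (%s) ===\n%s\n", reason, stats);

      // If the process was started with HEAPPROFILE=/tmp/bitcoind.hprof in the
      // environment, this also writes a snapshot that pprof can diff later.
      if (IsHeapProfilerRunning())
          HeapProfilerDump(reason);
  }

Comparing a couple of these dumps a few hours apart should show whether the
growth is sitting in tcmalloc's page heap and freelists (caching or
fragmentation) or in live application allocations (a genuine leak).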

Odinn, trolling like a 3-year-old will get you swiftly banned. Last warning.
On 14 Oct 2015 9:58 am, "Tom Zander"  wrote:

> On Tuesday 13 Oct 2015 14:56:08 Jonathan Toomim  via bitcoin-dev wrote:
> > Does anybody have any guesses where we might be leaking memory, or what
> is
> > using the additional 2.4 GB? I've been using minrelaytxfee=0.3 or
> > similar on my nodes. Maybe there's a leak in the minrelaytxfee code path?
> > Has anyone else seen something similar?
>
> I suggest running it in valgrind with --leak-check=full for 10 minutes.
>
>   valgrind --leak-check=full src/bitcoind 2>&1 | tee out
>
> This at least will show you any memory leaks at exit.
> Naturally, the leaks you observe may just be design issues where the cache
> can grow too much; and when the cache is cleaned on shutdown you won't see it
> in the valgrind output.
>


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-10-05 Thread Mike Hearn via bitcoin-dev
Hi Jorge,

I'm glad we seem to be reaching agreement that hard forks aren't so bad
really and can even have advantages. It seems the remaining area of
disagreement is this rollout specifically.

> a non-upgraded full node and an upgraded full node will converge on what they
> see: "the most-work valid chain" will be the same for both.
>
Indeed it will, but the point of fully verifying is to *not* converge with
the miner majority if something goes wrong and they aren't following the
same rules as you. Defining "working" as "converging with the miner majority"
is fine for SPV wallets, and a correct or at least reasonable definition for
them. But not for fully verifying nodes, where non-convergence is an explicit
design goal: it's the only thing that stops miners awarding themselves
infinite free money!

> Are you going to produce a bip65 hardfork alternative to try to convince
> people of its advantages over bip65 (it is not clear to me how you include
> a new script operand via hardfork)?
>
No, I'm focused on the block size issue right now. I don't think there's
much point in improving the block chain protocol if most users are going to
be unable to use it. But the modification is simple, right? You just
replace this bit:

  CHECKLOCKTIMEVERIFY redefines the existing NOP2 opcode

with this

  CHECKLOCKTIMEVERIFY defines a new opcode (0xc0)

and that's it. The section *upgrade and testing plan* only says TBD so that
part doesn't even need to change at all, as it's not written yet.
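
To spell out why that one line matters, here is a toy model (hypothetical C++,
not the real interpreter or the actual patch; 0xb1 is the real OP_NOP2 value,
0xc0 is simply an unallocated number picked for illustration):

  #include <cstdint>

  static const uint8_t OP_NOP2_VALUE = 0xb1; // existing no-op, reused by BIP65
  static const uint8_t OP_CLTV_HARD = 0xc0;  // hypothetical fresh opcode number

  // Returns true if a NON-upgraded node would silently accept a spend that
  // breaks the new locktime rule - the soft fork failure mode discussed above.
  bool OldNodeSilentlyAccepts(uint8_t opcode)
  {
      if (opcode == OP_NOP2_VALUE)
          return true;  // known no-op: executed as "do nothing", rule unchecked
      if (opcode == OP_CLTV_HARD)
          return false; // unknown opcode: old nodes fail the script outright
      return false;     // other opcodes aren't part of this sketch
  }

With the NOP2 encoding, old nodes keep reporting scripts as valid without
actually checking the new rule; with a fresh opcode they reject anything that
uses it, which is exactly what makes the change a hard fork.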


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-10-05 Thread Mike Hearn via bitcoin-dev
Well, let's agree to disagree on these two things:

- I define "working" for a full node as verifying everything; if a node
starts skipping bits then I'd say it's not really "working" according to
its original design goals

- Saying the pre-fork behaviour is defined and deterministic is true, but
only in the sense that reading an uninitialised variable in C is defined
and deterministic. It reads whatever happens to be at that stack position:
easily defined. For many programs, that may be the same value each time:
deterministic. Nonetheless, it's considered undefined behaviour by the C
specification and programmers that rely on it can easily create security
holes.

In the same way, I'd consider a node running a script with a NOP and
reaching the opposite conclusion from other nodes to be a case of undefined
behaviour leading to a non-fully-working node.

But these are arguments about the semantics of words. I think we each know
what the other is getting at.

On Mon, Oct 5, 2015 at 1:23 PM, Jeff Garzik <jgar...@gmail.com> wrote:

>
> - It is true that hard forks produce a much cleaner outcome, in terms of
> well defined behavior across the entire network.
>
> - Replacing an opcode should not result in undefined behavior.  The
> non-upgraded behavior is defined and deterministic.
>
> - IsStandard remains an assistant.  Miners may mine non-standard
> transactions.
>
> - "Hard forks require everyone to upgrade and soft forks don't"   Doesn't
> require tons of explanation:  Non upgraded clients continue working on the
> network even after the rules are upgraded.
>
> All those corrections aside, I do think there has been too much hysteria
> surrounding hard forks.  Hard forks, when done right, produce a much
> cleaner system for users.
>
>
>
>
>
>
>
>
> On Mon, Oct 5, 2015 at 6:59 AM, Mike Hearn via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Putting aside stupid arguments about who is older or who started using
>> the term SPV wallet first, let me try and make a better suggestion than
>> what's in the BIP. How about the following:
>>
>> A new flag is introduced to Core, --scriptchecks=[all,standardonly,none].
>> The default is all. When set to "standardonly", non-standard scripts are
>> not checked but others are. This is similar to the behaviour during a soft
>> fork. In "none" you have something a bit like SPV mode, but still
>> calculating the UTXO set. This flag is simple and can be implemented in a
>> few lines of code. Then an unused opcode is used for CLTV, so making it a
>> hard fork.
>>
>> This has the following advantages:
>>
>>- Nodes that want the pseudo-SPV behaviour of a soft fork can opt in
>>to it if they want it. This prioritises availability (in a sense) over
>>correctness.
>>
>>- But otherwise, nodes will prioritise correctness by default, which
>>is how it should be. This isn't PHP where nonsensical code the interpreter
>>doesn't understand just does .. something. This is financial software
>>where money is at risk. I feel very strongly about this: undefined
>>behaviour is fine *if you opted into getting it. *Otherwise it should
>>be avoided whenever possible.
>>
>>- SPV wallets do the right thing by default.
>>
>>- IsStandard doesn't silently become a part of the consensus rules.
>>
>>- All other software gets simpler. It's not just SPV wallets. Block
>>explorers, for example, can just add a single line to their opcode map.
>>With a soft fork they have to implement the entire soft fork logic just to
>>figure out when an opcode transitioned from OP_NOP to CLTV and make sure
>>they render old scripts differently to new scripts. And they face tricky
>>questions - do they render an opcode as a NOP if the miner who built it 
>> was
>>un-upgraded, or do they calculate the flag day and change all of them 
>> after
>>that? It's just an explosion of complexity.
>>
>> Many people by now have accepted that hard forks are simpler,
>> conceptually cleaner, and prioritise correctness of results over
>> availability of results. I think these arguments are strong.
>>
>> So let me try addressing the counter-arguments one more time:
>>
>>- Hard forks require everyone to upgrade and soft forks don't. I
>>still feel this one has never actually been explained. There is no
>>difference to the level of support required to trigger the change. With 
>> the
>>suggestion above, if someone can't or won't upgrade their full node but 
>> can

Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-10-05 Thread Mike Hearn via bitcoin-dev
>
> As Greg explained to you repeatedly, a softfork won't cause a
> non-upgraded full node to start accepting blocks that create more
> subsidy than is valid.
>

It was an example. Adam Back's extension blocks proposal would, in fact,
allow for a soft forking change that creates more subsidy than is valid (or
does anything else) by hiding one block inside another.

Anyway, I think you got my point.


> That's very different security from an SPV node, and as Greg
> also explained, SPV nodes could be much more secure than bitcoinj
> nodes (they could, for example, validate the coinbase transaction of
> every block).
>

I'm pretty sure Gregory did not use such an example because it's dead
wrong. You cannot verify the value of a coinbase transaction without being a
fully verifying node, because you need to know the fees in the block, and
calculating those requires access to the entire UTXO set.

This sort of thing is why I get annoyed when people lecture me about SPV
wallets and the things they "should" do. None of you guys has built one. I
keep seeing wild statements about theoretical unicorn wallets that nobody
has even designed, and how all existing wallets are crappy and insecure
because they don't meet your ever shifting goal posts.

To everyone making such statements I say: go away and build an SPV wallet
of your own from scratch. Then you will understand the engineering
tradeoffs involved much better, and be in a much better position to debate
what they should or should not be doing.

And bear in mind that if it weren't for the work I and a few others did on
SPV wallets, everyone would be using web wallets instead. Then you'd all
just complain about that instead.


> Can you give an example of an attack in which a non-upgraded full node
> wallet is defrauded with BIP65 but could not with the hardfork
> alternative (that nobody seems to be willing to implement)?
>

Making it a hard fork instead is changing one line of code (ignoring the
code to set up the flag day, which can be based on the code for BIP101). If
it comes down to it, then I'll do the work to change that one line. But
obviously I'd need to see agreement from the maintainers that such a pull
req would be merged first.

The example is this: find someone that accepts 1-block confirmed
transactions in return for something valuable. There are plenty of them out
there. Once the soft fork starts, send a P2SH transaction that defines a
new output controlled by OP_CLTV. It will be incorporated into the UTXO set
by all miners because it's opaque (p2sh).

Now send a transaction that pays the merchant, and make it spend your
OP_CLTV output with an invalid script. New nodes will reject it as a rule
violator. Old nodes won't. So at some point an old miner will create a
block containing your invalid transaction, the merchant will think they got
paid, they'll give you the stuff and the fraud is done.
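
For concreteness, the kind of output involved might be constructed like this
(a hypothetical sketch using Bitcoin Core's CScript; the locktime value, the
key and the helper name are purely illustrative):

  #include "pubkey.h"
  #include "script/script.h"
  #include "script/standard.h"

  // Sketch only: a P2SH output whose redeem script uses the soft-forked CLTV,
  // which reuses OP_NOP2. Old nodes execute OP_NOP2 as a no-op, so they accept
  // a spend even when the locktime condition is violated; upgraded nodes
  // reject that same spend as invalid.
  CScript BuildCltvP2SH(const CPubKey& pubkey)
  {
      CScript redeemScript;
      redeemScript << CScriptNum(2000000)  // far-future block height
                   << OP_NOP2 << OP_DROP   // OP_NOP2 == CHECKLOCKTIMEVERIFY post-fork
                   << ToByteVector(pubkey) << OP_CHECKSIG;
      return GetScriptForDestination(CScriptID(redeemScript));
  }

The scriptPubKey itself is just HASH160 <scripthash> EQUAL, which is why every
miner, upgraded or not, will happily put it in the UTXO set.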


> Please, don't assume 0 confirmation transactions or similar
> unreasonable assumptions (ie see section 11 "Calculations" of the
> Bitcoin whitepaper).
>

This is just embarrassing - do any of you guys at Blockstream actually use
Bitcoin in the real world? Virtually all payments that aren't moving money
into/out of exchange wallets are 0-confirm in reality. I described a
1-confirm attack above, but really ... come on.


Re: [bitcoin-dev] This thread is not about the soft/hard fork technical debate

2015-10-05 Thread Mike Hearn via bitcoin-dev
Hey Sergio,

To clarify: my *single* objection is that CLTV should be a hard fork. I
haven't been raising never-ending technical objections, there's only one.

I *have* been answering all the various reasons being brought up why I'm
wrong and soft forks are awesome (and there do seem to be a limitless
number of such emails), but on my side it's still just a single
objection. If CLTV is a hard fork then I won't be objecting anymore, right?

CLTV deployment is clearly controversial. Many developers other than me
have noted that hard forks are cleaner, and have other desirable
properties. I'm not the only one who sees a big question mark over soft
forks.

As everyone in the Bitcoin community has been clearly told that
controversial changes to the consensus rules must not happen, it's clear
that CLTV cannot happen in its current form.

Now I'll be frank - you are quite correct that I fully expect the Core
maintainers to ignore this controversy and do CLTV as a soft fork anyway.
I'm a cynic. I don't think "everyone must agree" is workable and have said
so from the start. Faced with a choice of going back on their public
statements or having to make changes to something they clearly want, I
expect them to redefine what "real consensus" means. I hope I'm wrong, but
if I'm not ... well, at least everyone will see what Gavin and I have
been talking about for so many months.

But I'd rather the opcode is tweaked. There are real financial risks to a
soft fork.


Re: [bitcoin-dev] Crossing the line? [Was: Re: Let's deploy BIP65 CHECKLOCKTIMEVERIFY!]

2015-10-02 Thread Mike Hearn via bitcoin-dev
FWIW the "coining" I am referring to is here:

https://bitcointalk.org/index.php?topic=7972.msg116285#msg116285

OK, with that, here goes. Firstly some terminology. I'm going to call these
things SPV clients for "simplified payment verification". Headers-only is
kind of a mouthful and "lightweight client" is too vague, as there are
several other designs that could be described as lightweight like RPC
frontend and Stefan's WebCoin API approach

At that time nobody used the term "SPV wallet" to refer to what apps like
BreadWallet or libraries like bitcoinj do. Satoshi used the term "client
only mode", Jeff was calling them "headers only client" etc. So I said, I'm
going to call them SPV wallets after the section of the whitepaper that
most precisely describes their operation.

On Thu, Oct 1, 2015 at 6:39 PM, Jeff Garzik <jgar...@gmail.com> wrote:

> To reduce the list noise level, drama level and promote inclusion, my own
> personal preference (list admin hat: off, community member hat: on) is for
> temporal bans based on temporal circumstances.  Default to
> pro-forgiveness.  Also, focus on disruption of the list as a metric, rather
> than focusing on a specific personality.
>
> I do think we're at a bit of a point where we're going around in circles.
>
> Given the current reddit hubbub, a bit of a cooling off period is IMO
> advisable before taking any further action.
>
>
>
> On Thu, Oct 1, 2015 at 12:08 AM, Tao Effect via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> Dear list,
>>
>> Mike has made a variety of false and damaging statements about Bitcoin,
>> of which this is but one:
>>
>> On Sep 30, 2015, at 2:01 PM, Mike Hearn via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> I coined the term SPV so I know exactly what it means, and bitcoinj
>> implements it, as does BreadWallet (the other big SPV implementation).
>>
>>
>> On his website Vinumeris.com he writes:
>>
>> Vinumeris was founded in 2014 by Mike Hearn, one of the developers of the
>> Bitcoin digital currency system.
>>
>>
>> On plan99.net there are several embedded videos that refer to him as a
>> “core developer” of Bitcoin. And now it seems he is claiming to be Satoshi.
>>
>> It seems to me that Mike’s emails, false statements (like the one above
>> about coining SPV), arguments, and his attempts to steal control of Bitcoin
>> via the contentious Bitcoin XT fork, represent actions that have been
>> harming and dividing this community for several years now.
>>
>> In many communities/tribes, there exists a line that, once crossed,
>> results in the expulsion of a member from the community.
>>
>> So, two questions:
>>
>> 1. Does the Bitcoin-devs mailing list have such a line?
>> 2. If so, does the community feel that Mike Hearn has crossed it? (I
>> personally feel he has. Multiple times.)
>>
>> Thanks for your thoughts,
>> Greg Slepak
>>
>> --
>> Please do not email me anything that you are not comfortable also sharing 
>> with
>> the NSA.
>>
>>
>


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Mike Hearn via bitcoin-dev
>
> Exactly, all those "mini divergences" eventually disappear
>
A miner that has accepted a newly invalid transaction into its memory pool
and is trying to mine it will keep producing invalid blocks indefinitely,
until the owner shuts it down and upgrades. This was happening for weeks
after P2SH triggered.

For instance, any miner that has modified/bypassed IsStandard() can do
this, or any miner that accepts direct transaction submission, or any miner
that runs an old node from before OP_NOPs were made non-standard.

> On the other hand, the "single divergence" in the hardfork keeps growing
> forever (unless all miners eventually upgrade).
>
Which they do, because they will eventually notice they are burning money.

Sorry Jorge, but I don't think your argument makes sense.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Mike Hearn via bitcoin-dev
>
> Field experience shows it successfully delivers new features to end users
> without a global software upgrade.
>

A global upgrade of all full nodes is required with both fork types. If a full
node doesn't upgrade then it no longer does what it was designed to do; if
the user is OK with that, they should just run an SPV wallet or use
blockchain.info or some other mechanism that consumes way fewer resources.

But if you want the software you installed to achieve its stated goal, you
*must* upgrade. There is no way around that.

Jorge has said soft forks always lead to network convergence. No, they
don't. You get constant mini-divergences until everyone has upgraded, as
opposed to a single divergence with a hard fork (until everyone has
upgraded). The quantity of invalid blocks mined, on the other hand, is
identical with both fork types.

Adam has said "there is actually consensus", although I just said there
isn't. Feel free to say what you really mean here, Adam - there's consensus
if you ignore people who don't agree, i.e. the concept of "developer
consensus" doesn't actually mean anything. This would contradict your prior
statements about how Bitcoin Core makes decisions, but alright ...

Finally John, I fully agree with what you wrote. Debates that never end are
bad news all round. Bitcoin Core has told the world it uses "developer
consensus" to make decisions. I don't agree that's a good way to do things,
but if Core wants to stick with it then there is no choice - as I am a
developer, and I do not agree with the change, there is no consensus and
the debate is over.

Hey, I have an idea. Maybe we should organise a conference about soft vs
hard forks. Let's have it down the road from where I live, a couple of
weeks from now. Please submit your talk titles to me so I can vet them to
ensure nobody does an offtopic talk ;)


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Mike Hearn via bitcoin-dev
tl;dr Nothing I have read here has changed my mind. There is still no
consensus to deploy CLTV in this way.


> Yes, your article contained numerous factual and logical inaccuracies
> which I corrected
>

I responded to your response several times. It was not convincing, and I do
not think you corrected factual inaccuracies. I mean, you said yourself you
once used the correct terminology of forwards compatibility but stopped
only because the term "backwards compatibility" is more common. But that's
not a good reason to use a term with the opposite meaning and is certainly
not a factual correction!


> Yes, because what 101 does is not a hard-fork from the perspective of
> BitcoinJ clients. Please do not conflate BitcoinJ with all of SPV;


I coined the term SPV so I know exactly what it means, and bitcoinj
implements it, as does BreadWallet (the other big SPV implementation).

Yes, SPV wallets will follow the mining hashpower instead of doing a hard
reject for bigger blocks, because they deliberately check a subset of the
rules: block size is not and never has been one of them. Indeed it's not
even included in the protocol messages. Users have no expectation that SPV
wallets would check that, as it's never been claimed they do.

On the other hand, full nodes all claim they run scripts. Users expect that
and may be relying on it. The unstated assumption here is that the nodes
run them correctly. A soft fork breaks this assumption.

I'm going to ignore the rest of the stuff you wrote about "design decisions
to lack security" or "cheaply avoidable lack of validation". When you have
sat down and written an SPV implementation by yourself, then shipped it to
a couple of million users, you might have better insight into basic
engineering costs. Until then, I find your criticisms of code you think was
missing due to "stonewalling" and so on to be seriously lacking real world
experience.

Yes, a hypothetical full node could fork on the version bits. I would be
quite happy with the version number in the header being an enforced
consensus rule: it'd make hard forks easier to trigger. But it hasn't been
done that way, and wishing away the behaviour of existing software in the
field is no good. Luckily, for introducing a new opcode, the same effect
can be achieved by using a non-allocated opcode number.


> For many changes, including CLTV the actual soft fork change is by far
> the most natural way of implementing the change itself.


This is subjective. I'd say picking an entirely new opcode number is most
natural.

The rest of your argument boils down to "people don't have to upgrade if
they don't want to", which is addressed in the article I already wrote and in
multiple responses on this thread. Yes, they do have to upgrade; otherwise
they aren't getting the security level they had before.


> Could [P2SH] have been done as a hard-fork?  Likely not: you would have
> prevented it.


What? This is nonsensical. P2SH was added to the full verification code
quite quickly, but it didn't matter much because nobody uses bitcoinj for
mining. The docs explicitly tell people, in fact, not to mine on top of
bitcoinj:

https://bitcoinj.github.io/full-verification

So no, bitcoinj+P2SH was irrelevant from a fork type perspective. It just
had no effect at all. This entire section of your message is completely
wrong.

The code that did take longer was for wallet support. And the reason it
came later was resource prioritisation: there were more important issues to
resolve. Like I said - write the amount of code I've written, unpaid in
your evenings and weekends, and then you can criticise bitcoinj for lacking
features.

75% is a fine activation threshold. By definition, if support is at 75% then
bigger blocks are "winning", but if support fell, then the SPV wallets would
just reorg back onto the 1mb-blocks chain.

Re: demonstrated track record. They "work" only if you ignore the actual
problems that have resulted. P2SH-invalid blocks were being mined for weeks
after the flag day. That's not good no matter how you slice it: even if you
didn't hear about any fraud resulting, it is still risk that can be avoided.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Mike Hearn via bitcoin-dev
>
> I think from discussion with Gavin sometime during the montreal
> scaling bitcoin workshop, XT may be willing to make things easy and
> adapt what it's doing.


If Core ships CLTV as is, then XT will have to adopt it - such is the
nature of a consensus system.

This will not change the fact that the rollout strategy is bad and nobody
has answered my extremely basic question: *why* is it being done in this
way, given the numerous downsides?


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Mike Hearn via bitcoin-dev
Hi Gregory,


> I'm surprised to see this response


Why? I have objected to the idea of soft forks many times. I wrote an
entire article about it in August. I also objected in April 2014, for
instance, where Pieter agreed with me that soft forks can result in ugly
hacks, and that they are "not nice philosophically because they reduce the
security model of former full nodes to SPV without their knowledge" (he
thought they were worth it anyway).

This is not a new debate. If you're surprised, it only means you weren't
paying attention to all the previous times people raised this issue.


> Have I missed a proposal to change BIP101 to be a real hardfork


There's no such thing as a "real" hard fork - don't try and move the goal
posts. SPV clients do not need any changes to do the right thing with BIP
101: they will follow the new chain automatically.

Several people have asked several times now: given the very real and widely
acknowledged downsides that come with a soft fork, *what* is the specific
benefit to end users of doing them?

Until that question is answered to my satisfaction I continue to object to
this BIP on the grounds that the deployment creates financial risk
unnecessarily. To repeat: *CLTV does not have consensus at the moment*.

BTW, in the April 2014 thread Pieter's argument was that hard forks are
more risky, which is at least an answer to my question. But he didn't
explain why he thought that. I disagree: the risk level seems lower with a
hard fork because it doesn't lower anyone's security level.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-29 Thread Mike Hearn via bitcoin-dev
>
> Other than the fact that doing this as a soft fork requires an extra
> OP_DROP, how would doing this as a hard fork make any difference to SPV
> clients? If, as others have suggested, all clients warn the user on
> unrecognized nVersion
>

All clients do *not* do this. Why would they? What action would they take?
Try and simulate a hard fork in some complicated roundabout manner? Why not
just do the real thing and keep things simple?


> and make unknown noops nonstandard
>

They are already non-standard. That change was made last time I brought up
the problems with soft forks. It brought soft forks that use OP_NOPs a bit
closer to the ideal of a hard fork, but didn't go all the way. I pointed
that out above in my reply to Peter's mail.

So to answer your question, no, it wouldn't satisfy my concerns. My logic
is this:

Hard forks - simple, well understood, SPV friendly, old full nodes do not
calculate incorrect ledgers whilst telling their users (via UI, RPC) that
they are fully synced. Emphasis on simple: simple is good.

Soft forks - to get the benefits of a hard fork back requires lots of extra
code, silently makes IsStandard() effectively a part of the consensus rules
when in the past it hasn't been, SPV unfriendly. Benefits? As far as I can
tell, there are none.

If someone could elucidate *what* the benefits actually are, that would be
a good next step. So far everyone who tried to answer this question gave a
circular answer of the form "soft forks are good because they are soft
forks".


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-29 Thread Mike Hearn via bitcoin-dev
Hi Jorge,

> Yes, there is a difference. Assuming the hashrate majority upgrades, in the
> case of a softfork [snip] ... In the case of a hardfork [snip]
>
Yes, I know what the difference between them is at a technical level. You
didn't explain why this would make any difference to how fast miners
upgrade. The amount of money they lose in both cases is identical: they are
equally incentivised to upgrade with both fork types.

Additionally, you say in a hard fork the other chain may "continue
forever". Why do you think this is not true for miners building invalid
blocks on top of the main chain? Why would that not continue forever?

There just isn't any difference between the two fork types in terms of how
fast miners would upgrade. Heck if anything, a hard fork should promote
faster upgrades, because if a miner isn't paying attention to their
debug.log they might miss the warnings. A soft fork would then look
identical to a run of really bad luck, which can legitimately happen from
time to time. A hard fork results in your node having a different height to
everyone else, which is easily detectable by just checking a block explorer.

> This discussion about the general desirability of softforks seems offtopic
> for the concrete cltv deployment discussion, which assumes softforks as
> deployment mechanism (just like bip66 assumed it).
>
Isn't that circular? This thread is about deployment of CLTV, but the BIP
assumes a particular mechanism, so pointing out problems with it is off
topic? Why have a thread at all?


Re: [bitcoin-dev] Is it possible for there to be two chains after a hard fork?

2015-09-29 Thread Mike Hearn via bitcoin-dev
>
> Mining empty blocks is not fraud.
>

I didn't say it was, sorry, the comma was separating two list items. By
"fraud" I meant double spending. Mining only empty blocks would be a DoS
attack rather than double spending.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Mike Hearn via bitcoin-dev
>
> Can you explain exactly how you think wallets will "know" how to ignore
> the invalid chain?
>

I'm confused - I already said this. For a fork to work, hard or soft, there
must be support from a majority of the hash power.

Therefore, the usual SPV technique of following the highest work chain
results in ignoring the minority chain produced by the hard fork.

BIP 101 is SPV friendly because the wallets would simply follow the 75%
chain and never even be aware anything has changed. It's backwards
compatible with them in this respect: they already know how to ignore the
no-bigger-blocks fork that'd be created if some miners didn't upgrade
during the grace period.

My point about IsStandard is that miners can and do bypass it, without
expecting that to carry financial consequences or lower the security of
other users. By making it so a block which includes non-standard
transactions can end up being seen as invalid, you are increasing the risk
of accidents that carry financial consequences.

That's incorrect: Miners bypassing IsStandard() risk creating invalid
> blocks in the event of a soft-fork. Equally, we design soft-forks to
> take advantage of this.
>

Gah. You repeated what I just said. Yes, I know miners face that risk; my
point is that they do NOT face such a risk when there's no soft fork in
action, and have historically NOT faced that risk at all, hence the
widespread practice of bypassing or modifying this function.

All this approach does is make changing IsStandard() the same as changing
AcceptBlock(), except without the advantage of telling anyone about it.


> > So I'll repeat the question that I posed before - given that there are
> > clear, explicit downsides, what is the purpose of doing things this way?
> > Where is the gain for ordinary Bitcoin users?
>
> We seem to be in strong disagreement about which option has "clear,
> explicit downsides"


Obviously. So please enlighten me.

How do ordinary Bitcoin users benefit from this rollout strategy? Put
simply, what is the point of this whole complex soft fork endeavour?


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Mike Hearn via bitcoin-dev
There is *no* consensus on using a soft fork to deploy this feature. It
will result in the same problems as all the other soft forks - SPV wallets
will become less reliable during the rollout period. I am against that, as
it's entirely avoidable.

Make it a hard fork and my objection will be dropped.

Until then, as there is no consensus, you need to do one of two things:

1) Drop the "everyone must agree to make changes" idea that people here
like to peddle, and do it loudly, so everyone in the community is correctly
informed

2) Do nothing


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Mike Hearn via bitcoin-dev
>
> 1) Do you agree that CLTV should be added to the Bitcoin protocol?
>
> Ignoring the question how exactly it is added, hard-fork or soft-fork.
>

The opcode definition seems OK.


> 2) Will you add a IsSuperMajority() CLTV soft-fork to Bitcoin XT if it
>is added to Bitcoin Core?
>

Yes. It might be worth putting the version bit change behind a command line
flag though: the BIP, as written, has problems (with deployment).


> 3) Will you add soft-fork detection to bitcoinj, to allow SPV clients to
>    detect advertised soft-forks and correctly handle them?
>

I'd really hate to do that. It'd be a Rube Goldberg machine:

   https://krypt3ia.files.wordpress.com/2011/11/rube.jpg

There's no really good way to do what you propose, and we already have a
perfectly workable mechanism to tell SPV clients about chain forks: the
block chain itself. This has the advantage of being already implemented,
already deployed, and it works correctly.

Attempting to strap a different mechanism on top to try and make soft forks
more like hard forks would be a large and pointless waste of people's time
and effort, not just mine (bitcoinj is not the only widely used SPV
implementation nowadays). You may as well go straight to the correct
outcome instead of trying to simulate it with ever more complex mechanisms.


Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-20 Thread Mike Hearn via bitcoin-dev
>
> Also, in the US, despite overwhelming resistance on a broad scale,
> legislation continues to be presented which would violate the 2nd amendment
> right to keep and bear arms.


And yet the proposed legislation goes nowhere, and the USA continues to
stand alone in having the first world's weakest gun control laws.

You are just supporting my point with this example. Obama would like to
restrict guns, but can't, because they are too popular (in the USA).

The comparison to BitTorrent is likewise weak: governments hardly care
about piracy. They care enough to pass laws occasionally, but not enough to
put serious effort into enforcement. Wake me up when the USA establishes a
Copyright Enforcement Administration with the same budget and powers as the
DEA.

Internet based black markets exist only because governments tolerate them
(for now). A ban on Tor, Bitcoin or both would send them back to the
pre-2011 state where they were virtually non-existent. Governments tolerate
this sort of abuse only because they believe, I think correctly, that
Bitcoin can have great benefits for their ordinary voters and for now are
willing to let the tech industry experiment.

But for that state of affairs to continue, the benefits must actually
appear. That requires growth.

I think there's a difference between natural growth and the kind of growth
> that's being proposed by bank-backed start-ups and pro-censorship entities.
>

What difference? Are you saying the people who come to Bitcoin because of a
startup are somehow less "natural" than other users?


Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-19 Thread Mike Hearn via bitcoin-dev
>
> Let me get this straight. You start this whole debate with a "kick the can
> down the road" proposal to increase the block size to 20MB, which obviously
> would require another hard fork in the future, but if someone else proposes
> a similar "kicka the can" proposal you will outright reject it?
>

Which part of "in the next few years" was unclear?

This seems to be a persistent problem in the block size debates: the
assumption that there are only two numbers, zero and infinity.

BIP101 tops out at 8 gigabyte blocks, which would represent extremely high
transaction rates compared to today. *If* Bitcoin ever became so popular,
it would be a long way in the future, and many things could have happened:

   1. Bitcoin may have become as irrelevant as the Commodore 64 is.
   2. We may have invented upgrades that make Bitcoin 100x more efficient
   than today.
   3. Hardware may have improved so much that it no longer matters.
   4. The world may have been devastated by nuclear war and nobody gives a
   shit about internet currencies anymore, because there is no internet.

It's silly to ignore the time dimension in these decisions. Bitcoin will
not last forever: even if it becomes very successful, one day it will be
replaced by something better, so it does not have to handle
infinite usage.

But hey, as you bring it up, I'd have been happy with no upper limit at
all. There's nothing magic about 8 gigabytes. I go along with BIP 101
because it is still the only proposal that is both reasonable and
implemented, and I'm willing to compromise.


Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-19 Thread Mike Hearn via bitcoin-dev
>
> Your argument is that the state is not a threat to a system designed to
> deprive the state of seigniorage, because the state will see that system
> as too important?
>

And so we get to one of the hearts of the debate.

The axiom upon which you and NxtChg disagree is this: he/she believes
governments can crush Bitcoin if they want regardless of how decentralised
it is, and you don't.

If one believes governments have the power to end Bitcoin no matter what,
then the only true protection comes from popularity. Governments find it
hard to ban things that are wildly popular with their voters. This is the
Uber approach: grow fast, annoy governments, but be popular enough that
banning you is politically risky.

If you don't believe that governments can end Bitcoin because of
decentralisation, then the opposite conclusion is logical: growth can be
dangerous because stateless money will be inherently opposed by the state,
therefore if growth == less decentralisation, growth increases the risk of
state shutdown.

I don't think we have to choose between decentralisation and growth
actually - computers are just amazingly fast. But that's irrelevant here.

The point is, your disagreement is summed up by your statement:


> Bitcoin cannot be both decentralized and reliant on being, "too important
> to close". If it can be closed there is insufficient decentralization.
>

I believe this statement is wrong because governments can shut down Bitcoin
at any point regardless of its level of decentralisation. This is true
because:

   - Most governments can easily spend enough money to do a 51% attack,
   especially if they can compel chip fabs to cooperate for free. This attack
   works regardless of how decentralised Bitcoin is.

   - Any government can end Bitcoin usage in its territory by jailing
   anyone who advertises acceptance/trading of bitcoins, or prices in BTC.
   Because merchants *must* advertise in order to alert customers that
   trades in BTC are possible, this is an attack which is unsolvable. If
   ordinary people can find such merchants so can government agents.

It may appear that trade cannot be suppressed because merchants can all
become anonymous too, a la Silk Road. However, if use of Bitcoin is banned
then it becomes impossible to convert coins into local currency, as that
requires the cooperation of banks ... making it useless even for anonymous
merchants. An outlaw currency is useless even to outlaws.

Because Bitcoin's existence ultimately relies on government cooperation and
acceptance, the best way to ensure its survival is growth. Lots of it.


Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

2015-09-18 Thread Mike Hearn via bitcoin-dev
Any change that results in this happening all over again in a few years
does not have consensus.


Re: [bitcoin-dev] Your Gmaxwell exchange

2015-08-31 Thread Mike Hearn via bitcoin-dev
I think your summary of what people actually want from decentralisation is
pretty good, Justus.


> I don't believe that any Bitcoin user actually cares
> about decentralization, because none of them I've asked can define that
> term.
>

+1 Insightful

It's been quite impressive to see so many Bitcoin users and developers
saying, "Bitcoin is totally decentralised because it's open source and
nobody is in charge... oh nooo, we didn't mean you could change *those
lines*! If you want to change *those lines* then *we* must agree first!"

Believing simultaneously that:

1. Bitcoin is decentralised

2. Nobody should modify the code in certain ways without the agreement of
me and my buddies

is just doublethink.


Re: [bitcoin-dev] Revisiting NODE_BLOOM: Proposed BIP

2015-08-24 Thread Mike Hearn via bitcoin-dev
NACK: stated rationales are invalid: both privacy and DoS (see below for
experimental data).


1 - Bloom filtering doesn't add privacy for node operators; it adds privacy
for lightweight wallets. And in fact, with a high false-positive (FP) rate it
does do that.
Most users want both low bandwidth usage *and* query scrambling, which is
harder to do but not impossible. There is a clear roadmap for how to
implement that with smarter clients: no protocol changes are needed.

So the first stated rationale is spurious: disabling Bloom filtering
doesn't improve privacy for anyone. It can only hurt.
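
To make the FP-rate point concrete, here is a minimal sketch of the trade-off
(it uses Bitcoin Core's CBloomFilter class purely for illustration - a real SPV
wallet such as bitcoinj builds the equivalent BIP37 structure client-side; the
element list, rate and flags are examples, not recommendations):

  #include "bloom.h"   // CBloomFilter, BLOOM_UPDATE_ALL
  #include "random.h"  // GetRand
  #include <algorithm>
  #include <limits>
  #include <vector>

  // A deliberately "noisy" filter: a 5% false-positive rate makes the serving
  // node return many unrelated transactions alongside the wallet's own, which
  // hides the watched scripts at the cost of extra bandwidth. A 0.01% rate
  // would minimise bandwidth but reveal the wallet's addresses far more clearly.
  CBloomFilter MakeNoisyFilter(const std::vector<std::vector<unsigned char> >& watched)
  {
      CBloomFilter filter(std::max<size_t>(watched.size(), 1), 0.05 /* FP rate */,
                          GetRand(std::numeric_limits<unsigned int>::max()),
                          BLOOM_UPDATE_ALL);
      for (size_t i = 0; i < watched.size(); i++)
          filter.insert(watched[i]);
      return filter;
  }

Smarter clients can push further in this direction without touching the
protocol at all, which is the roadmap referred to above.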



2 - SPV usage is rising, not falling.

Peter's data is flawed because he ignored the fact that SPV clients tend to
connect, sync, then disconnect. They don't remain connected all the time.
So merely examining a random snapshot of what's connected at a single point
in time will give wildly varying and almost random results.

A more scientifically valid approach is to check the number of actual
connections over a long span of time. Here's the data from my node:

mike@plan99:~/.bitcoin$ grep -Po 'receive version message: ([^:]*):'
debug.log |sort |uniq -c|sort -n|tac|head -n 10
  11027 receive version message: /getaddr.bitnodes.io:
   6264 receive version message: /bitcoinseeder:
   4944 receive version message: /bitcoinj:
   2531 receive version message: /Snoopy:
   2362 receive version message: /breadwallet:
   1127 receive version message: /Satoshi:
    204 receive version message: /Bitcoin XT:
    128 receive version message: /BitCoinJ:
     97 receive version message: /Bither1.3.8/:
     82 receive version message: /Bitaps:

Once crawlers are removed, SPV wallets (bitcoinj, breadwallet) make up the
bulk of all P2P clients. This is very far from 1% and falling, as Todd
wrongly suggests.



3 - It is said that a DoS attack is possible. This claim does not seem to
have been researched.

I decided to test it out for real, so I implemented a DoS attack similar to
the one we've seen against XT nodes: it sends getdata for large (1mb)
filtered blocks over and over again as fast as possible.

As was reported and makes sense, CPU usage goes to 100%. However, I couldn't
see any other effects. RPCs still react immediately, the Qt GUI is fully
responsive, and I was even able to sync another SPV client to that node and it
proceeded at full speed. It's actually pretty nice to see how well it held
up.

Most importantly transactions and blocks continued to be relayed without
delay. I saw my VPS node receive a block only eight seconds after my local
node, which is well within normal propagation delays.

There's another very important point here: I profiled my local node whilst
it was under this attack. It turns out that Bloom filtering is extremely
fast. 90% of the CPU time is spent on loading and deserializing the data
from disk. Only 10% of the CPU time was spent actually filtering.

Thus you can easily trigger exactly the same DoS attack by just using
regular getdata requests on large blocks over and over. You don't need
Bloom filtering. If you don't want to actually download the blocks just
don't TCP ACK the packets and then FIN after a few seconds  the data
will all have been loaded and be sitting in the send buffers.

So even if I refine the attack and find a way to actually deny service to
someone, the fix would have to apply to regular non-filtered block fetches
too, which cannot be disabled.


In summary: this BIP doesn't solve anything, but does create a big upgrade
headache.


Re: [bitcoin-dev] Bitcoin XT Fork

2015-08-20 Thread Mike Hearn via bitcoin-dev

 It is just that no one else is reckless enough to bypass the review process


I keep seeing this notion crop up.

I want to kill this idea right now:

   - There were months of public discussion leading to up the authoring of
   BIP 101, both on this mailing list and elsewhere.

   - BIP 101 was submitted for review via the normal process. Jeff Garzik
   specifically called Gavin out on Twitter and thanked him for following the
   process:

   https://twitter.com/jgarzik/status/614412097359708160

   https://github.com/bitcoin/bips/pull/163

   As you can see, other than a few minor typo fixes and a comment by sipa,
   there was no other review offered.

   - The implementation for BIP 101 was submitted to Bitcoin Core as a pull
   request, to invoke the code review process:

   https://github.com/bitcoin/bitcoin/pull/6341

   Some minor code layout suggestions were made by Cory and incorporated.
   Peter popped up to say there was no chance it'd ever be accepted ... and
   no further review was done.

So the entire Bitcoin Core BIP process was followed to the letter. The net
result was this: there were, in fact, bugs in the implementation of BIP
101. They were found when Gavin submitted the code to the XT community
review process, which resulted in *actual* peer review. Additionally, there
was much discussion of technical details on the XT mailing list that
Bitcoin Core entirely ignored.


Re: [bitcoin-dev] Bitcoin XT's Tor IP blacklist downloading system has significant privacy leaks.

2015-08-19 Thread Mike Hearn via bitcoin-dev
The code was peer reviewed, in the XT project. I didn't bother submitting
other revisions to Core, obviously, as it was already rejected.

The quantity of incorrect statements in this thread is quite ridiculous.


Re: [bitcoin-dev] Bitcoin XT 0.11A

2015-08-16 Thread Mike Hearn via bitcoin-dev
Hi Eric,

Sorry you feel that way. I devoted a big part of the article to trying to
fairly represent the top 3 arguments made, but ultimately I can't link to a
clear statement of what Bitcoin Core thinks because there isn't one. Some
people think the block size should increase, but not now, or not by much.
Others think it should stay at 1mb forever; others think everyone should
migrate to Lightning; people who are actually *implementing* Lightning
think it's not a replacement for an increase ... I think one or two
people even suggested shrinking the block size!

So I've done my best to sum up the top arguments. If you think I've done a
bad job, well, get writing and lay it out how you see it!

I don't think the position of "Bitcoin is open source but touching THESE
parts is completely bogus" is reasonable. Bitcoin is open source or it
isn't. You can't claim to be decentralised and open source, but then only
have 5 people who are allowed to edit the most important parts. That's
actually worse than central banking!

This isn’t a democracy - consensus is all or nothing.


This idea is one of the incorrect beliefs that will hopefully be disproven
in the coming months. Bitcoin cannot possibly be all or nothing because,
as I pointed out before, that would give people a strong financial
incentive to try and hold the entire community to ransom: "I have 1
terahash/sec of mining power. Pay me 1000 BTC or I'll never agree to the
next upgrade."

Or indeed, me and Gavin could play the same trick.


Re: [bitcoin-dev] Future Of Bitcoin-Core's Wallet

2015-08-11 Thread Mike Hearn via bitcoin-dev
Hey Jonas,

I think your analysis of what (some) users need is a good one.

We've discussed this before so I know you prefer your current approach, but
I personally would take a slightly different path to reach the same end:

   1. Support serving of SPV wallets from pruned storage. This means some
   protocol upgrades, BIPs, etc. It helps all SPV wallets, including on phones.
   2. Then make a bitcoinj based desktop wallet app, that contains a
   bundled bitcoind.
   3. Make the app sync TWO wallets simultaneously, one from the P2P
   network as today, and another from the local bitcoind via a local socket
   (or even just passing buffers around internally)
   4. The app should then switch from using the wallet synced to P2P to the
   wallet synced to localhost when the latter is fully caught up, and back
   again when the local node is behind.
   5. If there's a discrepancy, alert the user.

There are big advantages of taking this path! They are:

   - The switching back and forth between local full-security mode (which
   may be behind) and remote SPV security (fully synced) is instant and
   transparent to the user. This is important for laptop users who don't run a
   local node all the time. The different audit levels can be reflected in the
   UI in some way.

   - The bitcoinj wallet code already has support for things like
   multi-sig, BIP32, seed words, micropayment channels, etc. You can disable
   Bloom filtering if you like (download full blocks).

   - You can do a local RPC or JNI/JNA call to get fee estimates, if wanted.

   - The modern JVM tools and languages are much, much more productive than
   working with C++.


If you want a thing that runs on a home server, then the best way to do that
IMO would be to bundle Tor and make it auto-register a Tor hidden service.
Then you can just define a QR code standard for 'pairing' a wallet to a
.onion address. Any bitcoinj based wallet can sync to it, and as it's your
own node, you can use a Bloom filter sized to give virtually no false
positives. No additional indexing is then required.


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Mike Hearn via bitcoin-dev
Whilst 1mb to 8mb might seem irrelevant from a pure computer science
perspective, payment demand is not really infinite, at least not if by
"payment" we mean something resembling how current Bitcoin users use the
network.

If we define payment to mean the kind of thing that Bitcoin users and
enthusiasts have been doing up until now, then suddenly 1mb to 8mb makes a
ton of sense and doesn't really seem that small: we'd have to increase
usage by nearly an order of magnitude before it becomes an issue again!

If we think of Bitcoin as a business that serves customers, growing our
user base by an order of magnitude would be a great and celebration worthy
achievement! Not at all a small constant factor :)

And keeping the current user base happy and buying things is extremely
interesting, both to me and Gavin. Without users Bitcoin is nothing at all.
Not a settlement network, not anything.

It's actually going to be quite hard to grow that much. As the white paper
says, the system "works well enough for most transactions". And despite a
lot of effort by many people, killer apps that use Bitcoin's unique
features are still hit and miss. Perhaps Streamium, Lighthouse, ChangeTip,
some distributed exchange or something else will stimulate huge new demand
for transactions in future ... but if so we're not there yet.


Re: [bitcoin-dev] Block size following technological growth

2015-07-31 Thread Mike Hearn via bitcoin-dev
Hey Jorge,

He is not saying that. Whatever the reasons for centralization are, it
 is obvious that increasing the size won't help.


It's not obvious. Quite possibly bigger blocks == more users == more nodes
and more miners.

To repeat: it's not obvious to me at all that everything wrong with Bitcoin
can be solved by shrinking blocks. I don't think that's going to suddenly
make everything magically more decentralised.

The 8mb cap isn't quite arbitrary. It was picked through negotiation with
different stakeholders, in particular, Chinese miners. But it should be
high enough to ensure organic growth is not constrained, which is good
enough.

I think it would be nice to have some sort of simulation to calculate
 a centralization heuristic for different possible blocksize values
 so we can compare these arbitrary numbers somehow.


Centralization is not a single floating point value that is controlled by
block size. It's a multi-faceted and complex problem. You cannot destroy
Bitcoin through centralization by adjusting a single constant in the
source code.

To say once more: block size won't make much difference to how many
merchants rely on payment processors because they aren't using them due to
block processing overheads anyway. So trying to calculate such a formula
won't work. Ditto for end users on phones, ditto for developers who want
JSON/REST access to an indexed block chain, or hosted wallet services, or
miners who want to reduce variance.

None of these factors have anything to do with traffic levels.

What people like you and Pieter are doing is making a single number a kind
of proxy for all fears and concerns about the trend towards outsourcing in
the Bitcoin community. Everything gets compressed down to one number you
feel you can control, whether it is relevant or not.

 So why should anyone go through the massive hassle of setting up exchanges,
  without the lure of large future profits?

 Are you suggesting that bitcoin consensus rules should be designed
 to maximize the profits of Bitcoin exchanges?


That isn't what I said at all Jorge. Let me try again.

Setting up an exchange is a lot of risky and expensive work. The motivation
is profit, and profits are higher when there are more users to sell to.
This is business 101.

If you remove the potential for future profit, you remove the motivation to
create the services that we now enjoy and take for granted. Because if you
think Bitcoin can be useful without exchanges then let me tell you, I was
around when there were none. Bitcoin was useless.


Re: [bitcoin-dev] Block size following technological growth

2015-07-31 Thread Mike Hearn via bitcoin-dev

 How more users or more nodes can bring more miners, or more importantly,
 improve mining decentralization?


Because the bigger the ecosystem is the more interest there is in taking
part?

I mean, I guess I don't know how to answer your question. When Bitcoin was
new it had almost no users and almost no miners. Now there are millions of
users and factories producing ASICs just for Bitcoin. Surely the
correlation is obvious?

I'm sorry, but until there's a simulation that I can run with different
 sizes' testchains (for example using #6382) to somehow compare them, I will
 consider any value arbitrary.


Gavin did run simulations. 20mb isn't arbitrary; the process behind it was
well documented here:

http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

*I chose 20MB as a reasonable block size to target because 170 gigabytes
per month comfortably fits into the typical 250-300 gigabytes per month
data cap– so you can run a full node from home on a “pretty good” broadband
plan.*
Did you think 20mb was picked randomly?
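
For anyone who wants to sanity-check that figure, the arithmetic is simple. A
minimal back-of-the-envelope sketch follows; the roughly 2x relay overhead is
my own assumption about how the estimate was reached, not something stated in
the post:

    # Back-of-the-envelope bandwidth estimate for a full node at a given block
    # size. Assumptions (mine, for illustration): one block every 10 minutes on
    # average, and roughly 2x overhead for uploading blocks to peers as well as
    # downloading them.

    BLOCK_SIZE_MB = 20
    BLOCKS_PER_MONTH = 30 * 24 * 6   # ~10 minute block interval
    RELAY_OVERHEAD = 2.0             # download once, upload roughly once

    download_gb = BLOCK_SIZE_MB * BLOCKS_PER_MONTH / 1000.0
    total_gb = download_gb * RELAY_OVERHEAD

    print(f"{BLOCKS_PER_MONTH} blocks/month at {BLOCK_SIZE_MB} MB = "
          f"{download_gb:.0f} GB down, ~{total_gb:.0f} GB including relay")
    # -> 4320 blocks/month = 86 GB down, ~173 GB total, i.e. the same ballpark
    #    as the 170 gigabytes per month quoted above.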


 Agreed on the first sentence, I'm just saying that the influence of
 the blocksize in that function is monotonic: with bigger sizes, equal
 or worse mining centralization.


I have a hard time agreeing with this because I've seen Bitcoin go from
blocks that were often empty to blocks that are often full, and in this
time the number of miners and hash power on the network has gone up a huge
amount too.

You can argue that a miner doesn't count if they pool mine. But if a miner
mines on a pool that uses exactly the same software and settings as the
miner would have done anyway, then it makes no difference. Miners can
switch between pools to find one that works the way they like, so whilst
less pooling or more decentralised pools would be nice (e.g.
getblocktemplate), and I've written about how to push it forward before, I
still say there are many more miners than in the past.

If I had to pick between two changes to improve mining decentralisation:

1) Lower block size
2) Finishing, documenting, and making the UX really slick for a
getblocktemplate based decentralised mining pool

then I'd pick (2) in a heartbeat. I think it'd be a lot more effective.
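
To make option (2) concrete: the primitive already exists in bitcoind as the
getblocktemplate RPC. A minimal sketch of a miner pulling a template from its
own node might look like the following; the URL and credentials are
placeholders and this is nowhere near a complete pool implementation:

    # Minimal sketch: ask a local bitcoind for a block template via the
    # getblocktemplate RPC. The node URL and credentials below are placeholders.
    import base64
    import json
    import urllib.request

    def rpc_call(method, params=None, url="http://127.0.0.1:8332",
                 user="rpcuser", password="rpcpass"):
        payload = json.dumps({"jsonrpc": "1.0", "id": "gbt",
                              "method": method, "params": params or []}).encode()
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        auth = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + auth)
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["result"]

    # Note: newer bitcoind versions expect a template request argument such as
    # {"rules": ["segwit"]}; a bare call is shown here for simplicity.
    template = rpc_call("getblocktemplate")
    print("height:", template["height"])
    print("transactions available:", len(template["transactions"]))
    # A decentralised pool client would now build its own coinbase paying the
    # miner's address, assemble the block locally, and only hand shares/work
    # back to the pool, instead of letting the pool choose the transactions.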


 you should be consequently advocating for full removal of the limit rather
 than changes towards bigger arbitrary values.


I did toy with that idea a while ago. Of course there can not really be no
limit at all because the code assumes blocks fit into RAM/swap, and nodes
would just end up ignoring blocks they couldn't download in time anyway.
There is obviously a physical limit somewhere.

But it is easier to find common ground with others by compromising. Is 8mb
better than no limit? I don't know and I don't care much:  I think Bitcoin
adoption is a slow, hard process and we'll be lucky to increase average
usage 8x over the next couple of years. So if 8mb+ is better for others,
that's OK by me.



 Sorry, I don't know about Pieter, but I was mostly talking about
 mining centralization, certainly not about payment services.


OK. I write these emails for other readers too :) In the past, for instance,
the topic of developers who run services without running their own nodes has
come up.

Re: exchange profit. You can pick some other useful service provider if you
like. Payment processors or cold storage providers or the TREZOR
manufacturers or whoever.

My point is you can't have a tiny, high-value-transactions-only currency AND
all the useful infrastructure that the Bitcoin community is making. It's a
contradiction. And without the infrastructure bitcoin ceases to be
interesting even to people who are willing to pay huge sums to use it.


Re: [bitcoin-dev] Block size following technological growth

2015-07-31 Thread Mike Hearn via bitcoin-dev
I agree with Gavin - whilst it's great that a Blockstream employee has
finally made a realistic proposal (i.e. not "let's all use Lightning") -
this BIP is virtually the same as keeping the 1mb cap.

 Well, centralization of mining is already terrible. I see no reason why we
 should encourage making it worse.

Centralization of mining has been a continual gripe since Slush first
invented pooled mining. There has never been a time after that when people
weren't talking about the centralisation of mining, and back then blocks
were ~10kb.

I see constant assertions that node count, mining centralisation,
developers not using Bitcoin Core in their own businesses etc. are all to do
with block sizes. But nobody has shown that. Nobody has even laid the
groundwork for that. Verifying blocks takes milliseconds and downloading
them takes seconds everywhere except, apparently, China: this resource
usage is trivial.
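
To put rough numbers on "takes seconds", here is a trivial illustration; the
connection speeds are arbitrary examples I've chosen, not measurements:

    # Rough illustration of block download time at a few example connection
    # speeds; the speeds are arbitrary values chosen for illustration.
    for block_mb in (1, 8):
        for mbit_per_s in (2, 10, 50):
            seconds = block_mb * 8 / mbit_per_s
            print(f"{block_mb} MB block at {mbit_per_s} Mbit/s: {seconds:.1f} s")
    # Even on a slow 2 Mbit/s link an 8 MB block arrives in about 32 seconds;
    # on typical broadband it is a handful of seconds.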

Yet developers, miners and users even outside of China routinely delegate
validation to others. Often for quite understandable technical reasons that
have nothing to do with block sizes.

So I see no reason why arbitrarily capping the block size will move the
needle on these metrics. Trying to arrest the growth of Bitcoin for
everyone won't suddenly make Bitcoin-Qt a competitive wallet, or make
service devs migrate away from chain.com, or make merchants stop using
BitPay.

We need to accept that, and all previous proposals I've seen don't seem to
 do that.

I think that's a bit unfair: BIP 101 keeps a cap. Even with 8mb+growth
you're right, some use cases will be priced out. I initiated the
micropayment channels project (along with Matt, tip of the hat)
specifically to optimise a certain class of transactions. Even with 8mb+
blocks, there will still be a need for micropayment channels, centralised
exchange platforms and other forms of off chain transaction.

If Bitcoin needs to support a large scale, it already failed.

It hasn't even been tried.

The desperately sad thing about all of this is that there's going to be a
fork, and yet I think most of us agree on most things.  But we don't agree
on this.

Bitcoin can support a large scale and it must, for all sorts of reasons.
Amongst others:

   1. Currencies have network effects. A currency that has few users is
   simply not competitive with currencies that have many. There's no such
   thing as a settlement currency for high value transactions only, as
   evidenced by the ever-dropping importance of gold.


   2. A decentralised currency that the vast majority can't use doesn't
   change the amount of centralisation in the world. Most people will still
   end up using banks, with all the normal problems. You cannot solve a
   problem by creating a theoretically pure solution that's out of reach of
   ordinary people: just ask academic cryptographers!


   3. Growth is a part of the social contract. It always has been.

   The best quote Gregory can find to suggest Satoshi wanted small blocks
   is a one-sentence hypothetical example about what *might* happen if
   Bitcoin users became tyrannical as a result of non-financial transactions
   being stuffed in the block chain. That position makes sense because his
   scaling arguments assume payment-network-sized traffic, and throwing DNS
   systems or whatever into the mix could invalidate those arguments in the
   absence of merged mining. But Satoshi did invent merged mining, and so
   there's no need for Bitcoin users to get tyrannical: his original
   arguments still hold.


   4. All the plans for some kind of ultra-throttled Bitcoin network used
   for infrequent transactions neglect to ask where the infrastructure for
   that will come from. The network of exchanges, payment processors and
   startups that are paying people to build infrastructure are all based on
   the assumption that the market will grow significantly. It's a gamble at
   best because Bitcoin's success is not guaranteed, but if the block chain
   cannot grow it's a gamble that is guaranteed to be lost.

   So why should anyone go through the massive hassle of setting up
   exchanges, without the lure of large future profits?


   5. Bitcoin needs users, lots of them, for its political survival. There
   are many people out there who would like to see digital cash disappear, or
   be regulated out of existence. They will argue for that in front of
   governments and courts; some already are. And if they're going to lose
   those arguments, the political and economic damage of getting rid of
   Bitcoin must be large enough to make people think twice. That means it
   needs supporters, it needs innovative services, it needs companies, and it
   needs legal users making legal payments: as many of them as possible.

   If Bitcoin is a tiny, obscure currency used by drug dealers and a
   handful of crypto-at-any-cost geeks, the cost of simply banning it outright
   will seem trivial and the hammer will drop. There won't be a large 

Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-29 Thread Mike Hearn via bitcoin-dev
I do love history lessons from people who weren't actually there.

Let me correct your misconceptions.


Initially there was no block size limit - it was thought that the fee
 market would naturally develop and would impose economic constraints on
 growth.


The term "fee market" was never used back then, and Satoshi did not ever
postulate economic constraints on growth. Back then the talk was (quite
sensibly) how to grow faster, not how to slow things down!



 But this hypothesis failed after a sudden influx of new uses. It was still
 too easy to attack the network. This idea had to wait until the network was
 more mature to handle things.


No such event happened, and the hypothesis of which you talk never existed.



 Enter a “temporary” anti-spam measure - a one megabyte block size limit.


The one megabyte limit was nothing to do with anti spam. It was a quick
kludge to try and avoid the user experience degrading significantly in the
event of a DoS block, back when everyone used Bitcoin-Qt. The fear was
that some malicious miner would generate massive blocks and make the wallet
too painful to use, before there were any alternatives.

The plan was to remove it once SPV wallets were widespread. But Satoshi
left before that happened.


Now on to your claims:

1) We never really got to test things out…a fee market never really got
 created, we never got to see how fees would really work in practice.


The limit had nothing to do with fees. Satoshi explicitly wanted free
transactions to last as long as possible.


 2) Turns out the vast majority of validation nodes have little if anything
 to do with mining - validators do not get compensated…validation cost is
 externalized to the entire network.


Satoshi explicitly envisioned a future where only miners ran nodes, so it
had nothing to do with this either.

Validators validate for themselves. Calculating a local UTXO set and then
not using it for anything doesn't help anyone. SPV wallets need filtering
and serving capability, but a computer can filter and serve the chain
without validating it.

The only purposes non-mining, non-rpc-serving, non-Qt-wallet-sustaining
full nodes are needed for with today's network are:

   1. Filtering the chain for bandwidth-constrained SPV wallets (nb: you
   can run an SPV wallet that downloads all transactions if you want). But
   this could be handled by specialised nodes, just like we always imagined
   ("in future not every node will serve the entire chain, but only special
   archival nodes").

   2. Relaying validated transactions so SPV wallets can stick a thumb into
   the wind and heuristically guess whether a transaction is valid or not.
   This is useful for a better user interface.

   3. Storing the mempool and filtering/serving it so SPV wallets can find
   transactions that were broadcast before they started, but not yet included
   in a block. This is useful for a better user interface.

Outside of serving lightweight P2P wallets there's no purpose in running a
P2P node if you aren't mining, or using it as a trusted node for your own
operations.

And if one day there aren't enough network nodes being run by volunteers to
service all the lightweight wallets, then we can easily create an incentive
scheme to fix that.


3) Miners don’t even properly validate blocks. And the bigger the blocks
 get, the greater the propensity to skip this step. Oops!


Miners who don't validate have a habit of bleeding money: that's the
system working as designed.



 4) A satisfactory mechanism for thin clients to be able to securely obtain
 reasonably secure, short proofs for their transactions never materialized.


It did. I designed it. The proofs are short and reasonably secure in that
it would be a difficult and expensive attack to mount.
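
To show what "short" means in practice: an SPV proof is just the Merkle branch
linking a transaction to the root committed in a block header, so it grows
logarithmically with the number of transactions in a block. A simplified
sketch, glossing over Bitcoin's byte-order details:

    # Sketch of why an SPV proof is short: proving a transaction is in a block
    # only needs the Merkle branch from that transaction up to the root in the
    # block header, i.e. log2(n) hashes. Real Bitcoin txids and roots have
    # byte-order quirks that are deliberately glossed over here.
    import hashlib

    def dhash(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def root_from_branch(txid: bytes, branch, index: int) -> bytes:
        """branch: sibling hashes from leaf level up to the root;
        index: the transaction's position within the block."""
        h = txid
        for sibling in branch:
            h = dhash(h + sibling) if index % 2 == 0 else dhash(sibling + h)
            index //= 2
        return h

    # An SPV wallet recomputes the root from the branch and compares it with
    # the merkle root in the block header it already has; a match proves the
    # transaction is included in that block.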

But as is so often the case with Bitcoin Core these days, someone who came
along much later has retroactively decided that the work done so far fails
to meet some arbitrary and undefined level of perfection. "Satisfactory"
and "reasonably secure" don't mean anything, especially not coming from
someone who hasn't done the work, so why should anyone care about that
opinion of yours?


Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-29 Thread Mike Hearn via bitcoin-dev

 Irrelevant what term was used - and as brilliant as Satoshi might have
 been at some things, he obviously got this one wrong.


I don't think it's obvious. You may disagree, but don't pretend any of this
stuff is obvious.

Consider this:  the highest Bitcoin tx fees can possibly go is perhaps a
little higher than what our competition charges. Too much higher than that,
and people will just say: you know what, I'll make a bank transfer.
It's cheaper and not much slower, sometimes no slower at all.

And now consider that in many parts of the world bank transfers are free.

They aren't actually free, of course, but they *appear* to be free because
the infrastructure for doing them is cross subsidised by the fees on other
products and services, or hidden in the prices of goods sold.

So that's a market reality Bitcoin has to handle. It's already more
expensive than the competition sometimes, but luckily not much more, and
anyway Bitcoin has some features those other systems lack (and vice versa).
So it can still be competitive.

But your extremely vague notion of a "fee market" neglects to consider that
it already exists, and it's not a market of Bitcoin users buying space in
Bitcoin blocks. It's users paying to move money.

You can argue with this sort of economic logic if you like, but don't claim
this stuff is obvious.

Nobody threatened to start mining huge blocks given how relatively
 inexpensive it was to mine back then?


Not that I recall. It wasn't a response to any actual event, I think, but
rather a growing realisation that the code was full of DoS attacks.



 Guess what? SPV wallets are still not particularly widespread…and those
 that are out there are notoriously terrible at detecting network forks and
 making sure they are on the right one.


The most popular mobile wallet (measured by installs) on Android is SPV. It
has between 500,000 and 1 million installs, whilst Coinbase has not yet
crossed the 500,000 mark. One of the most popular wallets on iOS is SPV. If
we had SPV wallets with better user interfaces on desktops, they'd be more
popular there too (perhaps MultiBit HD can recapture some lost ground).

So I would argue that they are in fact very widespread.

Likewise, they are not notoriously terrible at detecting chain forks.
That's a spurious idea that you and Patrick have been pushing lately, but
they detect them and follow reorgs across them according to the SPV
algorithm, which is based on most work done. This is exactly what they are
designed to do.
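
For readers who haven't looked at the details, "most work done" is a precise
rule: sum the work implied by each header's difficulty target and follow the
tip with the largest total. A stripped-down sketch of that rule, with the
header representation simplified for illustration:

    # Stripped-down sketch of "most work done": each header's compact
    # difficulty target ("bits") implies an amount of expected work; the wallet
    # follows the tip whose chain has the greatest cumulative work. Header
    # parsing, proof-of-work checks and chain linkage are omitted here.
    def work_from_bits(bits: int) -> int:
        exponent = bits >> 24
        mantissa = bits & 0x007FFFFF
        target = mantissa * (1 << (8 * (exponent - 3)))
        return (1 << 256) // (target + 1)

    def best_chain(chains):
        """chains: candidate header chains, each a list of dicts with a 'bits'
        field. Returns the one with the highest total work (the SPV rule)."""
        return max(chains, key=lambda headers: sum(work_from_bits(h["bits"])
                                                   for h in headers))
    # During a reorg a heavier-work fork simply wins this comparison and the
    # wallet re-evaluates its transactions against the new best chain.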

Contrast this with other lightweight wallets which either don't examine the
block chain or implement the algorithm incorrectly, and I fail to see how
this can be described as "notoriously terrible".



 I understand that initially it was desirable that transactions be free…but
 surely even Satoshi understood this couldn’t be perpetually
 self-sustaining…and that the ability to bid for inclusion in blocks would
 eventually become a crucial component of the network. Or were fees just
 added for decoration?


Fees were added as a way to get money to miners in a fair and decentralised
way.

Attaching fees directly to all transactions is certainly one way to use
that, but it's not the only way. As noted above, our competitors prefer a
combination of price-hiding and cross subsidisation. Both of these can be
implemented with tx fees, but not necessarily by trying to artificially
limit supply, which is economically nonsensical.



 We’re already more than six years into this. When were these mechanisms
 going to be developed and tested? After 10 years? 20? Perhaps after 1024
 years? (https://github.com/bitcoin/bips/blob/master/bip-0042.mediawiki)


Maybe when there is a need? I already discussed this topic of need here:

https://medium.com/@octskyward/hashing-7d04a887acc8

Right. Turns out the ledger structure is terrible for constructing the
 kinds of proofs that are most important to validators - i.e. whether an
 output exists, what its script and amounts are, whether it’s been spent,
 etc…


Validators don't require proofs. That's why they are validators.

I think you're trying to say the block chain doesn't provide the kinds of
proofs that are most important to lightweight wallets. But I would
disagree. Even with UTXO commitments, there can still be double spends out
there in the network's memory pools that you are unaware of. Merely being
presented with a correctly signed transaction doesn't tell you a whole lot;
if you wait for a block, you get the same level of proof regardless
of whether there are UTXO commitments or not. If you don't, then you still
have to have some trust in your peers that you are seeing an accurate and
full view of network traffic.

So whilst there are ways to make the protocol incrementally better, when
you work through the use cases for these sorts of data structures and ask
"how will this impact the user experience?", the primary candidates so far
don't seem to make much difference.

Remote attestation from secure 

Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Mike Hearn via bitcoin-dev

 It's worth noting that even massive companies with $30M USD of funding
 don't run a single Bitcoin Core node


This has nothing to do with block sizes, and everything to do with Core not
directly providing the services businesses actually want.

The whole "node count is falling because of block sizes" claim is nothing more
than conjecture presented as fact. The existence of multiple companies who
could easily afford to do this but don't, because they perceive it as
valueless, should be a wake-up call there.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-22 Thread Mike Hearn via bitcoin-dev

 Until we’re able to merge blockchain forks like we’re able to merge git
 repo forks, the safest option is no fork.


Block chain forks merge in the same way as git forks all the time: that's
how the reorg algorithm works. Transactions that didn't make it into the
post-reorg chain go back into the mempool and miners attempt to reinclude
them: this is the "merge" process. If they now conflict with other
transactions they are dropped, and this is "resolving merge conflicts".
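
To make the analogy concrete, here is a toy sketch of that merge step; the
block/transaction attributes are an invented data model for illustration, not
how Bitcoin Core actually implements reorgs:

    # Toy sketch of the reorg "merge": transactions from the abandoned branch
    # go back into the mempool, then anything that conflicts with (spends the
    # same outputs as) the new branch is dropped. The .transactions/.inputs
    # attributes are an invented data model; real reorg handling also removes
    # transactions confirmed on the new branch, tracks dependencies, etc.
    def reorg_merge(old_branch_blocks, new_branch_blocks, mempool):
        spent_on_new_branch = {
            txin
            for block in new_branch_blocks
            for tx in block.transactions
            for txin in tx.inputs
        }
        # Step 1: disconnected transactions return to the mempool (the "merge").
        candidates = mempool + [tx for block in old_branch_blocks
                                for tx in block.transactions]
        # Step 2: drop conflicts ("resolving merge conflicts").
        return [tx for tx in candidates
                if not any(txin in spent_on_new_branch for txin in tx.inputs)]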

However you have to want to merge with the new chain. If your software is
programmed not to do that out of some bizarre belief that throttling your
own user base is a good idea, then of course, no merge happens. Once you
stop telling your computer to do that, you can then merge (reorg) back onto
the main chain again.


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-22 Thread Mike Hearn via bitcoin-dev
Hi Pieter,

I think a core area of disagreement is this:

 Bitcoin Core is not running the Bitcoin economy, and its developers have
 no authority to set its rules.

In fact Bitcoin Core is running the Bitcoin economy, and its developers do
have the authority to set its rules. This is enforced by the reality of
~100% market share and limited github commit access.

You may not like this situation, but it is what it is. By refusing to make
a release with different rules, people who disagree are faced with only two
options:

1. Swallow it even if they hate it
2. Fork the project and fork the block chain with it (XT)

There are no alternatives. People who object to (2) are inherently
suggesting (1) is the only acceptable path, which not surprisingly, makes a
lot of people very angry.


Re: [bitcoin-dev] Making Electrum more anonymous

2015-07-22 Thread Mike Hearn via bitcoin-dev

 One solution would be for the client to combine all the addresses they are
 interested in into a single bloom filter and send that to the server.


[snip extra ideas]

Hey Joseph,

All those ideas are ones we had years ago and they are implemented in the
current Bitcoin protocol.

The trick, as you may know, is this bit:

The client would also need to be fairly clever


It turns out making a sufficiently clever client to fool even advanced
observers is a lot of programming work, assuming you wish for the Ultimate
Solution which lets you allocate a desired quantity of bandwidth and then
use it to maximize privacy.
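
For context, the mechanism I'm referring to is BIP 37 bloom filtering, which
Bitcoin's P2P protocol already supports. A simplified sketch of the underlying
idea follows; it is not the BIP 37 wire format or hash construction, just an
illustration of how a tunable false-positive rate trades bandwidth against
privacy:

    # Simplified sketch of the idea behind BIP 37 filtering: the wallet inserts
    # the data it cares about into a bloom filter with a tunable false-positive
    # rate, trading bandwidth against privacy. This is NOT the BIP 37 wire
    # format (real filters use murmur3 with per-filter tweaks and a serialised
    # filterload message); it only illustrates the mechanism.
    import hashlib
    import math

    class SimpleBloom:
        def __init__(self, n_elements, fp_rate):
            # Standard bloom filter sizing formulas.
            self.n_bits = max(1, int(-n_elements * math.log(fp_rate)
                                     / (math.log(2) ** 2)))
            self.n_hashes = max(1, round(self.n_bits / n_elements * math.log(2)))
            self.bits = bytearray((self.n_bits + 7) // 8)

        def _positions(self, element: bytes):
            for i in range(self.n_hashes):
                digest = hashlib.sha256(bytes([i]) + element).digest()
                yield int.from_bytes(digest[:8], "big") % self.n_bits

        def add(self, element: bytes):
            for pos in self._positions(element):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def matches(self, element: bytes):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(element))

    # A wallet with 100 addresses and a 5% false-positive rate: a peer matching
    # transactions against the filter only learns the wallet's addresses
    # probabilistically, at the cost of some wasted bandwidth.
    f = SimpleBloom(n_elements=100, fp_rate=0.05)
    f.add(b"wallet-address-1")
    print(f.matches(b"wallet-address-1"), f.matches(b"someone-elses-address"))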


Re: [bitcoin-dev] QR code alternatives (was: Proposal: extend bip70 with OpenAlias)

2015-07-21 Thread Mike Hearn via bitcoin-dev
Thanks Clement!