Re: [bitcoin-dev] Anti-transaction replay in a hardfork

2017-01-26 Thread Gavin Andresen via bitcoin-dev
On Wed, Jan 25, 2017 at 10:29 PM, Matt Corallo via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> To maximize fork divergence, it might make sense to require this. Any
> sensible proposal for a hard fork would include a change to the sighash
> anyway, so might as well make it required, no?
>

Compatibility with existing transaction-signing software and hardware
should be considered.

I think any hard fork proposal should support a reasonable number of
reasonable-size old-sighash transactions, to allow a smooth transition of
wallet software and hardware and to support anybody who might have a
hardware wallet locked away in a safe deposit box for years.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Start time for BIP141 (segwit)

2016-10-16 Thread Gavin Andresen via bitcoin-dev
On Sun, Oct 16, 2016 at 10:58 AM, Tom Zander via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The fallow period sounds way too short. I suggest 2 months at minimum
> since anyone that wants to be safe needs to upgrade.
>

Over the last year or two, I have asked a lot of businesses and individuals
how long it would take them to upgrade to a new release.

Nobody said it would take them more than two weeks.

If somebody is running their own validation code... then we should assume
they're sophisticated enough to figure out how to mitigate any risks
associated with segwit activation on their own.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-09 Thread Gavin Andresen via bitcoin-dev
On Tue, Feb 9, 2016 at 8:59 AM, Yifu Guo  wrote:

>
> There are 406 nodes total that fall under the un-maintained category,
> which is below 10% of the network.
> Luke also has some data here that shows similar results.
> http://luke.dashjr.org/programs/bitcoin/files/charts/versions.txt
>

I love seeing data!  I was considering 0.10 nodes as 'unmaintained' because
it has been a long time since the 0.11 release.


>
> > The network could shrink by 60% and it would still have plenty of open
>> connection slots
>
>
> I'm afraid we have to agree to disagree if you think dropping support for
> 60% of the nodes on the network when rolling out an upgrade is the sane
> default.
>

That is my estimate of the worst-case-- not 'sane default.'

My point is that even if the number of nodes shrank by 60%, we would not
see any issues (SPV nodes would still have no problem finding a full node
to connect to, full nodes would not have any problem connecting to each
other, and we would not be significantly more vulnerable to Sybil attacks
or "governments get together and try to ban running a full node" attacks).



>
>> > People are committing to spinning up thousands of supports-2mb-nodes
>> during the grace period.
>
>
> thousands of nodes?! where did you get this figure? who are these people?
> *Please* elaborate.
>

There are over a thousand people subscribed to the Classic slack channel,
many of whom have privately told me they are willing and able to run an
extra node or three (or a hundred-and-eleven) once there is a final release.

I'm not going to name names, because
 a) these were private communications, and
 b) risk of death threats, extortion, doxxing, DoS attacks, etc.  Those
risks aren't theoretical, they are very real.

To be clear: I will discourage and publicly condemn anybody who runs
'pseudo nodes' or plans to spin up lots of nodes to try to influence the
debate. The only legitimate reason to run extra nodes is to fill in a
possible gap in total node count that might be caused by old, unmaintained
nodes that stop serving blocks because the rest of the network has upgraded.


> We could wait a year and pick up maybe 10 or 20% more.
>
>
> I don't understand this statement at all, please explicate.
>

The adoption curve for a new major release is exponential: lots of adoption
in the first 30 days or so, then it rapidly tapers off.  Given that
people's nodes will be alerting them that they must upgrade, and given that
every source of Bitcoin news will probably be covering the miner adoption
vote like it was a presidential election, I expect the adoption curve for
the 2mb bump to be steeper than we've ever seen.  So my best guess is
70-80% of nodes will upgrade within 30 days of the miner voting hitting 50%
of blocks and triggering the automatic 'version obsolete; upgrade required'
warning.

Wait a year, and my guess is you might reach another 10-20% (80 to
90-something percent).
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Gavin Andresen via bitcoin-dev
As I feared, the request for feedback on this specific BIP has devolved into a
general debate about the merits of soft-forks versus hard-forks (versus
semi-hard Kosher Free Range forks...).

I've replied to several people privately off-list to not waste people's
time rehashing arguments that have been argued to death in the past.

I do want to briefly address all of the concerns that stem from "what if a
significant fraction of hashpower (e.g. 25%) stick with the 1mb branch of
the chain."

Proof of work cannot be spoofed. If there is very little (a few percent) of
hashpower mining a minority chain, confirmations on that chain take orders
of magnitude longer.  I wrote about why the incentives are extremely strong
for only the stronger branch to survive here:
 http://gavinandresen.ninja/minority-branches

... the debate about whether or not that is correct doesn't belong here in
bitcoin-dev, in my humble opinion.

All of the security concerns I have seen flow from an assumption that
significant hashpower continues on the weaker branch. The BIP that is under
discussion assumes that analysis is correct. I have not seen any evidence
that it is not correct; all experience with previous forks (of both Bitcoin
and altcoins) is that the stronger branch survives and the weaker branch
very quickly dies.


As for the argument that creating and testing a patch for Core would take
longer than 28 days:

The glib answer is "people should just run Classic, then."

A less glib answer is that it would be trivial to create a patch for Core that
accepted a higher-proof-of-work chain containing larger blocks, but refused to
mine larger blocks itself.

Such a patch would be trivial and would require very little testing
(extensive testing of 8 and 20mb blocks has already been done), and perhaps
would be the best compromise until we can agree on a permanent solution
that eliminates the arbitrary, contentious limits.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-07 Thread Gavin Andresen via bitcoin-dev
On Sat, Feb 6, 2016 at 3:46 PM, Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Saturday, February 06, 2016 5:25:21 PM Tom Zander via bitcoin-dev wrote:
> > On Saturday, February 06, 2016 06:09:21 PM Jorge Timón via bitcoin-dev wrote:
> > > None of the reasons you list say anything about the fact that "being
> > > lost" (kicked out of the network) is a problem for those nodes' users.
> >
> > That's because it's not.
> >
> > If you have a node that is "old" your node will stop getting new blocks.
> > The node will essentially just say "x-hours behind" with "x" getting larger
> > every hour. Funds don't get confirmed. etc.
>
> Until someone decides to attack you. Then you'll get 6, 10, maybe more blocks
> confirming a large 1 BTC payment. If you're just a normal end user (or
> perhaps an automated system), you'll figure that payment is good and
> irreversibly hand over the title to the house.
>

There will be approximately zero hash power left on the
weaker branch of the fork, based on past soft-fork adoption by miners (they
upgrade VERY quickly from 75% to over 95%).

So it will take a week to get 6 confirmations.

If you are a full node, you are warned that your software is obsolete and
you must upgrade.

If you are a lightweight node, it SHOULD tell you something is wrong, but
even if it doesn't, given that people running lightweight nodes run them so
they don't have to be connected to the network 24/7, it is very likely that
during that week you will disconnect from and reconnect to the network several
times. Every time you do, you increase your chances of connecting to full
nodes on the majority branch of the chain, where you will be told about the
double-spend.

All of that is assuming that there is no OTHER mitigation done. DNS seeds
should avoid reporting nodes that look like they are in the middle of
initial block download (that are at a block height significantly behind the
rest of the network), for example.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-06 Thread Gavin Andresen via bitcoin-dev
On Sat, Feb 6, 2016 at 12:01 PM, Adam Back  wrote:

>
> It would probably be a good idea to have a security considerations
> section


Containing what?  I'm not aware of any security considerations that are any
different from any other consensus rules change.

(I can write a blog post summarizing our slack discussion of SPV security
immediately after the first greater-than-1mb-block if you like).



> , also, is there a list of which exchange, library, wallet,
> pool, stats server, hardware etc you have tested this change against?
>

That testing is happening by the exchange, library, wallet, etc providers
themselves. There is a list on the Classic home page:

https://bitcoinclassic.com/


>
> Do you have a rollback plan in the event the hard-fork triggers via
> false voting as seemed to be prevalent during XT?  (Or rollback just
> as contingency if something unforseen goes wrong).
>

The only voting in this BIP is done by the miners, and that cannot be faked.

Are you talking about people spinning up pseudo-full-nodes that fake the
user-agent?

As I said, there are people who have said they will spin up thousands of
full nodes to help prevent possible Sybil attacks which would become
marginally easier to accomplish immediately after the first >1mb block was
produced and full nodes that hadn't upgraded were left behind.

Would Blockstream be willing to help out by running a dozen or two extra
full nodes?

I can't imagine any even-remotely-likely sequence of events that would
require a rollback, can you be more specific about what you are imagining?
Miners suddenly getting cold feet?


> How do you plan to monitor and manage security through the hard-fork?
>

I don't plan to monitor or manage anything; the Bitcoin network is
self-monitoring and self-managing. Services like statoshi.info will do the
monitoring, and miners and people and businesses will manage the network,
as they do every day.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-06 Thread Gavin Andresen via bitcoin-dev
Responding to "28 days is not long enough" :

I keep seeing this claim made with no evidence to back it up.  As I said, I
surveyed several of the biggest infrastructure providers and the btcd lead
developer and they all agree "28 days is plenty of time."

For individuals... why would it take somebody longer than 28 days to either
download and restart their bitcoind, or to patch and then re-run (the patch
can be a one-line change of MAX_BLOCK_SIZE from 1 MB to 2 MB)?
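
For concreteness, a sketch of what that one-line change looks like against the
MAX_BLOCK_SIZE constant in the consensus code (the exact file and surrounding
code vary by Bitcoin Core version):

  // Old:  static const unsigned int MAX_BLOCK_SIZE = 1000000;   // 1 MB
  static const unsigned int MAX_BLOCK_SIZE = 2000000;            // 2 MB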

For the Bitcoin Core project:  I'm well aware of how long it takes to roll
out new binaries, and 28 days is plenty of time.

I suspect there ARE a significant percentage of un-maintained full nodes--
probably 30 to 40%. Losing those nodes will not be a problem, for three
reasons:
1) The network could shrink by 60% and it would still have plenty of open
connection slots
2) People are committing to spinning up thousands of supports-2mb-nodes
during the grace period.
3) We could wait a year and pick up maybe 10 or 20% more.

I strongly disagree with the statement that there is no cost to a longer
grace period. There is broad agreement that a capacity increase is needed
NOW.

To bring it back to bitcoin-dev territory:  are there any TECHNICAL
arguments why an upgrade would take a business or individual longer than 28
days?


Responding to Luke's message:

On Sat, Feb 6, 2016 at 1:12 AM, Luke Dashjr via bitcoin-dev wrote:
> > On Friday, February 05, 2016 8:51:08 PM Gavin Andresen via bitcoin-dev wrote:
> >> Blog post on a couple of the constants chosen:
> >>   http://gavinandresen.ninja/seventyfive-twentyeight
> >
> > Can you put this in the BIP's Rationale section (which appears to be
> > mis-named "Discussion" in the current draft)?
>

I'll rename the section and expand it a little. I think standards documents
like BIPs should be concise, though (written for implementors), so I'm not
going to recreate the entire blog post there.


> >
> >> Signature operations in un-executed branches of a Script are not counted
> >> OP_CHECKMULTISIG evaluations are counted accurately; if the signature for a
> >> 1-of-20 OP_CHECKMULTISIG is satisfied by the public key nearest the top
> >> of the execution stack, it is counted as one signature operation. If it is
> >> satisfied by the public key nearest the bottom of the execution stack, it
> >> is counted as twenty signature operations. Signature operations involving
> >> invalidly encoded signatures or public keys are not counted towards the
> >> limit
> >
> > These seem like they will break static analysis entirely. That was a noted
> > reason for creating BIP 16 to replace BIP 12. Is it no longer a concern? Would
> > it make sense to require scripts to commit to the total accurate-sigop count
> > to fix this?
>

After implementing static counting and accurate counting... I was wrong.
Accurate/dynamic counting/limiting is quick and simple and can be
completely safe (the counting code can be told the limit and can
"early-out" validation).

I think making scripts commit to a total accurate sigop count is a bad
idea-- it would make multisignature signing more complicated for zero
benefit.  E.g. if you're circulating a partially signed transaction that
must be signed by 2 of 5 people, you can end up with a transaction that
requires 2, 3, 4, or 5 signature operations to validate (depending on which
public keys are used to do the signing).  The first signer might have no
idea who else would sign and wouldn't know the accurate sigop count.


> >
> >> The amount of data hashed to compute signature hashes is limited to
> >> 1,300,000,000 bytes per block.
> >
> > The rationale for this wasn't in your blog post. I assume it's based on the
> > current theoretical max at 1 MB blocks? Even a high-end PC would probably take
> > 40-80 seconds just for the hashing, however - maybe a lower limit would be
> > best?
>

It is slightly more hashing than was required to validate block number
364,422.

There are a couple of advantages to a very high limit:

1) When the fork is over, special-case code for dealing with old blocks can
be eliminated, because all old blocks satisfy the new limit.

2) More importantly, if the limit is small enough that it might get hit by
standard transactions, then block creation code (CreateNewBlock() /
getblocktemplate / or some external transaction-assembling software) will
have to solve an even more complicated bin-packing problem to optimize for
fees paid.

In practice, the 20,000 sigop limit will always be reached before
MAX_BLOCK_SIGHASH.



> >
> >> Miners express their support for this BIP by ...
> >
> > But m

[bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-05 Thread Gavin Andresen via bitcoin-dev
This has been reviewed by merchants, miners and exchanges for a couple of
weeks, and has been implemented and tested as part of the Bitcoin Classic
and Bitcoin XT implementations.

Constructive feedback welcome; argument about whether or not it is a good
idea to roll out a hard fork now will be unproductive, so I vote we don't
go there.

Draft BIP:
  https://github.com/gavinandresen/bips/blob/bump2mb/bip-bump2mb.mediawiki

Summary:
  Increase block size limit to 2,000,000 bytes.
  After 75% hashpower support, then a 28-day grace period.
  With accurate sigop counting, but existing sigop limit (20,000)
  And a new, high limit on signature hashing

Blog post walking through the code:
  http://gavinandresen.ninja/a-guided-tour-of-the-2mb-fork

Blog post on a couple of the constants chosen:
  http://gavinandresen.ninja/seventyfive-twentyeight

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hardfork bit BIP

2016-02-04 Thread Gavin Andresen via bitcoin-dev
It is always possible I'm being dense, but I still don't understand how
this proposal makes a chain-forking situation better for anybody.

If there are SPV clients that don't pay attention to versions in block
headers, then setting the block version negative doesn't directly help
them, they will ignore it in any case.

If the worry is full nodes that are not upgraded, then a block with a
negative version number will, indeed, fork them off the chain, in
exactly the same way a block with new hard-forking consensus rules would.
And with the same consequences (if there is any hashpower not paying
attention, then a worthless minority chain might continue on with the old
rules).

If the worry is not-upgraded SPV clients connecting to the old,
not-upgraded full nodes, I don't see how this proposed BIP helps.

I think a much better idea than this proposed BIP would be a BIP that
recommends that SPV clients pay attention to block version numbers in
the headers that they download, and warn if there is a soft OR hard fork
that they don't know about.

It is also a very good idea for SPV clients to pay attention to timestamps
in the block headers that they receive, and to warn if blocks were generated
either much slower or faster than statistically likely. Doing that (as
Bitcoin Core already does) will mitigate Sybil attacks in general.
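
A rough sketch of the kind of check I mean (hypothetical, not Bitcoin Core's
actual warning code): compare the elapsed time covered by the most recent
headers against the 10-minute target and warn outside a generous band.

  #include <cstdint>
  #include <vector>

  // Warn if the last headers were produced much faster or slower than
  // statistically likely; the exact thresholds are a policy choice.
  bool HeadersLookStatisticallyPlausible(const std::vector<int64_t>& timestamps)
  {
      const size_t n = timestamps.size();
      if (n < 12) return true;                   // not enough data to judge

      double elapsed  = double(timestamps.back() - timestamps.front());
      double expected = 600.0 * double(n - 1);   // 10-minute target spacing
      return elapsed > expected / 3.0 && elapsed < expected * 3.0;
  }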

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Hardfork bit BIP

2016-02-04 Thread Gavin Andresen via bitcoin-dev
This BIP is unnecessary, in my opinion.

I'm going to take issue with items (2) and (3) that are the motivation for
this BIP:

" 2. Full nodes and SPV nodes following original consensus rules may not be
aware of the deployment of a hardfork. They may stick to an
economic-minority fork and unknowingly accept devalued legacy tokens."

If a hardfork is deployed by increasing the version number in blocks (as is
done for soft forks), then there is no risk-- Full and SPV nodes should
notice that they are seeing up-version blocks and warn the user that they
are using obsolete software.

It doesn't matter if the software is obsolete because of hard or soft fork,
the difference in risks between those two cases will not be understood by
the typical full node or SPV node user.

" 3. In the case which the original consensus rules are also valid under
the new consensus rules, users following the new chain may unexpectedly
reorg back to the original chain if it grows faster than the new one.
People may find their confirmed transactions becoming unconfirmed and lose
money."

If a hard or soft fork uses a 'grace period' (as described in BIP 9 or BIP
101) then there is essentially no risk that a reorg will happen past the
triggering block. A block-chain re-org of two thousand or more blocks on
the main Bitcoin chain is unthinkable-- the economic chaos would be
massive, and the reaction to such a drastic (and extremely unlikely) event
would certainly be a hastily imposed checkpoint to get everybody back onto
the chain that everybody was using for economic transactions.


Since I don't agree with the motivations for this BIP, I don't think the
proposed mechanism (a negative-version-number-block) is necessary. And
since it would simply add more consensus-level code, I believe the
keep-it-simple principle applies.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP Process: Status, comments, and copyright licenses

2016-02-02 Thread Gavin Andresen via bitcoin-dev
On Mon, Feb 1, 2016 at 5:53 PM, Luke Dashjr via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I've completed an initial draft of a BIP that provides clarifications on
> the
> Status field for BIPs, as well as adding the ability for public comments on
> them, and expanding the list of allowable BIP licenses.
>
>
> https://github.com/luke-jr/bips/blob/bip-biprevised/bip-biprevised.mediawiki
>
> I plan to open discussion of making this BIP an Active status (along with
> BIP
> 123) a month after initial revisions have completed. Please provide any
> objections now, so I can try to address them now and enable consensus to be
> reached.
>


I like the more concrete definitions of the various statuses.

I don't like the definition of "consensus".  I think the definition
described gives too much centralized control to whoever controls the
mailing list and the wiki.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Best (block nr % 2016) for hard fork activation?

2016-01-29 Thread Gavin Andresen via bitcoin-dev
On Thu, Jan 28, 2016 at 9:31 PM, Jannes Faber via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
>
> Question if you'll allow me. This is not about Gavin's latest hard fork
> proposal but in general about any hard (or soft) fork.
>
> I was surprised to see a period expressed in human time instead of in
> block time:
>
> > Blocks with timestamps greater than or equal to the triggering block's
> > timestamp plus 28 days (60*60*24*28 seconds) shall have the new limits.
>
>
Block timestamps are in the 80-byte block header, so activation is
completely deterministic and can be determined from just the sequence of
block headers. There are no edge cases to worry about.

> But even more so I would expect there to be significant differences in
> effects on non-updated clients depending on the moment (expressed as block
> number) of applying the new rules. I see a few options, all relating to the
> 2016 blocks recalibration window.
>

It doesn't matter much where in the difficulty period the fork happens; if
it happens in the middle, the lower-power fork's difficulty will adjust a
little quicker.

Example (check my math, I'm really good at screwing up basic arithmetic):

Fork exactly at a difficulty adjustment (block % 2016 == 0):  25% hashpower
will take 8 weeks to produce 2016 blocks, so difficulty drops by a factor of 4.

Fork one week (halfway) into a difficulty period:  25% hashpower will take
another 4 weeks to finish the period, so difficulty drops by 5/2 = 2.5.
It will then take another 3.2 weeks to get to the next difficulty adjustment
period and normal 10-minute blocks.
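
A back-of-the-envelope sketch of that arithmetic (illustrative only, not
consensus code), for a minority branch with 25% of the hashpower forking
halfway into a 2016-block difficulty period:

  #include <algorithm>
  #include <cstdio>

  int main() {
      const double share  = 0.25;  // minority hashpower fraction
      const double target = 2.0;   // weeks per 2016-block period at full speed

      // First half of the period was mined at full speed (1 week); the
      // remaining 1008 blocks are mined at 25% speed (4 more weeks).
      double actual = 1.0 + 1.0 / share;                 // 5 weeks
      double drop   = std::min(4.0, actual / target);    // difficulty / 2.5

      // Next full period: block rate is share * drop of normal.
      double nextPeriodWeeks = target / (share * drop);  // 3.2 weeks

      std::printf("difficulty drops by %.1f, next period takes %.1f weeks\n",
                  drop, nextPeriodWeeks);
      return 0;
  }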

That's an unrealistic scenario, though-- there will not be 25% of hash
power on a minority fork. I wrote about why in a blog post today:

http://gavinandresen.ninja/minority-branches

If you assume a more realistic single-digit-percentage of hash power on the
minority fork, then the numbers get silly (e.g. two or three months of an
hour or three between blocks before a difficulty adjustment).


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-12 Thread Gavin Andresen via bitcoin-dev
I'm convinced-- it is a good idea to worry about 80-bit collision attacks
now.

Thanks to all the people smarter than me who contributed to this
discussion, I learned a lot about collision attacks that I didn't know
before.

Would this be a reasonable "executive summary":

If you are agreeing to lock up funds with somebody else, and they control
what public key to use, you are susceptible to collision attacks.

It is very likely that an 80-bit-collision-in-ten-minutes attack will cost
less than $1 million within ten to twenty years (possibly sooner if there are
crypto breaks in that time).

If you don't trust the person with whom you're locking up funds and you're
locking up a significant amount of money (tens of millions of dollars
today, tens of thousands of dollars in a few years):

Then you should avoid using pay-to-script-hash addresses and instead use
the payment protocol and "raw" multisig outputs.

AND/OR

Have them give you a hierarchical deterministic (BIP32) seed, and derive a
public key for them to use.


--

Following the "security in depth" and "validate all input" secure coding
principles would mean doing both-- avoid p2sh AND have all parties to a
transaction exchange HD seeds, add randomness, and use the resulting public
keys in the transaction.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Gavin Andresen via bitcoin-dev
And to fend off the message that I bet somebody is composing right now:

Yes, I know about a "security first" mindset.  But as I said earlier in the
thread, there is a tradeoff here between crypto strength and code
complexity, and "the strength of the crypto is all that matters" is NOT
security first.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Gavin Andresen via bitcoin-dev
On Fri, Jan 8, 2016 at 10:46 AM, Gavin Andresen 
wrote:

> And Ethan or Anthony:  can you think of a similar attack scheme if you
> assume we had switched to Schnorr 2-of-2 signatures by then?


Don't answer that, I was being dense again, Anthony's scheme works with
Schnorr...


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Gavin Andresen via bitcoin-dev
On Fri, Jan 8, 2016 at 10:50 AM, Gavin Andresen 
wrote:

> But as I said earlier in the thread, there is a tradeoff here between
> crypto strength and code complexity, and "the strength of the crypto is all
> that matters" is NOT security first.


I should be more explicit about code complexity:

The big picture is "segwitness will help scale in the very short term."

So the spec gives two ways of stuffing the segwitness hash into the
scriptPubKey -- one way that uses a 32-byte hash, but if used would actually
make scalability a bit worse as coins moved into segwitness-locked
transactions (DUP HASH160 EQUALVERIFY pay-to-script-hash scriptpubkeys are
just 24 bytes).

And another way that adds just one byte to the scriptPubKey.

THAT is the code complexity I'm talking about.  Better to always move the
script into the witness data, in my opinion, on the "keep the design as
simple as possible" principle.

It could be a 32-byte hash... but then the short-term scalability goal is
compromised.

Maybe I'm being dense, but I still think it is a no-brainer.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Gavin Andresen via bitcoin-dev
Thanks, Anthony, that works!

So...

How many years until we think a 2^84 attack where the work is an ECDSA
private->public key derivation will take a reasonable amount of time?

And Ethan or Anthony:  can you think of a similar attack scheme if you
assume we had switched to Schnorr 2-of-2 signatures by then?


And to everybody who might not be reading this closely:  All of the above
is discussing collision attacks; none of it is relevant in the normal case
where your wallet generates the scriptPubKey.



-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Gavin Andresen via bitcoin-dev
On Fri, Jan 8, 2016 at 7:02 AM, Rusty Russell  wrote:

> Matt Corallo  writes:
> > Indeed, anything which uses P2SH is obviously vulnerable if there is
> > an attack on RIPEMD160 which reduces its security only marginally.
>
> I don't think this is true?  Even if you can generate a collision in
> RIPEMD160, that doesn't help you since you need to create a specific
> SHA256 hash for the RIPEMD160 preimage.
>
> Even a preimage attack only helps if it leads to more than one preimage
> fairly cheaply; that would make grinding out the SHA256 preimage easier.
> AFAICT even MD4 isn't this broken.
>

It feels like we've gone over that before, but I can never remember where
or when. I believe consensus was that if we were using the broken MD5 in
all the places we use RIPEMD160 we'd still be secure today because of
Satoshi's use of nested hash functions everywhere.


> But just with Moore's law (doubling every 18 months), we'll worry about
> economically viable attacks in 20 years.[1]


> That's far enough away that I would choose simplicity, and have all SW
> scriptPubKeys simply be "<0> RIPEMD(SHA256(WP))" for now, but it's
> not a no-brainer.


Lets see if I've followed the specifics of the collision attack correctly,
Ethan (or somebody) please let me know if I'm missing something:

So attacker is in the middle of establishing a payment channel with
somebody. Victim gives their public key, attacker creates the innocent
fund-locking script  '2 V A 2 CHECKMULTISIG' (V is victim's public key, A
is attacker's) but doesn't give it to the victim yet.

Instead, they then generate about 2^81 scripts that are some form of
pay-to-attacker
... wait, no, that doesn't work, because SHA256 is used as the inner hash
function.  They'd have to do about 2^129 work to find a cycle in SHA256.

Instead, they .. what? I don't see a viable attack unless RIPEMD160 and
SHA256 (or the combination) suffers a cryptographic break.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-07 Thread Gavin Andresen via bitcoin-dev
On Thu, Jan 7, 2016 at 8:26 PM, Matt Corallo 
wrote:

> So just because other attacks are possible we should weaken the crypto
> we use? You may feel comfortable weakening crypto used to protect a few
> billion dollars of other peoples' money, but I dont.
>

No...

I'm saying we can eliminate one somewhat unlikely attack (that there is a
bug in the code or test cases, today or some future version, that has to
decide what to do with "version 0" versus "version 1" witness programs) by
accepting the risk of another insanely, extremely unlikely attack.

Reference for those who are lost:

https://github.com/CodeShark/bips/blob/segwit/bip-codeshark-jl2012-segwit.mediawiki#witness-program

My proposal would be to just do a version 0 witness program now, that is
RIPEMD160(SHA256(script)).

And ten or twenty years from now, if there is a plausible attack on
RIPEMD160 and/or SHA256, revisit and do a version 11 (or whatever).

It will simplify the BIP, mean half as many test cases have to be written,
give a little more scalability, and be as secure as the P2SH and P2PKH
everybody is using to secure their bitcoin today.

Tell you what:  I'll change my mind if anybody can describe a plausible
attack if we were using MD5(SHA256), given what we know about how MD5 is
broken.


---

I'm really disappointed with the "Here's the spec, take it or leave it"
attitude. What's the point of having a BIP process if the discussion just
comes down to "We think more is better. We don't care what you think."

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-07 Thread Gavin Andresen via bitcoin-dev
On Thu, Jan 7, 2016 at 6:52 PM, Pieter Wuille 
wrote:

> Bitcoin does have parts that rely on economic arguments for security or
> privacy, but can we please stick to using cryptography that is up to par
> for parts where we can? It's a small constant factor of data, and it
> categorically removes the worry about security levels.
>
Our messages may have crossed in the mod queue:

"So can we quantify the incremental increase in security of SHA256(SHA256)
over RIPEMD160(SHA256) versus the incremental increase in security of
having a simpler implementation of segwitness?"

I believe the history of computer security is that implementation errors
and sidechannel attacks are much, much more common than brute-force breaks.
KEEP IT SIMPLE.

(and a quibble:  "do a 80-bit search for B and C such that H(A and B) = H(B
and C)"  isn't enough, you have to end up with a C public key for which you
know the corresponding private key or the attacker just succeeds in burning
the funds)


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-07 Thread Gavin Andresen via bitcoin-dev
Thanks, Ethan, that's helpful and I'll stop thinking that collision attacks
require 2^(n/2) memory...

So can we quantify the incremental increase in security of SHA256(SHA256)
over RIPEMD160(SHA256) versus the incremental increase in security of
having a simpler implementation of segwitness?

I'm going to claim that the difference in the first case is very, very,
very small-- the risk of an implementation error caused by having multiple
ways of interpreting the segwitness hash in the scriptPubKey is much, much
greater.

And even if there IS some risk of collision attack now or at some point in
the future, I claim that it is easy for wallets to mitigate that risk. In
fact, the principle of security in depth means wallets that don't
completely control the scriptPubKeys they're creating on behalf of users
SHOULD be coded to mitigate that risk (e.g. not allowing arbitrary data
around a user's public key in a Script so targeted substring attacks are
eliminated entirely).

Purely from a security point of view, I think a single 20-byte segwitness
in the scriptPubKey is the best design.
"Keep the design as simple and small as possible"
https://www.securecoding.cert.org/confluence/plugins/servlet/mobile#content/view/2426

Add in the implied capacity increase of smaller scriptPubKeys and I still
think it is a no-brainer.


On Thu, Jan 7, 2016 at 5:56 PM, Ethan Heilman  wrote:

> >Ethan:  your algorithm will find two arbitrary values that collide. That
> isn't useful as an attack in the context we're talking about here (both of
> those values will be useless as coin destinations with overwhelming
> probability).
>
> I'm not sure exactly the properties you want here and determining
> these properties is not an easy task, but the case is far worse than
> just two random values. For instance: (a). with a small modification
> my algorithm can also find collisions containing targeted substrings,
> (b). length extension attacks are possible with RIPEMD160.
>
> (a). targeted cycles:
>
> target1 = "str to prepend"
> target2 = "str to end with"
>
> seed = {0,1}^160
> x = hash(seed)
>
> for i in 2^80:
>     x = hash(target1||x||target2)
> x_final = x
>
> y = hash(target1||x_final||target2)
>
> for j in 2^80:
>     if y == x_final:
>         print "cycle len: "+j
>         break
>     y = hash(target1||y||target2)
>
> If a collision is found, the two colliding inputs must both start with
> "str to prepend" and end with the phrase "str to end with". As before
> this only requires 2^81.5 computations and no real memory. For an
> additional 2**80 an adversary has a good chance of finding two
> different targeted substrings which collide. Consider the case where
> the attacker mixes the targeted strings with the hash output:
>
> hash("my name is=0x329482039483204324423"+x[1]+", my favorite number
> is="+x) where x[1] is the first bit of x.
>
> (b). length extension attacks
>
> Even if all the adversary can do is create two random values that
> collide, you can append substrings to the input and get collisions.
> Once you find two random values hash(x) = hash(y), you could use a
> length extension attack on RIPEMD-160 to find hash(x||z) = hash(y||z).
>
> Now the bitcoin wiki says:
> "The padding scheme is identical to MD4 using Merkle–Damgård
> strengthening to prevent length extension attacks."[1]
>
> Which is confusing to me because:
>
> 1. MD4 is vulnerable to length extension attacks
> 2. Merkle–Damgård strengthening does not protect against length
> extension: "Indeed, we already pointed out that none of the 64
> variants above can withstand the 'extension' attack on the MAC
> application, even with the Merkle-Damgard strengthening" [2]
> 3. RIPEMD-160 is vulnerable to length extension attacks, is Bitcoin
> using a non-standard version of RIPEMD-160.
>
> RIPEMD160(SHA256()) does not protect against length extension attacks
> on SHA256, but should protect RIPEMD-160 against length extension
> attacks as RIPEMD-160 uses 512-bit message blocks. That being said we
> should be very careful here. Research has been done that shows that
> cascading the same hash function twice is weaker than using HMAC[3]. I
> can't find results on cascading RIPEMD160(SHA256()).
>
> RIPEMD160(SHA256()) seems better than RIPEMD160() though, but security
> should not rest on the notion that an attacker requires 2**80 memory,
> many targeted collision attacks can work without much memory.
>
> [1]: https://en.bitcoin.it/wiki/RIPEMD-160
> [2]: "Merkle-Damgard Revisited: How to Construct a Hash Function"
> https://www.cs.nyu.edu/~puniya/papers/merkle.pdf
> [

Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-07 Thread Gavin Andresen via bitcoin-dev
Maybe I'm asking this question on the wrong mailing list:

Matt/Adam: do you have some reason to think that RIPEMD160 will be broken
before SHA256?
And do you have some reason to think that they will be so broken that the
nested hash construction RIPEMD160(SHA256()) will be vulnerable?

Adam: re: "where to stop"  :  I'm suggesting we stop exactly at the current
status quo, where we use RIPEMD160 for P2SH and P2PKH.

Ethan:  your algorithm will find two arbitrary values that collide. That
isn't useful as an attack in the context we're talking about here (both of
those values will be useless as coin destinations with overwhelming
probability).

Dave: you described a first preimage attack, which is 2**160 cpu time and
no storage.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-11 Thread Gavin Andresen via bitcoin-dev
On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón  wrote:

> This is basically what I meant by
>
> struct hashRootStruct
> {
> uint256 hashMerkleRoot;
> uint256 hashWitnessesRoot;
> uint256 hashextendedHeader;
> }
>
> but my design doesn't calculate other_root as it appears in your tree (is
> not necessary).
>

It is necessary to maintain compatibility with SPV nodes/wallets.

Any code that just checks merkle paths up into the block header would have
to change if the structure of the merkle tree changed to be three-headed at
the top.

If it remains a binary tree, then it doesn't need to change at all-- the
code that produces the merkle paths will just send a path that is one step
deeper.

Plus, it's just weird to have a merkle tree that isn't a binary tree.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Gavin Andresen via bitcoin-dev
On Wed, Dec 9, 2015 at 3:03 AM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I think it would be logical to do as part of a hardfork that moved
> commitments generally; e.g. a better position for merged mining (such
> a hardfork was suggested in 2010 as something that could be done if
> merged mining was used), room for commitments to additional block
> back-references for compact SPV proofs, and/or UTXO set commitments.
> Part of the reason to not do it now is that the requirements for the
> other things that would be there are not yet well defined. For these
> other applications, the additional overhead is actually fairly
> meaningful; unlike the fraud proofs.
>

So just design ahead for those future uses. Make the merkle tree:


              root_in_block_header
               /                \
     tx_data_root             other_root
                             /          \
               segwitness_root        reserved_for_future_use_root

... where reserved_for_future_use is zero until some future block version
(or perhaps better, is just chosen arbitrarily by the miner and sent along
with the block data until some future block version).

That would minimize future disruption of any code that produced or consumed
merkle proofs of the transaction data or segwitness data, especially if the
reserved_for_future_use_root is allowed to be any arbitrary 256-bit value
and not a constant that would get hard-coded into segwitness-proof-checking
code.
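
A minimal sketch of that layout (hypothetical code; Hash256 stands in for
Bitcoin's double-SHA256 of two concatenated child hashes, and the names come
from the diagram above, not from Core):

  #include <string>

  static std::string Hash256(const std::string& l, const std::string& r)
  {
      return "H(" + l + "," + r + ")";   // placeholder, NOT a real hash
  }

  std::string BlockHeaderMerkleRoot(const std::string& tx_data_root,
                                    const std::string& segwitness_root,
                                    const std::string& reserved_root)
  {
      // other_root pairs the segwitness commitment with an arbitrary,
      // miner-chosen value reserved for future commitments.
      std::string other_root = Hash256(segwitness_root, reserved_root);
      // The header still commits to a binary tree, so existing merkle-path
      // code only has to produce paths that are one step deeper.
      return Hash256(tx_data_root, other_root);
  }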


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gavin Andresen via bitcoin-dev
On Tue, Dec 8, 2015 at 6:59 PM, Gregory Maxwell  wrote:

> > We also need to fix the O(n^2) sighash problem as an additional BIP for
> ANY
> > blocksize increase.
>
> The witness data is never an input to sighash, so no, I don't agree
> that this holds for "any" increase.
>

Here's the attack:

Create a 1-megabyte transaction, with all of its inputs spending
segwitness-spending SIGHASH_ALL inputs.

Because the segwitness inputs are smaller in the block, you can fit more of
them into 1 megabyte. Each will hash very close to one megabyte of data.

That will be O(n^2) worse than the worst case of a 1-megabyte transaction
with signatures in the scriptSigs.

Did I misunderstand something or miss something about the 1-mb transaction
data and 3-mb segwitness data proposal that would make this attack not
possible?

RE: fraud proof data being deterministic:  yes, I see, the data can be
computed instead of broadcast with the block.

RE: emerging consensus of Core:

I think it is a huge mistake not to "design for success" (see
http://gavinandresen.ninja/designing-for-success ).

I think it is a huge mistake to pile on technical debt in
consensus-critical code. I think we should be working harder to make things
simpler, not more complex, whenever possible.

And I think there are pretty big self-inflicted current problems because
worries about theoretical future problems have prevented us from coming to
consensus on simple solutions.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gavin Andresen via bitcoin-dev
Thanks for laying out a road-map, Greg.

I'll need to think about it some more, but just a couple of initial
reactions:

Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the
coinbase is messy and will just complicate consensus-critical code (as
opposed to making the right side of the merkle tree in block.version=5
blocks the segwitness data).

It will also make any segwitness fraud proofs significantly larger (a merkle
path, versus a merkle path to the coinbase transaction, plus the ENTIRE
coinbase transaction, which might be quite large, plus the merkle path up to
the root).


We also need to fix the O(n^2) sighash problem as an additional BIP for ANY
blocksize increase. That also argues for a hard fork-- it is much easier to
fix it correctly and simplify the consensus code than to continue to apply
band-aid fixes on top of something fundamentally broken.


Segwitness will require a hard or soft-fork rollout, then a significant
fraction of the transaction-producing wallets to upgrade and start
supporting segwitness-style transactions.  I think it will be much quicker
than the P2SH rollout, because the biggest transaction producers have a
strong motivation to lower their fees, and it won't require a new type of
bitcoin address to fund wallets.  But it still feels like it'll be six
months to a year at the earliest before any relief from the current
problems we're seeing from blocks filling up.

Segwitness will make the current bottleneck (block propagation) a little
worse in the short term, because of the extra fraud-proof data.  Benefits
well worth the costs.

--

I think a barrier to quickly getting consensus might be a fundamental
difference of opinion on this:
   "Even without them I believe we’ll be in an acceptable position with
respect to capacity in the near term"

The heaviest users of the Bitcoin network (businesses who generate tens of
thousands of transactions per day on behalf of their customers) would
strongly disagree; the current state of affairs is NOT acceptable to them.



-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Blockchain verification flag (BIP draft)

2015-12-04 Thread Gavin Andresen via bitcoin-dev
Overall, good idea.

Is there a write-up somewhere describing in detail the 'accidental selfish
mining' problem that this mitigates? I think a link in the BIP to a fuller
description of the problem and how validation-skipping makes it go away
would be helpful.

RE: which bit to use:  the draft versionbits BIP and BIP101 use bit 30; to
avoid confusion, I think it would be better to use bit 0.

I agree with Jannes Faber: behavior with respect to SPV clients should be to
only tell them about fully validated headers. And I also agree that
immediately relaying full-proof-of-work blocks before validation (with an
indication that they haven't been fully validated) is a good idea, but that
discussion didn't reach consensus when I brought it up two years ago (
https://github.com/bitcoin/bitcoin/pull/3580).


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Draft] Datastream compression of Blocks and Transactions

2015-12-03 Thread Gavin Andresen via bitcoin-dev
On Wed, Dec 2, 2015 at 1:57 PM, Emin Gün Sirer <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> How to Do It
>
> If we want to compress Bitcoin, a programming challenge/contest would be
> one of the best ways to find the best possible, Bitcoin-specific
> compressor. This is the kind of self-contained exercise that bright young
> hackers love to tackle. It'd bring in new programmers into the ecosystem,
> and many of us would love to discover the limits of compressibility for
> Bitcoin bits on a wire. And the results would be interesting even if the
> final compression engine is not enabled by default, or not even merged.
>

I love this idea. Let's build a standardized data set to test against using
real data from the network (has anybody done this yet?).

Something like:

Starting network topology:
list of:  nodeid, nodeid, network latency between the two peers

Changes to network topology:
list of:  nodeid, add/remove nodeid, time of change

Transaction broadcasts:
list of :  transaction, node id that first broadcast, time first broadcast

Block broadcasts:
list of :  block, node id that first broadcast, time first broadcast

Proposed transaction/block optimizations could then be measured against
this standard data set.
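
A hypothetical schema for such a data set (the names are illustrative, not an
existing format):

  #include <cstdint>
  #include <string>
  #include <vector>

  struct LatencyEdge    { std::string nodeA, nodeB; double latencyMs; };
  struct TopologyChange { std::string node, peer; bool added; int64_t time; };
  struct TxBroadcast    { std::vector<uint8_t> tx;    std::string firstNode; int64_t time; };
  struct BlockBroadcast { std::vector<uint8_t> block; std::string firstNode; int64_t time; };

  struct CompressionBenchmark {
      std::vector<LatencyEdge>    startingTopology;
      std::vector<TopologyChange> topologyChanges;
      std::vector<TxBroadcast>    transactions;
      std::vector<BlockBroadcast> blocks;
  };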


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

2015-11-24 Thread Gavin Andresen via bitcoin-dev
On Tue, Nov 24, 2015 at 12:34 PM, Chris Priest via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> The technical reason for this is that you have to explicitly list each
> UTXO individually when making bitcoin transactions. There is no way to
> say "all the utxos". This post describes a way to achieve this. I'm
> not yet a bitcoin master, so there are parts of this proposal that I
> have not yet figured out entirely, but I'm sure other people who know
> more could help out.
>

So every input has:
 32-byte hash (transaction being spent)
 4-byte output index (which output is being spent)
 4-byte sequence number
... plus the scriptSig, which is as small as about 73 bytes if you're
spending a raw OP_CHECKSIG output (which you can't do as a bitcoin address, but
could via the BIP70 payment protocol), and which is at least two serialized
bytes.

Best case for any scheme to coalesce scriptSigs would be to somehow make
all-but-the-first scriptSig zero-length, so the inputs would be 42 bytes
instead of 40+73 bytes -- the coalesce transaction would be about one-third
the size, so instead of paying (say) $1 in transaction fees you'd pay 37
cents.
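
The arithmetic, as a quick sketch (the byte counts are the rough figures from
this post, not exact serialization rules):

  #include <cstdio>

  int main() {
      const int outpointAndSequence = 40;  // 32-byte txid + 4-byte index + 4-byte sequence
      const int typicalScriptSig    = 73;  // roughly, for a raw OP_CHECKSIG spend
      const int coalescedInput      = 42;  // outpoint/sequence plus ~2 serialized bytes

      double ratio = double(coalescedInput) / (outpointAndSequence + typicalScriptSig);
      std::printf("coalesced inputs are ~%.0f%% of normal size\n", ratio * 100);
      return 0;
  }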

That's in the gray area of the "worth doing" threshold-- if it was a 10x
improvement (pay 10 cents instead of $1) it'd be in my personal "definitely
worth the trouble of doing" category.

RE: the scheme:  an OP_RINGSIGVERIFY is probably the right way to do this:
  https://en.wikipedia.org/wiki/Ring_signature

The funding transactions would be:   OP_RINGSIGVERIFY
... which might be redeemed with  for one input and
then... uhh... maybe just  for the other
inputs that are part of the same ring signature group (OP_0 if the first
input has the signature that is good for all the other public keys, which
would be the common case).

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] summarising security assumptions (re cost metrics)

2015-11-09 Thread Gavin Andresen via bitcoin-dev
On Sun, Nov 8, 2015 at 12:19 PM, Bryan Bishop  wrote:

> Gavin, could you please provide some clarity around the definition and
> meaning of "key-holder [decentralization]"? Is this about the absolute
> number of key-holders? or rather about the number of transactions (per unit
> time?) that key-holders make? Both/other?
>

Both.  If few transactions are possible, then that limits the number of
key-holders who can participate in the system.

Imagine the max block size was really small, and stretch your imagination
and just assume there would be enough demand that those small number of
transactions pay enough transaction fees to secure the network. Each
transaction must, therefore, pay a high fee. That limits the number of
keyholders to institutions with very-large-value transactions-- it is the
"Bitcoin as a clearing network for big financial players" model.

Using the Lightning Network doesn't help, since every Lightning Network
transaction IS a set of Bitcoin transactions, ready to be dropped onto the
main chain. If those Lightning Network transactions don't have enough fees,
then the whole security of the Lightning Protocol falls apart (since it
relies on being able to get timelocked transactions confirmed on the main
chain in case your trading partner cheats).

There is video of the Poon/Dryja talk:
https://youtu.be/TgjrS-BPWDQ?t=41m58s

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] summarising security assumptions (re cost metrics)

2015-11-08 Thread Gavin Andresen via bitcoin-dev
On Thu, Nov 5, 2015 at 11:03 PM, Adam Back via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Some thoughts, hope this is not off-topic.
>
> Maybe we should summarise the security assumptions and design
> requirements.  It is often easier to have clear design discussions by
> first articulating assumptions and requirements.
>
> Validators: Economically dependent full nodes are an important part of
> Bitcoin's security model because they assure Bitcoin security by
> enforcing consensus rules.  While full nodes do not have orphan
> risk, we also dont want maliciously crafted blocks with pathological
> validation cost to erode security by knocking reasonable spec full
> nodes off the network on CPU (or bandwidth grounds).
>

Agreed. That is why BIP101 / BitcoinXT includes code to limit the relay and
validation cost of blocks.


>
> Miners: Miners are in a commodity economics competitive environment
> where various types of attacks and collusion, even with small
> advantage, may see actual use due to the advantage being significant
> relative to the at times low profit margin
>

Agreed, with a quibble: mining economics means they will ALWAYS have a low
profit margin.


>
> It is quite important for bitcoin decentralisation security that small
> miners not be significantly disadvantaged vs large miners.  Similarly
> it is important that there not be significant collusion advantages
> that create policy centralisation as a side-effect (for example what
> happened with "SPV mining" or validationless mining during BIP66
> deployment).  Examples of attacks include selfish-mining and
> amplifying that kind of attack via artificially large or
> pathologically expensive to validate blocks.  Or elevating orphan risk
> for others (a miner or collusion of miners is not at orphan risk for a
> block they created).
>

Okey dokey-- perhaps we should have another discussion about SPV mining; as
far as I know, it harmed nobody besides the miners who mindlessly created
invalid, empty blocks (well, and besides being very annoying for developers
who had to figure out what was happening and get the offending miners to do
the right thing).

In any case, it seems to me all of this (except perhaps selfish mining) is
independent of the maximum block size, and solutions for all of the above
(including selfish mining) should be pursued regardless of what is done
with the max block size (e.g. I sent Ittay and Gün email a few minutes ago
with some might-be-wrong ideas for how weak block announcements might be
used to detect selfish mining).


>
> Validators vs Miner decentralisation balance:
>
> There is a tradeoff where we can tolerate weak miner decentralisation
> if we can rely on good validator decentralisation or vice versa.  But
> both being weak is risky.  Currently given mining centralisation
> itself is weak, that makes validator decentralisation a critical
> remaining defence - ie security depends more on validator
> decentralisation than it would if mining decentralisation was in a
> better shape.
>

I'm very disappointed you don't mention the tradeoff at "the other end of
the bathtub" -- Key-holder versus Validator decentralization balance. Did
you see the excellent Poon/Dryja "bathtub" presentation at Montreal?

https://scalingbitcoin.org/montreal2015/presentations/Day2/3-JosephPoonAndThaddeusDryja.pdf

> Security:
>
> We should consider the pathological case not average or default behaviour
> because we can not assume people will follow the defaults, only the
> consensus-enforced rules.
>

Agreed, which is why BIP101/XT consider pathological behavior.


>
> We should not discount attacks that have not seen exploitation to
> date.  We have maybe benefitted from universal good-will (everybody
> thinks Bitcoin is cool, particularly people with skills to find and
> exploit attacks).
>

Disagree on wording: we should not ignore attacks that have not seen
exploitation. But in the never-ending list of things to be worried about
and to write code for, attacks that have not been seen should be lower
priority than attacks that have been seen, either in Bitcoin or elsewhere.

E.g. Bitcoin has never seen a buffer-overflow attack, but we absolutely
positively need to put a very high priority on the network attack surface
-- we know buffer-overflow attacks are commonly exploited.

On the other hand, Bitcoin has never seen a "Goldfinger attack" (take a big
short position on Bitcoin, then find a way to destroy confidence so the
price drops and you can profit), and "Goldfinger attacks" don't seem to be
common anywhere (you don't see people taking huge short positions in
companies and then bombing their factories). There might be a reason
Bitcoin is more vulnerable, or the same checks-and-balances might apply
(e.g. whoever took the other side of the large short has a strong incentive
to report you, and assuming you got paid in something other than Bitcoin,
identifying you is probably possible).
  (Aside: anybody who wants to talk about the likelihood of 

Re: [bitcoin-dev] A validation-cost metric for aggregate limits and fee determination

2015-11-05 Thread Gavin Andresen via bitcoin-dev
I have several thoughts:

Weighing CPU validation cost should be reasonably straightforward-- just
pick some arbitrary, commonly-available, recent hardware and then benchmark
the two things that take the bulk of validation time (hashing to create the
signature hash, then ECDSA validation), and weigh the terms in the
validation cost equation appropriately (e.g. if hashing X bytes of data takes
the same amount of CPU time as one libsecp256k1 validation, count the CPU cost
of an OP_CHECKSIG as 1 + actual_bytes_hashed/X).
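
To make that weighting concrete, here is a rough Python sketch; the
benchmark constant stands in for "X" above and is a made-up placeholder,
not a measurement:

    # Made-up placeholder: assume one libsecp256k1 signature validation
    # costs as much CPU time as hashing this many bytes on the reference
    # hardware (the real value would come from benchmarking).
    BYTES_HASHED_PER_SIG_VALIDATION = 500_000

    def checksig_cpu_cost(bytes_hashed_for_sighash):
        """CPU cost of one OP_CHECKSIG, in units of 'signature validations'."""
        return 1.0 + bytes_hashed_for_sighash / BYTES_HASHED_PER_SIG_VALIDATION

    # Example: a signature check that hashes 250,000 bytes of transaction
    # data costs 1.5 signature-validation-equivalents.
    print(checksig_cpu_cost(250_000))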

But how should bandwidth cost be counted? There isn't an obvious "Y GB of
bandwidth-per-month equals 1 ECDSA validation" equivalence. We need to find
common units for the terms in the validation cost equation for it to make
sense; otherwise we're adding apples and oranges.

I think the only units that will work is "percentage of maximum validation
ability for some reference hardware running with a network connection
capable of some reference bandwidth."

For example, imagine the reference was the typical home computer being sold
today running with some multiple or fraction of the average global
broadband connection speed of 5Mbps. CPU cost to validate a block can then
be expressed as a percentage of maximum capacity, as can bandwidth--
hooray, two metrics with the same units, so they can be added up.  If the
result is less than 100%, then the block is valid-- it can be received and
validated in a reasonable amount of time.


Rolling in UTXO growth is harder, for two reasons:
1) UTXO changes per block can be negative or positive, as opposed to
bandwidth/CPU costs.
2) It is not clear how to choose or benchmark "reference UTXO growth"

(1) could be finessed to just treat UTXO shrinkage as zero.
(2) could just be decided by picking a reasonable growth number. Since we
want the UTXO set to fit into main memory, something a bit below the
long-ish term price/performance trend of main memory would be a good target.

So, starting with that growth rate and an initial UTXO size in bytes,
divide by the number of blocks in a year to get a maximum UTXO growth in
bytes per block.

When validating a block, take the actual UTXO growth, express it as a
percentage of the maximum allowed (make it zero if it is negative), and
combine with the CPU and bandwidth percentages.

If the total is less than 100%, block is valid. Otherwise, invalid.
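
Putting the three terms together, a toy Python sketch of the combined check
might look like the following; every reference capacity is an arbitrary
number chosen for illustration, not a proposed consensus value:

    # Express CPU, bandwidth and UTXO-growth costs as fractions of
    # reference capacities, add them up, and require the total to stay
    # at or below 100%.
    REF_CPU_PER_BLOCK = 80_000.0          # signature-validation-equivalents
    REF_BYTES_PER_BLOCK = 8_000_000.0     # bytes relayed per block interval
    REF_UTXO_GROWTH_PER_BLOCK = 20_000.0  # bytes of allowed UTXO growth

    def validation_cost(cpu_cost, block_bytes, utxo_growth_bytes):
        cpu_pct = cpu_cost / REF_CPU_PER_BLOCK
        bandwidth_pct = block_bytes / REF_BYTES_PER_BLOCK
        # UTXO shrinkage is treated as zero, as described above.
        utxo_pct = max(0.0, utxo_growth_bytes) / REF_UTXO_GROWTH_PER_BLOCK
        return cpu_pct + bandwidth_pct + utxo_pct

    def block_is_valid(cpu_cost, block_bytes, utxo_growth_bytes):
        return validation_cost(cpu_cost, block_bytes, utxo_growth_bytes) <= 1.0

    # 50% of the CPU budget + 30% of bandwidth + 10% of UTXO growth = 90%: valid.
    print(block_is_valid(40_000, 2_400_000, 2_000))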



Now, with all of that worked through, I'm not 100% sure it solves the "do
miners or wallets have to solve a bin-packing problem to determine which
transactions to put into their blocks, or what fees to attach?" question.

I think it mostly works out-- instead of fee-per-kilobyte, it would be
fee-per-validation-cost (which is in the weird units "fraction of 100%
validation cost").

But the UTXO term might be a problem-- transactions that create more UTXOs
than they spend might end up being costly. I'm traveling right now, perhaps
somebody could pick some arbitrary reference points and try to get a rough
idea of what different transactions might pay in fees (e.g. if a
one-input, two-output transaction had a cost of X, a two-input, one-output
transaction would have a cost of X/something).

I'm not convinced that a single validation cost metric is the best
approach-- it might be better to break the cost into three (UTXO growth,
CPU, and bandwidth) and just let miners set reasonable transaction
selection policies that keep each of the three under whatever caps are
imposed on each. If a miner comes up with a clever algorithm that lets them
pack in more transactions and get more fees, good for them!

But I do like the simplicity of a single validation cost metric.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Compatibility requirements for hard or soft forks

2015-11-02 Thread Gavin Andresen via bitcoin-dev
On Sun, Nov 1, 2015 at 6:46 PM, Tier Nolan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> For guidelines
>
> * Transaction version numbers will be increased, if possible
> * Transactions with unknown/large version numbers are unsafe to use with
> locktime
> * Reasonable notice is given that the change is being contemplated
> * Non-opt-in changes will only be to protect the integrity of the network
>
> Locked transaction that can be validated without excessive load on the
> network should be safe to use, even if non-standard.
>
> An OP_CAT script that requires TBs of RAM to validate crosses the
> threshold of reasonableness.
>

I like those guidelines, although I'm sure there may be lots of arguing
over what fits under "protects the integrity of the network" or what
constitutes "reasonable notice" (publish a BIP at least 30 days before
rolling out a change? 60 days? a year?)

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Compatibility requirements for hard or soft forks

2015-10-28 Thread Gavin Andresen via bitcoin-dev
I'm hoping this fits under the moderation rule of "short-term changes to
the Bitcoin protocol" (I'm not exactly clear on what is meant by
"short-term"; it would be lovely if the moderators would start a thread on
bitcoin-discuss to clarify that):


Should it be a requirement that ANY one-megabyte transaction that is valid
under the existing rules also be valid under new rules?

Pro:  There could be expensive-to-validate transactions created and given a
lockTime in the future stored somewhere safe. Their owners may have no
other way of spending the funds (they might have thrown away the private
keys), and changing validation rules to be more strict so that those
transactions are invalid would be an unacceptable confiscation of funds.

Con: It is extremely unlikely there are any such large, timelocked
transactions, because the Core code has had a clear policy for years that
100,000-byte transactions are "standard" and are relayed and
mined, and
larger transactions are not. The requirement should be relaxed so that only
valid 100,000-byte transaction under old consensus rules must be valid
under new consensus rules (larger transactions may or may not be valid).


I had to wrestle with that question when I implemented BIP101/Bitcoin XT
when deciding on a limit for signature hashing (and decided the right
answer was to support any "non-attack" 1MB transaction; see
https://bitcoincore.org/~gavin/ValidationSanity.pdf for more details).

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Is it possible for there to be two chains after a hard fork?

2015-09-29 Thread Gavin Andresen via bitcoin-dev
We really shouldn't have to go over "Bitcoin 101" on this mailing list, and
this discussion should move to the not-yet-created more general discussion
list.  I started this thread as a sanity check on myself, because I keep
seeing smart people saying that two chains could persist for more than a
few days after a hard fork, and I still don't see how that would possibly
work.

So: "fraud" would be 51% miners sending you bitcoin in exchange for
something of value, you wait for confirmations and send them that something
of value, and then the 51% reverses the transaction.

Running a full node doesn't help.

On Tue, Sep 29, 2015 at 1:55 PM, Allen Piscitello <
allen.piscite...@gmail.com> wrote:

> >A dishonest miner majority can commit fraud against you, they can mine
> only empty blocks, they can do various other things that render your money
> worthless.
>
> Mining empty blocks is not fraud.
>
> If you want to use terms like "honest miners" and "fraud", please define
> them so we can at least be on the same page.
>
> I am defining an honest miner as one that follows the rules of the
> protocol.  Obviously your definition is different.
>
> On Tue, Sep 29, 2015 at 12:51 PM, Mike Hearn  wrote:
>
>> >because Bitcoin's basic security assumption is that a supermajority of
>>> miners are 'honest.'
>>>
>>> Only if you rely on SPV.
>>>
>>
>> No, you rely on miners honesty even if you run a full node. This is in
>> the white paper. A dishonest miner majority can commit fraud against you,
>> they can mine only empty blocks, they can do various other things that
>> render your money worthless.
>>
>
>


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Is it possible for there to be two chains after a hard fork?

2015-09-29 Thread Gavin Andresen via bitcoin-dev
On Tue, Sep 29, 2015 at 1:24 PM, Allen Piscitello <
allen.piscite...@gmail.com> wrote:

> I fail to see how always following a majority of miners no matter what
> their actions somehow equates to insanity.


Ok, I have a hidden assumption: I assume most miners are also not
completely insane.

I have met a fair number of them, and while they are often a little bit
crazy (all entrepreneurs are a little bit crazy), I am confident that the
vast majority of them are economically rational, and most of them are also
meta-rational: they want Bitcoin to succeed. We've seen them demonstrate
that meta-rationality when we've had accidental consensus forks.

If you start with the premise that more than half of Bitcoin miners would
do something crazy that would either destroy Bitcoin or would be completely
unacceptable to you, personally... then maybe you should look for some
other system that you might trust more, because Bitcoin's basic security
assumption is that a supermajority of miners are 'honest.'

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Is it possible for there to be two chains after a hard fork?

2015-09-29 Thread Gavin Andresen via bitcoin-dev
I keep seeing statements like this:

On Tue, Sep 29, 2015 at 9:30 AM, Jonathan Toomim (Toomim Bros) via
bitcoin-dev  wrote:

> As a further benefit to hard forks, anybody who is ideologically opposed
> to the change can continue to use the old version successfully, as long as
> there are enough miners to keep the fork alive.


... but I can't see how that would work.

Lets say there is a hard fork, and 5% of miners stubbornly refuse to go
along with the 95% majority (for this thought experiment, it doesn't matter
if the old rules or new rules 'win').

Lets further imagine that some exchange decides to support that 5% and lets
people trade coins from that fork (one of the small altcoin exchanges would
definitely do this if they think they can make a profit).

Now, lets say I've got a lot of pre-fork bitcoin; they're valid on both
sides of the fork. I support the 95% chain (because I'm not insane), but
I'm happy to take people's money if they're stupid enough to give it to me.

So, I do the following:

1) Create a send-to-self transaction on the 95% fork that is ONLY valid on
the 95% fork (maybe I CoinJoin with a post-fork coinbase transaction, or
just move my coins into then out of an exchange's very active hot wallet so
I get coins with a long transaction history on the 95% side of the fork).

2) Transfer  those same coins to the 5% exchange and sell them for whatever
price I can get (I don't care how low, it is free money to me-- I will
still own the coins on the 95% fork).

I have to do step (1) to prevent the exchange from taking the
transfer-to-exchange transaction and replaying it on the 95% chain.
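
To see why step (1) works, here is a toy Python model (not real Bitcoin
code, and the coin names are obviously made up): a transaction is only
valid on a fork whose UTXO set contains all of its inputs, and a post-fork
coinbase output exists on only one fork.

    pre_fork_utxos = {"my_old_coin"}
    utxos_95 = pre_fork_utxos | {"coinbase_output_mined_on_95_fork"}
    utxos_5 = set(pre_fork_utxos)  # the 5% chain never saw that coinbase

    def valid_on(fork_utxos, tx_inputs):
        return set(tx_inputs) <= fork_utxos

    # Spending only pre-fork coins is replayable on both forks:
    print(valid_on(utxos_95, ["my_old_coin"]), valid_on(utxos_5, ["my_old_coin"]))

    # Mixing in a 95%-fork-only coinbase output makes the spend valid on
    # the 95% fork and invalid on the 5% fork, so it cannot be replayed:
    tainted = ["my_old_coin", "coinbase_output_mined_on_95_fork"]
    print(valid_on(utxos_95, tainted), valid_on(utxos_5, tainted))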

I don't see any way of preventing EVERYBODY who has coins on the 95% side
of the fork from doing that. The result would be a huge free-fall in price
as I, and everybody else, rushes to get some free money from anybody
willing to pay us to remain idealogically pure.

Does anybody think something else would happen, and do you think that
ANYBODY would stick to the 5% fork in the face of enormously long
transaction confirmation times (~3 hours), a huge transaction backlog as
lots of the 95%'ers try to sell their coins before the price drops, and a
massive price drop for coins on the 5% fork?

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Gavin Andresen via bitcoin-dev
On Mon, Sep 28, 2015 at 9:28 AM, Peter Todd  wrote:

> > 2) Mr. Todd (or somebody) needs to write up a risk/benefit security
> > tradeoff analysis doo-hickey document and publish it. I'm reasonably
> > confident that the risks to SPV nodes can be mitigated (e.g. by deploying
> > mempool-only first, before the soft fork rolls out), but as somebody who
> > has only been moderately paying attention, BETTER COMMUNICATION is
> needed.
> > What should SPV wallet authors be doing right now, if anything? Once the
> > soft fork starts to roll out or activates, what do miners need to be
> aware
> > of? SPV wallet authors?
>
> Do you have such a document for your BIP101? That would save me a lot of
> time, and the need for that kind of document is significantly higher
> with BIP101 anyway.
>

Hmmm?  When I asked YOU for that kind of security analysis document, you
said you'd see if any of your clients would be willing to let you publish
one you'd done in the past. Then I never heard back from you.

So, no, I don't have one for BIP 101, but unless you were lying and just
trying to add Yet Another Hoop for BIP 101 to jump through, you should
already have something to start from.

RE: mempool only: yes, pull-req 5000 satisfies (and that's what I was
thinking of). There should be a nice, readable blog post explaining to
other full node implementors and wallet implementors why that was done for
Core and what they should do to follow 'best practices to be soft-fork
ready.'

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-28 Thread Gavin Andresen via bitcoin-dev
I think three things need to happen:

1) Stop pretending that "everyone must agree to make consensus rule
changes." "Rough consensus" is what we've always gone with, and is good
enough.

2) Mr. Todd (or somebody) needs to write up a risk/benefit security
tradeoff analysis doo-hickey document and publish it. I'm reasonably
confident that the risks to SPV nodes can be mitigated (e.g. by deploying
mempool-only first, before the soft fork rolls out), but as somebody who
has only been moderately paying attention, BETTER COMMUNICATION is needed.
What should SPV wallet authors be doing right now, if anything? Once the
soft fork starts to roll out or activates, what do miners need to be aware
of? SPV wallet authors?

3) I agree CLTV is ready to roll out, that there is rough consensus a soft
fork is a reasonable way to do it, and that it should happen ASAP.

On Mon, Sep 28, 2015 at 6:48 AM, Mike Hearn via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> There is *no* consensus on using a soft fork to deploy this feature. It
> will result in the same problems as all the other soft forks - SPV wallets
> will become less reliable during the rollout period. I am against that, as
> it's entirely avoidable.
>
> Make it a hard fork and my objection will be dropped.
>
> Until then, as there is no consensus, you need to do one of two things:
>
> 1) Drop the "everyone must agree to make changes" idea that people here
> like to peddle, and do it loudly, so everyone in the community is correctly
> informed
>
> 2) Do nothing
>
>
-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Weak block thoughts...

2015-09-23 Thread Gavin Andresen via bitcoin-dev
On Wed, Sep 23, 2015 at 3:24 PM, Gregory Maxwell  wrote:

> On Wed, Sep 23, 2015 at 3:43 PM, Gavin Andresen via bitcoin-dev
>  wrote:
> [...]
> > A miner could try to avoid validation work by just taking a weak block
> > announced by somebody else, replacing the coinbase and re-computing the
> > merkle root, and then mining. They will be at a slight disadvantage to
> fully
>
> Take care, here-- if a scheme is used where e.g. the full solution had
> to be exactly identical to a prior weak block then the result would be
> making mining not progress free because bigger miners would have
> disproportionately more access to the weak/strong one/two punch. I
> think what you're thinking here is okay, but it wasn't clear to me if
> you'd caught that particular potential issue.
>

I'm assuming the optimized protocol would be forward-error-coded (e.g.
using IBLTs)  and NOT require the full solution (or follow-on weak blocks)
to be exactly the same.


> Avoiding this is why I've always previously described this idea as
> merged mined block DAG (with blocks of arbitrary strength) which are
> always efficiently deferentially coded against prior state. A new
> solution (regardless of who creates it) can still be efficiently
> transmitted even if it differs in arbitrary ways (though the
> efficiency is less the more different it is).
>

Yup, although I don't get the 'merge mined' bit; the weak blocks are
ephemeral, probably purged out of memory as soon as a few full blocks are
found...


> I'm unsure of what approach to take for incentive compatibility
> analysis. In the worst case this approach class has no better delays
> (and higher bandwidth); but it doesn't seem to me to give rise to any
> immediate incrementally strategic behavior (or at least none worse
> than you'd get from just privately using the same scheme).
>

I don't see any incentive problems, either. Worst case is more miners
decide to skip validation and just mine a variation of the
highest-fee-paying weak block they've seen, but that's not a disaster--
invalid blocks will still get rejected by all the non-miners running full
nodes.

If we did see that behavior, I bet it would be a good strategy for a big
hashrate miner to dedicate some of their hashrate to announcing invalid
weak blocks; if you can get your lazy competitors to mine it, then you
win

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] [BIP Proposal] Version bits with timeout and delay.

2015-09-23 Thread Gavin Andresen via bitcoin-dev
I say keep it simple.

If the 75% threshold is hit, then support suddenly drops off below 50%,
"meh" -- there will be a big ruckus, everybody will freak out, and miners
will refuse to build big blocks because they'll worry that they'll get
orphaned.

Adding more complexity for a case that ain't gonna happen (and isn't a
disaster if it does) is a mistake, in my humble opinion.



On Wed, Sep 23, 2015 at 2:33 PM, Tom Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On 9/13/2015 11:56 AM, Rusty Russell via bitcoin-dev wrote:
>
>> '''Success: Activation Delay'''
>> The consensus rules related to ''locked-in'' soft fork will be enforced in
>> the second retarget period; ie. there is a one retarget period in
>> which the remaining 5% can upgrade.  At that activation block and
>> after, the bit B may be reused for a different soft fork.
>>
>>
> Rather than a simple one-period delay, should there be a one-period
> "burn-in" to show sustained support of the threshold?  During this period,
> support must continuously remain above the threshold.  Any lapse resets to
> inactivated state.
>
> With a simple delay, you can have the embarrassing situation where support
> falls off during the delay period and there is far below threshold support
> just moments prior to enforcement, but enforcement happens anyway.
>
> BIP 101 has this problem too.
>
>
>
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>



-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Weak block thoughts...

2015-09-23 Thread Gavin Andresen via bitcoin-dev
I've been thinking about 'weak blocks' and SPV mining, and it seems to me
weak blocks will make things better, not worse, if we improve the mining
code a little bit.

First:  the idea of 'weak blocks' (hat tip to Rusty for the term) is for
miners to pre-announce blocks that they're working on, before they've
solved the proof-of-work puzzle. To prevent DoS attacks, assume that some
amount of proof-of-work is done (hence the term 'weak block') to rate-limit
how many 'weak block' messages are relayed across the network.
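
A minimal sketch of that rate-limiting, assuming the 'weak' target is
simply some multiple of the full target (the factor of 20 is an arbitrary
example, not a proposal):

    import hashlib

    WEAKNESS_FACTOR = 20  # arbitrary example: ~1/20th of full difficulty

    def header_hash(header_bytes):
        """Double-SHA256 of a block header, as a little-endian integer."""
        return int.from_bytes(
            hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest(),
            "little")

    def is_full_block(header_bytes, full_target):
        return header_hash(header_bytes) <= full_target

    def should_relay_weak_block(header_bytes, full_target):
        # A weak block only has to meet a larger (easier) target, so on
        # average the network sees at most ~WEAKNESS_FACTOR weak blocks
        # per real block -- enough proof-of-work to prevent spam.
        return header_hash(header_bytes) <= full_target * WEAKNESS_FACTOR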


Today, miners are incentivized to start mining an empty block as soon as
they see a block with valid proof-of-work, because they want to spend as
little time as possible mining a not-best chain.

Imagine miners always pre-announce the blocks they're working on to their
peers, and peers validate those 'weak blocks' as quickly as they are able.

Because weak blocks are pre-validated, when a full-difficulty block based
on a previously announced weak block is found, block propagation should be
insanely fast-- basically, as fast as a single packet can be relayed across
the network the whole network could be mining on the new block.

I don't see any barrier to making acceptance of the full-difficulty block and
CreateNewBlock() insanely fast, and if those operations take just a
microsecond or three, miners will have an incentive to create blocks with
fee-paying transactions that weren't in the last block, rather than mining
empty blocks.

.

A miner could try to avoid validation work by just taking a weak block
announced by somebody else, replacing the coinbase and re-computing the
merkle root, and then mining. They will be at a slight disadvantage to
fully validating miners, though, because they WOULD have to mine empty
blocks between the time a full block is found and a fully-validating miner
announced their next weak block.

.

Weak block announcements are great for the network; they give transaction
creators a pretty good idea of whether or not their transactions are likely
to be confirmed in the next block. And if we're smart about implementing
them, they shouldn't increase bandwidth or CPU usage significantly, because
all the weak blocks at a given point in time are likely to contain the same
transactions.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Dynamic limit to the block size - BIP draft discussion

2015-09-08 Thread Gavin Andresen via bitcoin-dev
>
> 3) Let me put it another way, I've read that both Gavin and yourself are
> favorable to a dynamic limit on the block size. In your view, what is
> missing from this proposal, or what variables should be adjusted, to get
> the rules to a place where you and other Core developers would seriously
> consider it?
>

I'm not clear on what problem(s) you're trying to solve.

If you want blocks to be at least 60% full, then just specify a simple rule
like "maximum block size is 1.0/0.6 = 1.666 times the average block size
over the last N blocks (applied at every block or every 2016 blocks or
whatever, details don't really matter)".

If you want an upper limit on growth, then just implement a simple rule
like "Absolute maximum block size is 1 megabyte in 2016, 3.45 megabytes in
2017, and increases by a maximum of 3.45 times every year."
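
Either rule fits in a few lines of code; here is a sketch using exactly the
example numbers above (they are illustrations, not recommendations):

    TARGET_FULLNESS = 0.6        # aim for blocks averaging ~60% full
    BASE_LIMIT = 1_000_000       # 1 megabyte in 2016
    ANNUAL_GROWTH_CAP = 3.45     # at most 3.45x growth per year

    def demand_based_limit(recent_block_sizes):
        """Maximum size = average recent size / 0.6."""
        average = sum(recent_block_sizes) / len(recent_block_sizes)
        return average / TARGET_FULLNESS

    def growth_capped_limit(years_since_2016):
        """Absolute cap: 1 MB in 2016, growing at most 3.45x per year."""
        return BASE_LIMIT * (ANNUAL_GROWTH_CAP ** years_since_2016)

    def max_block_size(recent_block_sizes, years_since_2016):
        return min(demand_based_limit(recent_block_sizes),
                   growth_capped_limit(years_since_2016))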

If you want me to take your proposal seriously, you need to justify why 60%
full is a good answer (and why we need a centralized decision on how full
blocks "should" be), and why 3.45 times-per-year is a good answer for
maximum growth (and, again, why we need a centralized decision on that).

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIPS proposal for implementing AML-KYC in bitcoin

2015-08-27 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 27, 2015 at 9:39 AM, prabhat  wrote:

> So where is the solution? What to do?
>

This is a development list; organizations like https://coincenter.org/ work
on high-level policy issues.

Last I heard, competent law enforcement organizations said they were
perfectly capable of tracking down criminals using Bitcoin using
traditional investigative techniques (like infiltrating criminal
organizations or setting up honeypots). Given how many "dark markets" have
either disappeared or been taken down, it seems they are correct.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIPS proposal for implementing AML-KYC in bitcoin

2015-08-27 Thread Gavin Andresen via bitcoin-dev
Have you talked with anybody at the Bitcoin Foundation about this proposal?

As Chief Scientist of the Foundation, I am strongly opposed to any proposal
that puts the Foundation in a position of centralized authority, so this is
unacceptable: "The Bitcoin Foundation will act as fair play party and
enforcement body to control the misuse of vast financial powers which
bitcoin has."

The idea that a central organization can be trusted to keep secrets secure
is just fundamentally wrong. In the very recent past we have seen
government organizations fail in that task (the NSA, the OPM) and we see
commercial organizations that SHOULD be highly motivated to do a good job
also fail (e.g. the Ashley Madison leak).

Even if it were technically possible, I would be opposed because
decentralization is a bedrock principle of Bitcoin.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-10 Thread Gavin Andresen via bitcoin-dev
On Fri, Aug 7, 2015 at 1:33 PM, Jorge Timón  wrote:

>
> On Aug 7, 2015 5:55 PM, "Gavin Andresen"  wrote:
> >
> > I think there are multiple reasons to raise the maximum block size, and
> yes, fear of Bad Things Happening as we run up against the 1MB limit is one
> of the reasons.
>
> What are the other reasons?
>
> > I take the opinion of smart engineers who actually do resource planning
> and have seen what happens when networks run out of capacity very seriously.
>
> When "the network runs out of capacity" (when we hit the limit) do we
> expect anything to happen apart from minimum market fees rising (above
> zero)?
> Obviously any consequences of fees rising are included in this concern.
>
It is frustrating to answer questions that we answered months ago,
especially when I linked to these in response to your recent "increase
advocates say that not increasing the max block size will KILL BITCOIN"
false claim:
  http://gavinandresen.ninja/why-increasing-the-max-block-size-is-urgent
  https://medium.com/@octskyward/crash-landing-f5cc19908e32

Executive summary: when networks get over-saturated, they become
unreliable.  Unreliable is bad.

Unreliable and expensive is extra bad, and that's where we're headed
without an increase to the max block size.

RE: the recent thread about "better deal with that type of thing now rather
than later" :  exactly the same argument can be made about changes needed
to support a larger block size-- "better to do that now than to do that
later."  I don't think either of those arguments are very convincing.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] What Lightning Is

2015-08-09 Thread Gavin Andresen via bitcoin-dev
While we're on the subject of payment hubs / lightning network...

I'd love to see somebody write up a higher-level description of what the
user experience is like, what communication happens underneath, and what
new pieces of infrastructure need to get built to make it all work.

A use-case to start with:

A customer starts with eleven on-chain bitcoin. They want to pay for a nice
cup of tea. Walk me through what happens before/during/after the
transaction, assuming I have a  lightning-enabled wallet on my iPhone and
the tea shop has a lightning-enabled cash register.

Assume neither the customer nor the tea shop are technically sophisticated
-- assume the customer is using an SPV wallet and the tea shop is using a
service similar to Bitpay.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fwd: Block size following technological growth

2015-08-07 Thread Gavin Andresen via bitcoin-dev
On Fri, Aug 7, 2015 at 12:30 PM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> If the incentives for running a node don't weight up against the
> cost/difficulty using a full node yourself for a majority of people in the
> ecosystem, I would argue that there is a problem. As Bitcoin's fundamental
> improvement over other systems is the lack of need for trust, I believe
> that with increased adoption should also come an increased (in absolute
> terms) incentive for people to use a full node. I'm seeing the opposite
> trend, and that is worrying IMHO.


Are you saying that unless the majority of people in the ecosystem decide
to trust nothing but the genesis block hash (decide to run a full node)
there is a problem?

If so, then we do have a fundamental difference of opinion, but I've
misunderstood how you think about trust/centralization/convenience
tradeoffs in the past.

I believe people in the Bitcoin ecosystem will choose different tradeoffs,
and I believe that is OK-- people should be free to make those tradeoffs.

And given that the majority of people in the ecosystem were deciding that
using a centralized service or an SPV-level-security wallet was better even
two or three years ago when blocks were tiny (I'd have to go back and dig
up number-of-full-nodes and number-of-active-wallets at the big web-wallet
providers, but I bet there were an order of magnitude more people using
centralized services than running full nodes even back then), I firmly
believe that block size has very little to do with the decision to run a
full node or not.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Gavin Andresen via bitcoin-dev
On Fri, Aug 7, 2015 at 11:16 AM, Pieter Wuille 
wrote:

> I guess my question (and perhaps that's what Jorge is after): do you feel
> that blocks should be increased in response to (or for fear of) such a
> scenario.
>

I think there are multiple reasons to raise the maximum block size, and
yes, fear of Bad Things Happening as we run up against the 1MB limit is one
of the reasons.

I take the opinion of smart engineers who actually do resource planning and
have seen what happens when networks run out of capacity very seriously.


> And if so, if that is a reason for increase now, won't it be a reason for
> an increase later as well? It is my impression that your answer is yes,
> that this is why you want to increase the block size quickly and
> significantly, but correct me if I'm wrong.
>

Sure, it might be a reason for an increase later. Here's my message to
in-the-future Bitcoin engineers:  you should consider raising the maximum
block size if needed and you think the benefits of doing so (like increased
adoption or lower transaction fees or increased reliability) outweigh the
costs (like higher operating costs for full-nodes or the disruption caused
by ANY consensus rule change).


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Gavin Andresen via bitcoin-dev
Popping this into its own thread:

Jorge asked:

> >> 1) If "not now", when will it be a good time to let the "market
> >> minimum fee for miners to mine a transaction" rise above zero?
>

I answered:


> > 1. If you are willing to wait an infinite amount of time, I think the
> > minimum fee will always be zero or very close to zero, so I think it's a
> > silly question.
>

Which Jorge misinterpreted to mean that I think there will always be at
least one miner willing to mine a transaction for free.

That's not what I'm thinking. It is just an observation based on the fact
that blocks are found at random intervals.

Every once in a while the network will get lucky and we'll find six blocks
in ten minutes. If you are deciding what transaction fee to put on your
transaction, and you're willing to wait until that
six-blocks-in-ten-minutes once-a-week event, submit your transaction with a
low fee.
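
(For anyone who wants to check the "once-a-week" figure, a quick
back-of-the-envelope calculation, assuming block arrivals are Poisson with
a 10-minute mean:)

    import math

    lam = 1.0  # expected blocks in any 10-minute window

    # Probability of seeing 6 or more blocks in a single 10-minute window:
    p_six_or_more = 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                              for k in range(6))

    windows_per_week = 7 * 24 * 6  # non-overlapping 10-minute windows
    print(p_six_or_more)                     # ~0.0006
    print(p_six_or_more * windows_per_week)  # ~0.6 -- roughly once a week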

All the higher-fee transactions waiting to be confirmed will get confirmed
in the first five blocks and, if miners don't have any floor on the fee
they'll accept (they will, but lets pretend they won't) then your
very-low-fee transaction will get confirmed.

In the limit, that logic becomes "wait an infinite amount of time, pay zero
fee."

So... I have no idea what the 'market minimum fee' will be, because I have
no idea how long people will be willing to wait, how many times they'll be
willing to retransmit a low-fee transaction that gets evicted from
memory-limited memory pools, or how much memory miners will be willing to
dedicate to storing transactions that won't confirm for a long time because
they're waiting for a flurry of blocks to be found.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 1:15 PM, Jorge Timón  wrote:

> So I reformulate the question:
>
> 1) If "not now", when will it be a good time to let the "market
> minimum fee for miners to mine a transaction" rise above zero?


Two answers:

1. If you are willing to wait an infinite amount of time, I think the
minimum fee will always be zero or very close to zero, so I think it's a
silly question.

2. The "market minimum fee" should be determined by the market. It should
not be up to us to decide "when is a good time."


> 2) Do you have any criterion (automatic or not) that can result in you
> saying "no, this is too much" for any proposed size?
>

Sure, if keeping up with transaction volume requires a cluster of computers
or more than "pretty good" broadband bandwidth I think that's too far.
That's where the original 20MB limit comes from; otherwise I'd have proposed a
much higher limit.


> Would you agree that blocksize increase proposals should have such a
> criterion/test?


Although I've been very clear with my criterion, no, I don't think all
blocksize increase proposals should have to justify "why this size" or "why
this rate of increase." Part of my frustration with this whole debate is
we're talking about a sanity-check upper-limit; as long as it doesn't open
up some terrible new DoS possibility I don't think it really matters much
what the exact number is.



> Regardless of the history of the consensus rule (which I couldn't care
> less about), I believe the only function that the maximum block size
> rule currently serves is limiting centralization.
> Since you deny that function, do you think the (artificial) consensus
> rule is currently serving any other purpose that I'm missing?
>

It prevents trivial denial-of-service attacks (e.g. I promise to send you a
1 Terabyte block, then fill up your memory or disk...).

And please read what I wrote: I said that the block limit has LITTLE effect
on MINING centralization.  Not "no effect on any type of centralization."

If the limit was removed entirely, it is certainly possible we'd end up
with very few organizations (and perhaps zero individuals) running full
nodes.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 11:25 AM, Jorge Timón  wrote:

> 1) If "not now" when will it be a good time to let fees rise above zero?
>

Fees are already above zero. See
http://gavinandresen.ninja/the-myth-of-not-full-blocks


> 2) When will you consider a size to be too dangerous for centralization?
> In other words, why 20 GB would have been safe but 21 GB wouldn't have
> been (or the respective maximums and respective +1 for each block
> increase proposal)?
>

http://gavinandresen.ninja/does-more-transactions-necessarily-mean-more-centralized

> 3) Does this mean that you would be in favor of completely removing
> the consensus rule that limits mining centralization by imposing an
> artificial (like any other consensus rule) block size maximum?
>

I don't believe that the maximum block size has much at all to do with
mining centralization, so I don't accept the premise of the question.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Fwd: Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille 
wrote:

> So if we would have 8 MB blocks, and there is a sudden influx of users (or
> settlement systems, who serve much more users) who want to pay high fees
> (let's say 20 transactions per second) making the block chain inaccessible
> for low fee transactions, and unreliable for medium fee transactions (for
> any value of low, medium, and high), would you be ok with that?


Yes, that's fine. If the network cannot handle the transaction volume that
people want to pay for, then the marginal transactions are priced out. That
is true today (otherwise ChangeTip would be operating on-blockchain), and
will be true forever.


> If so, why is 8 MB good but 1 MB not? To me, they're a small constant
> factor that does not fundamentally improve the scale of the system.


"better is better" -- I applaud efforts to fundamentally improve the
scalability of the system, but I am an old, cranky, pragmatic engineer who
has seen that successful companies tackle problems that arise and are
willing to deploy not-so-perfect solutions if they help whatever short-term
problem they're facing.


> I dislike the outlook of "being forever locked at the same scale" while
> technology evolves, so my proposal tries to address that part. It
> intentionally does not try to improve a small factor, because I don't think
> it is valuable.


I think consensus is against you on that point.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Thu, Aug 6, 2015 at 10:06 AM, Pieter Wuille 
wrote:

> But you seem to consider that a bad thing. Maybe saying that you're
> claiming that this equals Bitcoin failing is an exaggeration, but you do
> believe that evolving towards an ecosystem where there is competition for
> block space is a bad thing, right?
>

No, competition for block space is good.

What is bad is artificially limiting or centrally controlling the supply of
that space.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-06 Thread Gavin Andresen via bitcoin-dev
On Wed, Aug 5, 2015 at 9:26 PM, Jorge Timón <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> This is a much more reasonable position. I wish this had been starting
> point of this discussion instead of "the block size limit must be
> increased as soon as possible or bitcoin will fail".
>

It REALLY doesn't help the debate when you say patently false statements
like that.

My first blog post on this issue is here:
  http://gavinandresen.ninja/why-increasing-the-max-block-size-is-urgent

... and I NEVER say "Bitcoin will fail".  I say:

"If the number of transactions waiting gets large enough, the end result
will be an over-saturated network, busy doing nothing productive. I don’t
think that is likely– it is more likely people just stop using Bitcoin
because transaction confirmation becomes increasingly unreliable."

Mike sketched out the worst-case here:
  https://medium.com/@octskyward/crash-landing-f5cc19908e32

... and concludes:

"I believe there are no situations in which Bitcoin can enter an overload
situation and come out with its reputation and user base intact. Both would
suffer heavily and as Bitcoin is the founder of the cryptocurrency concept,
the idea itself would inevitably suffer some kind of negative
repercussions."




So please stop with the over-the-top claims about what "the other side"
believe, there are enough of those (on both sides of the debate) on reddit.
I'd really like to focus on how to move forward, and how best to resolve
difficult questions like this in the future.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Superluminal communication and the consensus block size limit

2015-08-05 Thread Gavin Andresen via bitcoin-dev
On Wed, Aug 5, 2015 at 7:24 PM, Jorge Timón <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Miner A is able to process 100 M tx/block while miner B is only able
> to process 10 M tx/block.
>
> Will miner B be able to maintain itself competitive against miner A?
>
> The answer is: it depends on the consensus maximum block size.
>

No, it depends on all of the variables that go into the mining
profitability equation.

Does miner B have access to cheaper electricity than miner A?
Access to more advanced mining hardware, sooner?
Ability to use excess heat generated from mining productively?
Access to inexpensive labor to oversee their operations?
Access to inexpensive capital to finance investment in hardware?

The number of fee-paying transactions a miner can profitably include in
their blocks will certainly eventually be part of that equation (it is
insignificant today), and that's fantastic-- we WANT miners to include lots
of transactions in their blocks.
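
A deliberately oversimplified sketch of that profitability equation (every
term and default here is illustrative; real operations have many more):

    def expected_profit_per_block_interval(
            my_hashrate, network_hashrate,     # hashes per second
            subsidy_btc, expected_fees_btc,    # per block
            btc_price_usd,
            power_kw, electricity_usd_per_kwh,
            other_costs_usd_per_interval=0.0,
            orphan_rate=0.0):
        # Expected share of blocks found, discounted by blocks lost to orphans.
        share = (my_hashrate / network_hashrate) * (1.0 - orphan_rate)
        revenue = share * (subsidy_btc + expected_fees_btc) * btc_price_usd
        # Roughly ten minutes of electricity, plus everything else.
        costs = (power_kw * (10.0 / 60.0) * electricity_usd_per_kwh
                 + other_costs_usd_per_interval)
        return revenue - costs

The point is that hashrate and validation capacity are only two of many
terms; a miner can make up a disadvantage in one term with an advantage in
another.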

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] "A Transaction Fee Market Exists Without a Block Size Limit"--new research paper suggests

2015-08-04 Thread Gavin Andresen via bitcoin-dev
On Tue, Aug 4, 2015 at 2:41 PM, Dave Hudson via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Fundamentally a block maker (pool or aggregation of pools) does not orphan
> its own blocks.


Unless the block maker has an infinitely fast connection to its hashpower
OR its hashpower is not parallelized at all, that's not strictly true --
it WILL orphan its own blocks because two hashing units will find solutions
in the time it takes to communicate that solution to the block maker and to
the rest of the hashing units.
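
A back-of-the-envelope estimate of how big that effect is, assuming Poisson
block arrivals (the example numbers are assumptions, not measurements):

    import math

    def self_orphan_probability(pool_hashrate_share, delay_seconds,
                                mean_block_interval=600.0):
        # After the pool finds a block, its own hashing units keep working
        # on stale work for delay_seconds before hearing about it.
        pool_blocks_per_second = pool_hashrate_share / mean_block_interval
        return 1.0 - math.exp(-pool_blocks_per_second * delay_seconds)

    # e.g. a pool with 25% of the hashrate and 2 seconds of internal
    # latency self-orphans roughly 0.08% of the blocks it finds.
    print(self_orphan_probability(0.25, 2.0))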

That's getting into "how many miners can dance on the head of a pin"
territory, though. I don't think we know whether the communication
advantages of putting lots of hashing power physically close together will
outweigh the extra cooling costs of doing that (or maybe some other
tradeoff I haven't thought of). That would be a fine topic for another
paper

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-08-04 Thread Gavin Andresen via bitcoin-dev
On Tue, Aug 4, 2015 at 7:27 AM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I would say that things already demonstrably got terrible. The mining
> landscape is very centralized, with apparently a majority depending on
> agreements to trust each other's announced blocks without validation.
>
And that is a problem... why?

As far as I can tell, nobody besides miners running old and/or buggy
software lost money due to outsourced mining validation (please correct me
if I'm wrong-- I'm looking forward to Greg's post-mortem). The operators of
bitcoin.org seem to have freaked out and pushed the panic button (with dire
warnings of not trusting transactions until 20 confirmations), but theymos
was well known for using an old, patched version of Core for
blockexplorer.com so maybe that's not surprising.

As Bitcoin grows, pieces of the ecosystem will specialize. Satoshi's
original code did everything: hashing, block assembly, wallet, consensus,
network. That is changing, and that is OK.

I understand there are parts of the ecosystem you'd rather not see
specialized, like transaction selection / block assembly or validation. I
see it as a natural maturation. The only danger I see is if some unnatural
barriers to competition spring up.

> Full node count is at its historically lowest value in years, and
> outsourcing of full validation keeps growing.

Both side effects of increasing specialization, in my opinion. Many
companies quite reasonably would rather hire somebody who specializes in
running nodes, keeping keys secure, etc rather than develop that expertise
themselves.

Again, not a problem UNLESS some unnatural barriers to competition spring
up.


> I believe that if the above would have happened overnight, people would
> have cried wolf. But somehow it happened slow enough, and "things kept
> working".
>
> I don't think that this is a good criterion. Bitcoin can "work" with
> gigabyte blocks today, if everyone uses the same few blockchain validation
> services, the same few online wallets, and mining is done by a cartel that
> only allows joining after signing a contract so they can sue you if you
> create an invalid block. Do you think people will then agree that "things
> got demonstrably worse"?
>
> Don't turn Bitcoin into something uninteresting, please.
>

Why is what you, personally, find interesting relevant?

I understand you want to build an extremely decentralized system, where
everybody participating trusts nothing except the genesis block hash.

I think it is more interesting to build a system that works for hundreds of
millions of people, with no central point of control and the opportunity
for ANYBODY to participate at any level. Permission-less innovation is what
I find interesting.

And I think the current "demonstrably terrible" Bitcoin system is still
INCREDIBLY interesting.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Eli Dourado on "governance"

2015-08-03 Thread Gavin Andresen via bitcoin-dev
On Mon, Aug 3, 2015 at 6:13 PM, GJB  wrote:

> Do you mean something like a Foundation?
>

No, I think one of the fundamental problems with the Foundation is it tries
to represent everybody's interests. The interests of exchanges are not
necessarily the same as end-users or miners, for example.

But it would make sense for exchanges (for example) to get together and
come to consensus on whatever issues are important to them, like the recent
consensus and then statement from "the Chinese miners" regarding the block
size issue.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] Eli Dourado on "governance"

2015-08-03 Thread Gavin Andresen via bitcoin-dev
I haven't seen this excellent recent blog post by Eli Dourado referenced
here:
  https://readplaintext.com/how-should-bitcoin-be-governed-680192fcd92b

I agree with his conclusions: we need better communication/organization
mechanisms among 'stakeholders' and between the various factions
(developers, miners, merchants, exchanges, end-users).

And the preliminary results of using a prediction market to try to wrestle
with the tough tradeoffs look roughly correct to me, too:
   https://blocksizedebate.com/

(my only big disagreement with those predictions is the 'Number of nodes'
-- I don't think replace-by-fee would affect that number, and I think even
with no change we will see the number of full nodes on the network drop to
a couple thousand, because the general-purpose-home-PC is headed the way of
the dodo:
http://www.businessinsider.com/pc-sales-plummet-in-q2-2015-gartner-idc-say-2015-7
).


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Consensus fork activation thresholds: Block.nTime vs median time vs block.nHeight

2015-07-30 Thread Gavin Andresen via bitcoin-dev
I still think using the version and timestamp fields in the block header
is simplest and best.

Advantages:
  Available to SPV nodes with no change to the network protocol
  Available after headers downloaded, before full block data is available
  Once well past a fork, allows all block validation except validation
against the UTXO to happen in parallel, out-of-order, independent of any
other block.

Disadvantages:
  Not monotonically increasing


I think discussion about transactions in the memory pool is just a
distraction: no matter what criterion is used (timestamp, height, median
time), a blockchain re-organization could mean the validity of transactions
you've accepted into the memory pool (if you're accepting transactions that
switch from valid to invalid at the consensus change -- Core tries hard not
to do that via IsStandard policy) must be re-evaluated.

I don't strongly care if median time or block timestamp is used, I think
either will work. I don't like height, there are too many cases where the
time is known but the block height isn't (see, for example, the
max-outputs-in-a-transaction sanity check computation at line 190 of
bitcoin-tx.cpp -- bitcoin-tx.cpp has no idea what the current block height
is).


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Block size following technological growth

2015-07-30 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 30, 2015 at 10:25 AM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Some things are not included yet, such as a testnet whose size runs ahead
> of the main chain, and the inclusion of Gavin's more accurate sigop
> checking after the hard fork.
>
> Comments?
>

First, THANK YOU for making a concrete proposal!

Specific comments:

So we'd get to 2MB blocks in the year 2021. I think that is much too
conservative, and the most likely effect of being that conservative is that
the main blockchain becomes a settlement network, affordable only for
large-value transactions.

I don't think your proposal strikes the right balance between
centralization of payments (a future where only people running payment
hubs, big merchants, exchanges, and wallet providers settle on the
blockchain) and centralization of mining.



I'll comment on using median time generally in Jorge's thread, but why does
monotonically increasing matter for max block size? I can't think of a
reason why a max block size of X bytes in block N followed by a max size of
X-something bytes in block N+1 would cause any problems.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-30 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 30, 2015 at 11:24 AM, Bryan Bishop via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Because any decentralized system is going to have high transaction costs
> and scarcity anyway.


This is a meme that keeps coming up that I think just isn't true.

What other decentralized systems can we look at as role models?

How decentralized are they?

And why did they succeed when "more efficient" centralized systems did not?


The Internet is the most successful decentralized system to date; what
lessons should we learn?

How decentralized is the technology of the Internet (put aside governance
and the issues of who-assigns-blocks-of-IPs-and-registers-domain-names)?
How many root DNS servers?  How many BGP routers along the backbone would
need to be compromised to disrupt traffic? Why don't we see more
disruptions, or why are people willing to tolerate the disruptions that DO
happen?

And how did the Internet out-compete more efficient centralized systems
from the big telecom companies?  (I remember some of the arguments that
unreliable, inefficient packet-switching would never replace dedicated
circuits that couldn't get congested and didn't have inefficient timeouts
and retransmissions)


What other successful or unsuccessful decentralized systems should we be
looking at?


I'm old-- I graduated from college in 1988, so I've worked in tech through
the entire rise of the Internet. The lessons I believe we should take away
is that a system doesn't have to be perfect to be successful, and we
shouldn't underestimate people's ability to innovate around what might seem
to be insurmountable problems, IF people are given the ability to innovate.

Yes, people will innovate within a 1MB (or 1MB-scaling-to-2MB by 2021) max
block size, and yes, smaller blocks have utility. But I think we'll get a
lot more innovation and utility without such small, artificial limits.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-30 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 30, 2015 at 8:50 AM, Pieter Wuille 
wrote:

> Let's scale the block size gradually over time, according to technological
> growth.


Yes, let's do that-- that is EXACTLY what BIP101 intends to do.

With the added belt-and-suspenders reality check of miners, who won't produce
blocks too big for whatever technology they're using.

---

So what do you think the scalability road map should look like? Should we
wait to hard fork until Blockstream Elements is ready for deploying on the
main network, and then have One Grand Hardfork that introduces all the
scalability work you guys have been working on (like Segregated Witness and
Lightning)?

Or is the plan to avoid controversy by people voluntarily moving their
bitcoin to a sidechain where all this scaling-up innovation happens?

No plan for how to scale up is the worst of all possible worlds, and the
lack of a direction or plan(s) is my main objection to the current status
quo.

And any plan that requires inventing brand-new technology is going to be
riskier than scaling up what we already have and understand, which is why I
think it is worthwhile to scale up what we have IN ADDITION TO working on
great projects like Segregated Witness and Lightning.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-24 Thread Gavin Andresen via bitcoin-dev
After thinking about it, implementing it, and doing some benchmarking, I'm
convinced replacing the existing, messy, ad-hoc sigop-counting consensus
rules is the right thing to do.

The last two commits in this branch are an implementation:
   https://github.com/gavinandresen/bitcoin-git/commits/count_hash_size

From the commit message in the last commit:

Summary of old rules / new rules:

Old rules: 20,000 inaccurately-counted sigops for a 1MB block
New: 80,000 accurately-counted sigops for an 8MB block

A scan of the last 100,000 blocks for high-sigop blocks gets
a maximum of 7,350 sigops in block 364,773 (in a single, huge,
~1MB transaction).

For reference, Pieter Wuille's libsecp256k1 validation code
validates about 10,000 signatures per second on a single
2.7GHz CPU core.

Old rules: no limit for number of bytes hashed to generate
signature hashes

New rule: 1.3 gigabytes hashed per 8MB block to generate
signature hashes

Block 364,422 contains a single ~1MB transaction that requires
1.2GB of data hashed to generate signature hashes.

TODO: benchmark Core's sighash-creation code ('openssl speed sha256'
reports something like 1GB per second on my machine).
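
(For anyone who hasn't looked at the existing rules: "inaccurately-counted"
above refers to the old counting, which statically scans script opcodes and
charges every CHECKMULTISIG the worst-case 20 sigops no matter how many keys
it actually checks. A rough sketch of that legacy counting, for flavor only --
this is not the code in the branch above:)

    #include <cstdint>
    #include <vector>

    // Opcode byte values match the Bitcoin script constants.
    static const uint8_t OP_CHECKSIG = 0xac;
    static const uint8_t OP_CHECKSIGVERIFY = 0xad;
    static const uint8_t OP_CHECKMULTISIG = 0xae;
    static const uint8_t OP_CHECKMULTISIGVERIFY = 0xaf;

    // Assumes 'opcodes' is the script already decoded into a list of
    // opcodes (real code must also skip over push data).
    unsigned LegacySigOpCount(const std::vector<uint8_t>& opcodes)
    {
        unsigned n = 0;
        for (uint8_t op : opcodes) {
            if (op == OP_CHECKSIG || op == OP_CHECKSIGVERIFY)
                n += 1;
            else if (op == OP_CHECKMULTISIG || op == OP_CHECKMULTISIGVERIFY)
                n += 20; // even a 1-of-2 multisig is charged 20
        }
        return n;
    }

Counting "accurately" instead means tallying the ECDSA verifications the
interpreter actually performs while validating, so for example a 1-of-2
CHECKMULTISIG costs at most 2, not 20.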

Note that in normal operation most validation work is done as transactions
are received from the network, and can be cached so it doesn't have to be
repeated when a new block is found. The limits described in this BIP are
intended, as the existing sigop limits are intended, to be an extra "belt
and suspenders" measure to mitigate any possible attack that involves
creating and broadcasting a very expensive-to-verify block.
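
(The caching can be as simple as remembering which (signature hash, public
key, signature) triples have already verified, so a signature checked when
the transaction was relayed is not checked again when the block arrives. A
minimal sketch of the idea -- not Core's actual implementation, which is
bounded in size and thread-safe:)

    #include <set>
    #include <string>
    #include <tuple>

    // sighash, serialized public key, serialized signature
    typedef std::tuple<std::string, std::string, std::string> SigCacheKey;

    class SignatureCache {
        std::set<SigCacheKey> valid; // triples that have already verified
    public:
        bool Known(const SigCacheKey& k) const { return valid.count(k) != 0; }
        void Remember(const SigCacheKey& k) { valid.insert(k); }
    };

    // At block-validation time:
    //   if (!cache.Known(key)) {
    //       if (!ECDSAVerify(...)) return false; // invalid signature
    //       cache.Remember(key);
    //   }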


Draft BIP:

  BIP: ??
  Title: Consensus rules to limit CPU time required to validate blocks
  Author: Gavin Andresen 
  Status: Draft
  Type: Standards Track
  Created: 2015-07-24

==Abstract==

Mitigate potential CPU exhaustion denial-of-service attacks by limiting
the maximum number of ECDSA signature verifications done per block,
and limiting the number of bytes hashed to compute signature hashes.

==Motivation==

Sergio Demian Lerner reported that a maliciously constructed block could
take several minutes to validate, due to the way signature hashes are
computed for OP_CHECKSIG/OP_CHECKMULTISIG ([[
https://bitcointalk.org/?topic=140078|CVE-2013-2292]]).
Each signature validation can require hashing most of the transaction's
bytes, resulting in O(s*b) scaling (where s is the number of signature
operations and b is the number of bytes in the transaction, excluding
signatures). If there are no limits on s or b the result is O(n^2) scaling
(where n is a multiple of the number of bytes in the block).

This potential attack was mitigated by changing the default relay and
mining policies so transactions larger than 100,000 bytes were not
relayed across the network or included in blocks. However, a miner
not following the default policy could choose to include a
transaction that filled the entire one-megabyte block and took
a long time to validate.

==Specification==

After deployment, the existing consensus rule for maximum number of
signature operations per block (20,000, counted in two different,
idiosyncratic, ad-hoc ways) shall be replaced by the following two rules:

1. The maximum number of ECDSA verify operations required to validate
all of the transactions in a block must be less than or equal to
the maximum block size in bytes divided by 100 (rounded down).

2. The maximum number of bytes hashed to compute ECDSA signatures for
all transactions in a block must be less than or equal to the
maximum block size in bytes times 160.
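
For illustration only (not normative BIP text), the two limits scale with
the maximum block size like this; at 8,000,000 bytes they work out to
80,000 verifications and 1,280,000,000 bytes hashed, the figures quoted in
the commit message above:

    #include <cstdint>

    // Illustrative sketch only.
    struct BlockLimits {
        uint64_t maxSigOps;      // rule 1: ECDSA verify operations per block
        uint64_t maxBytesHashed; // rule 2: bytes hashed for sighashes per block
    };

    BlockLimits LimitsForBlockSize(uint64_t maxBlockSizeBytes)
    {
        BlockLimits limits;
        limits.maxSigOps = maxBlockSizeBytes / 100;     // integer division = rounded down
        limits.maxBytesHashed = maxBlockSizeBytes * 160;
        return limits;
    }

    // LimitsForBlockSize(8000000) -> { 80000, 1280000000 }
    // A block is invalid if the tallied verifications or bytes hashed
    // exceed these limits.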

==Compatibility==

This change is compatible with existing transaction-creation software,
because transactions larger than 100,000 bytes have been considered
"non-standard" (they are not relayed or mined by default) for years, and a
block full of "standard" transactions will be well under the limits.

Software that assembles transactions into blocks and software that validates
blocks must be updated to enforce the new consensus rules.

==Deployment==

This change will be deployed with BIP 100 or BIP 101.

==Discussion==

Linking these consensus rules to the maximum block size allows more
transactions and/or transactions with more inputs or outputs to be included
if the maximum block size increases.

The constants are chosen to be maximally compatible with the existing
consensus rule, and to virtually eliminate the possibility that bitcoins
could be lost if somebody had locked some funds in a pre-signed,
expensive-to-validate, locktime-in-the-future transaction.

But they are also chosen to put a reasonable upper bound on the CPU time
required to validate a maximum-sized block.

===Alternatives to this BIP:===

1. A simple limit on transaction size (e.g. any transaction in a block must
be 100,000
bytes or smaller).

2. Fix the CHECKSIG/CHECKMULTISIG opcodes so they don't re-hash variations
of the transaction's data. This is the "most correct" solution, but would
require updating every piece of transaction-creating and
transaction-validating software to change how they compute the signature
hash.

Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 23, 2015 at 3:14 PM, Eric Lombrozo  wrote:

> Mainstream usage of cryptocurrency will be enabled primarily by direct
> party-to-party contract negotiation…with the use of the blockchain
> primarily as a dispute resolution mechanism. The block size isn’t about
> scaling but about supply and demand of finite resources. As demand for
> block space increases, we can address it either by increasing computational
> resources (block size) or by increasing fees. But to do the former we need
> a way to offset the increase in cost by making sure that those who
> contribute said resources have incentive to do so.


There are so many things wrong with this paragraph I just can't let it
slide.

"Mainstream usage will be enabled primarily by..."  Maybe. Maybe not, we
don't know what use case(s) will primarily take cryptocurrency mainstream.
I believe it is a big mistake to pick one and bet "THIS is going to be the
winner".

"we can address it either by... or..."  False dichotomy. There are lots of
things we can do to decrease costs, and a lot of things have ALREADY been
done (e.g. running a pruned full node).  I HATE the "it must be this or
that" "us or them" attitude, it fosters unproductive bickering and
negativity.

(and yes, I'm human, I'm sure you can find instances in the recent past
where I did it, too... mea culpa)

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-23 Thread Gavin Andresen via bitcoin-dev
On Thu, Jul 23, 2015 at 12:17 PM, Tom Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On 7/23/2015 5:17 AM, Jorge Timón via bitcoin-dev wrote:
> > they will simply advance the front and start another battle, because
> > their true hidden faction is the "not ever side". Please, Jeff, Gavin,
> > Mike, show me that I'm wrong on this point. Please, answer my question
> > this time. If "not now", then when?
>
> Bitcoin has all the hash power.  The merkle root has effectively
> infinite capacity.  We should be asking HOW to scale the supporting
> information propagation system appropriately, not WHEN to limit the
> capacity of the primary time-stamping machine.
>
> We haven't tried yet.  I can't answer for the people you asked, but
> personally I haven't thought much about when we should declare failure.


Yes! Let's plan for success!

I'd really like to move from "IMPOSSIBLE because...  (electrum hasn't been
optimized
(by the way: you should run on SSDs, LevelDB isn't designed for spinning
disks),
what if the network is attacked?  (attacked HOW???), current p2p network is
using
the simplest, stupidest possible block propagation algorithm...)"

... to "lets work together and work through the problems and scale it up."

I'm frankly tired of all the negativity here; so tired of it I've decided
to mostly ignore
all the debate for a while, not respond to misinformation I see being spread
(like "miners have some incentive to create slow-to-propagate blocks"),
work with people like Tom and Mike who have a 'let's get it done' attitude,
and
focus on what it will take to scale up.

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-23 Thread Gavin Andresen via bitcoin-dev
On Mon, Jul 20, 2015 at 4:55 PM, Gregory Maxwell  wrote:

> On Mon, Jul 20, 2015 at 7:10 PM, Gavin Andresen via bitcoin-dev
>  wrote:
> > Mitigate a potential CPU exhaustion denial-of-service attack by limiting
> > the maximum size of a transaction included in a block.
>
> This seems like a fairly indirect approach. The resource being watched
> for is not the size (otherwise two transactions for 200k would be
> strictly worse than one 200k transaction) but the potential of N^2
> costs related to repeated hashing in checksig; which this ignores.
>

To get a feeling for the implementation complexity / correctness tradeoff,
I implemented changes to Core to count exactly how many signature operations
are performed and how many bytes are hashed to compute sighashes:

https://github.com/gavinandresen/bitcoin-git/commit/08ecd6f67d977271faa92bc1890b8f94b15c2792

I haven't benchmarked how much keeping track of the counts affects
performance (but I expect
it to be minimal compared to ECDSA signature validation, accessing inputs
from the UTXO, etc).

I like the idea of a consensus rule that directly addresses the attack--
e.g. "validating a transaction must not require more than X megabytes
hashed to compute signature hashes." (or: "validating a block must not
require more than X megabytes hashed...", which is more symmetric with the
current "maximum number of sigops allowed per block")

Thinking about this and looking at block 364,292, I think I see a simple
optimization that would speed up validation for transactions with lots of
inputs: use SIGHASH_ANYONECANPAY for all of the inputs instead of
SIGHASH_ALL.

(Which would make the transaction malleable-- if that's a concern, then
make one of the inputs SIGHASH_ALL and the rest SIGHASH_ANYONECANPAY. I
think this is a change that should be made to Core, and that other wallets
should make as well.)
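
Rough arithmetic on why SIGHASH_ANYONECANPAY helps: under the legacy
signature-hash algorithm each SIGHASH_ALL input hashes a copy of the
transaction containing all of the inputs, so total bytes hashed grow
roughly quadratically with the number of inputs, while with
SIGHASH_ANYONECANPAY the copy contains only the input being signed and the
total grows linearly. A back-of-the-envelope sketch with approximate
(made-up) per-field sizes, not an exact serialization:

    #include <cstdint>
    #include <cstdio>

    // Very rough model of legacy (pre-segwit) sighash data sizes.
    uint64_t ApproxBytesHashed(uint64_t nIn, uint64_t nOut, bool anyoneCanPay)
    {
        const uint64_t inputBytes  = 41; // outpoint (36) + empty script (1) + sequence (4)
        const uint64_t outputBytes = 34; // value (8) + typical scriptPubKey (~26)
        const uint64_t overhead    = 10; // version, counts, locktime, hash type

        // Each input signs its own copy of the transaction; the copy holds
        // all inputs for SIGHASH_ALL, or just the signed one for ANYONECANPAY.
        uint64_t inputsInCopy = anyoneCanPay ? 1 : nIn;
        uint64_t copySize = overhead + inputsInCopy * inputBytes + nOut * outputBytes;
        return nIn * copySize;
    }

    int main()
    {
        // e.g. a sweep transaction with 5,000 inputs and 1 output:
        std::printf("SIGHASH_ALL:          ~%llu bytes hashed\n",
                    (unsigned long long)ApproxBytesHashed(5000, 1, false)); // ~1.0 GB
        std::printf("SIGHASH_ANYONECANPAY: ~%llu bytes hashed\n",
                    (unsigned long long)ApproxBytesHashed(5000, 1, true));  // ~0.4 MB
        return 0;
    }

In other words, the ANYONECANPAY version hashes a few hundred kilobytes
instead of roughly a gigabyte for the same transaction.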

---

I'd like to hear from maintainers of other full implementations: how hard
would it be for you to keep track of the number of bytes hashed to validate
a transaction or block, and use it as a consensus rule?

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Bitcoin Node Speed Test

2015-07-23 Thread Gavin Andresen via bitcoin-dev
Ahh, data... a breath of fresh air...

Can you re-analyze for 8MB blocks?  There is no current proposal for 20MB
blocks.

Also, most hashing power is now using Matt Corallo's fast block propagation
network; slow 'block' propagation to merchants/end-users doesn't really
matter (as long as it doesn't get anywhere near the 10-minute block time).
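
For reference, the per-peer upload rate the survey's thresholds (quoted
below: 20 seconds, and 6 seconds = 1% of the block interval) imply at
various block sizes -- a throwaway back-of-the-envelope calculation:

    #include <cstdio>

    // Rough arithmetic only: upload rate needed to send one block to one
    // peer within a given time.
    int main()
    {
        const double blockMB[] = { 1.0, 8.0, 20.0 };
        const double seconds[] = { 20.0, 6.0 };
        for (double mb : blockMB)
            for (double s : seconds)
                std::printf("%5.1f MB block in %4.1f s -> %5.2f MB/s per peer\n",
                            mb, s, mb / s);
        return 0;
    }

    // 8 MB in 20 s  -> 0.40 MB/s;  8 MB in 6 s  -> 1.33 MB/s
    // 20 MB in 20 s -> 1.00 MB/s;  20 MB in 6 s -> 3.33 MB/s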

On Thu, Jul 23, 2015 at 10:19 AM, slurms--- via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On this day, the Bitcoin network was crawled and reachable nodes surveyed
> to find their maximum throughput in order to determine if it can safely
> support a faster block rate. Specifically this is an attempt to prove or
> disprove the common statement that 1MB blocks were only suitable slower
> internet connections in 2009 when Bitcoin launched, and that connection
> speeds have improved to the point of obviously supporting larger blocks.
>
>
> The testing methodology is as follows:
>
>  * Nodes were randomly selected from a peers.dat, 5% of the reachable
> nodes in the network were contacted.
>
>  * A random selection of blocks was downloaded from each peer.
>
>  * There is some bias towards higher connection speeds, very slow
> connections (<30KB/s) timed out in order to run the test at a reasonable
> rate.
>
>  * The connecting node was in Amsterdam with a 1GB NIC.
>
>
> Results:
>
>  * 37% of connected nodes failed to upload blocks faster than 1MB/s.
>
>  * 16% of connected nodes uploaded blocks faster than 10MB/s.
>
>  * Raw data, one line per connected node, kilobytes per second
> http://pastebin.com/raw.php?i=6b4NuiVQ
>
>
> This does not support the theory that the network has the available
> bandwidth for increased block sizes, as in its current state 37% of nodes
> would fail to upload a 20MB block to a single peer in under 20 seconds
> (referencing a number quoted by Gavin). If the bar for suitability is
> placed at taking only 1% of the block time (6 seconds) to upload one block
> to one peer, then 69% of the network fails for 20MB blocks. For comparison,
> only 10% fail this metric for 1MB blocks.
> ___
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>



-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP: Short Term Use Addresses for Scalability

2015-07-22 Thread Gavin Andresen via bitcoin-dev
On Wed, Jul 22, 2015 at 4:34 PM, Tier Nolan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> It also requires most clients to be updated to support the new address
> system.


That's the killer: introducing Yet Another Type of Bitcoin Address takes a
very long time and requires a lot of people to change their code. At least,
that was the lesson learned when we introduced P2SH addresses.

I think it's just not worth it for a very modest space savings (10 bytes,
when scriptSig+scriptPubKey is about 120 bytes), especially with the
extreme decrease in security (going from 2^160 to 2^80 to brute-force).

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-21 Thread Gavin Andresen via bitcoin-dev
On Mon, Jul 20, 2015 at 4:55 PM, Gregory Maxwell  wrote:

> On Mon, Jul 20, 2015 at 7:10 PM, Gavin Andresen via bitcoin-dev
>  wrote:
> > Mitigate a potential CPU exhaustion denial-of-service attack by limiting
> > the maximum size of a transaction included in a block.
>
> This seems like a fairly indirect approach. The resource being watched
> for is not the size (otherwise two transactions for 200k would be
> strictly worse than one 200k transactions) but the potential of N^2
> costs related to repeated hashing in checksig; which this ignores.
>

Yes.  The tradeoff is implementation complexity: it is trivial to check
transaction size, not as trivial to count signature operations, because
number-of-bytes-in-transaction doesn't require any context.

But I would REALLY hate myself if in ten years a future version of me was
struggling to get consensus to move away from some stupid 100,000 byte
transaction size limit I imposed to mitigate a potential DoS attack.

So I agree, a limit on sigops is the right way to go. And if that is being
changed, might as well accurately count exactly how many sigops a
transaction actually requires to be validated...

-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-20 Thread Gavin Andresen via bitcoin-dev
On Mon, Jul 20, 2015 at 3:43 PM, Tier Nolan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> This could render transactions with a locktime in the future as
> unspendable.
>
> It is pretty low probability that someone has created a >100kB locked
> transaction though.
>
> It violates the principle that no fork should render someone's coins
> unspendable.
>

Mmmm you'd have to:

a) Have lost or thrown away the keys to the unspent transaction outputs
b) Have created a locktime'd transaction, more than 100,000 bytes big, with
a lock time after the BIP100/101 switchover times
c) Have some special relationship with a miner that you trust to still be
around when the transaction unlocks, and who would mine the
bigger-than-standard transaction for you.

I don't think adding extra complexity to consensus-critical code to support
such an incredibly unlikely
scenario is the right decision here. I think it is more likely that the
extra complexity would trigger a bug
that causes a loss of bitcoin greater than the amount of bitcoin tied up in
locktime'ed transactions
(because I think there are approximately zero BTC tied up in >100K
locktime'ed transactions).


RE: limit size of transaction+parents:  Feature creep, belongs in another
BIP in my opinion. This one is focused on fixing CVE-2013-2292.


-- 
--
Gavin Andresen
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


[bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-20 Thread Gavin Andresen via bitcoin-dev
Draft BIP to prevent a potential CPU exhaustion attack if a significantly
larger maximum blocksize is adopted:

  Title: Limit maximum transaction size
  Author: Gavin Andresen 
  Status: Draft
  Type: Standards Track
  Created: 2015-07-17

==Abstract==

Mitigate a potential CPU exhaustion denial-of-service attack by limiting
the maximum size of a transaction included in a block.

==Motivation==

Sergio Demian Lerner reported that a maliciously constructed block could
take several minutes to validate, due to the way signature hashes are
computed for OP_CHECKSIG/OP_CHECKMULTISIG ([[
https://bitcointalk.org/?topic=140078|CVE-2013-2292]]).
Each signature validation can require hashing most of the transaction's
bytes, resulting in O(s*b) scaling (where s is the number of signature
operations and b is the number of bytes in the transaction, excluding
signatures). If there are no limits on s or b the result is O(n^2) scaling.

This potential attack was mitigated by changing the default relay and
mining policies so transactions larger than 100,000 bytes were not
relayed across the network or included in blocks. However, a miner
not following the default policy could choose to include a
transaction that filled the entire one-megabyte block and took
a long time to validate.

==Specification==

After deployment, the maximum serialized size of a transaction allowed
in a block shall be 100,000 bytes.
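
For illustration only (not normative text), enforcing the rule is a simple
size check on each serialized transaction during block validation; a
minimal sketch:

    #include <cstddef>
    #include <vector>

    // Illustrative sketch only.
    static const size_t MAX_TX_SIZE = 100000; // bytes

    // 'rawTx' is the transaction exactly as serialized inside the block.
    bool TransactionSizeOk(const std::vector<unsigned char>& rawTx)
    {
        return rawTx.size() <= MAX_TX_SIZE;
    }

    // A block containing any transaction for which TransactionSizeOk()
    // returns false would be rejected after deployment.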

==Compatibility==

This change should be compatible with existing transaction-creation
software, because transactions larger than 100,000 bytes have been
considered "non-standard" (they are not relayed or mined by default) for
years.

Software that assembles transactions into blocks and that validates blocks
must be updated to reject oversize transactions.

==Deployment==

This change will be deployed with BIP 100 or BIP 101.

==Discussion==

Alternatives to this BIP:

1. A new consensus rule that limits the number of signature operations in a
single transaction instead of limiting size. This might be more compatible
with future opcodes that require larger-than-100,000-byte transactions,
although any such future opcodes would likely require changes to the Script
validation rules anyway (e.g. the 520-byte limit on data items).

2. Fix the SIG opcodes so they don't re-hash variations of the
transaction's data. This is the "most correct" solution, but would require
updating every piece of transaction-creating and transaction-validating
software to change how they compute the signature hash.

==References==

[[https://bitcointalk.org/?topic=140078|CVE-2013-2292]]: Sergio Demian
Lerner's original report
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev