Re: [Bitcoin-development] Let's discuss what to do if SHA256d is actually broken

2014-06-03 Thread Rusty Russell
Luke Dashjr l...@dashjr.org writes:
 On Tuesday, June 03, 2014 4:29:55 AM xor wrote:
 Hi,
 
 I thought a lot about the worst case scenario of SHA256d being broken in a
 way which could be abused to
 A) reduce the work of mining a block by some significant amount
 B) reduce the work of mining a block to zero, i.e. allow instant mining.

 C) fabricate past blocks entirely.

 If SHA256d is broken, Bitcoin as it is fails entirely.

I normally just lurk, but I looked at this issue last year, so thought
I'd chime in.  I never finished my paper though...

In the event of an *anticipated* weakening of SHA256, a gradual
transition is possible which avoids massive financial disruption.

My scheme used a similar solve-SHA256-then-solve-SHA3 approach
(requiring an extra nonce for the SHA3), with the difficulty of SHA256
ramping down and SHA3 ramping up over the transition (e.g. for a 1 year
transition, start with 25/26 SHA2 and 1/26 SHA3).
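
For concreteness, the ramp might look like this (an illustration of
mine, assuming 2016-block retarget periods, roughly 26 per year; not the
paper's code):

#include <cstdio>

// Illustration: SHA256d's share of the work steps down from 25/26 to 0
// while SHA3's steps up from 1/26 to 26/26, one step per retarget period.
int main()
{
    const int periods = 26; // ~one year of 2016-block retarget periods

    for (int p = 0; p < periods; p++) {
        double sha3 = (double)(p + 1) / periods;
        double sha2 = 1.0 - sha3;
        printf("period %2d: SHA256d %5.1f%%  SHA3 %5.1f%%\n",
               p + 1, sha2 * 100.0, sha3 * 100.0);
    }
    return 0;
}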

The hard part is to estimate what the SHA3 difficulty should be over
time.  My solution was to adjust only the SHA3 target on every *second*
difficulty change (otherwise assume that SHA2 and SHA3 have changed
rate equally, and adjust targets on both).
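
Roughly, in code (a simplified sketch of the rule, not the sim code
itself):

// Sketch: retarget both chains' targets as usual, except that every
// *second* adjustment corrects only the SHA3 target.  Scaling a target
// by actual/expected time is the usual retarget rule: slow blocks
// (ratio > 1) raise the target, making hashing easier.
void Retarget(double &sha2_target, double &sha3_target,
              double actual_time, double expected_time, int adjustment)
{
    double ratio = actual_time / expected_time;

    if (adjustment % 2 == 1) {
        // Only SHA3 absorbs the error: lets a badly-guessed initial
        // SHA3 difficulty converge even while SHA2's rate also moves.
        sha3_target *= ratio;
    } else {
        // Assume SHA2 and SHA3 hashrates changed equally: scale both.
        sha2_target *= ratio;
        sha3_target *= ratio;
    }
}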

This works reasonably well even if the initial SHA3 difficulty is way
off, and also if SHA2 breaks completely halfway through the transition.

I can provide more details if anyone is interested.

Cheers,
Rusty.



Re: [Bitcoin-development] Let's discuss what to do if SHA256d is actually broken

2014-06-05 Thread Rusty Russell
Charlie 'Charles' Shrem csh...@gmail.com writes:
 Hey Rusty,

 This is intriguing, do you have a writeup somewhere I can read more about?

OK, ignore the FIXMEs, but I rehashed my stupid sim code, added some
graphs to the (clearly unfinished) paper and uploaded it to github:

https://github.com/rustyrussell/bitcoin_hashchange

PDF is in there too, for easier reading.

Cheers,
Rusty.



[Bitcoin-development] Increasing regularity of block times?

2014-10-30 Thread Rusty Russell
Hi all,

I've been toying with an algorithm to place a ceiling on
confirmation latency by allowing weaker blocks after a certain time.
Hope this isn't noise, but thought someone must have considered this
before, or know of flaws in the scheme?

Gory details:
http://rustyrussell.github.io/pettycoin/2014/10/30/More-Regular-Block-Times.html

Thanks,
Rusty.



Re: [Bitcoin-development] Increasing regularity of block times?

2014-10-30 Thread Rusty Russell
Gregory Maxwell gmaxw...@gmail.com writes:
 On Thu, Oct 30, 2014 at 11:18 PM, Rusty Russell ru...@rustcorp.com.au wrote:
 Hi all,

 I've been toying with an algorithm to place a ceiling on
 confirmation latency by allowing weaker blocks after a certain time.
 Hope this isn't noise, but thought someone must have considered this
 before, or know of flaws in the scheme?

 Gory details:
 http://rustyrussell.github.io/pettycoin/2014/10/30/More-Regular-Block-Times.html

 Irregularity is a required property for convergence. Imagine what
 would happen in a network where blocks were produced at an exact
 interval: almost everyone would produce one at the exact same time,
 the network would fragment, and because the process would continue it
 would not converge.

Your point is well made.

If everyone published their easy blocks at the 20 minute mark,
divergence would be a problem (though with 6/7 blocks being normal, the
network would probably recover).  I was proposing to relay them as
normal, they're just not accepted until 20 minutes.

(Though with the suggested variant of accepting the most-compatible
rather than first-seen block, this isn't so critical).
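
Concretely, my acceptance rule is something like this (a simplified
sketch; the post has the gory details):

#include <cstdint>

// Sketch: a weak block (meeting only the easier target) is relayed as
// normal but not accepted into the chain until 20 minutes after its
// parent.  Lower hash value == more work done.
bool AcceptBlock(uint64_t hash_value, uint64_t full_target,
                 uint64_t weak_target, int64_t now, int64_t parent_time)
{
    const int64_t WEAK_BLOCK_DELAY = 20 * 60; // seconds

    if (hash_value <= full_target)
        return true; // full-difficulty block: accept immediately

    if (hash_value <= weak_target) // weak block: accept only after delay
        return now - parent_time >= WEAK_BLOCK_DELAY;

    return false; // fails even the weak target
}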

 It is precisely the variance being some huge multiple of the network
 radius which allows the network to converge at all.

I hadn't thought about it that way, but I assumed GHOST mitigates this
down to some lower limit.  Or?

 Bitcoin testnet implements a rule that allows lower difficulty blocks
 after a delay (20 minutes, in fact), but it's a testing-toy... not
 secure or intended to be so. At least one altcoin has copied that
 behavior and been exploited on account of it.

Agreed, that would be foolish.  Note that in my proposal, block
timestamps wouldn't reflect the delay (removing the incentive to push
timestamps forward, but making it harder to judge a historical
blockchain's validity).

 If you're simply looking for faster evidence that the network is
 working on a particular transaction set, at some lower timescale,
 then that's already possible.  e.g. look into how the p2pool sharechain
 builds a consensus around mining work used for pooling. The same
 mechanism can be used to give faster transaction selection evidence.

Nice idea.  Publishing WIP blocks like this could provide evidence, but
you'd have to incentivize miners to publish them.  Can you think of a
way to do that (which beats simply reducing the block time)?

 I'll dig up some citations for you later. Cheers.

Thanks for your time,
Rusty.



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Rusty Russell
Pieter Wuille pieter.wui...@gmail.com writes:
 Hello everyone,

 We've been aware of the risk of depending on OpenSSL for consensus
 rules for a while, and were trying to get rid of this as part of BIP
 62 (malleability protection), which was however postponed due to
 unforeseen complexities. The recent events (see the thread titled
 "OpenSSL 1.0.0p / 1.0.1k incompatible, causes blockchain rejection."
 on this mailing list) have made it clear that the problem is very
 real, however, and I would prefer to have a fundamental solution for
 it sooner rather than later.

OK, I worked up a clearer (but more verbose) version with fewer
magic numbers.  More importantly, feel free to steal the test cases.

One weirdness is the restriction on maximum total length, rather than a
32 byte (33 with 0-prepad) limit on signatures themselves.

Apologies for my babytalk C++.  Am sure there's a neater way.

/* Licensed under Creative Commons zero (public domain). */
#include <vector>
#include <cstdlib>
#include <cassert>

#ifdef CLARIFY
bool ConsumeByte(const std::vector<unsigned char> &sig, size_t &off,
                 unsigned int &val)
{
    if (off >= sig.size()) return false;

    val = sig[off++];
    return true;
}

bool ConsumeTypeByte(const std::vector<unsigned char> &sig, size_t &off,
                     unsigned int t)
{
    unsigned int type;
    if (!ConsumeByte(sig, off, type)) return false;

    return (type == t);
}

bool ConsumeNonZeroLength(const std::vector<unsigned char> &sig, size_t &off,
                          unsigned int &len)
{
    if (!ConsumeByte(sig, off, len)) return false;

    // Zero-length integers are not allowed.
    return (len != 0);
}

bool ConsumeNumber(const std::vector<unsigned char> &sig, size_t &off,
                   unsigned int len)
{
    // Length of number should be within signature.
    if (off + len > sig.size()) return false;

    // Negative numbers are not allowed.
    if (sig[off] & 0x80) return false;

    // Zero bytes at the start are not allowed, unless it would
    // otherwise be interpreted as a negative number.
    if (len > 1 && (sig[off] == 0x00) && !(sig[off+1] & 0x80)) return false;

    // Consume number itself.
    off += len;
    return true;
}

// Consume a DER encoded integer, update off if successful.
bool ConsumeDERInteger(const std::vector<unsigned char> &sig, size_t &off) {
    unsigned int len;

    // Type byte must be integer.
    if (!ConsumeTypeByte(sig, off, 0x02)) return false;
    if (!ConsumeNonZeroLength(sig, off, len)) return false;
    // Now the BE encoded value itself.
    if (!ConsumeNumber(sig, off, len)) return false;

    return true;
}

bool IsValidSignatureEncoding(const std::vector<unsigned char> &sig) {
    // Format: 0x30 [total-length] 0x02 [R-length] [R] 0x02 [S-length] [S] [sighash]
    // * total-length: 1-byte length descriptor of everything that follows,
    //   excluding the sighash byte.
    // * R-length: 1-byte length descriptor of the R value that follows.
    // * R: arbitrary-length big-endian encoded R value. It cannot start with
    //   any null bytes, unless the first byte that follows is 0x80 or higher,
    //   in which case a single null byte is required.
    // * S-length: 1-byte length descriptor of the S value that follows.
    // * S: arbitrary-length big-endian encoded S value. The same rules apply.
    // * sighash: 1-byte value indicating what data is hashed.

    // Accept empty signature as correctly encoded (but invalid) signature,
    // even though it is not strictly DER.
    if (sig.size() == 0) return true;

    // Maximum size constraint.
    if (sig.size() > 73) return false;

    size_t off = 0;

    // A signature is of type compound.
    if (!ConsumeTypeByte(sig, off, 0x30)) return false;

    unsigned int len;
    if (!ConsumeNonZeroLength(sig, off, len)) return false;

    // Make sure the length covers the rest (except sighash).
    if (len + 1 != sig.size() - off) return false;

    // Check R value.
    if (!ConsumeDERInteger(sig, off)) return false;

    // Check S value.
    if (!ConsumeDERInteger(sig, off)) return false;

    // There should be exactly one byte left (the sighash).
    return off + 1 == sig.size();
}
#else
bool IsValidSignatureEncoding(const std::vector<unsigned char> &sig) {
    // Format: 0x30 [total-length] 0x02 [R-length] [R] 0x02 [S-length] [S] [sighash]
    // * total-length: 1-byte length descriptor of everything that follows,
    //   excluding the sighash byte.
    // * R-length: 1-byte length descriptor of the R value that follows.
    // * R: arbitrary-length big-endian encoded R value. It must use the
    //   shortest possible encoding for a positive integer (which means no
    //   null bytes at the start, except a single one when the next byte has
    //   its highest bit set).
    // * S-length: 1-byte length descriptor of the S value that follows.
    // * S: arbitrary-length big-endian encoded S value. The same rules apply.
    // * sighash: 1-byte value indicating what data is hashed.
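
A couple of illustrative vectors (my own, not the original test cases),
exercised against the CLARIFY version above:

#include <cassert>
#include <vector>

bool IsValidSignatureEncoding(const std::vector<unsigned char> &sig);

int main()
{
    // Minimal valid encoding: 0x30, len 6, R = 02 01 01, S = 02 01 01,
    // plus one sighash byte.
    std::vector<unsigned char> ok = {0x30, 0x06, 0x02, 0x01, 0x01,
                                     0x02, 0x01, 0x01, 0x01};
    assert(IsValidSignatureEncoding(ok));

    // Same, but with R's top bit set: a negative number, so rejected.
    std::vector<unsigned char> negR = ok;
    negR[4] = 0x81;
    assert(!IsValidSignatureEncoding(negR));

    // The empty signature is accepted as correctly encoded (but invalid).
    assert(IsValidSignatureEncoding(std::vector<unsigned char>()));
    return 0;
}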

Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-20 Thread Rusty Russell
Pieter Wuille pieter.wui...@gmail.com writes:
 Hello everyone,

 We've been aware of the risk of depending on OpenSSL for consensus
 rules for a while, and were trying to get rid of this as part of BIP
 62 (malleability protection), which was however postponed due to
 unforeseen complexities. The recent events (see the thread titled
 "OpenSSL 1.0.0p / 1.0.1k incompatible, causes blockchain rejection."
 on this mailing list) have made it clear that the problem is very
 real, however, and I would prefer to have a fundamental solution for
 it sooner rather than later.

 I therefore propose a softfork to make non-DER signatures illegal
 (they've been non-standard since v0.8.0). A draft BIP text can be
 found on:

 https://gist.github.com/sipa/5d12c343746dad376c80

Cut and paste bug in the last check:

// Null bytes at the start of R are not allowed, unless it would otherwise be
// interpreted as a negative number.
if (lenS > 1 && (sig[lenR + 6] == 0x00) && !(sig[lenR + 7] & 0x80))
    return false;

You mean null bytes at the start of S.

Cheers,
Rusty.



Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-07 Thread Rusty Russell
Peter Todd p...@petertodd.org writes:
 That said, if people have strong feelings about this, I would be willing
 to make OP_CLTV work as follows:

  <nLockTime> 1 OP_CLTV

 Where the 1 selects absolute mode, and all others act as OP_NOPs. A
 future relative CLTV could then be a future soft-fork implemented as
 follows:

  <relative nLockTime> 2 OP_CLTV

Mildly prefer to put that the other way around.

i.e. the OP_NOP1 becomes OP_EXTENSION_PREFIX, and the next op defines
which extended opcode it is (must be a push).
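
For illustration (my sketch of the idea, not a concrete spec):

  <nLockTime> OP_EXTENSION_PREFIX 1
  <relative nLockTime> OP_EXTENSION_PREFIX 2

with the push following the prefix selecting the actual operation.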

Cheers,
Rusty.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-15 Thread Rusty Russell
Tier Nolan tier.no...@gmail.com writes:
 On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 An example would
 be tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size -
 3*utxo_consumed_size).


 This could be implemented as a soft fork too.

 * 1MB hard size limit
 * 900kB soft limit

I like this too.

Some tweaks:

1) Nomenclature: call tx_size tx_cost and real_size tx_bytes?

2) If we have a reasonable hard *byte* limit, I don't think that we need
   the MAX().  In fact, it's probably OK to go negative.

3) ... or maybe not, if any consumed UTXO was generated before the soft
   fork (reducing Tier's perverse incentive).

4) How do we measure UTXO size?  There are some constant-ish things in
   there (eg. txid as key, height, outnum, amount).  Maybe just add 32
   to scriptlen?

5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
   1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
   correctly, unlike now).   

This last one implies that the initial cost limit would be 2M, but in
practice probably somewhere in the middle.

  tx_cost = 50*num-CHECKSIG
+ tx_bytes
+ 4*utxo_created_size
- 3*utxo_consumed_size

 A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
 size of 252 bytes.

Now cost == 352.
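
Spelling out the arithmetic (my check; this treats the utxo sizes as
simple counts, which is how the 252 above was derived, and assumes one
CHECKSIG per input):

#include <cassert>

int main()
{
    int tx_bytes = 250, checksigs = 2, utxo_created = 2, utxo_consumed = 2;

    int adjusted = tx_bytes + 4 * utxo_created - 3 * utxo_consumed; // 252
    int tx_cost  = 50 * checksigs + adjusted;                       // 352

    assert(adjusted == 252);
    assert(tx_cost == 352);
    return 0;
}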

Cheers,
Rusty.



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-18 Thread Rusty Russell
Tier Nolan tier.no...@gmail.com writes:
 On Sat, May 16, 2015 at 1:22 AM, Rusty Russell ru...@rustcorp.com.au
 wrote:
 3) ... or maybe not, if any consumed UTXO was generated before the soft
fork (reducing Tier's perverse incentive).

 The incentive problem can be fixed by excluding UTXOs from blocks before a
 certain count.

 UTXOs in blocks before 375000 don't count.

OK.  Be nice if these were cleaned up, but I guess it's a sunk cost.

 4) How do we measure UTXO size?  There are some constant-ish things in
there (eg. txid as key, height, outnum, amount).  Maybe just add 32
to scriptlen?


 They can be stored as a fixed digest.  That can be any size, depending on
 security requirements.

 Gmaxwell's cost proposal is 3-4 bytes per UTXO change.  It isn't
 4*UTXO.size - 3*UTXO.size

He said utxo_created_size, not utxo_created, so I assumed scriptlen?

 It is only a small nudge.  With only 10% of the block space to play with it
 can't be massive.

But you made that number up?  The soft cap and hard byte limit are
different beasts, so there's no need for soft cost cap < hard byte
limit.

 This requires that transactions include scriptPubKey information when
 broadcasting them.

Brilliant!  I completely missed that possibility...

 5) Add a CHECKSIG cost.  Naively, since we allow 20,000 CHECKSIGs and
1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted
correctly, unlike now).

 This last one implies that the initial cost limit would be 2M, but in
 practice probably somewhere in the middle.

   tx_cost = 50*num-CHECKSIG
 + tx_bytes
 + 4*utxo_created_size
 - 3*utxo_consumed_size

  A 250 byte transaction with 2 inputs and 2 outputs would have an adjusted
  size of 252 bytes.

 Now cost == 352.

 That is too large a cost for a 10% block change.  It could be included in
 the block size hard fork though.

I don't think so.  Again, you're mixing units.

 I think having one combined cost for
 transactions is good.  It means far fewer spread-out transaction checks.
 The code for the cost formula would be in one place.

Agreed!  Unfortunately there'll always be 2, because we really do want a
hard byte limit: it's total tx bytes which brings most concerns about
centralization.  But ideally it'll be so rarely hit that it can be ~
ignored (and certainly not optimized for).

Cheers,
Rusty.



Re: [Bitcoin-development] [RFC] Canonical input and output ordering in transactions

2015-06-06 Thread Rusty Russell
Mark Friedenbach m...@friedenbach.org writes:
 Rusty, this doesn't play well with SIGHASH_SINGLE which is used in
 assurance contracts among other things. Sometimes the ordering is set by
 the signing logic itself...

Ah, I forgot about that particular wart. Yech.  Implies that you can
order inputs or outputs, not both.

Something like "outputs must be in order; inputs which do not
CHECK(MULTI)SIG(_VERIFY) a SIGHASH_SINGLE sig must be in order with
respect to each other".  But that's much less trivial since it implies
script evaluation.

In other news, I noticed Kristov Atlas's concurrent proposal just after
I posted this (via reddit).  He used far more words, but didn't note
this issue either AFAICT.

Thanks!
Rusty.



[Bitcoin-development] [RFC] Canonical input and output ordering in transactions

2015-06-05 Thread Rusty Russell
Title: Canonical Input and Output Ordering
Author: Rusty Russell ru...@rustcorp.com.au
Discussions-To: Bitcoin Dev bitcoin-development@lists.sourceforge.net
Status: Draft
Type: Standards Track
Created: 2015-06-06

Abstract

This BIP provides a canonical ordering of inputs and outputs when
creating transactions.

Motivation

Most bitcoin wallet implementations randomize the outputs of
transactions they create to avoid trivial linkage analysis (especially
of change outputs); however, implementations have made mistakes in this
area in the past.

Using a canonical ordering has the same effect, but is simpler, more
obvious if incorrect, and can eventually be enforced by IsStandard() or
even a soft-fork.

Specification

Inputs should be ordered like so:
index (lower value first)
txid (little endian order, lower byte first)

Outputs should be ordered like so:
amount (lower value first)
script (starting from first byte, lower byte first, shorter wins)
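
For illustration, comparators implementing the above (a sketch with
assumed in-memory types; the reference implementation below is
authoritative):

#include <algorithm>
#include <cstdint>
#include <vector>

// Assumed in-memory shapes, for illustration only.
struct TxIn {
    std::vector<unsigned char> txid; // 32 bytes, little endian (wire order)
    uint32_t index;                  // output index being spent
};

struct TxOut {
    uint64_t amount;                   // satoshis
    std::vector<unsigned char> script; // scriptPubKey
};

// Inputs: index first (lower value first), then txid from byte 0 up.
bool InputBefore(const TxIn &a, const TxIn &b)
{
    if (a.index != b.index) return a.index < b.index;
    return std::lexicographical_compare(a.txid.begin(), a.txid.end(),
                                        b.txid.begin(), b.txid.end());
}

// Outputs: amount first (lower value first), then script from byte 0 up;
// lexicographical_compare makes a strict prefix (shorter script) sort first.
bool OutputBefore(const TxOut &a, const TxOut &b)
{
    if (a.amount != b.amount) return a.amount < b.amount;
    return std::lexicographical_compare(a.script.begin(), a.script.end(),
                                        b.script.begin(), b.script.end());
}

A wallet would then std::sort() its inputs and outputs with these
before signing.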

Rationale

Any single wallet is already free to implement this, but if other
wallets do not it would reduce privacy by making those transactions
stand out.  Thus a BIP is appropriate, especially if this were to
become an IsStandard() rule once widely adopted.

Because integers are fast to compare, they're sorted first, before the
lexicographical ordering.

The other input fields do not influence the sort order, as no valid
transaction can have two inputs with the same txid and index.

Reference Implementation

https://github.com/rustyrussell/bitcoin/tree/bip-in-out-ordering



Re: [Bitcoin-development] Consensus-enforced transaction replacement via sequence numbers

2015-06-09 Thread Rusty Russell
Tier Nolan tier.no...@gmail.com writes:
 What are the use cases for relative lock time verify?  I have 1 and I think
 that is the kind of thing it is useful for.

 I think that most cases are just to guarantee that the other party has a
 chance to react.  This means that 8191 blocks should be more than enough
 (and most would set it lower).

 For long term, the absolute version is just as good.  That depends on use
 cases.  "You can't take step 4 until 3 months after step 3 has completed"
 doesn't seem useful.

Lightning channels want them exactly like this to revoke old
transactions, which could be ancient on long-lived channels.

Cheers,
Rusty.
