Re: [Bitcoin-development] Hard fork via miner vote

2015-06-20 Thread Pieter Wuille
On Sat, Jun 20, 2015 at 7:26 PM, David Vorick david.vor...@gmail.com
wrote:

 I see it as unreasonable to expect all nodes to upgrade during a hardfork.
 If you are intentionally waiting for that to happen, it's possible for an
 extreme minority of nodes to hold the rest of the network hostage by simply
 refusing to upgrade. However, you want nodes to be able to protest until it
 is clear that they have lost the battle, without being at risk of getting
 hardforked out of the network unexpectedly.


You can't observe the majority of nodes, only miners, weighted by
hashrate. If you need a mechanism for protest, that should happen before
the hard fork change code is rolled out. I am assuming a completely
uncontroversial change, in order to not confuse this discussion with the
debate about what hard forks should be done.

So I am not talking about protest, just about deploying a change. And yes,
it is unreasonable to expect that every single node will upgrade. But there
is a difference between ignoring old unmaintained nodes that do not
influence anyone's behaviour, and ignoring the nodes that power miners
producing actual blocks. In addition, having no blocks on the old chain is
safer than producing a small number, as you want full nodes that have not
noticed the fork to fail rather than see a slow but otherwise functional
chain.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-16 Thread Pieter Wuille
I don't see why existing software could create a 40-byte OP_RETURN but not
larger? The limitation comes from a relay policy in full nodes, not a
limitation in wallet software... and PoPs are not relayed on the network.

Regarding sharing, I think you're talking about a different use case. Say
you want to pay for 1-week valid entrance to some venue. I thought the
purpose of the PoP was to be sure that only the person who paid for it, and
not anyone else can use it during that week.

My argument against that is that the original payer can also hand the
private keys in his wallet to someone else, who would then become able to
create PoPs for the service. He does not lose anything by this, assuming
the address is not reused.

So, using a token does not change anything, except it can be provided to
the payer - instead of relying on creating an implicit identity based on
who seems to have held particular private keys in the past.
On Jun 16, 2015 9:41 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:

 2015-06-16 21:25 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
  You can't avoid sharing the token, and you can't avoid sharing the
  private keys used for signing either. If they are single use, you
  don't lose anything by sharing them.

 Forwarding the PoP request would be a way to avoid sharing keys, as
 suggested above.

 
  Also you are not creating a real transaction. Why does the OP_RETURN
  limitation matter?

 This was discussed in the beginning of this thread: The idea is to
 simplify implementation. Existing software can be used as is to sign
 and validate PoPs

 Regards,
 Kalle

 
  On Jun 16, 2015 9:22 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
 
  Thank you for your comments Pieter! Please find my answers below.
 
  2015-06-16 16:31 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
   On Mon, Jun 15, 2015 at 1:59 PM, Kalle Rosenbaum ka...@rosenbaum.se
   wrote:
  
   2015-06-15 12:00 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
   I'm not sure if we will be able to support PoP with CoinJoin. Maybe
   someone with more insight into CoinJoin have some input?
  
  
   Not really. The problem is that you assume a transaction corresponds
   to a single payment. This is true for simple wallet use cases, but
   not compatible with CoinJoin, or with systems that for example would
   want to combine multiple payments in a single transaction.
  
 
  Yes, you are right. It's not compatible with CoinJoin and the likes.
 
  
   48 bits seems low to me, but it does indeed solve the problem. Why
   not 128 or 256 bits?
 
  The nonce is limited because of the OP_RETURN output being limited to
  40 bytes of data: 2 bytes version, 32 bytes txid, 6 bytes nonce.
 
  
 Why does anyone care who paid? This is like walking into a coffee shop,
 noticing I don't have money with me, let my friend pay for me, and then
 have the shop insist that I can't drink it because I'm not the buyer.
  
   If you pay as you use the service (i.e. pay for coffee upfront),
   there's no need for PoP. Please see the Motivation section. But you
   are right that you must have the wallet(s) that paid at hand when you
   issue a PoP.
  
   
Track payments, don't try to assign identities to payers.
  
   Please elaborate, I don't understand what you mean here.
  
  
   I think that is a mistake. You should not assume that the wallet who
   held the coins is the payer/buyer. That's what I said earlier; you're
   implicitly creating an identity (the one who holds these keys) based
   on the transaction. This seems fundamentally wrong to me, and not
   necessary. The receiver should not care who paid or how, he should
   care what was paid for.
 
  You are saying that it's a problem that the wallet used to pay, must
  also be used to issue the PoP? That may very well be a problem in some
  cases. People using PoP should of course be aware of its limitations
  and act accordingly, i.e. don't pay for concert tickets for a friend
  and expect your friend to be able to enter the arena with her wallet.
  As Tom Harding noted, it is possible to transfer keys to your friend's
  wallet, but that might not be desirable if those keys are also used
  for other payments. Also that would weaken the security of an HD
  wallet, since a chain code along with a private key would reveal all
  keys in that tree. Another solution is that your friend forwards the
  PoP request to your wallet, through twitter or SMS, and you send the
  PoP for her. Maybe that forwarding mechanism can be built into wallets
  and automated so that the wallet automatically suggests to sign the
  PoP for your friend. This is probably something to investigate
  further, but not within the scope of this BIP.
 
  Of course the simplest solution would be to send money to your friend
  first so that she can pay for the ticket from her own wallet, but
  that's not always feasible.
 
  
   The easiest solution to this IMHO would be an extension

Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-15 Thread Pieter Wuille
I did misunderstand that. That changes things significantly.

However, having paid is not the same as having had access to the input
coins. What about shared wallets or coinjoin?

Also, if I understand correctly, there is no commitment to anything you're
trying to say about the sender? So once I obtain a proof-of-payment from
you about something you paid, I can go claim that it's mine?

Why does anyone care who paid? This is like walking into a coffee shop,
noticing I don't have money with me, let my friend pay for me, and then
have the shop insist that I can't drink it because I'm not the buyer.

Track payments, don't try to assign identities to payers.
On Jun 15, 2015 11:35 AM, Kalle Rosenbaum ka...@rosenbaum.se wrote:

 Hi Pieter!

 It is intended to be a proof that you *have paid* for something. Not
 that you have the intent to pay for something. You cannot use PoP
 without a transaction to prove.

 So, yes, it's just a proof of access to certain coins that you no longer
 have.

 Maybe I don't understand you correctly?

 /Kalle

 2015-06-15 11:27 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
  Now that you have removed the outputs, I don't think it's even an
  intent of payment, but just a proof of access to certain coins.
 
  On Jun 15, 2015 11:24 AM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
 
  Hi all!
 
  I have made the discussed changes and updated my implementation
  (https://github.com/kallerosenbaum/poppoc) accordingly. These are the
  changes:
 
  * There is now only one output, the pop output, of value 0.
  * The sequence number of all inputs of the PoP must be set to 0. I
  chose to set it to 0 for all inputs for simplicity.
  * The lock_time of the PoP must be set to 499999999 (max block height
  lock time).
 
  The comments so far have been mainly positive or neutral. Are there any
  major objections against any of the two proposals? If not, I will ask
  Gregory Maxwell to assign them BIP numbers.
 
  The two BIP proposals can be found at
  https://github.com/kallerosenbaum/poppoc/wiki/Proof-of-Payment-BIP and
  https://github.com/kallerosenbaum/poppoc/wiki/btcpop-scheme-BIP. The
 source
  for the Proof of Payment BIP proposal is also in-lined below.
 
  A number of alternative names have been proposed:
 
  * Proof of Potential
  * Proof of Control
  * Proof of Signature
  * Signatory Proof
  * Popo: Proof of payment origin
  * Pots: Proof of transaction signer
  * proof of transaction intent
  * Declaration of intent
  * Asset-access-and-action-affirmation, AAaAA, or A5
  * VeriBit
  * CertiBTC
  * VBit
  * PayID
 
  Given this list, I still think Proof of Payment is the most
 descriptive
  to non-technical people.
 
  Regards,
  Kalle
 
 
  #
  pre
BIP: BIP number
Title: Proof of Payment
Author: Kalle Rosenbaum ka...@rosenbaum.se
Status: Draft
Type: Standards Track
  Created: date created on, in ISO 8601 (yyyy-mm-dd) format
  /pre
 
  == Abstract ==
 
  This BIP describes how a wallet can prove to a server that it has the
  ability to sign a certain transaction.
 
  == Motivation ==
 
  There are several scenarios in which it would be useful to prove that
  you have paid for something. For example:

  * A pre-paid hotel room where your PoP functions as a key to the door.
  * An online video rental service where you pay for a video and watch
  it on any device.
  * An ad-sign where you pay in advance for e.g. 2 weeks exclusivity.
  During this period you can upload new content to the sign whenever you
  like using PoP.
  * Log in to a pay site using a PoP.
  * A parking lot you pay for monthly and the car authenticates itself
  using PoP.
  * A lottery where all participants pay to the same address, and the
  winner is selected among the transactions to that address. You
  exchange the prize for a PoP for the winning transaction.

  With Proof of Payment, these use cases can be achieved without any
  personal information (user name, password, e-mail address, etc) being
  involved.
 
  == Rationale ==
 
  Desirable properties:
 
  # A PoP should be generated on demand.
  # It should only be usable once to avoid issues due to theft.
  # It should be able to create a PoP for any payment, regardless of
  script type (P2SH, P2PKH, etc.).
  # It should prove that you have enough credentials to unlock all the
  inputs of the proven transaction.
  # It should be easy to implement by wallets and servers to ease
  adoption.

  Current methods of proving a payment:

  * In BIP0070, the PaymentRequest together with the transactions
  fulfilling the request makes some sort of proof. However, it does not
  meet 1, 2 or 4 and it obviously only meets 3 if the payment is made
  through BIP0070. Also, there's no standard way to request/provide the
  proof. If standardized it would probably meet 5.
  * Signing messages, chosen by the server, with the private keys used to
  sign the transaction. This could meet 1 and 2

Re: [Bitcoin-development] Miners: You'll (very likely) need to upgrade your Bitcoin Core node soon to support BIP66

2015-06-13 Thread Pieter Wuille
On Fri, Jun 12, 2015 at 10:37 AM, Tier Nolan tier.no...@gmail.com wrote:

 Once the 75% threshold is reached, miners who haven't updated are at risk
 of mining on invalid blocks.


Note that we've been above the 75% threshold since June 7th (before Peter's
mail was sent).
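For context on the counting behind that threshold, here is a sketch (my own illustration, not code from the thread) of the BIP34-style supermajority check BIP66 uses: enforcement for version-3 blocks begins once 750 of the last 1000 blocks are version 3 or higher, and version-2 blocks become invalid at 950.

```python
# Sketch of BIP66-style rollout counting over a sliding 1000-block window.
# The thresholds (750/1000 relaxed, 950/1000 strict) are from the BIP66 spec.

def support(last_versions, new_version=3):
    """Fraction of the last 1000 blocks signalling at least `new_version`."""
    window = last_versions[-1000:]
    return sum(1 for v in window if v >= new_version) / len(window)

# Example: 760 upgraded blocks out of the last 1000 -> past the 75% mark.
versions = [3] * 760 + [2] * 240
assert support(versions) == 0.76
assert support(versions) >= 0.75  # the threshold the subject line warns about
```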

-- 
Pieter


Re: [Bitcoin-development] Scaling Bitcoin with Subchains

2015-06-13 Thread Pieter Wuille
On Wed, May 20, 2015 at 4:55 AM, Andrew onelinepr...@gmail.com wrote:

 Hi

 I briefly mentioned something about this on the bitcoin-dev IRC room. In
 general, it seems experts (like sipa i.e. Pieter) are against using
 sidechains as a way of scaling. As I only have a high level understanding
 of the Bitcoin protocol, I cannot be sure if what I want to do is actually
 defined as a side chain, but let me just propose it, and please let me know
 whether it can work, and if not why not (I'm not scared of digging into
 more technical resources in order to fully understand). I do have a good
 academic/practical background for Bitcoin, and I'm ready to contribute code
 if needed (one of my contributions includes a paper wallet creator written
 in C).


In your proposal, transactions go to a chain based on the addresses involved.
We can reasonably assume that different people's wallets will tend to be
distributed uniformly over several sidechains to hold their transactions
(if they're not, there is no scaling benefit anyway...). That means that
for an average transaction, you will need a cross-chain transfer in order
to get the money to the recipient (as their wallet will usually be
associated to a chain that is different from your own). Either you use an
atomic swap (which actually means you end up briefly with coins in the
destination chain, and require multiple transactions and a medium delay),
or you use the 2way peg transfer mechanism (which is very slow, and reduces
the security the recipient has to SPV).

Whatever you do, the result will be that most transactions are:
* Slower (a bit, or a lot, depending on what mechanism you use).
* More complex, with more failure modes.
* Require more and larger transactions (causing a total net extra load on
all verifiers together).

And either:
* Less secure (because you rely on a third party to do an atomic swap with,
or because of the 2 way peg transfer mechanism which has SPV security)
* Doesn't offer any scaling benefit (because the recipient needs to fully
validate both his own and the receiver chain).

In short, you have not added any scaling at all, or reduced the security of
the system significantly, as well as made it significantly less convenient
to use.

So no, sidechains are not a direct means for solving any of the scaling
problems Bitcoin has. What they offer is a mechanism for easier
experimentation, so that new technology can be built and tested without
needing to introduce a new currency first (with the related speculative and
network effect problems). That experimentation could eventually lead us to
discover mechanisms for better scaling, or for more scalability/security
tradeoffs (see for example the Witness Segregation that Elements Alpha has).

-- 
Pieter


[Bitcoin-development] Mining centralization pressure from non-uniform propagation speed

2015-06-12 Thread Pieter Wuille
Hello all,

I've created a simulator for Bitcoin mining which goes a bit further than
the one Gavin used for his blog post a while ago. The main difference is
support for links with different latency and bandwidth, because of the
clustered configuration described below. In addition, it supports different
block sizes, takes fees into account, does difficulty adjustments, and
takes processing and mining delays into account. It also simulates longer
periods of time, and averages the result of many simulations running in
parallel until the variance on the result is low enough.

The code is here: https://github.com/sipa/bitcoin-net-simul

The configuration used in the code right now simulates two groups of miners
(one 80%=25%+25%+30%, one 20%=5%+5%+5%+5%), which are well-connected
internally, but are only connected to each other through a slow 2 Mbit/s
link.

Here are some results.

This shows how the group of smaller miners loses around 8% of their
relative income (if they create larger blocks, their loss percentage goes
up slightly further):

Configuration:
  * Miner group 0: 80.00% hashrate, blocksize 2000.00
  * Miner group 1: 20.00% hashrate, blocksize 100.00
  * Expected average block size: 1620.00
  * Average fee per block: 0.25
  * Fee per byte: 0.000154
Result:
  * Miner group 0: 81.648985% income (factor 1.020612 with hashrate)
  * Miner group 1: 18.351015% income (factor 0.917551 with hashrate)

When fees become more important however, and half of a block's income is
due to fees, the effect becomes even stronger (a 15% loss), and the optimal
way to compete for small miners is to create larger blocks as well (smaller
blocks for them result in even less income):

Configuration:
  * Miner group 0: 80.00% hashrate, blocksize 2000.00
  * Miner group 1: 20.00% hashrate, blocksize 2000.00
  * Expected average block size: 2000.00
  * Average fee per block: 25.00
  * Fee per byte: 0.012500
Result:
  * Miner group 0: 83.063545% income (factor 1.038294 with hashrate)
  * Miner group 1: 16.936455% income (factor 0.846823 with hashrate)
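For intuition about why the income factor falls below the hashrate share, a toy orphan-race model (my own crude simplification, not the simulator's logic) reproduces the direction of the effect: a poorly connected group loses some fraction of its blocks in races that start during the propagation delay.

```python
# Toy model: the small group forfeits roughly (delay / block_interval) * h_big
# of its blocks to orphan races against the well-connected majority, while the
# majority is assumed to lose none. All numbers here are illustrative.

def income_shares(h_big, h_small, delay_s, interval_s=600.0):
    loss = (delay_s / interval_s) * h_big   # chance a small-group block loses a race
    big, small = h_big, h_small * (1.0 - loss)
    total = big + small
    return big / total, small / total

big, small = income_shares(0.80, 0.20, 60.0)
assert small / 0.20 < 1.0                 # income factor below 1, as simulated
assert abs(big + small - 1.0) < 1e-9      # shares are normalised
```

This leaves out difficulty adjustment, fees, and bandwidth contention, which is exactly what the full simulator adds.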

The simulator is not perfect. It doesn't take into account that multiple
blocks/relays can compete for the same bandwidth, or that nodes cannot
process multiple blocks at once.

The numbers used may be unrealistic, and I don't mean this as a prediction
for real-world events. However, it does very clearly show the effects of
larger blocks on centralization pressure of the system. Note that this also
does not make any assumption of destructive behavior on the network - just
simple profit maximalization.

Lastly, the code may be buggy; I only did some small sanity tests with
simple networks.

-- 
Pieter


Re: [Bitcoin-development] Mining centralization pressure from non-uniform propagation speed

2015-06-12 Thread Pieter Wuille
If there is a benefit in producing larger more-fee blocks if they propagate
slowly, then there is also a benefit in including high-fee transactions
that are unlikely to propagate quickly through optimized relay protocols
(for example: very recent transactions, or out-of-band receoved ones). This
effect is likely an order of magnitude less important still, but the effect
is likely the same.
On Jun 12, 2015 8:31 PM, Peter Todd p...@petertodd.org wrote:

 On Fri, Jun 12, 2015 at 01:21:46PM -0400, Gavin Andresen wrote:
  Nice work, Pieter. You're right that my simulation assumed bandwidth for
  'block' messages isn't the bottleneck.
 
  But doesn't Matt's fast relay network (and the work I believe we're both
  planning on doing in the near future to further optimize block
 propagation)
  make both of our simulations irrelevant in the long-run?

 Then simulate first the relay network assuming 100% of txs use it, and
 secondly, assuming 100%-x use it.

 For instance, is it in miners' advantage in some cases to sabotage the
 relay network? The analysis says yes, so let's simulate that. Equally, even
 the relay network isn't instant.

 --
 'peter'[:-1]@petertodd.org
 127ab1d576dc851f374424f1269c4700ccaba2c42d97e778



Re: [Bitcoin-development] Mining centralization pressure from non-uniform propagation speed

2015-06-12 Thread Pieter Wuille
I'm merely proving the existence of the effect.
On Jun 12, 2015 8:24 PM, Mike Hearn m...@plan99.net wrote:

 are only connected to each other through a slow 2 Mbit/s link.


 That's very slow indeed. For comparison, plain old 3G connections
 routinely cruise around 7-8 Mbit/sec.

 So this simulation is assuming a speed dramatically worse than a mobile
 phone can get!



Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-06 Thread Pieter Wuille
On Sat, Jun 6, 2015 at 5:05 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:

  What do you gain by making PoPs actually valid transactions? You could
  for example change the signature hashing algorithm (prepend a constant
  string, or add a second hashing step) for signing, rendering the
  signatures in a PoP unusable for actual transactions, while still
  committing to the same actual transaction. That would also remove the
  need for the OP_RETURN to catch fees.

 The idea is to simplify implementation. Existing software can be used
 as is to sign and validate PoPs. But I do agree that it would be a
 cleaner specification if we would make the PoP invalid as a
 transaction. I'm open to changes here. I do like the idea to prepend a
 constant string. But that would require changes in transaction signing
 and validation code, wouldn't it?


Yes, of course. An alternative is adding a 21M BTC output at the end, or
bitflipping the txin prevout hashes, or another reversible transformation
on the transaction data that is guaranteed to invalidate it.
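As an illustration of such a reversible transformation (my own sketch, not code from the thread): inverting every bit of each input's prevout txid keeps all the data recoverable, while the transformed transaction almost certainly spends outputs that do not exist and so can never be mined.

```python
# Bit-flip the 32-byte prevout hash of each input. The transform is its own
# inverse, so a verifier can undo it to recover the committed transaction,
# but the flipped version is unusable as a real spend.

def flip_prevout(txid: bytes) -> bytes:
    """Invert every bit of a 32-byte prevout hash."""
    assert len(txid) == 32
    return bytes(b ^ 0xFF for b in txid)

real_txid = bytes.fromhex("aa" * 32)
mangled = flip_prevout(real_txid)

assert mangled == bytes.fromhex("55" * 32)
assert flip_prevout(mangled) == real_txid  # reversible, as required
```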

I think that the risk of asking people to sign something that is not an
actual transaction, but could be used as one, is very scary.


  Also, I would call it proof of transaction intent, as it's a
  commitment to a transaction and proof of its validity, but not a proof
  that an actual transaction took place, nor a means to prevent it from
  being double spent.


 Naming is hard. I think a simpler name that explains what its main
 purpose is (prove that you paid for something) is better than a name
 that exactly tries to explain what it is.


Proof of Payment indeed does make me think it's something that proves you
paid. But as described, that is not what a PoP does. It proves the ability
to create a particular transaction, and committing to it. There is no
actual payment involved (plus, payment makes me think you're talking about
BIP70 payments, not simple Bitcoin transactions).


 Proof of transaction
 intent does not help me understand what this is about. But I would
 like to see more name suggestions. The name does not prevent people
 from using it for other purposes, e.g. internet over telephone network.


I don't understand why something like Proof of Transaction Intent would
be incompatible with internet over telephone network either...

-- 
Pieter


Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-06 Thread Pieter Wuille
What do you gain by making PoPs actually valid transactions? You could for
example change the signature hashing algorithm (prepend a constant string,
or add a second hashing step) for signing, rendering the signatures in a
PoP unusable for actual transactions, while still committing to the same
actual transaction. That would also remove the need for the OP_RETURN to
catch fees.

Also, I would call it proof of transaction intent, as it's a commitment
to a transaction and proof of its validity, but not a proof that an actual
transaction took place, nor a means to prevent it from being double spent.



On Sat, Jun 6, 2015 at 4:35 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:

 Hi

 Following earlier posts on Proof of Payment I'm now proposing the
 following BIP (To read it formatted instead, go to
 https://github.com/kallerosenbaum/poppoc/wiki/Proof-of-Payment-BIP).

 Regards,
 Kalle Rosenbaum

 pre
   BIP: BIP number
   Title: Proof of Payment
   Author: Kalle Rosenbaum ka...@rosenbaum.se
   Status: Draft
   Type: Standards Track
   Created: date created on, in ISO 8601 (yyyy-mm-dd) format
 /pre

 == Abstract ==

 This BIP describes how a wallet can prove to a server that it has the
 ability to sign a certain transaction.

 == Motivation ==

 There are several scenarios in which it would be useful to prove that you
 have paid for something. For example:

 * A pre-paid hotel room where your PoP functions as a key to the door.
 * An online video rental service where you pay for a video and watch it on
 any device.
 * An ad-sign where you pay in advance for e.g. 2 weeks exclusivity. During
 this period you can upload new content to the sign whenever you like using
 PoP.
 * Log in to a pay site using a PoP.
 * A parking lot you pay for monthly and the car authenticates itself using
 PoP.
 * A lottery where all participants pay to the same address, and the winner
 is selected among the transactions to that address. You exchange the prize
 for a PoP for the winning transaction.

 With Proof of Payment, these use cases can be achieved without any
 personal information (user name, password, e-mail address, etc) being
 involved.

 == Rationale ==

 Desirable properties:

 # A PoP should be generated on demand.
 # It should only be usable once to avoid issues due to theft.
 # It should be able to create a PoP for any payment, regardless of script
 type (P2SH, P2PKH, etc.).
 # It should prove that you have enough credentials to unlock all the
 inputs of the proven transaction.
 # It should be easy to implement by wallets and servers to ease adoption.

 Current methods of proving a payment:

 * In BIP0070, the PaymentRequest together with the transactions fulfilling
 the request makes some sort of proof. However, it does not meet 1, 2 or 4
 and it obviously only meets 3 if the payment is made through BIP0070. Also,
 there's no standard way to request/provide the proof. If standardized it
 would probably meet 5.
 * Signing messages, chosen by the server, with the private keys used to
 sign the transaction. This could meet 1 and 2 but probably not 3. This is
 not standardized either. 4 Could be met if designed so.

 If the script type is P2SH, any satisfying script should do, just like for
 a payment. For M-of-N multisig scripts, that would mean that any set of M
 keys should be sufficient, not necessarily the same set of M keys that
 signed the transaction. This is important because strictly demanding the
 same set of M keys would undermine the purpose of a multisig address.

 == Specification ==

 === Data structure ===

 A proof of payment for a transaction T, here called PoP(T), is used to
 prove that one has ownership of the credentials needed to unlock all the
 inputs of T. It has the exact same structure as a bitcoin transaction with
 the same inputs and outputs as T and in the same order as in T. There is
 also one OP_RETURN output inserted at index 0, here called the pop output.
 This output must have the following format:

  OP_RETURN <version> <txid> <nonce>

 {|
 ! Field !! Size [B] !! Description
 |-
 | <version> || 2  || Version, little endian, currently 0x01 0x00
 |-
 | <txid> || 32 || The transaction to prove
 |-
 | <nonce> || 6  || Random data
 |}

 The value of the pop output is set to the same value as the transaction
 fee of T. Also, if the outputs of T contains an OP_RETURN output, that
 output must not be included in the PoP because there can only be one
 OP_RETURN output in a transaction. The value of that OP_RETURN output is
 instead added to the value of the pop output.
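A minimal sketch (my own illustration, not from the BIP text) of assembling the pop output script from those fields: OP_RETURN is opcode 0x6a, and the 40-byte payload (2-byte little-endian version, 32-byte txid, 6-byte nonce) fits in a single direct push.

```python
import os

OP_RETURN = 0x6a  # script opcode for a data-carrying, unspendable output

def pop_output_script(txid: bytes, nonce: bytes, version: int = 1) -> bytes:
    """Build the raw scriptPubKey of the pop output: OP_RETURN <40-byte push>."""
    assert len(txid) == 32 and len(nonce) == 6
    payload = version.to_bytes(2, "little") + txid + nonce
    assert len(payload) == 40
    # Push lengths 1-75 are encoded as a single length byte before the data.
    return bytes([OP_RETURN, len(payload)]) + payload

script = pop_output_script(bytes.fromhex("11" * 32), os.urandom(6))
assert len(script) == 42 and script[0] == OP_RETURN
assert script[2:4] == b"\x01\x00"  # version 1, little endian, per the table
```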

 An illustration of the PoP data structure and its original payment is
 shown below.

 pre
   T
  +--+
  |inputs   | outputs|
  |   Value | Value   Script |
  +--+
  |input0 1 | 0   pay to A   |
  |input1 3 | 2   OP_RETURN some data  |
  

Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-06 Thread Pieter Wuille
On Sat, Jun 6, 2015 at 5:18 PM, Luke Dashjr l...@dashjr.org wrote:

 I also agree with Pieter, that this should *not* be so cleanly compatible
 with Bitcoin transactions. If you wish to share code, perhaps using an
 invalid opcode rather than OP_RETURN would be appropriate.


Using an invalid opcode would merely send funds into the void. It wouldn't
invalidate the transaction.

-- 
Pieter


Re: [Bitcoin-development] Version bits proposal

2015-06-03 Thread Pieter Wuille
Thanks for all the comments.

I plan to address the feedback and work on an implementation next week.

On Tue, May 26, 2015 at 6:48 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:

 Hello everyone,

 here is a proposal for how to coordinate future soft-forking consensus
 changes: https://gist.github.com/sipa/bf69659f43e763540550

 It supports multiple parallel changes, as well as changes that get
 permanently rejected without obstructing the rollout of others.

 Feel free to comment. As the gist does not support notifying participants
 of new comments, I would suggest using the mailing list instead.

 This is joint work with Peter Todd and Greg Maxwell.

 --
 Pieter



Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

2015-05-28 Thread Pieter Wuille
 until we have size-independent new block propagation

I don't really believe that is possible. I'll argue why below. To be clear,
this is not an argument against increasing the block size, only against
using the assumption of size-independent propagation.

There are several significant improvements likely possible to various
aspects of block propagation, but I don't believe you can make any part
completely size-independent. Perhaps the remaining aspects result in terms
in the total time that vanish compared to the link latencies for 1 MB
blocks, but there will be some block sizes for which this is no longer the
case, and we need to know where that is the case.

* You can't assume that every transaction is pre-relayed and pre-validated.
This can happen due to non-uniform relay policies (different codebases, and
future things like size-limited mempools), double spend attempts, and
transactions generated before a block had time to propagate. You've
previously argued for a policy of not including too recent transactions,
but that requires a bound on network diameter, and if these late
transactions are profitable, it has exactly the same problem as making
larger blocks non-proportionally more economical for larger pools (if
propagation time is size dependent).
  * This results in extra bandwidth usage for efficient relay protocols,
and if discrepancy estimation mispredicts the size of IBLT or error
correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block
relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be
cached in advance.

* Block validation with 100% known and pre-validated transactions is not
constant time, due to updates that need to be made to the UTXO set (and
future ideas like UTXO commitments would make this effect an order of
magnitude worse).

* More efficient relay protocols also have higher CPU cost for
encoding/decoding.

Again, none of this is a reason why the block size can't increase. If
availability of hardware with higher bandwidth, faster disk/ram access
times, and faster CPUs increases, we should be able to have larger blocks
with the same propagation profile as smaller blocks with earlier technology.

But we should know how technology scales with larger blocks, and I don't
believe we do, apart from microbenchmarks in laboratory conditions.

-- 
Pieter
 On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock b...@mattwhitlock.name
wrote:

 Between all the flames on this list, several ideas were raised that did
 not get much attention. I hereby resubmit these ideas for consideration and
 discussion.

 - Perhaps the hard block size limit should be a function of the actual
 block sizes over some trailing sampling period. For example, take the
 median block size among the most recent 2016 blocks and multiply it by 1.5.
 This allows Bitcoin to scale up gradually and organically, rather than
 having human beings guessing at what is an appropriate limit.


A lot of people like this idea, or something like it. It is nice and
simple, which is really important for consensus-critical code.

With this rule in place, I believe there would be more fee pressure
(miners would be creating smaller blocks) today. I created a couple of
histograms of block sizes to infer what policy miners are ACTUALLY
following today with respect to block size:

Last 1,000 blocks:
  http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.
This graph might be misleading, because transaction volume or fees might
not be high enough over the last few days to fill blocks to whatever limit
miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of
transactions waiting to be confirmed:
   http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of
transactions waiting to be confirmed because miners who choose to create
small blocks got lucky and found more blocks than normal.  In fact, it
looks like that is what happened: more smaller-than-normal blocks were
found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max
block size 600K; miners would definitely be squeezing out transactions /
putting pressure to increase transaction fees. Even a 2x rule (implying
800K max blocks) would, today, be squeezing out transactions / putting
pressure to increase fees.

Using a median size instead of an average means the size can increase or
decrease more quickly. For example, imagine the rule is median of last
2016 blocks and 49% of miners are producing 0-size blocks and 51% are
producing max-size blocks. The median is max-size, so the 51% have total
control over making blocks bigger.  Swap the roles, and the median is
min-size.

Because of that, I think using an average is better.
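The median-versus-average behaviour described above can be sketched in a few
lines (illustrative Python; the 2016-block window and 1.5x multiplier come
from the quoted proposal, while the function and variable names are mine):

```python
def dynamic_caps(recent_sizes, multiplier=1.5):
    """recent_sizes: sizes in bytes of e.g. the last 2016 blocks.
    Returns (median-based cap, average-based cap) in bytes."""
    ordered = sorted(recent_sizes)
    median = ordered[len(ordered) // 2]
    average = sum(recent_sizes) // len(recent_sizes)
    # Median variant: 51% of hashrate mining max-size blocks pins the
    # median at the maximum, giving that majority total control.
    # Average variant: every miner's blocks pull the cap proportionally.
    return int(median * multiplier), int(average * multiplier)
```

For example, with 51% of blocks at 1 MB and 49% empty, the median-based cap
jumps straight to 1.5 MB while the average-based cap stays near 765 KB,
which is the capture effect argued above.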

Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction

2015-05-28 Thread Pieter Wuille
On May 28, 2015 10:42 AM, Raystonn . rayst...@hotmail.com wrote:

 I agree that developers should avoid imposing economic policy.  It is
dangerous for Bitcoin and the core developers themselves to become such a
central point of attack for those wishing to disrupt Bitcoin.

I could not agree more that developers should not be in charge of the
network rules.

Which is why - in my opinion - hard forks cannot be controversial things. A
controversial change to the software, forced to be adopted by the public
because the only alternative is a permanent chain fork, is a use of power
that developers (or anyone) should not have, and an incredibly dangerous
precedent for other changes that only a subset of participants would want.

The block size is also not just an economic policy. It is the compromise
the _network_ chooses to make between utility and various forms of
centralization pressure, and we should treat it as a compromise, and not as
some limit that is inferior to scaling demands.

I personally think the block size should increase, by the way, but only if
we can do it under a policy of doing it after technological growth has been
shown to be sufficient to support it without increased risk.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] Version bits proposal

2015-05-26 Thread Pieter Wuille
Hello everyone,

here is a proposal for how to coordinate future soft-forking consensus
changes: https://gist.github.com/sipa/bf69659f43e763540550

It supports multiple parallel changes, as well as changes that get
permanently rejected without obstructing the rollout of others.

Feel free to comment. As the gist does not support notifying participants
of new comments, I would suggest using the mailing list instead.

This is joint work with Peter Todd and Greg Maxwell.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] First-Seen-Safe Replace-by-Fee

2015-05-26 Thread Pieter Wuille
It's just a mempool policy rule.

Allowing the contents of blocks to change (other than by mining a competing
chain) would be pretty much the largest possible change to Bitcoin's
design.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 1:27 PM, Tier Nolan tier.no...@gmail.com wrote:

 After more thought, I think I came up with a clearer description of the
 recursive version.

 The simple definition is that the hash for the new signature opcode should
 simply assume that the normalized txid system was used since the
 beginning.  All txids in the entire blockchain should be replaced with the
 correct values.

 This requires a full re-index of the blockchain.  You can't work out what
 the TXID-N of a transaction is without knowning the TXID-N of its parents,
 in order to do the replacement.

 The non-recursive version can only handle refunds one level deep.


This was what I was suggesting all along, sorry if I wasn't clear.

-- 
Pieter
--
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 11:04 AM, Christian Decker 
decker.christ...@gmail.com wrote:

 If the inputs to my transaction have been long confirmed I can be
 reasonably safe in assuming that the transaction hash does not change
 anymore. It's true that I have to be careful not to build on top of
 transactions that use legacy references to transactions that are
 unconfirmed or have few confirmations, however that does not invalidate the
 utility of the normalized transaction IDs.


Sufficient confirmations help of course, but make systems like this less
useful for more complex interactions where you have multiple unconfirmed
transactions waiting on each other. I think being able to rely on this
problem being solved unconditionally is what makes the proposal attractive.
For the simple cases, see BIP62.

I remember reading about the SIGHASH proposal somewhere. It feels really
 hackish to me: It is a substantial change to the way signatures are
 verified, I cannot really see how this is a softfork if clients that did
 not update are unable to verify transactions using that SIGHASH Flag and it
 is adding more data (the normalized hash) to the script, which has to be
 stored as part of the transaction. It may be true that a node observing
 changes in the input transactions of a transaction using this flag could
 fix the problem, however it requires the node's intervention.


I think you misunderstand the idea. This is related, but orthogonal to the
ideas about extending the sighash flags that have been discussed here before.

All it's doing is adding a new CHECKSIG operator to script, which, in its
internally used signature hash, 1) removes the scriptSigs from transactions
before hashing 2) replaces the txids in txins by their ntxid. It does not
add any data to transactions, and it is a softfork, because it only impacts
scripts which actually use the new CHECKSIG operator. Wallets that don't
support signing with this new operator would not give out addresses that
use it.


 Compare that to the simple and clean solution in the proposal, which does
 not add extra data to be stored, keeps the OP_*SIG* semantics as they are
 and where once you sign a transaction it does not have to be monitored or
 changed in order to be valid.


OP_*SIG* semantics don't change here either, we're just adding a superior
opcode (which in most ways behaves the same as the existing operators). I
agree with the advantage of not needing to monitor transactions afterwards
for malleated inputs, but I think you underestimate the deployment costs.
If you want to upgrade the world (eventually, after the old index is
dropped, which is IMHO the only point where this proposal becomes superior
to the alternatives) to this, you're changing *every single piece of
Bitcoin software on the planet*. This is not just changing some validation
rules that are opt-in to use, you're fundamentally changing how
transactions refer to each other.

Also, what do blocks commit to? Do you keep using the old transaction ids
for this? Because if you don't, any relayer on the network can invalidate a
block (and have the receiver mark it as invalid) by changing the txids. You
need to somehow commit to the scriptSig data in blocks still so the POW of
a block is invalidated by changing a scriptSig.

There certainly are merits using the SIGHASH approach in the short term (it
 does not require a hard fork), however I think the normalized transaction
 ID is a cleaner and simpler long-term solution, even though it requires a
 hard-fork.


It requires a hard fork, but more importantly, it requires the whole world
to change their software (not just validation code) to effectively use it.
That, plus large up-front deployment costs (doubling the cache size for
every full node for the same propagation speed is not a small thing) which
may not end up being effective.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 12:14 PM, Christian Decker 
decker.christ...@gmail.com wrote:


 On Wed, May 13, 2015 at 8:40 PM Pieter Wuille pieter.wui...@gmail.com
 wrote:

 On Wed, May 13, 2015 at 11:04 AM, Christian Decker 
 decker.christ...@gmail.com wrote:

 If the inputs to my transaction have been long confirmed I can be
 reasonably safe in assuming that the transaction hash does not change
 anymore. It's true that I have to be careful not to build on top of
 transactions that use legacy references to transactions that are
 unconfirmed or have few confirmations, however that does not invalidate the
 utility of the normalized transaction IDs.


 Sufficient confirmations help of course, but make systems like this less
 useful for more complex interactions where you have multiple unconfirmed
 transactions waiting on each other. I think being able to rely on this
 problem being solved unconditionally is what makes the proposal attractive.
 For the simple cases, see BIP62.


 If we are building a long running contract using a complex chain of
 transactions, or multiple transactions that depend on each other, there is
 no point in ever using any malleable legacy transaction IDs and I would
 simply stop cooperating if you tried. I don't think your argument applies.
 If we build our contract using only normalized transaction IDs there is no
 way of suffering any losses due to malleability.


That's correct as long as you stay within your contract, but you likely
want compatibility with other software, without waiting an age before and
after your contract settles on the chain. It's a weaker argument, though, I
agree.

I remember reading about the SIGHASH proposal somewhere. It feels really
 hackish to me: It is a substantial change to the way signatures are
 verified, I cannot really see how this is a softfork if clients that did
 not update are unable to verify transactions using that SIGHASH Flag and it
 is adding more data (the normalized hash) to the script, which has to be
 stored as part of the transaction. It may be true that a node observing
 changes in the input transactions of a transaction using this flag could
 fix the problem, however it requires the node's intervention.


 I think you misunderstand the idea. This is related, but orthogonal to
 the ideas about extending the sighash flags that have been discussed here
 before.

 All it's doing is adding a new CHECKSIG operator to script, which, in its
 internally used signature hash, 1) removes the scriptSigs from transactions
 before hashing 2) replaces the txids in txins by their ntxid. It does not
 add any data to transactions, and it is a softfork, because it only impacts
 scripts which actually use the new CHECKSIG operator. Wallets that don't
 support signing with this new operator would not give out addresses that
 use it.


 In that case I don't think I heard this proposal before, and I might be
 missing out :-)
 So if transaction B spends an output from A, then the input from B
 contains the CHECKSIG operator telling the validating client to do what
 exactly? It appears that it wants us to go and fetch A, normalize it, put
 the normalized hash in the txIn of B and then continue the validation?
 Wouldn't that also need a mapping from the normalized transaction ID to the
 legacy transaction ID that was confirmed?


There would just be an OP_CHECKAWESOMESIG, which can do anything. It can
be identical to how OP_CHECKSIG works now, but with a changed signature
hash algorithm. Optionally (and likely in practice, I think), it
can do various other proposed improvements, like using Schnorr signatures,
having a smaller signature encoding, supporting batch validation, have
extended sighash flags, ...

It wouldn't fetch A and normalize it; that's impossible as you would need
to go fetch all of A's dependencies too and recurse until you hit the
coinbases that produced them. Instead, your UTXO set contains the
normalized txid for every normal txid (which adds around 26% to the UTXO
set size now), but lookups in it remain only by txid.

You don't need a ntxid-txid mapping, as transactions and blocks keep
referring to transactions by txid. Only the OP_CHECKAWESOMESIG operator
would do the conversion, and at most once.
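The lookup structure described here could be sketched as follows
(illustrative Python; the names and shapes are mine, not Bitcoin Core's):

```python
# UTXO set that caches each creating transaction's normalized txid
# alongside its unspent outputs. Lookups stay keyed by the legacy txid;
# only the new checksig operator would read the cached ntxid.
utxo_set = {}  # txid -> {"ntxid": bytes, "outputs": {index: output}}

def add_tx(txid, ntxid, outputs):
    utxo_set[txid] = {"ntxid": ntxid, "outputs": dict(outputs)}

def ntxid_for_input(prev_txid):
    # The ~26% UTXO-set size overhead mentioned in the mail comes from
    # storing this extra 32-byte hash per entry.
    return utxo_set[prev_txid]["ntxid"]
```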

A client that did not update still would have no clue on how to handle
 these transactions, since it simply does not understand the CHECKSIG
 operator. If such a transaction ends up in a block I cannot even catch up
 with the network since the transaction does not validate for me.


As for every softfork, it works by redefining an OP_NOP operator, so old
nodes simply consider these checksigs unconditionally valid. That does mean
you don't want to use them before the consensus rule is forked in
(=enforced by a majority of the hashrate), and that you suffer from the
temporary security reduction that an old full node is unknowingly reduced
to SPV security for these opcodes. However, as a full node wallet, this
problem does not affect

Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 5:48 PM, Aaron Voisine vois...@gmail.com wrote:

 We have $3billion plus of value in this system to defend. The safe,
 conservative course is to increase the block size. Miners already have an
 incentive to find ways to encourage higher fees  and we can help them with
 standard recommended propagation rules and hybrid priority/fee transaction
 selection for blocks that increases confirmation delays for low fee
 transactions.


You may find that the most economical solution, but I can't understand how
you can call it conservative.

Suggesting a hard fork is betting the survival of the entire ecosystem on
the bet that everyone will agree with and upgrade to new suggested software
before a flag date.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 1:32 PM, Tier Nolan tier.no...@gmail.com wrote:


 On Wed, May 13, 2015 at 9:31 PM, Pieter Wuille pieter.wui...@gmail.com
 wrote:


 This was what I was suggesting all along, sorry if I wasn't clear.

 That's great.  So, basically the multi-level refund problem is solved by
 this?


Yes. So to be clear, I think there are 2 desirable end-goal proposals
(ignoring difficulty of changing things for a minute):

* Transactions and blocks keep referring to other transactions by full
txid, but signature hashes are computed off normalized txids (which are
recursively defined to use normalized txids all the way back to coinbases).
Is this what you are suggesting now as well?

* Blocks commit to full transaction data, but transactions and signature
hashes use normalized txids.

The benefit of the latter solution is that it doesn't need fixing up
transactions whose inputs have been malleated, but comes at the cost of
doing a very invasive hard fork.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Long-term mining incentives

2015-05-13 Thread Pieter Wuille
On Wed, May 13, 2015 at 6:13 PM, Aaron Voisine vois...@gmail.com wrote:

 Conservative is a relative term. Dropping transactions in a way that is
 unpredictable to the sender sounds incredibly drastic to me. I'm suggesting
 increasing the blocksize, drastic as it is, is the more conservative choice.


Transactions are already being dropped, in a more indirect way: by people
and businesses deciding to not use on-chain settlement. That is very sad,
but it's completely inevitable that there is space for some use cases and
not for others (at whatever block size). It's only a "things don't fit
anymore" problem when you see on-chain transactions as the only means of
doing payments, and that is already not the case. Increasing the block size
allows for more utility on-chain, but it does not fundamentally add more
use cases - only more growth space for people already invested in being
able to do things on-chain while externalizing the costs to others.


 I would recommend that the fork take effect when some specific large
 supermajority of the pervious 1000 blocks indicate they have upgraded, as a
 safer alternative to a simple flag date, but I'm sure I wouldn't have to
 point out that option to people here.


That only measures miner adoption, which is the least relevant. The
question is whether people using full nodes will upgrade. If they do, then
miners are forced to upgrade too, or become irrelevant. If they don't, the
upgrade is risky with or without miner adoption.

-- 
Pieter
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [BIP] Normalized Transaction IDs

2015-05-13 Thread Pieter Wuille
Normalized transaction ids are only effectively non-malleable when all
inputs they refer to are also non-malleable (or you can have malleability
in 2nd level dependencies), so I do not believe it makes sense to allow
mixed usage of the txids at all. They do not provide the actual benefit of
guaranteed non-malleability before it becomes disallowed to use the old
mechanism. That, together with the approximate resource doubling needed for
the UTXO set (as mentioned earlier) and the fact that an alternative which
is only a softfork is available, makes this a bad idea IMHO.

Unsure to what extent this has been presented on the mailinglist, but the
softfork idea is this:
* Transactions get 2 txids, one used to reference them (computed as
before), and one used in an (extended) sighash.
* The txins keep using the normal txid, so not structural changes to
Bitcoin.
* The ntxid is computed by replacing the scriptSigs in inputs by the empty
string, and by replacing the txids in txins by their corresponding ntxids.
* A new checksig operator is softforked in, which uses the ntxids in its
sighashes rather than the full txid.
* To support efficiently computing ntxids, every tx in the utxo set
(currently around 6M) stores the ntxid, but only supports lookup by txid
still.

This does result in a system where a changed dependency indeed invalidates
the spending transaction, but the fix is trivial and can be done without
access to the private key.
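A minimal sketch of the recursive ntxid computation described above, over a
toy transaction model (the dict fields and repr-based serialization are
stand-ins of my own, not Bitcoin's real wire format):

```python
import hashlib

def sha256d(b):
    """Double SHA-256, as used for Bitcoin hashes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def ntxid(tx, ntxid_of):
    """tx: {"inputs": [(prev_txid, index, script_sig)], "outputs": [...]}
    ntxid_of: maps each prev txid to its already-computed ntxid (e.g. the
    value cached in the UTXO set, so no recursion to coinbases is needed)."""
    # 1) blank the scriptSigs, 2) replace prev txids by their ntxids.
    norm_inputs = [(ntxid_of[prev], idx, b"")
                   for prev, idx, _sig in tx["inputs"]]
    blob = repr((norm_inputs, tx["outputs"])).encode()  # toy serialization
    return sha256d(blob)
```

Two transactions that differ only in their scriptSigs then hash to the same
ntxid, which is exactly the malleability immunity being discussed.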
On May 13, 2015 5:50 AM, Christian Decker decker.christ...@gmail.com
wrote:

 Hi All,

 I'd like to propose a BIP to normalize transaction IDs in order to address
 transaction malleability and facilitate higher level protocols.

 The normalized transaction ID is an alias used in parallel to the current
 (legacy) transaction IDs to address outputs in transactions. It is
 calculated by removing (zeroing) the scriptSig before computing the hash,
 which ensures that only data whose integrity is also guaranteed by the
 signatures influences the hash. Thus if anything causes the normalized ID
 to change it automatically invalidates the signature. When validating a
 client supporting this BIP would use both the normalized tx ID as well as
 the legacy tx ID when validating transactions.

 The detailed writeup can be found here:
 https://github.com/cdecker/bips/blob/normalized-txid/bip-00nn.mediawiki.

 @gmaxwell: I'd like to request a BIP number, unless there is something
 really wrong with the proposal.

 In addition to being a simple alternative that solves transaction
 malleability it also hugely simplifies higher level protocols. We can now
 use template transactions upon which sequences of transactions can be built
 before signing them.

 I hesitated quite a while to propose it since it does require a hardfork
 (old clients would not find the prevTx identified by the normalized
 transaction ID and deem the spending transaction invalid), but it seems
 that hardforks are no longer the dreaded boogeyman nobody talks about.
 I left out the details of how the hardfork is to be done, as it does not
 really matter and we may have a good mechanism to apply a bunch of
 hardforks concurrently in the future.

 I'm sure it'll take time to implement and upgrade, but I think it would be
 a nice addition to the functionality and would solve a long standing
 problem :-)

 Please let me know what you think, the proposal is definitely not set in
 stone at this point and I'm sure we can improve it further.

 Regards,
 Christian


 --
 ___
 Bitcoin-development mailing list
 Bitcoin-development@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development


--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] CLTV opcode allocation; long-term plans?

2015-05-12 Thread Pieter Wuille
I have no strong opinion, but a slight preference for separate opcodes.

Reason: given the current progress, they'll likely be deployed
independently, and maybe the end result is not something that cleanly fits
the current CLTV argument structure.
--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Pieter Wuille
Miners do not care about the age of a UTXO entry, apart from two exceptions.
It is also economically irrelevant.
* There is a free transaction policy, which sets a small portion of block
space aside for transactions which do not pay sufficient fee. This is
mostly an altruistic way of encouraging Bitcoin adoption. As a DoS
prevention mechanism, there is a requirement that these free transactions
are of sufficient priority (computed as BTC-days-destroyed per byte),
essentially requiring these transactions to consume another scarce
resource, even if not money.
* Coinbase transaction outputs can, as a consensus rule, only be spent
after 100 confirmations. This is to prevent random reorganisations from
invalidating transactions that spend young coinbase transactions (which
can't move to the new chain). In addition, wallets also select more
confirmed outputs first to consume, for the same reason.
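The priority measure mentioned above (BTC-days-destroyed per byte) had
roughly this shape in Bitcoin Core at the time; this is a simplified sketch
(the real code uses a "modified" transaction size and other details I omit):

```python
def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshi, confirmations) pairs.
    Priority = sum(input value * input age in blocks) / tx size."""
    coin_days_destroyed = sum(value * confs for value, confs in inputs)
    return coin_days_destroyed / tx_size_bytes

# A transaction spending old, high-value outputs earns enough priority to
# qualify for the free-relay block space without paying a fee, consuming
# "coin age" as the alternative scarce resource.
```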
On May 9, 2015 1:20 PM, Raystonn rayst...@hotmail.com wrote:

 That policy is included in Bitcoin Core.  Miners use it because it is the
 default.  The policy was likely intended to help real transactions get
 through in the face of spam.  But it favors those with more bitcoin, as the
 priority is determined by amount spent multiplied by age of UTXOs.  At the
 very least the amount spent should be removed as a factor, or fees are
 unlikely to ever be paid by those who can afford them.  We can reassess the
 role age plays later.  One change at a time is better.
  On 9 May 2015 12:52 pm, Jim Phillips j...@ergophobia.org wrote:

 On Sat, May 9, 2015 at 2:43 PM, Raystonn rayst...@hotmail.com wrote:

 How about this as a happy medium default policy: Rather than select UTXOs
 based solely on age and limiting the size of the transaction, we select as
 many UTXOs as possible from as few addresses as possible, prioritizing
 which addresses to use based on the number of UTXOs it contains (more being
 preferable) and how old those UTXOs are (in order to reduce the fee)?

 If selecting older UTXOs gives higher priority for a lesser (or at least
 not greater) fee, that is an incentive for a rational user to use the older
 UTXOs.  Such policy needs to be defended or removed.  It doesn't support
 privacy or a reduction in UTXOs.

 Before starting this thread, I had completely forgotten that age was even
 a factor in determining which UTXOs to use. Frankly, I can't think of any
 reason why miners care how old a particular UTXO is when determining what
 fees to charge. I'm sure there is one, I just don't know what it is. I just
 tossed it in there as homage to Andreas who pointed out to me that it was
 still part of the selection criteria.


--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] A suggestion for reducing the size of the UTXO database

2015-05-09 Thread Pieter Wuille
It's a very complex trade-off, which is hard to optimize for all use cases.
Using more UTXOs requires larger transactions, and thus more fees in
general. In addition, it results in more linkage between coins/addresses
used, so lower privacy.

The only way you can guarantee an economical reason to keep the UTXO set
small is by actually having a consensus rule that punishes increasing its
size.
On May 9, 2015 12:02 PM, Andreas Schildbach andr...@schildbach.de wrote:

 Actually your assumption is wrong. Bitcoin Wallet (and I think most, if
 not all, other bitcoinj based wallets) picks UTXO by age, in order to
 maximize priority. So it keeps the number of UTXOs low, though not as
 low as if it would always pick *all* UTXOs.


 On 05/09/2015 07:09 PM, Jim Phillips wrote:
  Forgive me if this idea has been suggested before, but I made this
  suggestion on reddit and I got some feedback recommending I also bring
  it to this list -- so here goes.
 
  I wonder if there isn't perhaps a simpler way of dealing with UTXO
  growth. What if, rather than deal with the issue at the protocol level,
  we deal with it at the source of the problem -- the wallets. Right now,
  the typical wallet selects only the minimum number of unspent outputs
  when building a transaction. The goal is to keep the transaction size to
  a minimum so that the fee stays low. Consequently, lots of unspent
  outputs just don't get used, and are left lying around until some point
  in the future.
 
  What if we started designing wallets to consolidate unspent outputs?
  When selecting unspent outputs for a transaction, rather than choosing
  just the minimum number from a particular address, why not select them
  ALL? Take all of the UTXOs from a particular address or wallet, send
  however much needs to be spent to the payee, and send the rest back to
  the same address or a change address as a single output? Through this
  method, we should wind up shrinking the UTXO database over time rather
  than growing it with each transaction. Obviously, as Bitcoin gains wider
  adoption, the UTXO database will grow, simply because there are 7
  billion people in the world, and eventually a good percentage of them
  will have one or more wallets with spendable bitcoin. But this idea
  could limit the growth at least.
 
  The vast majority of users are running one of a handful of different
  wallet apps: Core, Electrum; Armory; Mycelium; Breadwallet; Coinbase;
  Circle; Blockchain.info; and maybe a few others. The developers of all
  these wallets have a vested interest in the continued usefulness of
  Bitcoin, and so should not be opposed to changing their UTXO selection
  algorithms to one that reduces the UTXO database instead of growing it.
 
  From the miners' perspective, even though these types of transactions
  would be larger, the fee could stay low. Miners actually benefit from
  them in that it reduces the amount of storage they need to dedicate to
  holding the UTXO. So miners are incentivized to mine these types of
  transactions with a higher priority despite a low fee.
 
  Relays could also get in on the action and enforce this type of behavior
  by refusing to relay or deprioritizing the relay of transactions that
  don't use all of the available UTXOs from the addresses used as inputs.
  Relays are not only the ones who benefit the most from a reduction of
  the UTXO database, they're also in the best position to promote good
  behavior.
 
  --
  *James G. Phillips
  IV* https://plus.google.com/u/0/113107039501292625391/posts
  /Don't bunt. Aim out of the ball park. Aim for the company of
  immortals. -- David Ogilvy
  /
 
   /This message was created with 100% recycled electrons. Please think
  twice before printing./
 
 
 
 --
  One dashboard for servers and applications across Physical-Virtual-Cloud
  Widest out-of-the-box monitoring support with 50+ applications
  Performance metrics, stats and reports that give you Actionable Insights
  Deep dive visibility with transaction tracing using APM Insight.
  http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
 
 
 
  ___
  Bitcoin-development mailing list
  Bitcoin-development@lists.sourceforge.net
  https://lists.sourceforge.net/lists/listinfo/bitcoin-development
 






Re: [Bitcoin-development] Removing transaction data from blocks

2015-05-08 Thread Pieter Wuille
So, there are several ideas about how to reduce the size of blocks being
sent on the network:
* Matt Corallo's relay network, which internally works by remembering the
last 5000 (I believe?) transactions sent by the peer, and allowing the peer
to backreference those rather than retransmit them inside block data. This
exists and works today.
* Gavin Andresen's IBLT based set reconciliation for blocks based on what a
peer expects the new block to contain.
* Greg Maxwell's network block coding, which is based on erasure coding,
and also supports sharding (everyone sends some block data to everyone,
rather than fetching everything from one peer).
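As a toy illustration of the first approach (the actual relay-network protocol's encoding and bookkeeping differ; the class and method names here are made up):

```python
from collections import OrderedDict

class BackrefRelay:
    """Sketch of backreference relay: each side remembers the most recently
    exchanged transactions, so a block can reference them by index instead of
    retransmitting the full transaction data."""

    def __init__(self, capacity=5000):
        self.capacity = capacity
        self.recent = OrderedDict()  # txid -> None; insertion order preserved

    def remember(self, txid):
        """Record a transaction as sent/seen, evicting the oldest if full."""
        self.recent[txid] = None
        self.recent.move_to_end(txid)
        while len(self.recent) > self.capacity:
            self.recent.popitem(last=False)

    def encode_block(self, block_txids):
        """Encode a block's tx list as index backreferences where possible,
        falling back to the full transaction otherwise."""
        index = {t: i for i, t in enumerate(self.recent)}
        return [('ref', index[t]) if t in index else ('tx', t)
                for t in block_txids]
```

Transactions already in the shared window cost a few bytes each instead of their full size, which is where the bandwidth and latency savings come from.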

However, the primary purpose is not to reduce bandwidth (though that is a
nice side advantage). The purpose is reducing propagation delay. Larger
propagation delays across the network (relative to the inter-block period)
result in higher forking rates. If the forking rate gets very high, the
network may fail to converge entirely, but even long before that point, the
higher the forking rate is, the higher the advantage of larger (and better
connected) pools over smaller ones. This is why, in my opinion,
guaranteeing fast propagation is one of the most essential responsibilities
of full nodes to avoid centralization pressure.
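A rough model, under my own assumption of Poisson block arrivals (not something stated in this mail), of how propagation delay maps to fork rate:

```python
import math

def stale_rate(propagation_delay_s, interblock_s=600.0):
    """With Poisson block arrivals at rate 1/interblock_s, the probability
    that some other miner finds a competing block while a freshly mined
    block is still propagating across the network."""
    return 1.0 - math.exp(-propagation_delay_s / interblock_s)

# e.g. a 12-second average propagation delay against a 10-minute interval
# gives stale_rate(12.0), roughly a 2% fork rate; the rate grows as the
# delay grows relative to the inter-block period.
```

This is only a first-order sketch, but it shows why the relevant quantity is the propagation delay *relative to* the inter-block period, as the paragraph above argues.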

Also, none of this would let us get rid of the block size limit at all. All
transactions still have to be transferred and processed, and due to
inherent latencies of communication across the globe, the higher the
transaction rate is, the higher the number of transactions in blocks will
be that peers have not yet heard about. You can institute a policy to not
include too recent transactions in blocks, but again, this favors larger
miners over smaller ones.

Also, if the end goal is propagation delay, just minimizing the amount of
data transferred is not enough. You also need to make sure the
communication mechanism does not add huge processing overheads or adds
unnecessary roundtrips. In fact, this is the key difference between the 3
techniques listed above, and several people are working on refining and
optimizing these mechanisms to make them practically usable.
On May 8, 2015 7:23 AM, Arne Brutschy abruts...@xylon.de wrote:

 Hello,

 At DevCore London, Gavin mentioned the idea that we could get rid of
 sending full blocks. Instead, newly minted blocks would only be
 distributed as block headers plus all hashes of the transactions
 included in the block. The assumption would be that nodes have already
 the majority of these transactions in their mempool.

 The advantages are clear: it's more efficient, as we would send
 transactions only once over the network, and it's fast as the resulting
 blocks would be small. Moreover, we would get rid of the blocksize limit
 for a long time.

 Unfortunately, I am too ignorant of bitcoin core's internals to judge
 the changes required to make this happen. (I guess we'd require a new
 block format and a way to bulk-request missing transactions.)

 However, I'm curious to hear what others with a better grasp of bitcoin
 core's internals have to say about it.

 Regards,
 Arne





Re: [Bitcoin-development] Mechanics of a hard fork

2015-05-07 Thread Pieter Wuille
On May 7, 2015 3:08 PM, Roy Badami r...@gnomon.org.uk wrote:

 On Thu, May 07, 2015 at 11:49:28PM +0200, Pieter Wuille wrote:
  I would not modify my node if the change introduced a perpetual 100 BTC
  subsidy per block, even if 99% of miners went along with it.

 Surely, in that scenario Bitcoin is dead.  If the fork you prefer has
 only 1% of the hash power it is trivially vulnerable not just to a 51%
 attack but to a 501% attack, not to mention the fact that you'd only
 be getting one block every 16 hours.

Yes, indeed, Bitcoin would be dead if this actually happens. But that is
still where the power lies: before anyone (miners or others) would think
about trying such a change, they would need to convince people and be sure
they will effectively modify their code.

-- 
Pieter


Re: [Bitcoin-development] Mechanics of a hard fork

2015-05-07 Thread Pieter Wuille
I would not modify my node if the change introduced a perpetual 100 BTC
subsidy per block, even if 99% of miners went along with it.

A hardfork is safe when 100% of (economically relevant) users upgrade. If
miners don't upgrade at that point, they just lose money.

This is why a hashrate-triggered hardfork does not make sense. Either you
believe everyone will upgrade anyway, and the hashrate doesn't matter. Or
you are not certain, and the fork is risky, independent of what hashrate
upgrades.

And the march 2013 fork showed that miners upgrade at a different schedule
than the rest of the network.
On May 7, 2015 5:44 PM, Roy Badami r...@gnomon.org.uk wrote:


  On the other hand, if 99.99% of the miners updated and only 75% of
  merchants and 75% of users updated, then that would be a serious split of
  the network.

 But is that a plausible scenario?  Certainly *if* the consensus rules
 required a 99% supermajority of miners for the hard fork to go ahead,
 then there would be absolutely no rational reason for merchants and
 users to refuse to upgrade, even if they don't support the changes
 introduced by the hard fork.  Their only choice, if the fork succeeds,
 is between the active chain and the one that is effectively stalled -
 and, of course, they can make that choice ahead of time.

 roy





Re: [Bitcoin-development] Block Size Increase

2015-05-06 Thread Pieter Wuille
On Thu, May 7, 2015 at 12:12 AM, Matt Corallo bitcoin-l...@bluematt.me
wrote:

 Recently there has been a flurry of posts by Gavin at
 http://gavinandresen.svbtle.com/ which advocate strongly for increasing
 the maximum block size. However, there hasnt been any discussion on this
 mailing list in several years as far as I can tell.


Thanks for bringing this up. I'll try to keep my arguments brief, to avoid
a long wall of text. I may be re-iterating some things that have been said
before, though.

I am - in general - in favor of increasing the size of blocks: as technology
grows, there is no reason why the systems built on it can't scale
proportionally. I have so far not commented much about this, in a hope to
avoid getting into a public debate, but the way things seem to be going now
worries me greatly.

* Controversial hard forks. I hope the mailing list here today already
proves it is a controversial issue. Independent of personal opinions pro or
against, I don't think we can do a hard fork that is controversial in
nature. Either the result is effectively a fork, and pre-existing coins can
be spent once on both sides (effectively failing Bitcoin's primary
purpose), or the result is one side forced to upgrade to something they
dislike - effectively giving power to developers they should never have.
Quoting someone: I did not sign up to be part of a central banker's
committee.

* The reason for increasing is need. If needing more space in blocks is
the reason to do an upgrade, it won't stop at 20 MB. There is nothing
fundamentally possible with 20 MB blocks that isn't with 1 MB blocks.
Changetip does not put their microtransactions on the chain, not with 1 MB,
and I doubt they would with 20 MB blocks. The reason for increase should be
because we choose to accept the trade-offs.

* Misrepresentation of the trade-offs. You can argue all you want that none
of the effects of larger blocks are particularly damaging, so everything is
fine. They will damage something (see below for details), and we should
analyze these effects, and be honest about them, and present them as a
trade-off we choose to make to scale the system better. If you just
ask people if they want more transactions, of course you'll hear yes. If
you ask people if they want to pay less taxes, I'm sure the vast majority
will agree as well.

* Miner centralization. There is currently, as far as I know, no technology
that can relay and validate 20 MB blocks across the planet, in a manner
fast enough to avoid very significant costs to mining. There is work in
progress on this (including Gavin's IBLT-based relay, or Greg's block
network coding), but I don't think we should be basing the future of the
economics of the system on undemonstrated ideas. Without those (or even
with), the result may be that miners self-limit the size of their blocks to
propagate faster, but if this happens, larger, better-connected, and more
centrally-located groups of miners gain a competitive advantage by being
able to produce larger blocks. I would like to point out that there is
nothing evil about this - a simple feedback to determine an optimal block
size for an individual miner will result in larger blocks for better
connected hash power. If we do not want miners to have this ability, we
(as in: those using full nodes) should demand limitations that prevent it.
One such limitation is a block size limit (whatever it is).

* Ability to use a full node. I very much dislike the trend of people
saying we need to encourage people to run full nodes, in order to make the
network more decentralized. Running 1000 nodes which are otherwise unused
only gives some better ability for full nodes to download the block chain,
or for SPV nodes to learn about transactions (or be Sybil-attacked...).
However, *using* a full node for validating your business (or personal!)
transactions empowers you to use a financial system that requires less
trust in *anyone* (not even in a decentralized group of peers) than
anything else. Moreover, using a full node is what gives you power over the
system's rules, as anyone who wants to change them now needs to convince you
to upgrade. And yes, 20 MB blocks will change people's ability to use full
nodes, even if the costs are small.

* Skewed incentives for improvements. I think I can personally say that I'm
responsible for most of the past years' performance improvements in Bitcoin
Core. And there is a lot of room for improvement left there - things like
silly waiting loops, single-threaded network processing, huge memory sinks,
lock contention, ... which in my opinion don't nearly get the attention
they deserve. This is in addition to more pervasive changes like optimizing
the block transfer protocol, support for orthogonal systems with a
different security/scalability trade-off like Lightning, making full
validation optional, ... Call me cynical, but without actual pressure to
work on these, I doubt much will change. Increasing the size of blocks now

Re: [Bitcoin-development] 75%/95% threshold for transaction versions

2015-04-17 Thread Pieter Wuille
 Anyone can alter the txid - more details needed. The number of altered
 txids in practice is not high enough to make us believe anyone can
 do it easily. It is obvious that all current bitcoin transactions are
 malleable, but not by anyone and not that easy. At least I like to think
so.

Don't assume that because it does not (frequently) happen, that it cannot
happen. Large amounts of malleated transactions have happened in the past.
Especially if you build a system that depends on non-malleability for its
security, you may at some point have an attacker who has financial gain
from malleation.

 From your answer I understand that right now if I create a transaction
 (tx1) and broadcast it, you can alter its txid at your will, without any
 mining power and/or access to my private keys so I would end up not
 recognizing my own transaction and probably my change too (if my systems
 rely heavily on txids)?

In theory, yes, anyone can alter the txid without invalidating it, without
mining power and without access to the sender's private keys.

All it requires is seeing a transaction on the network, doing a trivial
modification to it, and rebroadcasting it quickly. If the modified version
gets mined, you're out of luck. Having mining power helps of course.

After BIP62, you will, as a sender, optionally be able to protect your
transactions from malleation by others. You're always able to re-sign yourself.
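One well-known trivial modification is the ECDSA low-S/high-S flip: for any valid signature (r, s), the pair (r, N - s) also verifies against the same message and key, and because the txid commits to the signature bytes, the txid changes. A minimal sketch (illustrative only; this particular vector is what BIP62's low-S rule addresses):

```python
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def malleate_low_high_s(r, s):
    """(r, s) and (r, N - s) both verify under ECDSA for the same message
    and public key, but serialize to different bytes."""
    return r, N - s

def txid(raw_tx: bytes) -> str:
    """Bitcoin txid: double SHA-256 of the full serialized transaction,
    including signatures, which is why re-encoding a signature changes it."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()[::-1].hex()
```

A third party who intercepts a transaction, flips s, and rebroadcasts produces a transaction with identical effect but a different txid, exactly the race described above.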

-- 
Pieter


Re: [Bitcoin-development] 75%/95% threshold for transaction versions

2015-04-15 Thread Pieter Wuille
On Apr 16, 2015 1:46 AM, s7r s...@sky-ip.org wrote:
 but for transaction versions? In simple terms, if more than 75% of all the
 transactions in the latest 1000 blocks are version 'n', mark all
 previous transaction versions as non-standard, and if more than 95% of
 all the transactions in the latest 1000 blocks are version 'n', mark all
 previous transaction versions as invalid.

What problem are you trying to solve?

The reason why BIP62 (as specified, it is just a draft) does not make v1
transactions invalid is because it is opt-in. The creator of a transaction
needs to agree to protect it from malleability, and this subjects him to
extra rules in the creation.

Forcing v3 transactions would require every piece of wallet software to be
changed.
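For concreteness, the quoted proposal amounts to roughly the following sketch (function name, return values, and the flat list of tx versions are hypothetical):

```python
def tx_version_policy(recent_tx_versions, n, window=1000):
    """Sketch of the quoted 75%/95% proposal: look at the transaction
    versions seen over the last `window` blocks and demote or forbid
    versions older than n based on adoption share."""
    recent = recent_tx_versions[-window:]
    share = sum(1 for v in recent if v >= n) / len(recent)
    if share > 0.95:
        return 'older versions invalid'
    if share > 0.75:
        return 'older versions non-standard'
    return 'no change'
```

The reply's objection applies directly to the `share > 0.95` branch: making old versions *invalid* by observed usage would force every wallet to upgrade, whereas BIP62's opt-in design deliberately avoids that.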

-- 
Pieter


[Bitcoin-development] Deprecating Bitcoin Core's regtest-specific `setgenerate` behaviour

2015-04-12 Thread Pieter Wuille
Hello everyone,

Bitcoin Core's `setgenerate` RPC call has had a special meaning for
-regtest (namely instantaneously mining a number of blocks, instead of
starting a background CPU miner).

We're planning to deprecate that overloaded behaviour, and replace it with
a separate RPC call `generate`. Is there any software or user who would
need compatibility with the old behaviour? We're generally very
conservative in changing RPC behaviour, but as this is not related to any
production functionality, we may as well just switch it.

Note that the bitcoin.org developer documentation will need to be updated.

-- 
Pieter


Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-02-06 Thread Pieter Wuille
On Mon, Jan 26, 2015 at 10:35 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 I'd like to request a BIP number for this.

 Sure. BIP0066.

Four implementations exist now:
* for master: https://github.com/bitcoin/bitcoin/pull/5713 (merged)
* for 0.10.0: https://github.com/bitcoin/bitcoin/pull/5714 (merged,
and included in 0.10.0rc4)
* for 0.9.4: https://github.com/bitcoin/bitcoin/pull/5762
* for 0.8.6: https://github.com/bitcoin/bitcoin/pull/5765

The 0.8 and 0.9 version have reduced test code, as many tests rely on
new test framework code in 0.10 and later, but the implementation code
is identical. Work to improve that is certainly welcome.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-02-03 Thread Pieter Wuille
On Tue, Feb 3, 2015 at 4:00 AM, Wladimir laa...@gmail.com wrote:
 One way to do that is to just - right now - add a patch to 0.10 to
 make those non-standard. This requires another validation flag, with a
 bunch of switching logic.

 The much simpler alternative is just adding this to BIP66's DERSIG
 right now, which is a one-line change that's obviously softforking. Is
 anyone opposed to doing so at this stage?

 Not opposed, but is kind of late for 0.10, I had hoped to tag rc4 today.

I understand it's late, which is also why I ask for opinions. It's
also not a priority, but if we release 0.10 without, it will first
need a cycle of making this non-standard, and then in a further
release doing a second softfork to enforce it.

It's a 2-line change; see #5743.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-02-03 Thread Pieter Wuille
On Tue, Feb 3, 2015 at 10:15 AM, Pieter Wuille pieter.wui...@gmail.com wrote:
 The much simpler alternative is just adding this to BIP66's DERSIG
 right now, which is a one-line change that's obviously softforking. Is
 anyone opposed to doing so at this stage?

I'm retracting this proposed change.

Suhas Daftuar pointed out that there remain edge-cases which are not
covered (a 33-byte R or S whose first byte is not a zero). The intent
here is really making sure that signature validation and parsing can
be entirely separated, and that signature checking itself does not
need a third return value (invalid encoding, in addition to valid
signature and invalid signature). If we don't want to make
assumptions about how that implementation works, the only guaranteed
way of doing that is requiring that R and S are in fact within the
range allowed by secp256k1, which would require an integer decoder
inside the signature encoding checker. I consider that to be
unreasonable.

In addition, a much cleaner solution that covers this as well has
already been proposed: only allow 0 (the empty byte vector) as invalid
signature. That would 100% align signature validity with decoding, and
is much simpler to implement.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-02-02 Thread Pieter Wuille
On Sun, Jan 25, 2015 at 6:48 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 So I think we should just go ahead with R/S length upper bounds as
 both IsStandard and in STRICTDER.

I would like to fix this at some point in any case.

If we want to do that, we must at least have signatures with too-long
R or S values as non-standard.

One way to do that is to just - right now - add a patch to 0.10 to
make those non-standard. This requires another validation flag, with a
bunch of switching logic.

The much simpler alternative is just adding this to BIP66's DERSIG
right now, which is a one-line change that's obviously softforking. Is
anyone opposed to doing so at this stage?

-- 
Pieter



Re: [Bitcoin-development] var_int ambiguous serialization consequences

2015-02-01 Thread Pieter Wuille
Hashes are always computed by reserializing data structures, never by
hashing wire data directly. This has been the case in every version of the
reference client's code that I know of.

This even meant that for example a block of 99 bytes with non-shortest
length for the transaction count could be over the maximum block size, but
still be valid.

As Wladimir says, more recently we switched to just failing to deserialize
(by throwing an exception) whenever a non-shortest form is used.
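As a sketch of what "failing to deserialize" means here (helper names are hypothetical; Bitcoin Core's implementation is in C++):

```python
import struct

def write_compact_size(n):
    """Shortest-form CompactSize ('var_int') serialization."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b'\xfd' + struct.pack('<H', n)
    if n <= 0xffffffff:
        return b'\xfe' + struct.pack('<I', n)
    return b'\xff' + struct.pack('<Q', n)

def read_compact_size_strict(data):
    """Deserialize a CompactSize, throwing on non-shortest encodings such as
    0xfd0100 for 1. Returns (value, bytes consumed)."""
    first = data[0]
    if first < 0xfd:
        return first, 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    n = int.from_bytes(data[1:1 + width], 'little')
    # Canonical check: re-encoding the value must reproduce the input bytes.
    if write_compact_size(n) != data[:1 + width]:
        raise ValueError("non-canonical CompactSize encoding")
    return n, 1 + width
```

Rejecting non-shortest forms at parse time removes the class of "same transaction, different wire bytes" ambiguities the quoted message describes.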
On Feb 1, 2015 1:34 AM, Tamas Blummer ta...@bitsofproof.com wrote:

 I wonder of consequences if var_int is used in its longer than necessary
 forms (e.g encoding 1 as 0xfd0100 instead of 0x01)

 This is already of interest if applying size limit to a block, since
 transaction count is var_int but is not part of the hashed header or the
 merkle tree.

 It could also be used to create variants of the same transaction message
 by altered representation of txIn and txout counts, that would remain valid
 provided signatures validate with the shortest form, as that is created
 while re-serializing for signature hashing. An implementation that holds
 mempool by raw message hashes could be tricked to believe that a modified
 encoded version of the same transaction is a real double spend. One could
 also mine a valid block with transactions that have a different hash if
 regularly parsed and re-serialized. An SPV client could be confused by such
 a transaction as it was present in the merkle tree proof with a different
 hash than it gets for the tx with its own serialization or from the raw
 message.

 Tamas Blummer
 Bits of Proof







Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-25 Thread Pieter Wuille
On Thu, Jan 22, 2015 at 6:41 PM, Zooko Wilcox-O'Hearn
zo...@leastauthority.com wrote:
 * Should the bipstrictder give a rationale or link to why accept the
 0-length sig as correctly-encoded-but-invalid? I guess the rationale
 is an efficiency issue as described in the log entry for
 https://github.com/sipa/bitcoin/commit/041f1e3597812c250ebedbd8f4ef1565591d2c34

I've lately been updating the BIP text without updating the code in
the repository; I've synced them now. The sigsize=0 case was actually
already handled elsewhere already, so I removed the code and added a
comment about it now in the BIP text.

 * Does this mean there are still multiple ways to encode a correctly
 encoded but invalid signature, one of which is the 0-length string?
 Would it make sense for this change to also treat any *other*
 correctly-encoded-but-invalid sig (besides the 0-length string) as
 incorrectly-encoded? Did I just step in some BIP62?

You didn't miss anything; that's correct. In fact, Peter Todd already
pointed out the possibility of making non-empty invalid signatures
illegal. The reason for not doing it yet is that I'd like this BIP to
be minimal and uncontroversial - it's a real problem we want to fix as
fast as is reasonable. It wouldn't be hard to make this a standardness
rule though, and perhaps later softfork it in as a consensus rule if
there was sufficient agreement about it.

 * It would be good to verify that all the branches of the new
 IsDERSignature() from
 https://github.com/sipa/bitcoin/commit/0c427135151a6bed657438ffb2e670be84eb3642
 are tested by the test vectors in
 https://github.com/sipa/bitcoin/commit/f94e806f8bfa007a3de4b45fa3c9860f2747e427
 . Eyeballing it, there are about 20 branches touched by the patch, and
 about 24 new test vectors.

A significant part of DERSIG behaviour (which didn't change, only the
cases in which it is enforced) was already tested, in fact. Some
branches remained untested however; I've added extra test cases in the
repository. They give 100% coverage for IsValidSignatureEncoding (the
new name for IsDERSignature) now (tested with gcov).

 * It would be good to finish the TODOs in
 https://github.com/sipa/bitcoin/commit/b7986119a5d41337fea1e83804ed6223438158ec
 so that it was actually testing the upgrade behavior.

I agree, but that requires very significant changes to the codebase,
as we currently have no way to mine blocks with non-acceptable
transactions. Ideally, the RPC tests gain some means of
building/mining blocks from within the Python test framework. Things
like that would make the code changes also hard to backport, which we
definitely will need to do to roll this out quickly.

 * missing comment:
 https://github.com/sipa/bitcoin/commit/e186f6a80161f9fa45fbced82ab1d22f081b942c#commitcomment-9406643

Fixed.

 Okay, that's all I've got. Hope it helps! Thanks again for your good work!

Thanks!

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-25 Thread Pieter Wuille
On Tue, Jan 20, 2015 at 8:35 PM, Pieter Wuille pieter.wui...@gmail.com wrote:
 I therefore propose a softfork to make non-DER signatures illegal
 (they've been non-standard since v0.8.0). A draft BIP text can be
 found on:

 https://gist.github.com/sipa/5d12c343746dad376c80

I'd like to request a BIP number for this.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-25 Thread Pieter Wuille
On Wed, Jan 21, 2015 at 8:32 PM, Rusty Russell ru...@rustcorp.com.au wrote:
 One weirdness is the restriction on maximum total length, rather than a
 32 byte (33 with 0-prepad) limit on signatures themselves.

Glad that you point this out; I believe that's a weakness with more
impact now that this function is used for consensus. Let me clarify.

This function was originally written for Bitcoin Core v0.8.0, where it
was only used to enforce non-standardness, not consensus. In that
setting, there was no need to require a maximum length for the R and S
arguments, as overly-long R or S values (which, because of a further
rule, do not have excessive padding) will always result in integers >=
2^256, which means the encoded signature would never be valid
according to the ECDSA specification. A restriction on the total
length is required however, as BER allows multi-byte length
descriptors, which this function cannot (and shouldn't, as it's not
DER) parse.
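
To make the layout rules above concrete, here is a hypothetical sketch
(not the BIP's actual code, though it follows the same indices as the
deployed standardness check) of the rigid encoding being described:
single-byte length fields only, so multi-byte BER length descriptors can
never pass, plus the per-component sign and padding rules:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Expected layout: 0x30 [total] 0x02 [lenR] R 0x02 [lenS] S [sighash].
bool SketchIsStrictDERLayout(const std::vector<unsigned char>& sig)
{
    if (sig.size() < 9) return false;             // minimum possible size
    if (sig.size() > 73) return false;            // maximum possible size
    if (sig[0] != 0x30) return false;             // compound type tag
    if (sig[1] != sig.size() - 3) return false;   // length covers R+S, not sighash
    std::size_t lenR = sig[3];
    if (5 + lenR >= sig.size()) return false;     // R must fit before S
    std::size_t lenS = sig[5 + lenR];
    if (lenR + lenS + 7 != sig.size()) return false; // exact fit, nothing trailing
    if (sig[2] != 0x02) return false;             // R is an integer
    if (lenR == 0) return false;                  // zero-length R not allowed
    if (sig[4] & 0x80) return false;              // R must not be negative
    if (lenR > 1 && sig[4] == 0x00 && !(sig[5] & 0x80)) return false; // no excess padding
    if (sig[lenR + 4] != 0x02) return false;      // S is an integer
    if (lenS == 0) return false;                  // zero-length S not allowed
    if (sig[lenR + 6] & 0x80) return false;       // S must not be negative
    if (lenS > 1 && sig[lenR + 6] == 0x00 && !(sig[lenR + 7] & 0x80)) return false;
    return true;
}
```

Note how the total-size bound rejects multi-byte length descriptors
implicitly: any encoding long enough to need one is over the 73-byte cap.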

However, in the currently proposed soft fork, non-DER results in
immediate script failure, which is distinguishable from invalid
signatures (by negating the result of a CHECKSIG, for example using a
NOT after it). I must admit that having invalid signatures with
overly-long R or S but acceptable R+S size be distinguishable from
invalid signatures where R+S is too large is ugly, and unnecessary.

Adding individual R and S length restrictions (ideally: saying that
anything longer than 32 bytes, excluding the padding 0 byte in front,
is invalid) would be trivial, but it means deviating slightly from the
standardness-rule implementation that has been deployed for a while.
There should not really be much risk in doing so, as there are still
no node implementation releases (apart from the v0.10.0 rc's) that
would mine a CHECKSIG whose result is negated.

So, I think there are two options:
* Just add this R/S length restriction rule as a standardness
requirement, but not make it part of the soft fork. A later softfork
can then add this easily. The same can be done for several other
changes if they are deemed useful, like only allowing 0 (the empty
array) as invalid signature (any other causes failure script
immediately), requiring correct encoding even for non-evaluated
signatures, ...
* Add it to the softfork now, and be done with it.

Opinions?

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Pieter Wuille
On Tue, Jan 20, 2015 at 11:45 PM, Rusty Russell ru...@rustcorp.com.au wrote:
 // Null bytes at the start of R are not allowed, unless it would otherwise be
 // interpreted as a negative number.
 if (lenS > 1 && (sig[lenR + 6] == 0x00) && !(sig[lenR + 7] & 0x80))
 return false;

 You mean null bytes at the start of S.

Thanks, fixed.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Pieter Wuille
On Wed, Jan 21, 2015 at 3:37 PM, Gavin Andresen gavinandre...@gmail.com wrote:
 DERSIG BIP looks great to me, just a few nit-picky changes suggested:

 You mention the DER standard : should link to
 http://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf (or
 whatever is best reference for DER).

 this would simplify avoiding OpenSSL in consensus implementations  --
 this would make it easier for non-OpenSSL implementations

 causing opcode failure  : I know what you mean by opcode failure, but it
 might be good to be more explicit.

 since v0.8.0, and nearly no transactions --  and very few
 transactions...

 reducing this avenue for malleability is useful on itself as well  :
 awkward English. How about just This proposal has the added benefit of
 reducing transaction malleability (see BIP62).

Nit addressed, hopefully.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Pieter Wuille
On Wed, Jan 21, 2015 at 2:29 PM, Douglas Roark d...@bitcoinarmory.com wrote:
 Nice paper, Pieter. I do have a bit of feedback.

Thanks for the comments. I hope I have clarified the text a bit accordingly.

 1)The first sentence of Deployment has a typo. We reuse the
 double-threshold switchover mechanism from BIP 34, with the same
 *thresholds*, []

Fixed.

 2)I think the handling of the sighash byte in the comments of
 IsDERSignature() could use a little tweaking. If you look at
 CheckSignatureEncoding() in the actual code (src/script/interpreter.cpp
 in master), it's clear that the sighash byte is included as part of the
 signature struct, even though it's not part of the actual DER encoding
 being checked by IsDERSignature(). This is fine. I just think that the
 code comments in the paper ought to make this point clearer, either in
 the sighash description, or as a comment when checking the sig size
 (i.e., size-3 is valid because sighash is included), or both.

I've renamed the function to IsValidSignatureEncoding, as it is not
strictly about DER (it adds a Bitcoin-specific byte, and supports an
empty string too).

 3)The paper says a sig with size=0 is correctly coded but is neither
 valid nor DER. Perhaps this code should be elsewhere in the Bitcoin
 code? It seems to me that letting a sig pass in IsDERSignature() when
 it's not actually DER-encoded is incorrect.

I've expanded the comments about it a bit.

-- 
Pieter



Re: [Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-21 Thread Pieter Wuille
On Wed, Jan 21, 2015 at 11:18 PM, Matt Whitlock b...@mattwhitlock.name wrote:
 To be more in the C++ spirit, I would suggest changing the (const
 std::vector<unsigned char>& sig, size_t off) parameters to
 (std::vector<unsigned char>::const_iterator itr, std::vector<unsigned
 char>::const_iterator end).

I agree that is more in the spirit of C++, but part of the motivation
for including C++ code that it mostly matches the exact code that has
been used in the past two major Bitcoin Core releases (to interpret
signatures as standard).

-- 
Pieter



[Bitcoin-development] [softfork proposal] Strict DER signatures

2015-01-20 Thread Pieter Wuille
Hello everyone,

We've been aware of the risk of depending on OpenSSL for consensus
rules for a while, and were trying to get rid of this as part of BIP
62 (malleability protection), which was however postponed due to
unforeseen complexities. The recent events (see the thread titled
"OpenSSL 1.0.0p / 1.0.1k incompatible, causes blockchain rejection."
on this mailing list) have made it clear that the problem is very
real, however, and I would prefer to have a fundamental solution for
it sooner rather than later.

I therefore propose a softfork to make non-DER signatures illegal
(they've been non-standard since v0.8.0). A draft BIP text can be
found on:

https://gist.github.com/sipa/5d12c343746dad376c80

The document includes motivation and specification. In addition, an
implementation (including unit tests derived from the BIP text) can be
found on:

https://github.com/sipa/bitcoin/commit/bipstrictder

Comments/criticisms are very welcome, but I'd prefer keeping the
discussion here on the mailinglist (which is more accessible than on
the gist).

-- 
Pieter



Re: [Bitcoin-development] Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged

2014-12-15 Thread Pieter Wuille
On Mon, Dec 15, 2014 at 1:47 PM, Peter Todd p...@petertodd.org wrote:

 BtcDrak was working on rebasing my CHECKLOCKTIMEVERIFY¹ patch to master a few
 days ago and found a fairly large design change that makes merging it 
 currently
 impossible. Pull-req #4890², specifically commit c7829ea7, changed the
 EvalScript() function to take an abstract SignatureChecker object, removing 
 the
 txTo and nIn arguments that used to contain the transaction the script was in
 and the txin # respectively. CHECKLOCKTIMEVERIFY needs txTo to obtain the
 nLockTime field of the transaction, and it needs nIn to obtain the nSequence 
 of
 the txin.

I agree, and I was thinking earlier that some rebasing would be needed
for CLTV when the change was made. I think this is a good thing
though: #4890 introduced a clear separation between the script
evaluation code and what it can access out of its environment (the
transaction being verified). As CLTV changes the amount available out
of the environment, this indeed requires changing the interface.
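
As an illustration of the kind of interface change involved — this is a
hypothetical sketch with illustrative names, not the actual patch — the
checker abstraction can grow a virtual lock-time query, and the
transaction-backed subclass supplies nLockTime and nSequence, so
EvalScript itself never needs txTo or nIn:

```cpp
#include <cassert>
#include <cstdint>

class SketchBaseSignatureChecker {
public:
    // Environment-free default: no lock-time claims can be validated.
    virtual bool CheckLockTime(int64_t nRequired) const { return false; }
    virtual ~SketchBaseSignatureChecker() {}
};

class SketchTxSignatureChecker : public SketchBaseSignatureChecker {
    int64_t nLockTime;   // from the transaction being verified
    uint32_t nSequence;  // from the input being verified
public:
    SketchTxSignatureChecker(int64_t lockTime, uint32_t sequence)
        : nLockTime(lockTime), nSequence(sequence) {}
    bool CheckLockTime(int64_t nRequired) const override {
        // A final input opts out of lock-time semantics entirely.
        if (nSequence == 0xffffffffU) return false;
        // Height-based and time-based locks are incomparable; the
        // 500000000 threshold separates the two ranges.
        if ((nLockTime < 500000000) != (nRequired < 500000000)) return false;
        return nRequired <= nLockTime;
    }
};
```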

 We need to fix this if CHECKLOCKTIMEVERIFY is to be merged.

Done. See https://github.com/sipa/bitcoin/commit/cltv2 for a rebased
version of the BIP65 code on top of 0.10 and master. I haven't ported
any tests you may have that are not in the BIP, to avoid doing double
work. Those should apply cleanly. There is a less clean version (IMHO)
with smaller code changes wrt the BIP code in my 'cltv' branch too.

 Secondly, that this change was made, and the manner in which is was made, is I
 think indicative of a development process that has been taking significant
 risks with regard to refactoring the consensus critical codebase.

I fully agree that we shouldn't be taking unnecessary risks when
changing consensus code. For example, I closed #5091 (which I would
very much have liked as a code improvement) when realizing the risks.
That said, I don't believe we are at a point where we can just freeze
anything that touches consensus-related, and sometimes refactorings
are necessary. In particular, #4890 introduced separation between a
very fundamental part of consensus logic (script logic) and an
optional optimization for it (caching). If we ever want to get to a
separate consensus code tree or repository, possibly with more strict
reviews, I think changes like this are inevitable.

 I know I
 personally have had a hard time keeping up with the very large volume of code
 being moved and changed for the v0.10 release, and I know BtcDrak - who is
 keeping Viacoin up to date with v0.10 - has also had a hard time giving the
 changes reasonable review. The #4890 pull-req in question had no ACKs at all,
 and only two untested utACKS, which I find worrying for something that made
 significant consensus critical code changes.

I'm sorry to hear that, and I do understand that many code
movements make this harder. If this is a concern shared by many
people, we can always decide to roll back some refactorings in the
0.10 branch. On the other hand, we don't even have release candidates
yet (which are a pretty important part of the testing and reviewing
process), and doing so would delay things further. 0.10 has many very
significant improvements which are beneficial to the network too,
which I'm sure you're aware of.

It's perfectly reasonable that not everyone has the same bandwidth
available to keep up with changes, and perhaps that means slowing
things down. Again, I don't want to say "this was reviewed before, we
can't go back to this" - but did you really need 3 months to realize
this change? I also see that elsewhere you're complaining about #5421
of yours which hasn't made it in yet - after less than 2 weeks. Yes, I
like the change, and I will review it. Surely you are not arguing it
can be merged without decent review?

 While it would be nice to have a library encapsulating the consensus code, 
 this
 shouldn't come at the cost of safety, especially when the actual users of that
 library or their needs is still uncertain. This is after all a multi-billion
 project where a simple fork will cost miners alone tens of thousands of 
 dollars
 an hour; easily much more if it results in users being defrauded. That's also
 not taking into account the significant negative PR impact and loss of trust. 
 I
 personally would recommend *not* upgrading to v0.10 due to these issues.

I have been very much in favor of a libconsensus library, and for
several reasons. It's a step towards separating out the
consensus-critical parts from optional pieces of the codebase, and it
is a step towards avoiding the "reimplementing consensus code is very
dangerous! ... but we really don't have a way to allow you to reuse
the existing code either" argument. It does not fully accomplish
either of those goals, but gradual steps with time to let changes
mature in between are nice.

 A much safer approach would be to keep the code changes required for a
 consensus library to only simple movements of code for this release, 

Re: [Bitcoin-development] Increasing the OP_RETURN maximum payload size

2014-11-17 Thread Pieter Wuille
On Mon, Nov 17, 2014 at 4:19 AM, Alan Reiner etothe...@gmail.com wrote:

 On 11/16/2014 02:04 PM, Jorge Timón wrote:
 I remember people asking in #bitcoin-dev Does anyone know any use
 case for greater sizes OP_RETURNs? and me answering I do not know of
 any use cases that require bigger sizes.

 For reference, there was a brief time where I was irritated that the
 size had been reduced to 40 bytes, because I had an application where I
 wanted to put ECDSA signatures in the OP_RETURN, and you're going to
 need at least 64 bytes for that.   Unfortunately I can't remember now
 what that application was, so it's difficult for me to argue for it.
 But I don't think that's an unreasonable use case:  sending a payment
 with a signature, essentially all timestamped in the blockchain.

You can still send the signature out of band (for example using the
payment protocol), and just have the transaction commit to a hash of
that signature (or message in general), either using an OP_RETURN
output to store the hash, or using the pay-to-contract scheme that
Jorge mentioned above. That has exactly the same timestamping
properties.

My main concern with OP_RETURN is that it seems to encourage people to
use the blockchain as a convenient transport channel, rather than just
for data that the world needs to see to validate it. I'd rather
encourage solutions that don't require additional data there, which in
many cases (but not all) is perfectly possible.

-- 
Pieter



Re: [Bitcoin-development] Increasing the OP_RETURN maximum payload size

2014-11-17 Thread Pieter Wuille
On Mon, Nov 17, 2014 at 12:43 PM, Flavien Charlon
flavien.char...@coinprism.com wrote:
 My main concern with OP_RETURN is that it seems to encourage people to use
 the blockchain as a convenient transport channel

 The number one user of the blockchain as a storage and transport mechanism
 is Counterparty, and limiting OP_RETURN to 40 bytes didn't prevent them from
 doing so. In fact they use multi-sig outputs which is worse than OP_RETURN
 since it's not always prunable, and yet let them store much more than 40
 bytes.

It wasn't limited to stop them from using it. It was limited to avoid
giving others the impression that OP_RETURN was intended for data
storage. For the intended purpose (making a transaction commit to some
external data) a 32-byte hash + 8 byte id is more than sufficient.
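
As a sketch of what such a commitment output looks like (hypothetical
helper, not from any particular codebase), the 40-byte payload splits
naturally into an application identifier and a hash of the external
data:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Build an OP_RETURN script committing to external data: an 8-byte
// application identifier plus a 32-byte hash fills 40 bytes exactly.
std::vector<unsigned char> BuildCommitmentScript(
    const std::vector<unsigned char>& appId,  // 8-byte identifier
    const std::vector<unsigned char>& hash)   // 32-byte hash of the data
{
    if (appId.size() != 8 || hash.size() != 32)
        throw std::invalid_argument("need an 8-byte id and a 32-byte hash");
    std::vector<unsigned char> script;
    script.push_back(0x6a);  // OP_RETURN
    script.push_back(40);    // direct push of the 40-byte payload
    script.insert(script.end(), appId.begin(), appId.end());
    script.insert(script.end(), hash.begin(), hash.end());
    return script;
}
```

The data itself travels out of band; the chain only carries the
commitment, which is all the timestamping property needs.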

 For Open Assets, we need to store a URL in the OP_RETURN output (with
 optionally a hash) plus some bytes of overhead. 40 bytes comes really short
 for that. The benefit of having a URL in there is that any storage mechanism
 can be used (Web, FTP, BitTorrent, MaidSafe...), whereas with only a hash,
 you have to hardcode the storing mechanism in the protocol (and even then, a
 hash is not enough to address a HTTP or FTP resource). Storing only a hash
 is fine for the most basic timestamping application, but it's hardly enough
 to build something interesting.

Do you really need that data published to everyone? You're at the very
least exposing yourself to censorship, and (depending on the design)
potentially decreased privacy for your users. I would expect that for
most colored coin applications, just having the color transfer
information in external data sent directly to the receiver with
transactions committing to it should suffice.

-- 
Pieter



Re: [Bitcoin-development] Increasing the OP_RETURN maximum payload size

2014-11-17 Thread Pieter Wuille
On Mon, Nov 17, 2014 at 1:31 PM, Chris Pacia ctpa...@gmail.com wrote:
 If users wishes to use stealth addresses with out of band communication, the
 benefits of HD would largely be lost and they would be back to making
 regular backups -- this time after every transaction rather than every 100.


That is inevitable for any wallet that offers any functionality beyond
just maintaining a balance and the ability to send coins. In
particular, anything that wishes to list previous transactions (with
timestamps, history, metadata, messages sent using the payment
protocol, ...) needs backups.

What HD wallets (or any type of deterministic derivation scheme) offer
is the fact that you can separate secret data and public data. You
only need one safe backup of the master secret key - all the rest can
at most result in privacy loss and not in lost coins.

-- 
Pieter



Re: [Bitcoin-development] SCRIPT_VERIFY_STRICTENC and CHECKSIG NOT

2014-11-06 Thread Pieter Wuille
On Thu, Nov 6, 2014 at 2:38 AM, Peter Todd p...@petertodd.org wrote:
 However the implementation of the STRICTENC flag simply makes pubkey
 formats it doesn't recognize act as though the signature was invalid,
 rather than failing the transaction. Similar to the invalid due to too
 many sigops DoS attack I found before, this lets you fill up the mempool
 with garbage transactions that will never be mined. OTOH I don't see any
 way to exploit this in a v0.9.x IsStandard() transaction, so we haven't
 shipped code that actually has this vulnerability. (dunno about
 alt-implementations)

Yeah, there's even a comment in script/interpreter.h currently about
how STRICTENC is not softfork safe. I didn't realize that this would
lead to the mempool accepting invalid transactions (I thought there
was a second validity check with the actual consensus rules; if not,
maybe we need to add that).

 I suggest we either change STRICTENC to simply fail unrecognized pubkeys
 immediately - similar to how non-standard signatures are treated - or
 fail the script if the pubkey is non-standard and signature verification
 succeeds.

Sounds good to me, I disliked those semantics too.
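
For clarity, the encoding test under discussion amounts to something
like the following sketch (illustrative, not the exact Bitcoin Core
code): only the two standard public key encodings pass, and under the
changed semantics anything else would fail the script outright rather
than merely count as a bad signature:

```cpp
#include <cassert>
#include <vector>

bool SketchIsStandardPubKeyEncoding(const std::vector<unsigned char>& pubkey)
{
    if (pubkey.size() == 33)                        // compressed
        return pubkey[0] == 0x02 || pubkey[0] == 0x03;
    if (pubkey.size() == 65)                        // uncompressed
        return pubkey[0] == 0x04;
    return false;                                   // anything else rejected
}
```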

-- 
Pieter



Re: [Bitcoin-development] SCRIPT_VERIFY_STRICTENC and CHECKSIG NOT

2014-11-06 Thread Pieter Wuille
On Thu, Nov 6, 2014 at 2:47 AM, Pieter Wuille pieter.wui...@gmail.com wrote:
 I suggest we either change STRICTENC to simply fail unrecognized pubkeys
 immediately - similar to how non-standard signatures are treated - or
 fail the script if the pubkey is non-standard and signature verification
 succeeds.

 Sounds good to me, I disliked those semantics too.

Of course: do we apply this rule to all pubkeys passed to
CHECKMULTISIG (my preference...), or just the ones that are otherwise
checked?

This will likely make existing outputs hard to spend as well (I don't
have numbers); are we okay with that? We probably can't make this a
consensus rule, as it may make existing P2SH outputs/addresses
unspendable.

-- 
Pieter



[Bitcoin-development] BIP62 and future script upgrades

2014-11-04 Thread Pieter Wuille
Hi all,

one of the rules in BIP62 is the clean stack requirement, which
makes passing more inputs to a script than necessary illegal.

Unfortunately, this rule needs an exception for P2SH scripts: the test
can only be done after (and not before) the second stage evaluation.
Otherwise it would reject all spends from P2SH (which rely on
superfluous inputs to pass data to the second stage).
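
For illustration, the rule amounts to something like this sketch
(hypothetical names, not the actual implementation); for P2SH it can
only run after the second-stage evaluation, as the first stage
deliberately leaves extra data on the stack:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A stack element is "true" if any byte is non-zero, except that a
// lone sign bit in the last byte (negative zero) still counts as false.
bool SketchCastToBool(const std::vector<unsigned char>& v)
{
    for (std::size_t i = 0; i < v.size(); i++) {
        if (v[i] != 0) {
            if (i == v.size() - 1 && v[i] == 0x80) return false;
            return true;
        }
    }
    return false;
}

// Clean-stack rule: exactly one element remains, and it is true.
bool SketchCleanStack(const std::vector<std::vector<unsigned char>>& stack)
{
    return stack.size() == 1 && SketchCastToBool(stack[0]);
}
```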

I submitted a Pull Request to clarify this in BIP62:
https://github.com/bitcoin/bips/pull/115

However, this also leads to the interesting observation that the
clean-stack rule is incompatible with future P2SH-like constructs -
which would be very useful if we'd ever want to deploy a Script 2.0.
Any such upgrade would suffer from the same problem as P2SH, and
require an exception in the clean-stack rule, which - once deployed -
is no longer a softfork.

Luke suggested on the pull request to not apply this rule to every
transaction with nVersion >= 3, which indeed solves the problem. I
believe this can easily be generalized: make the (non-mandatory) BIP62
rules apply only to transactions with strictly nVersion==3, and not to
higher ones. The higher ones are non-standard anyway, and shouldn't be
used before there is a rule that applies to them - which could still
include some or all of BIP62 if wanted at that point.

Opinions?



Re: [Bitcoin-development] BIP62 and future script upgrades

2014-11-04 Thread Pieter Wuille
On Tue, Nov 4, 2014 at 5:38 AM, Mike Hearn m...@plan99.net wrote:
 This is another problem that only exists because of the desire to soft fork.
 If script 2.0 is a hard fork upgrade, you no longer need weird hacks like
 scripts-which-are-not-scripts.

I agree.
I also agree that the desire for softforks sometimes leads to ugly hacks.
I also agree that they are not nice philosophically, because they reduce
the security model of former full nodes to SPV wrt. the new rules
without their knowledge.
I also agree that hardforks should be possible when they're useful.

But in practice, hardforks have a much larger risk which just isn't
justified for everything. Especially when it's about introducing a new
transaction type that won't be used before the softfork takes place
anyway.

And to keep the option for doing future softforks open, I believe we
need to be aware of the effects of changes like this.

-- 
Pieter



Re: [Bitcoin-development] BIP62 and future script upgrades

2014-11-04 Thread Pieter Wuille
On Tue, Nov 4, 2014 at 11:56 AM, Jeff Garzik jgar...@bitpay.com wrote:
 On Tue, Nov 4, 2014 at 8:13 PM, Peter Todd p...@petertodd.org wrote:
 On another topic, I'm skeptical of the choice of nVersion==3 - we'll
 likely end up doing more block.nVersion increases in the future, and
 there's no reason to think they'll have anything to do with
 transactions. No sense creating a rule that'll be so quickly broken.

 Moderately agreed.

 Earlier in BIP 62 lifetime, I had commented on ambiguity that arose
 from bumping tx version simply because we were bumping block version.
 The ambiguity was corrected, but IMO remains symptomatic of potential
 problems and confusion down the road.

 Though I ACK'd the change, my general preference remains to disconnect
 TX and block version.

I prefer to see consensus rules as one set of rules (especially
because they only really apply to blocks - the part for lone
transactions is just policy), and thus have a single numbering. Still,
I have no strong opinion about it and have now heard 3 'moderately
against' comments. I'm fine with using nVersion==2 for transactions.

-- 
Pieter



Re: [Bitcoin-development] BIP62 and future script upgrades

2014-11-04 Thread Pieter Wuille
Ok, addressed these (and a few other things) in
https://github.com/bitcoin/bips/pull/117:
* Better names for the rules.
* Clarify interaction of BIP62 with P2SH.
* Clarify that known hashtypes are required, despite not being part of DER.
* Use v2 transactions instead of v3 transactions.
* Apply the optional rules only to strict v2, and not higher or lower.


On Tue, Nov 4, 2014 at 12:07 PM, Peter Todd p...@petertodd.org wrote:
 On Tue, Nov 04, 2014 at 12:00:43PM -0800, Pieter Wuille wrote:
 On Tue, Nov 4, 2014 at 11:56 AM, Jeff Garzik jgar...@bitpay.com wrote:
  On Tue, Nov 4, 2014 at 8:13 PM, Peter Todd p...@petertodd.org wrote:
  On another topic, I'm skeptical of the choice of nVersion==3 - we'll
  likely end up doing more block.nVersion increases in the future, and
  there's no reason to think they'll have anything to do with
  transactions. No sense creating a rule that'll be so quickly broken.
 
  Moderately agreed.
 
  Earlier in BIP 62 lifetime, I had commented on ambiguity that arose
  from bumping tx version simply because we were bumping block version.
  The ambiguity was corrected, but IMO remains symptomatic of potential
  problems and confusion down the road.
 
  Though I ACK'd the change, my general preference remains to disconnect
  TX and block version.

 I prefer to see consensus rules as one set of rules (especially
 because they only really apply to blocks - the part for lone
 transactions is just policy), and thus have a single numbering. Still,
 I have no strong opinion about it and have now heard 3 'moderately
 against' comments. I'm fine with using nVersion==2 for transactions.

 Keep in mind that we may even have a circumstance where we need to
 introduce *two* different new tx version numbers in a single soft-fork,
 say because we find an exploit that has two different fixes, each of
 which breaks something.

 I don't think we have any certainty how new features will be added in
 the future - just look at how we only recently realised new opcodes
 won't be associated with tx version number bumps - so I'm loath to set
 up expectations.

 Besides, transactions can certainly be verified for correctness in a
 stand-alone fashion outside a block; CHECKLOCKTIMEVERIFY was
 specifically designed so that verifying scripts containing it could be
 done in a self-contained manner only referencing the transaction the
 script was within.

 --
 'peter'[:-1]@petertodd.org
 036655c955dd94ba7f9856814f3cb87f003e311566921807



[Bitcoin-development] Malleable booleans

2014-10-13 Thread Pieter Wuille
Hi all,

while working on a BIP62 implementation I discovered yet another type
of malleability: the interpretation of booleans.

Any byte array with non-zero bytes in it (ignoring the highest bit of
the last byte, which is the sign bit when interpreting as a number) is
interpreted as true, anything else as false. Other than numbers,
they're not even restricted to 4 bytes. Worse, the code for dealing
with booleans is not very consistent: OP_BOOLAND and OP_BOOLOR first
interpret their arguments as numbers, and then compare them to 0 to
turn them into boolean values.

This means that scripts that use booleans as inputs will be inherently
malleable. Given that that seems actually useful (passing in booleans
to guide some OP_IF's during execution of several alternatives), I
would like to change BIP62 to also state that interpreted booleans
must be of minimal encoded size (in addition to numbers).

Any opinions for or against?



[Bitcoin-development] Request for review/testing: headers-first synchronization in Bitcoin Core

2014-10-11 Thread Pieter Wuille
Hi all,

I believe that a large change that I've been working on for Bitcoin
Core is ready for review and testing: headers-first synchronization.
In short, it changes the way the best chain is discovered, downloaded
and verified, with several advantages:
* Parallel block downloading (much faster sync on typical network connections).
* No more stalled downloads.
* Much more robust against unresponsive or slow peers.
* Removes a class of DoS attacks related to peers feeding you
low-difficulty valid large blocks on a side branch.
* Reduces the need for checkpoints in the code.
* No orphan blocks stored in memory anymore (reducing memory usage during sync).
* A major step towards an SPV mode using the reference codebase.

Historically, this mode of operation has been known for years (Greg
Maxwell wrote up a description of a very similar method in
https://en.bitcoin.it/wiki/User:Gmaxwell/Reverse_header-fetching_sync
in early 2012, but it was known before that), but it took a long time
to refactor the code enough to support it.

Technically, it works by replacing the single-peer blocks download by
a single-peer headers download (which typically takes seconds/minutes)
and verification, and simultaneously fetching blocks along the best
known headers chain from all peers that are known to have the relevant
blocks. Downloading is constrained to a moving window to avoid
unbounded out-of-order storage of blocks on disk (which would interfere with
pruning later).
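A hypothetical sketch of that moving-window constraint (the function name, window size, and batch size are illustrative assumptions, not the actual implementation):

```python
def blocks_to_request(num_headers, have_block, in_flight, window=1024, batch=16):
    """Pick block heights to fetch next, never ranging more than `window`
    heights past the first block we are still missing, so blocks land on
    disk in a bounded-disorder fashion."""
    first_missing = 0
    while first_missing < num_headers and have_block(first_missing):
        first_missing += 1
    picks = []
    for h in range(first_missing, min(first_missing + window, num_headers)):
        if not have_block(h) and h not in in_flight:
            picks.append(h)
            if len(picks) == batch:
                break
    return picks
```

The window only advances once the lowest missing block arrives, which is what keeps on-disk disorder bounded.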

At the protocol level, it increases the minimally supported version
for peers to 31800 (corresponding to Bitcoin v0.3.18, released in
December 2010), as earlier versions did not support the getheaders P2P
message.

So, the code is available as a github pull request
(https://github.com/bitcoin/bitcoin/pull/4468), or packaged on
http://bitcoin.sipa.be/builds/headersfirst, where you can also find
binaries to test with.

Known issues:
* At the very start of the sync, especially before all headers are
processed, downloading is very slow due to a limited number of blocks
that are requested per peer simultaneously. The policies around this
will need some experimentation and can certainly be improved.
* Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
* The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support. If you are fully
synced, it may still be possible to go back to an earlier version.

Unknown issues:
* Who knows, maybe it will replace your family pictures with Nyan
Cat? Use at your own risk.

TL;DR: Review/test https://github.com/bitcoin/bitcoin/pull/4468 or
http://bitcoin.sipa.be/builds/headersfirst.

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-09-13 Thread Pieter Wuille
On Fri, Sep 12, 2014 at 6:35 PM, Pieter Wuille pieter.wui...@gmail.com wrote:
 Changes: https://github.com/bitcoin/bips/pull/102/files

 Gregory, Jeff: does this address your concerns?
 Others: comments?

I've made another change in the PR, as language about strictly only
compressed or uncompressed public keys was missing; please have a
look.

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-09-12 Thread Pieter Wuille
On Mon, Sep 8, 2014 at 1:31 AM, Pieter Wuille pieter.wui...@gmail.com wrote:
 I've sent out a new pull request
 (https://github.com/bitcoin/bips/pull/102/files) that:
 * Changes the order of the rules.
 * Adds more reference documentation about minimal pushes and number encodings.
 * Clarified that extra consensus rules cannot prevent someone from
 creating outputs whose spending transactions will be malleable.

 I haven't changed which rules are mandatory in v3, so this is a pure
 clarification and reorganization of the text.

Changes: https://github.com/bitcoin/bips/pull/102/files

Gregory, Jeff: does this address your concerns?
Others: comments?

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-09-07 Thread Pieter Wuille
On Wed, Sep 3, 2014 at 6:34 PM, Pieter Wuille pieter.wui...@gmail.com wrote:
 On Mon, Sep 1, 2014 at 10:48 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 Not related to this change but the definition of rule 4 may not be
 sufficiently specific-- without a definition someone could reasonably
 reach a different conclusion about OP_1NEGATE being a push
 operation, or might even decide any operation which added to the
 stack was a push operation.

 Good catch - I'll write an update soon.

 Perhaps the rules should be reordered so that the applicable to all
 transactions ones are contiguous and first?
 Ok.

 The first six and part of the seventh can be fixed by extra consensus rules.

 This should clarify that the scriptPubkey can still specify rules that
 are inherently malleable [...]
 I'll try to reword.

I've sent out a new pull request
(https://github.com/bitcoin/bips/pull/102/files) that:
* Changes the order of the rules.
* Adds more reference documentation about minimal pushes and number encodings.
* Clarified that extra consensus rules cannot prevent someone from
creating outputs whose spending transactions will be malleable.

I haven't changed which rules are mandatory in v3, so this is a pure
clarification and reorganization of the text.

Any comments?

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-09-03 Thread Pieter Wuille
On Mon, Sep 1, 2014 at 10:48 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 Not related to this change but the definition of rule 4 may not be
 sufficiently specific-- without a definition someone could reasonably
 reach a different conclusion about OP_1NEGATE being a push
 operation, or might even decide any operation which added to the
 stack was a push operation.

Good catch - I'll write an update soon.

 Any particular reason to enforce 2 and 4 but not also 5?  Violation of
 5 is already non-standard and like 2,4 should be safely enforceable.

Perhaps we can go further, and include 6 as well? I see zero use cases
for zero-padded numbers, as their interpretation is already identical
to the non-padded case. I wouldn't include 1 (as it would break a
large amount of wallets today), 3 (which may have a use case in more
complex scripts with conditionals) or 7 (the superfluous element
consumed by CHECKMULTISIG could potentially be used for something in
the future).
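For reference, Script numbers use a little-endian sign-magnitude encoding, so zero padding never changes the decoded value. A simplified decoder (illustrative, not the consensus code) showing why rule 6's padded forms are redundant:

```python
def decode_script_num(data: bytes) -> int:
    # Little-endian, sign-magnitude: the top bit of the last byte is the sign.
    if not data:
        return 0
    neg = bool(data[-1] & 0x80)
    magnitude = int.from_bytes(data[:-1] + bytes([data[-1] & 0x7F]), "little")
    return -magnitude if neg else magnitude

# Padded encodings decode to exactly the same value as the minimal one:
assert decode_script_num(b"\x05") == decode_script_num(b"\x05\x00") == 5
assert decode_script_num(b"\x85") == decode_script_num(b"\x05\x80") == -5
```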

 Perhaps the rules should be reordered so that the applicable to all
 transactions ones are contiguous and first?

Ok.

 The first six and part of the seventh can be fixed by extra consensus rules.

 This should clarify that the scriptPubkey can still specify rules that
 are inherently malleable-- e.g. require the input stack contain two
 pushes which OP_ADD to 11.  Or a more elaborate one-- a 1 of 2 check
 multisig where the pubkey not selected for signing is selected by a
 push in the signature. The current text seems to ignore isomorphisms
 of this type. ... they're not important for what the BIP is trying to
 achieve, but the document shouldn't cause people to not think that
 sort of thing exists.

I'll try to reword.

-- 
Pieter



Re: [Bitcoin-development] Reconsidering github

2014-08-23 Thread Pieter Wuille
On Sat, Aug 23, 2014 at 8:17 AM, Troy Benjegerdes ho...@hozed.org wrote:
 On Fri, Aug 22, 2014 at 09:20:11PM +0200, xor wrote:
 On Tuesday, August 19, 2014 08:02:37 AM Jeff Garzik wrote:
  It would be nice if the issues and git repo for Bitcoin Core were not
  on such a centralized service as github, nice and convenient as it is.

 Assuming there is a problem with that: it is usually caused by using Git the
 wrong way or not knowing its capabilities. Nobody can modify / insert a commit
 before a GnuPG signed commit / tag without breaking the signature.
 More detail at the bottom at [1], I am sparing you this here because I 
 suspect
 you already know it and there is something more important I want to stress:

Note that we're generally aiming (though not yet enforcing) to have
merges done through the github-merge tool, which performs the merge
locally, shows the resulting diff, compares it with the merge done by
github, and GnuPG signs it.

That allows using github as easy-access mechanism for people to
contribute and inspect, while having a higher security standard for
the actual changes done to master.

-- 
Pieter



Re: [Bitcoin-development] Outbound connections rotation

2014-08-18 Thread Pieter Wuille
Yes, I believe peer rotation is useful, but not for privacy - just for
improving the network's internal knowledge.

I haven't looked at the implementation yet, but how I imagined it would be
every X minutes you attempt a new outgoing connection, even if you're
already at the outbound limit. Then, if a connection attempt succeeds,
another connection (according to some scoring system) is replaced by it.
Given such a mechanism, plus reasonable assurances that better connections
survive for a longer time, I have no problem with rotating every few
minutes.
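A sketch of that mechanism (the names, limit, and scoring hook are assumptions for illustration, not the eventual implementation):

```python
MAX_OUTBOUND = 8  # illustrative outbound-connection limit

def try_rotate(peers, candidate, score):
    """peers: dict mapping peer -> score. Admit `candidate`; if we are at
    the outbound limit, evict the worst-scoring existing connection so that
    better connections tend to survive longer. Returns the evicted peer."""
    evicted = None
    if len(peers) >= MAX_OUTBOUND:
        evicted = min(peers, key=peers.get)
        del peers[evicted]
    peers[candidate] = score(candidate)
    return evicted
```

Called every X minutes after a successful connection attempt, this keeps the peer set full while steadily refreshing its weakest member.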
On Aug 18, 2014 7:23 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Mon, Aug 18, 2014 at 9:46 AM, Ivan Pustogarov ivan.pustoga...@uni.lu
 wrote:
  Hi there,
  I'd like to start a discussion on periodic rotation of outbound
 connections.
  E.g. every 2-10 minutes an outbound connections is dropped and replaced
  by a new one.

 Connection rotation would be fine for improving a node's knowledge
 about available peers and making the network stronger against
 partitioning.

 I haven't implemented this because I think your motivation is
 _precisely_ opposite the behavior. If you keep a constant set of
 outbound peers only those peers learn the origin of your transactions,
 and so it is unlikely that any particular attacker will gain strong
 evidence. If you rotate where you send out your transactions then with
 very high probability a sybil pretending to be many nodes will observe
 you transmitting directly.

 Ultimately, since the traffic is clear text, if you expect to have any
 privacy at all in your broadcasts you should be broadcasting over tor
 or i2p.




Re: [Bitcoin-development] Synchronization: 19.5 % orphaned blocks at height 197'324

2014-08-10 Thread Pieter Wuille
On Sun, Aug 10, 2014 at 4:07 PM, Bob McElrath bob_bitc...@mcelrath.org wrote:
 I had the same problem (repeatedly), which came down to a hardware problem.

This is actually an independent problem (though something to be aware
of). Flaky hardware can make synchronization fail completely - as it
relies on being able to exactly assess the validity of everything in
the blockchain.

Still...

 m...@bitwatch.co [m...@bitwatch.co] wrote:
 Hello all,

 I'm currently synchronizing a new node and right now, at a progress of a
height of 197'324 blocks, I count in my debug.log an awful amount of
 38'447 orphaned blocks which is about 19.5 %.

 It has been a while since I watched the synchronization process closely,
 but this number seems pretty high to me.

Orphan blocks during synchronization are unfortunately very common,
and the result of a mostly broken download logic in the client. They
are blocks that are further ahead in the chain than the point where
you're currently synchronized to, and thus can't be validated yet.
Note that 'orphan' here means 'we do not know the parent'; it doesn't
just mean 'not in the main chain'. They are blocks that are received
out of order.
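In pseudocode terms, the download logic parks such out-of-order blocks until their parent arrives (a simplified model with illustrative names, not the client's actual data structures):

```python
orphans = {}             # parent_hash -> blocks waiting for that parent
connected = {"genesis"}  # block hashes already connected to the chain

def accept_block(block_hash, parent_hash):
    if parent_hash not in connected:
        # Parent unknown: this is an "orphan" in the sense described above.
        orphans.setdefault(parent_hash, []).append(block_hash)
        return False
    connected.add(block_hash)
    # Connecting a block may release orphans that were waiting on it.
    for child in orphans.pop(block_hash, []):
        accept_block(child, block_hash)
    return True
```

During a naive sync, blocks arriving ahead of the current tip all land in the orphan pool, which is why the counts get so large.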

As Jeff mentions, headers-first synchronization fixes this problem
(and many other download-logic related things), by first verifying the
headers in the chain (thus already having partially validated
everything), and then downloading the blocks (not necessarily in
order) from multiple peers in parallel. There is
currently a pull request for it, but it's not production ready
(#4468).

 I'm wondering about the following: would it be possible for a malicious
 party to generate chains of blocks with low difficulity which are not
 part of the main chain to slow down the sync process?

Yes and no. While you're still synchronizing, and don't actually
know the best chain, a peer could send you stale branches (with valid
proof of work), which you would accept, store and process. But it has
to be done very early, as once you learn of a good-enough chain, a
branch with more proof of work would be required, due to some
heuristics designed exactly to prevent such an attack.

-- 
Pieter



Re: [Bitcoin-development] Abusive and broken bitcoin seeders

2014-07-30 Thread Pieter Wuille
At least my crawler (bitcoin-seeder:0.01) software shouldn't reconnect
more frequently than once every 15 minutes. But maybe the two
connections you saw were separate instances?

On Wed, Jul 30, 2014 at 3:50 PM, Wladimir laa...@gmail.com wrote:
 The version message helpfully tells me my own IP address but not theirs ;p

 Try -logips. Logging peer IPs was disabled by default after #3764.

 BTW I'm seeing the same abusive behavior. Who is running these? Why do
 the requests need to be so frequent?

 Wladimir



Re: [Bitcoin-development] Small update to BIP 62

2014-07-19 Thread Pieter Wuille
On Jul 18, 2014 4:56 PM, Wladimir laa...@gmail.com wrote:

 On Fri, Jul 18, 2014 at 5:39 PM, Mike Hearn m...@plan99.net wrote:
  The rationale doesn't seem to apply to rule #4, what's so special about
that
  one?

  4. Non-push operations in scriptSig Any non-push operation in a
scriptSig invalidates it.

 Having non-push operations in the scriptSig is a source of
 malleability, as there can be multiple sequences of opcodes that
 evaluate to the same result.

Well yes, but that is true for each of the rules and is already covered by
the previous specification in BIP62. Making it mandatory even for old
transactions does not really protect much against malleability, as several
other sources of malleability remain whose rules cannot be made mandatory
for old transactions.

The reason for including #4 is simply that allowing this does not benefit anyone.
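The malleability can be shown concretely: appending something like OP_DUP OP_DROP to a scriptSig changes the transaction bytes (and so its txid) without changing the evaluation result. A minimal sketch with a toy evaluator (only a few opcodes, purely illustrative):

```python
def eval_script(script: bytes):
    """Tiny Script evaluator: direct pushes plus OP_DUP/OP_DROP, just enough
    to show that different opcode sequences can leave an identical stack."""
    stack, i = [], 0
    while i < len(script):
        op = script[i]; i += 1
        if 1 <= op <= 75:            # direct push of `op` bytes
            stack.append(script[i:i + op]); i += op
        elif op == 0x76:             # OP_DUP
            stack.append(stack[-1])
        elif op == 0x75:             # OP_DROP
            stack.pop()
        else:
            raise ValueError("unsupported opcode")
    return stack

sig = b"\x30\x44" + b"\x00" * 68     # stand-in for a 70-byte signature
push_only = bytes([70]) + sig
with_noise = bytes([70]) + sig + bytes([0x76, 0x75])  # ... OP_DUP OP_DROP
assert eval_script(push_only) == eval_script(with_noise) == [sig]
```

Both scriptSigs validate identically, yet the transactions carrying them have different hashes, which is the malleability rule #4 closes off.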

-- 
Pieter


[Bitcoin-development] Small update to BIP 62

2014-07-18 Thread Pieter Wuille
Hi all,

I've sent a pull request to make a small change to BIP 62 (my
anti-malleability proposal) which is still a draft; see:
* https://github.com/bitcoin/bips/pull/90 (the request)
* https://github.com/sipa/bips/blob/bip62up/bip-0062.mediawiki (the result)

It makes two of the 7 new rules mandatory in new blocks, even for
old-style transactions. Both are already non-standard since 0.8.0, and
have no use cases in my opinion.

The reason for this change is dropping the requirement for signature
verification engines to be bug-for-bug compatible with OpenSSL (which
supports many non-standard encodings for signatures). Requiring strict
DER compliance for signatures means any implementation just needs to
support DER.
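A hedged sketch of what such a strict-DER check looks like (simplified: it ignores the trailing sighash byte and other details a full consensus rule would have to cover; the bounds and structure follow the standard DER layout 0x30 len 0x02 len_r R 0x02 len_s S):

```python
def is_strict_der(sig: bytes) -> bool:
    """Return True iff `sig` is a strictly DER-encoded ECDSA signature
    (without a sighash byte). Illustrative only."""
    if not (8 <= len(sig) <= 72):
        return False
    if sig[0] != 0x30 or sig[1] != len(sig) - 2:
        return False
    if sig[2] != 0x02:
        return False
    len_r = sig[3]
    if len_r == 0 or 5 + len_r >= len(sig):
        return False
    if sig[4] & 0x80:                                   # R must not be negative
        return False
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False                                    # no redundant R padding
    if sig[4 + len_r] != 0x02:
        return False
    len_s = sig[5 + len_r]
    if len_s == 0 or 6 + len_r + len_s != len(sig):
        return False
    if sig[6 + len_r] & 0x80:                           # S must not be negative
        return False
    if len_s > 1 and sig[6 + len_r] == 0x00 and not (sig[7 + len_r] & 0x80):
        return False                                    # no redundant S padding
    return True
```

Rejecting padded or otherwise non-canonical encodings is exactly what frees verifiers from mimicking OpenSSL's permissive parser.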

Comments?

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-07-18 Thread Pieter Wuille
On Fri, Jul 18, 2014 at 5:39 PM, Mike Hearn m...@plan99.net wrote:
 The rationale doesn't seem to apply to rule #4, what's so special about that
 one?

Nothing really. If it's controversial in any way, I'm fine with
changing that. It's just one those things that nobody needs, nobody
uses, has never been standard, and shouldn't have been possible in the
first place IMHO. Given that, it's easier to just make it a consensus
rule.

 Although I agree not having to support all of DER is nice, in practice I
 think all implementations do and libraries to parse DER are widespread.
 Given that the last time we modified tx rules without bumping version
 numbers we managed to break the only functioning iPhone client, I've become
 a big fan of backwards compatibility: seems the default choice should be to
 preserve compatibility over technical niceness until the old versions have
 been fully phased out.

I'm not comfortable with dropping OpenSSL-based signature parsing
until we have well-defined rules about which encodings are valid. At
this point I'm not even convinced we *know* about all possible ways to
modify signature encodings without invalidating them.

But perhaps we should investigate how many non-DER signatures still
make it into blocks first...

-- 
Pieter



Re: [Bitcoin-development] Small update to BIP 62

2014-07-18 Thread Pieter Wuille
On Fri, Jul 18, 2014 at 5:45 PM, Pieter Wuille pieter.wui...@gmail.com wrote:
 But perhaps we should investigate how many non-DER signatures still
 make it into blocks first...

In the last 11 blocks (4148 transactions), apparently none.

-- 
Pieter



Re: [Bitcoin-development] Plans to separate wallet from core

2014-06-24 Thread Pieter Wuille
I'd like to point out that there is quite a difference between what
core nodes should be like and what the codebase core nodes are built
from must support.

Given sufficiently modularized code (which I think everyone seems to
be in favor of, regardless of the goals), you can likely build a
binary that does full verification and maintains some indexes of some
sort.

I still believe that what we push for to run as the core nodes of the
network should aim for purely verification and relay, and nothing
else, but people can and will do things differently if the source code
allows it. And that's fine.

On Tue, Jun 24, 2014 at 3:26 PM, Jorge Timón jti...@monetize.io wrote:
 I think this is my main question, what's the advantage of having the
 processes talking via the p2p protocol instead of something more
 direct when you control both processes?

IMHO, maintaining a correct view of the current state of the chain
(excluding blocks, just headers) is already sufficiently hard (I hope
that everyone who ever implemented an SPV wallet can agree). You
simplify things a bit by not needing to verify what the peer claims if
you trust them, but not much. You still need to support
reorganizations, counting confirmations, making sure you stay
up-to-date. These are functions the (SPV) P2P protocol has already
shown to do well, and there are several codebases out there that
implement it. No need to reinvent the wheel with a marginally more
efficient protocol, if it means starting over everything else.

-- 
Pieter



Re: [Bitcoin-development] Bitcoin miner heads-up: getwork RPC going away

2014-06-21 Thread Pieter Wuille
On Sat, Jun 7, 2014 at 10:29 AM, Wladimir laa...@gmail.com wrote:
 On Sat, Jun 7, 2014 at 3:55 AM, Jeff Garzik jgar...@bitpay.com wrote:
 As such, it is planned to remove getwork in the next release.  Most
 if not all pool servers have switched away from getwork years ago.

 To be clear, they switched to getblocktemplate and submitblock
 which provides a much more flexible and scalable way to do mining.
 This is explained in https://en.bitcoin.it/wiki/Getblocktemplate .

As there doesn't seem to be any objection, we may go ahead and merge
https://github.com/bitcoin/bitcoin/pull/4100 (which among other things
removes the getwork RPC).

-- 
Pieter



Re: [Bitcoin-development] Possible attack: Keeping unconfirmed transactions

2014-06-06 Thread Pieter Wuille
Whenever you do a reissuing of a transaction that didn't go through
earlier, you should make sure to reuse one of the inputs for it. That
guarantees that both cannot confirm simultaneously.
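The guarantee follows from the UTXO model: an outpoint can be spent at most once, so two transactions that share any input are mutually exclusive. A trivial sketch:

```python
def conflicts(tx_a, tx_b):
    """tx_a, tx_b: sets of (txid, vout) outpoints being spent.
    If they overlap, at most one of the two transactions can ever confirm."""
    return bool(tx_a & tx_b)

first_attempt = {("aaaa", 0), ("bbbb", 1)}
reissue = {("aaaa", 0), ("cccc", 0)}   # reuses an input from the first attempt
assert conflicts(first_attempt, reissue)
```

A reissued payment built this way can never confirm alongside the original, which defeats the replay described below.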

On Sat, Jun 7, 2014 at 12:21 AM, Raúl Martínez r...@i-rme.es wrote:
 Alice does not intercept the transaction, she only saves it and expects that
 it will not be confirmed (because has 0 fee for example).

 Also using the Payment Protocol I believe that Alice is the only person that
 can relay Bob's transaction.

 Source: https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki

 When the merchant's server receives the Payment message, it must determine
 whether or not the transactions satisfy conditions of payment. If and only
 if they do, it should broadcast the transaction(s) on the Bitcoin p2p
 network.



 2014-06-07 0:11 GMT+02:00 Toshi Morita to...@peernova.com:

 From what I know, Alice does not know to which node Bob will broadcast the
 transaction. Therefore, Alice cannot intercept the transaction and prevent
 the rest of the network from seeing it.

 Toshi



 On Fri, Jun 6, 2014 at 3:02 PM, Raúl Martínez r...@i-rme.es wrote:

  I don't know if this attack is even possible; it came to my mind and I
  will try to explain it as well as possible.

  Some transactions stay unconfirmed forever and are finally purged by
  Bitcoin nodes, mostly due to the lack of fees.


 Example:
 -

 Alice is selling a pizza to Bob, Bob is now making the payment with
 Bitcoin.
 The main goal of this attack is to store a unconfirmed transaction send
 by Bob for a few days (it will not be included in the blockchain because it
 has no fee or due to other reason), Bob might resend the payment or might
 just cancel the deal with Alice.

 Bob forgets about that failed trade but a couple of days later, Alice,
  who has stored the signed transaction, relays the transaction to the network
  (or mines it directly with her own hashpower).
 Bob does not know what is happening, he believed that that transaction
 was canceled forever, he even does not remember the failed pizza deal.

 Alice has now the bitcoins and Bob does not know what happened with his
 money.

 -

 This might also work with the Payment Protocol because when using it Bob
 does not relay the transaction to the network, its Alices job to do it,
 Alice stores it and tells Bob to resend the payment, Bob creates another
 transaction (If has the same inputs as the first TX this does not work)
 (this one is relayed by Alice to the network).

  Alice comes back a couple of days later and mines with her hashrate the
  first transaction (the one she didn't relay to the network).

 Alice now has two payments, Bob does not know what happened.


 ---

  I hope that I explained this possible attack well; I don't know if there
 is already a fix for this problem or if it is simply impossible to execute
 this kind of attack.

 Thanks for your time.














Re: [Bitcoin-development] Future Feature Proposal - getgist

2014-06-05 Thread Pieter Wuille
On Thu, Jun 5, 2014 at 7:43 PM, Richard Moore m...@ricmoo.com wrote:
 I was considering names like getcheckpoints() to use the terminology that
 already seemed to be in place, but they were too long :)

 I have been using getheaders() in my thick client to quickly grab all the
 headers before downloading the full blocks since I can grab more at a time.
 Even with getblocks(), there is a case for a getgist() call. Right now
 you call getblocks(), which can take some time to get the corresponding
 inv(), at which time you can then start the call to getdata() as well as the
 next call to getblocks().

 With a gist, for example of segment_count 50, you could call getgist(), then
 with the response, you could request 50 getblocks() each with a
 block_locator of 1 hash from the gist (and optimally the stop_hash of the
 next hash in the gist) to 50 different peers, providing 25,000 (50 x 500)
 block hashes.

I don't understand. If you're using getheaders(), there is no need to
use getblocks() anymore. You just do a getdata() immediately for the
block hashes you have the headers but not the transactions for.

In general, I think we should aim for as much verifiability as
possible. Much of the reference client's design is built around doing
as much validation on received data as soon as possible, to avoid
being misled by a particular peer. Getheaders() provides this: you
receive a set of headers starting from a point you already know, in
order, and can validate them syntactically and for proof-of-work
immediately. That allows building a very-hard-to-beat tree structure
locally already, at which point you can start requesting blocks along
the good branches of that tree immediately - potentially in parallel
from multiple peers. In fact, that's the planned approach for the
headers-first synchronization.

The getgist() proposed here allows the peer to basically give you
bullshit headers at the end, and you won't notice until you've
downloaded every single block (or at least every single header) up to
that point. That risks wasting time, bandwidth and diskspace,
depending on implementation.

Based on earlier experimenting with my former experimental
headersfirst branch, it's quite possible to have 2 mostly independent
synchronization mechanisms going on; 1) asking and downloading headers
from every peer, and validating them, and 2) asking and downloading
blocks from multiple peers in parallel, for blocks corresponding to
validated headers. Downloading the headers succeeds within minutes,
and within seconds you have enough to start fetching blocks. After
that point, you can keep a download window full with outstanding
block requests, and as blocks go much slower than headers, the headers
process never becomes a blocker for blocks to download.
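The two-track scheme described above can be sketched in a toy model. This is illustrative only: the header format is invented, `validate_chain` stands in for the real syntactic and proof-of-work checks, and `schedule_blocks` is a stand-in for the real moving-window downloader.

```python
import hashlib
from collections import deque

def dbl_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy header: (prev_hash, payload). Its hash commits to the previous
# header, so links can be checked the moment headers arrive.
def header_hash(prev_hash: bytes, payload: bytes) -> bytes:
    return dbl_sha256(prev_hash + payload)

def validate_chain(genesis_hash, headers):
    """Accept headers only while each links to the validated tip; in a
    real client this is also where proof-of-work is checked, so a peer
    feeding bogus headers is caught immediately, before any block data
    is fetched."""
    tip, validated = genesis_hash, []
    for prev, payload in headers:
        if prev != tip:
            break  # first bad link: reject the rest of the batch
        tip = header_hash(prev, payload)
        validated.append(tip)
    return validated

def schedule_blocks(validated_hashes, peers):
    """Spread block requests for already-validated headers over multiple
    peers, round-robin."""
    schedule = {p: deque() for p in peers}
    for i, h in enumerate(validated_hashes):
        schedule[peers[i % len(peers)]].append(h)
    return schedule

genesis = dbl_sha256(b"genesis")
headers, tip = [], genesis
for i in range(6):
    headers.append((tip, bytes([i])))
    tip = header_hash(tip, bytes([i]))

good = validate_chain(genesis, headers)
tampered = headers[:3] + [(dbl_sha256(b"bogus"), b"x")] + headers[4:]
print(len(good), len(validate_chain(genesis, tampered)))  # 6 3
```

Contrast with the gist approach: here the bogus entry is rejected at header-validation time, before a single block for it is requested.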

Unless we're talking about a system with billions of headers to
download, I don't think this is a worthwhile optimization.

-- 
Pieter

--
Learn Graph Databases - Download FREE O'Reilly Book
Graph Databases is the definitive new guide to graph databases and their 
applications. Written by three acclaimed leaders in the field, 
this first edition is now available. Download your free book today!
http://p.sf.net/sfu/NeoTech
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] testnet-seed.bitcoin.petertodd.org is up again

2014-05-30 Thread Pieter Wuille
I don't think it would be too hard to add support for a option to the
seeder for non-matching requests, forward to other DNS server at
IP:PORT, so you could cascade them.

On Fri, May 30, 2014 at 4:51 PM, Robert McKay rob...@mckay.com wrote:
 No, I don't think so. The problem is the 'aa' flag is missing (see the
 'flags' section in dig). Perhaps if you could suppress the authority
 records the recursor would give up and just accept the non-authoritative
 answer, but that isn't a good solution even if it might work for some
 resolvers.

 Rob

 On Fri, 30 May 2014 15:13:36 +0100, Alex Kotenko wrote:
 Hmm, you might be right, as queries

  dig @node.alexykot.me testnet-seed.alexykot.me

 and

  dig @node.alexykot.me -p 18353 testnet-seed.alexykot.me

 are giving different authority sections.

 Hmm, but if I set up a custom SOA record for it - it should work,
 right? What SOA name should it be actually, assuming that NS record for
 testnet-seed.alexykot.me is pointing at alexykot.me?

 Best regards,

 Alex Kotenko

 2014-05-30 14:41 GMT+01:00 Robert McKay :

 Hi Alex,

 I think the problem is with my suggestion to use bind forwarding..
 basically bind is stripping off the authoritative answer bit in the
 reply.. this causes the recursor to go into a loop chasing the
 authority server which again returns a non-authoritative answer with
 itself as the authority again. I'm not sure if this can be fixed
 without hacking the bind src, so maybe it wasn't such a great
 suggestion in the first place. Basically I think if bind was
 returning authoritative answers it would work, but I can't see any way
 to make that happen in the bind configuration.

 Rob

 On Fri, 30 May 2014 14:19:05 +0100, Alex Kotenko wrote:

 Hi Peter

 I've set up DNS seeds myself a week ago, at testnet-seed.alexykot.me
 and bitcoin-seed.alexykot.me, but there is a problem with DNS
 settings that we with Andreas couldn't sort out quickly.

 The problem itself is that I can reach my nameserver and get a dnsseed
 response if I query it directly with

  dig @node.alexykot.me testnet-seed.alexykot.me
  dig @node.alexykot.me bitcoin-seed.alexykot.me

 But when I try nslookup testnet-seed.alexykot.me - it fails.
 I guess the problem is in my DNS settings but I can't figure out
 what it is.

 So could you share how you configured DNS for your seed to help me
 debug mine?

 Best regards,

 Alex Kotenko









Re: [Bitcoin-development] testnet-seed.bitcoin.petertodd.org is up again

2014-05-30 Thread Pieter Wuille
On Fri, May 30, 2014 at 5:40 PM, Andreas Schildbach
andr...@schildbach.de wrote:
 I maybe have made this suggestion in the past, but why don't we teach
 the seeder (or maybe even plain bitcoind) how to write a zone file and
 then use matured DNS servers to serve this zone?

 I admit I never ran my own DNS so I'm not sure if that can work -- but
 to me it sounds like the easiest approach plus everyone can just use
 stock server software.

That's what Matt's implementation is doing. You don't have to run mine :)

I chose not to do so, as I wanted to be able to serve a different
response to every query, but more diversity is a good thing.
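A seeder that writes a zone file for a stock DNS server, as suggested above, could look roughly like this. The domain, nameserver, and addresses are made-up placeholders (documentation ranges), and the SOA timer values are arbitrary:

```python
import time

def render_zone(domain: str, ns: str, node_ips: list, ttl: int = 60) -> str:
    """Render a minimal zone file for a stock DNS server: a short TTL so
    stale nodes age out quickly, and a time-based serial so secondaries
    pick up each regeneration."""
    serial = int(time.time())
    lines = [
        "$TTL %d" % ttl,
        "@ IN SOA %s. hostmaster.%s. (%d 3600 600 86400 %d)"
        % (ns, domain, serial, ttl),
        "@ IN NS %s." % ns,
    ]
    for ip in node_ips:
        rtype = "AAAA" if ":" in ip else "A"  # v6 vs v4 record
        lines.append("@ IN %s %s" % (rtype, ip))
    return "\n".join(lines) + "\n"

# Addresses below are documentation ranges, not real seed nodes.
zone = render_zone("dnsseed.example.org", "ns1.example.org",
                   ["192.0.2.1", "198.51.100.7", "2001:db8::1"])
print(zone)
```

The regenerated file would then be reloaded by the DNS server on a timer; every querier gets the same answer, which is the trade-off mentioned above.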

-- 
Pieter



Re: [Bitcoin-development] statoshi.info is now live

2014-05-14 Thread Pieter Wuille
On May 14, 2014 1:42 PM, Jameson Lopp jameson.l...@gmail.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Thanks; I've received several suggestions for other metrics to collect
that I hope to implement soon, but you're right in that tracking per-peer
pings is a different type of metric than what I'm currently collecting. I
actually noted the lack of pong messages in a post I made a few weeks ago:
http://coinchomp.com/2014/04/27/peeking-hood-running-bitcoin-node/

See pull request #2784.

(Sent from my phone)

-- 
Pieter


Re: [Bitcoin-development] ECDH in the payment protocol

2014-05-09 Thread Pieter Wuille
I believe stealth addresses and the payment protocol both have their
use cases, and that they don't overlap.

If you do not want to communicate with the receiver, you typically do
not want them to know who is paying or for what (otherwise you're
already talking to them in some way, right?). That's perfect for
things like anonymous donations.

In pretty much every other case, communicating directly with the
receiver has benefits. Negotiation of the transaction details,
messages associated with the transaction, refund information, no need
to scan the blockchain for incoming transaction... and the ability to
cancel if either party doesn't agree.

Instead of adding stealth functionality to the payment protocol as a
last resort, I'd rather see the payment protocol improve its
atomicity. Either you want the data channel sender-receiver, or you
don't. If it isn't available and you want it, things should fail. If
you don't want it, you shouldn't try to use it in the first place.

On Fri, May 9, 2014 at 5:34 PM, Mike Hearn m...@plan99.net wrote:
 Ah, you're still misunderstanding my point: You can get atomicity in the
 worst-case where the communications medium fails *and* stealth payments
 that use up no extra space in the blockchain. This gives you the best of
 both worlds.


 Sounds great! How does a lightweight client identify such transactions
 without any markers?

 Regardless, there are lots of other useful features that require BIP70 to
 work well person to person, like messages, refund addresses, etc. So
 extending it with ECDH makes sense in the end anyway no matter what.





Re: [Bitcoin-development] ECDH in the payment protocol

2014-05-09 Thread Pieter Wuille
On Fri, May 9, 2014 at 8:13 PM, Peter Todd p...@petertodd.org wrote:
 I don't think we're going to find that's practical unfortunately due to
 change. Every payment I make ties up txouts, so if we try to base the
 atomicity of payments on whether or not the payee decides to broadcast
 the transaction the payor is stuck with txouts that they can't use until
 the payee makes up their mind. That leads to lots and lots of nasty edge
 cases.

I haven't talked much about it except for on IRC, but my idea was this:
* PaymentACK messages become signed (with the same key as the payment
request, or using some delegation mechanism, so that the same key
doesn't need to remain online).
* Instead of a full Bitcoin transaction, a Payment message contains a
scriptSig-less Bitcoin transaction + a limit on its byte size (and
perhaps a limit on its sigop count).
* The sender sends such a Payment to the receiver before actually
signing the transaction (or at least, before revealing the signed
form).
* The receiver only ACKs if the transaction satisfies the request, is
within time limits, and isn't unlikely to confirm.
* If the sender likes the ACK (the refund and memo fields are intact,
the transaction isn't changed, the signature key is valid, ...), he
either sends the full transaction (with receiver taking responsibility
for broadcasting) or broadcasts it himself.

Together, this means that a paymentACK + a confirmed matching Bitcoin
transaction can count as proof of payment. Both parties have a chance
to disagree with the transaction, and are certain all communicated
data (apart from transaction signatures) in both directions happened
correctly before committing. It would completely remove the chance
that the Bitcoin transaction gets broadcast without the receiver
liking it (for legitimate or illegitimate reasons), or without the
backchannel functioning correctly.

It's also compatible with doing multiple payments in one Bitcoin
transaction - you can ask for ACKs from multiple parties before
signing the transaction.

Of course, the sender can still withhold the signed transaction (in
which case nothing happened, but we probably need a second timeout),
or the receiver can always claim to have never received the
transaction. The sender can broadcast the transaction himself in order
to prevent that, after obtaining an ACK.
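The round trips above can be condensed into a sketch. Everything here is illustrative, not BIP70 as deployed: the message shapes, the `satisfies` check, and the toy signature scheme are all placeholders.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    unsigned_tx: str   # scriptSig-less transaction (placeholder encoding)
    max_size: int      # limit on the final transaction's byte size
    max_sigops: int    # limit on its sigop count

@dataclass
class PaymentACK:
    payment: Payment
    memo: str
    signature: str     # signed with the payment-request key

def satisfies(tx: str, amount: int) -> bool:
    # Stand-in for "pays the requested outputs, within time limits,
    # not unlikely to confirm".
    return ("pay:%d" % amount) in tx

def receiver_ack(payment, amount, sign):
    """Receiver side: only ACK a transaction that satisfies the request;
    the ACK is signed so it can later serve as proof of payment."""
    if not satisfies(payment.unsigned_tx, amount):
        return None
    return PaymentACK(payment, "ok", sign(payment.unsigned_tx))

def sender_commit(ack, verify, sign_tx, broadcast):
    """Sender side: only after checking the ACK are the transaction
    signatures revealed and the transaction broadcast, so neither party
    commits before both have agreed."""
    if ack is None or not verify(ack.signature, ack.payment.unsigned_tx):
        return False
    broadcast(sign_tx(ack.payment.unsigned_tx))
    return True

# Toy signature scheme, just to make the flow executable.
sign = lambda m: "sig(%s)" % m
verify = lambda s, m: s == "sig(%s)" % m

sent = []
ok = sender_commit(receiver_ack(Payment("pay:100", 1000, 20), 100, sign),
                   verify, lambda tx: "signed:" + tx, sent.append)
print(ok, sent)  # True ['signed:pay:100']
```

If the receiver declines (returns no ACK), nothing is broadcast and no funds move, which is the atomicity property argued for above.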

-- 
Pieter



Re: [Bitcoin-development] Bug in key.cpp

2014-05-06 Thread Pieter Wuille
On Tue, May 6, 2014 at 5:12 AM,  odinn.cyberguerri...@riseup.net wrote:
 You are right there is a bug in there.

 But the value is not 25% I think.  Tinker some more. :-)


 Afaict, 3 is a perfectly valid value, meaning 25% of sig-to-key recoveries
 would fail erroneously...

Values 2 and 3 are only needed in theory. They together shouldn't
occur more than once in 2**127 (when the signature value is between
the group size and the field size).

That said, this is indeed a bug.
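The "once in 2**127" figure can be checked directly from the curve parameters: recovery ids 2 and 3 are only needed when the x coordinate behind the signature's r falls in [n, p), which for a uniformly random coordinate happens with probability (p - n)/p.

```python
# secp256k1 parameters: field size p and group order n.
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Probability that a random x coordinate lands between the group order
# and the field size, forcing recovery id 2 or 3.
prob = (p - n) / p
print(2**-128 < prob < 2**-127)  # True
```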

-- 
Pieter



Re: [Bitcoin-development] BIP32 wallet structure in use? Remove it?

2014-04-26 Thread Pieter Wuille
On Sat, Apr 26, 2014 at 2:24 PM, Pavol Rusnak st...@gk2.sk wrote:
 On 04/26/2014 12:48 PM, Tier Nolan wrote:
 Maybe the solution is to have a defined way to import an unknown wallet?

 That is nonsense. There is no way how to import the wallet if you don't
 know its structure.

I agree. Especially when multiple chains are combined (multisig) for
P2SH usage, defining things like a gap limit becomes impossible
without knowing some metadata.

However, perhaps it is possible to define something like BIP44
import-compatible, meaning that the application doesn't actually
support all of BIP44 features, but does guarantee not losing any funds
when imported? Similar things could be done for other purpose types.

-- 
Pieter



Re: [Bitcoin-development] New BIP32 structure

2014-04-24 Thread Pieter Wuille
On Thu, Apr 24, 2014 at 8:54 AM, Thomas Voegtlin thoma...@gmx.de wrote:
 Why do clients need to use the features in BIP 64? If Electrum doesn't want
 to use accounts, [...]

 To clarify:
 Electrum plans to have bip32 accounts; Multibit will not, afaik.

To clarify:
BIP64 has a much stricter definition for accounts than BIP32.

In BIP32, it is not well specified what accounts are used for. They
can be used for subwallets, receive accounts (as in bitcoind's
account feature), recurring payments, part of a chain used as
multisig addresses, ... determined individually for each index.

In BIP64, they are strictly used for subwallets, and can't be used by
anything else.

-- 
Pieter



Re: [Bitcoin-development] Coinbase reallocation to discourage Finney attacks

2014-04-23 Thread Pieter Wuille
On Wed, Apr 23, 2014 at 5:34 PM, Kevin kevinsisco61...@gmail.com wrote:
 I have some questions:
 1.  How can we work towards solving the double-spending problem?

We have this awesome technology that solves the double-spending
problem. It's called a blockchain. Of course, it only works when
transactions are actually in a block.

This issue is about double-spending preventing before they're
confirmed. This is (and has always been) just a best-effort mechanism
in the network.

 2.  Is it possible to scan for double-spending and correct it?

That is what is being proposed here, by introducing a mechanism where
miners can vote to penalize other miners if they seem to allow (too
many?) double spends.

 3.  Is the network at large not secure enough?

Not very relevant.

-- 
Pieter



Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Pieter Wuille
On Tue, Apr 8, 2014 at 5:41 PM, slush sl...@centrum.cz wrote:
 I've discussed the solution of Litecoin seed in BIP32 HMAC with Litecoin
 devs already, and after long discussion we've concluded that it is generally
 bad idea.

 When changing Bitcoin seed constant to something different, same *entropy*
 will produce different *master node*. That's actually the opposite what's
 requested, because xprv serialization format stores *node*, not *entropy*.
 By changing HMAC constant, you still won't be able to store one node and
 derive wallets for multiple coins at same time.

Storing the seed is superior to storing the master node already
(whether coin specific or not), as it is smaller.

All this changes is making the seed the super master which allows
generating the coin-specific masters (which get an actual useful
function: revealing your entire-tree, but only one coin's subset of
it).

 * Every encoded node (including master nodes) has a chain-specific
 serialization magic.

 This is in practice almost the same as your suggestion, except that
 the m/cointype' in m/cointype'/account'/change/n is replaced by
 different masters. The only disadvantage I see is that you do not have
 a way to encode the super master that is the parent of all
 chain-specific masters. You can - and with the same security
 properties - encode the seed, though.


 Actually I don't understand why there's such disagreement about cointype
 level here, what it breaks? I see it as the cleanest solution so far. It is
 forward and backward compatible, does need any special extension to bip32
 (to be strict, bip32 says Bitcoin seed, so client using Litecoin seed
 cannot be bip32 compatible).

Fair enough, it would break strictly BIP32. Then again, BIP32 is a
*Bitcoin* improvement proposal, and not something that necessarily
applies to other coins (they can adopt it of course, I don't care).

What I dislike is that this removes the ability of using the magic in
the serialization to prevent importing a chain from the wrong coin.
The standard could just say that instead of Bitcoin seed, you'd use
Coin seed:  + magic, so you don't need an extra mapping from
cointype to seed strings.
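Concretely, this only changes the HMAC key in BIP32's master-node derivation. In the sketch below, the "Coin seed: Litecoin" string is the hypothetical convention under discussion, not an adopted standard:

```python
import hmac, hashlib

def master_from_seed(seed: bytes, key: bytes = b"Bitcoin seed"):
    """BIP32 master node: I = HMAC-SHA512(key, seed); the left half is
    the master private key, the right half the chain code."""
    digest = hmac.new(key, seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]

seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # BIP32 test vector 1
btc_key, btc_chain = master_from_seed(seed)
# Hypothetical coin-specific master: same entropy, different node.
ltc_key, ltc_chain = master_from_seed(seed, b"Coin seed: Litecoin")
print(btc_key != ltc_key)  # True
```

Storing the 16-byte seed thus remains the compact "super master" from which each coin's master node can be rederived.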

-- 
Pieter



Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Pieter Wuille
That's the point. BIP64 specifies such a structure, and you have to scan
all that it defines.

If you want to write wallet software that avoids that complexity and deals
with just one account, it is not BIP64 compliant. It could try to
define its own purpose system, with a hierarchy without accounts in it. I'm
not sure this is a very interesting use case, but I like how strict it is.
Construction of related chains for multisig addresses is perhaps a better
example of a different purpose structure.
 On 23 Apr 2014 22:03, Luke-Jr l...@dashjr.org wrote:

 On Wednesday, April 23, 2014 7:57:46 PM slush wrote:
  On Wed, Apr 23, 2014 at 9:55 PM, Luke-Jr l...@dashjr.org wrote:
   Any wallet should import all the coins just fine, it just wouldn't
 *use*
   any
   account other than 0. Remember addresses are used to receive bitcoins;
   once the UTXOs are in the wallet, they are no longer associated with
 the
   address or
   any other details of how they were received.
 
  Wallet don't see UTXO until it scans all branches/accounts on HD node
  import.

 Yes, it should scan all possible (under the BIP-defined structure) branches
 regardless of which features it supports.

 Luke





Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Pieter Wuille
I believe Luke means scanning chains as defined by the structure, but not
handing out addresses from other accounts than the first one.

That's certainly a possibly way to compatibly implement BIP64, but it
doesn't reduce all that much complexity. I hope people would choose that
over defining their own accountless structure though.
On 23 Apr 2014 22:06, Pavol Rusnak st...@gk2.sk wrote:

 On 04/23/2014 10:01 PM, Luke-Jr wrote:
  Yes, it should scan all possible (under the BIP-defined structure)
 branches
  regardless of which features it supports.

 So you suggest to scan for accounts, show balances but don't allow user
 to spend them? Does not seem right to me.

 --
 Best Regards / S pozdravom,

 Pavol Rusnak st...@gk2.sk





Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Pieter Wuille
On Wed, Apr 23, 2014 at 10:43 PM, Pavol Rusnak st...@gk2.sk wrote:
 On 04/23/2014 10:41 PM, Luke-Jr wrote:
 I don't see how. The user knows he has money in different subwallets. As long
 as he has a way to specify which subwallet he is accessing in
 single-subwallet clients, there shouldn't be a problem.

 Right. But these clients have no right to call themselves BIP64
 compatible then.

Would you consider software which scans all accounts as specified by
BIP64, but has no user interface option to distinguish them in any
way, view them independently, and has no ability to keep the coins
apart... compatible with BIP64?

According to the argument here mentioned earlier (all or nothing),
it is, as it will not break interoperability with other BIP64
software. Still, it doesn't support the accounts feature, and perhaps
that's fine?

-- 
Pieter



Re: [Bitcoin-development] New BIP32 structure

2014-04-23 Thread Pieter Wuille
On Wed, Apr 23, 2014 at 11:33 PM, Pavol Rusnak st...@gk2.sk wrote:
 On 04/23/2014 11:22 PM, Gregory Maxwell wrote:
 Hopefully it would be clarified as only a MUST NOT do so silently...
 I have funds split across two wallets and it WON'T LET ME SPEND THEM
 sounds like a terrible user experience. :)

 That is a subjective matter. To me it makes PERFECT SENSE that funds
 across accounts NEVER MIX. One can still send funds from one account to
 another and then perform another spend.

In that case, maybe it makes sense to define another purpose id
without accounts as well already.

I believe many simple wallets will find multiple subwallets too
burdening for the user experience, or not worth the technical
complexity.

-- 
Pieter



Re: [Bitcoin-development] bits: Unit of account

2014-04-20 Thread Pieter Wuille
I told him specifically to bring it here (on a pull request for
Bitcoin Core), as there is no point in making such convention changes
to just one client.

I wasn't aware of any discussion about the bits proposal here before.

On Sun, Apr 20, 2014 at 4:28 PM, Tamas Blummer ta...@bitsofproof.com wrote:
 People on this list are mostly engineers who have no problem dealing with
 magnitudes and have rather limited empathy for people who have a problem
 with them.
 They also tend to think, that because they invented money 2.0 they would not
 need to care of finance's or people's current customs.

 The importance of their decisions in these questions will fade as people
 already use wallets other than the core.

 Bring this particular discussion elsewhere, to the wallet developer.

 BTW the topic was discussed here several times, you have my support and Jeff
 Garzik's.

 Regards,

 Tamas Blummer
 http://bitsofproof.com

 On 20.04.2014, at 15:15, Rob Golding rob.gold...@astutium.com wrote:

 The average person is not going to be confident that the prefix they
 are using is the correct one,


 The use of any 'prefix' is one of choice and entirely unnecessary, and there
 are already established 'divisions' in u/mBTC for those that feel they need
 to use such things.

 people WILL send 1000x more or less than
 intended if we go down this road,


 Exceptionally unlikely - I deal every day with currencies with 0, 2 and 3
 dp's in amount ranging from 'under 1 whole unit' to tens of thousands - Not
 once in 20 years has anyone ever 'sent' more or less than intended - oh,
 they've 'intended' to underpay just fine, but never *unintended*.

 I propose that users are offered a preference to denominate the
 Bitcoin currency in a unit called a bit. Where one bitcoin (BTC)
 equals one million bits (bits) and one bit equals 100 satoshis.


 I propose that for people unable to understand what a bitcoin is, they can
 just use satoshi's and drop this entire proposal.

 Rob









Re: [Bitcoin-development] bits: Unit of account

2014-04-20 Thread Pieter Wuille
On Apr 21, 2014 3:37 AM, Un Ix slashdevn...@hotmail.com wrote:

 Something tells me this would be reduced to a single syllable in common
usage I.e. bit.

What units will be called colloquially is not something developers will
determine. It will vary, depend on language and culture, and is not
relevant to this discussion in my opinion.

It may well be that people in some geographic or language area will end up
(or for a while) calling 1e-06 BTC bits. That's fine, but using that as
official name in software would be very strange and potentially confusing
in my opinion. As mentioned by others, that would seem to me like calling
dollars bucks in bank software. Nobody seems to have a problem with
having colloquial names, but US dollar or euro are far less ambiguous
than bit. I think we need a more distinctive name.

-- 
Pieter


Re: [Bitcoin-development] Warning message when running wallet in Windows XP (or drop support?)

2014-04-16 Thread Pieter Wuille
On Wed, Apr 16, 2014 at 5:12 PM, Kevin kevinsisco61...@gmail.com wrote:
 I think we should get to the bottom of this.  Should we assume that xp is
 not secure enough?

Yes.

 What is this warning?

Windows XP is no longer maintained. Don't use such a system for
protecting your money.

 Who is issuing this warning?

Microsoft: http://windows.microsoft.com/en-us/windows/end-support-help

The suggestion here is to make Bitcoin Core detect when it's running
on Windows XP, and warn the user (they are likely unaware of the
risks).
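Bitcoin Core itself is C++, but the detection is simple enough to sketch in a few lines of Python for illustration (the warning wording here is made up):

```python
import platform

def running_on_windows_xp() -> bool:
    """Detect an end-of-life Windows XP host; Python's platform module
    reports the release as 'XP' there."""
    return platform.system() == "Windows" and platform.release() == "XP"

if running_on_windows_xp():
    print("Warning: Windows XP is no longer maintained by Microsoft; "
          "do not keep a wallet on this system.")
```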

-- 
Pieter



Re: [Bitcoin-development] Warning message when running wallet in Windows XP (or drop support?)

2014-04-16 Thread Pieter Wuille
On Wed, Apr 16, 2014 at 11:39 PM, Mark Friedenbach m...@monetize.io wrote:
 On 04/16/2014 02:29 PM, Kevin wrote:
 Okay, so how about an autoupdate function which pulls a work around off
 the server?  Sooner or later, the vulnerabilities must be faced.

 NO. Bitcoin Core will never have an auto-update functionality. That
 would be a single point of failure whose compromise could result in the
 theft of every last bitcoin held in a Bitcoin Core wallet.

Or, even accidentally, cause a hard forking bug to be rolled out (or
worsen one).

-- 
Pieter

--
Learn Graph Databases - Download FREE O'Reilly Book
Graph Databases is the definitive new guide to graph databases and their
applications. Written by three acclaimed leaders in the field,
this first edition is now available. Download your free book today!
http://p.sf.net/sfu/NeoTech
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bug in 2-of-3 transaction signing in Bitcoind?

2014-04-15 Thread Pieter Wuille
The first input seems to be already spent by another transaction
(which looks very similar).

0.9 should report a more detailed reason for rejection, by the way.



On Tue, Apr 15, 2014 at 5:05 PM, Mike Hearn m...@plan99.net wrote:
 Check debug.log to find out the reason it was rejected.





--
Learn Graph Databases - Download FREE O'Reilly Book
Graph Databases is the definitive new guide to graph databases and their
applications. Written by three acclaimed leaders in the field,
this first edition is now available. Download your free book today!
http://p.sf.net/sfu/NeoTech
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Bitcoind-in-background mode for SPV wallets

2014-04-10 Thread Pieter Wuille
There were earlier discussions.

The two ideas were either using one or a few service bits to indicate
availability of blocks, or to extend addr messages with some flags to
indicate this information.

I wonder whether we can't have a hybrid: bits to indicate general
degree of availability of blocks (none, only recent, everything), but
indicate actual availability only upon actually connecting (through a
version extension, or - preferably - a separate message). The reason is
that the actual blocks available are likely to change frequently (if
you keep the last week of blocks, a 3-day old addr entry will have
quite outdated information), and not that important to actual peer
selection - only to drive the decision which blocks to ask after
connection.
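
The flag idea above can be sketched as a small set of service bits. This is an illustrative sketch only: NODE_NETWORK is a real protocol service bit, but the other names and bit positions are hypothetical stand-ins for the "only recent" and "headers only" degrees of availability discussed here, not actual protocol constants.

```python
# Sketch of service-bit flags for advertising block availability.
# NODE_NETWORK exists in the real protocol; the other two names are
# hypothetical illustrations of the idea discussed above.
NODE_NETWORK       = 1 << 0  # serves the full block chain
NODE_RECENT_BLOCKS = 1 << 1  # hypothetical: serves only recent blocks
NODE_HEADERS_ONLY  = 1 << 2  # hypothetical: serves headers only

def can_serve_old_blocks(services: int) -> bool:
    """A peer advertising NODE_NETWORK should have all historic blocks."""
    return bool(services & NODE_NETWORK)

def likely_has_recent_blocks(services: int) -> bool:
    """Coarse hint for peer selection; the exact available range would
    still be queried after connecting, as suggested above."""
    return bool(services & (NODE_NETWORK | NODE_RECENT_BLOCKS))

peer = NODE_RECENT_BLOCKS
print(can_serve_old_blocks(peer))      # False
print(likely_has_recent_blocks(peer))  # True
```

The coarse bits drive which peers to connect to; the per-connection message would then drive which blocks to actually request.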

On Thu, Apr 10, 2014 at 1:09 PM, Wladimir laa...@gmail.com wrote:
 On Thu, Apr 10, 2014 at 8:04 AM, Tamas Blummer ta...@bitsofproof.com
 wrote:

 Serving headers should be default but storing and serving full blocks
 configurable to ranges, so people can tailor to their bandwith and space
 available.


 I do agree that it is important.

 This does require changes to the P2P protocol, as currently there is no way
 for a node to signal that they store only part of the block chain. Also,
 clients will have to be modified to take this into account. Right now they
 are under the assumption that every full node can send them every (previous)
 block.

 What would this involve?

 Do you know of any previous work towards this?

 Wladimir




--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 6:47 PM, Brian Hoffman brianchoff...@gmail.com wrote:
 Looks like only about ~30% disk space savings so I see your point. Is there
 a critical reason why blocks couldn't be formed into superblocks that are
 chained together and nodes could serve a specific superblock, which could be
 pieced together from different nodes to get the full blockchain? This would
 allow participants with limited resources to serve full portions of the
 blockchain rather than limited pieces of the entire blockchain.

As this is a suggestion that I think I've seen come up once a month
for the past 3 years, let's try to answer it thoroughly.

The actual state of the blockchain is the UTXO set (stored in
chainstate/ by the reference client). It's the set of all unspent
transaction outputs at the currently active point in the block chain.
It is all you need for validating future blocks.

The problem is, you can't just give someone the UTXO set and expect
them to trust it, as there is no way to prove that it was the result
of processing the actual blocks.

As Bitcoin's full node uses a zero trust model, where (apart from
one detail: the order of otherwise valid transactions) it never
assumes any data received from the outside is valid, it HAS to see the
previous blocks in order to establish the validity of the current UTXO
set. This is what initial block syncing does. Nothing but the actual
blocks can provide this data, and it is why the actual blocks need to
be available. It does not require everyone to have all blocks, though
- they just need to have seen them during processing.

A related, but not identical evolution is merkle UTXO commitments.
This means that we shape the UTXO set as a merkle tree, compute its
root after every block, and require that the block commits to this
root hash (by putting it in the coinbase, for example). This means a
full node can copy the chain state from someone else, and check that
its hash matches what the block chain commits to. It's important to
note that this is a strict reduction in security: we're now trusting
that the longest chain (with most proof of work) commits to a valid
UTXO set (at some point in the past).

In essence, combining both ideas means you get superblocks (the UTXO
set is essentially the summary of the result of all past blocks), in a
way that is less-than-currently-but-perhaps-still-acceptably-validated.
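
The merkle UTXO commitment above can be sketched minimally: hash each UTXO entry, build a Bitcoin-style double-SHA256 tree, and the root is what a block would commit to. The serialized entries below are illustrative placeholders; a real proposal would also need a canonical ordering and serialization of the UTXO set, which this sketch glosses over.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over hashed leaves, duplicating the last element of
    odd-length levels (as Bitcoin's block merkle tree does)."""
    if not leaves:
        return b"\x00" * 32
    level = [dsha256(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical serialized UTXO entries (txid:vout:amount, illustrative only).
utxos = [b"txid0:0:5000000000", b"txid1:1:2500000000", b"txid2:0:100000"]
root = merkle_root(utxos)
print(root.hex())
```

A full node that copies someone else's chain state recomputes this root and checks it against the commitment in the best chain, which is exactly where the trust reduction described above comes in.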

-- 
Pieter

--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test  Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 8:19 PM, Paul Rabahy prab...@gmail.com wrote:
 Please let me know if I have missed something.

A 51% attack can make you believe you were paid, while you weren't.

Full node security right now validates everything - there is no way
you can ever be made to believe something invalid. The only attacks
against it are about which version of valid history eventually gets
chosen.

If you trust hashrate for determining which UTXO set is valid, a 51%
attack becomes worse in that you can be made to believe a version of
history which is in fact invalid.

-- 
Pieter

--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test  Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Chain pruning

2014-04-10 Thread Pieter Wuille
On Thu, Apr 10, 2014 at 10:12 PM, Tier Nolan tier.no...@gmail.com wrote:
 On Thu, Apr 10, 2014 at 7:32 PM, Pieter Wuille pieter.wui...@gmail.com
 wrote:

 If you trust hashrate for determining which UTXO set is valid, a 51%
 attack becomes worse in that you can be made to believe a version of
 history which is in fact invalid.


 If there are invalidation proofs, then this isn't strictly true.

I'm aware of fraud proofs, and they're a very cool idea. They allow
you to leverage some herd immunity in the system (assuming you'll be
told about invalid data you received without actually validating it).
However, they are certainly not the same thing as zero trust security
a fully validating node offers.

For example, consider a sybil attack that hides the actual best chain
and its fraud proofs from you, while feeding you a chain that commits
to an invalid UTXO set.

There are many ideas that make attacks harder, and they're probably
good ideas to deploy, but there is little that achieves the security
of a full node. (well, perhaps a zero-knowledge proof of having run
the validation code against the claimed chain tip to produce the known
UTXO set...).
-- 
Pieter

--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test  Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] New BIP32 structure

2014-04-08 Thread Pieter Wuille
I see the cause of our disagreement now.

You actually want to share a single BIP32 tree across different
currency types, but do it in a way that guarantees that they never use
the same keys.

I would have expected that different chains would use independent
chains, and have serializations encode which chain they belong to.

Let me offer an alternative suggestion, which is compatible with the
original default BIP32 structure:
* You can use one seed across different chains, but the master nodes
are separate.
* To derive the master node from the seed, the key string "Bitcoin
seed" is replaced by something chain-specific.
* Every encoded node (including master nodes) has a chain-specific
serialization magic.

This is in practice almost the same as your suggestion, except that
the m/cointype' in m/cointype'/account'/change/n is replaced by
different masters. The only disadvantage I see is that you do not have
a way to encode the super master that is the parent of all
chain-specific masters. You can - and with the same security
properties - encode the seed, though.
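
The suggestion can be sketched directly from BIP32's seed-to-master step, which is HMAC-SHA512 keyed with the fixed string "Bitcoin seed"; swapping in a per-chain key string yields independent masters from one seed. The alternative key string below ("Litecoin seed") is an illustrative assumption, not a standardized value.

```python
import hmac, hashlib

def master_node(seed: bytes, key_string: bytes = b"Bitcoin seed"):
    """BIP32 seed-to-master derivation: HMAC-SHA512 keyed with a fixed
    string. The left 32 bytes are the master private key, the right 32
    bytes are the chain code."""
    I = hmac.new(key_string, seed, hashlib.sha512).digest()
    return I[:32], I[32:]

# BIP32 test vector 1 seed.
seed = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

# Standard Bitcoin master node...
btc_key, btc_cc = master_node(seed)
# ...versus a hypothetical chain-specific master from the same seed.
ltc_key, ltc_cc = master_node(seed, b"Litecoin seed")

print(btc_key != ltc_key)  # True: same seed, independent masters
```

Because the HMAC keys differ, the chain-specific masters are cryptographically unrelated, so chains can never collide on keys even though they share one seed - which is the guarantee the m/cointype' level was meant to provide.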

-- 
Pieter


On Tue, Apr 8, 2014 at 3:43 PM, slush sl...@centrum.cz wrote:
 tl;dr;

 It is dangerous to expect that a seed other than xprv does not contain
 bitcoins, or that xprv contains only bitcoins, because technically both
 situations are possible. It is still safer to do the lookup; the magic
 itself is ambiguous.

 Marek

 On Tue, Apr 8, 2014 at 3:40 PM, slush sl...@centrum.cz wrote:


 Serialization magic of the bip32 seed is in my opinion completely unnecessary.
 Most software does not care about it anyway; you can use the xprv/xpub pair
 for main net, testnet, litecoin, dogecoin, whatevercoin.

 Instead using the same seed (xprv) and then separate the chains *inside*
 the bip32 path seems more useful to me.

 Marek



--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test  Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Why are we bleeding nodes?

2014-04-07 Thread Pieter Wuille
On Mon, Apr 7, 2014 at 2:19 PM, Jameson Lopp jameson.l...@gmail.com wrote:
 I'm glad to see that I'm not the only one concerned about the consistent 
 dropping of nodes. Though I think that the fundamental question should be: 
 how many nodes do we really need? Obviously more is better, but it's 
 difficult to say how concerned we should be without more information. I 
 posted my thoughts last month: 
 http://coinchomp.com/2014/03/19/bitcoin-nodes-many-enough/

In my opinion, the number of full nodes doesn't matter (as long as
it's enough to satisfy demand by other nodes).

What matters is how hard it is to run one. If someone is interested
in verifying that nobody is cheating on the network, can they, and can
they without significant investment? Whether they actually will
depends also on how interesting the currency and its digital transfers
are.

 On 04/07/2014 07:34 AM, Mike Hearn wrote:
 At the start of February we had 10,000 bitcoin nodes. Now we have 8,500 and
 still falling:

http://getaddr.bitnodes.io/dashboard/chart/?days=60

My own network crawler (which feeds my DNS seeder) hasn't seen any
significant drop that I remember, but I don't have actual logs. It's
seeing around 6000 well reachable nodes currently, which is the
highest number I've ever seen (though it's been around 6000 for quite
a while now).

-- 
Pieter

--
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test  Deployment 
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees_APR
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


[Bitcoin-development] Finite monetary supply for Bitcoin

2014-04-01 Thread Pieter Wuille
Hi all,

I understand this is a controversial proposal, but bear with me please.

I believe we cannot accept the current subsidy schedule anymore, so I
wrote a small draft BIP with a proposal to turn Bitcoin into a
limited-supply currency. Dogecoin has already shown how easy such
changes are, so I consider this a worthwhile idea to be explored.

The text can be found here: https://gist.github.com/sipa/9920696

Please comment!

Thanks,

-- 
Pieter

--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development

