Re: [bitcoin-dev] [Bitcoin-development] BIP for Proof of Payment

2015-07-24 Thread Kalle Rosenbaum via bitcoin-dev
These BIPs have been assigned 120 and 121:

120: Proof of Payment
121: Proof of Payment URI scheme

Regards,
Kalle
 On 21 Jun 2015 16:39, Kalle Rosenbaum ka...@rosenbaum.se wrote:

 Hi Greg!

 After a lot of constructive discussion, feedback and updating, I'm
 requesting that you please assign these proposals BIP numbers. It's both
 the Proof of Payment proposal and the Proof of Payment URI scheme
 proposal that I'm referring to.

 The mediawiki source is available here:
 https://github.com/kallerosenbaum/poppoc/wiki/Proof-of-Payment-BIP and
 https://github.com/kallerosenbaum/poppoc/wiki/btcpop-scheme-BIP.

 Is this what you need in order to proceed or is there something else you
 need from me?

 Best regards,
 /Kalle

 2015-06-17 11:51 GMT+02:00 Kalle Rosenbaum ka...@rosenbaum.se:

 2015-06-16 21:48 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
  I don't see why existing software could create a 40-byte OP_RETURN but not
  larger? The limitation comes from a relay policy in full nodes, not a
  limitation in wallet software... and PoPs are not relayed on the network.

 You are probably right here. The thing is that I don't know how *all*
 wallet signing and validating software is written, so I figure it's
 better to stick to a valid output. And since I don't *need* more than
 40 bytes of data, why bother? There's another constraint as well: the
 other BIP proposal, the Proof of Payment URI scheme, includes a nonce
 parameter in the URI. If the nonce is very long, the QR code will be
 unnecessarily big. The server should try to detect a brute force of
 the 48-bit nonce, or at least delay the PoP requests by some 100 ms
 or so.
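
 To put numbers on that, a back-of-envelope sketch (assuming the nonce is
 the 6-byte/48-bit value that fits within the 40-byte output, and that the
 server delays each PoP attempt by ~100 ms, i.e. at most 10 guesses per
 second per connection):

 $$ 2^{48} \approx 2.8\times10^{14}\ \text{nonces};\qquad \frac{2^{47}\ \text{expected guesses}}{10\ \text{guesses/s}} \approx 1.4\times10^{13}\ \text{s} \approx 4.5\times10^{5}\ \text{years} $$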

 Do you think this is an actual problem, and why? Is your suggestion to
 use a bigger nonce, given the above?

 
  Regarding sharing, I think you're talking about a different use case. Say
  you want to pay for 1-week valid entrance to some venue. I thought the
  purpose of the PoP was to be sure that only the person who paid for it, and
  not anyone else, can use it during that week.
 

 That's right. That's one use case. You pay for the 1-week entrance and
 then you use your wallet to sign PoPs when you enter the venue.

  My argument against that is that the original payer can also hand the
  private keys in his wallet to someone else, who would then become able to
  create PoPs for the service. He does not lose anything by this, assuming the
  address is not reused.
 

 Yes, that is possible. It's about the same as giving out a
 username/password for a service that you have paid for. In the case of
 a concert ticket, it's simple: just allow one entrance per payment.
 But in the example you gave, it's a bit more complicated. You could,
 for example, give all guests a bracelet upon first entry or upon first
 exit. Or you can put a stamp on people leaving the venue, and demand
 that all re-entries show the stamp, possibly along with a new PoP,
 pretty much as is done already. Different use cases will need
 different protection. In this example, the value added by PoP is that
 the venue does not have to distribute tickets in advance. This in turn
 allows for better privacy for the customer, who doesn't have to give
 out personal information such as an email address.

  So, using a token does not change anything, except it can be provided to the
  payer - instead of relying on creating an implicit identity based on who
  seems to have held particular private keys in the past.
 

 Yes, that's a difference, but it comes at the cost of security. The
 stolen token can be used over and over. In the case of PoP it's only
 usable once, and it's only created when it's actually needed,
 minimizing the window of opportunity for the thief.

 Regards,
 Kalle

  On Jun 16, 2015 9:41 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
 
  2015-06-16 21:25 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
    You can't avoid sharing the token, and you can't avoid sharing the private
    keys used for signing either. If they are single use, you don't lose
    anything by sharing them.
 
  Forwarding the PoP request would be a way to avoid sharing keys, as
  suggested above.
 
  
   Also you are not creating a real transaction. Why does the OP_RETURN
   limitation matter?
 
  This was discussed in the beginning of this thread: The idea is to
  simplify implementation. Existing software can be used as is to sign
   and validate PoPs.
 
  Regards,
  Kalle
 
  
    On Jun 16, 2015 9:22 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
  
   Thank you for your comments Pieter! Please find my answers below.
  
    2015-06-16 16:31 GMT+02:00 Pieter Wuille pieter.wui...@gmail.com:
 On Mon, Jun 15, 2015 at 1:59 PM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
   
2015-06-15 12:00 GMT+02:00 Pieter Wuille 
 

Re: [bitcoin-dev] BIP 102 - kick the can down the road to 2MB

2015-07-24 Thread Slurms MacKenzie via bitcoin-dev
 Sent: Friday, July 24, 2015 at 10:52 AM
 From: Thomas Zander via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org
 To: bitcoin-dev@lists.linuxfoundation.org
 Subject: Re: [bitcoin-dev] BIP 102 - kick the can down the road to 2MB

 
 The reference to bandwidth increases makes no sense; the bandwidth in most of
 the world already far exceeds the 8Mb limit. Not everyone lives where you
 live :)
 
 In Germany you buy a 150Mbit connection for a flat rate and a cheap monthly
 rate, for instance. Not saying that Germany is where all the miners are, but
 since 150Mbit allows one to comfortably have 16 megabyte blocks, it is a good
 example of how far off Luke's calculations are from the real world.

I'll have better stats available soon, but this does not reflect the current 
state of the network. 

Only 4% of my initial crawl presented bandwidth above your stated 18MB/s. 
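
For reference, the arithmetic behind that figure (using the 150Mbit line and
16 megabyte blocks mentioned above):

$$ 150\ \text{Mbit/s} = \frac{150}{8}\ \text{MB/s} = 18.75\ \text{MB/s};\qquad \frac{16\ \text{MB}}{18.75\ \text{MB/s}} \approx 0.85\ \text{s per block} $$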


Re: [bitcoin-dev] BIP 102 - kick the can down the road to 2MB

2015-07-24 Thread Thomas Zander via bitcoin-dev
On Friday 17. July 2015 20.29.16 Luke Dashjr via bitcoin-dev wrote:
 We are unlikely to approach 1 MB of actual volume by November, so I would
 prefer to see the activation date on this moved later - maybe November 2016,
 if not 2017. It would also be an improvement to try to follow reasonably-
 expected bandwidth increases, so 15% (1.15 MB) rather than doubling. Doubling
 in only a few months seems to be far from a conservative increase.

The reference to bandwidth increases makes no sense; the bandwidth in most of
the world already far exceeds the 8Mb limit. Not everyone lives where you
live :)

In Germany you buy a 150Mbit connection for a flat rate and a cheap monthly
rate, for instance. Not saying that Germany is where all the miners are, but
since 150Mbit allows one to comfortably have 16 megabyte blocks, it is a good
example of how far off Luke's calculations are from the real world.

I don't believe your argument for pushing the date forward holds.
-- 
Thomas Zander


Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Milly Bitcoin via bitcoin-dev

On 7/23/2015 10:57 PM, Dave Scotese via bitcoin-dev wrote:

I used Google to establish that there is not already a post from 2015
that mentions roadmap in the subject line.  Such would be a good
skeleton for anyone new to the list (like me).


Just a point about terminology:

Roadmap - A plan of proposed changes used to meet some sort of goal. If
the goal is increased scaling, then you list a series of changes that
need to be made to achieve that goal.

Baseline - That is the If We Do Nothing analysis. Each proposed
change will generally have one or more alternatives, which are compared
to the baseline.

Those would be two different documents.

Russ







Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Dave Scotese via bitcoin-dev
On Fri, Jul 24, 2015 at 4:38 AM, Mike Hearn he...@vinumeris.com wrote:

 It's worth noting that even massive companies with $30M USD of funding
 don't run a single Bitcoin Core node


  This has nothing to do with block sizes, and everything to do with Core not
  directly providing the services businesses actually want.

  The whole "node count is falling because of block sizes" is nothing more
  than conjecture presented as fact. The existence of multiple companies who
  could easily afford to do this but don't because they perceive it as
  valueless should be a wakeup call there.


Regardless of why node count is falling, many people who used to run a full
node stopped doing so.  To mitigate that, their chances of getting
something out of it have to be greater.  What if propagating a valid
transaction generated a small chance of earning a piece of the fee?


Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Eric Lombrozo via bitcoin-dev

 On Jul 24, 2015, at 10:40 AM, Peter Todd via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 On Fri, Jul 24, 2015 at 07:09:13AM -0700, Adam Back via bitcoin-dev wrote:
 (Claim of large bitcoin ecosystem companies without full nodes) this
 says to me rather we have a need for education: I run a full node
 myself (intermittently), just for my puny collection of bitcoins.  If
 I ran a business with custody of client funds I'd wake up in a cold
  sweat at night about the security and integrity of the company's full
 nodes, and reconciliation of client funds against them.
 
 However I'm not sure the claim is accurate ($30m funding and no full
 node) but to take the hypothetical that this pattern exists, security
 people and architects at such companies must insist on the company
  running their own full node to depend on and cross check from,
  otherwise they would be needlessly putting their clients' funds at
  risk.
 
 FWIW, blockchain.info is obviously *not* running a full node as their
 wallet was accepting invalid confirmations on transactions caused by the
 recent BIP66 related fork; blockchain.info has $30m in funding.
 
 Coinbase also was not running a full node not all that long ago, instead
 running a custom Ruby implementation that caused their service to go
 down whenever it forked. (and would have also accepted invalid
 confirmations) I believe right now they're running that implementation
 behind a full node however.
 
  The crypto currency security standards document probably covers the
  requirement for a full node somewhere
 https://cryptoconsortium.github.io/CCSS/ - we need some kind of basic
 minimum bar standard for companies to aim for and this seems like a
 reasonable start!
 
 Actually I've been trying to get the CCSS standard to cover full nodes,
 and have been getting push-back:
 
 https://github.com/CryptoConsortium/CCSS/issues/15
 
 tl;dr: Running a full node is *not* required by the standard right now
 at any certification level.
 
  This is of course completely ridiculous... But I haven't had much
 time to put into getting that changed so maybe we just need some better
 explanations to the others maintaining the standard. That said, if the
 standard stays that way, obviously I'm going to have to ask to have my
 name taken off it.

For the record, there’s pretty much unanimous agreement that running a full 
node should be a requirement at the higher levels of certification (if not the 
lower ones as well). I’m not sure exactly what pushback you’re referring to.


 In terms of a constructive discussion, I think it's interesting to
 talk about the root cause and solutions: decentralisation (more
 economically dependent full nodes, lower miner policy centralisation),
  more layer 2 work.  People interested in scaling, if they haven't,
 should go read the lightning paper, look at the github and participate
 in protocol or code work.  I think realistically we can have this
 running inside of a year.  That significantly changes the dynamic.
 Similarly a significant part of mining centralisation is artificial
 and work is underway that will improve that.
 
 I would point out that lack of understanding of how Bitcoin works, as
 well as a lack of understanding of security engineering in general, is
 probably a significant contributor to these problems. Furthermore
 Bitcoin and cryptocurrencies in general are still small enough that many
  foreseeable low probability but high impact events haven't happened,
 making it difficult to explain to non-technical stakeholders why they
 should be listening to experts rather than charlatans and fools.
 
  After a few major centralization related failures have occurred, we'll
 have an easier job here. Unfortunately there's also a good chance we
 only get one shot at this due to how easy it is to kill PoW systems at
 birth...
 
 --
 'peter'[:-1]@petertodd.org
 14438a428adfcf4d113a09b87e4a552a1608269ff137ef2d





Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-24 Thread Gavin Andresen via bitcoin-dev
After thinking about it, implementing it, and doing some benchmarking, I'm
convinced replacing the existing, messy, ad-hoc sigop-counting consensus
rules is the right thing to do.

The last two commits in this branch are an implementation:
   https://github.com/gavinandresen/bitcoin-git/commits/count_hash_size

From the commit message in the last commit:

Summary of old rules / new rules:

Old rules: 20,000 inaccurately-counted-sigops for a 1MB block
New: 80,000 accurately-counted sigops for an 8MB block

A scan of the last 100,000 blocks for high-sigop blocks gets
a maximum of 7,350 sigops in block 364,773 (in a single, huge,
~1MB transaction).

For reference, Pieter Wuille's libsecp256k1 validation code
validates about 10,000 signatures per second on a single
2.7GHz CPU core.

Old rules: no limit for number of bytes hashed to generate
signature hashes

New rule: 1.3 gigabytes hashed per 8MB block to generate
signature hashes

Block 364,422 contains a single ~1MB transaction that requires
1.2GB of data hashed to generate signature hashes.

TODO: benchmark Core's sighash-creation code ('openssl speed sha256'
reports something like 1GB per second on my machine).

Note that in normal operation most validation work is done as transactions
are received from the network, and can be cached so it doesn't have to be
repeated when a new block is found. The limits described in this BIP are
intended, as the existing sigop limits are intended, to be an extra belt
and suspenders measure to mitigate any possible attack that involves
creating and broadcasting a very expensive-to-verify block.


Draft BIP:

  BIP: ??
  Title: Consensus rules to limit CPU time required to validate blocks
  Author: Gavin Andresen gavinandre...@gmail.com
  Status: Draft
  Type: Standards Track
  Created: 2015-07-24

==Abstract==

Mitigate potential CPU exhaustion denial-of-service attacks by limiting
the maximum number of ECDSA signature verifications done per block,
and limiting the number of bytes hashed to compute signature hashes.

==Motivation==

Sergio Demian Lerner reported that a maliciously constructed block could
take several minutes to validate, due to the way signature hashes are
computed for OP_CHECKSIG/OP_CHECKMULTISIG ([[
https://bitcointalk.org/?topic=140078|CVE-2013-2292]]).
Each signature validation can require hashing most of the transaction's
bytes, resulting in O(s*b) scaling (where s is the number of signature
operations and b is the number of bytes in the transaction, excluding
signatures). If there are no limits on s or b the result is O(n^2) scaling
(where n is a multiple of the number of bytes in the block).
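
In other words, in the worst case a single transaction fills the block, so
both s and b grow linearly with the block size n, and the validation work is

$$ \text{work} \;\propto\; s \cdot b \;\propto\; n \cdot n \;=\; O(n^2). $$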

This potential attack was mitigated by changing the default relay and
mining policies so transactions larger than 100,000 bytes were not
relayed across the network or included in blocks. However, a miner
not following the default policy could choose to include a
transaction that filled the entire one-megabyte block and took
a long time to validate.

==Specification==

After deployment, the existing consensus rule for maximum number of
signature operations per block (20,000, counted in two different,
idiosyncratic, ad-hoc ways) shall be replaced by the following two rules:

1. The maximum number of ECDSA verify operations required to validate
all of the transactions in a block must be less than or equal to
the maximum block size in bytes divided by 100 (rounded down).

2. The maximum number of bytes hashed to compute ECDSA signatures for
all transactions in a block must be less than or equal to the
maximum block size in bytes times 160.
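
For illustration only, a minimal sketch of the two checks (function and
variable names are hypothetical, not taken from the branch above):

#include <cstdint>

// Sketch: nSigOps and nBytesHashed are accumulated while validating all
// transactions in a block; nMaxBlockSize is the (possibly BIP100/BIP101-
// adjusted) maximum block size in bytes.
bool CheckValidationCost(uint64_t nSigOps, uint64_t nBytesHashed,
                         uint64_t nMaxBlockSize)
{
    if (nSigOps > nMaxBlockSize / 100)      // rule 1: sigop limit (rounds down)
        return false;
    if (nBytesHashed > nMaxBlockSize * 160) // rule 2: sighash-bytes limit
        return false;
    return true;
}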

==Compatibility==

This change is compatible with existing transaction-creation software,
because transactions larger than 100,000 bytes have been considered
non-standard (they are not relayed or mined by default) for years, and a
block full of standard transactions will be well under the limits.

Software that assembles transactions into blocks and software that validates
blocks must be updated to enforce the new consensus rules.

==Deployment==

This change will be deployed with BIP 100 or BIP 101.

==Discussion==

Linking these consensus rules to the maximum block size allows more
transactions and/or transactions with more inputs or outputs to be
included if the maximum block size increases.

The constants are chosen to be maximally compatible with the existing
consensus rule, and to virtually eliminate the possibility that bitcoins
could be lost if somebody had locked some funds in a pre-signed,
expensive-to-validate, locktime-in-the-future transaction.

But they are chosen to put a reasonable upper bound on the CPU time
required to validate a maximum-sized block.
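
A rough bound using the figures quoted above (about 10,000 signature
verifications per second on one 2.7GHz core, and roughly 1GB per second
for SHA256):

$$ \frac{8{,}000{,}000}{100} = 80{,}000\ \text{sigops} \Rightarrow \le 8\ \text{s/core};\qquad 8\ \text{MB} \times 160 = 1.28\ \text{GB hashed} \Rightarrow \approx 1.3\ \text{s} $$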

===Alternatives to this BIP===

1. A simple limit on transaction size (e.g. any transaction in a block must
be 100,000 bytes or smaller).

2. Fix the CHECKSIG/CHECKMULTISIG opcodes so they don't re-hash variations of
the transaction's data. This is the most correct solution, but would require
updating every piece 

Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Eric Lombrozo via bitcoin-dev
Thanks for bringing up the CCSS, Adam and Peter.

I was actually working on a post inviting everyone on this mailing list to
come and participate… but you guys beat me to it. :)

The CCSS is an open standard, born out of the belief that sharing the 
industry's best practices amongst each other and with the community at large 
benefits everyone.

To read more about it and how you can contribute, please visit
http://blog.cryptoconsortium.org/contributing-to-the-ccss/

The standard: https://cryptoconsortium.github.io/CCSS/

The github repository: https://github.com/CryptoConsortium/CCSS


- Eric

 On Jul 24, 2015, at 10:43 AM, Peter Todd via bitcoin-dev 
 bitcoin-dev@lists.linuxfoundation.org wrote:
 
 On Fri, Jul 24, 2015 at 03:39:08PM +0200, Thomas Zander via bitcoin-dev wrote:
 On Friday 24. July 2015 05.37.30 Slurms MacKenzie via bitcoin-dev wrote:
 It's worth noting that even massive companies with $30M USD of funding don't
 run a single Bitcoin Core node,
 
 I assume you mean that they don't have a Bitcoin Core node that is open to
 incoming connections. Since that is the only thing you can actually test, no?
 
 We can test the fact that blockchain.info's wallet and block explorer
  have behaved in a way consistent with not running a full node - they have
 shown invalid data that any full node would reject on multiple
 occasions, most recently invalid confirmations during the BIP66 fork.
 
 --
 'peter'[:-1]@petertodd.org
 06baf20e289b563e3ec69320275086169a47e9c58d4abfba





Re: [bitcoin-dev] Making Electrum more anonymous

2015-07-24 Thread Slurms MacKenzie via bitcoin-dev
 Sent: Friday, July 24, 2015 at 2:12 PM
 From: s7r via bitcoin-dev bitcoin-dev@lists.linuxfoundation.org
 To: bitcoin-dev@lists.linuxfoundation.org
 Subject: Re: [bitcoin-dev] Making Electrum more anonymous

 Privacy concerned people should run their own Electrum server and make
 it accessible via .onion, and connect the bitcoind running on the
 electrum server host only to other onion peers (onlynet=tor). We should
 highlight that using Electrum with Tor cannot leak more than some
 addresses belong to the same wallet, which is not the end of the world.
 

It leaks your timezone too. As pointed out in another thread, running an
electrum-server instance is no easy task and can't really be suggested to
others as a sensible thing to run for themselves. Enthusiasts maybe, but
they'll just want to run Bitcoin Core and skip the behemoth middleman.
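
For reference, a minimal sketch of the bitcoind side of the setup s7r
describes (a sketch assuming a local Tor SOCKS proxy on its standard port;
the electrum-server side is left out):

# bitcoin.conf (sketch)
proxy=127.0.0.1:9050   # route outbound connections through local Tor
onlynet=tor            # only connect to onion peers, as suggested above
listen=0               # don't accept inbound clearnet connections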


Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Peter Todd via bitcoin-dev
On Fri, Jul 24, 2015 at 07:09:13AM -0700, Adam Back via bitcoin-dev wrote:
 (Claim of large bitcoin ecosystem companies without full nodes) this
 says to me rather we have a need for education: I run a full node
 myself (intermittently), just for my puny collection of bitcoins.  If
 I ran a business with custody of client funds I'd wake up in a cold
 sweat at night about the security and integrity of the company's full
 nodes, and reconciliation of client funds against them.
 
 However I'm not sure the claim is accurate ($30m funding and no full
 node) but to take the hypothetical that this pattern exists, security
 people and architects at such companies must insist on the company
 running their own full node to depend on and cross check from,
 otherwise they would be needlessly putting their clients' funds at
 risk.

FWIW, blockchain.info is obviously *not* running a full node as their
wallet was accepting invalid confirmations on transactions caused by the
recent BIP66 related fork; blockchain.info has $30m in funding.

Coinbase also was not running a full node not all that long ago, instead
running a custom Ruby implementation that caused their service to go
down whenever it forked. (and would have also accepted invalid
confirmations) I believe right now they're running that implementation
behind a full node however.

 The crypto currency security standards document probably covers the
 requirement for a full node somewhere
 https://cryptoconsortium.github.io/CCSS/ - we need some kind of basic
 minimum bar standard for companies to aim for and this seems like a
 reasonable start!

Actually I've been trying to get the CCSS standard to cover full nodes,
and have been getting push-back:

https://github.com/CryptoConsortium/CCSS/issues/15

tl;dr: Running a full node is *not* required by the standard right now
at any certification level.

This is of course completely ridiculous... But I haven't had much
time to put into getting that changed so maybe we just need some better
explanations to the others maintaining the standard. That said, if the
standard stays that way, obviously I'm going to have to ask to have my
name taken off it.

 In terms of a constructive discussion, I think it's interesting to
 talk about the root cause and solutions: decentralisation (more
 economically dependent full nodes, lower miner policy centralisation),
 more layer 2 work.  People interested in scaling, if they haven't,
 should go read the lightning paper, look at the github and participate
 in protocol or code work.  I think realistically we can have this
 running inside of a year.  That significantly changes the dynamic.
 Similarly a significant part of mining centralisation is artificial
 and work is underway that will improve that.

I would point out that lack of understanding of how Bitcoin works, as
well as a lack of understanding of security engineering in general, is
probably a significant contributor to these problems. Furthermore
Bitcoin and cryptocurrencies in general are still small enough that many
foreseeable low probability but high impact events haven't happened,
making it difficult to explain to non-technical stakeholders why they
should be listening to experts rather than charlatans and fools.

After a few major centralization related failures have occurred, we'll
have an easier job here. Unfortunately there's also a good chance we
only get one shot at this due to how easy it is to kill PoW systems at
birth...

-- 
'peter'[:-1]@petertodd.org
14438a428adfcf4d113a09b87e4a552a1608269ff137ef2d




Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Peter Todd via bitcoin-dev
On Fri, Jul 24, 2015 at 03:39:08PM +0200, Thomas Zander via bitcoin-dev wrote:
 On Friday 24. July 2015 05.37.30 Slurms MacKenzie via bitcoin-dev wrote:
  It's worth noting that even massive companies with $30M USD of funding don't
  run a single Bitcoin Core node,
 
 I assume you mean that they don't have a Bitcoin Core node that is open to 
 incoming connections. Since that is the only thing you can actually test, no?

We can test the fact that blockchain.info's wallet and block explorer
have behaved in a way consistent with not running a full node - they have
shown invalid data that any full node would reject on multiple
occasions, most recently invalid confirmations during the BIP66 fork.

-- 
'peter'[:-1]@petertodd.org
06baf20e289b563e3ec69320275086169a47e9c58d4abfba




Re: [bitcoin-dev] bitcoin-dev Digest, Vol 2, Issue 95

2015-07-24 Thread Dave Scotese via bitcoin-dev
 Alternatively I think instead of displaying a meaningless number we ought
 to go by a percentage (the double spend improbability) and go by
 'confidence'.


That is a great idea, and not too hard to implement.  A bit of code can
determine, over the last N blocks, how many blocks that were at the current
depth of the present transaction were orphaned, and divide that by the total
number of blocks solved (orphaned or not) while those N blocks were
solved.  That's the historical number (H), and then the 51% attack number
(A) can make an explicit assumption like "Assuming a bad actor has 51% of
the hashing power for 24 hours starting right now, the block holding this
transaction has an X% chance of being orphaned."  Report # confirmations
as "99.44% confidence" using [100% - max(H,A)].
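
A minimal sketch of that calculation (hypothetical helper, with H and A
expressed as percentages):

#include <algorithm>
#include <cstdio>

// Sketch: confidence = 100% - max(H, A), where H is the measured orphan
// rate for blocks at this depth over the last N blocks, and A is the
// assumed orphan probability under the stated 51%-attack scenario.
double Confidence(double historicalPct, double attackPct)
{
    return 100.0 - std::max(historicalPct, attackPct);
}

int main()
{
    // e.g. H = 0.56% measured, A = 0.10% assumed -> report 99.44% confidence
    std::printf("%.2f%% confidence\n", Confidence(0.56, 0.10));
    return 0;
}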


Re: [bitcoin-dev] Bitcoin Core and hard forks

2015-07-24 Thread Tom Harding via bitcoin-dev
On 7/24/2015 2:24 AM, Jorge Timón wrote:

 Regarding increasing the exchange rate it would be really nice to
 just push a button and double bitcoin's price just before the next
 subsidy halving, but unfortunately that's something out of our control. 

Jorge, right now, from the activity on github, you are working at least
as hard as anyone else, probably harder.  Why?  Why, if not to make
bitcoin more valuable?

Even apart from the convenience/curse of real-time exchange markets,
just with an abstract definition of value, isn't that exactly what a
developer can influence, if not control?

Isn't figuring out ways to increase the value of bitcoin what we are doing?



Re: [bitcoin-dev] Bitcoin, Perceptions, and Expectations

2015-07-24 Thread gb via bitcoin-dev

Validated - (seen on network) 

Settled/Cleared - 1 conf

Finalised - 6 confs

On Sat, 2015-07-25 at 00:37 +1000, Vincent Truong via bitcoin-dev wrote:
 
 Fast transactions
 'Fast transactions' implies it is slower than Visa, and Visa is
 'instant' by comparison from the spender's POV. Bitcoin is still effectively
 instant, because wallets still send notifications/pings when transactions
 are first seen, not when they go into a block. We
 shouldn't mislead people into thinking a transaction literally takes
 10 minutes to travel the globe.
 
 Maybe this feels like PR speak. But being too humble about Bitcoin's
 attributes isn't a good idea either.
 
 If we're going to look at perception, image and expectations, perhaps
 we can start to look at redefining some terminology too. Like
 confirmations, which is an arbitrary concept. Where possible we should
 describe it with finance terminology.
 
 0 conf transaction
 0 conf is the 'transaction' - just the act of making an exchange. It
 doesn't imply safety, and I believe using the word 'settle' in place of
 confirmations will automatically click with merchants.
 
 1st conf
 A 'confirmation' is a 'settlement'. If it is 'settled', it implies it is
 final (except by court order), whereas 'confirmation' usually means 'ah,
 I've seen it come through'. I rarely hear any sales clerk call credit
 card transactions confirmed. More often you will hear 'approved'
 instead. Although 1st conf can be overtaken, so...
 
 n confirmations
 This term can probably stay since I can't come up with a better word.
 Settlements only happen once, putting a number next to it breaks the
 meaning of the word. Settled with 4 confirmations seems pretty
 clear. Alternatively I think instead of displaying a meaningless
 number we ought to go by a percentage (the double spend improbability)
 and go by 'confidence'. Settled with 92% confidence. Or we can pick
 an arbitrary number like 6 and use 'settling...' and 'settled' when
 reached.
 




Re: [bitcoin-dev] For discussion: limit transaction size to mitigate CVE-2013-2292

2015-07-24 Thread odinn via bitcoin-dev

Interesting, so this basically would merge into an already existing
BIP (Jeff Garzik's).  However, it proposes some changes.

OK

CVE-2013-2292 is a severity thingy of high, which is described as:

bitcoind and Bitcoin-Qt 0.8.0 and earlier allow remote attackers to
cause a denial of service (electricity consumption) by mining a block
to create a nonstandard Bitcoin transaction containing multiple
OP_CHECKSIG script opcodes.

(munches popcorn)

I do appreciate seeing the effort toward working something toward /
into Garzik's proposal.  The general idea that I suggested before - to
work some new ideas (not XT-related) into a BIP, and to work with
Jeff Garzik on getting something done - seems to be the direction that
you are taking... so I'm hopeful that continues.

- -O

On 07/24/2015 01:59 PM, Gavin Andresen via bitcoin-dev wrote:
 After thinking about it, implementing it, and doing some
 benchmarking, I'm convinced replacing the existing, messy, ad-hoc
 sigop-counting consensus rules is the right thing to do.
 
 The last two commits in this branch are an implementation: 
 https://github.com/gavinandresen/bitcoin-git/commits/count_hash_size

  From the commit message in the last commit:
 
 Summary of old rules / new rules:
 
 Old rules: 20,000 inaccurately-counted-sigops for a 1MB block New:
 80,000 accurately-counted sigops for an 8MB block
 
 A scan of the last 100,000 blocks for high-sigop blocks gets a
 maximum of 7,350 sigops in block 364,773 (in a single, huge, ~1MB
 transaction).
 
 For reference, Pieter Wuille's libsecp256k1 validation code 
  validates about 10,000 signatures per second on a single 2.7GHz CPU
 core.
 
 Old rules: no limit for number of bytes hashed to generate 
 signature hashes
 
  New rule: 1.3 gigabytes hashed per 8MB block to generate signature
 hashes
 
 Block 364,422 contains a single ~1MB transaction that requires 
 1.2GB of data hashed to generate signature hashes.
 
 TODO: benchmark Core's sighash-creation code ('openssl speed
 sha256' reports something like 1GB per second on my machine).
 
 Note that in normal operation most validation work is done as 
 transactions are received from the network, and can be cached so
 it doesn't have to be repeated when a new block is found. The
 limits described in this BIP are intended, as the existing sigop
 limits are intended, to be an extra belt and suspenders measure
 to mitigate any possible attack that involves creating and
 broadcasting a very expensive-to-verify block.
 
 
 Draft BIP:
 
  BIP: ?? Title: Consensus rules to limit CPU time required to
  validate blocks Author: Gavin Andresen gavinandre...@gmail.com
  Status: Draft Type: Standards Track Created: 2015-07-24
 
 ==Abstract==
 
 Mitigate potential CPU exhaustion denial-of-service attacks by
  limiting the maximum number of ECDSA signature verifications done
 per block, and limiting the number of bytes hashed to compute
 signature hashes.
 
 ==Motivation==
 
 Sergio Demian Lerner reported that a maliciously constructed block
 could take several minutes to validate, due to the way signature
 hashes are computed for OP_CHECKSIG/OP_CHECKMULTISIG 
 ([[https://bitcointalk.org/?topic=140078|CVE-2013-2292]]). Each
 signature validation can require hashing most of the transaction's 
 bytes, resulting in O(s*b) scaling (where s is the number of
 signature operations and b is the number of bytes in the
 transaction, excluding signatures). If there are no limits on s or
 b the result is O(n^2) scaling (where n is a multiple of the number
 of bytes in the block).
 
 This potential attack was mitigated by changing the default relay
 and mining policies so transactions larger than 100,000 bytes were
 not relayed across the network or included in blocks. However, a
 miner not following the default policy could choose to include a 
  transaction that filled the entire one-megabyte block and took a
 long time to validate.
 
 ==Specification==
 
 After deployment, the existing consensus rule for maximum number
 of signature operations per block (20,000, counted in two
 different, idiosyncratic, ad-hoc ways) shall be replaced by the
 following two rules:
 
 1. The maximum number of ECDSA verify operations required to
 validate all of the transactions in a block must be less than or
 equal to the maximum block size in bytes divided by 100 (rounded
 down).
 
 2. The maximum number of bytes hashed to compute ECDSA signatures
 for all transactions in a block must be less than or equal to the 
 maximum block size in bytes times 160.
 
 ==Compatibility==
 
 This change is compatible with existing transaction-creation
 software, because transactions larger than 100,000 bytes have been
 considered non-standard (they are not relayed or mined by
 default) for years, and a block full of standard transactions
  will be well under the limits.
 
 Software that assembles transactions into blocks and software that
 validates blocks must 

Re: [bitcoin-dev] Bitcoin Roadmap 2015, or If We Do Nothing Analysis

2015-07-24 Thread Mike Hearn via bitcoin-dev

 It's worth noting that even massive companies with $30M USD of funding
 don't run a single Bitcoin Core node


This has nothing to do with block sizes, and everything to do with Core not
directly providing the services businesses actually want.

The whole "node count is falling because of block sizes" is nothing more
than conjecture presented as fact. The existence of multiple companies who
could easily afford to do this but don't because they perceive it as
valueless should be a wakeup call there.