Re: [Bitcoin-development] improving development model (Re: Concerns Regarding Threats by a Developer to Remove Commit Access from Other Developers)

2015-06-19 Thread Tom Harding
On 6/19/2015 6:43 AM, Mike Hearn wrote:
> No surprise, the position of Blockstream employees is that hard forks
> must never happen and that everyone's ordinary transactions should go
> via some new network that doesn't yet exist.

If my company were working on spiffy new ideas that required a hard fork
to implement, I'd be rather dismayed to see the blocksize hard fork
happen *before those ideas were ready*.

Because then I'd eventually have to convince people that those ideas
were worth a hard fork all on their own.  It would be much easier to
convince people to roll them in with the already necessary blocksize
hard fork, if that event could be delayed.

As far as I know, Blockstream representatives have never said that
waiting for other changes to be ready is a reason to delay the blocksize
hard fork.  So if this were the real reason, it would suggest they have
been hiding their true motives for making such a fuss about the
blocksize issue.

I've got no evidence at all to support thoughts like this... just the
paranoid mindset that seems to infect a person who gets involved in
bitcoin.  But the question is every bit as valid as Adam's query into
your motives.



--
___
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] Mining centralization pressure from non-uniform propagation speed

2015-06-18 Thread Tom Harding
On 06/12/2015 06:51 PM, Pieter Wuille wrote:
> However, it does very clearly show the effects of
> larger blocks on centralization pressure of the system.

On 6/14/2015 10:45 AM, Jonas Nick wrote:
> This means that your scenario is not the result of a cartel but the
> result of a long-term network partition.


Pieter, to Jonas' point, in your scenario the big miners are all part of 
the majority partition, so centralization pressure (pressure to merge 
with a big miner) cannot be separated from pressure to be connected to 
the majority partition.

I ran your simulation with a large (20%) miner in a 20% minority 
partition, and 16 small (5%) miners in a majority 80% partition, well 
connected.  The starting point was your recent update, which had a more 
realistic slow link speed of 100 Mbit/s (making all of the effects 
smaller).

To summarize the results across both your run and mine:

** Making small blocks when others are making big ones - BAD
** As above, and fees are enormous - VERY BAD

** Being separated by a slow link from majority hash power - BAD

** Being a small miner with blocksize=20MB - *NOT BAD*


Configuration:
   * Miner group 0: 20.00% hashrate, blocksize 2000.00
   * Miner group 1: 80.00% hashrate, blocksize 100.00
   * Expected average block size: 480.00
   * Average fee per block: 0.25
   * Fee per byte: 0.000521
Result:
   * Miner group 0: 20.404704% income (factor 1.020235 with hashrate)
   * Miner group 1: 79.595296% income (factor 0.994941 with hashrate)

Configuration:
   * Miner group 0: 20.00% hashrate, blocksize 2000.00
   * Miner group 1: 80.00% hashrate, blocksize 2000.00
   * Expected average block size: 2000.00
   * Average fee per block: 0.25
   * Fee per byte: 0.000125
Result:
   * Miner group 0: 19.864232% income (factor 0.993212 with hashrate)
   * Miner group 1: 80.135768% income (factor 1.001697 with hashrate)

Configuration:
   * Miner group 0: 20.00% hashrate, blocksize 2000.00
   * Miner group 1: 80.00% hashrate, blocksize 100.00
   * Expected average block size: 480.00
   * Average fee per block: 25.00
   * Fee per byte: 0.052083
Result:
   * Miner group 0: 51.316895% income (factor 2.565845 with hashrate)
   * Miner group 1: 48.683105% income (factor 0.608539 with hashrate)

Configuration:
   * Miner group 0: 20.00% hashrate, blocksize 2000.00
   * Miner group 1: 80.00% hashrate, blocksize 2000.00
   * Expected average block size: 2000.00
   * Average fee per block: 25.00
   * Fee per byte: 0.012500
Result:
   * Miner group 0: 19.865943% income (factor 0.993297 with hashrate)
   * Miner group 1: 80.134057% income (factor 1.001676 with hashrate)
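As a cross-check on intuition, the shape of these results can be reproduced with a toy analytic sketch (this is not Pieter's simulator; the propagation model, parameter names, and numbers below are illustrative assumptions): a group's blocks risk being orphaned while they propagate to the rest of the network, while larger blocks collect proportionally more fees.

```python
import math

def income_shares(groups, bandwidth_bps=100e6 / 8, interval=600.0,
                  subsidy=25.0, fee_per_byte=0.0):
    """groups: list of (hashrate_fraction, block_size_bytes).
    A group's blocks risk orphaning during their propagation delay;
    bigger blocks also collect proportionally more fees."""
    revenue = []
    for h, size in groups:
        delay = size / bandwidth_bps          # seconds to reach the others
        # chance the rest of the network finds a competing block during
        # the delay (Poisson block arrivals at one per `interval` seconds)
        p_orphan = 1.0 - math.exp(-delay * (1.0 - h) / interval)
        reward = subsidy + fee_per_byte * size
        revenue.append(h * (1.0 - p_orphan) * reward)
    total = sum(revenue)
    return [r / total for r in revenue]

# one big-block miner vs. a well-connected small-block majority
print(income_shares([(0.20, 2_000_000), (0.80, 100_000)],
                    fee_per_byte=0.25 / 480_000))
```

With negligible fees the shares track hashrate almost exactly; cranking `fee_per_byte` up rewards the big-block group, mirroring the pattern in the configurations above.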




Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-15 Thread Tom Harding
> [https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki
> BIP0070]: Payment Protocol
>
> [[btcpop scheme BIP]]
>
> 2015-06-06 23:25 GMT+02:00 Kalle Rosenbaum ka...@rosenbaum.se:
>> Thank you all for the feedback.
>>
>> I will change the data structure as follows:
>>
>> * There will be only one output, the pop output, and no outputs from
>>   T will be copied to the PoP.
>> * The pop output will have value 0.
>> * The sequence number of all inputs of the PoP will be set to 0. I
>>   chose to set it to 0 for all inputs for simplicity.
>> * The lock_time of the PoP is always set to 4.
>>
>> Any comments on this?
>>
>> /Kalle
>>
>> 2015-06-06 19:00 GMT+02:00 Kalle Rosenbaum ka...@rosenbaum.se:
>>> 2015-06-06 18:10 GMT+02:00 Tom Harding t...@thinlink.com:
>>>> On Jun 6, 2015 8:05 AM, Kalle Rosenbaum ka...@rosenbaum.se wrote:
>>>>
>>>>> I'm open to changes here.
>>>>
>>>> I suggest:
>>>>
>>>> - Don't include any real outputs.  They are redundant because the
>>>>   txid is already referenced.
>>>
>>> With the nLocktime solution, the copied outputs are not needed.
>>>
>>>> - Start the proof script, which should be invalid, with a magic
>>>>   constant and include space for future expansion.  This makes
>>>>   PoP's easy to identify and extend.
>>>
>>> I did remove the constant (a "PoP" literal ascii encoded string)
>>> because it didn't add much. The recipient will expect a PoP, so it
>>> will simply treat it as one. I did add a 2 byte version field to
>>> make it extendable.
>>>
>>>> - Proof of Potential
>>>
>>> Noted :-)
>>>
>>> Thank you
>>> /Kalle
 
 
 
 



Re: [Bitcoin-development] Proposal: SPV Fee Discovery mechanism

2015-06-11 Thread Tom Harding
On 6/11/2015 6:10 AM, Peter Todd wrote:
> On Wed, Jun 10, 2015 at 02:18:30PM -0700, Aaron Voisine wrote:
>> The other complication is that this will tend to be a lagging indicator
>> based on network congestion from the last time you connected. If we
>> assume that transactions are being dropped in an unpredictable way when
>> blocks are full, knowing the network congestion *right now* is critical,
>> and even then you just have to hope that someone who wants that space
>> more than you do doesn't show up after you disconnect.
> Hence the need for ways to increase fees on transactions after initial
> broadcast like replace-by-fee and child-pays-for-parent.
>
> Re: "dropped in an unpredictable way" - transactions would be dropped
> lowest fee/KB first, a completely predictable way.

Quite agreed.  Also, transactions with unconfirmed inputs should be
among the first to get dropped, as discussed in the "Dropped-transaction
spam" thread.  Like all policy rules, either of these works in
proportion to its deployment.

Be advised that pull request #6068 emphasizes the view that the network 
will never have consistent mempool/relay policies, and on the contrary 
needs a framework that supports and encourages pluggable, generally 
parameterized policies that could (some might say should) conflict 
wildly with each other.

It probably doesn't matter that much.  Deploying a new policy still 
wouldn't be much easier than deploying a patched version.  I mean, 
nobody has proposed a policy rule engine yet (oops).





Re: [Bitcoin-development] BIP for Proof of Payment

2015-06-06 Thread Tom Harding
On Jun 6, 2015 8:05 AM, Kalle Rosenbaum ka...@rosenbaum.se wrote:

 I'm open to changes here.

I suggest:

- Don't include any real outputs.   They are redundant because the txid is
already referenced.

- Start the proof script, which should be invalid, with a magic constant
and include space for future expansion.  This makes PoP's easy to identify
and extend.

- Proof of Potential


Re: [Bitcoin-development] soft-fork block size increase (extension blocks)

2015-06-01 Thread Tom Harding
On 6/1/2015 10:21 AM, Adam Back wrote:
> if it stays as is for a year, in a wait and see, reduce spam, see
> fee-pressure take effect as it has before, work on improving
> decentralisation metrics, relay latency, and do a blocksize increment
> to kick the can if-and-when it becomes necessary and in the mean-time
> try to do something more long-term ambitious about scale rather than
> volume.

What's your estimate of the lead time required to kick the can,
if-and-when it becomes necessary?

The other time-series I've seen all plot an average block size.  That's
misleading, because there's a distribution of block sizes.  If you bin
by retarget interval and plot every single block, you get this

http://i.imgur.com/5Gfh9CW.png

The max block size has clearly been in play for 8 months already.
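For anyone who wants to reproduce that plot against their own data, the binning is simple (the `demo` numbers below are hypothetical; real per-block sizes would come from a node, e.g. via `getblock`):

```python
from collections import defaultdict

RETARGET = 2016  # blocks per difficulty retarget period

def size_distribution(blocks):
    """blocks: iterable of (height, size_bytes).  Returns, per retarget
    period, (min, median, max) block sizes -- a distribution, not just
    the misleading average."""
    bins = defaultdict(list)
    for height, size in blocks:
        bins[height // RETARGET].append(size)
    out = {}
    for period, sizes in sorted(bins.items()):
        sizes.sort()
        out[period] = (sizes[0], sizes[len(sizes) // 2], sizes[-1])
    return out

# hypothetical data covering three retarget periods
demo = [(h, 200_000 + (h % 7) * 130_000) for h in range(6048)]
for period, (lo, med, hi) in size_distribution(demo).items():
    print(period, lo, med, hi)
```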





Re: [Bitcoin-development] First-Seen-Safe Replace-by-Fee

2015-05-26 Thread Tom Harding

I think this is a significant step forward.

I suggest you also need to ensure that no inputs can be removed or 
changed (other than scriptsigs) -- only added.  Otherwise, the semantics 
change too much for the original signers.  Imagine a tx with two inputs 
from different parties.  Should it be easy for party 1 to be able to 
eliminate party 2 as a contributor of funds?  It's not difficult to 
imagine real-world consequences to not having contributed to the 
transaction.  And unless you can think of a reason, tx-level attributes 
like nLocktime should not change either.

The result would be something very like CPFP, but with the new inputs 
and outputs merged into the original tx, keeping most of the overhead 
savings you describe.

It should be submitted to bitcoin/bitcoin because like most inconsistent 
relay policies, inconsistently deployed FSS RBF invites attacks (see 
https://gist.github.com/aalness/a78e3e35b90f52140f0d).

Generally, to be kind to zeroconf:

  - Align relay and validation rules
  - Keep first-seen
  - Relay double-spends as alerts
  - Allow nLocktime transactions into the mempool a bit before they 
become final
  - ...

It's not unlike making a best-effort to reduce sources of malleability.  
FSS RBF should be compatible with this if deployed consistently.



On 5/25/2015 10:13 PM, Peter Todd wrote:
 Summary
 ---

 First-seen-safe replace-by-fee (FSS RBF) does the following:

 1) Give users effective ways of getting stuck transactions unstuck.
 2) Use blockchain space efficiently.

 without:

 3) Changing the status quo with regard to zeroconf.

 The current Bitcoin Core implementation has first-seen mempool
 behavior. Once transaction t1 has been accepted, the transaction is
 never removed from the mempool until mined, or double-spent by a
 transaction in a block. The author's previously proposed replace-by-fee
 replaced this behavior with simply accepting the transaction paying the
 highest fee.

 FSS RBF is a compromise between these two behaviors. Transactions may be
 replaced by higher-fee paying transactions, provided that all outputs in
 the previous transaction are still paid by the replacement. While not as
 general as standard RBF, and with higher costs than standard RBF, this
 still allows fees on transaction to be increased after the fact with
 less cost and higher efficiency than child-pays-for-parent in many
 common situations; in some situations CPFP is unusable, leaving RBF as
 the only option.


 Semantics
 -

 For reference, standard replace-by-fee has the following criteria for
 determining whether to replace a transaction.

 1) t2 pays more fees than t1

 2) The delta fees paid by t2, t2.fee - t1.fee, are >= the minimum fee
 required to relay t2. (t2.size * min_fee_per_kb)

 3) t2 pays more fees/kb than t1

 FSS RBF adds the following additional criteria to replace-by-fee before
 allowing a transaction t1 to be replaced with t2:

 1) All outputs of t1 exist in t2 and pay >= the value in t1.

 2) All outputs of t1 are unspent.

 3) The order of outputs in t2 is the same as in t1 with additional new
 outputs at the end of the output list.

 4) t2 only conflicts with a single transaction, t1

 5) t2 does not spend any outputs of t1 (which would make it an invalid
 transaction, impossible to mine)

 These additional criteria respect the existing first-seen behavior of
 the Bitcoin Core mempool implementation, such that once an address is
 paid some amount of BTC, all subsequent replacement transactions will
 pay an equal or greater amount. In short, FSS-RBF is zeroconf safe and
 has no effect on the ability of attackers to doublespend (beyond, of
 course, the fact that any changes whatsoever to mempool behavior are
 potential zeroconf doublespend vulnerabilities).
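For concreteness, the replacement criteria quoted above can be sketched as a single predicate. This is a simplified model with dict-based transactions, not the actual pull-req code; in particular, the "conflicts only with t1" rule is reduced here to "shares an input with t1", ignoring other mempool conflicts.

```python
def fss_rbf_ok(t1, t2, is_unspent, min_fee_per_kb=1000):
    """Toy check of the FSS-RBF criteria quoted above.

    t1/t2 are dicts with 'txid', 'inputs' (list of outpoints),
    'outputs' (list of (script, value)), 'fee' and 'size' (bytes).
    is_unspent(txid, n) stands in for a UTXO-set lookup.
    """
    # Standard RBF: strictly more fees, a delta large enough to pay for
    # relaying t2, and a higher fee rate.
    if t2["fee"] <= t1["fee"]:
        return False
    if (t2["fee"] - t1["fee"]) < t2["size"] * min_fee_per_kb / 1000:
        return False
    if t2["fee"] / t2["size"] <= t1["fee"] / t1["size"]:
        return False
    # FSS criteria 1 and 3: every t1 output survives in t2, in the same
    # order, paying at least the original value.
    if len(t2["outputs"]) < len(t1["outputs"]):
        return False
    for (s1, v1), (s2, v2) in zip(t1["outputs"], t2["outputs"]):
        if s1 != s2 or v2 < v1:
            return False
    # Criterion 2: all outputs of t1 are still unspent.
    if not all(is_unspent(t1["txid"], n) for n in range(len(t1["outputs"]))):
        return False
    # Criterion 5: t2 must not spend outputs of t1.
    if any(txid == t1["txid"] for txid, _ in t2["inputs"]):
        return False
    # Criterion 4 (simplified): t2 conflicts with t1, i.e. shares an input.
    return any(op in t1["inputs"] for op in t2["inputs"])
```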


 Implementation
 --

 Pull-req for git HEAD: https://github.com/bitcoin/bitcoin/pull/6176

 A backport to v0.10.2 is pending.

 An implementation of fee bumping respecting FSS rules is available at:

 https://github.com/petertodd/replace-by-fee-tools/blob/master/bump-fee.py


 Usage Scenarios
 ---

 Case 1: Increasing the fee on a single tx
 -

 We start with a 1-in-2-out P2PKH using transaction t1, 226 bytes in size
 with the minimal relay fee, 2.26uBTC. Increasing the fee while
 respecting FSS-RBF rules requires the addition of one more txin, with
 the change output value increased appropriately, resulting in
 transaction t2, size 374 bytes. If the change txout is sufficient for
 the fee increase, increasing the fee via CPFP requires a second
 1-in-1-out transaction, 192 bytes, for a total of 418 bytes; if another
 input is required, CPFP requires a 2-in-1-out tx, 340 bytes, for a total
 of 566 bytes.

 Benefits: 11% to 34%+ cost savings, and RBF can increase fees even in
cases where the original transaction didn't have a change
output.
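The quoted savings follow directly from the byte counts given above (arithmetic only; all sizes are those stated in the case description):

```python
# byte sizes quoted above
t1_size, t2_size = 226, 374          # original tx and its FSS-RBF replacement
cpfp_1in, cpfp_2in = 192, 340        # CPFP child variants

cpfp_total_low = t1_size + cpfp_1in   # 418 bytes total on the chain
cpfp_total_high = t1_size + cpfp_2in  # 566 bytes total on the chain

saving_low = 1 - t2_size / cpfp_total_low
saving_high = 1 - t2_size / cpfp_total_high
print(f"{saving_low:.0%} to {saving_high:.0%}")  # prints "11% to 34%"
```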


 Case 2: Paying multiple recipients in succession
 

Re: [Bitcoin-development] First-Seen-Safe Replace-by-Fee

2015-05-26 Thread Tom Harding
On 5/26/2015 4:11 PM, Gregory Maxwell wrote:
> On Tue, May 26, 2015 at 11:00 PM, Tom Harding t...@thinlink.com wrote:
>> The bitcoin transaction is part of a real-world deal with unknown
>> connections to the other parts
> I'm having a hard time parsing this.  You might as well say that it's
> part of a weeblix for how informative it is, since you've not defined
> it.

For example, you are paying for concert tickets.  The deal is concert 
tickets for bitcoin.  Or you're buying a company with 3 other investors.


>> not the case if paying parties are kicked out of the deal, and possibly
>> don't learn about it right away.
> The signatures of a transaction can always be changed at any time,
> including by the miner, as they're not signed.

Miners can't update the signature on input #0 after removing input #1.



>> A subset of parties to an Armory simulfunding transaction (an actual
>> multi-input use case) could replace one signer's input after they
>> broadcast it.
> They can already do this.

Replacement is about how difficult it is to change the tx after it is 
broadcast and seen by observers.


>> Maybe the receiver cares where he is paid from or is basing a
>> subsequent decision on it.  Maybe a new output is being added, whose
>> presence makes the transaction less likely to be confirmed quickly,
>> with that speed affecting the business.
> The RBF behavior always moves in the direction of more preferred or
> otherwise the node would not switch to the replacement. Petertodd
> should perhaps make that more clear.
>
> But your maybes are what I was asking you to clarify. You said it
> wasn't hard to imagine; so I was asking for specific clarification.

Pick any one "maybe".  They're only "maybes" because it's not realistic
for them all to happen at once.



>> With Kalle's Proof of Payment proposed standard, one payer in a
>> two-input transaction could decide to boot the other, and claim the
>> concert tickets all for himself.  The fact that he pays is not the only
>> consideration in the real world -- what if these are the last 2 tickets?
> They can already do that.

Not without replacement, after broadcast, unless they successfully pay 
twice.



>> I'd argue that changing how an input is signed doesn't change the deal.
>> For example if a different 2 of 3 multisig participants sign, those 3
>> people gave up that level of control when they created the multisig.
> Then why do you not argue that changing the input set does not change
> the weeblix?
>
> Why is one case of writing out a participant different from the other
> case of writing out a participant?

In the multisig input case, each signer already accepted the possibility 
of being written out.  Peter Todd's proposal is in the spirit of not 
willfully making unconfirmed txes less reliable.  I'm suggesting that 
multi-input signers should be included in the set of people for whom 
they don't get less reliable.



>> Replacement is new - we have a choice what kind of warnings we need to
>> give to signers of multi-input transactions.  IMHO we should avoid
>> needing a stronger warning than is already needed for 0-conf.
> How could a _stronger_ warning be required?

We'd have to warn signers of multi-input txes instead of just warning
receivers.




Re: [Bitcoin-development] Long-term mining incentives

2015-05-16 Thread Tom Harding
On 5/16/2015 1:35 PM, Owen Gunden wrote:
> There are alternatives that still use bitcoin as the unit of value,
> such as sidechains, offchain, etc. To say that these are not bitcoin
> is misleading.


Is it?  Nobody thinks "euro accepted" implies Visa is OK, even though
Visa is just a bunch of extra protocol surrounding an eventual bank deposit.





[Bitcoin-development] No Bitcoin For You

2015-05-14 Thread Tom Harding
A recent post, which I cannot find after much effort, made an excellent
point.

If capacity grows, fewer individuals would be able to run full nodes. 
Those individuals, like many already, would have to give up running a
full-node wallet :(

That sounds bad, until you consider that the alternative is running a
full node on the bitcoin 'settlement network', while massive numbers of
people *give up any hope of directly owning bitcoin at all*.

If today's global payments are 100Ktps, and move to the Lightning
Network, they will have to be consolidated by a factor of 25000:1 to fit
into bitcoin's current 4tps capacity as a settlement network.  You
executing a personal transaction on that network will be about as likely
as you personally conducting a $100 SWIFT transfer to yourself today. 
For current holders, just selling or spending will get very expensive!

Forcing block capacity to stay small, so that individuals can run full
nodes, is precisely what will force bitcoin to become a backbone that is
too expensive for individuals to use.  I can't avoid the conclusion that
Bitcoin has to scale, and we might as well be thinking about how.

There may be an escape window.  As current trends continue toward a
landscape of billions of SPV wallets, it may still be possible for
individuals collectively to make up the majority of the network, if more
parts of the network itself rely on SPV-level security.

With SPV-level security, it might be possible to implement a scalable
DHT-type network of nodes that collectively store and index the
exhaustive and fast-growing corpus of transaction history, up to and
including currently unconfirmed transactions.  Each individual node
could host a slice of the transaction set with a configurable size,
let's say down to a few GB today.

Such a network would have the desirable property of being run by the
community.  Most transactions would be submitted to it, and like today's
network, it would disseminate blocks (which would be rapidly torn apart
and digested).  Therefore miners and other full nodes would depend on
it, which is rather critical as those nodes grow closer to data-center
proportions.





Re: [Bitcoin-development] Block Size Increase

2015-05-07 Thread Tom Harding
On 5/7/2015 12:54 PM, Jeff Garzik wrote:
> In the short term, blocks are bursty, with some on 1 minute intervals,
> some with 60 minute intervals.  This does not change with larger blocks.


I'm pretty sure Alan meant that blocks are already filling up after long 
inter-block intervals.



> 2) Where do you want to go?  Should bitcoin scale up to handle all the
> world's coffees?

Alan was very clear.  Right now, he wants to go exactly where Gavin's 
concrete proposal suggests.





Re: [Bitcoin-development] Block Size Increase

2015-05-07 Thread Tom Harding
On 5/7/2015 7:09 PM, Jeff Garzik wrote:

> G proposed 20MB blocks, AFAIK - 140 tps
> A proposed 100MB blocks - 700 tps
> For ref,
> Paypal is around 115 tps
> VISA is around 2000 tps (perhaps 4000 tps peak)
>
> I ask again:  where do we want to go?   This is the existential
> question behind block size.
>
> Are we trying to build a system that can handle Paypal volumes?  VISA
> volumes?
>
> It's not a snarky or sarcastic question:  Are we building a system to
> handle all the world's coffees?  Is bitcoin's main chain and network -
> Layer 1 - going to receive direct connections from 500m mobile phones,
> broadcasting transactions?
>
> We must answer these questions to inform the change being discussed
> today, in order to decide what makes the most sense as a new limit.
> Any responsible project of this magnitude must have a better story
> than "zomg 1MB, therefore I picked 20MB out of a hat."  Must be able
> to answer /why/ the new limit was picked.
>
> As G notes, changing the block size is simply kicking the can down the
> road:
> http://gavinandresen.ninja/it-must-be-done-but-is-not-a-panacea
> Necessarily one must ask, today, what happens when we get to the end
> of that newly paved road.



Accepting that outcomes are less knowable further into the future is not
the same as failing to consider the future at all.  A responsible
project can't have a movie-plot roadmap.  It needs to give weight to
multiple possible future outcomes.
http://en.wikipedia.org/wiki/Decision_tree

One way or another, the challenge is to decide what to do next.  Beyond
that, it's future decisions all the way down. 

Alan argues that 7 tps is a couple orders of magnitude too low for any
meaningful commercial activity to occur, and too low to be the final
solution, even with higher layers.  I agree.  I also agree with you,
that we don't really know how to accomplish 700tps right now.

What we do know is if we want to bump the limit in the short term, we
ought to start now, and until there's a better alternative root to the
decision tree, it just might be time to get moving.






Re: [Bitcoin-development] Block Size Increase

2015-05-07 Thread Tom Harding
On 5/7/2015 6:40 AM, Jorge Timón wrote:
>> Known: There's a major problem looming for miners at the next block
>> reward halving. Many are already in a bad place and without meaningful
>> fees then sans a 2x increase in the USD:BTC ratio then many will simply
>> have to leave the network, increasing centralisation risks. There seems
>> to be a fairly pervasive assumption that the 300-ish MW of power that
>> they currently use is going to pay for itself (ignoring capital and
>> other operating costs).
> I take this as an argument for increasing fee competition and thus,
> against increasing the block size.


That doesn't follow.  Supposing average fees per transaction decrease
with block size, total fees / block reach an optimum somewhere.  While
the optimum might be at infinity, it's certainly not at zero, and it's
not at all obvious that the optimum is at a block size lower than 1MB.
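A toy model makes the point that such an optimum can be interior (the demand curve and constants below are purely illustrative assumptions, not a claim about real fee markets):

```python
import math

def total_fees(n_tx, a=0.001, k=2000.0):
    """Toy model: willingness-to-pay per tx decays with capacity,
    fee(n) = a * exp(-n / k), so total fees per block = n * fee(n)."""
    return n_tx * a * math.exp(-n_tx / k)

# the per-block fee total peaks at an interior optimum (n = k here),
# neither at zero capacity nor at infinity
best = max(range(1, 20001), key=total_fees)
print(best)  # prints 2000
```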





Re: [Bitcoin-development] Block Size Increase

2015-05-06 Thread Tom Harding

On 5/6/2015 3:12 PM, Matt Corallo wrote:

> Long-term incentive compatibility requires
> that there be some fee pressure, and that blocks be relatively
> consistently full or very nearly full.


I think it's way too early to even consider a future era when the fiat 
value of the block reward is no longer the biggest-by-far mining incentive.


Creating fee pressure means driving some people to choose something
else, not bitcoin.  "Too many people using bitcoin" is nowhere on the
list of problems today.  It's reckless to tinker with adoption in hopes
of spurring innovation on speculation, while a can kick is available.


Adoption is currently at miniscule, test-flight, relatively
insignificant levels when compared to global commerce.  As Gavin
discussed in the article "Block size and miner fees… again", the best
way to maximize miner incentives is to focus on doing things that are
likely to increase adoption, which, in our fiat-dominated world, lead
to a justifiably increased exchange rate.


Any innovation attractive enough to relieve the block size pressure will 
do so just as well without artificial stimulus.


Thanks for kicking off the discussion.



Re: [Bitcoin-development] Proof of Payment

2015-04-26 Thread Tom Harding
On 4/22/2015 1:03 PM, Kalle Rosenbaum wrote:

> I've built a proof-of-concept for Proof of Payment. It's available at
> http://www.rosenbaum.se:8080. The site contains links to the source
> code for both the server and a Mycelium fork as well as pre-built apk:s.
>
> There are several scenarios in which it would be useful to prove that
> you have paid for something. For example:
>
> * A pre-paid hotel room where your PoP functions as a key to the door.
> * An online video rental service where you pay for a video and watch
>   it on any device.
> * An ad-sign where you pay in advance for e.g. 2-weeks exclusivity.
>   During this period you can upload new content to the sign whenever
>   you like using PoP.
> * A lottery where all participants pay to the same address, and the
>   winner of the T-shirt is selected among the transactions to that
>   address. You exchange the T-shirt for a PoP for the winning
>   transaction.


Kalle,

You propose a standard format for proving that wallet-controlled funds
COULD HAVE BEEN spent as they were in a real transaction.  Standardized
PoP would give wallets a new way to communicate with the outside world.

PoP could allow payment and delivery to be separated in time in a
standard way, without relying on a mechanism external to bitcoin's
cryptosystem, and enable standardized real-world scenarios where sender
!= beneficiary, and/or receiver != provider.

Payment:
sender -> receiver

Delivery:
beneficiary -> provider

Some more use cases might be:
Waiting in comfort:
 - Send a payment ahead of time, then wander over and collect the goods
after X confirmations.

Authorized pickup:
 - Hot wallet software used by related people could facilitate the use
of 1 of N multisig funds.  Any one of the N wallets could collect goods
and services purchased by any of the others.

Non-monetary gifts:
 - Sender exports spent keys to a beneficiary, enabling PoP to work as a
gift claim

Contingent services:
 - Without Bob's permission, a 3rd party conditions action on a payment
made from Alice to Bob.  For example, if you donated at least .02 BTC to
Dorian, you (or combining scenarios, any of your N authorized family
members), can come to my dinner party.

I tried out your demo wallet and service and it worked as advertised.

Could the same standard also be used to prove that a transaction COULD
BE created?  To generalize the concept beyond actual payments, you could
call it something like proof of payment potential.

Why not make these proofs permanently INVALID transactions, to remove
any possibility of their being mined and spending everything to fees
when used in this way, and also in cases involving reorganizations?

I agree that PoP seems complementary to BIP70.





[Bitcoin-development] Address Expiration to Prevent Reuse

2015-03-24 Thread Tom Harding
The idea of limited-lifetime addresses was discussed on 2014-07-15 in

http://thread.gmane.org/gmane.comp.bitcoin.devel/5837

It appears that a limited-lifetime address, such as the fanciful

address = 4HB5ld0FzFVj8ALj6mfBsbifRoD4miY36v_349366

where 349366 is the last valid block for a transaction paying this 
address, could be made reuse-proof with bounded resource requirements, 
if, for a locktime'd tx paying the address, the following were enforced by 
consensus:

  - Expiration
Block containing tx invalid at height > 349366

  - Finality
Block containing tx invalid if (349366 - locktime) > X
(X is the address validity duration in blocks)

  - Uniqueness
Block containing tx invalid if a prior confirmed tx has paid address

Just an idea, obviously not a concrete proposal.
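As a sketch only, the three checks above amount to a single validity predicate. The function name and the value of X below are illustrative assumptions, not part of the idea as posted:

```python
# Hypothetical sketch of the three consensus checks for a limited-lifetime
# address whose suffix encodes its last valid block (349366).

LAST_VALID_HEIGHT = 349366
X = 1000  # address validity duration in blocks (assumed value)

def tx_paying_address_is_valid(block_height, locktime, address_already_paid):
    # Expiration: block containing tx invalid at height > 349366
    if block_height > LAST_VALID_HEIGHT:
        return False
    # Finality: invalid if (349366 - locktime) > X
    if LAST_VALID_HEIGHT - locktime > X:
        return False
    # Uniqueness: invalid if a prior confirmed tx already paid this address
    if address_already_paid:
        return False
    return True
```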




Re: [Bitcoin-development] replace-by-fee v0.10.0rc4

2015-02-22 Thread Tom Harding
On 2/11/2015 10:47 PM, Peter Todd wrote:
 My replace-by-fee patch is now available for the v0.10.0rc4 release:

  https://github.com/petertodd/bitcoin/tree/replace-by-fee-v0.10.0rc4


This patch immediately simplifies successful double-spends of 
unconfirmed transactions.  But the idea that it gives a path to making 
zeroconf transactions economically secure is quite dubious.

* You don't provide sufficient means to detect and relay double-spends, 
which is necessary to trigger a scorched-earth reaction.  Not all 
double-spends will conform to your replacement rules.

   * Maybe XT nodes would help to overcome this.  But meanwhile, in the 
ANYONECANPAY design, Bob's replacement is a triple-spend.  Even XT nodes 
won't relay it.

* It's unclear when, if ever, any senders/receivers will actually try to 
use scorched-earth as a double-spend deterrent.


Also, this patch significantly weakens DoS protections:

* It removes the early conflict check, making all conflict processing 
more expensive

   * There is no attempt to protect against the same transaction being 
continually replaced with the fee bumped by a minimal amount.



Re: [Bitcoin-development] replace-by-fee v0.10.0rc4

2015-02-12 Thread Tom Harding
On 2/11/2015 10:47 PM, Peter Todd wrote:
 ... replace-by-fee ...

Replace-by-fee creates the power to repudiate an entire tree of 
payments, and hands this power individually to the owner of each input 
to the top transaction.  Presumably this is why the original replacement 
code at least required that all of the same inputs be spent, even if the 
original outputs got jilted.

Replace-by-fee strengthens the existing *incentive discontinuity* at 
1-conf, and shouts it from the rooftops.  There is diffraction around 
hard edges.  Expect more Finney attacks, even paid ones, if 
replace-by-fee becomes common.  Regardless of how reliable 0-conf can 
ever be (much more reliable than today imho), discontinuities are very 
undesirable.

There is no money in mining other people's double-spends.  Miners of all 
sizes would welcome a fair way to reduce them to improve the quality of 
the currency, whether or not that way is DSDW.  You mischaracterize DSDW 
as being in any way trust- or vote-based.  It is based on statistics, 
which is bitcoin-esque to the core.




Re: [Bitcoin-development] replace-by-fee v0.10.0rc4

2015-02-12 Thread Tom Harding
On 2/12/2015 6:25 AM, Tamas Blummer wrote:

 Miner will see a mixed picture and will struggle to act “honestly” on 
 a statistical measure.

The statistics come from the aggregate actions of all nodes, especially 
those miners who watch p2p transactions and assemble blocks.

Any one node makes deterministic decisions based on its own observation 
-- just like today's valid/invalid decision based on whether a blocktime 
is within the next 2 hours or not.

The idea is that miners will exclude respends because they put the block 
at risk of being forked off, with no offsetting payback.  The design 
point is to make sure this is sufficiently unlikely to happen 
accidentally, or via some attack vector.





Re: [Bitcoin-development] Update to Double-Spend Deprecation Window Proposal

2015-02-09 Thread Tom Harding
Many thanks for the feedback Peter.  Please if you would, see below

On 2/8/2015 10:32 PM, Peter Todd wrote:
 Seeing a transaction is not a guarantee that any other node has seen it; not 
 seeing a transaction is not a guarantee other nodes have not seen a spend.

The proposal in no way relies on such assumptions.  It develops local 
rules which result in a desirable outcome for the network as a whole, 
under the applicable statistics.


 you're measuring a network that isn't under attack; Bitcoin must be robust 
 against attacks, and it must not create incentives to launch them.

Two specific attacks are addressed at some length.  No one is keener 
than I to learn of new ones, or flaws in those treatments.


 Institutionalising the punishment of miners because they did not have perfect 
 connectivity - an unattainable goal in a trustless, decentralised system - 
 is anathema to the goals of having a decentralised system and will only lead 
 to smaller mining operations being punished for being the victim of attacks 
 on their network connectivity that are only made profitable by this proposal.

Building from unavoidable imperfections is the necessary spirit when 
interfacing with physical reality.  I would defer to miners whether 
these specific worries outweigh the benefits of helping to achieve a 30 
second network, rather than a 10±10 minute network.


 Equally your proposal actually makes it *easier* to pull off apparently 
 single-confirm double-spend attacks - any miner who ignores a block 
 containing the apparent double-spend is just as likely to be aiding an 
 attacker trying to get a 1-conf transaction double-spent. This forces 
 *everyone* to wait *longer* before accepting a transaction because now 
 even a single-confirmation is no longer good evidence of an accepted 
 transaction. In an ecosystem where hardly anyone relies on zeroconf anyway 
 you're putting a much larger group of people at risk who weren't at risk before.

I agree on one point -- it is necessary to let transactions mature for 
something on the order of 15 to 30 seconds before mining them, as 
discussed in the proposal.  I quite disagree regarding Finney (1-conf) 
attacks.  In fact this proposal is the only one I've seen that actually 
stops most Finney attacks -- all those where the block comes more than 
30 seconds after tx1.





[Bitcoin-development] Update to Double-Spend Deprecation Window Proposal

2015-02-08 Thread Tom Harding

This update strengthens the incentive not to confirm double-spends after 
time T (30 seconds).  To grow and stabilize adoption, it is necessary to 
influence the miner of the block after a deprecated block, who in turn 
is concerned with the block after that. Accordingly, the disincentive is 
changed from a simple delay to a temporary chain work penalty, which can 
be negative.  Hal Finney first suggested this in 2011.

The penalty is graduated in two steps based on the respend gap, for 
reasons explained within.  I believe it is the minimum required to 
achieve the intended result.

Double-Spend Deprecation Window
https://github.com/dgenr8/out-there/blob/master/ds-dep-win.md




Re: [Bitcoin-development] IMPULSE: Instant Payments using the Bitcoin protocol

2015-01-22 Thread Tom Harding
On 1/17/2015 12:45 PM, Rune Kjær Svendsen wrote:
 PDF: http://impulse.is/impulse.pdf

 I'd love to hear this list's thoughts.


Will success be defined by BitPay Payment Channels Accepted Here signs 
appearing in shop windows?




Re: [Bitcoin-development] DS Deprecation Window

2014-11-06 Thread Tom Harding

Added a section "Confidence to include tx1" and subsection "Deliberate 
delay attack"
https://github.com/dgenr8/out-there/blob/master/ds-dep-win.md

I found that under concerted attack, if a miner excludes any transaction 
first seen less than 30 seconds ago, or double-spent less than 30 
seconds after first seen, he should expect 5 of 1 nodes to delay his 
block.

Hal Finney remarked that this idea would need careful analysis. More 
help is very welcome.
https://bitcointalk.org/index.php?topic=3441.msg48789#msg48789

Cheers!

On 10/28/2014 10:38 AM, Tom Harding wrote:
 So, I think it will be possible to quantify and target the risk of 
 including tx1...





Re: [Bitcoin-development] DS Deprecation Window

2014-10-28 Thread Tom Harding
On 10/27/2014 7:36 PM, Gregory Maxwell wrote:
 Consider a malicious miner can concurrently flood all other miners
 with orthogonal double spends (which he doesn't mine himself). These
 other miners will all be spending some amount of their time mining on
 these transactions before realizing others consider them
 double-spends.

If I understand correctly, the simplest example of this attack is three 
transactions spending the same coin, distributed to two miners like this:

 Miner AMiner B
Mempool   tx1a   tx1b
Relayed   tx2tx2

Since relay has to be limited, Miner B doesn't know about tx1a until it 
is included in Miner A's block, so he delays that block (unless it 
appears very quickly).

To create this situation, attacker has to transmit all three 
transactions very quickly, or mempools will be too synchronized. 
Attacker tries to make it so that everyone else has a tx1a conflict that 
Miner A does not have.  Ditto for each individual victim, with different 
transactions (this seems very difficult).

The proposal shows that there is always a tiny risk to including tx1 when a 
double-spend is known, and I agree that this attack can add something to 
that risk.  Miner A can neutralize his risk by excluding any tx1 known 
to be double-spent, but as Thomas Zander wrote, that is an undesirable 
outcome.

However, Miner A has additional information - he knows how soon he 
received tx2 after receiving tx1a.

The attack has little chance of working if any of the malicious 
transactions are sent even, say, 10 seconds apart from each other. 
Dropping the labels for transmit-order numbering, if the 1-2 transmit 
gap is large, mempools will agree on 1.  If 1-2 gap is small, but the 
gap to 3 is large, mempools will agree on the 1-2 pair, but possibly 
have the order reversed.  Either way, mempools won't disagree on the 
existence of 1 unless the 1-3 gap is small.

So, I think it will be possible to quantify and target the risk of 
including tx1a to an arbitrarily low level, based on the local 
measurement of the time gap to tx2, and an effective threshold won't be 
very high.  It does highlight yet again, the shorter the time frame, the 
greater the risk.




[Bitcoin-development] DS Deprecation Window

2014-10-27 Thread Tom Harding
Greetings Bitcoin Dev,

This is a proposal to improve the ability of bitcoin users to rely on 
unconfirmed transactions.  It can be adopted incrementally, with no hard 
or soft fork required.

https://github.com/dgenr8/out-there/blob/master/ds-dep-win.md

Your thoughtful feedback would be very much appreciated.

It is not yet implemented anywhere.

Cheers,
Tom Harding
CA, USA




Re: [Bitcoin-development] DS Deprecation Window

2014-10-27 Thread Tom Harding
Matt,

You're right, thanks.  Without double-spend relay, a miner won't know that 
some txes conflict with anything.  I'll add that first-double-spends are 
relayed per #4570.

A miner has to be very careful including a double-spend in his block -- he 
hopes:

  - that based on his measured time offset from the first spend he 
received, at most a tiny fraction of the network will delay his block

  - that not too many nodes saw an earlier spend that he didn't see, 
which could increase that fraction

  - that most other nodes saw his tx.  Any who didn't will only learn 
about it by receiving his block, and they will assign it the time when 
they receive the block.  That's likely to be more than T (30 seconds) 
after an earlier spend, so they would delay the block.

The best course of action is intended to be for the miner to exclude fast (< 
2 hours) double spends completely.


On 10/27/2014 1:17 PM, Matt Corallo wrote:
 miners are incentivized to go connect to everyone on the network and
 look for double-spends

 On 10/27/14 19:58, Tom Harding wrote:
 https://github.com/dgenr8/out-there/blob/master/ds-dep-win.md



Re: [Bitcoin-development] [BIP draft] CHECKLOCKTIMEVERIFY - Prevent a txout from being spent until an expiration time

2014-10-07 Thread Tom Harding
On 10/7/2014 8:50 AM, Gavin Andresen wrote:

 I don't have any opinion on the hard- versus soft- fork debate. I 
 think either can work.


Opinion: if a soft fork works, it should be preferred, if for no other 
reason than once a hard-fork is planned, the discussion begins about 
what else to throw in.  To minimize the frequency of hard-forks, the 
time for that is when the change being considered actually requires one.



Re: [Bitcoin-development] bitcoinj 0.12

2014-10-03 Thread Tom Harding


I'm stunned by what bitcoinj can do these days.  Just reading the 
release notes gives one app ideas.  Mike, Awesome.



On 10/3/2014 5:49 AM, Mike Hearn wrote:
I'm pleased to announce version 0.12 of bitcoinj, one of the world's 
most popular Bitcoin libraries.




Re: [Bitcoin-development] SPV clients and relaying double spends

2014-09-27 Thread Tom Harding
On 9/25/2014 7:37 PM, Aaron Voisine wrote:
 Of course you wouldn't want nodes to propagate alerts without
 independently verifying them
How would a node independently verify a double-spend alert, other than 
by having access to an actual signed double-spend?

#4570 relays the first double-spend AS an alert.  Running this branch on 
mainnet, I have been keeping a live list of relayed double-spend 
transactions at http://respends.thinlink.com




Re: [Bitcoin-development] deterministic transaction expiration

2014-08-08 Thread Tom Harding
Having explored more drastic approaches, it looks like Kaz' basic idea 
stands well.  His #1...

 1. start setting nLockTime to the current height by default in newly
 created transactions (or slightly below the current height, for
 reorg-friendliness)

is already implemented in bitcoin-qt #2340, and a final call on 
merging it was already sent to this list.  After some thought I agree 
with its policy of eventually setting nLockTime at current-height + 1 by 
default.  This is the best reasonably expected height of any tx 
created right now.  It discourages fee-sniping, and if a reorg happens 
anyway, it won't actually delay inclusion of tx beyond the reasonable 
expectation sans reorg.

However right now, #2340 takes a very cautious approach and sets nLockTime 
to current-height - 10 by default, with randomness to mitigate worries 
about loss of privacy.
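A minimal sketch of such a default, assuming a 10-block back-off and a 10% randomization rate (both illustrative parameters; #2340's actual values may differ):

```python
import random

def default_locktime(current_height, backoff=10, p_random=0.1):
    # Anti-fee-sniping default: target the current tip height...
    locktime = current_height
    # ...but occasionally pick an older height, so nLockTime does not
    # fingerprint which wallet created the transaction.
    if random.random() < p_random:
        locktime = max(0, locktime - random.randint(0, backoff))
    return locktime
```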

Kaz' #2, #3 and #4 are future actions.  #4 only goes most of the way ...

 4. add a new IsStandard rule rejecting transactions with an nLockTime
 more than N blocks behind the current tip (for some fixed value N, to
 be determined)

... a janitor mechanism is desirable to purge mempool of txes more than 
N behind current-height.

Nodes dropping a tx N blocks after it became eligible to be mined (the 
meaning of nLockTime) makes sense.  It is not an overloading or new use 
for nLockTime, but a logical extension of it.  As Kaz pointed out, this 
solves a big problem with expiring by locally measured age: 
unintentional resurrection.
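The eviction rule could be sketched as follows; N is an assumed value, not one fixed by the thread:

```python
N = 100  # assumed expiry depth in blocks

def should_evict(tip_height, nlocktime, expiry_depth=N):
    # nLockTime marks the first block in which the tx could be mined.  Once
    # the chain tip is more than expiry_depth blocks past that point, every
    # node drops the tx at the same height, so a reorg cannot resurrect it
    # at some nodes but not others.
    return tip_height - nlocktime > expiry_depth
```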




Re: [Bitcoin-development] deterministic transaction expiration

2014-08-06 Thread Tom Harding


How is the eventual expiration of a tx that started life with an nLockTime 
in the future any more "breaking" than any other tx expiring?



On 8/6/2014 6:54 AM, Mike Hearn wrote:
We could however introduce a new field in a new tx version. We know we 
need to rev the format at some point anyway.



On Wed, Aug 6, 2014 at 2:55 PM, Jeff Garzik jgar...@bitpay.com wrote:


 ...and existing users and uses of nLockTime suddenly become
worthless, breaking payment channel refunds and other active uses
of nLockTime.

You cannot assume the user is around to rewrite their nLockTime,
if it fails to be confirmed before some arbitrary deadline being set.



On Wed, Aug 6, 2014 at 12:01 AM, Tom Harding t...@thinlink.com wrote:

...




If nLockTime is used for expiration, transaction creator can't
lie to
help tx live longer without pushing initial confirmation
eligibility
into the future.  Very pretty.  It would also enable fill or
kill
transactions with a backdated nLockTime, which must be
confirmed in a
few blocks, or start vanishing from mempools.





Re: [Bitcoin-development] deterministic transaction expiration

2014-08-06 Thread Tom Harding


Today we have first-eligible-height (nLockTime), and mempool expiration 
measured from this height would work for the goals being discussed, no 
fork or protocol rev.


With first-eligible-height and last-eligible-height, creator could 
choose a lifetime shorter than the max,  and in addition, lock the whole 
thing until some point in the future.



On 8/6/2014 9:15 AM, Jeff Garzik wrote:
A fork is not necessarily required, if you are talking about 
information that deals primarily with pre-consensus mempool behavior.  
You can make a network TX with some information that is digitally 
signed, yet discarded before it reaches miners.



On Wed, Aug 6, 2014 at 11:42 AM, Peter Todd p...@petertodd.org wrote:





On 6 August 2014 08:17:02 GMT-07:00, Christian Decker
decker.christ...@gmail.com wrote:
+1 for the new field, overloading fields with new meaning is
definitely not a good idea.

To add a new field the best way to do it is create a new,
parallel, tx format where fields are committed by merkle radix
tree in an extensible and provable way. You'd then commit to that
tree with a mandatory OP_RETURN output in the last txout, or with
a new merkle root.

Changing the tx format itself in a hard-fork is needlessly
disruptive, and in this case, wastes opportunities for improvement.




--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc. https://bitpay.com/






Re: [Bitcoin-development] deterministic transaction expiration

2014-08-05 Thread Tom Harding
On 8/5/2014 12:10 PM, Kaz Wesley wrote:
 Any approach based on beginning a transaction expiry countdown when a 
 transaction is received (as in mempool janitor) seems unviable to me: 
 once a node has forgotten a transaction, it must be susceptible to 
 reaccepting it;

It's hard to argue with that logic.

If nLockTime is used for expiration, transaction creator can't lie to 
help tx live longer without pushing initial confirmation eligibility 
into the future.  Very pretty.  It would also enable fill or kill 
transactions with a backdated nLockTime, which must be confirmed in a 
few blocks, or start vanishing from mempools.




Re: [Bitcoin-development] deterministic transaction expiration

2014-08-01 Thread Tom Harding
On 7/31/2014 5:58 PM, Kaz Wesley wrote:
 1. start setting nLockTime to the current height by default in newly
 created transactions (or slightly below the current height, for
 reorg-friendliness)

Reorg-friendliness is the opposite of the rationale behind #2340, which 
proposes setting nLockTime at current-height + 1 to prevent 
fee-sniping reorgs...


 2. once users have had some time to upgrade to clients that set
 nLockTime, start discouraging transactions without nLockTime --
 possibly with a slightly higher fee required for relay
 3. start rate-limiting relay of transactions without an nLockTime
 (maybe this alone could be used to achieve [2])
 4. add a new IsStandard rule rejecting transactions with an nLockTime
 more than N blocks behind the current tip (for some fixed value N, to
 be determined)


One way to proceed is implement #3753 (mempool janitor) in such a way 
that transactions with nLockTime are allowed to live a bit longer in the 
mempool (say 500 blocks) than those without (72 hours).  In other words, 
as a first step, just actually start expiring things from the mempool in 
bitcoin core, and leave any relay fee adjustments or rate limiting for 
later.  The IsStandard change would be a good complement to #3753, to 
avoid relaying a tx that will soon expire by the nLockTime rule anyway.





Re: [Bitcoin-development] instant confirmation via payment protocol backwards compatible proto buffer extension

2014-06-17 Thread Tom Harding
On 6/16/2014 8:09 AM, Daniel Rice wrote:
 What if we solved doublespends like this: If a node receives 2 
 transactions that use the same input, they can put both of them into 
 the new block as a proof of double spend, but the bitcoins are not 
 sent to the outputs of either transactions. They are instead treated 
 like a fee and given to the block solver node. This gives miners the 
 needed incentive and tools to end doublespends instead of being forced 
 to favor one transaction over the other.

Before considering a hard fork with unpredictable effects on the 
uncertainty window, it would be interesting to look at a soft fork that 
would directly target the goal of reducing the uncertainty window, like 
treating locally-detected double-spends aged < T as invalid (see earlier 
message "A statistical consensus rule for reducing 0-conf double-spend 
risk").

If anything is worth a soft fork, wouldn't reducing the double-spend 
uncertainty window by an order of magnitude be in the running?

Reducing the reasons that transactions don't get relayed, which actually 
seems to have a shot of happening pretty soon, would also make this kind 
of thing work better.




Re: [Bitcoin-development] instant confirmation via payment protocol backwards compatible proto buffer extension

2014-06-17 Thread Tom Harding
On 6/16/2014 8:48 AM, Mike Hearn wrote:
 In practice of course this is something payment processors like Bitpay 
 and Coinbase will think about. Individual cafes etc who are just using 
 mobile wallets won't be able to deal with this complexity: if we can't 
 make native Bitcoin work well enough there, we're most likely to just 
 lose that market or watch it become entirely centralised around a 
 handful of payment processing companies.

I have trouble seeing how the real-time anonymous payments market 
can be cleanly separated from everything else.  If trusted third parties 
become the norm for that market, there will inevitably be a huge overlap 
effect on other markets that bitcoin can serve best, even today.  I 
don't see how any currency, any cash, can concede this market.




Re: [Bitcoin-development] A statistical consensus rule for reducing 0-conf double-spend risk

2014-05-12 Thread Tom Harding
Sorry to run on; a correction is needed.  A much better approximation 
requires that the rule-following minority finds the next TWO blocks, so 
the cost is

(total miner revenue of block)*(fraction of hashpower following the rule)^2

So the lower bound cost in this very pessimistic scenario is .0025 BTC, 
still quite high for one transaction.  I guess a miner could try to make 
a business out of mining double-spends, to defray that cost.


On 5/11/2014 9:41 PM, Tom Harding wrote:
 Back up to the miner who decided to include a seasoned double-spend 
 in his block.  Let's say he saw it 21 seconds after he saw an earlier 
 spend, and included it, despite the rule.

 The expected cost of including the respend is any revenue loss from 
 doing so: (total miner revenue of block)*(fraction of hashpower 
 following the rule).  So today, if only 1% of hashpower follows the 
 rule (ie a near total failure of consensus implementation), he still 
 loses at least .25 BTC.

 .25 BTC is about 1000x the typical double-spend premium I'm seeing 
 right now.  Wouldn't the greedy-rational miner just decide to include 
 the earlier spend instead?





Re: [Bitcoin-development] A statistical consensus rule for reducing 0-conf double-spend risk

2014-05-06 Thread Tom Harding
Christophe Biocca wrote:

 it becomes trivial with a few tries to split the network into two
 halves: (tx1 before tx2, tx2 before tx1).

"before" implies T=0.  That is a much too optimistic choice for T; 50% 
of nodes would misidentify the respend.


 Tom Harding t...@thinlink.com wrote:
- Eventually, node adds a consensus rule:
   Do not accept blocks containing a transaction tx2 where
   - tx2 respends an output spent by another locally accepted
 transaction tx1, and
   - timestamp(tx2) - timestamp(tx1) > T




[Bitcoin-development] A statistical consensus rule for reducing 0-conf double-spend risk

2014-05-03 Thread Tom Harding
This idea was suggested by Joe on 2011-02-14 
https://bitcointalk.org/index.php?topic=3441.msg48484#msg48484 .  It 
deserves another look.

Nodes today make a judgment regarding which of several conflicting 
spends to accept, and which is a double-spend.  But there is no 
incorporation of these collective judgments into the blockchain.  So 
today, it's the wild west, right up until the next block.  To address this:

  - Using its own clock, node associates a timestamp with every 
transaction upon first seeing its tx hash (at inv, in a block, or when 
created)
  - Node relays respend attempts (subject to anti-DOS rules, see github 
PR #3883)
  - Eventually, node adds a consensus rule:
 Do not accept blocks containing a transaction tx2 where
 - tx2 respends an output spent by another locally accepted 
transaction tx1, and
     - timestamp(tx2) - timestamp(tx1) > T
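A minimal sketch of the proposed block check, assuming a node keeps a 
local first-seen map from spent outpoints to (txid, timestamp).  The 
structures and field names here are illustrative only, not actual 
Bitcoin Core data types:

```python
T = 7.4  # seconds; the reject threshold discussed below

def block_acceptable(block_txs, first_seen):
    """Apply the proposed consensus rule to a candidate block.

    block_txs:  list of dicts with 'txid', 'inputs' (spent outpoints),
                and 'first_seen_ts' (this node's local timestamp).
    first_seen: {outpoint: (txid, timestamp)} of the first spend this
                node observed for each outpoint (illustrative state).
    """
    for tx2 in block_txs:
        for outpoint in tx2["inputs"]:
            prior = first_seen.get(outpoint)
            if prior is None:
                continue  # never saw a spend of this outpoint before
            tx1_id, t1 = prior
            if tx1_id == tx2["txid"]:
                continue  # the block confirms the spend we accepted
            # tx2 respends an output spent by locally accepted tx1
            if tx2["first_seen_ts"] - t1 > T:
                return False  # late respend: reject the whole block
    return True
```

A block is rejected only when it confirms a respend first seen more 
than T after a conflicting spend; earlier respends are left to the 
usual first-seen race.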

What is T?

According to http://bitcoinstats.com/network/propagation/ recent tx 
propagation has a median of 1.3 seconds.  If the double-spender introduces 
both transactions from the same node, assuming propagation times 
distributed exponentially with median 1.3 seconds, the above consensus 
rule with reject threshold T = 7.4 seconds would result in 
mis-identification of the second-spend by less than 1% of nodes.*

If tx1 and tx2 are introduced in mutually time-distant parts of the 
network, a population of nodes in between would be able to accept either 
transaction, as they can today.  But the attacker still has to introduce 
them at close to the same time, or the majority of the network will 
confirm the one introduced earlier.

Merchant is watching also, and these dynamics mean he will not have to 
watch for very long to gain confidence that if he was going to get 
double-spent, he would have learned it by now.  The consensus rule also 
makes mining a never-broadcast double-spend quite difficult, because the 
network assigns it very late timestamps.  The miner has to get lucky and 
find the block very quickly.  In other words, it converges to a Finney 
attack.

This would be the first consensus rule that anticipated less than 100% 
agreement.  But the parameters could be chosen so that it was still 
extremely conservative.  Joe also suggested a fail-safe condition: drop 
this rule if block has 6 confirmations, to prevent a fork in unusual 
network circumstances.

We can't move toward this, or any, solution without more data. Today, 
the network is not transparent to double-spend attempts, so we mostly 
have to guess what the quantitative effects would be.  The first step is 
to share the data broadly by relaying first double-spend attempts as in 
github PR #3883.


*Calcs:
For Exp(lambda), median = ln(2)/lambda = 1.3  ==>  lambda = .533
P(Laplace(0,1/lambda) > T) < .01  ==>  T = 7.34 seconds
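The calcs can be checked numerically.  This sketch uses the standard 
fact that the difference of two i.i.d. Exponential(lambda) delays is 
Laplace(0, 1/lambda), whose upper tail is P(X > T) = 0.5*exp(-lambda*T):

```python
import math

median = 1.3                 # seconds, median tx propagation time
lam = math.log(2) / median   # Exp(lambda) median = ln(2)/lambda

def laplace_tail(T):
    """P(X > T) for X ~ Laplace(0, 1/lam): the chance a node sees the
    second spend appear to lag the first by more than T."""
    return 0.5 * math.exp(-lam * T)

# Solve 0.5*exp(-lam*T) = 0.01 for T:
T = -math.log(2 * 0.01) / lam

print(round(lam, 3))   # 0.533
print(round(T, 2))     # 7.34
```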




Re: [Bitcoin-development] Double-spending unconfirmed transactions is a lot easier than most people realise

2014-04-23 Thread Tom Harding
On 4/22/2014 9:03 PM, Matt Whitlock wrote:
 On Tuesday, 22 April 2014, at 8:45 pm, Tom Harding wrote:
 A network where transaction submitters consider their (final)
 transactions to be unchangeable the moment they are transmitted, and
 where the network's goal is to confirm only transactions all of whose
 UTXO's have not yet been seen in a final transaction's input, has a
 chance to be such a network.
 Respectfully, this is not the goal of miners. The goal of miners is to 
 maximize profits. Always will be. If they can do that by enabling 
 replace-by-fee (and they can), then they will. Altruism does not factor into 
 business.

The rational miner works hard digging hashes out of the ether, and wants 
the reward to be great.  How much more valuable would his reward be if 
he were paid in something that is spendable like cash on a 1-minute 
network for coffee and other innumerable real-time transactions, versus 
something that is only spendable on a 15-minute network?

There is a prisoner's dilemma, to be sure, but do the fees from helping 
people successfully double-spend their coffee supplier really outweigh 
the increased value to the entire network - including himself - of 
ensuring that digital cash actually works like cash?





Re: [Bitcoin-development] Coinbase reallocation to discourage Finney attacks

2014-04-23 Thread Tom Harding

On 4/23/2014 2:23 PM, Tier Nolan wrote:
 An interesting experiment would be a transaction proof of 
 publication chain.

What if a transaction could simply point back to an earlier transaction, 
forming a chain?  Not a separately mined blockchain, just a way to 
establish an official publication (execution) order. Double spends would 
be immediately actionable with such a sequence. Transactions in a block 
could eventually be required to be connected in such a chain.  Miners 
would have to keep or reject a whole mempool chain, since they lack the 
keys to change the sequence.  They would have to prune a whole tx 
subchain to insert a double spend (and this would still require private 
keys to the double spend utxo's).

This idea seemed promising, until I realized that with the collision 
rebasing required, it would barely scale to today's transaction rate.  
Something that scales to 10,000's of transactions per second, and really 
without limit, is needed.

Anyway, I wrote it up here: 
https://github.com/dgenr8/out-there/blob/master/tx-chains.md
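For concreteness, the pointer idea can be sketched like this 
(hypothetical fields and structure, for illustration only; see the 
linked write-up for the actual proposal):

```python
# Hypothetical sketch: each transaction carries a pointer to its
# predecessor's txid, fixing an official publication order.  Field
# names are illustrative, not part of the real protocol.
def verify_chain_order(txs):
    """True iff the list forms an unbroken publication chain."""
    return all(tx["prev_txid"] == prev["txid"]
               for prev, tx in zip(txs, txs[1:]))

chain = [
    {"txid": "t1", "prev_txid": None},
    {"txid": "t2", "prev_txid": "t1"},
    {"txid": "t3", "prev_txid": "t2"},
]
print(verify_chain_order(chain))  # True
```

A miner inserting a double spend mid-chain would break every later 
link, and re-linking the later transactions requires their signers' 
keys, which is the pruning property (and the rebasing cost) described 
above.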




Re: [Bitcoin-development] Economics of information propagation

2014-04-22 Thread Tom Harding
Jonathan -

These are a few things I've been wishing for recent data on:

  - 95th percentile transaction propagation time vs. fees/kb, vs. total fees
  - Count of blocks bypassing well-propagated transactions vs. fees/kb, 
vs. total fees
  - Signed-double-spend confirmation probability vs. broadcast time 
offset from first spend

On 4/20/2014 5:30 PM, Jonathan Levin wrote:
 at coinometrics we are working on a modified client to capture information on 
 network propagation and would invite any suggestions of any other useful 
 statistics that would be useful in the development of software.





Re: [Bitcoin-development] Double-spending unconfirmed transactions is a lot easier than most people realise

2014-04-22 Thread Tom Harding

Since no complete solution to preventing 0-confirmation respends in the 
bitcoin network has been proposed, or is likely to exist, when 
evaluating partial solutions let's ask: what kind of network does each 
one move toward?

Does the solution move toward a network with simple rules, where the 
certainty that decreases from the many-confirmations state, down to 1 
confirmation, does not immediately disappear just below the time of 1 
confirmation?

A network where transaction submitters consider their (final) 
transactions to be unchangeable the moment they are transmitted, and 
where the network's goal is to confirm only transactions all of whose 
UTXO's have not yet been seen in a final transaction's input, has a 
chance to be such a network.  If respend attempts are broadcast widely, 
then after a time on the order of transaction propagation time (< 1 
minute) has passed, participants have a good chance to avoid relying on 
a transaction whose funds are spent to someone else.  This is both 
because after this time the network is unlikely to split on the primacy 
of one spend, and because the recipient, able to see a respend attempt, 
can withhold delivery of the good or service until confirmation.

Or, does the solution move toward a network that
  - Requires participants to have knowledge of the policies of multiple 
entities, like Eligius and whoever maintains the blacklist mentioned below?
  - Requires a transaction submitter to intently monitor transactions 
and try to climb over the top of attempted respends with 
scorched-earth triple spends, until a random moment some time between, 
let's say, 5 and 15 minutes in the future?
  - Punts the problem to off-network solutions?


On 4/22/2014 1:31 PM, Peter Todd wrote:
 You may have seen my reddit post of the same title a few days ago:

 http://www.reddit.com/r/Bitcoin/comments/239bj1/doublespending_unconfirmed_transactions_is_a_lot/

 I've done some more experiments since, with good results. For instance
 here's a real-world double-spend of the gambling service Lucky Bit:

 Original: 7801c3b996716025dbac946ca7a123b7c1c5429341738e8a6286a389de51bd20

 0100012a14c8e6ce1e625513847b2ff271b3e6a1849f2a634c601b7f383ef710483f796a4730440220692d09f5415f23118f865b81430990a15517954fd14a8bda74a5a38c4f2f39450220391f6251e39cdd3cab7363b912b897146a0a78e295f6ecd23b078c9f64ca7ae8012103a11c09c09874833eedc58a031d01d161ab4d2eba3874959537c5609ef5d5401f030c4d0f001976a914d5245b64fcf8e873a9d1c0bfe2d258492bec6cc888ac400d03001976a914da5dde8abec4f3b67561bcd06aaf28b790cff75588ac10271976a914c4c5d791fcb4654a1ef5e03fe0ad3d9c598f982788ac

 Double-spend: f4e8e930bdfa3666b4a46c67544e356876a72ec70060130b2c7078c4ce88582a

 0100012a14c8e6ce1e625513847b2ff271b3e6a1849f2a634c601b7f383ef710483f796a473044022074f0c6912b482c6b51f1a91fb2bdca3f3dde3a3aed4fc54bd5ed563390011c2d02202719fe49578591edfbdd4b79ceeaa7f9550e4323748b3dbdd4135f38e70c476d012103a11c09c09874833eedc58a031d01d161ab4d2eba3874959537c5609ef5d5401f01d9c90f001976a914d5245b64fcf8e873a9d1c0bfe2d258492bec6cc888ac

 The double-spend was mined by Eligius and made use of the fact that
 Eligius blacklists transactions to a number of addresses considered to
 be spam by the pool operators; affected transactions are not added to
 the Eligus mempool at all. Lucky Bit has a real-time display of bets as
 they are accepted; I simply watched that display to determine whether or
 not I had lost. With Eligius at 8% and the house edge at 1.75% the
 attack is profitable when automated. My replace-by-fee patch(1) was
 used, although as there are only a handful of such nodes running - none
 connected directly to Eligius from what I can determine - I submitted
 the double-spend transactions to Eligius directly via their pushtxn
 webform.(2)

 Of course, this is an especially difficult case, as you must send the
 double-spend after the original transaction - normally just sending a
 non-standard tx to Eligius first would suffice. Note how this defeats
 Andresen's double-spend-relay patch(3) as proposed since the
 double-spend is a non-standard transaction.

 In discussion with Lucky Bit they have added case-specific code to
 reject transactions with known blacklisted outputs; the above
 double-spend I performed is no longer possible. Of course, if the
 (reused) Lucky Bit addresses are added to that blacklist, that approach
 isn't viable - I suggest they switch to a scheme where addresses are not
 reused. (per-customer? rotated?) They also have added code to keep track
 of double-spend occurances and trigger human intervention prior to
 unacceptable losses. Longer term as with most services (e.g. Just-Dice)
 they intend to move to off-chain transactions. They are also considering
 implementing replace-by-fee scorched earth(4) - in their case a single
 pool, such as Eligius, implementing it would be enough to make the
 attack unprofitable. It may also be enough security to