Re: [Bitcoin-development] [RFC] [BIP proposal] Dealing with malleability

2014-02-20 Thread Michael Gronager
As I see the BIP it is basically stressing that ver 1 transactions are 
malleable.

It then addresses the need for unmalleable transactions for e.g. spending 
unconfirmed outputs in a deterministic way (i.e. no 3rd party can sabotage) - 
this transaction type is defined as ver 3.

A lot of clients today spend unconfirmed outputs (even bitcoin-qt) and as such 
make an implicit assumption that this is reasonably safe, which it is not - it 
can be intercepted and sabotaged through tx malleability.

What I suggested was to ensure that a subclass of version 1 transactions becomes 
unmalleable - namely those with sighash=all. Note that only the sender can 
modify the sighash, as it is part of the hash being signed. So instead of defining a 
version 3, we could constrain version 1 txes with sighash=all to have an 
unmalleable hash. If you would still like a sighash=all type of transaction 
with malleable features, you can exploit that sighash=all is today checked 
using sighash & 0x1f == SIGHASH_ALL, so just OR'ing with 0x20 or 0x40 
will get you the 'old' behaviour.
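The sighash test being relied on can be sketched in Python. The constants mirror Bitcoin's signature hash types; treating the unused 0x20/0x40 bits as "opt back into malleability" markers is this post's suggestion, not an existing rule:

```python
SIGHASH_ALL = 0x01
SIGHASH_NONE = 0x02
SIGHASH_SINGLE = 0x03
SIGHASH_ANYONECANPAY = 0x80

def is_sighash_all(sighash_byte: int) -> bool:
    # Bitcoin only inspects the low five bits when testing for SIGHASH_ALL,
    # so OR'ing in an unused bit such as 0x20 or 0x40 leaves this test true.
    return sighash_byte & 0x1F == SIGHASH_ALL

# Plain SIGHASH_ALL: would fall under the proposed unmalleable subclass.
assert is_sighash_all(0x01)
# SIGHASH_ALL | 0x20: still verifies as sighash=all today, but per this
# suggestion could mark a transaction that opts back into malleability.
assert is_sighash_all(0x21)
# SIGHASH_ALL | SIGHASH_ANYONECANPAY: 0x81 & 0x1F == 0x01, still "all".
assert is_sighash_all(0x81)
assert not is_sighash_all(SIGHASH_NONE)
```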

I do however buy the argument of Peter and Gregory that there might exist 
unpublished transactions out there that do not even conform to the DER rules 
etc., and as such we cannot forbid them from being mined, nor can we timestamp 
them and include 'only the old ones'. Hence we cannot change the consensus rules 
for version 1 transactions - and only changing the relay rules will not provide 
a firm guarantee.

So, I think the two-line argument for the BIP is as follows:
1. We cannot change the consensus rules for version 1 transactions, as that 
might invalidate unpublished non-standard transactions (= voiding people's 
money, which is a line we don't want to cross).
2. The prime use case for unmalleable transactions is being able to spend 
unconfirmed outputs - this is done today by almost all clients, but it is 
really broken. Hence the need for a fix asap.

I am all in favor of the BIP, but I expect the realistic timeline for enforced 
version 3 transactions is roughly one year, which is better than two, but it 
would have been nice to get it faster...

/M


On Feb 19, 2014, at 10:11 PM, Pieter Wuille pieter.wui...@gmail.com wrote:

 On Wed, Feb 19, 2014 at 9:28 PM, Michael Gronager grona...@mac.com wrote:
 I think that we could guarantee fewer incidents by making version 1 
 transactions unmalleable and then optionally introduce a version 3 that 
 supported the malleability feature. That way most existing problematic 
 implementations would be fixed and no doors were closed for people 
 experimenting with other stuff - tx v 3 would probably then be called 
 experimental transactions.
 
 Just to be clear: this change is not directly intended to avoid
 incidents. It will take way too long to deploy this. Software should
 deal with malleability. This is a longer-term solution intended to
 provide non-malleability guarantees for clients that a) are upgraded
 to use them & b) willing to restrict their functionality. As there are
 several intended use cases for malleable transactions (the sighash
 flags pretty directly are a way to signify what malleabilities are
 *wanted*), this is not about outlawing malleability.
 
 While we could right now make all these rules non-standard, and
 schedule a soft fork in a year or so to make them illegal, it would
 mean removing potential functionality that can only be re-enabled
 through a hard fork. This is significantly harder, so we should think
 about it very well in advance.
 
 About new transaction and block versions: this allows implementing and
 automatically scheduling a softfork without waiting for wallets to
 upgrade. The non-DER signature change was discussed for over two
 years, and implemented almost a year ago, and we still notice wallets
 that don't support it. We can't expect every wallet to be instantly
 modified (what about hardware wallets like the Trezor, for example?
 they may not just be able to be upgraded). Nor is it necessary: if
 your software only spends confirmed change, and tracks all debits
 correctly, there is no need.
 
 -- 
 Pieter



Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development


Re: [Bitcoin-development] [RFC] [BIP proposal] Dealing with malleability

2014-02-19 Thread Michael Gronager
Why introduce a new transaction version for this purpose? Wouldn't it be more 
elegant to simply let:

1. the next bitcoin version 'prettify' all relayed transactions into 
deterministic transactions fulfilling rules 1-6, effectively blocking any 
malleability attack? If miners upgraded, then all transactions in blocks 
would have a deterministic hash.

2. In a later version, one could block relay of non-deterministic transactions, 
as well as the acceptance of blocks with non-conforming transactions.

To non-conforming clients this prettify change of hash would be seen as a 
constant malleability attack, but given the prettify code it is trivial to fix 
any client so that it produces only conforming transactions, simply by running 
each transaction through it before broadcast.

There is a possible fork risk in step 2 above - if a majority of miners still 
haven't upgraded to 1 when 2 is introduced. We could monitor the percentage of 
non-conforming transactions per block and only introduce 2 once that number has 
been sufficiently small for a certain duration - criterion:
* Switch on forcing of unmalleable transactions in blocks when there have been 
only conforming transactions for 1000 blocks.
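The switch-on criterion above can be sketched as a simple counter over recent blocks; the per-block conformance flags and the threshold constant are hypothetical stand-ins for rules 1-6, not anything specified in the proposal:

```python
THRESHOLD = 1000  # consecutive fully-conforming blocks before enforcing

def should_enforce(block_conforming_flags) -> bool:
    """Return True once the most recent THRESHOLD blocks (in order) each
    contained only transactions with deterministic, conforming encodings."""
    streak = 0
    for conforming in block_conforming_flags:
        # A single non-conforming block resets the streak to zero.
        streak = streak + 1 if conforming else 0
    return streak >= THRESHOLD

# 1000 conforming blocks in a row -> switch on the new rule.
assert should_enforce([True] * 1000)
# A non-conforming block near the tip resets the counter.
assert not should_enforce([True] * 999 + [False])
# An old non-conforming block followed by only 999 good ones: not yet.
assert not should_enforce([False] + [True] * 999)
```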


On Feb 13, 2014, at 1:47 AM, Gregory Maxwell gmaxw...@gmail.com wrote:

 On Wed, Feb 12, 2014 at 4:39 PM, Alex Morcos mor...@gmail.com wrote:
 I apologize if this has been discussed many times before.
 
 It has been, but there are probably many people like you who have not
 bothered researching who may also be curious.
 
 As a long term solution to malleable transactions, wouldn't it be possible
 to modify the signatures to be of the entire transaction.  Why do you have
 to zero out the inputs?  I can see that this would be a hard fork, and maybe
 it would be somewhat tricky to extract signatures first (since you can sign
 everything except the signatures), but it would seem to me that this is an
 important enough change to consider making.
 
 Because doing so would be both unnecessary and ineffective.
 
 Unnecessary because we can very likely eliminate malleability without
 changing what is signed. It will take time, but we have been
 incrementally moving towards that, e.g. v0.8 made many kinds of
 non-canonical encoding non-standard.
 
 Ineffective— at least as you describe it— because the signatures
 _themselves_ are malleable.
 





Re: [Bitcoin-development] [RFC] [BIP proposal] Dealing with malleability

2014-02-19 Thread Michael Gronager
Twisting your words a bit I read:

* you want to support relay of transactions that can be changed on the fly, but 
you consider it wrong to modify them.
* #3 is already not forwarded, but you still find it relevant to support it.

Rational use cases for #3 will be pretty hard to find given that such 
transactions can be changed on the fly. We are down to inclusion in blocks by 
miners for special purposes - or did I miss something?

I think that we could guarantee fewer incidents by making version 1 
transactions unmalleable and then optionally introduce a version 3 that 
supported the malleability feature. That way most existing problematic 
implementations would be fixed and no doors were closed for people 
experimenting with other stuff - tx v 3 would probably then be called 
experimental transactions.

/M


On Feb 19, 2014, at 3:38 PM, Pieter Wuille pieter.wui...@gmail.com wrote:

 On Wed, Feb 19, 2014 at 3:11 PM, Michael Gronager grona...@mac.com wrote:
 Why introduce a new transaction version for this purpose ? Wouldn't it be 
 more elegant to simply let:
 
 1. the next bitcoin version prettify all relayed transactions as 
 deterministic transactions fulfilling the scheme 1-6 effectively blocking 
 any malleability attack? If miners would upgrade then all transactions in 
 blocks would have a deterministic hash.
 
 I consider actively mutating other's transactions worse than not
 relaying them. If we want people to make their software deal with
 malleability, either will work.
 
 Regarding deterministic hash: that's impossible. Some signature hash
 types are inherently (and intentionally) malleable. I don't think we
 should pretend to want to change that. The purpose is making
 non-malleability a choice the sender of a transaction can make.
 
 Most of the rules actually are enforced by IsStandard already now.
 Only #1 and #7 aren't. #1 affects the majority of all transactions, so
 changing it right now would be painful. #7 only affects multisig.
 
 2. In a version later one could block relay of non deterministic 
 transactions, as well as the acceptance of blocks with non-confirming 
 transactions.
 
 To non-standard conforming clients this prettify change of hash would be 
 seen as a constant malleability attack, but given the prettify code it is 
 to fix any client into producing only conforming transactions, just by 
 running the transaction through it before broadcast.
 
 There is a possible fork risk in step 2. above - if a majority of miners 
 still havn't upgraded to 1 when 2 is introduced. We could monitor % non 
 conforming transaction in a block and only introduce 2. once that number is 
 sufficiently small for a certain duration - criteria:
 * Switch on forcing of unmalleable transactions in blocks when there has 
 been only conforming transactions for 1000 blocks.
 
 The problem in making these rules into consensus rule (affecting
 tx/block validity) is that some rules (in particular #3) may not be
 wanted by everyone, as they effectively limit the possibilities of the
 script language further. As it is ultimately only about protecting
 senders who care about non-malleability, introducing a new transaction
 version is a very neat way of accomplishing that. The new block
 version number is only there to coordinate the rollout, and choosing
 an automatic forking point.
 
 -- 
 Pieter





Re: [Bitcoin-development] Network propagation speeds

2013-11-25 Thread Michael Gronager
Hi Christian,

Cool - thanks for posting. Agreed that it would be nice to normalize
the results by block size - so divide by size and:
1. see if there is a correlation (we all presume there still is)
2. plot the delay graph normalized to e.g. the average block size - or
let's define a standard block size of 200kB or whatever, so we can
compare the plots between days.

Also, does the correlation of propagation times hold for transaction
sizes as well? (It would be nice to find the logical t_0 and the constant - I
guess the interesting measure is not kB but signatures, so number of
inputs - some correlation with size though.)
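The normalization in point 2 can be sketched in a few lines; all sample sizes and delays below are invented for illustration:

```python
# Hypothetical measurements: (block_size_kb, propagation_delay_s).
samples = [(120, 4.8), (240, 9.1), (60, 2.6), (180, 7.2)]

REFERENCE_KB = 200  # the "standard block size" suggested above

# Per-sample propagation rate in s/kB; for large blocks t_0 is negligible.
rates = [delay / size for size, delay in samples]
avg_rate = sum(rates) / len(rates)
assert abs(avg_rate - 0.0403) < 0.0005

# Delay rescaled to the reference size, so plots are comparable between days.
normalized = [delay * REFERENCE_KB / size for size, delay in samples]
assert all(abs(n - r * REFERENCE_KB) < 1e-9 for n, r in zip(normalized, rates))
```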

Best,

Michael

On 24/11/13, 17:37 , Christian Decker wrote:
 Sure thing, I'm looking for a good way to publish these measurements,
 but I haven't found a good option yet. They are rather large in size,
 so I'd rather not serve them along with the website as it hasn't got
 the capacity. Any suggestions? If the demand is not huge I could
 provide them on a per user basis.
 --
 Christian Decker
 
 
 On Sun, Nov 24, 2013 at 5:26 PM, Gregory Maxwell gmaxw...@gmail.com wrote:
 On Sun, Nov 24, 2013 at 8:20 AM, Christian Decker
 decker.christ...@gmail.com wrote:
 Since this came up again during the discussion of the Cornell paper I
 thought I'd dig up my measurement code from the Information
 Propagation paper and automate it as much as possible.

 Could you publish the block ids and timestamp sets for each block?

 It would be useful in correlating propagation information against
 block characteristics.
 
 




Re: [Bitcoin-development] Even simpler minimum fee calculation formula: f > bounty*fork_rate/average_blocksize

2013-11-15 Thread Michael Gronager

Hi Peter,

Love to see things put into formulas - nice work!

Fully agree on your first section: as latency determines maximum
block earnings, defining a 0-latency island (a big miner never orphans his own
blocks) and growing it will of course result in increased earnings.

So build your own huge mining data center and you rock.

However, that is hardly the real-world scenario today. Instead we have
pools (huge pools). It would be interesting to do the calculation:

Q = Total pool size (fraction of all mining power)
q = My mining power (do.)
e = fraction of block fee that pool reserves

It is pretty obvious that, given your formulas, small miners are better
off in a pool (they can't survive as solo miners), but there will be a
threshold q_min above which you are actually better off on your own -
depending also on e (excluding here all the benefits of a stable revenue
stream provided by pools).

The next interesting calculation would be the bitcoin rate as a function of
pool size; I expect a sharp dip somewhere in the 40%s of hardware controlled
by one entity ;)

Finally, as you mention yourself, quantification of the various
functions is needed. This could e.g. suggest whether we are likely to get 3 or
10 miners in the long run.

And now for section 2. You insert a definition f(L) = a - bL. I think
the whole idea of letting f depend on L is superfluous. As a miner you
are always free to choose which transactions to include. You will always
choose those with the biggest fee, so really it is only the average fee
that is relevant: f(L) = c. Any dependence on L will be removed by the
reshuffling. Including an extra transaction requires either that
it has a fee larger than another (kicking that one out) or that it has a
fee so large that it covers for the other transaction too. Also recall
that there is a logical minimum fee (as I have already shown), and a
maximum optimal block size - that is, until the bounty becomes 0 (which
is where other effects kick in).
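The reshuffling argument - that any L-dependence in f is flattened because a rational miner simply sorts candidate transactions by fee per byte - can be illustrated with a greedy selection sketch (all sizes and fees below are made-up numbers):

```python
def fill_block(txs, max_size):
    """Greedy selection: sort by fee-per-byte, take while space remains.
    txs is a list of (size_bytes, fee) pairs."""
    chosen, used = [], 0
    for size, fee in sorted(txs, key=lambda t: t[1] / t[0], reverse=True):
        if used + size <= max_size:
            chosen.append((size, fee))
            used += size
    return chosen

txs = [(250, 0.0001), (250, 0.0005), (500, 0.0004), (250, 0.00015)]
block = fill_block(txs, max_size=750)
# The highest fee-per-byte transactions win; a new transaction only
# displaces an included one by paying a better rate per byte.
assert block == [(250, 0.0005), (500, 0.0004)]
```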

 Here's what I've got to date. The first two sections is just a
 relatively simple proof that mining is more profitable as centralization
 increases under any circumstance, even before any real-world factors are
 taken into account. (other than non-zero latency and bandwidth) Nice
 homework problem, and neat that you can easily get a solid proof, but
 academic because it doesn't say anything about the magnitude of the
 incentives.
 
 The latter part is the actual derivation with proper model of
 supply-and-demand for fees. Or will be: while you can of course solve
 the equations with mathematica or similar - getting a horrid mess - I'm
 still trying to see if I can simplify them sanely in a way that's
 step-by-step understandable. Taking me longer than I'd like; sobering to
 realize how rusty I am. That said if any you do just throw it at
 Mathematica, looks like you get a result where the slope of your
 expected block return is at least quadratic with increasing hashing
 power. (though I spent all of five minutes eyeballing that result)
 
 
 \documentclass{article}
 \usepackage{url}
 \usepackage{mathtools}
 \begin{document}
 \title{Expected Return}
 \author{Peter Todd}
 \date{FIXME}
 \maketitle
 
 \section{Expected return of a block}
 \label{sec:exp-return-of-a-block}
 
 Let $f(L)$, a continuous function,\footnote{Transactions do of course give a
 discontinuous $f$. For a large $L$ the approximation error is negligible.} be
 the fee-per-byte available to a rational miner for the last transaction
 included in a block of size $L$. $f(L)$ is a continuous function defined for 
 $L
 \ge 0$. Supply and demand dictates that:
 
 \begin{equation}
 f(L) \ge f(L+\epsilon) \label{eq:f-increases}
 \end{equation}
 
 A reasonable example for $f$ might be $f(L) = kL$, representing the demand 
 side
 of a linear supply and demand plot. For a block of size $L$ that is optimally
 filled with transactions the value of those fees is just the integral:
 
 \begin{equation}
 E_f(L) = \int_0^L f(l)\,dl
 \end{equation}
 
 Let $P(Q,L)$, a continuous function, be the probability that a block of size
 $L$ produced by a miner with relative hashing power $Q$ will not be orphaned.
 Because a miner will never orphan their own blocks the following holds true:
 
 \begin{equation}
 P(Q,L) \le P(Q + \epsilon,L) \label{eq:p-increases}
 \end{equation}
 
 Similarly because larger blocks take longer to propagate and thus risk getting
 orphaned by another miner finding a block at the same time:
 
 \begin{equation}
 P(Q,L) \ge P(Q,L + \epsilon)
 \end{equation}
 
 By combining $P(Q, L)$, $E_f(L)$ and the inflation subsidy $B$, gives us the
 expected return of a block for a given size and hashing power:\footnote{Note
 how real world marginal costs can be accommodated easily in the definitions of
 $f$ and $B$.}
 
 \begin{equation}
 E(Q,L) = P(Q,L)[E_f(L) + B]
 \end{equation}
 
 The optimal size is simply the size $L$ at which $E(Q, 

Re: [Bitcoin-development] Even simpler minimum fee calculation formula: f > bounty*fork_rate/average_blocksize

2013-11-15 Thread Michael Gronager

On 15/11/13, 11:32 , Peter Todd wrote:

 alpha = (1/113)*600s/134kBytes = 39.62uS/byte = 24kB/second
 
 Which is atrocious... 

alpha = P_fork*t_block/S = 1/113*454000/134 = 29ms/kB

or 272 kbit per second - if you assume this is a bandwidth then I agree it
is strikingly small (ISDN-like), but this is not the case: the size
dependence of this number originates both from the limited network
bandwidth and from the validation and verification time of the blocks, as
well as the latency in sending them on again.

The connection between propagation time and fork rate cannot be denied,
and the bandwidth can be deduced from that alone - see Decker et al.

t_0 on a 1km link is on the order of 40ms, and that is only counting
the finite light speed in the fibers - if you ping the same distance you
get roughly 100-200ms (due to latencies in network equipment). At a size
of ~100kB, t_0 hence becomes irrelevant.

 This also indicates that pools haven't taken the simple step of peering
 with each other using high-bandwidth nodes with restricted numbers of
 peers

agree

 , which shows you how little attention they are paying to
 optimizing profits.  Right now mining pulls in $1.8 million/day, so
 that's up to $16k wasted.

yup, but the relevant comparison is not 16k vs 1.8m, but the pool
operator's earnings, which are on the order of 1% of the 1.8m - so it is 18k
vs 16k - and I wouldn't mind doubling my income...

 
 However, because miners don't orphan themselves, that $16k loss is born
 disproportionately by smaller miners... which also means the 24kB/sec
 bandwidth estimate is wrong, and the real number is even worse.

Yes, agree

 In
 theory anyway, could just as easily be the case that larger pools have
 screwed up relaying still such that p2pool's forwarding wins.

Yeah, we should resurrect p2pool ;)

 
 
 




Re: [Bitcoin-development] Even simpler minimum fee calculation formula: f > bounty*fork_rate/average_blocksize

2013-11-15 Thread Michael Gronager

 

 Q = Total pool size (fraction of all mining power) q = My mining
 power (do.) e = fraction of block fee that pool reserves
 
 
 Unfortunately the math doesn't work that way. For any Q, a bigger
 Q gives you a higher return. Remember that the way I setup those
 equations in section 3.2 is such that I'm actually modeling two
 pools, one with Q hashing power and one with (1-Q) hashing power.
 Or maybe more accurately, it's irrelevant if the (1-Q) hashing
 power is or isn't a unified pool.

My Q and q are meant differently. I agree with your Q vs (1-Q) argument,
but q is me as a miner participating in a pool Q. If I
participate in a pool I pay the pool owner a fraction, e, but at the
same time I become part of an economy of scale (well, actually a math
of scale...) and that can end up paying for the lost e. The question
is: what is the ratio q/Q at which I should rather mine on my own? This
question is interesting as it will make bigger miners break away from
pools into solo mining, but I also agree that from pure math the most
advantageous scenario is the 100% mining rig.
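The q/Q break-even question can be sketched numerically under a deliberately simple orphan model of my own choosing (you never orphan your own blocks; others orphan yours at rate p0 times their share). Note that with a pool cut of around 1% this expected-value model already favors solo mining, underlining that the real draw of pools is the stable revenue stream excluded here; the toy numbers below use a 0.1% fee to get a non-trivial threshold:

```python
def solo_return(q, p0):
    # Expected per-hash revenue mining alone with fraction q of total power.
    # Assumed orphan model: you never orphan your own blocks, so your
    # orphan probability is p0 scaled by everyone else's share, p0*(1-q).
    return 1 - p0 * (1 - q)

def pooled_return(q, Q, e, p0):
    # The same power q inside a pool of total share Q that keeps fraction e.
    return (1 - e) * (1 - p0 * (1 - Q))

def break_even_q(Q, e, p0):
    # Solve solo_return(q) == pooled_return(q, Q): above this q, solo wins.
    return 1 - (1 - (1 - e) * (1 - p0 * (1 - Q))) / p0

# Toy numbers: 2% orphan scale, a pool with 30% of the network, 0.1% fee.
p0, Q, e = 0.02, 0.30, 0.001

assert pooled_return(0.01, Q, e, p0) > solo_return(0.01, p0)  # small miner: pool
assert solo_return(0.28, p0) > pooled_return(0.28, Q, e, p0)  # big miner: solo
assert abs(break_even_q(Q, e, p0) - 0.2507) < 0.001
```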

 The equations give an incentive to centralize all the way up to 1
 miner with 100% hashing power.
 
 Of course, if that one pool were p2pool, that might be ok!

Ha, yes, and then the math for p2pool starts... a math where we have
much more stales...





[Bitcoin-development] Even simpler minimum fee calculation formula: f > bounty*fork_rate/average_blocksize

2013-11-13 Thread Michael Gronager
Last week I posted a writeup: On the optimal block size and why
transaction fees are 8 times too low (or transactions 8 times too big).

Peter Todd made some nice additions to it, factoring different pool sizes
into the numbers.

However, it occurred to me that things can in fact be calculated even
more simply: the measured fork rate will mean out all the different pool
sizes and network latencies, and as such provides a simple number we
can use to estimate the minimum fee. The key assumption is that latency
depends on block size (# txns) and the fork rate depends on latency.

Using the formulas from last week:

P_fork = t_propagate/t_block

and:

t_propagate = t_0 + alpha*S ~= alpha*S

We get a measure for alpha as a function of the average fork rate and
average block size:

alpha = P_fork*t_block/S

Further, take the formula for the minimum fee:

f > alpha*E_bounty/t_block

And insert the formula for alpha:

f > P_fork*E_bounty/S_average

Luckily the fork frequency and the average block size are easily
measurable. blockchain.info keeps historical graphs of the number of
orphaned blocks per day - the average over the last year is 1.5. The average
number of blocks per day over the last year is 169, which yields a
P_fork of ~1/113. The average block size in the same period is 134kBytes,
which yields a minimum fee:

f > 0.00165XBT/kB or 0.00037XBT/txn

So the default fee of 0.0001 is only 4 times too small. Further, let us look at
the trend over the last 12 months. Pieter Wuille claimed that there have been
several improvements over the last half year that would bring down the
latency; there have also been speculations regarding direct connections
between the major pools etc. - let's see if this is indeed true.

If you look, instead of 360 days, only at the last 90 days, the average
block size has been 131kBytes and the fork rate ~1/118, which
results in a minimum fee of:

f > 0.00162XBT/kB or 0.00037XBT/txn

So a small improvement, but not statistically significant...

Last question: recalling that the optimal-revenue block size is a function
of the txn fee (from the last writeup), let's see what fee it takes to
support a block size of 131kBytes:

S = 1/2 * (t_block/alpha - E_bounty/f)

S = 1/2 * (S/P_fork - E_bounty/f)

f = E_bounty/[(1/P_fork-2)*S] = 0.00165XBT/kB

So a 4 times increase is still sufficient for the current load.

Anyway - the all-important number is alpha, the network latency, which we
expect to depend on various things such as interconnectivity,
bandwidths, software quality etc., where mainly the latter is within our
hands to bring down the fee. And you can actually set up the standard
client to choose a better fee, as all the parameters in the formula are
easily measured!
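The arithmetic above can be reproduced directly; the 25 XBT bounty and the ~225-byte average transaction size are my assumptions, chosen because they reproduce the quoted figures:

```python
# Measured inputs quoted in the post (365-day averages from blockchain.info).
orphans_per_day = 1.5
blocks_per_day = 169
avg_block_kb = 134.0
bounty_xbt = 25.0            # block subsidy in late 2013 (my assumption)

p_fork = orphans_per_day / blocks_per_day            # ~1/113
min_fee_per_kb = p_fork * bounty_xbt / avg_block_kb  # f > P_fork*E_bounty/S

assert abs(1 / p_fork - 113) < 1
assert abs(min_fee_per_kb - 0.00165) < 0.0001

# The per-transaction figure implies a ~225-byte average transaction.
avg_tx_kb = 0.225
assert abs(min_fee_per_kb * avg_tx_kb - 0.00037) < 0.00005

# Fee supporting an optimal block size of 131 kB (90-day numbers):
# f = E_bounty / [(1/P_fork - 2) * S]
p_fork_90, s_90 = 1 / 118, 131.0
f_optimal = bounty_xbt / ((1 / p_fork_90 - 2) * s_90)
assert abs(f_optimal - 0.00165) < 0.0001
```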



Re: [Bitcoin-development] Even simpler minimum fee calculation formula: f > bounty*fork_rate/average_blocksize

2013-11-13 Thread Michael Gronager
Just a quick comment on the actual fees (checked at blockchain.info) the
average fee over the last 90 days is actually ~0.0003BTC/txn - so not
too far behind the theoretical minimum of 0.00037BTC/txn.

I suppose, though, that it has more to do with old clients and fee
settings (0.0005) than network wisdom ;)

On 13/11/13, 12:52 , Michael Gronager wrote:
 Last week I posted a writeup: On the optimal block size and why
 transaction fees are 8 times too low (or transactions 8 times too big).
 
 Peter Todd made some nice additions to it including different pool sizes
 into the numbers.
 
 However, it occurred to me that things can in fact be calculated even
 simpler: The measured fork rate will mean out all the different pool
 sizes and network latencies and will as such provide a simple number we
 can use to estimate the minimum fee. Key assumption is that the latency
 will depend on block size (# txns) and the fork rate will depend on latency.
 
 Using the formulas from last week:
 
 P_fork = t_propagate/t_blocks
 
 and:
 
 t_propagate = t_0 + alpha*S ~= alpha*S
 
 We get a measure for alpha as a function of the average fork rate and
 average block size:
 
 alpha = P_fork*t_block/S
 
 Further, take the formula for the minimum fee:
 
 f  alpha*E_bounty/t_block
 
 And insert the formula for alpha:
 
 f  P_fork*E_bounty/S_average
 
 Luckily the fork frequency and the average block size are easily
 measurable. blockchain.info keeps historical graphs of number of
 orphaned blocks pr day - average over the last year is 1.5. Average
 number of blocks per day over the last year is 169, which yields a
 P_fork of ~1/113. Average block size in the same time is 134kBytes,
 which yields a minimum fee:
 
 f  0.00165XBT/kb or 0.00037XBT/txn
 
 So the 0.0001 is only 4 times too small. Further, let us look at the
 trend over the last 12 months. Pieter Wuille claimed that there has been
 several improvements over the last half year that would bring down the
 latency, there has also been speculations regarding direct connections
 between the major pools etc - lets see if this is indeed true.
 
 If you look instead of 360 days, only at the last 90 days the average
 block size has been 131kBytes, and the fork rate has been ~1/118, which
 results in a minimum fee of:
 
 f  0.00162XBT/kb or 0.00037XBT/txn
 
 So a small improvement but not statistically important...
 
 Last question, recalling that optimal revenue block size is a function
 of the txn-fee (from the last writeup) - lets see what fee it takes to
 support a block size of 131kBytes:
 
 S = 1/2 * (t_block/alpha - E_bounty/f)
 
 S = 1/2 * (S/P_fork - E_bounty/f)
 
 f = E_bounty/[(1/P_fork-2)*S] = 0.00165XBT/kB
 
 So a fourfold fee increase is still sufficient for the current load.
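Solving f = E_bounty/[(1/P_fork - 2)*S] with the 90-day numbers gives the quoted figure (a quick sketch using the values from the text):

```python
# Fee required so that the revenue-optimal block size equals the observed
# 131 kB average, from f = E_bounty / ((1/P_fork - 2) * S).
E_bounty = 25.0       # block reward, XBT
P_fork = 1.0 / 118    # 90-day average fork rate
S = 131.0             # 90-day average block size, kB

f = E_bounty / ((1.0 / P_fork - 2.0) * S)   # XBT per kB
print(f)   # ~0.00165 XBT/kB
```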
 
 Anyway - the all-important number is alpha, the network latency, which
 we expect to depend on various things such as interconnectivity,
 bandwidth, software quality, etc., where mainly the latter is within
 our hands to bring down the fee. And you can actually set up the
 standard client to choose a better fee, as all the parameters in the
 formula are easily measured!
 
 --
 DreamFactory - Open Source REST  JSON Services for HTML5  Native Apps
 OAuth, Users, Roles, SQL, NoSQL, BLOB Storage and External API Access
 Free app hosting. Or install the open source package on any LAMP server.
 Sign up and see examples for AngularJS, jQuery, Sencha Touch and Native!
 http://pubads.g.doubleclick.net/gampad/clk?id=63469471iu=/4140/ostg.clktrk
 ___
 Bitcoin-development mailing list
 Bitcoin-development@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/bitcoin-development
 




Re: [Bitcoin-development] Even simpler minimum fee calculation formula: f bounty*fork_rate/average_blocksize

2013-11-13 Thread Michael Gronager

Hi John,

Thanks for the feedback - comments below:

 However, it occurred to me that things can in fact be calculated even
 simpler: The measured fork rate will mean out all the different pool
 sizes and network latencies and will as such provide a simple number we
 can use to estimate the minimum fee.
 
 Are you sure about that? You are assuming linearity where none may exist.

Well, my work from last week and now is a model. A model enabling you to
easily calculate the minimum fee and, as a miner, to decide which
transactions to include so you don't shoot yourself in the foot by
risking an orphaned block.

The assumption of linearity between block size and latency is shown
pretty well in the paper by Decker et al. (see last week's post). What I
add this week is mainly more up-to-date numbers and a formula dependent
only on data that is easy to measure (fork rate and block size).

 
 Are those stats accurate? Have any pool operators at least confirmed that the
 orphaned blocks that blockchain.info reports match their own records?

Probably not, but they are at least a lower bound; in case the real
numbers are higher, the fee should go up further.

 
 My gut feeling is to relay all orphaned blocks. We know that with a high
 investment and sybil attack as blockchain.info has done you can have better
 awareness of orphaned blocks than someone without those resources. If having
 that awareness is ever a profitable thing we have both created an incentive to
 sybil attack the network and we have linked profitability to high up-front
 capital investments.

Another way to measure latency is to set up a node that only listens but
does not relay data. By measuring the propagation of blocks of different
sizes, as well as transactions, you can get a propagation distribution
and from that an average. However, the relevant propagation time is the
one between the pools/(single miners), which you cannot assess using
this scheme. Still, it would be nice to compare it to the orphan-block
scheme.

 
 With relayed orphans you could even have P2Pool enforce an optimal tx 
 inclusion
 policy based on a statistical model by including proof of those orphans into
 the P2Pool share chain. P2Pool needs to take fees into account soon, but 
 simply
 asking for blocks with the highest total fees or even highest fee/kb appears 
 to
 be incomplete according to what your and Peter's analysis is suggesting.

Indeed, and nice... But note that it is never of benefit for the miner
to include a transaction with a fee of less than ~0.0004 BTC, unless it
is linked to another transaction that pays an extra fee.

There have been a lot of assumptions about the fee size, and generally
it has been linked to the bitcoin exchange rate. This analysis shows
that this is wrong. It also shows that the scalability of bitcoin is
directly linked to network and node latency (with the current latency
it will never be beneficial for miners to include more than ~30k
transactions in a block, or ~70 per second, resulting in ~10MB blocks).
However, halving the latency will double the capacity, down to the
minimum, which is governed by the speed of light.

 
 
 


[Bitcoin-development] On the optimal block size and why transaction fees are 8 times too low (or transactions 8 times too big)

2013-11-07 Thread Michael Gronager
Following the discussion on the recent mining sybil trick, I reread the
article on block propagation by Decker et al.* and decided to use it for
doing a proper estimate of transaction fee size and optimal block size.

The propagation of a block depends on, and is roughly proportional to,
its size. Further, the slower a block propagates, the higher the risk of
a fork, so as a miner you are basically juggling the risk of a fork
(meaning you lose your bounty) against the opportunity to include more
transactions and hence also collect those fees.

This alone will dictate the minimal transaction fee as well as the
optimal block size!

Let's try to put it into equations. For the purpose of this initial
study, let's simplify the work by Decker et al. Roughly, we can say that
the average propagation time for a block is t_propagate, and the average
time between blocks is t_block. Those are roughly 10 sec and 600 sec
respectively. The risk of someone else mining a block before your block
propagates is roughly**:

P_fork = t_propagate/t_block (~1/60)

Also note that propagation time is a function of block size, S:

t_propagate = t_0 + alpha*S

where Decker et al. have determined alpha to be 80 ms/kB. We also
define the fee per kilobyte, f, so

E_fee = f*S

Given these equations the expected average earning is:

E = P_hashrate*(1 - P_fork)*(E_bounty + E_fees)

And inserting:

E  = P_hashrate*[1 - (t_0 + alpha*S)/t_block]*(E_bounty + f*S)

We would like to choose the fee such that the more transactions we
include, the more we earn, i.e. dE/dS > 0:

dE/dS = P_hashrate*{[(t_block - t_0)*f - alpha*E_bounty]/t_block -
2*alpha*f/t_block*S}

Which gives:

 f > alpha*E_bounty/(t_block-t_0) ~ alpha*E_bounty/t_block

or f > 0.08 s/kB * 25 XBT / 600 s = 0.0033 XBT/kB, or, assuming a
standard transaction size of 0.227 kB:

f_tx > 0.00076.
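Numerically, the bound works out as follows (a quick sketch; alpha, bounty, and block interval as given in the text):

```python
# Minimum per-kB fee so that adding transactions increases expected
# revenue: f > alpha * E_bounty / t_block (ignoring t_0), obtained from
# dE/dS > 0 evaluated at S = 0.
alpha = 0.08       # propagation cost, s/kB (80 ms/kB, Decker et al.)
E_bounty = 25.0    # block reward, XBT
t_block = 600.0    # average block interval, s
tx_size = 0.227    # standard transaction size, kB

f_min = alpha * E_bounty / t_block   # XBT per kB
print(f_min)             # ~0.0033 XBT/kB
print(f_min * tx_size)   # ~0.00076 XBT per transaction
```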

Note that this number is 8 times higher than the current transaction
fee! So the currently optimal block size is an empty block, i.e. one
without any transactions other than the coinbase! (miners, don't listen
now...)

Let's see what you lose by, e.g., including 1000 transactions:

E(1000) = P_hashrate*24.34XBT

Which is a loss of 2.6% compared to not including transactions at all!

So there are two ways forward from here: 1) raise the minimum fee, or
2) make transactions smaller. We cannot make transactions much smaller,
but we can exploit the fact that most of them have already been
broadcast, verified, and validated, and then just include their hash in
the block***. This changes the relevant size of a transaction from
0.227 kB to 0.032 kB, which makes f_tx = 0.00011. We are almost there!

Now assume that we implement this change and raise the minimum fee to
0.00015, what is then the optimal block size (dE/dS = 0) ?

 S = 1/2 * (t_block/alpha - E_bounty/f)

Which gives 1083 kB for a bounty of 25 and 2417 kB for a bounty of 12.5.
The optimal size in the case of no bounty, or an infinite fee, is
3750 kB.
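The arithmetic can be sketched as follows (assuming, as in the text, the per-kB fee implied by 0.00015 XBT per 0.032 kB hash-only transaction):

```python
# Revenue-optimal block size S = (t_block/alpha - E_bounty/f) / 2,
# where f is the fee per kB.
alpha = 0.08            # s/kB
t_block = 600.0         # s
f = 0.00015 / 0.032     # XBT/kB: 0.00015 XBT per 0.032 kB transaction

def optimal_size(E_bounty):
    """Optimal block size in kB for a given block reward (XBT)."""
    return 0.5 * (t_block / alpha - E_bounty / f)

print(optimal_size(25.0))   # ~1083 kB
print(optimal_size(12.5))   # ~2417 kB
print(optimal_size(0.0))    # 3750 kB (no bounty / infinite fee)
```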

The final conclusion is that the fee is currently too small and that
there is no need to keep a maximum block size; the fork probability will
automatically provide an incentive not to let blocks grow indefinitely.

*)
http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf
**) The calculations should be done using the proper integrals and
simulations, but I will leave that for academia ;)
***) A nice side effect of switching to broadcasting transactions in
blocks as only their hashes is that it decouples the fee from the
transaction size!



Re: [Bitcoin-development] On the optimal block size and why transaction fees are 8 times too low (or transactions 8 times too big)

2013-11-07 Thread Michael Gronager

On 7/11/13, 21:31 , Peter Todd wrote:
 Final conclusions is that the fee currently is too small and that
 there is no need to keep a maximum block size, the fork
 probability will automatically provide an incentive to not let
 block grows into infinity.
 

Great additions! - I was about to do a second iteration of the
calculations including the pool size, but you beat me to it - thanks!

Still the picture remains the same - you can halve the fee if you are a
large pool:

 Q=0    -> f = 0.0033 BTC/kB
 Q=0.1  -> f = 0.0027 BTC/kB
 Q=0.25 -> f = 0.0018 BTC/kB
 Q=0.40 -> f = 0.0012 BTC/kB

Your second list of numbers is an unlikely extreme:

 k = 1 ms/kB

The propagation latency in the network is due more to block verification
than to its network (fiber) propagation time; bringing down the number
of hops helps tremendously, so I agree that we can probably bring down k
by a factor of ~10 (k = 8-12) if we consider only directly connected
pools. This should bring us close to break-even with the current fee
size, but we should really get some empirical data for interconnected
large pools. However - important note - if you are a 1% miner, don't
include transactions!

 
 Q=0    -> f = 0.42 BTC/kB
 Q=0.1  -> f = 0.34 BTC/kB
 Q=0.25 -> f = 0.23 BTC/kB
 Q=0.40 -> f = 0.15 BTC/kB
 

 
 This problem is inherent to the fundemental design of Bitcoin: 
 regardless of what the blocksize is, or how fast the network is,
 the current Bitcoin consensus protocol rewards larger mining pools
 with lower costs per KB to include transactions.

I don't see a problem with rewarding economies of scale, as long as the
effect is not too grave (raising the minimum fee would actually make
mining relatively more profitable for smaller miners).

Michael

 1)
 http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html

 
 
 
 




Re: [Bitcoin-development] Auto-generated miner backbone

2013-11-04 Thread Michael Gronager
We propose a simple, backwards-compatible change to the Bitcoin
protocol to address this problem and raise the threshold. Specifically,
when a miner learns of competing branches of the same length, it should
propagate all of them, and choose which one to mine on uniformly at random.

So only in the case of two competing chains... The selfish miner today
has an advantage in knowing which chain the others will work on: by
simply choosing the other chain, they make it likely that it is the
others who waste their effort. With the random scheme this advantage is
gone.

Note again that this only applies in the case of two competing chains,
which happens on average every 60 blocks. So it is only roughly once
every 60 blocks that you change from deterministically choosing one
chain to a 50% random choice.

A rough calculation of earnings is that you lose roughly 1/(2*60) ~ 1%
of your blocks using this scheme. But at the same time you make it
harder for such an attack to happen. (This number might be slightly
higher, as working in parallel on both chains will make the two chains
last longer, so I agree that we need a bit more analysis...)
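The 1/(2*60) estimate can be sanity-checked with a toy Monte Carlo (my own illustration, assuming a two-way tie occurs on ~1 in 60 of your blocks and that the deterministic rule would always have favored your own block):

```python
import random

random.seed(42)
P_tie = 1.0 / 60       # a competing chain of equal length on ~1 in 60 blocks
trials = 1_000_000

# A block is "lost" when it ends up in a tie and the 50/50 random choice
# goes against it; a deterministic choice would have kept it.
lost = sum(1 for _ in range(trials)
           if random.random() < P_tie and random.random() < 0.5)

print(lost / trials)   # close to 1/120, i.e. ~0.83% of blocks
```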

I also agree that it is a kind of Sybil attack, but I think we should
accept the risk of a Sybil attack (while of course minimizing it) rather
than introducing various social-network (IP address) solutions, which in
one way or another always carry some central-authority/oracle
assumption.



On 4/11/13, 13:03 , Mike Hearn wrote:
 The suggested change is actually very simple (minutes of coding) and
 elegant and addresses precisely the identified problem.
 
 
 Disagree. Unless I'm misunderstanding what they propose, their suggested
 change would mean anyone could broadcast a newly discovered block at any
 point and have a 50% chance of being the winner. That is a fundamental
 change to the dynamics of how Bitcoin works that would require careful
 thought and study.
 
 Also, their solution doesn't really address the problem they bring up,
 it just changes the size of the threshold required. 
 
 Fundamentally, their attack is a sybil attack. It doesn't work if they
 can't delay or block a pools competitors because mostly their block will
 come in second place and they'll lose the race. Thus the solution should
 be a solution to sybil attacks.




Re: [Bitcoin-development] is there a way to do bitcoin-staging?

2013-10-14 Thread Michael Gronager
Hi Alan,

What you describe in Ultimate Blockchain Compression I have already
coded the authenticated data structure part of in libcoin
(https://github.com/libcoin/libcoin) - the next step is to include
p2pool-style mining, where a parallel chain serves several purposes:
1. to validate the root hash at a higher frequency than every 10 min
2. to enable distributed mining easily (part of libcoind)
3. to utilize the soft fork by defining blocks with the root hash in the
coinbase as v3, and once we cross the limit all blocks are v3.

I will have a closer look at your bitcointalk post to see how well my
approach and ideas fit with yours.

Michael

On 20/5/13 08:34 , Alan Reiner wrote:
 This is exactly what I was planning to do with the inappropriately-named
 Ultimate Blockchain Compression
 https://bitcointalk.org/index.php?topic=88208.0.  I wanted to
 reorganize the blockchain data into an authenticated tree, indexed by
 TxOut script (address), instead of tx-hash.  Much like a regular merkle
 tree, you can store the root in the block header, and communicate
 branches of that tree to nodes, to prove inclusion (and exclusion!) of
 TxOuts for any given script/address.  Additionally, you can include at
 each node, the sum of BTC in all nodes below it, which offers some other
 nice benefits.
 
 I think this idea has epic upside-potential for bitcoin if it works
 -- even SPV nodes could query their unspent TxOut list for their
 wallet from any untrusted peer and compare the result directly to the
 blockheaders/POW.  Given nothing but the headers, you can verify the
 balance of 100 addresses with 250 kB.  But also epic failure-potential
 in terms of feasibility and cost-to-benefit for miners.  For it to
 really work, it's gotta be part of the mainnet validation rules, but no
 way it can be evaluated realistically without some kind of staging. 
 Therefore, I had proposed that this be merge-mined on a meta-chain
 first...get a bunch of miners on board to agree to merge mine and see it
 in action.  It seemed like a perfectly non-disruptive way to prove out a
 particular idea before we actually consider making a protocol change
 that significant.  Even if it stayed on its own meta chain, as long as
 there is some significant amount of hashpower working on it, it can
 still be a useful tool. 
 
 Unfortunately, my experience with merged mining is minimal, so I'm still
 not clear how feasible/reliable it is as an alternative to direct
 blockchain integration.  That's a discussion I'd like to have.
 
 -Alan
 
 
 On 5/19/2013 11:08 AM, Peter Vessenes wrote:
 I think this is a very interesting idea. As Bitcoiners, we often stuff
 things into the 'alt chain' bucket in our heads; I wonder if this idea
 works better as a curing period, essentially an extended version of
 the current 100 block wait for mined coins.

 An alternate setup comes to mind; I can imagine this working as a sort
 of gift economy; people pay real BTC for merge-mined beta BTC as a
 way to support development. There is no doubt a more elegant and
 practical solution that might have different economic and crypto
 characteristics.



 On Sun, May 19, 2013 at 6:23 AM, Adam Back a...@cypherspace.org wrote:

 Is there a way to experiment with new features - eg committed
 coins - that
 doesnt involve an altcoin in the conventional sense, and also
 doesnt impose
 a big testing burden on bitcoin main which is a security and
 testing risk?

 eg lets say some form of merged mine where an alt-coin lets call it
 bitcoin-staging?  where the coins are the same coins as on
 bitcoin, the
 mining power goes to bitcoin main, so some aspect of merged
 mining, but no
 native mining.  and ability to use bitcoins by locking them on
 bitcoin to
 move them to bitcoin-staging and vice versa (ie exchange them 1:1
 cryptographically, no exchange).

 Did anyone figure anything like that out?  Seems vaguely doable and
 maybe productive.  The only people with coins at risk of defects
 in a new
 feature, or insufficiently well tested novel feature are people
 with coins
 on bitcoin-staging.

 Yes I know about bitcoin-test this is not it.  I mean a real live
 system,
 with live value, but that is intentionally wanting to avoid
 forking bitcoins
 parameters, nor value, nor mindshare dillution.  In this way something
 potentially interesting could move forward faster, and be les
 risky to the
 main bitcoin network.  eg particularly defenses against

 It might also be a more real world test test (after bitcoin-test)
 because
 some parameters are different on test, and some issues may not
 manifest
 without more real activity.

 Then also bitcoin could cherry pick interesting patches and merge
 them after
 extensive real-world validation with real-money at stake (by early
 adopters).

 Adam

 
 

Re: [Bitcoin-development] HTTP REST API for bitcoind

2013-07-23 Thread Michael Gronager
Hi Andreas / Jeff,

Access to the UTXO set can be done using libcoin (see the coinexplorer
example), which also has a REST interface. Access to the UTXO set per
address/script requires indexing of all scripts, which was easy in
libcoin as the blockchain is stored in an sqlite database. Integrating
this in bitcoind would require setting up and maintaining such an index
ad hoc.

Michael


On Jul 23, 2013, at 10:27 , Andreas Schildbach andr...@schildbach.de wrote:

 On 07/22/2013 09:42 PM, Jeff Garzik wrote:
 
 The general goal of the HTTP REST interface is to access
 unauthenticated, public blockchain information.  There is no plan to
 add wallet interfacing/manipulation via this API.
 
 Is it planned to expose the UXTO set of a given address? That would be
 useful for SPV wallets to be able to swipe a previously unknown private
 key (e.g. paper wallet).
 
 
 




Re: [Bitcoin-development] HTTP REST API for bitcoind

2013-07-23 Thread Michael Gronager
 
 The only way to do this safely at an SPV security assumption, is by
 having an address-indexed committed merkle UTXO-set tree, like the
 one proposed by Alan Reiner, and being implemented by Mark
 Friedenback. I know Michael Gronager has something similar implemented,
 but I don't know whether it is script-indexed.

The MerkleTrie I have in libcoin is indexed on UTXOs only. However, adding
an extra index for scripts would be pretty easy (half day of coding), or even 
having the two merged into one index.

The burden imposed on validating nodes by keeping such an index is
really minimal. When using the UTXO MerkleTrie I switch off the sqlite
index of these, and vice versa, so there is hardly any measurable timing
difference.

However, the MerkleTrie index is currently rebuilt on startup (which
takes ~30 sec on my laptop); keeping it synced to disk would be optimal,
and in the long run necessary, as even the UTXO set will grow over time.

 To be actually useful,
 it likely needs to be enforced by miners - putting a significant
 burden on validation nodes. Still, if it can be done efficiently,
 I think this would be worth it, but more research is needed first in
 any case.
 




Re: [Bitcoin-development] SPV bitcoind? (was: Introducing BitcoinKit.framework)

2013-07-18 Thread Michael Gronager
Hi Bazyli,

I actually do my main development on Mac OSX, so it surprises me to hear
this - I build Xcode projects with libcoin daily on Mac OSX and Linux;
on Windows it is admittedly more of a fight to build. Qt is really not
needed; I kept it there for BitcoinQT, which was once part of the tree
too, and will remove it as the Qt part got split out.

Building cleanly on Mac requires OpenSSL, BDB and Boost - all can be
installed using homebrew. Also remember to use the latest cmake; a
normal cmake Xcode call (cmake -GXcode) should do the job. Otherwise
please send me the debug output.

A few quick notes for building stuff there:
 - try with coinexplorer; it is the base code I am using - it splits the
wallet out from the server, which is nice if you e.g. want to build a
webcoin-like server.
 - The wallet parts from bitcoind I don't use personally, so if you have
problems with these I need to have a closer look.

Also note that while the first version of libcoin was a direct
refactoring of bitcoin, the current one adds a lot of different features
and handles things quite differently - you can e.g. look up any unspent
output by script (bitcoin address) in milliseconds (nice for web
wallets).

Finally: 

   Because of the templates that bitcoind is actually using that's not 
 gonna work ever. That's why BitcoinKit is a separate dynamic library that's 
 compiled with gcc (or at least llvm pretending to be gcc ;P)

As I mentioned, it also compiles on Linux (gcc) - gcc is quite savvy
when it comes to templates. I agree that the template code in Database.h
is quite involved, but as I mentioned before, try with coinexplorer.

- I will try to do a from-scratch recompilation to see if I experience
similar issues...

Also - if you are good at creating frameworks on Mac OSX using cmake,
help would be appreciated! I think that libcoin by default builds shared
libs; this is configurable from ccmake using the dynamic-library option.

Thanks,

Michael




Re: [Bitcoin-development] SPV bitcoind? (was: Introducing BitcoinKit.framework)

2013-07-18 Thread Michael Gronager
Hi Bazyli,

Just did a fresh build based on git (Xcode) - had one issue: the paillier and 
account tests were missing - please comment them out in tests/CMakeLists.txt, 
then coinexplorer should build nicely.

Note I did a git push as well, so you need to do a git pull first.

/Michael


Re: [Bitcoin-development] SPV bitcoind? (was: Introducing BitcoinKit.framework)

2013-07-17 Thread Michael Gronager

 Is that still accurate Michael?
 

The 90 minutes is not - the blockchain has grown quite a lot since last
year. As for the 3.5x speed, I haven't tested it since Pieter's
ultraprune - libcoin also has something similar to ultraprune, done
directly in the sqlite database backend, but I should run a head-to-head
again - could be fun. I would assume, though, that the result would be
similar timings.

However, by having a merkle tree hash of all UTXOs, they become
downloadable in a trusted manner from any other client - something that
enables bootstrap in minutes, so the old numbers become less relevant in
this setting.

 
 On Wed, Jul 17, 2013 at 4:58 PM, Wendell w...@grabhive.com wrote:
 The libcoin/bitcoind client downloads the entire block chain 3.5 times 
 faster than the bitcoin/bitcoind client. This is less than 90 minutes on a 
 modern laptop!
 
 




[Bitcoin-development] BIP0032

2013-05-27 Thread Michael Gronager
Pieter,

I was re-reading BIP0032, and checking some of the equations... It seems
to me that there is something wrong (or I have missed something).

As I see it there can only be one HMAC function, used for both private
and public derivation - I assume that:
[1]  CKD((k_par, c_par), i) -> (k_i, c_i)
[2]  CKD'((K_par, c_par), i) -> (K_i, c_i)

where K_par = k_par*G should result in K_i = k_i*G (and identical c_i
in both expressions).

Now, following your formulas for [1]:
  k_i = I_L + k_par (mod n)
where I_L = {HMACSHA512(c_par, 0x00||k_par||i)}_L (denoting the left
256 bits), and further c_i = I_R.
This gives K_i = k_i*G = I_L*G + (k_par mod n)*G.

Now, following the formula for [2]:
  K_i = (I_L + k_par)*G = I_L*G + K_par
This is not the same as above; however, if we remove the (mod n) we get
closer, but the values of I_L are still different in the two equations,
as: HMACSHA512(c_par, 0x00||k_par||i) != HMACSHA512(c_par,
X(k_par*G)||i).

We can, however, fix things if we change private child key derivation to:

To define CKD((k_par, c_par), i) -> (k_i, c_i):
* (no difference in deriving public or private):
I = HMACSHA512(c_par, X(k_par*G)||i)
* Split I into I_L, I_R (256bits each)
* k_i = k_par + I_L
* c_i = I_R
* and, if using public derivation, we use K_i = (k_par + I_L)*G

Now for pure public derivation (i.e. we don't know the private key):
To define CKD'((K_par, c_par), i) -> (K_i, c_i):
* I = HMACSHA512(c_par, X(K_par)||i)
* Split I into I_L and I_R
* K_i = K_par + I_L*G (= k_par*G + I_L*G = (k_par+I_L)*G = k_i*G)
* c_i = I_R

Now we have the right properties, but it required quite some changes;
also note that c_i is now equal in both private and public derivation.
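The mismatch, and the proposed fix, can be checked numerically. Below is a toy pure-Python secp256k1 implementation (standard published curve constants; not constant-time, for illustration only) of the CKD/CKD' pair sketched above, verifying that K_i = k_i*G and that the chain codes agree:

```python
import hashlib
import hmac

# secp256k1 curve constants
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    """Point addition; None is the point at infinity."""
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], P - 2, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], P - 2, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def ec_mul(k, p):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1: r = ec_add(r, p)
        p = ec_add(p, p)
        k >>= 1
    return r

def X(point): return point[0].to_bytes(32, "big")  # x-coordinate, as in the mail
def ser32(i): return i.to_bytes(4, "big")

def ckd_priv(k_par, c_par, i):
    # Proposed change: same HMAC input whether deriving privately or publicly.
    I = hmac.new(c_par, X(ec_mul(k_par, G)) + ser32(i), hashlib.sha512).digest()
    I_L, I_R = int.from_bytes(I[:32], "big"), I[32:]
    return (k_par + I_L) % N, I_R          # k_i = k_par + I_L (mod n), c_i = I_R

def ckd_pub(K_par, c_par, i):
    I = hmac.new(c_par, X(K_par) + ser32(i), hashlib.sha512).digest()
    I_L, I_R = int.from_bytes(I[:32], "big"), I[32:]
    return ec_add(K_par, ec_mul(I_L, G)), I_R   # K_i = K_par + I_L*G

k_par, c_par = 12345, b"\x01" * 32
k_i, c_i = ckd_priv(k_par, c_par, 0)
K_i, C_i = ckd_pub(ec_mul(k_par, G), c_par, 0)
assert K_i == ec_mul(k_i, G) and c_i == C_i  # private and public derivation agree
```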

Comments ?


Sincerely,

Michael



Re: [Bitcoin-development] BIP0032

2013-05-27 Thread Michael Gronager
Which again means that the statement regarding audits through the master
public key, M, is wrong - only incoming and outgoing transactions of
_publicly_ derived wallets will be part of the audit... Privately
derived wallets cannot be obtained, though you could, without loss of
security, also share the addition points from privately derived wallets:
(m/i')*G; but there is no concept of a single public master key.

==
Audits: M
In case an auditor needs full access to the list of incoming and
outgoing payments, one can share the master public extended key. This
will allow the auditor to see all transactions from and to the wallet,
in all accounts, but not a single secret key.
==





Re: [Bitcoin-development] Hardware BitCoin wallet as part of Google Summer of Code

2013-04-29 Thread Michael Gronager
Are you familiar with this:

http://code.google.com/p/opencryptotoken/

It does ECC, and as it is based on an Atmel microcontroller, adding a
display is pretty straightforward.

Michael 

On 29/04/2013, at 18.28, Peter Todd p...@petertodd.org wrote:

 On Mon, Apr 29, 2013 at 10:30:47PM +0800, Crypto Stick wrote:
 Crypto Stick is an open source USB key for encryption and secure
 authentication.
 We have been accepted as a mentor organization for Google
 Summer of Code (GSOC) 2013. One of our project ideas is to develop a
 physical BitCoin wallet according to
 https://en.bitcoin.it/wiki/Smart_card_wallet
 
 A word of caution: hardware Bitcoin wallets really do need some type of
 display so the wallet itself can tell you where the coins it is signing
 are being sent, and that in turn implies support for the upcoming
 payment protocol so the wallet can also verify that the address is
 actually the address of the recipient the user is intending to send
 funds to. The current Crypto Stick hardware doesn't even have a button
 for user interaction. (press n times to approve an n-BTC spend)
 
 Having said that PGP smart cards and USB keys already have that problem,
 but the consequences of signing the wrong document are usually less than
 the consequences of sending some or even all of the users funds to a
 thief. You can usually revoke a bad signature after the fact with a
 follow-up message.
 
 Not to say hardware security for private keys isn't a good thing, but the
 protections are a lot more limited than users typically realize.
 
 
 I will say though I am excited that this implies that the Crypto Stick
 could have ECC key support in the future.
 
 -- 
 'peter'[:-1]@petertodd.org


Re: [Bitcoin-development] Ok to use 0.8.x?

2013-03-14 Thread Michael Gronager

 Bitcoin version 0.8.0 is safe to use for everything EXCEPT creating blocks.
 
 So: safe for everybody except solo miners / pool operators.

And even solo miners / pool operators can use it if connected to the network 
only through a 0.7 node.






Re: [Bitcoin-development] Blocksize and off-chain transactions

2013-03-13 Thread Michael Gronager
Please note that it was not 0.8 that had issues, but 0.7 (and downwards).

I really think changing features in 0.8, aiming for a fluffy limit to avoid lock 
object errors on 0.7, is the wrong way to go, and it will never cover for 
similar situations in the future.

Instead I would like to propose a setup for considerate mining:
* Run pools either on newest or second newest version (up to you depending on 
which features you like as a pool admin) - say e.g. 0.8
* Connect to the rest of the bitcoin network _only_ through a node of the other 
version - say e.g. 0.7

This guarantees that no blocks will get into the network that will not be 
accepted by both 0.8 and 0.7. Those two versions together should add up to, 
say, 90%.

Once everyone else (90%) have upgraded to the newest, (0.8), drop the 0.7 and 
start to introduce 0.9 instead.
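Expressed as node configuration, the setup needs nothing beyond the stock `connect=` option (the addresses here are placeholders); `connect=` restricts a node to peer only with the listed host, so the pool's node reaches the network exclusively through the gateway:

```
# bitcoin.conf on the 0.8 pool node: peer ONLY with the local 0.7 gateway.
# Any block mined here must be relayed by (and thus accepted by) 0.7
# before the rest of the network sees it.
connect=10.0.0.2

# bitcoin.conf on the 0.7 gateway node (10.0.0.2): default settings -
# it peers with the wider network as usual.
```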

/M





Re: [Bitcoin-development] Blocksize and off-chain transactions

2013-03-13 Thread Michael Gronager
I hear consensus that at some point we need a hardfork (== creating blocks that 
will not be accepted by 0.7 clients).

Miners generate blocks, hence they are the ones who should filter themselves 
through some consensus. 


 But we cannot just drop support for old nodes. It is completely unreasonable 
 to put the
 _majority_ of the network on a fork, without even as much as a discussion 
 about it.
 Oh, you didn't get the memo? The rules implemented in your client are 
 outdated. - that
 is not how Bitcoin works: the network defines the rules.

Consensus was rapidly reached a day ago: to ensure the majority of (all of?) the 
network could accept the blocks mined, and not just 0.8. This was the right 
decision! Too many were dependent on <=0.7.

So, the question is not if, but when to do a hardfork. We need to define and 
monitor the % of nodes running different versions (preferably a weighted 
average - some nodes, like e.g. blockchain.info & mtgox, serve many...). Once 
there was the rowit bitcoinstatus page - do we have another resource for this?

Then the second question is how to ensure we don't create a fork again. Pieter 
(and others?) are of the opinion that we should mimic a 0.7 
lock-object-starvation reject rule. I don't like this, for three reasons:
1. I find it hard to ensure we have actually pinned down the bug precisely
2. I expect that similar issues will happen again
3. The current issue was between two versions, but in the future it could be 
between two implementations - then trying to implement or even to coordinate 
strange rules becomes very unlikely.

Hence the scheme for considerate mining - it is the only scheme that 
guarantees 100% that no blocks are released that will not be accepted by a 
supermajority of the installed base.

Another nice thing about it - it requires no development :)

So simply run, in front of all considerate miners, nodes of different 
versions in serial, until a certain threshold of the installed base is reached.

/M





Re: [Bitcoin-development] Warning: many 0.7 nodes break on large number of tx/block; fork risk

2013-03-12 Thread Michael Gronager
Yes, 0.7 (yes, 0.7!) was not sufficiently tested: it had an undocumented and 
unknown criterion for block rejection, hence the upgrade went wrong.

More space in the block is indeed needed, but the real problem you are 
describing is actually not missing space in the block, but proper handling of 
mempool transactions. They should be pruned on two criteria:

1. if they get too old (>24h)
2. if the client is running out of space, then the oldest should probably be 
pruned 

Clients are anyway keeping, and re-relaying, their own transactions, and hence 
dropping them from the mempool would mean only little for clients. Dropping 
free/old transactions is a much better behavior than dying... Even a scheme 
where the client dropped all or random mempool txes would be a tolerable way of 
handling things (dropping all is similar to a restart, except with no user 
intervention).
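The two pruning criteria can be sketched as a small bounded pool (the 24h age limit is from the text above; the size cap is an assumed knob, not anything in the Satoshi client):

```python
import heapq
import time

MAX_AGE = 24 * 3600      # criterion 1: prune txes older than 24h
MAX_ENTRIES = 50_000     # criterion 2: cap on pool size (an assumed value)

class Mempool:
    """Toy bounded mempool: oldest-first eviction on age or size."""

    def __init__(self):
        self.entries = {}   # txid -> arrival time
        self.heap = []      # (arrival time, txid), oldest on top

    def add(self, txid, now=None):
        if now is None:
            now = time.time()
        self.entries[txid] = now
        heapq.heappush(self.heap, (now, txid))
        self.prune(now)

    def prune(self, now):
        while self.heap:
            ts, txid = self.heap[0]
            too_old = now - ts > MAX_AGE
            too_big = len(self.entries) > MAX_ENTRIES
            if not (too_old or too_big):
                break
            heapq.heappop(self.heap)
            self.entries.pop(txid, None)   # drop the oldest entry first
```

A quick run: adding a transaction more than 24h after an earlier one evicts the older entry.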

Following that, increase the soft and hard limits to 1 MB and e.g. 10 MB 
respectively, but miners should be the last to upgrade.

/M


On 12/03/2013, at 10:10, Mike Hearn m...@plan99.net wrote:

 Just so we're all on the same page, can someone confirm my
 understanding  - are any of the following statements untrue?
 
 BDB ran out of locks.
 However, only on some 0.7 nodes. Others, perhaps nodes using different
 flags, managed it.
 We have processed 1mb sized blocks on the testnet.
 Therefore it isn't presently clear why that particular block caused
 lock exhaustion when other larger blocks have not.
 
 The reason for increasing the soft limit is still present (we have run
 out of space).
 Therefore transactions are likely to start stacking up in the memory
 pool again very shortly, as they did last week.
 There are no bounds on the memory pool size. If too many transactions
 enter the pool then nodes will start to die with OOM failures.
 Therefore it is possible that we have a very limited amount of time
 until nodes start dying en-masse.
 Even if nodes do not die, users have no way to find out what the
 current highest fees/bids for block space are, nor any way to change
 the fee on sent transactions.
 Therefore Bitcoin will shortly start to break for the majority of
 users who don't have a deep understanding of the system.
 
 
 If all the above statements are true, we appear to be painted into a
 corner - can't roll forward and can't roll back, with very limited
 time to come up with a solution. I see only a small number of
 alternatives:
 
 1) Start aggressively trying to block or down-prioritize SatoshiDice
 transactions at the network level, to buy time and try to avoid
 mempool exhaustion. I don't know a good way to do this, although it
 appears that virtually all their traffic is actually coming via
 blockchain.info's My Wallet service. During their last outage, block
 sizes seemed to drop to around 50kb. Alternatively, ask SD to
 temporarily suspend their service (this seems like a long shot).
 
 2) Perform a crash hard fork as soon as possible, probably with no
 changes in it except a new block size limit. Question - try to lift
 the 1mb limit at the same time, or not?
 
 
 
 
 On Tue, Mar 12, 2013 at 2:01 AM, Pieter Wuille pieter.wui...@gmail.com 
 wrote:
 Hello again,
 
 block 015c50b165fcdd33556f8b44800c5298943ac70b112df480c023
 (height=225430) seems indeed to have caused pre-0.8 and 0.8 nodes to fork (at
 least mostly). Both chains are being mined on - the 0.8 one growing faster.
 
 After some emergency discussion on #bitcoin-dev, it seems best to try to get
 the majority mining power back on the old chain, that is, the one which
 0.7 accepts (with
 01c108384350f74090433e7fcf79a606b8e797f065b130575932 at height
 225430). That is the only chain every client out there will accept. BTC
 Guild is switching to 0.7, so majority should abandon the 0.8 chain soon.
 
 Short advice: if you're a miner, please revert to 0.7 until we at least
 understand exactly what causes this. If you're a merchant, and are on 0.8,
 stop processing transactions until both sides have switched to the same
 chain again. We'll see how to proceed afterwards.
 
 --
 Pieter
 
 
 
 On Tue, Mar 12, 2013 at 1:18 AM, Pieter Wuille pieter.wui...@gmail.com
 wrote:
 
 Hello everyone,
 
 I've just seen many reports of 0.7 nodes getting stuck around block
 225430, due to running out of lock entries in the BDB database. 0.8 nodes do
 not seem to have a problem.
 
 In any case, if you do not have this block:
 
  2013-03-12 00:00:10 SetBestChain: new
 best=015aab28064a4c521d6a5325ff6e251e8ca2edfdfe6cb5bf832c
 height=225439  work=853779625563004076992  tx=14269257  date=2013-03-11
 23:49:08
 
 you're likely stuck. Check debug.log and db.log (look for 'Lock table is
 out of available lock entries').
 
 If this is a widespread problem, it is an emergency. We risk having
 (several) forked chains with smaller blocks, which are accepted by 0.7
 nodes. Can people contact pool operators to see which fork they are on?
 Blockexplorer and blockchain.info seem to be stuck as well.
 
 

Re: [Bitcoin-development] Warning: many 0.7 nodes break on large number of tx/block; fork risk

2013-03-12 Thread Michael Gronager
Well a reversed upgrade is an upgrade that went wrong ;)

Anyway, the incident makes it even more important for people to upgrade, well 
except, perhaps, for miners...

Forks are caused by rejection criteria, hence: 
1. If you introduce new rejection criteria in an upgrade, miners should upgrade 
_first_.
2. If you loosen some rejection criteria, miners should upgrade _last_.
3. If you keep the same criteria, assume 2.

/M

On 12/03/2013, at 13:11, Mike Hearn m...@plan99.net wrote:

 I'm not even sure I'd say the upgrade went wrong. The problem if
 anything is the upgrade didn't happen fast enough. If we had run out
 of block space a few months from now, or if miners/merchants/exchanges
 had upgraded faster, it'd have made more sense to just roll forward
 and tolerate the loss of the older clients.
 
 This really reinforces the importance of keeping nodes up to date.
 
 On Tue, Mar 12, 2013 at 12:44 PM, Pieter Wuille pieter.wui...@gmail.com 
 wrote:
 On Tue, Mar 12, 2013 at 11:13:09AM +0100, Michael Gronager wrote:
 Yes, 0.7 (yes, 0.7!) was not sufficiently tested: it had an undocumented and 
 unknown criterion for block rejection, hence the upgrade went wrong.
 
 We're using "0.7" as a short moniker for all clients, but this was a 
 limitation that all
 BDB-based bitcoins ever had. The bug is simply a limit in the number of lock 
 objects
 that was reached.
 
 It's ironic that 0.8 was supposed to solve all problems we had due to BDB 
 (except the
 wallet...), but now it seems it's still coming back to haunt us. I really 
 hated telling
 miners to go back to 0.7, given all efforts to make 0.8 significantly more 
 tolerable...
 
 More space in the block is needed indeed, but the real problem you are 
 describing is actually not missing space in the block, but proper handling 
 of mem-pool transactions. They should be pruned on two criteria:
 
 1. if they get too old (>24h)
 2. if the client is running out of space, then the oldest should probably 
 be pruned
 
 clients are anyway keeping, and re-relaying, their own transactions and 
 hence it would mean only little, and only little for clients. Dropping free 
 / old transactions is a much better behavior than dying... Even a scheme 
 where the client dropped all or random mempool txes would be a tolerable 
 way of handling things (dropping all is similar to a restart, except for no 
 user intervention).
 
 Right now, mempools are relatively small in memory usage, but with small 
 block sizes,
 it indeed risks going up. In 0.8, conflicting (=double spending) 
 transactions in the
 chain cause clearing the mempool of conflicts, so at least the mempool is 
 bounded by
 the size of the UTXO subset being spent. Dropping transactions from the 
 memory pool
 when they run out of space seems a correct solution. I'm less convinced 
 about a
 deterministic time-based rule, as that creates a double spending incentive 
 at that
 time, and a counter incentive to spam the network with your 
 risking-to-be-cleared
 transaction as well.
 
 Regarding the block space, we've seen the pct% of one single block chain 
 space consumer
 grow simultaneously with the introduction of larger blocks, so I'm not 
 actually convinced
 there is right now a big need for larger blocks (note: right now). The 
 competition for
 block chain space is mostly an issue for client software which doesn't deal 
 correctly
 with non-confirming transactions, and misleading users. It's mostly a 
 usability problem
 now, but increasing block sizes isn't guaranteed to fix that; it may just 
 make more
 space for spam.
 
 However, the presence of this bug, and the fact that a full solution is 
 available (0.8), probably helps achieve consensus that fixing it (= a hardfork) 
 is needed, and we should take advantage of that. But please, let's not rush 
 things...
 
 --
 Pieter
 

Re: [Bitcoin-development] Warning: many 0.7 nodes break on large number of tx/block; fork risk

2013-03-12 Thread Michael Gronager
 Forks are caused by rejection criteria, hence:
 1. If you introduce new rejection criteria in an upgrade, miners should 
 upgrade _first_.
 2. If you loosen some rejection criteria, miners should upgrade _last_.
 3. If you keep the same criteria, assume 2.
 
 And ... if you aren't aware that you're making a change ???

then only half should upgrade :-P

Well, I thought I covered that by 3... But the question is of course whether we 
could have been in a situation where 0.8 had been the one rejecting blocks? 

So miners could go with a filtering approach: only connect to the network 
through a node of a version one less than the current. That would still have 
caused block 225430 to be created, but it would never have been relayed and 
hence no harm done. (And if the issue had been in 0.8, the block would not even 
have been accepted there in the first place.) Downside is some lost seconds.

/M




Re: [Bitcoin-development] Blocking uneconomical UTXO creation

2013-03-11 Thread Michael Gronager
The point of the UTXO set is, in the long run, to be able to switch from a p2p 
network where everyone stores, validates and verifies everything to a DHT where 
the load of storing, validating and verifying can be shared. 

If we succeed with that, then I don't see a problem in a growing set of UTXOs, 
may that be due to abuse/misuse or just massive use. A properly designed DHT 
should be able to scale to this.

However, that being said, if you worry about the size of the UTXO set, you 
should change the current coin-selection algorithm to simply get rid of dust. 

The current algorithm (ApproximateBestSubset) tends to accumulate dust, as dust 
tends to be on another scale than real transactions and hence is never 
included.

Regarding the demurrage/escheatment road, I agree that this is for another 
project. However, if users/developers like this idea, they can just implement a 
coin-selection algorithm donating dust as miner fee and use it on their 
Satoshi Dice-polluted wallet ;)
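Such a dust-donating selector is easy to sketch (a toy model, not the Satoshi client's ApproximateBestSubset; the dust threshold and greedy fill strategy are illustrative assumptions):

```python
DUST_THRESHOLD = 5430  # satoshi; an assumed cutoff, not a consensus value

def select_coins(utxos, target, base_fee):
    """Greedy coin selection that sweeps dust outputs into the miner fee.

    utxos: list of (txid, vout, value) tuples, values in satoshi.
    Returns (selected utxos, total fee, change amount).
    """
    dust = [u for u in utxos if u[2] < DUST_THRESHOLD]
    dust_total = sum(u[2] for u in dust)
    fee = base_fee + dust_total               # dust value is donated as extra fee
    selected, total = list(dust), dust_total  # always spend the dust outputs
    # fill up with the largest regular outputs until target + fee is covered
    for u in sorted((u for u in utxos if u[2] >= DUST_THRESHOLD),
                    key=lambda u: u[2], reverse=True):
        if total >= target + fee:
            break
        selected.append(u)
        total += u[2]
    if total < target + fee:
        raise ValueError("insufficient funds")
    change = total - target - fee
    return selected, fee, change
```

Every transaction built this way shrinks the wallet's (and the chain's) UTXO set by the number of dust outputs it sweeps.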

/M
  
On 11/03/2013, at 21:08, Rune Kjær Svendsen runesv...@gmail.com wrote:

 On Mon, Mar 11, 2013 at 12:01 PM, Jorge Timón jtimo...@gmail.com wrote:
 On 3/10/13, Peter Todd p...@petertodd.org wrote:
  It's also been suggested multiple times to make transaction outputs with
  a value less than the transaction fee non-standard, either with a fixed
  constant or by some sort of measurement.
 
 As said on the bitcointalk thread, I think this is the wrong approach.
 This way you effectively disable legitimate use cases for payments
 that are worth less than the fees like smart property/colored coins.
 While the transactions pay fees, they should not be considered spam
 regardless of how little the quantities being moved are.
 
 Then your only concern are unspent outputs and comparing fees with
 values doesn't help in any way.
 
  
 Just activate a non-proportional
 demurrage (well, I won't complain if you just turn bitcoin into
 freicoin, just think that non-proportional would be more acceptable by
 most bitcoiners) that incentives old transactions to be moved and
 destroys unspent transactions with small amounts that don't move to
 another address periodically. This has been proposed many times before
 too, and I think it makes a lot more sense.
 
 From an economic point of view this *does* make sense, in my opinion. Storing 
 an unspent transaction in the block chain costs money because we can't prune 
 it. However, it would completely destroy confidence in Bitcoin, as far as I 
 can see. It makes sense economically, but it  isn't feasible if we want to 
 maintain people's confidence in Bitcoin.
 
 I like Jeff's proposal of letting an alt-coin implement this. If it gets to 
 the point where Bitcoin can't function without this functionality, it'll be a 
 lot easier to make the transition, instead of now, when it's not really 
 needed, and the trust in Bitcoin really isn't that great.
 
 /Rune
 
  
 




[Bitcoin-development] Chain dust mitigation: Demurrage based Chain Vacuuming

2012-12-03 Thread Michael Gronager
(Also posted on the forum: https://bitcointalk.org/index.php?topic=128900.0)

The amount of dust in the block chain is getting large, and it is growing all 
the time. Currently 11% of unspent tx outputs (UTXOs) are of 1 Satoshi 
(0.00000001 BTC), 32% are less than 0.0001 BTC and 60% are less than 0.001 BTC. 
(Thanks to Jan for digging out these numbers!)

This means that a huge part of the block chain is used for essentially nothing 
- e.g. the sum of the 11% is worth roughly 2 US cents !

The main source of these 1 Satoshi payouts is Satoshi Dice. And nothing wrong 
with that; however, we should work on ensuring that too many too-small payments 
will not kill the size of the blockchain in the end - further, they are 
essentially too small to be included in other transactions, as the added fee 
will often make it more expensive to remove them. Hence, there is no incentive 
to get rid of them.

I have an idea for a possible mitigation of this problem - the introduction of 
demurrage - not in its normal meaning as a percentage over time 
(see: http://en.wikipedia.org/wiki/Demurrage_(currency) - btw, this has also 
been tried in Freicoin), but as a means to recycle pennies over time. The 
proposal is simple - UTXOs age out if not re-transacted - the smaller the coin, 
the faster the aging:
1-99 Satoshi: lives for 210 blocks
100-9999 Satoshi: lives for 2100 blocks
10000-999999 Satoshi: lives for 21000 blocks
1000000-99999999 Satoshi: lives for 210000 blocks

Only amounts above 1 BTC live forever (or we could even impose aging on those 
too...).

The aged coins are simply included in the block mining reward, creating another 
incentive for miners. Further, if we include all coins in this recycling 
scheme, coins will never be lost forever. 

This scheme will impose some lifetimes also on e.g. colored coins (hence you 
need to use a certain amount to borrow space on the blockchain for the time 
needed, or simply transact them).
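The aging rule can be sketched as a small lookup (the value bands follow the 21·10^k pattern above; the exact bounds are my reading of the table, and only outputs of 1 BTC = 10^8 Satoshi and above are exempt):

```python
# Tiered UTXO expiry: smaller outputs age out sooner.
TIERS = [                 # (exclusive upper bound in satoshi, lifetime in blocks)
    (100,         210),
    (10_000,      2_100),
    (1_000_000,   21_000),
    (100_000_000, 210_000),
]

def expiry_height(value_satoshi, created_height):
    """Block height at which an unspent output is recycled, or None."""
    for upper, lifetime in TIERS:
        if value_satoshi < upper:
            return created_height + lifetime
    return None  # >= 1 BTC: lives forever (unless aging is imposed there too)
```

At ~10 minutes per block, the largest band's 210000 blocks is roughly 4 years.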

If you like this I would be happy to write it into a BIP.

Thoughts ?


Re: [Bitcoin-development] Chain dust mitigation: Demurrage based Chain Vacuuming

2012-12-03 Thread Michael Gronager
 1) Wouldn't the need to re-transact your coins to keep them safe from 
 vultures, result in people frantically sending coins to themselves, and 
 thus expand the block chain, instead of reduce growth?

Not at the rate suggested

 2) putting those hard limits in passes a value judgement that IMO should not 
 be present in the protocol. 1BTC may be worth a lot some day, or it could go 
 the other way around, with dust spam of 10+ BTC. Either way the limits will 
 have to be changed again, with yet another fork.

Well, retransmitting 1 BTC once every 4 years isn't that bad. So I don't see a 
need for another fork for this reason.

 3) The (normal) user does not have a view of his balance consisting of inputs 
 and outputs of various sizes. He just sees his balance as one number. And 
 somehow, inexplicably (except through a very difficult explanation), it's 
 going down... what if he has 1 BTC in 0.99999999 BTC units? And it's 
 gone after 210000 blocks.

I agree with this - and also with the fact that it will be hard to introduce - 
it would be changing the protocol quite a lot (perhaps too much).

A better set of relay-fee rules rewarding a decrease in the # of UTXOs is 
probably the (easiest) way forward.

/M
 




Re: [Bitcoin-development] Payment Protocol Proposal: Invoices/Payments/Receipts

2012-11-27 Thread Michael Gronager
Short comments:

* What if the SignedReceipt is not received AND the transaction IS posted on 
the p2p network? Then you have paid for the goods, but you don't have a 
receipt. This could happen both from malice and from system failure.
** Suggestion - sign the invoice with the key to which to send the transaction; 
the proof of payment, equivalent to a signed receipt, is then in the blockchain.

This scheme would work both with or without x509, if you want to include x509, 
the message in the invoice could simply be signed by the x509 certificate as 
well.

PRO: Any user can send signed invoices, not only those with a x509 cert.
PRO: No limbo situation with no SignedReceipt
CON: This disables the use of anything but payment to key/address, incl. 
multisig etc.

However, the vast majority of uses will anyway be payments to key/address.

General pay-to-script could be supported through the payment scheme proposed 
earlier by Mike: no non-fee payments are accepted, except in a group - i.e. it 
is up to the merchant to generate the final transaction incl. the fees, and 
that one could be to a general script. This also keeps the pay-to-general-script 
support needed in a client to a minimum.

Cheers,

Michael





Re: [Bitcoin-development] Payment Protocol Proposal: Invoices/Payments/Receipts

2012-11-27 Thread Michael Gronager
 
 The SignedReceipt message is useful in the sense that it shows
 confirmation by the merchant, but if you don't get one, you can still
 prove you paid the invoice. So from this perspective perhaps
 SignedReceipt should be renamed to Acceptance or something like that,
 and then the spec should call out that a signed invoice plus accepted
 Bitcoin transactions is mathematically a proof of purchase.

Which is why I find the SignedReceipt somewhat superfluous. If you implement 
a payment system, like bit-pay/wallet, you are likely to double that through 
some sort of e-mail receipt anyway.

Further, the inclusion of x509 is not really needed in the spec - you don't 
need to sign the invoice with an x509 certificate; you can use the payment key. 
The proof would still be equally binding, and valid also for non-holders of 
x509 (server) certificates (like normal people).
Finally, host certificates do not normally include S/MIME Signing in their 
purpose. So you are bending the intended use of the x509 certificate anyway.

/M

 




Re: [Bitcoin-development] Payment Protocol Proposal: Invoices/Payments/Receipts

2012-11-27 Thread Michael Gronager
 
 If a merchant/payment processor is willing to take the risk of zero or
 low confirmation transactions (because they are insured against it,
 for example), they were allowed to reply accepted immediately, and
 this would be a permanent proof of payment, even if the actual Bitcoin
 transaction that backs it gets reverted.

I guess that moves the discussion from developers to lawyers ;) Even though you 
send a signed receipt, if you can prove you didn't get the money, you will 
never be expected to deliver the goods (and you can even write that into the 
receipt ...).

So the SignedReceipt is legally not worth the bits it is composed of, hence I 
don't see the point in supporting it.

If you are selling atoms, you can usually wait for N confirmations (even if 
you start shipping immediately, I guess you can recall a parcel within 144 
blocks). If you are selling bits (like access to a site), you can revoke that 
access once you discover the transaction did not go through. So I can't find a 
use case where a SignedReceipt in the proposed form is advantageous.
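The revoke-on-failure flow for "selling bits" can be sketched as follows; `get_confirmations`, `revoke_access`, the confirmation policy and the order shape are all hypothetical, not part of any real API.

```python
# Hypothetical settlement check: grant access on an unconfirmed payment,
# revoke it if the transaction is later rejected or double-spent.
# A negative confirmation count is used here as the "tx did not go
# through" signal; all names and numbers are invented for illustration.
REQUIRED_CONF = 6

def settle(order, get_confirmations, revoke_access):
    conf = get_confirmations(order["txid"])
    if conf < 0:                 # transaction rejected / double-spent
        revoke_access(order["user"])
        return "revoked"
    if conf >= REQUIRED_CONF:
        return "settled"
    return "pending"             # keep access for now, check again later

revoked = []
order = {"txid": "ab" * 32, "user": "alice"}
print(settle(order, lambda txid: -1, revoked.append))  # prints: revoked
print(settle(order, lambda txid: 6, revoked.append))   # prints: settled
```

Run periodically, this gives the merchant the same protection a SignedReceipt was meant to provide, without any extra protocol message.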

/M


[Bitcoin-development] BIP-12, 16, 17

2012-01-28 Thread Michael Gronager
Dear Bitcoiners,

I have been following some of the debate on the various BIP suggestions for 
enabling e.g. multisignature transactions. (First, a little rant - it seems 
like the discussion takes place in at least 5 different forums plus IRC, which 
is quite annoying. Please keep the discussion in one place and refer people 
asking questions elsewhere - including me, now... - to it.)

I have some issues with BIP-16, it is mainly the lines 265-269 in the reference 
implementation 
(https://github.com/gavinandresen/bitcoin-git/blob/pay_to_script_hash/src/base58.h):
 

PUBKEY_ADDRESS = 0,
SCRIPT_ADDRESS = 5,
PUBKEY_ADDRESS_TEST = 111,
SCRIPT_ADDRESS_TEST = 196,
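For reference, these version bytes end up as the leading character of the Base58Check address. A minimal sketch (the 20-byte hash below is a SHA-256 stand-in, not a real HASH160, since RIPEMD-160 may be unavailable in `hashlib`):

```python
import hashlib

# Minimal Base58Check encoder showing how the version byte becomes the
# first character of an address: 0 -> '1...', 5 -> '3...'.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: int, hash160: bytes) -> str:
    data = bytes([version]) + hash160
    check = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num = int.from_bytes(data + check, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    pad = len(data + check) - len((data + check).lstrip(b"\x00"))
    return "1" * pad + out  # each leading zero byte encodes as '1'

h = hashlib.sha256(b"example pubkey").digest()[:20]  # stand-in for HASH160
print(base58check(0, h))  # PUBKEY_ADDRESS: address starts with '1'
print(base58check(5, h))  # SCRIPT_ADDRESS: address starts with '3'
```

This is exactly the coupling being objected to: the script-vs-pubkey distinction is smuggled into the address encoding via the version byte.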

The purpose of the networkID is broken by this, as it ties additional 
information into an address as a hack. In the BIP-12 discussion I argued that 
this notification at the address level is not needed and should not be 
introduced, and I am still of the same opinion. The bitcoin code has enough 
globals and cross-references inside it as it is today; let's not add another 
one...

If we want more information in a bitcoin address we could just as well 
cannibalize it from the checksum - today it is 4 bytes (about 4 billion 
values); it could be 2 or 3 bytes (65k or 16M values), and that would not break 
the current meaning of the network ID. This would have the same effect - that 
you could not mistake two different addresses and create a non-redeemable 
transaction.
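A back-of-the-envelope sketch of that trade-off: the Base58Check checksum is just the first bytes of a double SHA-256 over the versioned payload, so shortening it only changes the odds that a random address typo goes undetected (the payload bytes here are dummies):

```python
import hashlib

def checksum(payload: bytes, nbytes: int) -> bytes:
    """First nbytes of double-SHA256, as Base58Check uses (normally 4)."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:nbytes]

payload = bytes([0]) + bytes(range(20))  # version byte + dummy 20-byte hash
for n in (4, 3, 2):
    odds = 256 ** n  # a random corruption slips through ~1 in `odds` times
    print(f"{n}-byte checksum {checksum(payload, n).hex()}  ~1/{odds}")
```

Dropping from 4 to 2 bytes still catches all but roughly 1 in 65,536 random errors, while freeing 2 bytes for extra information without touching the network ID.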

BIP-17 seems a step forward, but I also agree with Gavin's note on one of the 
forums that it behaves differently in input and output scripts. So it obviously 
needs some further work too.

Cheers,

Michael