, but it
would have been nice to get it faster...
/M
On Feb 19, 2014, at 10:11 PM, Pieter Wuille pieter.wui...@gmail.com wrote:
On Wed, Feb 19, 2014 at 9:28 PM, Michael Gronager grona...@mac.com wrote:
I think that we could guarantee fewer incidents by making version 1
transactions
Why introduce a new transaction version for this purpose? Wouldn't it be more
elegant to simply let:
1. the next bitcoin version prettify all relayed transactions as
deterministic transactions fulfilling scheme 1-6, effectively blocking any
malleability attack? If miners would upgrade, then
On Wed, Feb 19, 2014 at 3:11 PM, Michael Gronager grona...@mac.com wrote:
Why introduce a new transaction version for this purpose? Wouldn't it be
more elegant to simply let:
1. the next bitcoin version prettify all relayed transactions as
deterministic transactions fulfilling the scheme 1
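One of the classic canonicalization rules in question can be sketched as follows (a hedged illustration, not necessarily the thread's exact scheme 1-6): enforcing the low-S form of ECDSA signatures, since for any valid (r, s) the pair (r, n - s) also verifies, and flipping s is a well-known malleation.

```python
# secp256k1 group order n (a known constant, not quoted in the thread)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def normalize_s(s: int) -> int:
    """Map an ECDSA s-value to its canonical low form: both s and n - s
    verify, so relayers can rewrite high-s signatures deterministically."""
    return s if s <= N // 2 else N - s
```

A relaying node applying this rule makes the signature encoding unique, removing one degree of freedom a malleability attacker could otherwise use to change the txid.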
Hi Christian,
Cool - thanks for posting - agree that it would be nice to normalize
the results with block size - so divide by size and:
1. see if there is a correlation (we all presume there still is)
2. plot the delay graph, e.g. normalized to the average block size, or
let's define a standard
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi Peter,
Love to see things put into formulas - nice work!
Fully agree with your first section: As latency determines maximum
block earnings, define a 0-latency (a big miner never orphans his own
blocks) island, and growing that will of course result
On 15/11/13, 11:32 , Peter Todd wrote:
alpha = (1/113)*600s/134kBytes = 39.62uS/byte = 24kB/second
Which is atrocious...
alpha = P_fork*t_block/S = 1/113*454000/134 = 29ms/kb
or 272 kbit per second - if you assume this is a bandwidth then I
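The arithmetic in the quoted estimate can be checked directly (taking t_block = 454,000 ms and S = 134 kB from the line above; the small differences against the quoted 29 ms/kB and 272 kbit/s are rounding):

```python
# Recompute alpha = P_fork * t_block / S from the figures in the thread.
P_fork = 1 / 113        # observed fork rate
t_block_ms = 454_000    # block interval used in the quoted line, in ms
S_kB = 134              # average block size in kB

alpha_ms_per_kB = P_fork * t_block_ms / S_kB      # ~30 ms per kB
eff_kbit_per_s = 8 * 1000 / alpha_ms_per_kB       # ~267 kbit/s

print(round(alpha_ms_per_kB, 1), round(eff_kbit_per_s))
```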
Q = total pool size (fraction of all mining power)
q = my mining power (ditto)
e = fraction of the block fee that the pool reserves
Unfortunately the math doesn't work that way. For any Q, a bigger
Q gives you a higher return. Remember that the way I
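A minimal model of why a bigger Q pays better (my sketch under a simple assumption, not the thread's exact derivation): a pool controlling fraction Q of the hash power never races against its own blocks, so its effective orphan rate is roughly P_fork * (1 - Q).

```python
def return_per_hash(Q, e, P_fork=1/113):
    """Relative expected revenue per unit hash power in a pool of size Q
    charging fee fraction e, under the simple orphan model above."""
    return (1 - e) * (1 - P_fork * (1 - Q))

# Same pool fee, different pool sizes: the bigger pool orphans less.
small_pool = return_per_hash(Q=0.05, e=0.02)
big_pool = return_per_hash(Q=0.40, e=0.02)
```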
Last week I posted a writeup: On the optimal block size and why
transaction fees are 8 times too low (or transactions 8 times too big).
Peter Todd made some nice additions to it, factoring different pool
sizes into the numbers.
However, it occurred to me that things can in fact be calculated even
network wisdom ;)
On 13/11/13, 12:52 , Michael Gronager wrote:
Last week I posted a writeup: On the optimal block size and why
transaction fees are 8 times too low (or transactions 8 times too big).
Peter Todd made some nice additions to it, factoring different pool
sizes into the numbers
Hi John,
Thanks for the feedback - comments below:
However, it occurred to me that things can in fact be calculated even
simpler: the measured fork rate will average out all the different pool
sizes and network latencies and will as such provide a
Following the discussion on the recent mining sybil trick, I reread the
article on block propagation by Decker et al.* and decided to use it for
doing a proper estimate of transaction fee size and optimal block size.
The propagation time of a block depends on, and is roughly proportional
to, its size.
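The estimate this enables can be sketched as follows (my reading of the Decker et al. model; the alpha and T values are illustrative): with propagation time t_prop roughly alpha * S and Poisson block arrivals with mean interval T, the chance that a competing block appears while a block propagates is

```python
import math

def fork_probability(S_kB, alpha_s_per_kB=0.03, T_s=600.0):
    """P_fork ~ 1 - exp(-alpha * S / T): fork probability grows with
    block size S because larger blocks propagate more slowly."""
    return 1.0 - math.exp(-alpha_s_per_kB * S_kB / T_s)

p = fork_probability(134)   # roughly 0.0067 for a 134 kB block
```

Doubling the block size roughly doubles the fork probability in this regime, which is what links fee income, block size and orphan risk in the writeup.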
On 7/11/13, 21:31 , Peter Todd wrote:
The final conclusion is that the fee is currently too small and that
there is no need to keep a maximum block size; the fork
probability will automatically provide an incentive not to let
blocks grow into
We propose a simple, backwards-compatible change to the Bitcoin
protocol to address this problem and raise the threshold. Specifically,
when a miner learns of competing branches of the same length, it should
propagate all of them, and choose which one to mine on uniformly at random.
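The proposed tie-breaking rule can be sketched as follows (hedged; the branch representation is invented for illustration):

```python
import random

def choose_branch(branches):
    """Given the known branches (here lists of block ids), mine on one
    of the longest chosen uniformly at random - instead of always
    preferring the first-seen branch, per the proposal above."""
    best = max(len(b) for b in branches)
    return random.choice([b for b in branches if len(b) == best])

tip = choose_branch([["a", "b"], ["c", "d"], ["e"]])
```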
So only in
Hi Alan,
What you describe in the ultimate blockchain compression I have already
coded the authenticated data structure part of in libcoin
(https://github.com/libcoin/libcoin) - the next step is to include p2pool
style mining, where a parallel chain serves several purposes:
1. to validate the root
Hi Andreas / Jeff,
Access to the UTXO set can be done using libcoin (see the coinexplorer
example), which also has a REST interface. Access to the UTXO set per
address/script requires indexing of all scripts, which was easy in libcoin as
the blockchain is stored in a sqlite database.
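An address-indexed UTXO lookup of this kind might look as follows in SQLite (a hypothetical sketch: libcoin's actual schema is not shown in the thread, so the table and column names here are invented for illustration):

```python
import sqlite3

# Invented schema: one row per unspent output, indexed by script.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE utxo (txid TEXT, idx INTEGER, value INTEGER, script TEXT)")
conn.execute("CREATE INDEX utxo_by_script ON utxo (script)")
conn.executemany("INSERT INTO utxo VALUES (?, ?, ?, ?)", [
    ("aa..", 0, 5000, "scriptA"),   # txids elided
    ("bb..", 1, 2500, "scriptA"),
    ("cc..", 0, 9000, "scriptB"),
])

# Balance for one script/address via the script index.
balance = conn.execute(
    "SELECT SUM(value) FROM utxo WHERE script = ?", ("scriptA",)
).fetchone()[0]
```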
The only way to do this safely at an SPV security assumption, is by
having an address-indexed committed merkle UTXO-set tree, like the
one proposed by Alan Reiner, and being implemented by Mark
Friedenbach. I know Michael Gronager has something similar implemented,
but I don't know whether
Hi Bazyli,
I actually do my main development on Mac OSX, so it surprises me to hear - I
build Xcode projects with libcoin daily on Mac OSX and Linux; on Windows it is
admittedly more of a fight to build. Qt is really not needed, I kept it there
for BitcoinQT, which was once part of the tree too,
Hi Bazyli,
Just did a fresh build based on git (Xcode) - had one issue: the paillier and
account tests were missing - please comment them out in tests/CMakeLists.txt,
then coinexplorer should build nicely.
Note I did a git push as well, so you need to do a git pull first.
/Michael
Is that still accurate Michael?
The 90 minutes is not - the blockchain has grown quite a lot since last year,
and as for the 3.5x speed, I haven't tested it since Pieter's ultraprune -
libcoin also has something similar to ultraprune, done directly in the sqlite
database backend, but I
Pieter,
I was re-reading BIP0032, and checking some of the equations... It seems
to me that there is something wrong (or I have missed something).
As I see it there can only be one HMAC function, used for both private
and public derivation - I assume that:
[1] CKD((k_par, c_par), i) -> (k_i,
Which again means that the statement regarding audits through the Master
Public Key, M, is wrong - only incoming and outgoing transactions of
_publicly_ derived wallets will be part of the audit... Privately
derived wallets cannot be obtained, though you could, without loss of
security, share also
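The single-HMAC structure under discussion can be sketched as follows (stated in today's BIP32 terms; the curve arithmetic is omitted, so this only shows how one HMAC-SHA512 serves both derivation modes with different input data):

```python
import hmac, hashlib

def ckd_core(chain_code: bytes, data: bytes):
    """The one HMAC at the heart of CKD: keyed with c_par, its output
    splits into I_L (the key tweak) and I_R (the child chain code).
    Public (non-hardened) derivation feeds serP(K_par) || i as data;
    private (hardened) derivation feeds 0x00 || ser256(k_par) || i."""
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    return I[:32], I[32:]   # (I_L, I_R)
```

Because hardened derivation mixes in the private key, it cannot be replayed from M alone, which is exactly the audit limitation argued above.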
Are you familiar with this:
http://code.google.com/p/opencryptotoken/
It does ECC, and as it is based on an Atmel microcontroller, adding a display
is pretty straightforward.
Michael
On 29/04/2013, at 18.28, Peter Todd p...@petertodd.org wrote:
On Mon, Apr 29, 2013 at 10:30:47PM +0800,
Bitcoin version 0.8.0 is safe to use for everything EXCEPT creating blocks.
So: safe for everybody except solo miners / pool operators.
And even solo miners / pool operators can use it if connected to the network
only through a 0.7 node.
Please note that it was not 0.8 that had issues, but 0.7 (and downwards).
I really think changing features in 0.8, aiming for a fluffy limit to avoid
lock object errors on 0.7, is the wrong way to go, and it will never cover for
similar situations in the future.
Instead I would like to propose
I hear consensus that at some point we need a hardfork (== creating blocks that
will not be accepted by 0.7 clients).
Miners generate blocks, hence they are the ones who should filter themselves
through some consensus.
But we cannot just drop support for old nodes. It is completely
Yes, 0.7 (yes, 0.7!) was not sufficiently tested; it had an undocumented and
unknown criterion for block rejection, hence the upgrade went wrong.
More space in the block is needed indeed, but the real problem you are
describing is actually not missing space in the block, but proper handling of
, 2013 at 12:44 PM, Pieter Wuille pieter.wui...@gmail.com
wrote:
On Tue, Mar 12, 2013 at 11:13:09AM +0100, Michael Gronager wrote:
Yes, 0.7 (yes, 0.7!) was not sufficiently tested; it had an undocumented and
unknown criterion for block rejection, hence the upgrade went wrong.
We're using 0.7
Forks are caused by rejection criteria, hence:
1. If you introduce new rejection criteria in an upgrade, miners should
upgrade _first_.
2. If you loosen some rejection criteria, miners should upgrade _last_.
3. If you keep the same criteria, assume 2.
And ... if you aren't aware that you're
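The three rules condense into a tiny decision helper (my paraphrase of the rules above, not code from the thread):

```python
def who_upgrades_first(change: str) -> str:
    """Map a change in block rejection criteria to the safe upgrade order."""
    if change == "tightened":   # rule 1: new rejection criteria added
        return "miners upgrade first"
    # rules 2 and 3: loosened or unchanged criteria
    return "miners upgrade last"
```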
The point with UTXO is, in the long run, to be able to switch from a p2p
network where everyone stores, validates and verifies everything to a DHT
where the load of storing, validating and verifying can be shared.
If we succeed with that, then I don't see a problem in a growing set of UTXO,
may
(Also posted on the forum: https://bitcointalk.org/index.php?topic=128900.0)
The amount of dust in the block chain is getting large and it is growing all
the time. Currently 11% of unspent tx outputs (UTXO) are of 1 Satoshi
(0.00000001 BTC), 32% are less than 0.0001 BTC and 60% are less than
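The quoted fractions describe dust thresholds; a trivial filter illustrating them (the sample values are invented; 1 Satoshi = 0.00000001 BTC):

```python
DUST_LIMIT_BTC = 0.0001   # the thread's second threshold

utxos_btc = [0.00000001, 0.00005, 0.5, 0.00009, 1.2]   # sample outputs
dust = [v for v in utxos_btc if v < DUST_LIMIT_BTC]
dust_fraction = len(dust) / len(utxos_btc)
```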
1) Wouldn't the need to re-transact your coins to keep them safe from
vultures result in people frantically sending coins to themselves, and
thus expand the block chain instead of reducing growth?
Not at the rate suggested
2) putting those hard limits in passes a value judgement that IMO
Short comments:
* What if the SignedReceipt is not received AND the transaction IS posted on
the p2p network? Then you have paid for the goods, but you don't have a
receipt. This could happen both from malice and from system failures.
** Suggestion - sign the invoice with the key to which to send the
The SignedReceipt message is useful in the sense that it shows
confirmation by the merchant, but if you don't get one, you can still
prove you paid the invoice. So from this perspective perhaps
SignedReceipt should be renamed to Acceptance or something like that,
and then the spec should
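The "you can still prove you paid" point can be sketched as a simple output-matching check (hypothetical structures: BIP70 messages are protobufs, simplified here to (script, amount) pairs):

```python
def pays_invoice(tx_outputs, invoice_outputs):
    """True if the broadcast transaction contains every output the
    invoice demanded - the transaction itself then serves as proof of
    payment, with or without a SignedReceipt."""
    remaining = list(tx_outputs)
    for out in invoice_outputs:
        if out not in remaining:
            return False
        remaining.remove(out)
    return True

proof = pays_invoice(
    tx_outputs=[("scriptMerchant", 100000), ("scriptChange", 4000)],
    invoice_outputs=[("scriptMerchant", 100000)],
)
```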
If a merchant/payment processor is willing to take the risk of zero- or
low-confirmation transactions (because they are insured against it,
for example), they would be allowed to reply "accepted" immediately, and
this would be a permanent proof of payment, even if the actual Bitcoin
transaction
Dear Bitcoiners,
I have been following some of the debate on the various BIP suggestions for
enabling e.g. multisignature transactions. (First, a little rant - it seems
like the discussion takes place in at least 5 different forums plus IRC,
which is so annoying. Please keep the discussion