Just so we're all on the same page, can someone confirm my
understanding - are any of the following statements untrue?
BDB ran out of locks.
However, only on some 0.7 nodes. Others, perhaps nodes using different
flags, managed it.
We have processed 1 MB blocks on the testnet.
Therefore it isn't presently clear why that particular block caused
nodes to run out of locks.
On Tue, Mar 12, 2013 at 10:10:15AM +0100, Mike Hearn wrote:
There are no bounds on the memory pool size. If too many transactions
enter the pool then nodes will start to die with OOM failures.
Therefore it is possible that we have a very limited amount of time
until nodes start dying en masse.
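For what it's worth, here is a minimal sketch of the kind of bound being
asked for, assuming a simple evict-lowest-fee-rate policy (the names and
the policy are mine, not anything in the codebase):

    // A minimal sketch (assumed names and policy, not Bitcoin Core's
    // actual code): cap the pool at a byte budget and evict the
    // lowest fee-rate transactions first when it overflows, so the
    // node sheds load instead of dying from OOM.
    #include <cstddef>
    #include <map>
    #include <string>

    struct PoolTx {
        std::string id;   // transaction id
        size_t vsize;     // serialized size in bytes
        double feeRate;   // fee per byte
    };

    class BoundedPool {
        std::multimap<double, PoolTx> byFeeRate; // ascending fee rate
        size_t totalBytes = 0;
        const size_t maxBytes;

    public:
        explicit BoundedPool(size_t cap) : maxBytes(cap) {}

        void Add(const PoolTx& tx) {
            byFeeRate.emplace(tx.feeRate, tx);
            totalBytes += tx.vsize;
            // Evict the cheapest transactions until we are back under
            // budget; they can be re-relayed and re-accepted later.
            while (totalBytes > maxBytes && !byFeeRate.empty()) {
                auto cheapest = byFeeRate.begin();
                totalBytes -= cheapest->second.vsize;
                byFeeRate.erase(cheapest);
            }
        }
    };

Evicting by fee rate also means an attacker has to keep outbidding real
traffic to keep the pool full, which addresses the flooding concern as
well as the OOM one.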
However, most nodes are not running in such a loop today. Probably
almost no nodes are.
I suppose you could consider mass node death to be more benign than a
hard fork, but both are pretty damn serious and warrant immediate
action. Otherwise we're going to see the number of nodes drop sharply.
Yes, 0.7 (yes, 0.7!) was not sufficiently tested; it had an undocumented and
unknown criterion for block rejection, hence the upgrade went wrong.
More space in the block is indeed needed, but the real problem you are
describing is actually not missing space in the block, but proper handling of
On Tue, Mar 12, 2013 at 11:10:47AM +0100, Mike Hearn wrote:
However, most nodes are not running in such a loop today. Probably
almost no nodes are.
I suppose you could consider mass node death to be more benign than a
hard fork, but both are pretty damn serious and warrant immediate
action.
On Tue, Mar 12, 2013 at 11:13:09AM +0100, Michael Gronager wrote:
Following that, increase the soft and hard limits to 1 MB and e.g. 10 MB, but miners
should be the last to upgrade.
We just saw a hard-fork happen because we ran into previously unknown
scaling issues with the current codebase. Why
Clients keep, and re-relay, their own transactions anyway, so it would
mean little in general, and little for clients in particular.
Not all end-user clients are always-on, though.
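(The re-relay behavior referred to above: the wallet holds on to its own
unconfirmed transactions and periodically re-announces them to peers. A
rough sketch of the idea, with assumed names rather than the actual
wallet code:)

    // Rough sketch of wallet re-relay (assumed names, not the actual
    // wallet code): the wallet keeps its own unconfirmed transactions
    // and re-announces them from time to time, so a transaction
    // dropped from other nodes' pools is not lost, but only if the
    // wallet is actually online to resend it.
    #include <iostream>
    #include <string>
    #include <vector>

    struct WalletTx {
        std::string id;
        bool confirmed = false;
    };

    // Stub standing in for handing a transaction to the P2P layer.
    void Announce(const WalletTx& tx) {
        std::cout << "re-announcing " << tx.id << "\n";
    }

    // Called periodically (e.g. from a timer) while the node runs.
    void ResendOwnTransactions(const std::vector<WalletTx>& wallet) {
        for (const auto& tx : wallet)
            if (!tx.confirmed)
                Announce(tx);
    }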
We just saw a hard-fork happen because we ran into previously unknown
scaling issues with the current codebase.
Technically, it happened with the previous codebase ;)
I'm not even sure I'd say the upgrade went wrong. The problem, if
anything, is that the upgrade didn't happen fast enough. If we had run out
of block space a few months from now, or if miners/merchants/exchanges
had upgraded faster, it'd have made more sense to just roll forward
and tolerate the loss of
Well, a reversed upgrade is an upgrade that went wrong ;)
Anyway, the incident makes it even more important for people to upgrade;
well, except, perhaps, for miners...
Forks are caused by rejection criteria, hence:
1. If you introduce new rejection criteria in an upgrade miners should upgrade
_first_.
On Tue, Mar 12, 2013 at 2:10 AM, Mike Hearn m...@plan99.net wrote:
BDB ran out of locks.
However, only on some 0.7 nodes. Others, perhaps nodes using different
flags, managed it.
We have processed 1 MB blocks on the testnet.
Therefore it isn't presently clear why that particular block caused
nodes to run out of locks.
On 3/12/2013 5:18 AM, Jorge Timón wrote:
A related question...some people mentioned yesterday on #bitcoin-dev
that 0.5 appeared to be compatible with 0.8.
Was that only for the fatal block (and would it have forked from 0.8 later
too), or is it something else?
I'm having a hard time understanding this.
Forks are caused by rejection criteria, hence:
1. If you introduce new rejection criteria in an upgrade miners should
upgrade _first_.
2. If you loosen some rejection criteria miners should upgrade _last_.
3. If you keep the same criteria, assume 2.
And ... if you aren't aware that you're
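As a purely illustrative toy model of why the upgrade order matters
(nothing here is real validation code): with loosened criteria, an
upgraded miner can produce a block that old nodes reject, which is
exactly what happened with the large block and 0.7's lock limit.

    // Toy model (purely illustrative, not real validation code) of
    // why upgrade order matters when rejection criteria change.
    #include <cstdio>

    // Pretend old nodes effectively reject any block needing more
    // than some number of database locks, while new nodes do not,
    // i.e. the criteria were loosened (unknowingly, in this case).
    bool OldRulesAccept(int locksNeeded) { return locksNeeded <= 10000; }
    bool NewRulesAccept(int /*locksNeeded*/) { return true; }

    int main() {
        int bigBlock = 12000; // a block only the new rules tolerate

        // Loosened criteria + miners upgrading first = a fork: the
        // upgraded miner builds a block the rest of the network
        // rejects. Hence rule 2: miners should upgrade _last_.
        std::printf("old nodes accept: %s, new nodes accept: %s\n",
                    OldRulesAccept(bigBlock) ? "yes" : "no",
                    NewRulesAccept(bigBlock) ? "yes" : "no");
        return 0;
    }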
Hello everyone,
I've just seen many reports of 0.7 nodes getting stuck around block 225430,
due to running out of lock entries in the BDB database. 0.8 nodes do not
seem to have a problem.
In any case, if you do not have this block:
2013-03-12 00:00:10 SetBestChain: new
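One workaround for stuck 0.7 nodes, assuming the failure really is BDB
lock exhaustion, is to raise BDB's lock limit with a DB_CONFIG file in
the database environment directory and restart; something along these
lines (the exact value is illustrative, size it to your workload):

    # DB_CONFIG placed in the bitcoin data directory;
    # the value here is illustrative, adjust as needed.
    set_lk_max_locks 537000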