The 'only' in this sentence: "only two orders of magnitude higher"
is just like the 'only' in this one:
"We're raising $100,000 for the Tesla S, and we're not short of $99,900, we're
only short of $99,000..."
On 2023-01-22 16:13:42, John Tromp via bitcoin-dev wrote:
> Right now the total reward per transaction is $63, three orders of magnitude
> higher than typical fees.
No need to exaggerate; this is only two orders of magnitude higher
than current fees, which are typically over $0.50
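(For what it's worth, with those two figures: $63 / $0.50 = 126 ≈ 10^2.1,
i.e. about two orders of magnitude.)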
This is the phrase that should be recalled very often:
"the total reward per transaction is Three Orders of Magnitude
higher than typical fees. Sufficient fee increases to bring back hashing power
in a scenario like that would cause Enormous Disruption to many things,
including Lightning."
On Sun, Jan 01, 2023 at 11:42:50PM +1100, Alfie John wrote:
> On 31 Dec 2022, at 10:28 am, Peter Todd via bitcoin-dev
> wrote:
> >
> >> This way:
> >>
> >> 1. system cannot be played
> >> 2. only in case of destructive halving: system waits for the recovery of
> >> network security
> >
> >
If by security you mean the security of the currency, I don't think people
have much to worry about.
Coinbase, as far as I know, is starting to behave more bank-like. I think
there is a nostr bot that does block updates and doesn't factor in coinbase
at all.
On Sat, Jan 7, 2023 at 2:13 PM Jaroslaw wrote:
> Anyways if it turns out that fees alone don't look like they're supporting
> enough security, we have a good amount of time to come to that conclusion and
> do something about it.
The worst-case scenario is that the first global hashrate regression may take
place in 2028.
> In Bitcoin "the show must go on" and someone must pay for it. Active
> [and/or] passive users
I certainly agree.
> or more precisely: tiny inflation
> Right now security comes almost fully from ~1.8% inflation.
Best I could find, fees make up about 13% of miner revenue
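(As a rough cross-check of the ~1.8% figure, assuming the current 6.25 BTC
subsidy and roughly 19.2M coins in circulation: 6.25 BTC x 144 blocks/day x
365 days ≈ 328,500 new BTC per year, and 328,500 / 19,200,000 ≈ 1.7%.)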
Right now security comes almost fully from ~1.8% inflation.
In November the mempool was inflated to ~150MB, and people were rather waiting
for cheap transactions to come back,
instead of being happy that the system was, for a while, closer to its default
working area.
Deflation in Bitcoin is not a 1:1 matter like
> is surely better than not delaying it.
I might agree, but I don't think it really solves the problem well enough
to be worth it. Any solution that would solve the problem better would make
delaying halvings unnecessary.
> there is non-zero risk that people will hoard it more and more,
What about a storage fee averaged out over many future blocks - not a
hardcoded value, but one regulated by a free market?
The problem I see with demurrage is that the fee is taken when you spend. There
is no additional income for miners if people are still hoarding.
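A toy calculation to illustrate that point (the per-block rate and the names
are invented; nothing like this exists in Bitcoin):

DEMURRAGE_PER_BLOCK = 0.000001  # hypothetical fraction of coin value per block

def accrued_demurrage(value_sats, blocks_held):
    """Demurrage owed to miners, but only collected when the coin is spent."""
    return int(value_sats * DEMURRAGE_PER_BLOCK * blocks_held)

# 1 BTC held for ~1 year accrues 5,256,000 sats on paper...
owed = accrued_demurrage(100_000_000, blocks_held=52_560)
# ...but while holders keep hoarding, nothing is spent, so miner income from
# demurrage stays at zero regardless of how much has accrued.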
In tail emission even if people are
Yes, the idea is:
if mining activity is growing - execute consecutive halvings as scheduled
but if a miner exodus has happened - delay the next halving until mining
activity recovers to previous levels
If it gets to the point where a sudden drop in mining difficulty happens -
delaying the next
On 31 Dec 2022, at 10:28 am, Peter Todd via bitcoin-dev
wrote:
>
>> This way:
>>
>> 1. system cannot be played
>> 2. only in case of destructive halving: system waits for the recovery of
>> network security
>
> The immediate danger we have with halvings is that in a competitive market,
>
On Fri, Dec 23, 2022 at 07:43:36PM +0100, jk...@op.pl wrote:
>
> Necessary or not - it doesn't hurt to plan the robust model, just in case.
> The proposal is:
>
> Let the code, every 210,000 blocks, calculate the average difficulty of the
> last 100 retargets (100 fits well in 210,000 / 2016 = 104.166)
>
If the idea is to ensure that a catastrophic miner exodus doesn't happen,
the "difference" you're calculating should only care about downward
differences. Upward differences indicate more mining activity and so
shouldn't cause a halving skip.
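To make that concrete, here is a minimal sketch of the quoted idea with the
downward-only restriction applied. Everything in it (the function names, the
10% regression threshold, the way past interval averages are stored) is my
own illustration, not Bitcoin Core code:

RETARGETS_TO_AVERAGE = 100  # 210,000 / 2,016 = 104.166, so 100 retargets fit

def average_recent_difficulty(difficulty_history):
    """Average difficulty over the last 100 retarget periods."""
    recent = difficulty_history[-RETARGETS_TO_AVERAGE:]
    return sum(recent) / len(recent)

def should_delay_halving(difficulty_history, past_interval_averages,
                         regression_threshold=0.10):
    """Delay only on a downward regression; upward moves never delay.

    past_interval_averages holds the value computed at each previous
    210,000-block boundary; regression_threshold is an arbitrary margin."""
    if not past_interval_averages:
        return False
    current_avg = average_recent_difficulty(difficulty_history)
    previous_max = max(past_interval_averages)
    return current_avg < (1.0 - regression_threshold) * previous_max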
But I don't think any scheme like this that only acts
It seems like a more elegant solution could be to use a chainwork parameter
instead,
i.e. a comparison just before the halving: whether the last 210,000-block
interval has a higher chainwork difference between the beginning and the end
of the interval than any other such inter-halving interval before.
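Something along these lines, purely as an illustration (chainwork_at_height
and the other names are assumptions, not an existing API):

def interval_chainwork(chainwork_at_height, halving_height):
    """Chainwork accumulated over the 210,000 blocks ending at halving_height.

    chainwork_at_height(h) is assumed to return cumulative chainwork up to
    block h (analogous to the nChainWork field in the block index)."""
    return (chainwork_at_height(halving_height)
            - chainwork_at_height(halving_height - 210_000))

def should_halve(chainwork_at_height, halving_height, past_interval_works):
    """Halve only if this interval did more work than every earlier one."""
    current = interval_chainwork(chainwork_at_height, halving_height)
    return all(current > prior for prior in past_interval_works)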
Necessary or not - it doesn't hurt to plan a robust model, just in case. The
proposal is:
Let the code, every 210,000 blocks, calculate the average difficulty of the
last 100 retargets (100 fits well in 210,000 / 2016 = 104.166)
and compare it with the maximum of all such values calculated before, every
210,000 blocks.
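(A quick check of that window arithmetic: 100 retargets x 2,016 blocks =
201,600 blocks, so the averaging window covers most, though not all, of each
210,000-block inter-halving interval.)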