Dear list,
Apparently my emails are being marked as spam, despite being sent from
GMail's web interface. I've pinged our sysadmin.
It's a problem with the mailing list software, not your setup. BitPay could
disable the phishing protections but that seems like a poor solution. The
only real
It is a trivial *code* change. It is not a trivial change to the
economics of a $3.2B system.
Hmm - again I'd argue the opposite.
Up until now Bitcoin has been unconstrained by the hard block size limit.
If we raise it, Bitcoin will continue to be unconstrained by it. That's the
default
Next week on April 15th Gavin, Wladimir, Corey and myself will be at
DevCore London:
https://everyeventgives.com/event/devcore-london
If you're in town why not come along?
It's often the case that conferences can be just talking shops, without
much meat for real developers. So in the
I don't think it's quite a blank check, but it would enable replay attacks
in the form of sending the money to the same place it was sent before if an
address ever receives coins again.
Right, good point. I wonder if this sort of auto forwarding could even be a
useful feature. I can't think
And allegations that the project is run like Wikipedia or an edit war
are verifiably untrue.
Check the commit history.
This was a reference to a post by Gregory on Reddit where he said if Gavin
were to do a pull request for the block size change and then merge it, he
would revert it. And I
If you think it's not clear enough, which may explain why you did not even
attempt to follow it for your block size increase, feel free to make
improvements.
As the outcome of a block size BIP would be a code change to Bitcoin Core,
I cannot make improvements, only ask for them. Which is
So then: make a proposal for a better process, post it to this list.
Alright. Here is a first cut of my proposal. It can be inserted into an
amended BIP 1 after "What belongs in a successful BIP?". Let me know what
you think.
The following section applies to BIPs that affect the block chain
Yeah, but increasing the block size is not a long-term solution.
Are you sure? That sort of statement is hard to answer because it doesn't
say what you think long term is, or how much you expect Bitcoin to grow.
Satoshi thought it was a perfectly fine long term solution because he
thought hardware
Hi Adam,
I am still confused about whether you actually support an increase in the
block size limit to happen right now. As you agree that this layer 2 you
speak of doesn't exist yet, and won't within the next 10-12 months (do you
actually agree with that?), can you please state clearly that you will
Or alternatively, fix the reasons why users would have negative
experiences with full blocks
It's impossible, Mark. *By definition* if Bitcoin does not have sufficient
capacity for everyone's transactions, some users who were using it will be
kicked out to make way for the others. Whether
We already removed the footer because it was incompatible with DKIM
signing. Keeping the [Bitcoin-dev] prepend tag in the subject is compatible
with DKIM header signing only if the poster manually prepends it in their
subject header.
I still see footers being added to this list by
The new list currently has footers removed during testing. I am not
pleased with the need to remove the subject tag and footer to be more
compatible with DKIM users.
Lists can do what are effectively MITM attacks on people's messages in any
way they like, if they re-sign the messages
If we assume that transactions are being dropped in an unpredictable way
when blocks are full, knowing the network congestion *right now* is
critical, and even then you just have to hope that someone who wants that
space more than you do doesn't show up after you disconnect.
Yeah, my
Re: dropped in an unpredictable way - transactions would be dropped
lowest fee/KB first, a completely predictable way.
Quite agreed.
No, Aaron is correct. It's unpredictable from the perspective of the user
sending the transaction, and as they are the ones picking the fees, that is
what
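The "lowest fee/KB first" eviction policy being debated above can be sketched in a few lines. This is a minimal illustration of the ordering rule, not Bitcoin Core's actual mempool code; the transaction values and the capacity limit are made up:

```python
# Sketch (not Bitcoin Core's actual code): when the mempool is over
# capacity, drop transactions lowest fee-per-KB first. Fees are in
# satoshis, sizes in bytes; all values are illustrative.

def evict_lowest_feerate(mempool, max_txs):
    """Keep only the max_txs transactions with the highest fee/KB."""
    by_feerate = sorted(mempool,
                        key=lambda tx: tx["fee"] / (tx["size"] / 1000.0),
                        reverse=True)
    return by_feerate[:max_txs]

mempool = [
    {"txid": "a", "fee": 10000, "size": 250},   # 40,000 sat/KB
    {"txid": "b", "fee": 1000,  "size": 500},   #  2,000 sat/KB
    {"txid": "c", "fee": 5000,  "size": 250},   # 20,000 sat/KB
]
kept = evict_lowest_feerate(mempool, 2)
print([tx["txid"] for tx in kept])  # -> ['a', 'c']
```

From the node's side this ordering is deterministic; the user's uncertainty comes from not knowing what feerates competing transactions will carry, which is the point being argued in the reply.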
Hi Bryan,
Specifically, when Adam mentioned your conversations with non-technical
people, he did not mean Mike has talked with people who have possibly not
made pull requests to Bitcoin Core, so therefore Mike is a non-programmer.
Yes, my comment was prickly and grumpy. No surprises, I did
How do you plan to deal with security incident response for the
duration you describe where you will have control while you are deploying
the unilateral hard-fork and being in sole maintainership control?
How do we plan to deal with security incident response - exactly the same
way as
are only connected to each other through a slow 2 Mbit/s link.
That's very slow indeed. For comparison, plain old 3G connections routinely
cruise around 7-8 Mbit/sec.
So this simulation is assuming a speed dramatically worse than a mobile
phone can get!
Sure, and you did indeed say that.
Sequence numbers appear to have been originally intended as a mechanism
for transaction replacement within the context of multi-party transaction
construction, e.g. a micropayment channel.
Yes indeed they were. Satoshi's mechanism was more general than micropayment
channels and could do HFT
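Satoshi's original replacement rule can be sketched as follows. This is an illustration of the semantics, not actual node code (the rule has long been disabled in Bitcoin Core), and the "transactions" here are simplified records: among conflicting unconfirmed transactions spending the same input, the one with the higher nSequence wins.

```python
# Sketch of Satoshi's original nSequence replacement semantics:
# a later version of a transaction (same input, higher sequence
# number) replaces the earlier one in a node's memory pool.
# Transaction records here are simplified for illustration.

def replace_if_newer(current, candidate):
    """Return the transaction a node would keep under the original rule."""
    if (candidate["input"] == current["input"]
            and candidate["sequence"] > current["sequence"]):
        return candidate
    return current

# Two versions of a channel state spending the same funding output:
state_v1 = {"input": "funding:0", "sequence": 1, "payout": {"alice": 5, "bob": 5}}
state_v2 = {"input": "funding:0", "sequence": 2, "payout": {"alice": 3, "bob": 7}}

kept = replace_if_newer(state_v1, state_v2)
print(kept["sequence"])  # -> 2
```

This is what makes the mechanism usable for multi-party constructions like payment channels: parties can keep signing updated versions off-chain, and only the latest one is meant to confirm.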
Twenty is scary.
To whom? The only justification for the max size is DoS attacks, right?
Back when Bitcoin had an average block size of 10kb, the max block size was
100x the average. Things worked fine, nobody was scared.
The max block size is really a limit set by hardware capability, which
By the time a hard fork can happen, I expect average block size will be
above 500K.
Yes, possibly.
Would you support a rule that was larger of 1MB or 2x average size ?
That is strictly better than the situation we're in today.
It is, but only by a trivial amount - hitting the limit is
If the plan is a fix once and for all, then that should be changed too.
It could be set so that it is at least some multiple of the max block size
allowed.
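The proposed rule is simple enough to state in one line of code. A minimal sketch, assuming the rule as worded above (the averaging window and the example averages are not specified in the thread and are illustrative):

```python
# The proposed limit: the larger of 1 MB or 2x the recent average
# block size. How the average is computed is left open in the thread;
# the example inputs are illustrative.

def max_block_size(avg_block_bytes):
    return max(1_000_000, 2 * avg_block_bytes)

print(max_block_size(300_000))  # -> 1000000 (the 1 MB floor applies)
print(max_block_size(700_000))  # -> 1400000
```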
Well, but RAM is not infinite :-) Effectively what these caps are doing is
setting the minimum hardware requirements for running a
It's surprising to see a core dev going to the public to defend a proposal
most other core devs disagree with, and then lobbying the Bitcoin ecosystem.
I agree that it is a waste of time. Many agree. The Bitcoin ecosystem
doesn't really need lobbying - my experience from talking to businesses
Ignorant. You do not seem to understand the current situation. We
suffered a lot from orphans when we started in 2013. It is now your
turn.
Then please enlighten me. You're unable to download block templates from a
trusted node outside of the country because the bandwidth requirements are
too
I don't see this as an issue of sensitivity or not. Miners are businesses
that sell a service to Bitcoin users - the service of ordering transactions
chronologically. They aren't charities.
If some miners can't provide the service Bitcoin users need any more, then
OK, they should not/cannot mine.
(at reduced security if it has software that doesn't understand it)
Well, yes. Isn't that rather key to the issue? Whereas by simply
increasing the block size, SPV wallets don't care (same security and
protocol as before) and fully validating wallets can be updated with a very
small code
Mike, this proposal was purposefully constructed to maintain as well as
possible the semantics of Satoshi's original construction. Higher sequence
numbers -- chronologically later transactions -- are able to hit the chain
earlier, and therefore it can be reasonably argued will be selected by
The prior (and seemingly this) assurance contract proposals pay the
miners who mine a chain supportive of your interests and the miners who
mine against your interests identically.
The same is true today - via inflation I pay for blocks regardless of
whether they contain or double spend my
I wrote an article that explains the hashing assurance contract concept:
https://medium.com/@octskyward/hashing-7d04a887acc8
(it doesn't contain an in depth protocol description)
1,000 *people* in control vs. 10 is two orders of
magnitude more decentralized.
Yet Bitcoin has got worse by all these metrics: there was a time before
mining pools when there were ~thousands of people mining with their local
CPUs and GPUs. Now the number of full nodes that matter for block
But the majority of the hashrate can now perform double spends on your
chain! They can send bitcoins to exchanges, sell them, extract the money and
build a new longer chain to get their bitcoins back.
Obviously if the majority of the mining hash rate is doing double spending
attacks on
The measure is miner consensus. How do you intend to measure
exchange/merchant acceptance?
Asking them.
In fact, we already have. I have been talking to well known people and CEOs
in the Bitcoin community for some time now. *All* of them support bigger
blocks, this includes:
- Every
And looking at the version (aka user-agent) strings of publicly reachable
nodes on the network.
(e.g. see the count at https://getaddr.bitnodes.io/nodes/ )
Yeah, though FYI Luke informed me last week that I somehow managed to take
out the change to the user-agent string in Bitcoin XT,
Hi Andrew,
Your belief that Bitcoin has to be constrained by the assumption that
hardware will never improve is extremist, but regardless, your concerns are easy to
assuage: there is no requirement that the block chain be stored on hard
disks. As you note yourself the block chain is used for
Hi Thomas,
My problem is that this seems to lack a vision.
Are you aware of my proposal for network assurance contracts?
There is a discussion here:
https://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07552.html
But I agree with Gavin that attempting to plan for 20
some wallets (e.g., Andreas Schildbach's wallet) don't even allow it - you
can only spend confirmed UTXOs. I can't tell you how aggravating it is to
have to tell a friend, "Oh, oops, I can't pay you yet. I have to wait for
the last transaction I did to confirm first." All the more aggravating
If capacity grows, fewer individuals would be able to run full nodes.
Hardly. Nobody is currently exhausting the CPU capacity of even a normal
computer, and even if we saw a 20x increase in load overnight,
that still wouldn't even warm up most machines suitable for always-on operation.
Wallets are incentivised to do a better job with defragmentation already,
as if you have lots of tiny UTXOs then your fees end up being huge when
trying to make a payment.
The reason they largely don't is just one of manpower. Nobody is working on
it.
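The fee penalty from fragmentation is easy to quantify. A rough sketch, using the classic size estimate for legacy (pre-segwit) P2PKH transactions of about 148 bytes per input and 34 bytes per output plus ~10 bytes of overhead; the feerate is illustrative:

```python
# Why many tiny UTXOs inflate fees: each extra input adds ~148 bytes
# (classic estimate for a legacy P2PKH input), and fees scale with
# transaction size. The 20 sat/byte feerate is illustrative.

def tx_size_bytes(n_inputs, n_outputs):
    """Rough legacy P2PKH transaction size estimate."""
    return 10 + 148 * n_inputs + 34 * n_outputs

def fee(n_inputs, n_outputs, sat_per_byte=20):
    return tx_size_bytes(n_inputs, n_outputs) * sat_per_byte

print(fee(1, 2))    # one consolidated input:     4520 sat
print(fee(40, 2))   # forty dust-sized inputs:  119960 sat
```

Paying the same amount from forty dust inputs costs roughly 26x the fee of paying from one consolidated input, which is the incentive for wallets to defragment.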
As a wallet developer myself, one way I'd
Very nice Emin! This could be very useful as a building block for oracle
based services. If only there were opcodes for working with X.509 ;)
I'd suggest at least documenting in the FAQ how to extract the data from
the certificate:
openssl pkcs12 -in virtual-notary-cert-stocks-16070.p12 -nodes
CPFP also solves it just fine.
Very interesting Matt.
For what it's worth, in future bitcoinj is very likely to bootstrap from
Cartographer nodes (signed HTTP) rather than DNS, and we're also steadily
working towards Tor by default. So this approach will probably stop working
at some point. As breaking PorcFest would kind of