Additional costs would be in terms of A) chance of user error/application
error -- proposed method is much simpler, as well as extra bytes for
control flow (4 per script, if I am counting right).
The costs on a normal script do seem slightly more friendly, except this
method allows for
In a system like bitcoin, where the system has to keep running, you
have to consider how to roll out upgrades, and the costs associated
with that.
* the general cost of any network-wide change, versus P2SH which is
already analyzed by devs, rolled out and working
* the cost of P2SH output is
Glad we got to the bottom of that. That's quite a nasty compiler/language
bug I must say. Not even a warning. Still, python crashes when trying to
print the name of a null character. It wouldn't surprise me if there are
other weird issues lurking. Would definitely sleep better with a more
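For what it's worth, one way the null-character naming problem surfaces in CPython (an illustrative guess at the behavior being described, not a diagnosis of this particular bug) is that `unicodedata.name()` raises `ValueError` for code points that have no Unicode name, NUL included, unless a default is supplied:

```python
import unicodedata

# NUL (U+0000) has no Unicode name, so name() raises ValueError
# rather than returning anything printable.
try:
    unicodedata.name("\x00")
except ValueError as exc:
    print("lookup failed:", exc)

# Supplying a default avoids the exception entirely.
print(unicodedata.name("\x00", "<no name>"))
```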
I noticed this article today.
GHash Commits to 40% Hashrate Cap at Bitcoin Mining Summit
http://www.coindesk.com/ghash-commits-40-hashrate-cap-bitcoin-mining-summit/
Here's a quote from Satoshi when the mining arms race began:
We should have a gentleman’s agreement to postpone the GPU arms
Here is a good article that helped me with what's going wrong:
http://www.oracle.com/technetwork/articles/javase/supplementary-142654.html
Basically, Java is stuck at 16 bits per char due to legacy reasons. They
admit that for a new language, they would probably use 32 (or 24?) bits
per char.
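The 16-bit limitation the article describes is easy to see even from Python: a code point above U+FFFF cannot fit in one 16-bit unit, so UTF-16 (Java's internal char representation) spends a surrogate pair on it -- which is why Java's `String.length()` reports 2 for such a "character". A sketch illustrating the article's point, not Java itself:

```python
# U+1D11E MUSICAL SYMBOL G CLEF lies outside the Basic Multilingual
# Plane, so it cannot fit in a single 16-bit char.
clef = "\U0001D11E"
assert ord(clef) > 0xFFFF

# UTF-16 spends two 16-bit code units (a surrogate pair) on it.
units = clef.encode("utf-16-be")
print(len(units) // 2)  # 2 code units for one code point
print([hex(int.from_bytes(units[i:i + 2], "big")) for i in (0, 2)])
```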
Can someone explain to these guys and the public why promising to limit
yourselves to *only* a 50% chance of successfully double-spending a 6
confirm transaction is still not acceptable?
q=0.4
z=0 P=1
z=1 P=0.828861
z=2 P=0.736403
z=3 P=0.664168
z=4 P=0.603401
z=5
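The figures quoted above follow the attacker-success calculation from section 11 of the Bitcoin whitepaper: the probability that an attacker controlling fraction q of the hashrate ever catches up from z blocks behind. A minimal sketch that reproduces the table:

```python
import math

def attacker_success(q, z):
    """Probability an attacker with hashrate share q catches up from
    z blocks behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

for z in range(6):
    print(f"z={z} P={attacker_success(0.4, z):.6f}")
```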
Define acceptable. The 40% thing is marketing and a temporary
solution. And people come down on both sides of whether or not
marketing 40% is a good idea.
I think it is a baby step that is moving in the right direction. You
want the numbers and sentiment moving in that direction (down, versus
On Thu, Jul 17, 2014 at 09:35:20AM -0400, Mark Friedenbach wrote:
Can someone explain to these guys and the public why promising to limit
yourselves to *only* a 50% chance of successfully double-spending a 6
confirm transaction is still not acceptable?
Hi, Mark.
We were asked on the
On Thu, Jul 17, 2014 at 6:14 PM, Jeff Garzik jgar...@bitpay.com wrote:
Historical note: On one hand, Satoshi seemed to dislike the early
emergence of GPU mining pools quite a bit.
To my knowledge, Satoshi left the project before mining pools got
traction.
slush
* the general cost of any network-wide change, versus P2SH which is
already analyzed by devs, rolled out and working
* the cost of updating everybody to relay this new transaction type,
whereas P2SH Just Works already
fair -- I think that there may be a big benefit realizable with this kind
of
On Wed, Jul 16, 2014 at 10:56 AM, Jeremy jlru...@mit.edu wrote:
Hey all,
I had an idea for a new transaction type. The base idea is that it is
matching on script hashes much like pay to script hash, but checks for one
of N scripts.
This seems strictly less flexible and efficient than the
OVERVIEW
To improve block propagation, add a new block message that doesn't include
transactions the peer is known to have. The message must never require an
additional round trip due to any transactions the peer doesn't have, but
should
be compatible with peers sometimes forgetting transactions
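As a rough sketch of the overview above (names like `build_sparse_block` are illustrative, not the proposed wire format): given the set of txids a peer is known to have, the new message carries full transactions only for the rest, plus the ordered txid list so the peer can reassemble the block without any extra round trip.

```python
# Illustrative sketch: txids the peer already knows are referenced by
# id only; everything else rides along in full, so a missing tx never
# costs an additional round trip.

def build_sparse_block(block_txs, peer_known_txids):
    """block_txs: list of (txid, raw_tx); peer_known_txids: set of txids."""
    order = [txid for txid, _ in block_txs]
    missing = {txid: raw for txid, raw in block_txs
               if txid not in peer_known_txids}
    return {"order": order, "missing": missing}

def reassemble(msg, mempool):
    """Peer side: pull known txs from the mempool, the rest from the message."""
    return [msg["missing"][txid] if txid in msg["missing"] else mempool[txid]
            for txid in msg["order"]]

# Tiny demo with fake txids and raw bytes.
txs = [("a1", b"tx-a1"), ("b2", b"tx-b2"), ("c3", b"tx-c3")]
msg = build_sparse_block(txs, peer_known_txids={"a1", "c3"})
print(sorted(msg["missing"]))  # only the tx the peer lacks travels in full
print(reassemble(msg, mempool={"a1": b"tx-a1", "c3": b"tx-c3"}))
```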
A couple of half-baked thoughts:
On Thu, Jul 17, 2014 at 5:35 PM, Kaz Wesley kezi...@gmail.com wrote:
If there's support for this proposal, I can begin working on the specific
implementation details, such as the bloom filters, message format, and
capability advertisement, and draft a BIP once
I'm moving this design document to a gist so that I can integrate
changes as they come up:
https://gist.github.com/kazcw/43c97d3924326beca87d
One thing that I think is an important improvement over my initial
idea is that the bloom filters don't need to be kept around and built
up, they can just
On Thu, Jul 17, 2014 at 2:35 PM, Kaz Wesley kezi...@gmail.com wrote:
A node should be able to forget invs it has seen without invalidating what
peers
know about its known txes. To allow for this, a node assembles a bloom
filter of
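A bare-bones bloom filter along these lines -- a compact, lossy summary of the txids a peer has announced, where false positives are possible but false negatives are not (parameters here are illustrative, not the proposal's):

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: k independent probes into an m-bit array."""

    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # Derive k probe positions by salting SHA-256 with the probe index.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"txid-1")
print(b"txid-1" in bf)  # True: an added item is never missed
```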
Another option would be to just guarantee to keep at least the